G05B2219/40532

Automatic robot perception programming by imitation learning

Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
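The abstract does not specify how the entropy encoder turns percept statistics into saliency, or how a saliency "skews" a percept's contribution to the mapped trajectory. A minimal sketch, assuming each percept yields a discrete distribution over its observed readings and that lower-entropy (more predictable) percepts are weighted more heavily — both assumptions, not the patent's method:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def saliency_weights(percept_dists):
    """Assign each percept a saliency inversely related to its entropy,
    normalized so the weights sum to 1 (illustrative weighting rule)."""
    ents = np.array([entropy(p) for p in percept_dists])
    w = 1.0 / (1.0 + ents)          # monotone decreasing in entropy
    return w / w.sum()

def fuse_trajectories(percept_trajs, weights):
    """Blend per-percept trajectory suggestions (each a T x D array) into
    one trajectory, each suggestion skewed by its saliency weight."""
    stacked = np.stack(percept_trajs)            # (P, T, D)
    return np.tensordot(weights, stacked, 1)     # (T, D)
```

A deterministic percept (entropy 0) here dominates a maximally uncertain one, which is one plausible reading of "skewed based on the saliency".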

Learning device, learning method, learning model, detection device and grasping system

An estimation device includes a memory and at least one processor. The at least one processor is configured to acquire information regarding a target object. The at least one processor is configured to estimate information regarding a location and a posture of a gripper relating to where the gripper is able to grasp the target object. The estimation is based on an output of a neural model having as an input the information regarding the target object. The estimated information regarding the posture includes information capable of expressing a rotation angle around a plurality of axes.
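The claim only requires that the estimated posture can express "a rotation angle around a plurality of axes". One common parameterization — assumed here, not fixed by the patent — is three translation values plus three rotation angles about the x, y, and z axes, decoded into a homogeneous gripper pose:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def decode_grasp_pose(net_output):
    """Decode a raw 6-D network output into a 4x4 gripper pose.
    Assumes net_output = (x, y, z, rx, ry, rz): grasp location plus
    rotation angles (radians) about the three axes, composed Z-Y-X."""
    t = np.asarray(net_output[:3], dtype=float)
    rx, ry, rz = net_output[3:6]
    T = np.eye(4)
    T[:3, :3] = rot_z(rz) @ rot_y(ry) @ rot_x(rx)
    T[:3, 3] = t
    return T
```

Other multi-axis representations (quaternions, 6-D rotation encodings) would satisfy the same claim language.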

DEVICE AND METHOD FOR CONTROLLING A ROBOT
20230226699 · 2023-07-20

A method for controlling a robot device. The method includes acquiring one or more images of objects in a workspace of the robot device; determining, by a neural network, from the one or more images, object hierarchy information specifying stacking relations of the objects with respect to each other in the workspace of the robot device and confidence information for the object hierarchy information; if the confidence information indicates a confidence above a confidence threshold, manipulating an object of the objects; and if the confidence information indicates a confidence lower than the confidence threshold, acquiring an additional image of the objects, determining, by the neural network, from the additional image, additional object hierarchy information specifying stacking relations of the objects with respect to each other in the workspace of the robot device and additional confidence information for the additional object hierarchy information, and controlling the robot using the additional object hierarchy information.
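The control flow in this claim is an active-perception loop: act when the network is confident about the stacking hierarchy, otherwise look again. A sketch, with `acquire`, `net`, and `act` as hypothetical interfaces (the patent does not name them) and a view cap added so the loop terminates:

```python
def manipulate_with_confidence(acquire, net, act, threshold=0.8, max_views=5):
    """Acquire images until the network's confidence in the predicted
    object hierarchy exceeds the threshold, then manipulate an object
    using the current hierarchy estimate.

    acquire() -> image; net(image) -> (hierarchy, confidence);
    act(hierarchy) performs the manipulation (e.g. pick the topmost
    object). After max_views views, acts on the best estimate anyway.
    """
    image = acquire()
    hierarchy, confidence = net(image)
    views = 1
    while confidence < threshold and views < max_views:
        image = acquire()                 # additional image of the objects
        hierarchy, confidence = net(image)
        views += 1
    act(hierarchy)
    return hierarchy, confidence, views
```

The `max_views` cap is an addition for robustness; the claim itself only describes one re-acquisition step.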

ROBOTIC LAUNDRY SORTING DEVICES, SYSTEMS, AND METHODS OF USE

Devices, systems, and methods for autonomously sorting dirty laundry articles into batched loads for washing are described. For example, an autonomous sorting system includes an enclosed channel including a stationary floor extending between an inlet end and an outlet end of the channel, and a plurality of arms disposed in series along the enclosed channel for selectively grasping at least one of a plurality of deformable articles in sequence. The system includes an outlet orifice adjacent the outlet end through which each separated deformable article exits the enclosed channel upon release by the terminal gripper of one of the plurality of arms, and one or more conveyors disposed adjacent the outlet end configured for receiving thereon a plurality of bins for collecting, for washing together, two or more of the plurality of deformable articles released through the outlet orifice that have one or more common sensor-detected characteristics.
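The batching step — collecting articles that share sensor-detected characteristics into the same bin — is essentially a group-by. A sketch, where the characteristic names (`color_class`, `fabric_class`) are illustrative stand-ins for whatever the system's sensors actually report:

```python
from collections import defaultdict

def batch_articles(articles, keys=("color_class", "fabric_class")):
    """Group separated articles into wash batches by their common
    sensor-detected characteristics. Each article is a dict of sensor
    readings; articles with the same tuple of values for `keys` are
    collected into the same bin for washing together."""
    bins = defaultdict(list)
    for art in articles:
        bins[tuple(art[k] for k in keys)].append(art)
    return dict(bins)
```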

Information processing apparatus, information processing method, and system

An information processing apparatus includes an acquisition unit that acquires a first image and a second image, the first image being an image of a target area in an initial state, the second image being an image of the target area where a first object conveyed from a supply area has been placed. An estimation unit estimates one or more second areas in the target area based on a feature of a first area estimated using the first image and the second image, the first area being where the first object is placed, and each of the one or more second areas being an area, different from the first area, where an object in the supply area can be placed. A control unit controls a robot to convey a second object, different from the first object, from the supply area to any of the one or more second areas.
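One simple reading of the estimation step: the first area is wherever the two images differ (the newly placed object), and candidate second areas are same-sized, currently free regions of the target area. A sketch under those assumptions, using grayscale images and a boolean occupancy grid — the patent's actual feature-based estimation is not specified at this level:

```python
import numpy as np

def estimate_placement_areas(first_img, second_img, occupancy):
    """Locate the first area as the image difference between the initial
    and post-placement images, then scan for equally sized windows of
    the target area that are unoccupied (candidate second areas)."""
    diff = np.abs(second_img.astype(float) - first_img.astype(float)) > 0
    ys, xs = np.nonzero(diff)
    h = int(ys.max() - ys.min() + 1)     # footprint of the first object
    w = int(xs.max() - xs.min() + 1)
    first_area = (int(ys.min()), int(xs.min()), h, w)
    candidates = []
    H, W = occupancy.shape
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = occupancy[y:y + h, x:x + w]
            if not window.any():         # free, hence different from the first area
                candidates.append((y, x, h, w))
    return first_area, candidates
```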

SYSTEMS AND METHODS FOR A VISION GUIDED END EFFECTOR

Systems and methods for picking an object from a plurality of objects are disclosed. An image of a scene containing the plurality of objects is obtained, and a segmentation map is generated for the objects in the scene. The shapes of the objects are determined based on the segmentation map. An end effector is adjusted in response to determining the shapes of the objects. Adjusting the end effector includes shaping the end effector according to at least one of the shapes of the objects. The plurality of objects is approached in response to the shaping of the end effector, and one of the plurality of objects is picked with the end effector.
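A sketch of the segmentation-to-shape-to-gripper pipeline. The coarse elongated/compact rule and the gripper configurations are illustrative stand-ins; the patent does not disclose a specific shape descriptor at this level:

```python
import numpy as np

def shapes_from_segmentation(seg):
    """Derive a coarse shape descriptor for each labeled object in a
    segmentation map (0 = background), here from the bounding-box
    aspect ratio of each object's pixels."""
    shapes = {}
    for label in np.unique(seg):
        if label == 0:
            continue
        ys, xs = np.nonzero(seg == label)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        aspect = max(h, w) / min(h, w)
        shapes[int(label)] = "elongated" if aspect > 2 else "compact"
    return shapes

def shape_gripper(shape):
    """Map a detected shape to a hypothetical end-effector configuration
    used before approaching the objects."""
    return {"elongated": {"fingers": 2, "span": "narrow"},
            "compact":   {"fingers": 3, "span": "wide"}}[shape]
```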

Process for Painting a Workpiece Comprising Generating a Trajectory Suitable for the Actual Workpiece
20230103030 · 2023-03-30

The invention relates to a process for painting a workpiece using a painting robot including a robot arm equipped with a paint spraying device. The process includes an operation S1 of modeling a realistic 3D model corresponding to the workpiece as deformed and positioned in a paint cell, the realistic 3D model including paint trajectory information suitable for the workpiece as deformed and positioned in the paint cell, and a paint spraying operation S2 during which the paint spraying device is moved along the paint trajectory opposite the workpiece.

METHOD AND DEVICE FOR ESTIMATING POSE OF ELECTRIC VEHICLE CHARGING SOCKET AND AUTONOMOUS CHARGING ROBOT EMPLOYING THE SAME

The present disclosure provides a method and device for accurately estimating a pose of a charging socket of an electric vehicle regardless of the shape of the charging socket, so that an electric vehicle charging robot may precisely move a charging connector toward the charging socket of the electric vehicle and couple the charging connector to the charging socket. According to an aspect of an exemplary embodiment, a method of estimating the pose of the charging socket of an electric vehicle includes: acquiring an RGB image and a depth map of the charging socket; detecting a keypoint of the charging socket based on the RGB image; deriving a first estimated pose of the charging socket based on the depth map; and deriving a second estimated pose of the charging socket based on the keypoint of the charging socket and the first estimated pose.
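The two-stage scheme — a coarse pose from depth, refined with an RGB keypoint — can be sketched as follows. The specific choices (plane fit via SVD for the first pose, sliding the position onto the keypoint's camera ray for the second) are plausible stand-ins, not the disclosed algorithm:

```python
import numpy as np

def coarse_pose_from_depth(points):
    """First estimated pose: centroid of the socket's depth points plus
    the best-fit plane normal (smallest singular direction) as the
    connector approach axis."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal

def refine_with_keypoint(coarse, keypoint_ray):
    """Second estimated pose: keep the coarse orientation but project the
    position onto the camera ray through the detected keypoint (a
    simplified stand-in for the patent's keypoint-based refinement)."""
    centroid, normal = coarse
    ray = np.asarray(keypoint_ray, dtype=float)
    ray = ray / np.linalg.norm(ray)
    refined_pos = ray * np.dot(centroid, ray)   # closest point on the ray
    return refined_pos, normal
```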

ROBOT MASTER CONTROL SYSTEM

The present disclosure relates to a robot master control system. The robot master control system includes a master controller configured to control at least one dual-robot control system, where each of the at least one dual-robot control system includes a first robot, a second robot, and a sub-controller controlling the first robot and the second robot, and the sub-controller is controlled by the master controller. In the present disclosure, multiple robots may be coordinated and comprehensively controlled to grab and move objects. Compared with a single robot, the efficiency of multi-robot operation is greatly improved. In addition, each dual-robot control system may be individually configured, thereby improving the efficiency of coordinated work across the dual-robot control systems.
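The hierarchy the abstract describes — one master controller dispatching work to per-cell sub-controllers, each commanding a robot pair — can be sketched as plain classes. The round-robin dispatch policy and method names are assumptions for illustration:

```python
class SubController:
    """Controls one first-robot/second-robot pair; each system can carry
    its own configuration, as the abstract describes."""
    def __init__(self, name, config=None):
        self.name = name
        self.config = config or {}
        self.log = []

    def grab_and_move(self, obj, dest):
        # Both robots of the pair cooperate on the same object.
        self.log.append(("first_robot", "grab", obj))
        self.log.append(("second_robot", "move", obj, dest))

class MasterController:
    """Coordinates the dual-robot systems; here jobs are simply
    dispatched round-robin across sub-controllers."""
    def __init__(self, subs):
        self.subs = list(subs)

    def dispatch(self, jobs):
        for i, (obj, dest) in enumerate(jobs):
            self.subs[i % len(self.subs)].grab_and_move(obj, dest)
```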

Method and system for machine concept understanding

A system and method for machine understanding, using program induction, includes a visual cognitive computer including a set of components designed to execute predetermined primitive functions. The method includes determining programs using a program induction engine that interfaces with the visual cognitive computer to discover programs using the predetermined primitive functions and/or executes the discovered programs based on an input.
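Program induction over a fixed set of primitive functions can be illustrated with a brute-force search: enumerate primitive sequences until one reproduces all input/output examples, then execute the discovered program on new inputs. The arithmetic primitives below are hypothetical; the patent's visual cognitive computer would expose visual primitives instead:

```python
from itertools import product

# Hypothetical primitive functions a cognitive computer might expose.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg": lambda x: -x,
}

def induce_program(examples, max_len=3):
    """Search primitive sequences (shortest first, up to max_len) for one
    consistent with every (input, output) example; return it as a list
    of primitive names, or None if the search fails."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None

def execute(program, x):
    """Execute a discovered program on a new input."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x
```

Real program-induction engines prune this search with type constraints or learned guidance; exhaustive enumeration is only tractable for tiny primitive sets.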