B25J9/1612

GRASP LEARNING USING MODULARIZED NEURAL NETWORKS
20220388162 · 2022-12-08 ·

A method for modularizing high-dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes the grasp positional dimensions and a second network encodes the rotational dimensions. The first network is trained to predict the position at which grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position predicted by the first network. Thus, the two networks collectively identify an optimal grasp, while each network's search space is reduced. Many grasp positions and rotations can be evaluated at a search cost equal to the sum of the evaluated positions and rotations, rather than their product. The dimensions may be separated in any suitable fashion, including into three neural networks in some applications.
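The sum-versus-product reduction can be sketched as a two-stage search; the quality function, the mocked networks, and all names below are invented for illustration and are not taken from the patent.

```python
def grasp_quality(position, rotation):
    # Stand-in for a learned grasp-quality function, with its optimum
    # at position 3 and rotation 7.
    return -((position - 3) ** 2) - ((rotation - 7) ** 2)

def position_net(image):
    # First network (mocked): trained to predict the position at which
    # grasp quality is maximized for any value of the rotations.
    return 3

def rotation_net(image, position, rotations):
    # Second network (mocked): searches rotations only at the predicted
    # position, costing len(rotations) evaluations instead of
    # len(positions) * len(rotations) for a joint search.
    return max(rotations, key=lambda r: grasp_quality(position, r))

image = None  # placeholder for the part image
p = position_net(image)
r = rotation_net(image, p, range(10))
```

For 10 positions and 10 rotations, the staged search above evaluates on the order of 10 + 10 candidates rather than 10 × 10, which is the reduction the abstract describes.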

TOOL FOR PALLETIZING MIXED LOAD PRODUCTS, PALLETIZING ROBOT INCLUDING THE TOOL, AND METHOD THEREFOR

A tool for palletizing mixed load products includes a frame for mounting the tool to a robot; a support assembly having a support member that forms a support surface disposed in a predetermined reference orientation so as to support a product; and a gripping assembly, mounted to the frame, with an actuator and a grip press operably coupled to the actuator so as to move the grip press relative to the frame in an actuation direction opposite the support surface, clamping the product between the support surface in the predetermined reference orientation and the grip press. The coupling of the support assembly to the frame fixes the support member relative to the frame in the actuation direction, with the support surface in the predetermined reference orientation, and is movably released in at least one other direction so that the support member can move away from the predetermined reference orientation.

Handling device, control device, and holding method

A handling device according to an embodiment has an arm, a holder, a storage, a detector, and a controller. The arm includes at least one joint. The holder is attached to the arm and is configured to hold an object. The storage stores a function map including at least one of information about holdable positions of the holder and information about possible postures of the holder. The detector is configured to detect information about the object. The controller is configured to generate holdable candidate points on the basis of the information detected by the detector, to search the function map for a position in the environment in which the object is present, the position being associated with the generated holdable candidate points, and to determine a holding posture of the holder on the basis of the retrieved position. The function map associates a manipulability with each position in the environment in which the object is present. The manipulability is a parameter calculated from at least one joint angle of the holder.
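The function-map lookup can be sketched as a table from discretized positions to manipulability scores, with candidate points ranked by score; the map contents and scores below are invented for illustration.

```python
# Hypothetical precomputed function map: discretized environment
# position -> manipulability score (higher is better).
function_map = {
    (0, 0): 0.2,
    (0, 1): 0.8,
    (1, 0): 0.5,
    (1, 1): 0.9,
}

def choose_holding_position(candidate_points):
    # Keep only candidates present in the map, then pick the one with
    # the highest manipulability; return None if nothing matches.
    scored = [(function_map[p], p) for p in candidate_points if p in function_map]
    if not scored:
        return None
    return max(scored)[1]

best = choose_holding_position([(0, 0), (1, 1), (2, 2)])
```

A real system would derive manipulability from the holder's joint angles offline and use the chosen position to select a holding posture; here the scores are simply given.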

Management System and Control Method for Management System
20220379491 · 2022-12-01 ·

Provided is a management system for managing storage and retrieval of items. The management system includes a transfer robot, a device, and a first controller. The transfer robot includes a drive mechanism and a sensor: the drive mechanism is configured to move a shelf along a transfer route to a region where any one of the operations of carrying in an item, carrying out the item, and transferring the item between shelves can be performed, and to place the shelf at a predetermined position; the sensor is configured to detect a position of the transfer robot in the space where the transfer robot is allowed to move. The device is configured to perform at least either the operation or assistance in the operation. The first controller is configured to generate control data for controlling the device and to output the control data to the device. The control data is generated on the basis of an error between a position of the shelf transferred by the transfer robot and a target position on the transfer route, the error being calculated using the position of the transfer robot detected by the sensor.
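The error-driven control step can be sketched as follows; the proportional correction and all names are assumptions for illustration, not the patent's control law.

```python
def control_data(robot_position, target_position, gain=1.0):
    # Error between the shelf position (inferred from the robot's
    # detected position) and the target position on the transfer route.
    error = tuple(t - p for p, t in zip(robot_position, target_position))
    # Corrective offset the device applies when operating on the shelf.
    return tuple(gain * e for e in error)

offset = control_data(robot_position=(10.2, 4.9), target_position=(10.0, 5.0))
```

The controller would send `offset` to the device so its operation compensates for the shelf's placement error rather than assuming the shelf sits exactly at the target.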

OBJECT BIN PICKING WITH ROTATION COMPENSATION
20220383538 · 2022-12-01 ·

A system and method for identifying an object to be picked up by a robot. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where each pixel in the depth map image is assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning convolutional neural network that performs an image segmentation process: it extracts features from the RGB image, assigns labels to the pixels so that pixels belonging to the same object share a label, and determines the orientation of each object in the segmented image. The method then identifies a location for picking up the object using the segmentation image and the depth map image, and rotates the object, based on its determined orientation, when it is picked up.
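Combining the segmentation image with the depth map to choose a pick location can be sketched on a toy 3×3 image; the labels, depths, and nearest-pixel heuristic below are illustrative assumptions, not the patent's method.

```python
# Hypothetical 3x3 segmentation image (0 = background, 1 and 2 = objects)
# and the matching depth map (distance from camera, smaller = closer).
segmentation = [
    [0, 1, 1],
    [0, 1, 1],
    [0, 0, 2],
]
depth = [
    [9.0, 5.0, 4.0],
    [9.0, 3.5, 4.5],
    [9.0, 9.0, 6.0],
]

def pick_location(label):
    # Among pixels carrying the object's label, return the (row, col)
    # with the smallest depth value, i.e. the point nearest the camera.
    pixels = [(depth[r][c], (r, c))
              for r in range(3) for c in range(3)
              if segmentation[r][c] == label]
    return min(pixels)[1]

loc = pick_location(label=1)
```

In a real pipeline the segmentation comes from the convolutional network and the object's orientation from the segmented image would additionally drive the rotation compensation at pickup.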

LIQUID SUPPLY DEVICE AND LIQUID SUPPLY METHOD
20220380190 · 2022-12-01 ·

Provided is a liquid supply device including: a plug having a plug-side liquid channel; a socket having a socket-side liquid channel; a hand configured to grip the socket and arrange the socket in a predetermined attitude at a three-dimensional position within the hand's motion range; and an image capturing unit configured to recognize the orientation of the plug axis of the plug. The hand grips the socket so that the socket and the plug are in an attitude in which the orientation of the socket axis matches the orientation of the plug axis recognized by the image capturing unit, and the socket gripped by the hand is inserted into the plug to couple the socket-side liquid channel to the plug-side liquid channel.
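The axis-matching check before insertion can be sketched as a comparison of direction vectors; the tolerance and the vector math are illustrative assumptions, not the patent's alignment procedure.

```python
import math

def axes_aligned(socket_axis, plug_axis, tol_deg=1.0):
    # Normalize both axis vectors, then compare their directions via
    # the angle recovered from the dot product.
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)
    a, b = unit(socket_axis), unit(plug_axis)
    cos_angle = sum(x * y for x, y in zip(a, b))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= tol_deg

# Socket axis parallel to the recognized plug axis -> ready to insert.
ok = axes_aligned((0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
```

The hand would reorient the socket until such a check passes, then perform the insertion that couples the two liquid channels.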

DATA SELECTION BASED ON UNCERTAINTY QUANTIFICATION

Apparatuses, systems, and techniques generate poses of an object based on data of the object observed from a first viewpoint and a second viewpoint. The poses can be evaluated to determine a portion of the data usable by an estimator to generate a pose of the object.

ROBOT
20220379467 · 2022-12-01 ·

A robot includes a robot body, a hand, an arm, and a controller. The hand includes a fixed frame that is fixed to the arm, a first camera attached to the fixed frame, a movable frame that is rotatable with respect to the fixed frame, gripping portions attached to the movable frame to grip an article having a front surface facing the robot and a back surface opposite to the front surface, and a driver that rotates the movable frame. The gripping portions grip the article in a state where the back surface is exposed, and shift, by rotation of the movable frame, from a first state in which the article is gripped to a second state in which the back surface of the article can be captured by the first camera.

System and method for robotic bin picking

A method and computing system for identifying one or more candidate objects for selection by a robot. A path to the one or more candidate objects may be determined based upon, at least in part, a robotic environment and at least one robotic constraint. A feasibility of grasping a first candidate object of the one or more candidate objects may be validated. If the feasibility is validated, the robot may be controlled to physically select the first candidate object. If the feasibility is not validated, at least one of a different grasping point of the first candidate object, a second path, or a second candidate object may be selected.
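The validate-then-fall-back flow can be sketched as a nested search over candidates and grasp points; the feasibility test here is a placeholder predicate, not the patent's validation method.

```python
def pick(candidates, grasp_points, feasible):
    # Try each candidate object and each of its grasp points in order;
    # return the first (object, grasp point) pair that validates.
    # Falling through a candidate's grasp points models "select a
    # different grasping point or a second candidate object".
    for obj in candidates:
        for gp in grasp_points(obj):
            if feasible(obj, gp):
                return obj, gp
    return None  # nothing feasible: a second path could be tried next

result = pick(
    candidates=["part_a", "part_b"],
    grasp_points=lambda obj: ["top", "side"],
    feasible=lambda obj, gp: (obj, gp) == ("part_b", "top"),
)
```

In practice `feasible` would encode the path planning against the robotic environment and constraints; only a pair that validates leads to a physical pick.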

Optimization of Motion Paths of a Robot Using Vision Data
20220371195 · 2022-11-24 ·

An example computer-implemented method includes receiving, from one or more vision components in an environment, vision data that captures features of the environment, including object features of an object located in the environment. Prior to a robot manipulating the object, the method (i) determines, based on the vision data, at least one first adjustment to a programmed trajectory of movement of the robot operating in the environment to perform a task of transporting the object, and (ii) determines, based on the object features of the object, at least one second adjustment to the programmed trajectory. The method then causes the robot to perform the task in accordance with the at least one first adjustment and the at least one second adjustment to the programmed trajectory of movement.
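Composing the two adjustments onto a programmed trajectory can be sketched as follows; representing the trajectory as 2D waypoints and each adjustment as per-waypoint offsets is an assumption for illustration.

```python
def adjust(trajectory, *adjustments):
    # Apply each adjustment (e.g. environment-based, then
    # object-feature-based) to every waypoint before execution.
    out = []
    for i, wp in enumerate(trajectory):
        for adj in adjustments:
            wp = tuple(w + a for w, a in zip(wp, adj[i]))
        out.append(wp)
    return out

programmed = [(0.0, 0.0), (1.0, 0.0)]
first = [(0.0, 0.1), (0.0, 0.1)]     # hypothetical vision-based adjustment
second = [(0.05, 0.0), (0.05, 0.0)]  # hypothetical object-feature adjustment
adjusted = adjust(programmed, first, second)
```

The robot would then execute `adjusted` rather than `programmed`, so both adjustments take effect before the object is manipulated.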