G05B2219/40053

Method and system for robotic pick-and-place comprising a container floor mounted to a transformable end of a lift mechanism and a set of container walls to define a container working volume with a working depth that extends beyond a picking workspace of a robotic arm
11591194 · 2023-02-28

The system can include: a container 110, a set of sensors 120, and a controller 130. The system can optionally include a robot 140. However, the system 100 can additionally or alternatively include any other suitable set of components. The system functions to monitor and/or maintain a fullness level of a container. The system can additionally or alternatively function to enable robotic picking out of the container (e.g., in a pick-and-place setting). The system can additionally function to maintain candidate objects within reach of the robot's end effector to increase robot uptime while minimizing the extent of the robot's required motion (e.g., in the z-axis).

LEARNING DATASET GENERATION DEVICE AND LEARNING DATASET GENERATION METHOD
20230005250 · 2023-01-05

A learning dataset generation device includes: a memory that stores three-dimensional CAD data of a workpiece and a container; and one or more processors including hardware, wherein the one or more processors are configured to use the three-dimensional CAD data of the workpiece and the container, stored in the memory, to generate, in a three-dimensional virtual space, a plurality of imaging objects in which a plurality of the workpieces are bulk-loaded in different forms inside the container, acquire a plurality of virtual distance images by measuring each of the generated imaging objects by means of a virtual three-dimensional measurement machine disposed in the three-dimensional virtual space, accept at least one teaching position for each of the acquired virtual distance images, and generate a learning dataset by associating the accepted teaching position with each of the virtual distance images.
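The claimed loop (bulk-load virtual workpieces, measure a virtual distance image, accept a teaching position, pair the two) can be sketched in a few lines. This is a minimal illustration, not the patented device: `render_distance_image` and `pick_teaching_position` are hypothetical stand-ins for the virtual 3-D measurement machine and the accepted teaching position.

```python
import random

def render_distance_image(seed):
    # Stand-in for the virtual 3-D measurement machine: a tiny "distance
    # image" of a randomly bulk-loaded scene, seeded for reproducibility.
    rng = random.Random(seed)
    return [[rng.uniform(0.2, 1.0) for _ in range(4)] for _ in range(4)]

def pick_teaching_position(image):
    # Stand-in for the accepted teaching position: here, the pixel with
    # the smallest distance (the closest workpiece surface).
    best = min((d, (r, c)) for r, row in enumerate(image)
               for c, d in enumerate(row))
    return best[1]

def generate_dataset(n_scenes):
    # Associate each virtual distance image with its teaching position.
    dataset = []
    for seed in range(n_scenes):
        image = render_distance_image(seed)
        pos = pick_teaching_position(image)
        dataset.append({"image": image, "teaching_position": pos})
    return dataset

dataset = generate_dataset(3)
```

Each dataset entry pairs one virtual distance image with one teaching position, which is the association step the abstract describes.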

DEVICE AND METHOD FOR CONTROLLING A ROBOT
20230226699 · 2023-07-20

A method for controlling a robot device. The method includes acquiring one or more images of objects in a workspace of the robot device; determining, by a neural network, object hierarchy information specifying stacking relations of the objects with respect to each other in the workspace of the robot device, and confidence information for the object hierarchy information, from the image(s); if the confidence information indicates a confidence above a confidence threshold, manipulating an object of the objects; and, if the confidence information indicates a confidence lower than the confidence threshold, acquiring an additional image of the objects, determining, by the neural network, additional object hierarchy information specifying stacking relations of the objects with respect to each other in the workspace of the robot device and additional confidence information for the additional object hierarchy information from the additional image, and controlling the robot using the additional object hierarchy information.
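The decision logic in the claim is a simple confidence-gated retry: act on the first estimate if it clears the threshold, otherwise take one more image and act on the re-estimate. A minimal sketch, where `infer` is a hypothetical stand-in for the neural network:

```python
def control_step(images, infer, threshold):
    """images: iterator of acquired images; returns the hierarchy used."""
    hierarchy, confidence = infer(next(images))
    if confidence >= threshold:
        return hierarchy
    # Confidence too low: acquire an additional image and re-infer.
    hierarchy, _ = infer(next(images))
    return hierarchy

# Toy usage: the first image yields low confidence, so the controller
# falls back to the hierarchy estimated from the second image.
fake_camera = iter(["img_a", "img_b"])
fake_infer = lambda img: ({"top": img}, 0.4 if img == "img_a" else 0.9)
used = control_step(fake_camera, fake_infer, threshold=0.8)
```

The sketch retries once; the abstract does not specify how many additional images may be acquired, so a real controller might loop until confident or a budget runs out.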

Systems, devices, components, and methods for a compact robotic gripper with palm-mounted sensing, grasping, and computing devices and components
11559900 · 2023-01-24

Disclosed are various embodiments of a three-dimensional perception and object manipulation robot gripper configured for connection to and operation in conjunction with a robot arm. In some embodiments, the gripper comprises a palm, a plurality of motors or actuators operably connected to the palm, a mechanical manipulation system operably connected to the palm, a plurality of fingers operably connected to the motors or actuators and configured to manipulate one or more objects located within a workspace or target volume that can be accessed by the fingers. A depth camera system is also operably connected to the palm. One or more computing devices are operably connected to the depth camera and are configured and programmed to process images provided by the depth camera system to determine the location and orientation of the one or more objects within a workspace, and in accordance therewith, provide as outputs therefrom control signals or instructions configured to be employed by the motors or actuators to control movement and operation of the plurality of fingers so as to permit the fingers to manipulate the one or more objects located within the workspace or target volume. The gripper can also be configured to vary controllably at least one of a force, a torque, a stiffness, and a compliance applied by one or more of the plurality of fingers to the one or more objects.

Adaptive grasp planning for bin picking
11701777 · 2023-07-18

An adaptive robot grasp planning technique for bin picking. Workpieces in a bin having random positions and poses are to be grasped by a robot and placed in a goal position and pose. The workpiece shape is analyzed to identify a plurality of robust grasp options, each grasp option having a position and orientation. The workpiece shape is also analyzed to determine a plurality of stable intermediate poses. Each individual workpiece in the bin is evaluated to identify a set of feasible grasps, and the workpiece is moved to the goal pose if such direct movement is possible. If direct movement is not possible, a search problem is formulated, where each stable intermediate pose is a node. The search problem is solved by evaluating the feasibility and optimality of each link between nodes. Feasibility of each link is evaluated in terms of collision avoidance constraints and robot joint motion constraints.
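The search problem described above can be illustrated as a shortest-path query over a graph whose nodes are stable intermediate poses and whose edges exist only when a regrasp move passes the feasibility checks. A breadth-first sketch, assuming a boolean `feasible` predicate as a stand-in for the collision-avoidance and joint-motion constraint evaluation:

```python
from collections import deque

def plan_regrasps(poses, feasible, start, goal):
    # BFS over stable poses: returns the shortest feasible chain of
    # intermediate poses from the in-bin pose to the goal pose, or None.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in poses:
            if nxt not in visited and feasible(path[-1], nxt):
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no feasible chain of intermediate poses exists

# Toy example: the direct bin->goal move is infeasible, so the planner
# routes through the stable intermediate pose "side".
poses = ["bin", "side", "goal"]
edges = {("bin", "side"), ("side", "goal")}
path = plan_regrasps(poses, lambda a, b: (a, b) in edges, "bin", "goal")
```

BFS finds a minimum-hop chain; the patent also weighs optimality of each link, which would turn this into a weighted shortest-path (e.g., Dijkstra) variant.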

Handling device and computer program product

A handling device according to an embodiment includes a manipulator, a normal grid generation unit, a hand kernel generation unit, a calculation unit, and a control unit. The normal grid generation unit converts a depth image into a point cloud, generates spatial data including an object to be grasped that is divided into a plurality of grids from the point cloud, and calculates a normal vector of the point cloud included in the grid using spherical coordinates. The hand kernel generation unit generates a hand kernel of each suction pad. The calculation unit calculates ease of grasping the object to be grasped by a plurality of suction pads based on a 3D convolution calculation using a grid including the spatial data and the hand kernel. The control unit controls a grasping operation of the manipulator based on the ease of grasping the object to be grasped by the plurality of suction pads.
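The core computation described above is a 3-D convolution of a grid derived from the point cloud with a per-pad hand kernel; high responses mark cells where the pad sees a full patch of graspable surface. A toy pure-Python version with illustrative shapes and kernel values (the real device convolves normal-vector grids, not a binary occupancy grid):

```python
def conv3d_valid(grid, kernel):
    # "Valid" 3-D convolution of nested-list volumes (z, y, x order).
    gz, gy, gx = len(grid), len(grid[0]), len(grid[0][0])
    kz, ky, kx = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(gz - kz + 1):
        plane = []
        for y in range(gy - ky + 1):
            row = []
            for x in range(gx - kx + 1):
                s = sum(grid[z + i][y + j][x + k] * kernel[i][j][k]
                        for i in range(kz)
                        for j in range(ky)
                        for k in range(kx))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

grid = [[[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]]           # 1x3x3 occupancy grid (one z-slice)
pad_kernel = [[[1, 1],
               [1, 1]]]        # 1x2x2 suction-pad footprint
ease = conv3d_valid(grid, pad_kernel)
# ease == [[[4, 2], [2, 1]]]: only the top-left cell fully supports the pad
```

The controller would then rank candidate grasp cells by these ease scores, one score volume per suction pad.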

Systems and methods for pre-plating structural members

Pre-plating systems and related methods are disclosed. A pre-plating system includes a press, an infeed robot configured to deliver a structural member to the press, and an outfeed robot configured to remove the structural member from the press. The press is configured to secure a plate to the structural member while the structural member is held in position by at least one of the infeed robot or the outfeed robot. A pre-plating system includes a press, a transfer pedestal, a plate picking robot, and a press loading robot. The plate picking robot is configured to pick a plate from a container and position the plate on the transfer pedestal. The press loading robot is configured to transfer the plate to the press. The press is configured to press the plate into a structural member positioned within the press.

SYSTEMS AND METHODS FOR PICKING OBJECTS USING 3-D GEOMETRY AND SEGMENTATION

A method for controlling a robotic system includes: capturing, by an imaging system, one or more images of a scene; computing, by a processing circuit including a processor and memory, one or more instance segmentation masks based on the one or more images, the one or more instance segmentation masks detecting one or more objects in the scene; computing, by the processing circuit, one or more pickability scores for the one or more objects; selecting, by the processing circuit, an object among the one or more objects based on the one or more pickability scores; computing, by the processing circuit, an object picking plan for the selected object; and outputting, by the processing circuit, the object picking plan to a controller configured to control an end effector of a robotic arm to pick the selected object.
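The selection step in the pipeline reduces to ranking detected instances by a pickability score. A minimal sketch; the scoring heuristic below (visible area penalized by occlusion) is an illustrative assumption, not the patented score:

```python
def select_object(instances, score):
    # Pick the instance with the highest pickability score.
    return max(instances, key=score)

# Hypothetical instance-segmentation outputs with per-object attributes.
instances = [
    {"id": "box",  "visible_area": 120, "occluded": True},
    {"id": "tube", "visible_area": 90,  "occluded": False},
]

def pickability(obj):
    # Toy heuristic: favor large visible area, penalize occluded objects.
    return obj["visible_area"] - (100 if obj["occluded"] else 0)

best = select_object(instances, pickability)
# best["id"] == "tube": 120 - 100 = 20 for the occluded box vs 90
```

In the full pipeline the selected object would then be handed to a picking planner that produces the object picking plan for the end effector.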

Method and system for performing image classification for object recognition

Systems and methods for classifying at least a portion of an image as being textured or textureless are presented. The system receives an image generated by an image capture device, wherein the image represents one or more objects in a field of view of the image capture device. The system generates one or more bitmaps based on at least one image portion of the image. The one or more bitmaps describe whether one or more features for feature detection are present in the at least one image portion, or describe whether one or more visual features for feature detection are present in the at least one image portion, or describe whether there is variation in intensity across the at least one image portion. The system determines whether to classify the at least one image portion as textured or textureless based on the one or more bitmaps.
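One of the bitmap variants described above, intensity variation across the image portion, can be sketched as follows. The neighborhood size, tolerance, and the fraction-of-set-bits decision rule are illustrative assumptions, not the patented thresholds:

```python
def variation_bitmap(patch, tol):
    # Set a bit wherever the 3x3 neighborhood intensity range exceeds tol.
    h, w = len(patch), len(patch[0])
    bitmap = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [patch[j][i]
                     for j in (y - 1, y, y + 1)
                     for i in (x - 1, x, x + 1)
                     if 0 <= j < h and 0 <= i < w]
            if max(neigh) - min(neigh) > tol:
                bitmap[y][x] = 1
    return bitmap

def classify(patch, tol=10, min_fraction=0.25):
    # Call the portion textured when enough of the bitmap is set.
    bm = variation_bitmap(patch, tol)
    set_fraction = sum(map(sum, bm)) / (len(bm) * len(bm[0]))
    return "textured" if set_fraction >= min_fraction else "textureless"

flat = [[100] * 4 for _ in range(4)]                             # uniform patch
checks = [[0 if (x + y) % 2 else 255 for x in range(4)] for y in range(4)]
# classify(flat) == "textureless"; classify(checks) == "textured"
```

The same decision structure applies to the feature-presence bitmaps in the abstract: build a bitmap per criterion, then classify from the combined bitmaps.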

MULTI-ANGLE END EFFECTOR
20220402127 · 2022-12-22

Embodiments of the present disclosure are directed towards robotic systems and methods. The robot may include an end effector, a tool flange of the robot, and a joint. The end effector may include a contacting part configured to contact a workpiece. The joint may be positioned between, and connected to, the tool flange and the end effector. The joint may include a variable angle between the tool flange and the end effector.