Patent classification: B25J9/1669
Robotic grasping prediction using neural networks and geometry aware object representation
Deep machine learning methods and apparatus, some of which are related to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are: geometry-aware, and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding that is generated based on a trained encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for a candidate grasp pose.
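The two-network pipeline above can be sketched as follows. This is a minimal illustration, not the patented implementation: the dimensions, the random weights standing in for the trained encoder and prediction networks, and the function names `encode_geometry` / `predict_grasp_outcome` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the patent): a 16x16 depth image encoded
# into a 32-dimensional geometry-aware embedding, plus a 6-DoF grasp pose.
IMG, EMB, POSE = 16 * 16, 32, 6

W_enc = rng.normal(0, 0.05, (IMG, EMB))       # stands in for the trained encoder
W_out = rng.normal(0, 0.05, (EMB + POSE, 1))  # stands in for the prediction net

def encode_geometry(depth_image):
    """Map a 2.5D depth image to a high-dimensional encoding that is meant
    to capture three-dimensional features such as shape."""
    return np.tanh(depth_image.reshape(-1) @ W_enc)

def predict_grasp_outcome(encoding, grasp_pose):
    """Score a candidate end-effector pose given the geometry encoding;
    the sigmoid output is read as a grasp-success probability."""
    x = np.concatenate([encoding, grasp_pose])
    return float(1.0 / (1.0 + np.exp(-(x @ W_out)[0])))

depth = rng.random((16, 16))
pose = np.zeros(POSE)  # candidate grasp pose (x, y, z, roll, pitch, yaw)
enc = encode_geometry(depth)
p = predict_grasp_outcome(enc, pose)
```

In the patent's terms, `enc` plays the role of the geometry output and `pose` the additional input applied alongside it to the grasp outcome prediction network.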
Vehicle body assembly station
The vehicle body assembly station comprises a main transport assembly for conveying a vehicle body along a first direction D1, in which at least one assembly robot is provided to move along a second direction D2, and a temporary transport assembly, whose operation is more accurate than that of the main transport assembly, for moving the vehicle body independently of the main transport assembly while the assembly robot performs operations on the vehicle body, whereby a new coordinate reference system is created by the temporary transport assembly.
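The "new coordinate reference system" amounts to re-basing robot targets on the body pose measured by the accurate temporary transport. A minimal 2D sketch, assuming a planar pose `(x, y, theta)` and a hypothetical helper name `body_to_station`:

```python
import math

def body_to_station(point_body, body_pose):
    """Transform a point expressed in the vehicle-body frame into station
    coordinates, using the body pose (x, y, theta) reported by the
    temporary transport assembly.  Working in this re-based frame lets the
    assembly robot ignore positioning error of the main transport."""
    x, y, theta = body_pose
    px, py = point_body
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * px - s * py, y + s * px + c * py)

# Example: a work point 0.5 m ahead on the body, body rotated 90 degrees
# and located at (2.0, 1.0) in station coordinates.
target = body_to_station((0.5, 0.0), (2.0, 1.0, math.pi / 2))
```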
Exoskeleton system, control device, and method
An exoskeleton system includes a first exoskeleton unit configured to support a first body part, a second exoskeleton unit configured to support a second body part, and a control device. The first exoskeleton unit and the second exoskeleton unit are mechanically decoupled from each other. The control device is configured to control, based on a control model, at least one of the first exoskeleton unit and the second exoskeleton unit. The control model is based on a multibody system that models the first exoskeleton unit, the second exoskeleton unit, and at least one of the first body part and the second body part.
Deep machine learning methods and apparatus for robotic grasping
Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
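The iterative servoing described above can be sketched as a loop that scores candidate motion commands with the learned predictor and decides when to issue the grasp command. Everything here is illustrative: the scalar pose, the candidate deltas, the threshold, and the toy predictor are assumptions, not the patent's trained network.

```python
def servo_to_grasp(predict, candidate_deltas, pose, threshold=0.9, max_steps=50):
    """Iteratively update motion commands using a grasp-success predictor;
    issue the grasp command once the best candidate's predicted success
    clears the threshold."""
    for _ in range(max_steps):
        scored = [(predict(pose, d), d) for d in candidate_deltas]
        best_score, best_delta = max(scored)
        pose = pose + best_delta          # execute the best motion command
        if best_score >= threshold:
            return "grasp", pose          # generate grasping control commands
    return "abort", pose

# Toy stand-in for the trained deep network: predicted success falls off
# with distance from a good grasp pose at 0.0 (unknown to the controller).
toy_predict = lambda pose, delta: max(0.0, 1.0 - abs(pose + delta))

action, final_pose = servo_to_grasp(toy_predict, [-0.2, 0.0, 0.2], pose=1.0)
```

The controller never sees the target directly; it only follows the predictor's scores, which is the essence of the servoing scheme.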
System for item placement into non-rigid containers
Examples provide a system and method for autonomously placing items into non-rigid containers. An image analysis component analyzes image data generated by one or more cameras associated with picked items ready for bagging and/or a non-rigid container, such as, but not limited to, a bag. The image analysis component generates dynamic placement data identifying how much space is available inside the bag, bag tension, and/or the contents of the bag. A dynamic placement component generates a per-item assigned placement for a selected item ready for bagging based on a per-bag placement sequence and the dynamic placement data. Instructions, including the per-item assigned placement designating a location within the interior of the non-rigid container for the selected item and an orientation for the selected item after bagging, are sent to at least one robotic device. The robotic device places the selected item into the non-rigid container in accordance with the instructions.
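A per-bag placement sequence combined with dynamic space data might look like the greedy sketch below. The item fields, the rigid-first ordering, and the `assign_placements` helper are all assumptions for illustration, not the patented logic.

```python
def assign_placements(items, bag_capacity):
    """Assign each picked item a placement in the bag, following a
    per-bag sequence (rigid items first, toward the bottom) and dynamic
    data about remaining space.  Returns (placements, overflow)."""
    sequence = sorted(items, key=lambda it: not it["rigid"])  # rigid first
    placements, remaining, overflow = [], bag_capacity, []
    for item in sequence:
        if item["volume"] > remaining:
            overflow.append(item["name"])          # would need another bag
            continue
        placements.append({"item": item["name"],
                           "location": "bottom" if item["rigid"] else "top",
                           "orientation": "flat" if item["rigid"] else "any"})
        remaining -= item["volume"]
    return placements, overflow

items = [{"name": "bread", "volume": 3.0, "rigid": False},
         {"name": "cans",  "volume": 4.0, "rigid": True},
         {"name": "melon", "volume": 5.0, "rigid": True}]
placements, overflow = assign_placements(items, bag_capacity=8.0)
```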
On-demand creation of material movement track for warehouse
A method, computer system, and a computer program product for track creation are provided. A computer receives a notification of at least one object to be moved. The at least one object is disposed at a first position. The computer receives a determination of a second position for the at least one object. The computer generates a track plan for a first track for transporting the at least one object from the first position to the second position. The computer transmits a first instruction message to a first robot. The first instruction message instructs the first robot to build a track according to the track plan.
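Track plan generation between the two positions could be as simple as a shortest-path search over a warehouse grid. This is a generic BFS sketch under assumed conventions (a 0/1 occupancy grid, cell coordinates as `(row, col)` tuples), not the claimed method.

```python
from collections import deque

def plan_track(grid, first_pos, second_pos):
    """Generate a track plan: the sequence of cells on which a robot
    should lay track segments from the object's first position to its
    second position.  grid[r][c] == 1 marks blocked cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {first_pos: None}
    queue = deque([first_pos])
    while queue:
        r, c = queue.popleft()
        if (r, c) == second_pos:
            path, node = [], (r, c)
            while node is not None:       # walk parents back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no feasible track exists

warehouse = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
track = plan_track(warehouse, (0, 0), (2, 0))
```

The resulting cell sequence is what the first instruction message would encode for the track-building robot.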
Apparatus and method for generating robot interaction behavior
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, which is a sequence of next joint positions of the robot estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
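Generating the nonverbal behavior amounts to rolling the pose-estimation model forward from the current joint positions. A minimal sketch, where the toy lambda stands in for the pre-trained neural network and the 30% step rate is purely an assumption:

```python
def generate_nonverbal_behavior(model, user_joints, robot_joints, steps=10):
    """Roll the pose model forward to produce the sequence of next joint
    positions that forms the robot's nonverbal behavior."""
    sequence = []
    for _ in range(steps):
        robot_joints = model(user_joints, robot_joints)
        sequence.append(robot_joints)
    return sequence

# Toy stand-in for the pre-trained network: each step moves every robot
# joint 30% of the way toward the corresponding user joint.
toy_model = lambda user, robot: tuple(r + 0.3 * (u - r)
                                      for u, r in zip(user, robot))

behavior = generate_nonverbal_behavior(toy_model,
                                       user_joints=(1.0, -0.5),
                                       robot_joints=(0.0, 0.0))
```

The returned sequence would then be blended with (or replaced by) the co-speech gesture to form the final behavior.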
Gripping system
An object of the present invention is to provide a technique, for a gripping system having an arm mechanism and a hand mechanism attached to the arm mechanism, by which an operation of the arm mechanism can be stopped as soon as the hand mechanism contacts an object. In the gripping system according to the present invention, the hand mechanism is provided with a contact detection unit for detecting that a predetermined site of the hand mechanism has come into contact with the object. The hand mechanism is also provided with a signal transmission unit that is electrically connected to an arm control device. The signal transmission unit transmits a command signal to stop the operation of the arm mechanism directly to the arm control device as soon as the contact detection unit detects that the predetermined site of the hand mechanism has come into contact with the object.
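The point of the direct signal path is that the stop command bypasses any higher-level planner. A simplified sketch, with the class and function names (`ArmController`, `run_approach`) and the scalar motion model invented for illustration:

```python
class ArmController:
    """Minimal stand-in for the arm control device."""
    def __init__(self):
        self.position, self.stopped = 0.0, False

    def step(self, delta):
        if not self.stopped:
            self.position += delta

    def emergency_stop(self):
        # Command signal received directly from the hand's signal
        # transmission unit, not routed through a planner.
        self.stopped = True

def run_approach(arm, contact_at, step=0.1, max_steps=100):
    """Advance the arm until the hand's contact detector fires, then send
    the stop command straight to the arm controller."""
    for _ in range(max_steps):
        arm.step(step)
        if arm.position >= contact_at:   # contact detection unit fires
            arm.emergency_stop()         # signal transmission unit acts
            break
    return arm

arm = run_approach(ArmController(), contact_at=0.75)
```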
Component fastening system using a cooperative robot and fastening method thereof
A parts fastening system using a cooperative robot that fastens a module part to a fastening target includes: a jig to load the module part at a predetermined position; a loading robot to grip the module part loaded on the jig, and to move and align the module part to a fastening area in which the module part is fastened to the fastening target; a fastening robot including a first camera, the fastening robot to fasten the module part to the fastening target; and a control device to control movements of the loading robot and the fastening robot.
Autonomous mobile robots for movable production systems
A system for performing autonomous agriculture within an agriculture production environment includes one or more agriculture pods, a stationary robot system, and one or more mobile robots. The agriculture pods include one or more plants and one or more sensor modules for monitoring the plants. The stationary robot system collects sensor data from the sensor modules, performs farming operations on the plants according to an operation schedule based on the collected sensor data, and generates a set of instructions for transporting the agriculture pods within the agriculture production environment. The stationary robot system communicates the set of instructions to the agriculture pods. The mobile robots transport the agriculture pods between the stationary robot system and one or more other locations within the agriculture production environment according to the set of instructions.