G05B2219/39536

METHOD AND APPARATUS FOR MOTION PLANNING OF ROBOT, METHOD AND APPARATUS FOR PATH PLANNING OF ROBOT, AND METHOD AND APPARATUS FOR GRASPING OF ROBOT

The present disclosure provides a method and apparatus for motion planning of a robot, a method and apparatus for path planning of a robot, and a method and apparatus for grasping of a robot. The method includes: when the robot operates on an object to be operated, performing collision detection on the object and a collision subject in a space model of the real scene where the object is located; and determining, based on the collision sensitivity of the object and the collision sensitivity of the collision subject, a motion planning scheme for the robot that corresponds to the result of the collision detection and governs how the robot operates on the object.
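A minimal Python sketch of the sensitivity-driven decision this abstract describes, using axis-aligned boxes as a stand-in for the space model; the names (Box, Sensitivity, plan_motion) and the three response schemes are illustrative assumptions, not taken from the patent:

# Hypothetical sketch of sensitivity-aware collision handling; all names are
# illustrative, not taken from the patent text.
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    LOW = 0       # e.g. rigid tote walls
    MEDIUM = 1    # e.g. neighbouring goods
    HIGH = 2      # e.g. fragile items

@dataclass
class Box:
    """Axis-aligned bounding box used as a stand-in for scene geometry."""
    min_xyz: tuple
    max_xyz: tuple

    def intersects(self, other: "Box") -> bool:
        return all(
            self.min_xyz[i] <= other.max_xyz[i] and other.min_xyz[i] <= self.max_xyz[i]
            for i in range(3)
        )

def plan_motion(obj: Box, obj_sens: Sensitivity, subject: Box, subj_sens: Sensitivity) -> str:
    """Pick a motion-planning scheme from the collision result and the sensitivities."""
    if not obj.intersects(subject):
        return "execute_nominal_path"
    # A collision was detected: the stricter of the two sensitivities decides the response.
    worst = max(obj_sens.value, subj_sens.value)
    if worst == Sensitivity.HIGH.value:
        return "replan_collision_free_path"
    if worst == Sensitivity.MEDIUM.value:
        return "slow_down_and_limit_contact_force"
    return "allow_light_contact"

if __name__ == "__main__":
    held = Box((0, 0, 0), (0.1, 0.1, 0.1))
    shelf = Box((0.05, 0, 0), (0.5, 0.5, 0.5))
    # Boxes overlap and the held object is fragile, so a collision-free re-plan is chosen.
    print(plan_motion(held, Sensitivity.HIGH, shelf, Sensitivity.LOW))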

Efficient robot control based on inputs from remote client devices
11724398 · 2023-08-15

Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce the number of instances in which input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations, and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, before the object(s) are transported to a robot workspace within which the robot can reach and manipulate them.
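A rough sketch, in Python, of the input-reduction loop described above: solicit remote operator input only when a (hypothetical) trained model is not confident, and turn each solicited input into a training instance. All names and the confidence threshold are assumptions for illustration:

# Illustrative sketch, not the patented implementation; assumes a hypothetical
# grasp-parameter model whose predict() returns (parameters, confidence).
import random

CONFIDENCE_THRESHOLD = 0.8
training_instances = []

def predict_grasp_params(vision_data):
    """Stand-in for a trained model; returns (parameters, confidence)."""
    return {"grasp_pose": (0.1, 0.2, 0.3)}, random.random()

def ask_remote_operator(vision_data):
    """Stand-in for soliciting user-interface input from a remote client device."""
    return {"grasp_pose": (0.12, 0.21, 0.29)}

def manipulation_params(vision_data):
    params, confidence = predict_grasp_params(vision_data)
    if confidence >= CONFIDENCE_THRESHOLD:
        return params                      # no remote input needed for this instance
    params = ask_remote_operator(vision_data)
    # Each solicited input also becomes a training instance, so future
    # predictions need remote input less often.
    training_instances.append((vision_data, params))
    return params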

Machine learning methods and apparatus for semantic robotic grasping

Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over a grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
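A minimal PyTorch sketch of a shared network trained on both a grasp loss and a semantic loss, as the abstract describes; the layer sizes, loss choices, and equal weighting of the two losses are illustrative assumptions (PyTorch itself is also an assumption, not named in the patent):

# One training step of a joint network with a grasp head and a semantic head.
import torch
import torch.nn as nn

class JointGraspNet(nn.Module):
    def __init__(self, in_dim=64, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.grasp_head = nn.Linear(128, 1)              # grasp-success logit
        self.semantic_head = nn.Linear(128, n_classes)   # object-class logits

    def forward(self, x):
        h = self.trunk(x)
        return self.grasp_head(h), self.semantic_head(h)

model = JointGraspNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
grasp_criterion = nn.BCEWithLogitsLoss()
semantic_criterion = nn.CrossEntropyLoss()

# Random stand-in batch: features, grasp success labels, and semantic class labels.
x = torch.randn(8, 64)
grasp_labels = torch.randint(0, 2, (8, 1)).float()
class_labels = torch.randint(0, 10, (8,))

grasp_logits, sem_logits = model(x)
loss = grasp_criterion(grasp_logits, grasp_labels) + semantic_criterion(sem_logits, class_labels)
opt.zero_grad()
loss.backward()
opt.step()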

Robotic system with handling mechanism and method of operation thereof
11767181 · 2023-09-26

A gripper including: an orientation sensor configured to generate an orientation reading for a target object; a first grasping blade; a second grasping blade configured to secure the target object in conjunction with the first grasping blade, at an opposite end of the target object relative to the first grasping blade; a first position sensor, of the first grasping blade, configured to generate a first position reading of the first grasping blade relative to the target object; a second position sensor, of the second grasping blade, configured to generate a second position reading of the second grasping blade relative to the target object; and a blade actuator configured to secure the target object with the first grasping blade and the second grasping blade based on a valid orientation of the orientation reading and based on the first position reading and the second position reading indicating a stable condition.
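A hypothetical sketch of the actuation condition this claim-style abstract describes: close the blade actuator only when the orientation reading is valid and both blade position readings indicate a stable condition. Thresholds and names are invented for illustration:

# Simplified gate on the blade actuator; all thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class GripperReadings:
    orientation_deg: float     # target-object orientation from the orientation sensor
    blade1_gap_mm: float       # first position sensor: blade-to-object distance
    blade2_gap_mm: float       # second position sensor: blade-to-object distance

def orientation_valid(orientation_deg: float, tolerance_deg: float = 10.0) -> bool:
    return abs(orientation_deg) <= tolerance_deg

def stable_condition(gap1: float, gap2: float, max_gap_mm: float = 2.0,
                     max_asymmetry_mm: float = 0.5) -> bool:
    # Both blades close to the object, and roughly symmetric about it.
    return max(gap1, gap2) <= max_gap_mm and abs(gap1 - gap2) <= max_asymmetry_mm

def should_actuate(readings: GripperReadings) -> bool:
    return orientation_valid(readings.orientation_deg) and stable_condition(
        readings.blade1_gap_mm, readings.blade2_gap_mm
    )

print(should_actuate(GripperReadings(3.0, 1.2, 1.0)))   # True: secure the object
print(should_actuate(GripperReadings(25.0, 1.2, 1.0)))  # False: invalid orientation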

CONTROL DEVICE AND PROGRAM

A control device comprises a first interface, a second interface, a processor, and a third interface. The first interface is configured to acquire a captured image of an article. The second interface is configured to transmit and receive data to and from an input/output device. The processor is configured to cause the input/output device to display an article image based on the captured image, receive an input of a position and an angle of a grip portion model of a grip portion that grips the article from the input/output device through the second interface, display the grip portion model on the article image, and generate a gripping plan. The third interface is configured to transmit the gripping plan to a control unit of a gripping device including the grip portion.
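A structural Python sketch of that control flow, with the three interfaces modeled as plain callables/objects and the gripping plan as a small dataclass; every name here is illustrative, not from the patent:

# Skeleton of the acquire -> display -> operator input -> plan -> transmit flow.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GrippingPlan:
    grip_x: float
    grip_y: float
    grip_angle_deg: float

class InputOutputDevice:
    """Stand-in for the external input/output device (second interface)."""
    def display_article(self, image): print("displaying article image")
    def display_grip_model(self, x, y, angle): print(f"grip model at ({x}, {y}), {angle} deg")
    def read_grip_pose(self): return 120.0, 80.0, 45.0   # operator-provided position and angle

class ControlDevice:
    def __init__(self,
                 acquire_image: Callable[[], bytes],                 # first interface
                 io_device: InputOutputDevice,                       # second interface
                 send_to_gripper: Callable[[GrippingPlan], None]):   # third interface
        self.acquire_image = acquire_image
        self.io_device = io_device
        self.send_to_gripper = send_to_gripper

    def run(self) -> None:
        image = self.acquire_image()
        self.io_device.display_article(image)
        # Operator places the grip-portion model on the displayed article image.
        x, y, angle = self.io_device.read_grip_pose()
        self.io_device.display_grip_model(x, y, angle)
        self.send_to_gripper(GrippingPlan(grip_x=x, grip_y=y, grip_angle_deg=angle))

ControlDevice(acquire_image=lambda: b"raw-image-bytes",
              io_device=InputOutputDevice(),
              send_to_gripper=print).run()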

Efficient robot control based on inputs from remote client devices
11213953 · 2022-01-04

Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters, defined by instances of user interface input(s), and training machine learning model(s) to predict the object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce the number of instances in which input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations, and/or to reduce the extent of input(s) from remote client device(s) in performing a given set of robotic operations. Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s), to be manipulated by a robot, before the object(s) are transported to a robot workspace within which the robot can reach and manipulate them.

Utilizing Prediction Models of an Environment
20230008007 · 2023-01-12

A method, system and product for utilizing prediction models of an environment. In one embodiment, using a model of an environment and based on a first scene of the environment, a second scene of the environment is predicted. An observed second scene is obtained and compared to the predicted second scene. Based on the comparison between the predicted second scene and the observed second scene, an action is performed.
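A minimal sketch of the predict-compare-act loop, assuming NumPy arrays as the scene representation, an identity function as a stand-in model, and an invented anomaly threshold:

# Predict the next scene, compare it with the observed scene, act on the error.
import numpy as np

ANOMALY_THRESHOLD = 0.1

def predict_next_scene(model, scene: np.ndarray) -> np.ndarray:
    return model(scene)

def compare_scenes(predicted: np.ndarray, observed: np.ndarray) -> float:
    return float(np.mean(np.abs(predicted - observed)))

def act_on_comparison(error: float) -> str:
    return "raise_alert_and_replan" if error > ANOMALY_THRESHOLD else "continue_nominal_operation"

# Example with an identity "model" and a slightly perturbed observation.
model = lambda scene: scene
first_scene = np.zeros((4, 4))
observed_second = first_scene + 0.02
error = compare_scenes(predict_next_scene(model, first_scene), observed_second)
print(act_on_comparison(error))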

ONLINE AUGMENTATION OF LEARNED GRASPING
20230339107 · 2023-10-26

Systems and methods for online augmentation for learned grasping are provided. In one embodiment, a method is provided that includes identifying an action from a discrete action space. The method includes identifying a second set of grasps of the agent utilizing a transition model based on the action and at least one contact parameter. The at least one contact parameter defines allowed states of contact for the agent. The method includes applying a reward function to evaluate each grasp of the second set of grasps based on a set of contact forces within a friction cone that minimizes a difference between an actual net wrench on the object and a predetermined net wrench. The reward function is optimized online using a lookahead tree. The method includes selecting a next grasp from the second set. The method includes causing the agent to execute the next grasp.
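A heavily simplified Python sketch of grasp selection by reward over a discrete action space; the transition model, the wrench-error reward, and the single-step search are stand-ins for the friction-cone optimization and lookahead tree in the abstract:

# Propose candidate grasps per action, score them by wrench error, keep the best.
import numpy as np

def transition_model(current_grasp, action, contact_parameter):
    """Stand-in: propose candidate next grasps reachable under the action (action unused here)."""
    rng = np.random.default_rng(0)
    return [current_grasp + rng.normal(scale=0.01, size=6) for _ in range(4)]

def reward(grasp, desired_net_wrench):
    """Stand-in reward: negative distance between achievable and desired net wrench."""
    achievable_wrench = grasp  # placeholder for the contact-force computation
    return -float(np.linalg.norm(achievable_wrench - desired_net_wrench))

def select_next_grasp(current_grasp, actions, contact_parameter, desired_net_wrench):
    best_grasp, best_value = None, -np.inf
    for action in actions:                      # discrete action space
        for candidate in transition_model(current_grasp, action, contact_parameter):
            value = reward(candidate, desired_net_wrench)
            if value > best_value:
                best_grasp, best_value = candidate, value
    return best_grasp

next_grasp = select_next_grasp(np.zeros(6), actions=range(3),
                               contact_parameter="rolling_allowed",
                               desired_net_wrench=np.zeros(6))
print(next_grasp)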

Systems and methods for robotic control under contact
11548152 · 2023-01-10

A system comprises a database; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor: receive at least one of sensory data from a robot and images from a camera; identify and build models of objects in an environment, wherein the models encompass immutable properties of the identified objects, including mass and geometry, and wherein the geometry is assumed not to change; estimate the state, including position, orientation, and velocity, of the identified objects; determine, based on the state and models, potential configurations, or pre-grasp poses, for grasping the identified objects and return multiple grasping configurations per identified object; determine an object to be picked based on a quality metric; translate the pre-grasp poses into behaviors that define motor forces and torques; and communicate the motor forces and torques to the robot.
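A skeleton, in Python, of the perception-to-actuation pipeline those modules describe; every stage is a stub with invented names, intended only to show the data flow from sensory data to motor forces and torques:

# Stubbed pipeline: model objects -> estimate state -> pre-grasp poses -> pick best -> behavior.
def build_object_models(sensor_data, images):
    """Identify objects and attach immutable properties (mass, geometry)."""
    return [{"id": 1, "mass": 0.4, "geometry": "box"}]

def estimate_state(obj, sensor_data):
    return {"position": (0.3, 0.1, 0.2), "orientation": (0, 0, 0, 1), "velocity": (0, 0, 0)}

def pregrasp_poses(obj, state):
    """Return multiple candidate pre-grasp poses per identified object."""
    return [{"pose": (0.3, 0.1, 0.3), "quality": 0.9},
            {"pose": (0.25, 0.1, 0.3), "quality": 0.7}]

def pick_best(candidates):
    return max(candidates, key=lambda c: c["quality"])   # quality metric decides the pick

def pose_to_behavior(pregrasp):
    """Translate a pre-grasp pose into motor forces and torques."""
    return {"forces": [0.0, 0.0, 5.0], "torques": [0.0, 0.0, 0.1]}

def control_step(sensor_data, images, send_to_robot):
    objects = build_object_models(sensor_data, images)
    all_candidates = []
    for obj in objects:
        state = estimate_state(obj, sensor_data)
        all_candidates.extend(pregrasp_poses(obj, state))
    behavior = pose_to_behavior(pick_best(all_candidates))
    send_to_robot(behavior)

control_step(sensor_data={}, images=[], send_to_robot=print)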

A ROBOTIC SYSTEM FOR PICKING AND PLACING OBJECTS FROM AND INTO A CONSTRAINED SPACE
20220274256 · 2022-09-01

A system comprising: a database configured to store a multi-body model of a robot, the robot comprising a plurality of manipulators, a plurality of joints, and a plurality of actuators and actuator motors configured to move the joints, and wherein the multi-body model includes a kinematic and geometric model of each manipulator, a catalog of models for objects to be manipulated, the models comprising a current configuration and a target configuration, and a functional mapping of sensory data to configurations of the robot and the manipulators needed to manipulate the objects; at least one hardware processor coupled with the database; and one or more software modules that, when executed by the at least one hardware processor, receive sensory data from within a constrained space, identify objects in the constrained space based on the received sensory data and the catalog of models, determine a target pose for the joints and the manipulators based on the sensory data and the current and target configurations associated with the identified object, and compute the joint-space positions necessary to realize the target pose.
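A small illustrative sketch of the catalog lookup and joint-space computation described above; the catalog entries and the inverse-kinematics step are placeholders, not the patented method:

# Look up the identified object's current/target configurations, derive a target
# pose, and compute joint-space positions (IK is stubbed out).
CATALOG = {
    "bin_A_item": {"current_configuration": (0.4, 0.0, 0.2),
                   "target_configuration": (0.0, 0.5, 0.1)},
}

def identify_object(sensor_data, catalog):
    """Stand-in matcher: pick the catalog entry named in the sensor data."""
    label = sensor_data["detected_label"]
    return label, catalog[label]

def target_pose(entry):
    """Manipulator pose needed to move the object toward its target configuration."""
    return {"grasp_at": entry["current_configuration"],
            "place_at": entry["target_configuration"]}

def joint_space_positions(pose):
    """Placeholder for inverse kinematics over the multi-body model."""
    return [0.1, -0.4, 0.7, 0.0, 1.2, 0.0]

sensor_data = {"detected_label": "bin_A_item"}
label, entry = identify_object(sensor_data, CATALOG)
print(joint_space_positions(target_pose(entry)))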