
MACHINE LEARNING DEVICE, ROBOT SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING OBJECT PICKING OPERATION

A machine learning device that learns an operation of a robot for picking up, by a hand unit, any of a plurality of workpieces placed in a random fashion, including a bulk-loaded state, includes a state variable observation unit that observes a state variable representing a state of the robot, including data output from a three-dimensional measuring device that obtains a three-dimensional map for each workpiece, an operation result obtaining unit that obtains a result of a picking operation of the robot for picking up the workpiece by the hand unit, and a learning unit that learns a manipulated variable including command data for commanding the robot to perform the picking operation of the workpiece, in association with the state variable of the robot and the result of the picking operation, upon receiving output from the state variable observation unit and output from the operation result obtaining unit.

Machine learning device, robot system, and machine learning method for learning object picking operation

A machine learning device that learns an operation of a robot for picking up, by a hand unit, any of a plurality of objects placed in a random fashion, including a bulk-loaded state, includes a state variable observation unit that observes a state variable representing a state of the robot, including data output from a three-dimensional measuring device that obtains a three-dimensional map for each object, an operation result obtaining unit that obtains a result of a picking operation of the robot for picking up the object by the hand unit, and a learning unit that learns a manipulated variable including command data for commanding the robot to perform the picking operation of the object, in association with the state variable of the robot and the result of the picking operation, upon receiving output from the state variable observation unit and output from the operation result obtaining unit.
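The learning loop described in the two abstracts above (observe a state, issue a pick command, record the pick result, update the learned association) can be sketched as a tabular Q-learning update. Everything here — the state labels, the action set, and the reward values — is an invented stand-in for illustration, not the patented implementation:

```python
# Hypothetical sketch: the abstracts describe a learning unit that associates
# a manipulated variable (a pick command) with the observed state and the
# pick result. A minimal tabular Q-learning update is one way to realize that.

ACTIONS = ("pick_top", "pick_side")  # hypothetical command set


def update_q(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: associate (state, action) with the pick result."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)


q = {}
# One simulated trial: picking the topmost workpiece in bin cell 3 succeeded.
update_q(q, state="cell_3", action="pick_top", reward=1.0, next_state="cell_3")
print(round(q[("cell_3", "pick_top")], 3))  # 0.1
```

Repeated trials would raise the value of commands that tend to succeed in each observed state, which is the association the learning unit is described as building.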

VIRTUAL TEACH AND REPEAT MOBILE MANIPULATION SYSTEM

A method for performing a task by a robotic device includes mapping a group of task image pixel descriptors associated with a first group of pixels in a task image of a task environment to a group of teaching image pixel descriptors associated with a second group of pixels in a teaching image, based on positioning the robotic device within the task environment. The method also includes determining a relative transform between the task image and the teaching image based on mapping the group of task image pixel descriptors. The relative transform indicates a change in one or more points of 3D space between the task image and the teaching image. The method also includes updating one or more parameters of a set of parameterized behaviors associated with the teaching image based on the relative transform, and performing the task associated with the set of parameterized behaviors.
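One conventional way to compute such a relative transform, once pixel descriptors have been matched into 3D point pairs between the teaching and task views, is a Kabsch least-squares rigid alignment. This is an illustrative sketch with invented point data, not necessarily the method claimed:

```python
import numpy as np

# Illustrative sketch: recover the rigid transform (R, t) that maps matched
# 3D points from the teaching image onto the task image.

def relative_transform(src, dst):
    """Least-squares rigid transform with dst ~= R @ src + t."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Invented example: the task view sees the same points, shifted.
teach = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
task = teach + np.array([0.5, -0.2, 0.1])
R, t = relative_transform(teach, task)
print(np.allclose(R, np.eye(3)), np.round(t, 2))
```

For a pure translation the recovered rotation is the identity and `t` equals the shift, matching the abstract's notion of a change in points of 3D space between the two images.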

Device and method for calibrating coordinate system of 3D camera and robotic arm
11548156 · 2023-01-10

A calibration device for a 3D (three dimensional) camera and a robotic arm includes three plates with fixed relative positions and non-parallel separation disposed on a mount. Three spatial planes extending from the three plates intersect at a positioning point for external parameter correction. A calibration method for coordinates of a 3D (three dimensional) camera and a robotic arm using the calibration device is also specified in detail.
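The geometric idea here — three non-parallel planes meeting at a single positioning point — reduces to solving a 3×3 linear system of plane equations n_i · x = d_i. A minimal sketch with invented plane parameters (not the patent's actual plate geometry):

```python
import numpy as np

# Hypothetical plate planes, each written as n . x = d. Because the three
# normals are linearly independent (non-parallel plates), the system has a
# unique solution: the positioning point used for external calibration.
normals = np.array([[1.0, 1, 0],
                    [0.0, 1, 1],
                    [1.0, 0, 1]])
offsets = np.array([3.0, 5.0, 4.0])

point = np.linalg.solve(normals, offsets)  # intersection of the three planes
print(point)  # [1. 2. 3.]
```

Observing where the three plate planes intersect in both the camera frame and the arm frame gives a common reference point for relating the two coordinate systems.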

SAFETY IN DYNAMIC 3D HEALTHCARE ENVIRONMENT

The present invention relates to safety in a dynamic 3D healthcare environment. The invention in particular relates to a medical safety-system for dynamic 3D healthcare environments, a medical examination system with motorized equipment, an image acquisition arrangement, and a method for providing safe movements in dynamic 3D healthcare environments. In order to provide improved safety in dynamic 3D healthcare environments with a facilitated adaptability, a medical safety-system (10) for dynamic 3D healthcare environments is provided, comprising a detection system (12), a processing unit (14), and an interface unit (16). The detection system comprises at least one sensor arrangement (18) adapted to provide depth information of at least a part of an observed scene (22). The processing unit comprises a correlation unit (24) adapted to correlate the depth information. The processing unit comprises a generation unit (26) adapted to generate a 3D free space model (32). The interface unit is adapted to provide the 3D free space model.
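One simple, hypothetical way to turn per-ray depth measurements into a free-space model of the kind the generation unit produces is an occupancy-style grid in which every cell nearer to the sensor than the measured depth along its ray is known to be empty. The grid resolution and depth values below are illustrative only:

```python
import numpy as np

# Sketch (invented parameters): along each sensor ray, cells closer than the
# measured depth are marked free; cells at or beyond it are unknown/occupied.

def free_space_1d(depths, cell_size=0.1, n_cells=10):
    """Per-ray free-space mask: True where the ray is known to be empty."""
    cell_ranges = (np.arange(n_cells) + 0.5) * cell_size  # cell-center ranges
    return cell_ranges[None, :] < depths[:, None]

depths = np.array([0.35, 0.95])  # measured depth per ray, in metres
free = free_space_1d(depths)
print(free.sum(axis=1))  # number of cells marked free per ray
```

Correlating such masks from several depth sensors, as the correlation unit is described as doing, would shrink the unknown region and leave a consolidated 3D free-space model in which motorized equipment can move safely.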

Autonomous task performance based on visual embeddings

A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
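Keyframe identification of this kind is often implemented as nearest-neighbor retrieval over image embeddings; the sketch below uses cosine similarity, with invented embeddings and keyframe names, rather than the pixel-set matching the abstract claims:

```python
import numpy as np

# Illustrative sketch: pick the stored keyframe whose embedding is most
# similar to the embedding of the robot's current view.

def identify_keyframe(view_embedding, keyframes):
    """Return the name of the keyframe best matching the current view."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(keyframes, key=lambda name: cosine(view_embedding, keyframes[name]))

keyframes = {  # hypothetical keyframe embeddings, each tied to a task
    "approach_shelf": np.array([0.9, 0.1, 0.0]),
    "open_drawer":    np.array([0.1, 0.9, 0.2]),
}
view = np.array([0.8, 0.2, 0.1])  # embedding of the captured image
print(identify_keyframe(view, keyframes))  # approach_shelf
```

The retrieved keyframe then determines which stored task the robotic device performs.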

DEVICE AND METHOD FOR CALIBRATING COORDINATE SYSTEM OF 3D CAMERA AND ROBOTIC ARM
20220080597 · 2022-03-17

A calibration device for a 3D (three dimensional) camera and a robotic arm includes three plates with fixed relative positions and non-parallel separation disposed on a mount. Three spatial planes extending from the three plates intersect at a positioning point for external parameter correction. A calibration method for coordinates of a 3D (three dimensional) camera and a robotic arm using the calibration device is also specified in detail.

Robot apparatus and method of controlling robot apparatus

A robot apparatus includes a grasping section that grasps an object, a recognition section that recognizes a graspable part and a handing-over area part of the object, and a grasp planning section that plans a path of the grasping section for handing over the object to a recipient by the handing-over area part. The robot apparatus further includes a grasp control section that controls grasp operation of the object by the grasping section in accordance with the planned path.

REMOTE CONTROL SYSTEM AND REMOTE CONTROL METHOD
20210178598 · 2021-06-17

A remote control system includes: an imaging unit that shoots an environment in which a device to be operated including an end effector is located; a recognition unit that recognizes objects that can be grasped by the end effector based on a shot image of the environment shot by the imaging unit; an operation terminal that displays the shot image and receives handwritten input information input to the displayed shot image; and an estimation unit that, based on the objects that can be grasped and the handwritten input information input to the shot image, estimates an object to be grasped, which has been requested to be grasped by the end effector, from among the objects that can be grasped, and estimates a way of performing the requested grasping motion by the end effector.

Training methods for deep networks

A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of objects common to the pairs of 3D images. The method also includes using the reference image from training of the deep neural network to determine correlations that identify detected objects in future images.
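The training signal implied by this abstract — descriptors of the same object in two artificially manipulated renders should agree, while descriptors of different objects should not — can be expressed as a pairwise contrastive loss. A hedged sketch with invented descriptor values, standing in for whatever objective the patent actually uses:

```python
import numpy as np

# Sketch: pull descriptors of the same object (across the two renders)
# together, and push descriptors of different objects at least `margin` apart.

def contrastive_loss(desc_a, desc_b, same_object, margin=1.0):
    """Pairwise contrastive loss on a descriptor pair."""
    dist = np.linalg.norm(desc_a - desc_b)
    if same_object:
        return 0.5 * dist ** 2
    return 0.5 * max(0.0, margin - dist) ** 2

a = np.array([0.2, 0.4])  # descriptor of an object in render 1 (invented)
b = np.array([0.2, 0.4])  # same object in the manipulated render 2
c = np.array([0.9, 0.1])  # a different object
print(contrastive_loss(a, b, True), round(contrastive_loss(a, c, False), 3))
```

Minimizing such a loss over many image pairs yields descriptors that stay stable under lighting and pose changes, which is what lets the reference image's embedded descriptors be correlated against future images.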