G05B2219/39391

ROBOT SYSTEM, PARALLEL LINK MECHANISM, CONTROL METHOD, CONTROL DEVICE, AND STORAGE MEDIUM

A robot system according to an embodiment includes an articulated arm mechanism, a parallel link mechanism, an end effector, a detector, and a control device. The parallel link mechanism includes a fixed part mounted to a distal part of the arm mechanism, and a movable part that is mounted to the fixed part via multiple parallel links and is movable with respect to the fixed part. The end effector is mounted to the movable part. The detector detects a position or orientation of a control point. The control device controls the arm mechanism and the parallel link mechanism. The control device performs a first operation of setting a posture of the control point to a first posture, and a second operation of setting the posture of the control point to a task posture in which the end effector performs a task.

Viewpoint invariant visual servoing of robot end effector using recurrent neural network

Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
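The time-step structure described above can be sketched as a minimal recurrent policy: at each step, an observation embedding and the hidden state produce an action prediction. This is an illustrative toy, not the patented model; the dimensions and random weights below are assumptions, where a real model would be trained on simulated episodes and fine-tuned on a small set of real data.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED, HIDDEN, ACTION = 8, 16, 3   # assumed toy dimensions

# Placeholder weights; a trained model would supply these.
W_xh = rng.normal(scale=0.1, size=(HIDDEN, EMBED))
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_ha = rng.normal(scale=0.1, size=(ACTION, HIDDEN))

def rnn_step(obs_embedding, hidden):
    """One recurrent step: update the hidden state, predict a motion."""
    hidden = np.tanh(W_xh @ obs_embedding + W_hh @ hidden)
    action = W_ha @ hidden          # predicted end-effector displacement
    return action, hidden

hidden = np.zeros(HIDDEN)
for t in range(5):                  # a plurality of time steps
    obs = rng.normal(size=EMBED)    # stand-in for a vision embedding
    action, hidden = rnn_step(obs, hidden)
```

Because the hidden state carries history across steps, the same policy can adapt its predictions as the viewpoint changes, which is the intuition behind the viewpoint invariance claimed above.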

Communication system and method for controlling communication system

A communication system according to the present disclosure includes a camera configured to photograph a user who is a communication partner, a microphone configured to form a beam in a specific direction, and a control unit. The control unit identifies the position of the user's mouth from an image of the user taken by the camera, and controls the position of a head part so that the identified mouth position is included in the beam-forming region.
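The control idea can be sketched as follows: estimate the mouth's bearing from its pixel position, then turn the head only when the mouth falls outside the beam-forming region. The field of view, image width, and beam half-angle below are assumed values, not taken from the disclosure.

```python
# Assumed camera and beam parameters for this sketch.
H_FOV_DEG = 60.0             # horizontal field of view
IMG_WIDTH = 640              # image width in pixels
BEAM_HALF_WIDTH_DEG = 10.0   # beam-forming region half-angle

def mouth_bearing_deg(mouth_px_x):
    """Bearing of the detected mouth relative to the camera axis."""
    offset = (mouth_px_x - IMG_WIDTH / 2) / (IMG_WIDTH / 2)
    return offset * (H_FOV_DEG / 2)

def head_yaw_correction(mouth_px_x):
    """Yaw increment needed so the mouth lies inside the beam region."""
    bearing = mouth_bearing_deg(mouth_px_x)
    if abs(bearing) <= BEAM_HALF_WIDTH_DEG:
        return 0.0           # mouth already inside the beam
    return bearing           # turn the head by the residual bearing
```

A mouth at the image centre needs no correction; a mouth at the image edge (30° off-axis with these parameters) commands a 30° head turn.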

Operation system

A robot system including a robot, a marker unit, a sensor, a storage device, and a control device. The robot performs an operation on a workpiece. The marker unit is attached to a measurement object and includes a base section and a plurality of markers attached to the base section. The sensor detects identification information and three-dimensional positions of the plurality of markers. The storage device stores teaching data including operation data and attachment position data indicating a correspondence relationship between the identification information of each of the markers and an attachment position of the corresponding marker. The control device calculates a three-dimensional position of the measurement object based on the three-dimensional positions of the plurality of markers and the attachment position data, and controls the robot based on the three-dimensional position of the measurement object and the operation data so as to make the robot perform the operation.
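A simplified sketch of the position calculation: given detected marker positions and the stored attachment offsets from the teaching data, estimate the measurement object's origin. This toy version assumes the object is not rotated relative to the reference frame; a real system would solve for the full 6-DoF pose (e.g. with a rigid-body fit). The marker IDs and offsets below are invented for illustration.

```python
import numpy as np

# Attachment position data: marker ID -> attachment position on object.
attachment = {
    1: np.array([0.10, 0.00, 0.0]),
    2: np.array([0.00, 0.10, 0.0]),
    3: np.array([-0.10, 0.00, 0.0]),
}

def object_position(detected):
    """detected: marker ID -> measured 3-D position in the sensor frame.

    Each marker gives one estimate of the object origin; averaging
    reduces the effect of per-marker measurement noise.
    """
    estimates = [detected[i] - attachment[i] for i in detected]
    return np.mean(estimates, axis=0)

# Simulate markers on an object translated by [1.0, 2.0, 0.5].
measured = {i: attachment[i] + np.array([1.0, 2.0, 0.5]) for i in attachment}
pos = object_position(measured)
```

Averaging over multiple identified markers also lets the calculation tolerate a missing or occluded marker, since any subset still yields an estimate.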

Image processing apparatus that processes image picked up by image pickup apparatus attached to robot, control method therefor, and storage medium storing control program therefor

An image processing apparatus capable of simplifying operations for determining an image pickup posture of an image pickup apparatus attached to a robot. The image processing apparatus processes an image that an image pickup apparatus attached to a robot picks up. The image processing apparatus includes a memory device that stores a set of instructions, and at least one processor that executes the set of instructions to specify a working area of the robot based on teaching point information indicating a plurality of designated teaching points, specify an image pickup area of the image pickup apparatus so as to include the specified working area, and determine an image pickup posture of the robot based on the specified image pickup area.
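The specification chain above can be sketched in two steps: take the working area as the bounding box of the designated teaching points, then pad it to obtain an image pickup area that fully contains it. The 2-D simplification and the margin value are assumptions for this sketch.

```python
def working_area(teaching_points):
    """Axis-aligned bounding box (min corner, max corner) of the points."""
    xs = [p[0] for p in teaching_points]
    ys = [p[1] for p in teaching_points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def image_pickup_area(teaching_points, margin=0.05):
    """Working area grown by a margin so the whole area stays in view."""
    (x0, y0), (x1, y1) = working_area(teaching_points)
    return (x0 - margin, y0 - margin), (x1 + margin, y1 + margin)

# Three designated teaching points (metres, invented for illustration).
area = image_pickup_area([(0.2, 0.1), (0.5, 0.4), (0.3, 0.6)])
```

The image pickup posture would then be chosen so the camera's footprint covers this padded rectangle, a step that depends on the camera's intrinsics and is omitted here.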

Techniques for detecting errors or loss of accuracy in a surgical robotic system
11602401 · 2023-03-14

Systems and methods for operating a robotic surgical system are provided. The system includes a surgical tool, a manipulator comprising links for controlling the tool, and a navigation system including a tracker and a localizer to monitor a state of the tracker. Controller(s) determine a relationship between one or more components of the manipulator and one or more components of the navigation system by utilizing kinematic measurement data from the manipulator and navigation data from the navigation system. The controller(s) utilize the relationship to determine whether an error has occurred relating to at least one of the manipulator and the navigation system. The error is at least one of undesired movement of the manipulator, undesired movement of the localizer, failure of any one or more components of the manipulator or the localizer, and/or improper calibration data.
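An illustrative consistency check in the spirit of this abstract: compare the tool pose implied by the manipulator's kinematic measurement data with the pose reported by the navigation system, using a previously determined relationship (here a fixed transform). The 4x4 homogeneous-transform representation and the position tolerance are assumptions for this sketch.

```python
import numpy as np

POSITION_TOL = 0.002   # metres; assumed error threshold

def detect_error(kinematic_pose, navigation_pose, relationship):
    """Flag an error when the two measurement chains disagree.

    All arguments are 4x4 homogeneous transforms; `relationship`
    maps navigation-frame poses into the kinematic frame.
    """
    predicted = relationship @ navigation_pose
    residual = np.linalg.norm(predicted[:3, 3] - kinematic_pose[:3, 3])
    return residual > POSITION_TOL

I = np.eye(4)
ok = detect_error(I, I, I)           # identical poses: no error flagged
bad = I.copy(); bad[0, 3] = 0.01     # 10 mm discrepancy
err = detect_error(I, bad, I)        # discrepancy exceeds the tolerance
```

A disagreement beyond the tolerance cannot distinguish between the listed causes (undesired motion, component failure, bad calibration) on its own; it only signals that at least one of them has occurred.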

DEVICE AND METHOD FOR CONTROLLING A ROBOT TO INSERT AN OBJECT INTO AN INSERTION
20220331964 · 2022-10-20

A method for controlling a robot to insert an object into an insertion. The method includes controlling the robot to hold the object, generating an estimate of a target position for inserting the object into the insertion, controlling the robot to move to the estimated target position, and taking a camera image using a camera mounted on the robot after the robot has moved to the estimated target position. The camera image is fed into a neural network trained to derive, from camera images, movement vectors that specify movements from the positions at which the camera images were taken to insert objects into insertions, and the robot is controlled to move according to the movement vector derived by the neural network from the camera image.
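The described sequence maps onto a short control loop: coarse move to the estimate, then image-and-correct using the network's movement vectors. `move_to`, `capture`, and `policy` below stand in for the robot interface and the trained network; they are stubbed with toy implementations so the loop structure itself can run.

```python
def insert_object(estimate, move_to, capture, policy, steps=3):
    """Move to the estimated target, then refine with predicted vectors."""
    pose = move_to(estimate)                 # coarse move to the estimate
    for _ in range(steps):
        image = capture()                    # robot-mounted camera image
        vector = policy(image)               # NN-derived movement vector
        pose = move_to([p + v for p, v in zip(pose, vector)])
    return pose

# --- toy stand-ins so the loop is runnable ---
target = [0.5, 0.2, 0.1]                     # true insertion position
state = {"pose": None}

def move_to(p):
    state["pose"] = list(p)                  # pretend the robot moved
    return list(p)

def capture():
    return None                              # placeholder image

def policy(_img):
    # Toy "network": step halfway toward the target each iteration.
    return [(t - p) / 2 for t, p in zip(target, state["pose"])]

final = insert_object([0.4, 0.2, 0.1], move_to, capture, policy)
```

Each iteration halves the remaining error in this toy setup, so the pose converges toward the true insertion position from the imperfect initial estimate.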

DEVICE AND METHOD FOR TRAINING A NEURAL NETWORK FOR CONTROLLING A ROBOT FOR AN INSERTING TASK
20220335710 · 2022-10-20

A method for training a neural network to derive, from an image of a camera mounted on a robot, a movement vector for the robot to insert an object into an insertion. The method includes controlling the robot to hold the object and bringing the robot into a target position in which the object is inserted in the insertion. For each of a plurality of positions different from the target position, the robot is controlled to move away from the target position to that position, a camera image is taken by the camera, and the camera image is labelled with the movement vector that moves back from the position to the target position. The neural network is then trained using the labelled camera images.
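The data-collection scheme can be sketched directly: starting from the known target pose (object inserted), move out to sampled positions, record an image at each, and label it with the vector back to the target. `render` stands in for the real camera and is stubbed here; the offsets are invented sample perturbations.

```python
import numpy as np

def collect_training_data(target, offsets, render):
    """Return (image, movement_vector) pairs for supervised training."""
    dataset = []
    for offset in offsets:
        position = target + offset          # move away from the target
        image = render(position)            # camera image at this pose
        label = target - position           # vector back to the target
        dataset.append((image, label))
    return dataset

rng = np.random.default_rng(1)
target = np.array([0.5, 0.2, 0.1])          # pose with object inserted
offsets = rng.normal(scale=0.01, size=(4, 3))
data = collect_training_data(target, offsets, render=lambda p: None)
```

Because the robot starts from the inserted pose, every label is exact by construction; no manual annotation of the images is needed, which is what makes this collection scheme attractive.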

Method for gripping an object and suction gripper

The invention relates to a method for gripping an object by a handling system including a robot with at least one robot arm; a gripping device which is connected to the robot arm and has a pneumatically operated suction gripper with an elastically deformable contact portion for contacting an outer surface of the object to be gripped; an identifier for identifying the outer surface of the object to be gripped; and a controller which interacts with the identifier and is designed to control the robot.

Robot device controller for controlling position of robot
11679508 · 2023-06-20

A first characteristic portion of a first workpiece and a second characteristic portion of a second workpiece are determined in advance. A characteristic amount detection unit detects a first characteristic amount related to the position of the first characteristic portion and a second characteristic amount related to the position of the second characteristic portion in an image captured by a camera. A calculation unit calculates, as a relative position amount, the difference between the first characteristic amount and the second characteristic amount. A command generation unit generates a movement command for operating the robot based on the relative position amount in the image captured by the camera and the relative position amount in a predetermined reference image.
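A minimal sketch of this calculation: detect one feature position per workpiece, form their difference as the relative position amount, and command a motion that drives it toward the relative position amount measured in the reference image. The proportional gain and 2-D pixel coordinates are assumptions for this sketch.

```python
import numpy as np

GAIN = 0.5   # assumed proportional gain (image units -> robot motion)

def movement_command(feat1, feat2, ref_feat1, ref_feat2):
    """Motion that reduces the deviation from the reference relation."""
    relative = np.subtract(feat1, feat2)              # current image
    relative_ref = np.subtract(ref_feat1, ref_feat2)  # reference image
    return GAIN * (relative_ref - relative)

# Invented pixel positions of the two characteristic portions.
cmd = movement_command((120, 80), (100, 60), (110, 75), (100, 60))
```

Working with the difference of the two characteristic amounts, rather than either position alone, makes the command insensitive to a common shift of both workpieces in the image.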