Patent classifications
G05B2219/40604
Information processing system, method, and program
An apparatus is provided with a first sensor unit that obtains two-dimensional or three-dimensional information about a target object from a first position and orientation, and a second sensor unit that obtains two-dimensional information about the target object. A first three-dimensional position and orientation measurement unit measures the three-dimensional position and orientation of the target object based on the information obtained by the first sensor unit. A second sensor position and orientation determination unit calculates a second position and orientation based on the measurement result of the first measurement unit and model information about the target object. A second three-dimensional position and orientation measurement unit then measures the three-dimensional position and orientation of the target object based on the information obtained by the second sensor unit at the second position and orientation and the model information about the target object.
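The coarse-to-fine flow above (coarse pose from the first sensor, then a computed viewpoint for the second sensor) can be sketched as follows. This is an illustrative simplification, not the patent's method: poses are reduced to 2-D (x, y, theta), the coarse estimate is a centroid-plus-principal-axis fit, and all function names and the fixed standoff are assumptions.

```python
import numpy as np

# Hypothetical sketch of the two-stage measurement flow. Poses are
# simplified to 2-D (x, y, theta); a real system would work in SE(3).

def coarse_pose_from_first_sensor(depth_points):
    """Coarse pose estimate: centroid and principal axis of the point set."""
    centroid = depth_points.mean(axis=0)
    centered = depth_points - centroid
    # Principal direction via the leading eigenvector of the covariance.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    theta = np.arctan2(vecs[1, -1], vecs[0, -1])
    return np.array([centroid[0], centroid[1], theta])

def second_sensor_pose(target_pose, standoff=0.3):
    """Place the second sensor a fixed standoff in front of the target,
    facing it, based on the coarse estimate (model info is implicit here)."""
    x, y, theta = target_pose
    return np.array([x - standoff * np.cos(theta),
                     y - standoff * np.sin(theta),
                     theta])

# Usage: a noisy elongated point cluster standing in for sensor data.
rng = np.random.default_rng(0)
pts = rng.normal(scale=[0.2, 0.02], size=(200, 2)) + [1.0, 2.0]
pose = coarse_pose_from_first_sensor(pts)
view = second_sensor_pose(pose)
```

The refinement step with the second sensor would then run the same kind of fit on higher-resolution data taken from `view`.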
ROBOTIC MULTI-GRIPPER ASSEMBLIES AND METHODS FOR GRIPPING AND HOLDING OBJECTS
A system and method for operating a transport robot to simultaneously grasp and transfer multiple objects are disclosed. The transport robot includes a multi-gripper assembly having an array of addressable vacuum regions, each configured to independently provide a vacuum. The robotic system receives image data representative of a group of objects. Individual target objects are identified in the group based on the received image data. Addressable vacuum regions are selected based on the identified target objects. The transport robot is commanded to cause the selected addressable vacuum regions to simultaneously grasp and transfer multiple target objects.
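The region-selection step can be illustrated with a small geometric sketch. All names, the grid layout, and the coverage threshold below are assumptions for illustration; the patent does not specify the selection rule.

```python
# Illustrative sketch (names hypothetical): map detected object footprints
# to the addressable vacuum regions of a multi-gripper face. Each region
# covers a fixed rectangle; a region is energised when an object's
# footprint covers enough of it.

REGION_W, REGION_H = 0.1, 0.1   # assumed region size in metres

def overlap(a, b):
    """Overlap area of two axis-aligned rectangles (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def select_regions(object_boxes, grid=(2, 2), min_cover=0.5):
    """Return (row, col) indices of vacuum regions to energise."""
    selected = set()
    area = REGION_W * REGION_H
    for r in range(grid[0]):
        for c in range(grid[1]):
            region = (c * REGION_W, r * REGION_H,
                      (c + 1) * REGION_W, (r + 1) * REGION_H)
            for box in object_boxes:
                if overlap(region, box) / area >= min_cover:
                    selected.add((r, c))
    return selected

# Usage: two objects under a 2x2 gripper; each claims its own region,
# so both can be grasped simultaneously and released independently.
boxes = [(0.0, 0.0, 0.1, 0.1), (0.1, 0.1, 0.2, 0.2)]
regions = select_regions(boxes)
```

Because each region provides its vacuum independently, the same mapping can later be used to release the objects one at a time.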
Robot system and adjustment method therefor
The present disclosure relates to a robot system including a robot that performs work on a workpiece; a controller that controls the robot; a first camera that captures an image of the workpiece while being moved relative to the workpiece by the operation of the robot; and a second camera that is capable of acquiring, in synchronization with image capturing by the first camera, an image that represents the relative positional relationship between the first camera and the workpiece. The controller includes a correcting unit that corrects, on the basis of the image acquired by the second camera, the image-capturing timing of the first camera so that an image is captured at a position at which the workpiece is appropriately captured in the field of view of the first camera.
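One simple way to realise such a timing correction is a linear model: if the second camera shows the workpiece offset from the centre of the first camera's field of view, shift the next trigger by the offset divided by the relative speed. This sketch and its names are assumptions, not the patent's actual correcting unit.

```python
# Hedged sketch of the timing-correction idea. The second camera reports
# where the workpiece sat in the first camera's field of view at capture
# time; an offset past centre means the next trigger should fire earlier.

def corrected_trigger_time(t_nominal, observed_offset_m, speed_m_s):
    """Shift the capture trigger so the workpiece is centred in frame.

    observed_offset_m > 0 means the workpiece was captured past centre,
    so the next capture fires earlier by offset / relative speed.
    """
    if speed_m_s == 0:
        return t_nominal
    return t_nominal - observed_offset_m / speed_m_s

# Usage: workpiece seen 5 mm past centre while moving at 0.1 m/s
# relative to the camera -> trigger 50 ms earlier.
t = corrected_trigger_time(1.000, 0.005, 0.1)
```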
Robotic multi-gripper assemblies and methods for gripping and holding objects
A method for operating a transport robot includes receiving image data representative of a group of objects. One or more target objects are identified in the group based on the received image data. Addressable vacuum regions are selected based on the identified one or more target objects. The transport robot is commanded to cause the selected addressable vacuum regions to hold and transport the identified one or more target objects. The transport robot includes a multi-gripper assembly having an array of addressable vacuum regions each configured to independently provide a vacuum. A vision sensor device can capture the image data, which is representative of the target objects adjacent to or held by the multi-gripper assembly.
Information processing apparatus, information processing method, and information processing system
There is provided an information processing apparatus that estimates the position of a distal end of a movable unit with a reduced processing load, the information processing apparatus including a position computation unit that computes the position of the movable unit, in which a second visual sensor is disposed, on the basis of first positional information obtained from reading of a projected marker by a first visual sensor and second positional information obtained from reading of the marker by the second visual sensor, which moves relative to the first visual sensor. This makes it possible to estimate the position of the distal end of the movable unit with a reduced processing load.
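The "reduced processing load" point can be made concrete with a toy version of the computation: once the fixed sensor has localised the marker in the world frame, the moving sensor's own reading of that marker pins down its position with a single rotation and subtraction, instead of full visual odometry. The 2-D model and all names here are assumptions.

```python
import numpy as np

# Hypothetical sketch: the first (fixed) sensor localises a projected
# marker in the world frame; the second (moving) sensor measures the same
# marker in its own frame. The moving unit's position then follows from
# one rotation and one vector subtraction.

def movable_unit_position(marker_world, marker_in_sensor, sensor_yaw):
    """2-D position of the second sensor given both marker readings."""
    c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
    rot = np.array([[c, -s], [s, c]])        # sensor -> world rotation
    return np.asarray(marker_world) - rot @ np.asarray(marker_in_sensor)

# Usage: marker at (2, 1) in the world; the moving sensor, yawed 90 deg,
# sees the marker at (1, 0) in its own frame.
pos = movable_unit_position([2.0, 1.0], [1.0, 0.0], np.pi / 2)
```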
Wide-Field-of-View Anti-Shake High-Dynamic Bionic Eye
The present application discloses a wide-field-of-view anti-shake high-dynamic bionic eye. A trajectory tracking method based on a bionic eye robot includes: establishing a linear model of the bionic eye robot; establishing a full state feedback control system on the basis of the linear model; and, in the full state feedback control system, acquiring the angle and angular acceleration required of a joint during target tracking on the basis of a preset trajectory expectation value and a preset joint angle expectation value. The method further includes adopting a linear quadratic regulator (LQR) to calculate the parameter K of the full state feedback control system and minimizing energy consumption by establishing an energy function, so as to optimize the coordinated head-eye motion control of the linearized bionic eye. The present application thereby achieves optimal control of target tracking.
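The LQR gain K mentioned above can be computed by backward Riccati iteration. The sketch below uses a generic double-integrator joint model, since the patent's actual bionic-eye dynamics are not given; the matrices A, B, Q, R are illustrative assumptions.

```python
import numpy as np

# Illustrative LQR sketch: compute the full-state-feedback gain K that
# minimises the quadratic cost sum(x'Qx + u'Ru) for x[k+1] = Ax[k] + Bu[k].

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration."""
    P = Q
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Assumed joint model: double integrator, state = [angle, angular
# velocity], sampled at dt = 0.01 s.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])   # penalise tracking error more than velocity
R = np.array([[0.1]])      # energy-like penalty on the control effort
K = dlqr(A, B, Q, R)

# Closed-loop check: x[k+1] = (A - B K) x[k] should be stable.
eigs = np.linalg.eigvals(A - B @ K)
```

The R weight plays the role of the energy function in the abstract: raising it trades tracking speed for lower actuation effort.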
Positioning a Robot Sensor for Object Classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
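The key data structure implied by this method is a mapping from object type to the viewing pose the type's classifier expects. A minimal sketch, with an entirely assumed registry and illustrative names:

```python
# Hedged sketch of the two-stage flow: a coarse detection from the first
# sensor yields an object type, a lookup gives the pose the classifier
# was trained to expect, and the robot places the second sensor there
# before classifying. Types, offsets, and names are illustrative only.

# Assumed registry: object type -> sensor offset (metres) relative to
# the object, matching the pose the classifier's training data used.
CLASSIFIER_POSES = {
    "parcel": {"offset": (0.0, 0.0, 0.4)},   # 40 cm directly above
    "tote":   {"offset": (0.3, 0.0, 0.2)},   # oblique view
}

def second_sensor_target(object_position, object_type):
    """World-frame position where the second sensor should be placed."""
    dx, dy, dz = CLASSIFIER_POSES[object_type]["offset"]
    x, y, z = object_position
    return (x + dx, y + dy, z + dz)

# Usage: a parcel detected at (1.0, 2.0, 0.0) by the first sensor.
target = second_sensor_target((1.0, 2.0, 0.0), "parcel")
```

Fixing the viewing pose per type is what lets the classifier stay small: it never has to generalise over arbitrary viewpoints.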
ROBOT CONTROLLER THAT CONTROLS ROBOT, LEARNED MODEL, METHOD OF CONTROLLING ROBOT, AND STORAGE MEDIUM
A robot controller that controls a robot by automatically obtaining a controller capable of suitably controlling a wide range of robots. An image is acquired from an image capturing apparatus that photographs an environment including the robot, and the robot is driven based on an output result obtained by inputting the image to a neural network. The neural network is updated according to a reward that is generated when a plurality of virtual images, photographed while changing an environmental condition of a virtual environment generated by virtualizing the real environment and the state of a virtual robot, are input to the neural network and the policy of the virtual robot output from the neural network satisfies a predetermined condition.
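The training loop described above can be caricatured in a few lines. This is a deliberately tiny stand-in, not the patent's system: the "virtual image" is a toy feature vector, the network a single linear layer, and the reward-driven update plain hill-climbing rather than gradient-based reinforcement learning.

```python
import numpy as np

# Minimal sketch of reward-driven policy updates over randomised virtual
# environments. All functions and parameters here are assumptions.

rng = np.random.default_rng(1)

def virtual_image(env_condition, robot_state):
    """Stand-in for a rendered image: features of environment and state."""
    return np.array([env_condition, robot_state, env_condition * robot_state])

def reward(action, robot_state):
    """Reward peaks when the action cancels the robot's state error."""
    return -abs(action + robot_state)

# Fixed batch of randomised (environment condition, robot state) pairs.
envs = rng.uniform(-1, 1, size=(20, 2))

def episode_score(w):
    """Total reward of linear policy w over the randomised batch."""
    return sum(reward(w @ virtual_image(env, state), state)
               for env, state in envs)

weights = np.zeros(3)
best = episode_score(weights)
for _ in range(300):
    trial = weights + rng.normal(scale=0.1, size=3)
    s = episode_score(trial)
    if s > best:                 # keep the update only if reward improved
        best, weights = s, trial
```

The learned policy should approach action = -state (weights near [0, -1, 0]), the behaviour the reward singles out; randomising the environment term is what forces the policy not to depend on it.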