Patent classifications
G05B2219/40607
ROBOT DEVICE FOR DETECTING INTERFERENCE OF CONSTITUENT MEMBER OF ROBOT
This robot device comprises a robot including a plurality of constituent members, and a control device. The control device stores three-dimensional shape data of the constituent members of the robot. A setting unit of the control device sets, in accordance with an operation state of the robot, some of the constituent members as targets of interference determination. A determination unit of the control device determines, on the basis of the three-dimensional shape data of the constituent members set by the setting unit, whether the constituent members of the robot interfere with a container storing a workpiece.
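A minimal sketch of the interference check described above. The patent stores full three-dimensional shape data; the axis-aligned bounding-box approximation below, and the member/container coordinates, are simplifying assumptions for illustration only.

```python
def aabb_overlap(box_a, box_b):
    """Return True if two axis-aligned boxes ((min_xyz), (max_xyz)) overlap."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def interferes(selected_members, container_box):
    """Check only the members set for the current operation state."""
    return any(aabb_overlap(m, container_box) for m in selected_members)

# Two constituent members currently set for interference determination
# (hypothetical boxes, metres):
wrist = ((0.4, 0.0, 0.2), (0.55, 0.2, 0.5))
hand = ((0.5, 0.1, 0.0), (0.7, 0.3, 0.2))
container = ((0.6, 0.0, 0.0), (1.0, 0.5, 0.3))

assert interferes([wrist, hand], container)   # the hand reaches into the container
assert not interferes([wrist], container)     # the wrist alone stays clear
```

The setting unit's role corresponds to choosing which members appear in `selected_members` for the current operation state, so members that cannot reach the container are skipped.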
Determining final grasp pose of robot end effector after traversing to pre-grasp pose
Grasping of an object, by an end effector of a robot, based on a final grasp pose, of the end effector, that is determined after the end effector has been traversed to a pre-grasp pose. An end effector vision component can be utilized to capture instance(s) of end effector vision data after the end effector has been traversed to the pre-grasp pose, and the final grasp pose can be determined based on the end effector vision data. For example, the final grasp pose can be determined based on selecting instance(s) of pre-stored visual feature(s) that satisfy similarity condition(s) relative to current visual features of the instance(s) of end effector vision data, and determining the final grasp pose based on pre-stored grasp criteria stored in association with the selected instance(s) of pre-stored visual feature(s).
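The selection step in the example above can be sketched as a nearest-neighbour lookup. The cosine-similarity metric, the 0.9 threshold, and all names below are assumptions for illustration; the patent only requires that some similarity condition be satisfied.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def final_grasp_pose(current_features, stored, threshold=0.9):
    """stored: list of (feature_vector, grasp_pose) pairs captured previously.

    Returns the grasp pose stored with the best-matching pre-stored
    features, or None when no stored instance satisfies the condition.
    """
    best = max(stored, key=lambda s: cosine_similarity(current_features, s[0]))
    if cosine_similarity(current_features, best[0]) >= threshold:
        return best[1]  # grasp criteria stored with the matched features
    return None

stored = [((1.0, 0.0, 0.0), "pose_A"), ((0.0, 1.0, 0.0), "pose_B")]
assert final_grasp_pose((0.95, 0.05, 0.0), stored) == "pose_A"
assert final_grasp_pose((0.5, 0.5, 0.7), stored) is None
```

A `None` result would correspond to falling back to some default behaviour (e.g., retrying from a different pre-grasp pose), which the abstract does not specify.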
COUNTER UNIT, COUNTER UNIT CONTROL METHOD, CONTROL DEVICE, AND CONTROL SYSTEM
A counter unit, a counter unit control method, a control device, and a control system are provided. A counter unit (10) executes an output to an actuator (40) at the time point when a waiting time indicated by a timing adjustment value (Ta) received from a PLC (20) has elapsed since it was determined that an actual measurement value obtained using a pulse signal matched a target value.
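The timing behaviour can be sketched with a simulated pulse stream. The discrete tick model and function names below are assumptions; the real unit counts hardware pulses and times in its own clock domain.

```python
def output_tick(pulses, target, ta):
    """Return the tick index at which the output to the actuator fires.

    The output fires `ta` ticks (the timing adjustment value Ta from the
    PLC) after the accumulated pulse count first matches the target
    value; returns None if the target is never reached.
    """
    count = 0
    matched_at = None
    for tick, pulse in enumerate(pulses):
        count += pulse
        if matched_at is None and count == target:
            matched_at = tick
    return None if matched_at is None else matched_at + ta

# One count per tick for five ticks: the target of 5 is matched at tick 4,
# so with Ta = 2 the output fires at tick 6.
assert output_tick([1, 1, 1, 1, 1, 0, 0, 0], target=5, ta=2) == 6
```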
SYSTEMS AND METHODS FOR DEFORMABLE OBJECT MANIPULATION USING AIR
Systems and methods for manipulating deformable objects using air are disclosed. In one embodiment, a system for manipulating a deformable object includes a first robot arm having a first gripper, a second robot arm having a second gripper, a blower robot arm having an air pump, and one or more processors programmed to control the first robot arm and the second robot arm to grasp the deformable object using the first gripper and the second gripper, respectively, and to control the blower robot arm to perform a plurality of blowing actions onto the deformable object until an objective is satisfied.
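The "blow until the objective is satisfied" loop can be sketched as follows. The flatness metric, the effect of a single blowing action, and the 0.9 objective threshold are all invented for illustration; the abstract does not define the objective.

```python
def manipulate(flatness, blow_gain=0.2, objective=0.9, max_actions=50):
    """While the grippers hold the object, repeat blowing actions until
    the objective (here: a hypothetical flatness score in [0, 1]) is
    satisfied or an action budget runs out."""
    actions = 0
    while flatness < objective and actions < max_actions:
        # assume each blowing action unfolds the object a little,
        # with diminishing returns as it approaches fully flat
        flatness += blow_gain * (1.0 - flatness)
        actions += 1
    return flatness, actions

final_flatness, n_actions = manipulate(0.3)
assert final_flatness >= 0.9   # the objective was satisfied
assert n_actions > 1           # it took a plurality of blowing actions
```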
CONTROL DEVICE AND ALIGNMENT DEVICE
A control device includes a first statistical processing unit, a second statistical processing unit, and a movement control unit. The first statistical processing unit acquires relative positions of the object calculated by the visual sensor and performs statistical processing on the acquired relative positions of the object. The second statistical processing unit acquires from the position sensor relative positions of the holding device corresponding to each of the relative positions of the object calculated by the visual sensor, and performs statistical processing on the acquired relative positions of the holding device. The movement control unit performs feedback control of the moving device based on the relative positions of the object and the relative positions of the holding device, and performs alignment of the object with the target position while moving the object closer to the target position.
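A minimal one-dimensional sketch of the two statistical processing units feeding the feedback step. Averaging as the statistical processing and the proportional gain are assumptions; the abstract does not specify either.

```python
def mean(samples):
    return sum(samples) / len(samples)

def feedback_step(object_positions, holder_positions, gain=0.5):
    """Return a correction command from the two averaged position streams.

    object_positions: relative positions of the object (visual sensor)
    holder_positions: corresponding relative positions of the holding
                      device (position sensor)
    """
    error = mean(object_positions) - mean(holder_positions)
    return -gain * error  # proportional step that reduces the averaged error

# Noisy samples average to an error of 1.1 - 0.1 = 1.0, so with a gain
# of 0.5 the command is -0.5.
cmd = feedback_step([1.2, 1.0, 1.1], [0.1, 0.0, 0.2])
assert abs(cmd + 0.5) < 1e-9
```

Averaging corresponding samples before closing the loop is one way to keep single-frame vision noise from being amplified by the feedback gain, which is presumably the point of the statistical processing units.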
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
An information processing device includes processing circuitry. The processing circuitry is configured to acquire one or more pieces of first state information representing a state of each of one or more second subjects related to a first subject to be a subject of inference at a first time, and one or more pieces of second state information representing a state of each of the one or more second subjects at a second time; and generate learning data for use in reinforcement learning of a machine learning model for use in inference. The learning data includes the first state information at least part of which is replaced with any of the one or more pieces of the second state information, and the second state information at least part of which is replaced with any of the one or more pieces of the first state information.
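The swap-based data generation above can be sketched with dict-based state records. The per-subject keys and the single-key swap are assumptions for illustration; the patent only requires that at least part of each time's state information be replaced with the other's.

```python
def swap_states(first_state, second_state, keys_to_swap):
    """Return (augmented_first, augmented_second) with the selected
    subjects' state entries exchanged between the two times."""
    aug_first = dict(first_state)
    aug_second = dict(second_state)
    for k in keys_to_swap:
        aug_first[k], aug_second[k] = second_state[k], first_state[k]
    return aug_first, aug_second

# Hypothetical states of two second subjects at two times:
first = {"subject_1": "pos_t1_a", "subject_2": "pos_t1_b"}
second = {"subject_1": "pos_t2_a", "subject_2": "pos_t2_b"}

f, s = swap_states(first, second, ["subject_2"])
assert f == {"subject_1": "pos_t1_a", "subject_2": "pos_t2_b"}
assert s == {"subject_1": "pos_t2_a", "subject_2": "pos_t1_b"}
```

Each swap yields two new learning samples from one observed pair, which is the augmentation effect the abstract describes.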
UTILIZING PAST CONTACT PHYSICS IN ROBOTIC MANIPULATION (E.G., PUSHING) OF AN OBJECT
Utilization of past dynamics sample(s), that reflect past contact physics information, in training and/or utilizing a neural network model. The neural network model represents a learned value function (e.g., a Q-value function) that, when trained, can be used in selecting a sequence of robotic actions to implement in robotic manipulation (e.g., pushing) of an object by a robot. In various implementations, a past dynamics sample for an episode of robotic manipulation can include at least two past images from the episode, as well as one or more past force sensor readings that temporally correspond to the past images from the episode.
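The sample structure described in the last sentence can be sketched as a small record type. The field names and the toy scoring function are assumptions; the patent describes a learned Q-value network, not the hand-written stand-in below.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PastDynamicsSample:
    """One past dynamics sample from a manipulation episode."""
    image_before: List[float]    # past image (flattened) at time t
    image_after: List[float]     # past image at a later time t+k
    force_readings: List[float]  # force sensor values temporally
                                 # corresponding to the two images

def toy_q_value(sample, action):
    """Stand-in for the learned value function: scores an action by the
    observed image change, damped by the measured contact force."""
    change = sum(b - a for a, b in zip(sample.image_before, sample.image_after))
    avg_force = sum(sample.force_readings) / len(sample.force_readings)
    return action * change / (1.0 + avg_force)

s = PastDynamicsSample([0.1, 0.2], [0.3, 0.5], [1.0, 1.0])
assert abs(toy_q_value(s, 1.0) - 0.25) < 1e-9
```

The point of carrying force readings alongside image pairs is that the value function can condition on contact physics (how hard the robot pushed) rather than on appearance alone.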
SYSTEM AND METHOD FOR SEQUENCING ASSEMBLY TASKS
One embodiment can provide a method and system for configuring a robotic system. During operation, the system can present to a user on a graphical user interface an image of a work scene comprising a plurality of components and receive, from the user, a sequence of operation commands. A respective operation command can correspond to a pixel location in the image. For each operation command, the system can determine, based on the image, a task to be performed at a corresponding location in the work scene and generate a directed graph based on the received sequence of operation commands. Each node in the directed graph can correspond to a task, and each directed edge in the directed graph can correspond to a task-performing order, thereby facilitating the robotic system to perform a sequence of tasks based on the sequence of operation commands.
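The graph-generation step above can be sketched as follows. The pixel-to-task lookup and the chain-shaped edge convention (each command points to the next) are assumptions for illustration; the patent only requires that nodes be tasks and directed edges encode the task-performing order.

```python
def build_task_graph(commands, pixel_to_task):
    """commands: ordered pixel locations from the user's operation commands.

    Returns (nodes, edges): one node per command (the task determined at
    that pixel), with a directed edge from each task to the next.
    """
    nodes = [pixel_to_task[p] for p in commands]
    edges = [(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]
    return nodes, edges

# Hypothetical mapping from clicked pixels to tasks in the work scene:
pixel_to_task = {(10, 20): "pick_bolt", (40, 25): "place_bolt", (70, 30): "tighten"}

nodes, edges = build_task_graph([(10, 20), (40, 25), (70, 30)], pixel_to_task)
assert nodes == ["pick_bolt", "place_bolt", "tighten"]
assert edges == [("pick_bolt", "place_bolt"), ("place_bolt", "tighten")]
```

A real system would determine the task at each pixel from the image (e.g., by recognizing the component under the click) rather than from a fixed table.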
SYNTHETIC REPRESENTATION OF A SURGICAL ROBOT
A system comprises a first robotic arm adapted to support and move a tool and a second robotic arm adapted to support and move a camera configured to capture an image of a camera field of view. The system further comprises an input device, a display, and a processor. The processor is configured to display a first synthetic image including a first synthetic image of the tool. The first synthetic image of the tool includes a portion of the tool outside of the camera field of view. The processor is also configured to receive a user input at the input device and responsive to the user input, change the display of the first synthetic image to a display of a second synthetic image including a second synthetic image of the tool that is different from the first synthetic image of the tool.
METHOD FOR PICKING UP AN OBJECT FROM A BIN
There is provided a method for picking up an object from a bin to be implemented by a robotic arm including a controller and a picking-up module. The method includes: recognizing, by the controller, at least one object in the bin based on an image of an interior of the bin so as to determine a type of each object; determining, by the controller, a score for each object based on the type thereof; determining, by the controller, whether the greatest score among the score(s) of the at least one object is greater than a predetermined value; and picking up, by the picking-up module, the one of the at least one object that has the greatest score when it is determined that the greatest score is greater than the predetermined value.
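The scoring and selection steps can be sketched as follows. The type scores and the threshold are invented example values; the patent only specifies that scores are determined per object type and compared against a predetermined value.

```python
def select_object(objects, type_scores, threshold):
    """objects: list of (object_id, object_type) recognized in the bin.

    Returns the id of the highest-scoring object to be picked up, or
    None if no score exceeds the predetermined threshold.
    """
    if not objects:
        return None
    best = max(objects, key=lambda o: type_scores[o[1]])
    return best[0] if type_scores[best[1]] > threshold else None

# Hypothetical per-type scores assigned by the controller:
type_scores = {"screw": 0.4, "gear": 0.8}
objects = [("obj1", "screw"), ("obj2", "gear")]

assert select_object(objects, type_scores, threshold=0.5) == "obj2"
assert select_object([("obj1", "screw")], type_scores, threshold=0.5) is None
```

A `None` result corresponds to the case where no object is picked because the greatest score does not exceed the predetermined value.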