Patent classifications
B25J9/1653
STATE ESTIMATION FOR A ROBOT EXECUTION SYSTEM
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for state estimation in a robotics system. One of the systems includes an execution subsystem configured to drive one or more robots in an operating environment including continually evaluating a plurality of execution predicates, wherein each execution predicate comprises a rule having a predicate value, and wherein, whenever a state value that satisfies the predicate value of the predicate is detected by the execution subsystem, the execution subsystem is configured to trigger a corresponding action to be performed in the operating environment by the one or more robots. A state estimator is configured to continually execute a state estimation function using one or more sensor values or status messages obtained from the operating environment and to automatically update a discrete state value for a first execution predicate of the plurality of execution predicates evaluated by the execution subsystem.
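The predicate-evaluation loop described in this abstract can be illustrated with a minimal sketch. All names, thresholds, and the thresholding rule in the state estimator are hypothetical, chosen only to show how a discrete state value updated by an estimator can trigger a predicate's action:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ExecutionPredicate:
    """A rule with a predicate value; when the current discrete state
    value satisfies it, a corresponding robot action is triggered."""
    name: str
    predicate_value: object
    action: Callable[[], str]


class StateEstimator:
    """Continually maps raw sensor values to discrete state values."""

    def estimate(self, sensor_value: float) -> str:
        # Hypothetical thresholding: discretize a gripper force reading.
        return "holding" if sensor_value > 0.5 else "empty"


class ExecutionSubsystem:
    def __init__(self, predicates: List[ExecutionPredicate]):
        self.predicates = predicates
        self.state: Dict[str, object] = {}

    def update_state(self, name: str, value: object) -> List[str]:
        """Record a discrete state value and fire satisfied predicates."""
        self.state[name] = value
        fired = []
        for p in self.predicates:
            if self.state.get(p.name) == p.predicate_value:
                fired.append(p.action())
        return fired


estimator = StateEstimator()
subsystem = ExecutionSubsystem([
    ExecutionPredicate("gripper", "holding", lambda: "move_to_bin"),
])
# The estimator turns a raw sensor value into a discrete state value,
# which the execution subsystem evaluates against its predicates.
actions = subsystem.update_state("gripper", estimator.estimate(0.8))
```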
SYSTEM AND METHOD FOR ERROR CORRECTION AND COMPENSATION FOR 3D EYE-TO-HAND COORDINATION
One embodiment can provide a robotic system. The system can include a machine-vision module, a robotic arm comprising an end-effector, a robotic controller configured to control movements of the robotic arm, and an error-compensation module configured to compensate for pose errors of the robotic arm by determining a controller-desired pose corresponding to a camera-instructed pose of the end-effector such that, when the robotic controller controls the movements of the robotic arm based on the controller-desired pose, the end-effector achieves, as observed by the machine-vision module, the camera-instructed pose. The error-compensation module can include a machine learning model configured to output an error matrix that correlates the camera-instructed pose to the controller-desired pose.
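A minimal sketch of the compensation step described above, assuming poses are 4x4 homogeneous transforms and that the learned error matrix maps the camera-instructed pose to the controller-desired pose by left multiplication (the abstract does not specify the form of the correlation, so this is an illustrative assumption):

```python
import numpy as np


def compensate(camera_instructed_pose: np.ndarray,
               error_matrix: np.ndarray) -> np.ndarray:
    """Map a camera-instructed pose (4x4 homogeneous transform) to the
    controller-desired pose by applying the learned error matrix."""
    return error_matrix @ camera_instructed_pose


# Stand-in for the machine-learning model's output: here an identity
# correction for illustration; a trained model would output a small
# rotation/translation correction instead.
E = np.eye(4)

T_instructed = np.eye(4)
T_instructed[:3, 3] = [0.4, 0.1, 0.3]  # target position in metres

# Commanding the controller with T_desired should make the arm reach
# T_instructed as observed by the machine-vision module.
T_desired = compensate(T_instructed, E)
```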
Method and system for robot action imitation learning in three-dimensional space
The present invention provides a method and system for robot action imitation learning in three-dimensional space, in the technical fields of artificial intelligence and robotics. A method based on a series-parallel multi-layer backpropagation (BP) neural network is designed for robot action imitation learning in a three-dimensional space. The method applies an imitation learning mechanism to a robot learning system: within this framework, demonstrative information generated by a mechanical arm is transmitted to the series-parallel multi-layer BP neural network representing a motion strategy for training and learning. The correspondence between a state characteristic matrix set of the motion and an action characteristic matrix set of the motion is learned, so that the demonstrative action can be reproduced and the learned actions and behaviors generalized. When facing different tasks, the method therefore does not need to carry out action planning separately, achieving high intelligence.
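The core learning step, mapping state characteristics to action characteristics with plain backpropagation, can be sketched as follows. This is a single-hidden-layer toy, not the series-parallel multi-layer architecture of the patent, and all data, sizes, and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data: rows of a state characteristic
# matrix mapped to rows of an action characteristic matrix.
states = rng.normal(size=(64, 3))                              # e.g. arm states
actions = np.tanh(states @ np.array([[0.5], [-0.2], [0.1]]))   # 1-D actions

# One hidden layer trained with plain backpropagation (BP).
W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(states @ W1 + b1)                   # forward pass
    pred = h @ W2 + b2
    err = pred - actions                            # squared-error gradient
    gW2 = h.T @ err / len(states); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                # backpropagate through tanh
    gW1 = states.T @ dh / len(states); gb1 = dh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                                # gradient-descent update

# After training, the network reproduces the demonstrated mapping.
mse = float(np.mean((np.tanh(states @ W1 + b1) @ W2 + b2 - actions) ** 2))
```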
Systems, apparatuses, and methods for detecting escalators
Systems and methods for detecting an escalator in a surrounding environment by a robotic apparatus are disclosed herein. According to at least one exemplary embodiment, an escalator may be determined based on an escalator detection parameter being met. The escalator detection parameter may further require detection of two side walls separated by a distance equal to the width of an escalator and detection of a depression in the floor equal to that observed between a stationary portion and a moving first step of an escalator.
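The two-part detection parameter above might be sketched as a simple predicate. The reference width, step drop, and tolerance values are hypothetical placeholders, not figures from the patent:

```python
def escalator_detected(wall_separation_m: float,
                       floor_drop_m: float,
                       escalator_width_m: float = 1.0,
                       step_drop_m: float = 0.2,
                       tol: float = 0.05) -> bool:
    """Check both cues of the detection parameter: two side walls
    separated by roughly an escalator's width, and a floor depression
    matching the drop between the stationary landing and the moving
    first step."""
    walls_match = abs(wall_separation_m - escalator_width_m) <= tol
    drop_matches = abs(floor_drop_m - step_drop_m) <= tol
    return walls_match and drop_matches


# Both cues within tolerance: the parameter is met.
hit = escalator_detected(1.02, 0.18)
```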
Handling apparatus, control apparatus, and recording medium
A handling apparatus has an arm having a joint; a holding portion attached to the arm and configured to hold an object; a sensor configured to detect a plurality of the objects; and a control apparatus configured to control the arm and the holding portion, wherein the control apparatus is configured to calculate an ease of holding the object by the holding portion as a score based on information acquired by the sensor with respect to each object and each holding method, select the object to hold and the holding method according to the score, and calculate a position for holding the selected object and an orientation of the arm.
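The score-based selection over (object, holding method) pairs can be sketched as below. The scoring heuristic (occlusion penalty, suction bonus for flat tops) is an invented stand-in for the sensor-derived "ease of holding" score the abstract describes:

```python
from itertools import product


def pick_best(objects, holding_methods, score_fn):
    """Score every (object, holding method) pair and return the pair
    with the highest ease-of-holding score."""
    return max(product(objects, holding_methods),
               key=lambda pair: score_fn(*pair))


# Hypothetical score: prefer unoccluded objects, and suction on flat tops.
def score(obj, method):
    s = 1.0 - obj["occlusion"]
    if method == "suction" and obj["flat_top"]:
        s += 0.5
    return s


objects = [
    {"name": "box", "occlusion": 0.1, "flat_top": True},
    {"name": "bottle", "occlusion": 0.4, "flat_top": False},
]
best_obj, best_method = pick_best(objects, ["suction", "pinch"], score)
```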
METHOD OF CONTROLLING THE FORCE OF A PNEUMATIC ACTUATING DEVICE
A method is for controlling an actuation force exerted by an actuating device having a first working chamber and a second working chamber supplied with pressurized air from a source of pressurized air by a first pressure regulator and a second pressure regulator. The method includes calculating, by an optimization algorithm based on a dynamic model of the actuating device and of the first and second pressure regulators, desired values for control signals for the first and second pressure regulators to generate an actuation force equal to a desired value for the actuation force. An estimated value for the actuation force, estimated values for pressures inside the first and second working chambers and for first derivatives of the pressures, are determined by a state observer based on a measured value for the actuation force and on measured values for the pressures in the first and second working chambers.
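For a double-acting cylinder, the net actuation force is the difference of the two chamber pressures acting on their piston areas. The following sketch replaces the patent's model-based optimization with a closed-form stand-in that picks chamber pressures realizing a desired force; the areas and mean pressure level are assumed values:

```python
# Hypothetical two-chamber model: net force F = p1*A1 - p2*A2.
A1, A2 = 2.0e-3, 1.8e-3  # piston areas in m^2 (assumed values)


def desired_pressures(F_des: float, p_mean: float = 3.0e5):
    """Choose chamber pressures around a mean level that realize the
    desired actuation force: solve p1*A1 - p2*A2 = F_des subject to
    p1 + p2 = 2*p_mean (a stand-in for the optimization step)."""
    p2 = (2 * p_mean * A1 - F_des) / (A1 + A2)
    p1 = 2 * p_mean - p2
    return p1, p2


p1, p2 = desired_pressures(200.0)   # request 200 N of actuation force
force = p1 * A1 - p2 * A2           # force the model predicts
```

In the patented method these setpoints would instead come from an optimization over a dynamic model, with a state observer supplying estimated pressures and their derivatives from measurements.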
ENHANCED ROBOTIC CAMERA CONTROL
This disclosure describes systems, methods, and devices related to robot camera control. A robotic device may receive a user input to control a camera operatively connected to the robot device; identify a live-motion filter applied to the camera; identify a filter setpoint associated with the live-motion filter; generate filtered position control data for the camera based on the user input, the live-motion filter, and the filter setpoint; generate joint data for the robot device based on the filtered position control data; and cause the camera to move according to the joint data.
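One plausible form of a live-motion filter is exponential smoothing of the user's position input before joint data is generated. This sketch is an assumption about what such a filter could look like; `alpha` stands in for the filter setpoint controlling responsiveness:

```python
def filtered_position(user_target, current, alpha=0.2):
    """Live-motion filter sketch: ease the camera toward the user's
    commanded position instead of jumping to it; alpha plays the role
    of the filter setpoint (0 = frozen, 1 = unfiltered)."""
    return [c + alpha * (t - c) for t, c in zip(user_target, current)]


pos = [0.0, 0.0, 0.5]
for _ in range(3):  # three control ticks toward the user's target
    pos = filtered_position([0.1, 0.0, 0.5], pos)
# pos now lies part-way toward the target; downstream, inverse
# kinematics would turn this filtered position into joint data.
```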
METHOD, COMPUTER PROGRAM PRODUCT AND ROBOT CONTROLLER FOR CONFIGURING A ROBOT-OBJECT SYSTEM ENVIRONMENT, AND ROBOT
In order to automatically eliminate discrepancies, arising in the course of configuring a robot-object system environment, between the reality of the robot-object system environment and its digital representation as a CAD model, without manual on-site commissioning of the robot-object system environment and adaptation of the CAD model to reality, the following is proposed for configuring a robot-object system environment having at least one object and a robot for object manipulation and object sensing: synchronizing, in one or two stages, a digital robot twin that digitally represents the robot-object system environment and controls the robot for object manipulation on the basis of a control program, for expedient use of the robot in the robot-object system environment during object manipulation.
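The synchronization idea can be illustrated with a one-stage sketch in which object poses sensed by the robot overwrite stale CAD poses in the digital twin. The data layout, tolerance, and update rule are all hypothetical:

```python
def synchronize_twin(cad_poses, sensed_poses, tol=0.01):
    """One-stage sync sketch: replace any CAD object pose that deviates
    from the robot-sensed pose beyond a tolerance, so the digital twin
    matches the real robot-object system environment."""
    updated = {}
    for name, cad in cad_poses.items():
        sensed = sensed_poses.get(name, cad)
        drift = max(abs(c - s) for c, s in zip(cad, sensed))
        updated[name] = sensed if drift > tol else cad
    return updated


# The CAD model places the bolt 3 cm away from where the robot senses it;
# synchronization adopts the sensed pose without manual commissioning.
twin = synchronize_twin({"bolt": (0.10, 0.20, 0.0)},
                        {"bolt": (0.13, 0.20, 0.0)})
```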
CONTROL DEVICE, CONTROL SYSTEM, ROBOT SYSTEM, AND CONTROL METHOD
A control device includes: first circuitry that generates a command to cause a robot to autonomously grind a grinding target portion; second circuitry that generates a command to cause the robot to grind a grinding target portion according to manipulation information from an operation device; third circuitry that controls operation of the robot according to the command; storage that stores image data of a grinding target portion and operation data of the robot corresponding to the command; and fourth circuitry that performs machine learning by using image data of a grinding target portion and the operation data for the grinding target portion, receives the image data as input data, and outputs an operation correspondence command corresponding to the operation data as output data. The first circuitry generates the command, based on the operation correspondence command.
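The learned mapping from grinding-target image data to an operation-correspondence command can be illustrated without a trained model by a nearest-neighbour lookup over stored image/operation pairs. The feature vectors and operation parameters are invented placeholders:

```python
import math

# Stored (image features, operation data) pairs from manual grinding
# sessions: hypothetical 2-D features and feed/force parameters.
stored = [
    ((0.8, 0.1), {"feed": 0.002, "force": 5.0}),   # rough burr
    ((0.2, 0.7), {"feed": 0.001, "force": 2.0}),   # light finish
]


def operation_command(image_features):
    """Return the operation data of the closest stored example, as a
    stand-in for the machine-learned operation correspondence command."""
    _, op = min(stored, key=lambda pair: math.dist(image_features, pair[0]))
    return op


# New image data resembling the "rough burr" example selects its
# operation data, from which the autonomous command would be generated.
cmd = operation_command((0.75, 0.15))
```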
AUTONOMOUS AND TELEOPERATED SENSOR POINTING ON A MOBILE ROBOT
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations. The operations include receiving a sensor pointing command that commands the robot to use a sensor to capture sensor data of a location in an environment of the robot. The sensor is disposed on the robot. The operations include determining, based on an orientation of the sensor relative to the location, a direction for pointing the sensor toward the location, and an alignment pose of the robot to cause the sensor to point in the direction toward the location. The operations include commanding the robot to move from a current pose to the alignment pose. After the robot moves to the alignment pose and the sensor is pointing in the direction toward the location, the operations include commanding the sensor to capture the sensor data of the location in the environment.
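The direction computation and a planar approximation of the alignment pose can be sketched as below. The yaw-only alignment is an illustrative simplification; the patent's alignment pose covers the robot's full body pose:

```python
import math


def pointing_direction(sensor_pos, target_pos):
    """Unit vector from the sensor toward the target location."""
    d = [t - s for s, t in zip(sensor_pos, target_pos)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]


def alignment_yaw(direction):
    """Body yaw that faces the sensor along the computed direction
    (planar approximation of the alignment pose)."""
    return math.atan2(direction[1], direction[0])


# Sensor at the robot's body height, target 2 m away on the diagonal.
d = pointing_direction([0.0, 0.0, 0.5], [2.0, 2.0, 0.5])
yaw = alignment_yaw(d)
# The robot would move to this yaw, then capture sensor data of the
# location once the sensor points along d.
```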