G05B2219/39298

Automatic robot perception programming by imitation learning

Apparatus, systems, methods, and articles of manufacture for automatic robot perception programming by imitation learning are disclosed. An example apparatus includes a percept mapper to identify a first percept and a second percept from data gathered from a demonstration of a task and an entropy encoder to calculate a first saliency of the first percept and a second saliency of the second percept. The example apparatus also includes a trajectory mapper to map a trajectory based on the first percept and the second percept, the first percept skewed based on the first saliency, the second percept skewed based on the second saliency. In addition, the example apparatus includes a probabilistic encoder to determine a plurality of variations of the trajectory and create a collection of trajectories including the trajectory and the variations of the trajectory. The example apparatus also includes an assemble network to imitate an action based on a first simulated signal from a first neural network of a first modality and a second simulated signal from a second neural network of a second modality, the action representative of a perceptual skill.
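As an illustrative sketch of the entropy-based saliency idea in this abstract (not the patented implementation; the histogram entropy estimator, the weighted blend, and all names are assumptions), a percept can be scored by the Shannon entropy of its sampled values, and the trajectory mapped from two percepts can skew each one by its saliency:

```python
import math

def saliency(percept_samples, bins=8):
    """Score a percept by the Shannon entropy of its sampled values
    (simple histogram estimator; higher entropy = more salient here)."""
    lo, hi = min(percept_samples), max(percept_samples)
    width = (hi - lo) / bins or 1.0          # constant signal -> single bin
    counts = [0] * bins
    for v in percept_samples:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(percept_samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def skewed_trajectory(waypoints_a, waypoints_b, sal_a, sal_b):
    """Map one trajectory from two percept-derived waypoint streams,
    each skewed (weighted) by its saliency. Assumes sal_a + sal_b > 0."""
    wa, wb = sal_a / (sal_a + sal_b), sal_b / (sal_a + sal_b)
    return [wa * a + wb * b for a, b in zip(waypoints_a, waypoints_b)]
```

A constant percept scores zero entropy and so contributes nothing to the blended trajectory, which matches the intuition that an uninformative signal from the demonstration should not steer imitation.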

Mitigating reality gap through optimization of simulated hardware parameter(s) of simulated robot

Mitigating the reality gap through optimization of one or more simulated hardware parameters for simulated hardware components of a simulated robot. Implementations generate and store real navigation data instances that are each based on a corresponding episode of locomotion of a real robot. A real navigation data instance can include a sequence of velocity control instances generated to control a real robot during a real episode of locomotion of the real robot, and one or more ground truth values, where each of the ground truth values is a measured value of a corresponding property of the real robot (e.g., pose). The velocity control instances can be applied to a simulated robot, and one or more losses can be generated based on comparing the ground truth value(s) to corresponding simulated value(s) generated from applying the velocity control instances to the simulated robot. The simulated hardware parameters and environmental parameters can be optimized based on the loss(es).
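The replay-and-optimize loop described above can be sketched as follows (a toy one-dimensional simulator with a hypothetical `wheel_scale` hardware parameter and a plain random-search optimizer, not the patent's method): recorded velocity commands are applied to the simulated robot and the parameter is adjusted to minimize the loss against the measured ground-truth pose.

```python
import random

def simulate_pose(params, velocity_cmds):
    """Toy 1-D simulated robot: integrates velocity commands, with a
    hypothetical wheel-scale parameter affecting distance per command."""
    pose = 0.0
    for v in velocity_cmds:
        pose += v * params["wheel_scale"]
    return pose

def optimize_sim_params(real_episodes, iters=200, seed=0):
    """Random-search optimization of a simulated hardware parameter so the
    simulated final pose matches the real robot's ground-truth pose.
    Each episode is (velocity command sequence, measured final pose)."""
    rng = random.Random(seed)
    best = {"wheel_scale": 1.0}
    best_loss = sum((simulate_pose(best, cmds) - gt) ** 2
                    for cmds, gt in real_episodes)
    for _ in range(iters):
        cand = {"wheel_scale": best["wheel_scale"] + rng.gauss(0.0, 0.1)}
        loss = sum((simulate_pose(cand, cmds) - gt) ** 2
                   for cmds, gt in real_episodes)
        if loss < best_loss:          # keep the parameter set with lower loss
            best, best_loss = cand, loss
    return best, best_loss
```

For a real robot whose effective wheel scale is 0.8, a single episode such as `([1.0, 1.0, 1.0, 1.0], 3.2)` drives the optimized `wheel_scale` toward 0.8.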

Operation prediction system and operation prediction method
11701772 · 2023-07-18

The automatic operation system includes a plurality of learned imitation models and a model selecting unit. The learned imitation models are constructed by machine learning of operation history data, which is classified into groups by an automatic classification algorithm, the operation history data of each group being learned by the imitation model corresponding to that group. The operation history data include data indicating a surrounding environment and data indicating an operation of an operator in that surrounding environment. The model selecting unit selects one imitation model from among the imitation models based on a result of classifying data indicating a given surrounding environment with the same automatic classification algorithm. The automatic operation system inputs the data indicating the surrounding environment to the imitation model selected by the model selecting unit to predict the operation of the operator with respect to that surrounding environment.
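A minimal sketch of the group-then-select scheme (nearest-centroid classification standing in for the automatic classification algorithm; all class and variable names are hypothetical):

```python
def nearest_centroid(x, centroids):
    """Classify an environment vector into the group whose centroid is
    closest (a stand-in for the automatic classification algorithm)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centroids[i])))

class GroupedImitation:
    """One imitation model per group of operation history data; at prediction
    time the surrounding-environment data is classified and the matching
    model predicts the operator's action."""
    def __init__(self, centroids, models):
        self.centroids = centroids   # one centroid per group of history data
        self.models = models         # one trained (here: callable) model per group

    def predict(self, env):
        group = nearest_centroid(env, self.centroids)
        return self.models[group](env)
```

For example, with a "highway" group centered at `[1, 0]` and an "urban" group at `[0, 1]`, an environment vector near `[1, 0]` routes to the highway group's model.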

Method and apparatus for performing control of a movement of a robot arm
11554486 · 2023-01-17

A method for computing joint torques applied by actuators to perform a control of a movement of a robot arm having several degrees of freedom is provided. The method includes the act of providing, by a trajectory generator, trajectory vectors specifying a desired trajectory of the robot arm for each degree of freedom. The trajectory vectors are mapped to corresponding latent representation vectors that capture inherent properties of the robot arm using basis functions with trained parameters. The latent representation vectors are multiplied with trained core tensors to compute the joint torques for each degree of freedom.
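The pipeline above (trajectory vector → latent representation via basis functions → torques via trained core tensors) might look like this in outline; radial basis functions are used as an example family, and the core tensor is reduced to a matrix in this one-dimensional sketch, so none of this reflects the patent's actual parameterization:

```python
import math

def rbf_latent(trajectory, centers, width=1.0):
    """Map a desired-trajectory vector to a latent representation using
    radial basis functions; the centers and width stand in for the
    trained parameters capturing the arm's inherent properties."""
    return [sum(math.exp(-((t - c) ** 2) / (2.0 * width ** 2)) for t in trajectory)
            for c in centers]

def joint_torques(latent, core):
    """Multiply the latent representation with a trained core tensor
    (a matrix here): one output torque per degree of freedom."""
    return [sum(w * z for w, z in zip(row, latent)) for row in core]
```

A usage example: `joint_torques(rbf_latent([0.0, 1.0], centers=[0.0, 1.0]), core)` yields one torque per row of `core`, i.e. per degree of freedom.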

Vibration display device, operation program creating device, and system
11534912 · 2022-12-27

A vibration display device including a vibration acquisition unit that acquires a vibration state of a distal end section of a robot, in a simulation or in the real world, the distal end section being moved based on an operation program, and a vibration trajectory drawing unit that draws, on a display device, the vibration state along a trajectory of the distal end section of the robot, or draws, on the display device, the vibration state as the trajectory.
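The abstract does not say how the vibration state is obtained; one plausible per-point quantity such a device could draw along the trajectory is the deviation of each sampled distal-end position from a smoothed (moving-average) path, sketched here purely as an assumption:

```python
def vibration_along_trajectory(positions, window=3):
    """Estimate a vibration state at each sampled position of the distal
    end section as its deviation from a moving-average (smoothed)
    trajectory -- one value per trajectory point, ready to be drawn."""
    half = window // 2
    out = []
    for i, p in enumerate(positions):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        mean = sum(positions[lo:hi]) / (hi - lo)
        out.append(abs(p - mean))
    return out
```

A perfectly smooth path yields zeros everywhere, while a jittery path yields positive values that could be color-mapped onto the drawn trajectory.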

LEARNING TO ACQUIRE AND ADAPT CONTACT-RICH MANIPULATION SKILLS WITH MOTION PRIMITIVES
20220402140 · 2022-12-22

A computer-implemented method comprising receiving data representing a successful trajectory for an insertion task in which a robot inserts a connector into a receptacle, and performing a parameter optimization process for the robot to perform the insertion task. The parameter optimization includes defining an objective function that measures the similarity of a current trajectory, generated with a current set of parameters, to the successful trajectory, and repeatedly modifying the current set of parameters and evaluating the modified set according to the objective function until a final set of parameters is generated.

CONTROL DEVICE, CONTROL SYSTEM, ROBOT SYSTEM, AND CONTROL METHOD

A control device includes: first circuitry that generates a command to cause a robot to autonomously grind a grinding target portion; second circuitry that generates a command to cause the robot to grind a grinding target portion according to manipulation information from an operation device; third circuitry that controls operation of the robot according to the command; storage that stores image data of a grinding target portion and operation data of the robot corresponding to the command; and fourth circuitry that performs machine learning by using image data of a grinding target portion and the operation data for the grinding target portion, receives the image data as input data, and outputs an operation correspondence command corresponding to the operation data as output data. The first circuitry generates the command based on the operation correspondence command.
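A minimal stand-in for the learned image-to-command mapping (the abstract does not specify the model; 1-nearest-neighbour over stored pairs of image features and operator commands is used here purely for illustration, and all names are hypothetical):

```python
class GrindingCloner:
    """Stores (image feature, operator command) pairs recorded during manual
    operation, then returns the command whose stored features are nearest
    to the input image features -- a toy operation-correspondence model."""
    def __init__(self):
        self.memory = []   # (feature_vector, command) pairs

    def record(self, features, command):
        self.memory.append((features, command))

    def command_for(self, features):
        _, cmd = min(self.memory,
                     key=lambda fc: sum((a - b) ** 2
                                        for a, b in zip(fc[0], features)))
        return cmd
```

During autonomous grinding, the first circuitry would query such a model with current image data and act on the returned operation-correspondence command.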

POLICY LAYERS FOR MACHINE CONTROL

Apparatuses, systems, and techniques provide a policy that can be executed to cause a machine to move. In at least one embodiment, a first policy layer is provided to cause the machine to execute a first motion that causes the machine to accelerate to reach an unbiased state. A second policy layer is provided to cause the machine to execute a second motion without influencing the unbiased state to be reached by the machine. The policy can comprise the first and second policy layers.
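One way to read the layering (an assumption, not the patent's formulation) is two summed command sources: the first layer drives the machine toward a steady, unbiased state, while the second layer contributes a zero-mean motion that leaves that steady state unchanged:

```python
def unbias_layer(state, target_velocity=1.0, gain=0.5):
    """First policy layer: accelerate toward an unbiased (steady) velocity."""
    return gain * (target_velocity - state["velocity"])

def gesture_layer(t):
    """Second policy layer: a zero-mean motion (toy alternating nudge)
    that does not shift the steady state reached by the first layer."""
    return 0.1 if t % 2 == 0 else -0.1

def policy(state, t):
    """The combined policy: the layered commands are summed."""
    return unbias_layer(state) + gesture_layer(t)
```

At the unbiased state the first layer outputs zero, and the second layer's commands cancel over time, so the layers compose without interference.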

OFFLINE PROGRAMMING DEVICE AND OFFLINE PROGRAMMING METHOD

An offline programming device includes an input unit that receives input of a plurality of teaching points, a creation unit that determines intermediate points located between adjacent teaching points and creates an operation program for the robot, a simulation unit that simulates a movement trajectory of the robot when the operation program is executed, and a display unit that displays a GUI screen representing the movement trajectory. The GUI screen includes a first display area showing a time-series sequence of the plurality of teaching points and a second display area. When an error is detected in the movement trajectory, the section between the teaching points that includes the point in time when the error occurs is displayed in the first display area according to a first error display method.
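The two computable pieces here, determining intermediate points between adjacent teaching points and mapping a detected error back to the bounding teaching-point section, can be sketched as follows (linear interpolation and the index arithmetic are assumptions; the patent does not specify either):

```python
def intermediate_points(teaching_points, per_segment=3):
    """Insert evenly spaced intermediate points between each pair of
    adjacent 2-D teaching points (linear interpolation in this sketch)."""
    path = []
    for (x0, y0), (x1, y1) in zip(teaching_points, teaching_points[1:]):
        for k in range(per_segment):
            t = k / per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(teaching_points[-1])
    return path

def error_section(teaching_points, per_segment, error_index):
    """Map an error detected at one point of the simulated trajectory back
    to the pair of teaching points bounding it, e.g. for highlighting
    that section in the first display area."""
    seg = min(error_index // per_segment, len(teaching_points) - 2)
    return seg, seg + 1
```

For three teaching points and two samples per segment, an error at trajectory index 3 maps to the section between teaching points 1 and 2.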

Generating simulated training examples for training of machine learning model used for robot control
11494632 · 2022-11-08

Implementations are directed to generating simulated training examples for training of a machine learning model, training the machine learning model based at least in part on the simulated training examples, and/or using the trained machine learning model in control of at least one real-world physical robot. Implementations are additionally or alternatively directed to performing one or more iterations of quantifying a “reality gap” for a robotic simulator and adapting parameter(s) for the robotic simulator based on the determined reality gap. The robotic simulator with the adapted parameter(s) can further be utilized to generate simulated training examples when the reality gap of one or more iterations satisfies one or more criteria.
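The iterate-until-the-gap-satisfies-a-criterion loop might look like this in miniature (a toy one-parameter simulator, a mean-absolute-difference gap measure, and a crude final-state matching step, all assumptions rather than the patented method):

```python
def simulate_rollout(param, commands):
    """Toy simulator: integrates commands scaled by one adaptable parameter."""
    pos, out = 0.0, []
    for c in commands:
        pos += param * c
        out.append(pos)
    return out

def reality_gap(sim_rollout, real_rollout):
    """Quantify the reality gap as the mean absolute difference between
    simulated and real rollouts of the same commands."""
    return (sum(abs(s - r) for s, r in zip(sim_rollout, real_rollout))
            / len(real_rollout))

def adapt_until_acceptable(real_rollout, commands, param=1.0, lr=0.5,
                           threshold=0.05, max_iters=50):
    """Repeatedly quantify the gap and adapt the simulator parameter; only
    when the gap satisfies the criterion is the simulator flagged as
    ready to generate simulated training examples. Assumes the commands
    have a nonzero sum."""
    gap = float("inf")
    for _ in range(max_iters):
        sim = simulate_rollout(param, commands)
        gap = reality_gap(sim, real_rollout)
        if gap <= threshold:
            return param, gap, True   # criterion met: trust the simulator
        # crude adaptation step: scale the parameter toward matching the
        # real rollout's final state
        param += lr * (real_rollout[-1] - sim[-1]) / sum(commands)
    return param, gap, False
```

Replaying commands recorded from a "real" robot whose effective parameter is 0.8 pulls the simulator parameter from 1.0 down toward 0.8 until the gap criterion is met.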