Patent classifications
B25J9/163
Tactile information estimation apparatus, tactile information estimation method, and program
According to some embodiments, a tactile information estimation apparatus may include one or more memories and one or more processors. The one or more processors are configured to input, to a model, at least first visual information of an object acquired by a visual sensor. The model is generated based on visual information and tactile information linked to the visual information. The one or more processors are configured to extract, based on the model, a feature amount relating to tactile information of the object.
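A minimal sketch of the extraction step described above, with the trained model stood in by a random linear map (the class name, dimensions, and weights are illustrative assumptions, not taken from the patent; a real model would be trained on paired visual/tactile data):

```python
import numpy as np

rng = np.random.default_rng(0)

class TactileEstimator:
    """Toy stand-in for the trained model: a linear map from a visual
    feature vector to a tactile feature amount. The weights here are
    random; in the patent they come from training on visual information
    linked to tactile information."""
    def __init__(self, vis_dim=8, tac_dim=3):
        self.W = rng.normal(size=(tac_dim, vis_dim))

    def extract(self, visual_feat):
        # Extract a feature amount relating to tactile information.
        return self.W @ visual_feat

est = TactileEstimator()
tactile_feat = est.extract(np.ones(8))   # 8-dim visual feature in
```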
Generation method for training dataset, model generation method, training data generation apparatus, inference apparatus, robotic controller, model training method and robot
One aspect of the present disclosure relates to a generation method for a training dataset, comprising: capturing, by one or more processors, a target object to which a marker unit recognizable under a first illumination condition is provided; and acquiring, by the one or more processors, a first image where the marker unit is recognizable and a second image obtained by capturing the target object under a second illumination condition.
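The two-illumination idea above can be sketched as automatic label generation: the marker-visible capture yields a segmentation label, the normally lit capture is the training input. Everything here (the threshold value, image sizes, field layout) is an illustrative assumption:

```python
import numpy as np

def make_training_pair(img_marker, img_normal, thresh=200):
    """Derive a label mask from the image captured under the first
    illumination condition (marker visible), and pair it with the image
    captured under the second illumination condition as training input.
    The threshold is a hypothetical tuning value."""
    # Marker pixels are bright in at least one channel under the
    # first illumination condition.
    mask = (img_marker.max(axis=-1) >= thresh).astype(np.uint8)
    return img_normal, mask

# Toy 4x4 RGB frames standing in for the two captures.
uv = np.zeros((4, 4, 3), dtype=np.uint8)
uv[1:3, 1:3] = 255                       # glowing marker region
normal = np.full((4, 4, 3), 80, dtype=np.uint8)
image, label = make_training_pair(uv, normal)
```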
Industrial robotics systems and methods for continuous and automated learning
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may maintain a first dataset configured to select pick points for objects. The apparatus may receive, from a user device, a user dataset including a user-selected pick point associated with at least one first object and a first image of the at least one first object. The apparatus may generate a second dataset based at least in part on the first dataset and the user dataset. The apparatus may receive a second image of a second object. The apparatus may select a pick point for the second object using the second dataset and the second image. The apparatus may send information associated with the selected pick point to a robotics device for picking up the second object.
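The merge-then-select flow above can be sketched with a nearest-neighbour lookup standing in for the learned selector (the dictionary keys, feature vectors, and pick-point coordinates are all illustrative assumptions):

```python
import numpy as np

def select_pick_point(dataset, image_feat):
    """Return the stored pick point whose image feature is closest to
    the new image's feature -- a nearest-neighbour stand-in for the
    dataset-driven selection in the abstract."""
    feats = np.array([d["feat"] for d in dataset])
    dists = np.linalg.norm(feats - image_feat, axis=1)
    return dataset[int(np.argmin(dists))]["pick_point"]

# First dataset plus user-provided pick points -> second dataset.
first_dataset = [{"feat": np.array([0.0, 0.0]), "pick_point": (10, 10)}]
user_dataset  = [{"feat": np.array([1.0, 1.0]), "pick_point": (42, 7)}]
second_dataset = first_dataset + user_dataset

# Select a pick point for a second object from its image feature.
pick = select_pick_point(second_dataset, np.array([0.9, 1.1]))
```

In a real system the "merge" would be retraining or fine-tuning a model on the combined data rather than list concatenation.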
Method and device for training manipulation skills of a robot system
A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model, the method comprising: receiving, as physical input from a human trainer, a set of kinesthetic demonstrations of the skill to be learned by the robot; determining, for the skill model, a set of task parameters, including determining, for each task parameter, whether it is an attached task parameter, which is related to an object that is part of the kinesthetic demonstrations, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set of task parameters from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.
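The attached/free distinction can be captured by a small data structure; the field and parameter names below are illustrative, not from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskParameter:
    """A task parameter of a skill model. If it references an object
    from the kinesthetic demonstrations it is 'attached'; otherwise it
    is a free task parameter."""
    name: str
    attached_object: Optional[str] = None   # None => free task parameter

    @property
    def is_attached(self) -> bool:
        return self.attached_object is not None

# Hypothetical parameters for a pouring skill.
params = [TaskParameter("grasp_frame", attached_object="cup"),
          TaskParameter("via_height")]
attached = [p.name for p in params if p.is_attached]
```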
Artificial intelligence system for efficiently learning robotic control policies
A machine learning system builds and uses control policies for controlling robotic performance of a task. Such control policies may be trained using targeted updates. For example, two trials identified as similar may be compared and evaluated to determine which trial achieved a greater degree of task success; a control policy update may then be generated based on identified differences between the two trials.
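A toy sketch of the targeted-update idea: compare two similar trials, identify which achieved greater task success, and nudge the policy toward the better trial's actions. The patented system is certainly richer than this; scoring, the update rule, and the learning rate here are illustrative assumptions:

```python
import numpy as np

def targeted_update(policy, trial_a, trial_b, lr=0.1):
    """Compare two similar trials and generate a policy update from
    the identified differences between them, favouring the trial with
    the greater degree of task success."""
    better = trial_a if trial_a["score"] >= trial_b["score"] else trial_b
    worse = trial_b if better is trial_a else trial_a
    diff = better["actions"] - worse["actions"]   # identified differences
    return policy + lr * diff

policy = np.zeros(3)
a = {"actions": np.array([1.0, 0.0, -1.0]), "score": 0.9}
b = {"actions": np.array([0.5, 0.2, -0.8]), "score": 0.4}
new_policy = targeted_update(policy, a, b)
```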
Mitigating reality gap through optimization of simulated hardware parameter(s) of simulated robot
Mitigating the reality gap through optimization of one or more simulated hardware parameters for simulated hardware components of a simulated robot. Implementations generate and store real navigation data instances that are each based on a corresponding episode of locomotion of a real robot. A real navigation data instance can include a sequence of velocity control instances generated to control a real robot during a real episode of locomotion of the real robot, and one or more ground truth values, where each of the ground truth values is a measured value of a corresponding property of the real robot (e.g., pose). The velocity control instances can be applied to a simulated robot, and one or more losses can be generated based on comparing the ground truth value(s) to corresponding simulated value(s) generated from applying the velocity control instances to the simulated robot. The simulated hardware parameters and environmental parameters can be optimized based on the loss(es).
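The core loop above — replay recorded velocity controls in simulation, compare the simulated outcome to a measured ground-truth value, and optimize a simulated hardware parameter against the loss — can be sketched in one dimension. The wheel-gain parameter, the simple kinematics, and the finite-difference optimizer are all illustrative assumptions:

```python
def simulate_pose(velocities, wheel_gain, dt=0.1):
    """1-D simulated robot: pose advances by gain-scaled velocity.
    wheel_gain is the simulated hardware parameter to optimize."""
    pose = 0.0
    for v in velocities:
        pose += wheel_gain * v * dt
    return pose

def optimize_gain(velocities, real_pose, gain=1.0, lr=0.5, steps=200):
    """Finite-difference gradient descent on the squared error between
    the simulated pose and the ground-truth pose measured on the real
    robot during the episode of locomotion."""
    eps = 1e-4
    for _ in range(steps):
        loss = (simulate_pose(velocities, gain) - real_pose) ** 2
        loss_eps = (simulate_pose(velocities, gain + eps) - real_pose) ** 2
        grad = (loss_eps - loss) / eps
        gain -= lr * grad
    return gain

vels = [1.0, 1.0, 1.0]    # recorded velocity control instances
real_pose = 0.27          # measured ground-truth pose of the real robot
gain = optimize_gain(vels, real_pose)   # converges near 0.9
```

With a perfect model, the optimized gain satisfies `simulate_pose(vels, gain) == real_pose`, i.e. the simulated hardware parameter absorbs the reality gap.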
Distributed autonomous robot interfacing systems and methods
Described in detail herein is an automated fulfilment system including a computing system programmed to receive requests from disparate sources for physical objects disposed at one or more locations in a facility. The computing system can combine the requests, and group the physical objects in the requests based on object types or expected object locations. Autonomous robot devices can receive instructions from the computing system to retrieve a group of the physical objects and deposit the physical objects in storage containers.
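The combine-and-group step can be sketched as a simple aggregation; the request fields and object types below are illustrative assumptions:

```python
from collections import defaultdict

def group_requests(requests):
    """Combine requests from disparate sources and group the requested
    physical objects by object type, producing one retrieval group per
    type for the autonomous robot devices."""
    groups = defaultdict(list)
    for req in requests:
        for obj in req["objects"]:
            groups[obj["type"]].append(obj["id"])
    return dict(groups)

reqs = [
    {"source": "web",   "objects": [{"id": "A1", "type": "box"}]},
    {"source": "store", "objects": [{"id": "B2", "type": "box"},
                                    {"id": "C3", "type": "crate"}]},
]
groups = group_requests(reqs)
# e.g. one group of boxes and one group of crates
```

A production system would group by expected object location as well, per the abstract.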
Robot control system and robot control method
A robot control system includes: a state candidate generation unit that generates a state candidate that is a state transition destination of the robot at a next time; a control amount estimation unit that estimates a control amount for transitioning to the state candidate; a state candidate evaluation unit that calculates a distance between a target state of the robot and the state candidate, calculates a coincidence degree between (i) a state at the next time estimated from a state at a current time of the robot and the control amount and (ii) the state candidate, and sets the sum of the distance and the coincidence degree as an evaluation value; and a selection unit that selects the state candidate with the minimum evaluation value from the state candidates and generates a motion corresponding to the selected state candidate.
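A sketch of the evaluation-and-selection step. Since the sum is minimized, the "coincidence degree" is implemented here as a discrepancy between the predicted next state and the candidate — an interpretation, as are all the numbers below:

```python
import math

def evaluate(candidate, target, predicted_next):
    """Evaluation value = distance(target, candidate)
                        + discrepancy(predicted next state, candidate).
    Interpreting 'coincidence degree' as a distance so that a smaller
    sum is better, matching the minimum-evaluation-value selection."""
    dist = math.dist(target, candidate)
    coincidence = math.dist(predicted_next, candidate)
    return dist + coincidence

candidates = [(0.1, 0.0), (0.9, 0.9), (2.0, 2.0)]
target = (1.0, 1.0)
predicted_next = (0.0, 0.0)   # from current state + control amount
best = min(candidates, key=lambda c: evaluate(c, target, predicted_next))
```

The selected candidate balances closeness to the target against reachability from the current state under the estimated control amount.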
Initial reference generation for robot optimization motion planning
A robot optimization motion planning technique using a refined initial reference path. When a new path is to be computed using motion optimization, a candidate reference path is selected from storage which was previously computed and which has similar start and goal points and collision avoidance environment constraints to the new path. The candidate reference path is adjusted at all state points along its length to account for the difference between the start and goal points of the new path compared to those of the previously-computed path, to create the initial reference path. The initial reference path, adjusted to fit the start and goal points, is then used as a starting state for the motion optimization computation. By using an initial reference path which is similar to the final converged new path, the optimization computation converges more quickly than if a naïve initial reference path is used.
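The endpoint adjustment described above can be sketched by blending the start and goal offsets linearly along the stored path (the linear blend is an illustrative warping scheme; the patent only says all state points are adjusted):

```python
import numpy as np

def adjust_reference(path, new_start, new_goal):
    """Shift every state point of a previously computed reference path
    so its endpoints match the new start and goal. Offsets are blended
    linearly from start to goal along the path."""
    path = np.asarray(path, dtype=float)
    start_off = np.asarray(new_start, dtype=float) - path[0]
    goal_off = np.asarray(new_goal, dtype=float) - path[-1]
    t = np.linspace(0.0, 1.0, len(path))[:, None]
    return path + (1 - t) * start_off + t * goal_off

stored = [[0, 0], [1, 1], [2, 0]]        # candidate reference path
init_ref = adjust_reference(stored, [0, 1], [2, 1])
```

The result would seed the motion-optimization solver, which then converges faster than from a naïve (e.g. straight-line) initial reference.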
Robotic end effector interface systems
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform, comprises one or more sensors; a mechanical robotic structure including one or more end effectors, and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment based at least in part on the sensor data received from the one or more sensors in an electronic multi-stage process file, the electronic multi-stage process recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting to a machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.