Patent classifications
G05B2219/40116
MOTION TRAJECTORY PLANNING METHOD FOR ROBOTIC MANIPULATOR, ROBOTIC MANIPULATOR AND COMPUTER-READABLE STORAGE MEDIUM
A motion trajectory planning method for a robotic manipulator having a visual inspection system includes: in response to a command instruction, obtaining environmental data collected by the visual inspection system; determining an initial DS model motion trajectory of the robotic manipulator according to the command instruction, the environmental data, and a preset teaching motion DS model library, wherein the teaching motion DS model library includes at least one DS model motion trajectory generated based on human teaching activities; and, at least based on a result of determining whether a first object included in the environmental data is an obstacle whose pose lies on the initial DS model motion trajectory, correcting the initial DS model motion trajectory to obtain a desired motion trajectory of the robotic manipulator.
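The obstacle-aware correction described above could be sketched, under heavy simplification, as a point-attractor dynamical system (DS) with an added repulsive term when an obstacle lies near the path. The gain, radius, and repulsion strength below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ds_step(x, target, gain=1.0):
    # Linear dynamical-system (DS) motion model: velocity points at the target.
    return gain * (target - x)

def plan_trajectory(start, target, obstacle=None, obstacle_radius=0.3,
                    dt=0.01, steps=1000):
    """Integrate the DS from start; when an obstacle lies on (near) the
    current path, add a repulsive correction to obtain the desired trajectory."""
    x = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        v = ds_step(x, target)
        if obstacle is not None:
            d = x - np.asarray(obstacle, dtype=float)
            dist = np.linalg.norm(d)
            if 1e-9 < dist < obstacle_radius:
                # Push the velocity away from the obstacle, scaled by proximity.
                v += 5.0 * (obstacle_radius - dist) / obstacle_radius * d / dist
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj)
```

Calling `plan_trajectory` with and without an obstacle yields, respectively, a corrected trajectory that detours around the obstacle and the uncorrected initial DS trajectory.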
Robot and method of recognizing mood using the same
A robot includes an output unit including at least one of a display or a speaker, a camera, and a processor configured to control the output unit to output content, to acquire an image including a plurality of users through the camera while the content is output, to determine a mood of a group including the plurality of users based on the acquired image, and to control the output unit to output feedback based on the determined mood.
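The aggregation from individual users to a group mood could be as simple as a majority vote over per-user mood labels (the labels and feedback strings below are illustrative assumptions; the patent does not specify the aggregation rule):

```python
from collections import Counter

def group_mood(user_moods):
    """Aggregate individual mood labels (one per detected user) into a single
    group mood by majority vote, falling back to 'neutral' on a tie."""
    counts = Counter(user_moods).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "neutral"
    return counts[0][0]

def feedback_for(mood):
    # Map the determined group mood to feedback on the display/speaker.
    return {"happy": "play upbeat content",
            "sad": "offer comforting message",
            "neutral": "continue current content"}.get(mood, "continue current content")
```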
DEVICE AND METHOD FOR CONTROLLING A ROBOTIC DEVICE
A device and a method for controlling a robotic device, including a control model. The control model includes a robot trajectory model, which for the pickup includes a hidden semi-Markov model with one or multiple initial states, a precondition model, which for each initial state of the robot trajectory model includes a probability distribution of robot configurations before the pickup is carried out, and an object pickup model, which for a depth image outputs a plurality of pickup robot configurations having a respective associated probability of success.
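The selection step implied above — combining the pickup model's success probabilities with the precondition model's distributions over robot configurations — could be sketched as scoring each candidate pickup configuration by its success probability weighted by its fit to the best-matching initial state's precondition (1-D Gaussians stand in for the precondition distributions; all values are illustrative):

```python
import math

def gaussian_logpdf(x, mean, var):
    # Log-density of a 1-D Gaussian; stands in for the precondition model's
    # probability distribution of robot configurations before the pickup.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def select_pickup(candidates, preconditions):
    """candidates: (config, success_prob) pairs from the object pickup model.
    preconditions: (mean, var) per initial state of the robot trajectory model.
    Pick the candidate maximizing log success probability plus the best
    precondition log-likelihood."""
    best, best_score = None, -math.inf
    for config, p_success in candidates:
        fit = max(gaussian_logpdf(config, m, v) for m, v in preconditions)
        score = math.log(max(p_success, 1e-12)) + fit
        if score > best_score:
            best, best_score = config, score
    return best
```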
Transformer-Based Meta-Imitation Learning Of Robots
A training system for a robot includes: a model having a transformer architecture and configured to determine how to actuate at least one of arms and an end effector of the robot; a training dataset including sets of demonstrations for the robot to perform training tasks, respectively; and a training module configured to: meta-train a policy of the model using first ones of the sets of demonstrations for first ones of the training tasks, respectively; and optimize the policy of the model using second ones of the sets of demonstrations for second ones of the training tasks, respectively, where the sets of demonstrations for the training tasks each include more than one demonstration and less than a first predetermined number of demonstrations.
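The two-stage scheme above (meta-train on some tasks' demonstration sets, then optimize on others, each set holding between one and a predetermined number of demonstrations) can be sketched with a Reptile-style meta-update on a scalar linear policy — an assumption of this sketch; the patent specifies a transformer policy:

```python
def bc_loss_grad(w, demos):
    # Behavior-cloning loss gradient for a linear policy a = w * s
    # over (state, action) demonstration pairs.
    return sum(2 * (w * s - a) * s for s, a in demos) / len(demos)

def meta_train(task_demos, inner_lr=0.1, meta_lr=0.5, inner_steps=5, epochs=20):
    """Reptile-style meta-training: adapt a copy of the policy to each task's
    demonstrations, then move the meta-parameters toward the adapted ones."""
    w = 0.0
    for _ in range(epochs):
        for demos in task_demos:
            w_task = w
            for _ in range(inner_steps):
                w_task -= inner_lr * bc_loss_grad(w_task, demos)
            w += meta_lr * (w_task - w)
    return w

def check_demo_counts(task_demos, max_demos):
    # The claim requires more than one and fewer than a predetermined
    # number of demonstrations per training task.
    return all(1 < len(d) < max_demos for d in task_demos)
```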
Robotic kitchen systems and methods with one or more electronic libraries for executing robotic cooking operations
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises: one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment based at least in part on sensor data received from the one or more sensors and on an electronic multi-stage process recipe file, the electronic multi-stage process recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
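The interpreter and execution modules described above can be sketched as a lookup that expands named minimanipulations into primitive timed commands and replays them in order. The library entries and command names are illustrative assumptions, not content of the patent:

```python
# Toy minimanipulation library: each named minimanipulation expands into
# primitive commands (the "machine code" of this sketch).
MINIMANIPULATION_LIBRARY = {
    "grasp_spoon": ["open_gripper", "move_to:spoon", "close_gripper"],
    "stir": ["move_to:pot", "rotate_wrist:360"],
}

def interpret(recipe_steps):
    """Interpreter module: expand each (minimanipulation, start_time) step of
    the multi-stage recipe file into timed primitive commands."""
    program = []
    for name, start_time in recipe_steps:
        for cmd in MINIMANIPULATION_LIBRARY[name]:
            program.append((start_time, cmd))
    return program

def execute(program, robot_log):
    """Execution module: run commands in time order (Python's sort is stable,
    so command order within one step is preserved), appending to the log."""
    for t, cmd in sorted(program, key=lambda step: step[0]):
        robot_log.append(f"{t}:{cmd}")
    return robot_log
```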
METHOD, SYSTEM AND NONVOLATILE STORAGE MEDIUM
Disclosed herein is a method, system, and non-volatile storage medium for simplifying the automation of a process flow. The method may include determining a machine-independent process model based on data representing a handling of a work tool for performing a process flow. The process flow may include a plurality of sub-processes, and the process model may link a process activity with spatial information for each sub-process. The method may also include mapping the machine-independent process model to a machine-specific control model of a machine using a model of the machine. The machine-specific control model may define an operating point of the machine for each sub-process, and the operating point may correspond to the process activity and to the spatial information.
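The mapping step above can be sketched with a toy machine model: planar two-link inverse kinematics that turns each sub-process's spatial information into a machine-specific operating point (joint angles). The link lengths and activity names are assumptions of this sketch:

```python
import math

# Machine-independent process model: each sub-process links a process
# activity to spatial information (here a 2-D point in the workcell).
process_model = [
    ("pick", (0.4, 0.2)),
    ("place", (0.1, 0.5)),
]

def machine_model(xy, l1=0.5, l2=0.5):
    """Toy machine model: 2-link planar inverse kinematics mapping a spatial
    point to joint angles (the machine's operating point)."""
    x, y = xy
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))          # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return (q1, q2)

def map_to_control_model(process_model, machine_model):
    # Mapping step: one operating point per sub-process, paired with its activity.
    return [(activity, machine_model(xy)) for activity, xy in process_model]
```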
Verbal-based focus-of-attention task model encoder
Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input, parses the input to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
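The two stages described above — parsing the input for a task and target object name, then spatio-temporally filtering the demonstration — could be sketched as follows (keyword matching and a distance threshold stand in for the FOA encoder; all names and the radius are illustrative):

```python
def parse_instruction(text, known_tasks, known_objects):
    """Parse a verbal/textual cue to recognize the task and the target
    object name (a keyword match stands in for the learned encoder)."""
    words = text.lower().split()
    task = next((t for t in known_tasks if t in words), None)
    obj = next((o for o in known_objects if o in words), None)
    return task, obj

def focus_of_attention(frames, target, radius=0.5):
    """Spatio-temporally filter a demonstration: keep only the timestamps of
    frames where the demonstrator's hand is within `radius` of the target,
    telling the robot 'where and when' to pay attention."""
    kept = []
    for t, hand, objects in frames:   # objects: name -> (x, y) position
        if target in objects:
            ox, oy = objects[target]
            hx, hy = hand
            if (hx - ox) ** 2 + (hy - oy) ** 2 <= radius ** 2:
                kept.append(t)
    return kept
```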
Information processing device, robot manipulating system and robot manipulating method
A robot manipulating system includes a game terminal having a game computer, a game controller, and a display configured to display a virtual space; a robot configured to perform work in a real space based on robot control data; and an information processing device configured to mediate between the game terminal and the robot. The information processing device supplies game data associated with the content of the work to the game terminal, acquires game manipulation data including a history of manipulation inputs accepted by the game controller while a game program in which the game data is reflected is executed, converts the game manipulation data into the robot control data based on a given conversion rule, and supplies the robot control data to the robot.
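The "given conversion rule" could be as simple as a table from controller inputs to robot commands, applied over the recorded manipulation history. The input names, command tuples, and the choice to drop unmapped inputs are all assumptions of this sketch:

```python
# Given conversion rule: map game-controller inputs recorded during play
# to robot control commands (illustrative mappings).
CONVERSION_RULE = {
    "stick_up": ("move", (0.0, 0.05)),
    "stick_down": ("move", (0.0, -0.05)),
    "button_a": ("gripper", "close"),
    "button_b": ("gripper", "open"),
}

def convert(manipulation_history):
    """Information processing device: convert the history of accepted game
    manipulations into robot control data, dropping unmapped inputs."""
    return [CONVERSION_RULE[e] for e in manipulation_history if e in CONVERSION_RULE]
```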
Method for controlling a robot device and robot device controller
A method for controlling a robot device. The method includes: performing an initial training of an actor neural network by imitation learning of demonstrations; controlling the robot device by the initially trained actor neural network to generate multiple trajectories, wherein each trajectory comprises a sequence of actions selected by the initially trained actor neural network in a sequence of states, and observing the return for each of the selected actions; performing an initial training of a critic neural network by supervised learning, wherein the critic neural network is trained to predict the observed returns of the actions selected by the initially trained actor neural network; training the actor neural network and the critic neural network by reinforcement learning, starting from the initially trained actor neural network and the initially trained critic neural network; and controlling the robot device by the trained actor neural network and the trained critic neural network.
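The staged pipeline above — imitation pretraining of the actor, supervised pretraining of the critic on observed returns, then critic-guided improvement — can be sketched on a one-dimensional toy problem. The scalar actor, quadratic critic, and the reward's optimum are all assumptions of this sketch, not the patent's neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(a):
    # Unknown environment return; the optimum at a = 2 is an assumption.
    return -(a - 2.0) ** 2

# Stage 1: initial training of the actor by imitation of demonstrations
# (least-squares behavior cloning of suboptimal demonstrated actions).
demos = [1.4, 1.5, 1.6]
actor = float(np.mean(demos))

# Stage 2: control with the initially trained actor; record the selected
# (exploratory) actions and observe the return for each of them.
actions = actor + 0.5 * rng.standard_normal(200)
returns = reward(actions)

# Stage 3: initial supervised training of the critic on the observed
# returns (a quadratic critic fitted by least squares).
c2, c1, c0 = np.polyfit(actions, returns, 2)

# Stage 4: improve the actor by ascending the critic's gradient (a
# stand-in for the full reinforcement-learning stage).
for _ in range(100):
    actor += 0.05 * (c1 + 2 * c2 * actor)
```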
SKILL TEMPLATES FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using skill templates for robotic demonstration learning. One of the methods includes receiving a skill template for a task to be performed by a robot, wherein the skill template defines a state machine having a plurality of subtasks and one or more respective transition conditions between one or more of the subtasks. Local demonstration data for a demonstration subtask of the skill template is received, where the local demonstration data is generated from a user demonstrating how to perform the demonstration subtask with the robot. A machine learning model is refined for the demonstration subtask and the skill template is executed on the robot, causing the robot to transition through the state machine defined by the skill template to perform the task.
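The skill template's state machine of subtasks and transition conditions can be sketched directly; the subtask names and conditions below are illustrative assumptions:

```python
# A skill template: a state machine of subtasks with transition conditions
# evaluated against the robot's current context.
SKILL_TEMPLATE = {
    "approach": [("grasp", lambda ctx: ctx["near_object"])],
    "grasp": [("lift", lambda ctx: ctx["gripper_closed"]),
              ("approach", lambda ctx: not ctx["near_object"])],
    "lift": [],                       # terminal subtask
}

def execute_skill(template, start, ctx, max_steps=10):
    """Run the robot through the template's state machine: at each subtask,
    take the first transition whose condition holds; stop when none does."""
    state, visited = start, [start]
    for _ in range(max_steps):
        nxt = next((s for s, cond in template[state] if cond(ctx)), None)
        if nxt is None:
            break
        state = nxt
        visited.append(state)
    return visited
```

In a full system, one subtask (e.g. `grasp`) would be the demonstration subtask whose machine learning model is refined from the user's local demonstration data before the template is executed.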