Patent classifications
G05B2219/40391
ROBOTIC KITCHEN SYSTEMS AND METHODS IN AN INSTRUMENTED ENVIRONMENT WITH ELECTRONIC COOKING LIBRARIES
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises: one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment, based at least in part on sensor data received from the one or more sensors, of an electronic multi-stage process recipe file, the recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
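The library-interpret-execute pipeline in this abstract can be sketched in a few lines. All names here (`MiniManipulation`, `interpret`, `execute`, the library entries) are illustrative stand-ins, not taken from the patent:

```python
# Hypothetical sketch: a library of named minimanipulation steps is read by
# an interpreter, converted to low-level commands ("machine code"), and
# executed in sequence with its associated timing data.
from dataclasses import dataclass, field

@dataclass
class MiniManipulation:
    name: str                      # e.g. "grasp_whisk"
    duration_s: float              # associated timing data
    parameters: dict = field(default_factory=dict)

# Electronic library database of minimanipulations (toy stand-in).
LIBRARY = {
    "grasp_whisk": MiniManipulation("grasp_whisk", 1.5, {"force_n": 5.0}),
    "stir":        MiniManipulation("stir", 10.0, {"rpm": 60}),
}

def interpret(step_names):
    """Interpreter module: look up each step in the library and convert it
    to simple command tuples standing in for machine code."""
    return [("EXEC", LIBRARY[n].name, LIBRARY[n].duration_s) for n in step_names]

def execute(commands):
    """Execution module: run commands in order; the log stands in for the
    accomplished functional result."""
    return [f"{op} {name} for {dur}s" for op, name, dur in commands]

recipe = ["grasp_whisk", "stir"]           # multi-stage process recipe file
log = execute(interpret(recipe))
```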
Learning from Demonstration for Determining Robot Perception Motion
A method includes determining, for a robotic device that comprises a perception system, a robot planner state representing at least one future path for the robotic device in an environment. The method also includes determining a perception system trajectory by inputting at least the robot planner state into a machine learning model trained based on training data comprising at least a plurality of robot planner states corresponding to a plurality of operator-directed perception system trajectories. The method further includes controlling, by the robotic device, the perception system to move through the determined perception system trajectory.
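The core mapping in this abstract, from a robot planner state to a perception-system trajectory learned from operator demonstrations, can be illustrated with a nearest-neighbor stand-in for the trained model. This is a sketch under that simplifying assumption, not the patent's actual learning method:

```python
# Illustrative sketch: map a planner state (flattened future-path waypoints)
# to a perception-system trajectory by nearest-neighbor lookup over
# operator-directed demonstrations, standing in for a trained ML model.
import math

def state_distance(a, b):
    """Euclidean distance between two flattened planner states."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(planner_states, operator_trajectories):
    """Training data: pairs of planner states and the perception-system
    trajectories a human operator directed for them."""
    return list(zip(planner_states, operator_trajectories))

def predict(model, planner_state):
    """Return the perception trajectory of the most similar demonstrated state."""
    _, traj = min(model, key=lambda pair: state_distance(pair[0], planner_state))
    return traj

model = train(
    [(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)],   # future paths (flattened)
    [["pan_left", "tilt_down"], ["pan_right"]],     # operator camera motions
)
```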
WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE
A data collection system performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.
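One way to picture how these subsystems fit together is a per-action data record: the wearable subsystem issues an instruction, the operator performs it through the gripper interface, and the visual sensing subsystem logs pose samples. The record layout and names below are hypothetical, not from the patent:

```python
# Hypothetical data record for one collected action.
from dataclasses import dataclass

@dataclass
class CollectedAction:
    instruction: str        # from the user interface subsystem
    gripper_events: list    # operator actions via the human-machine interface
    camera_frames: list     # raw visual data: (timestamp, gripper pose) samples

def collect(instruction, gripper_events, camera_frames):
    """Data collection subsystem: bundle one action's data for robot learning."""
    return CollectedAction(instruction, gripper_events, camera_frames)

sample = collect(
    "pick up the red block",
    ["open", "approach", "close"],
    [(0.0, (0.1, 0.2, 0.3)), (0.5, (0.1, 0.2, 0.1))],
)
```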
TASK AND PROCESS MINING BY ROBOTIC PROCESS AUTOMATIONS ACROSS A COMPUTING ENVIRONMENT
Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor executable code on a memory. The processor executable code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded tasks identifying user activity with respect to a computing environment and clustering the recorded tasks into steps by processing and scoring each recorded task. The method also includes extracting step sequences that identify similar or repeated combinations of the steps to mimic the user activity.
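The sequence-extraction idea can be illustrated with n-gram counting: once recorded tasks have been clustered into step labels, repeated combinations of steps show up as frequent n-grams in the step stream. A minimal sketch with hypothetical names:

```python
# Illustrative sketch: find repeated combinations of steps by counting
# n-grams over a stream of clustered step labels.
from collections import Counter

def extract_step_sequences(steps, n=2, min_count=2):
    """Return step n-grams that occur at least min_count times."""
    grams = Counter(tuple(steps[i:i + n]) for i in range(len(steps) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

steps = ["open_app", "copy", "paste", "open_app", "copy", "paste", "save"]
repeated = extract_step_sequences(steps)
```

A robotic process automation could then be built around the surviving sequences, since they capture the user activity worth mimicking.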
HUMAN ROBOT COLLABORATION FOR FLEXIBLE AND ADAPTIVE ROBOT LEARNING
Example implementations described herein involve systems and methods for training and managing machine learning models in an industrial setting. Specifically, by leveraging the similarity across certain production areas, example implementations can group these areas together to efficiently train models that use human pose data to predict human activities or the specific task(s) the workers are engaged in. The example implementations do away with previous methods of independent model construction for each production area and take advantage of the commonality amongst different environments.
Multi-sensor array including an IR camera as part of an automated kitchen assistant system for recognizing and preparing food and related methods
An automated kitchen assistant system inspects a food preparation area in a kitchen environment using a novel sensor combination. The combination of sensors includes an infrared (IR) camera that generates IR image data and at least one secondary sensor that generates secondary image data. The IR image data and secondary image data are processed to obtain combined image data. A trained convolutional neural network is employed to automatically compute an output based on the combined image data. The output includes information about the identity and the location of the food item. The output may further be utilized to command a robotic arm, direct a kitchen worker, or otherwise assist in food preparation. Related methods are also described.
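The fusion step, producing combined image data from the IR and secondary images, can be sketched as per-pixel channel stacking, assuming the two images are registered to the same pixel grid. This pure-Python stand-in (nested lists instead of arrays, hypothetical names) shows the shape of the data a convolutional network would then consume:

```python
# Minimal sketch of the sensor-fusion step: stack registered IR and
# secondary images into multi-channel combined image data.
def combine_image_data(ir_image, secondary_image):
    """Pair per-pixel IR and secondary values into (ir, secondary) channels."""
    assert len(ir_image) == len(secondary_image)
    combined = []
    for ir_row, sec_row in zip(ir_image, secondary_image):
        combined.append([(ir, sec) for ir, sec in zip(ir_row, sec_row)])
    return combined

ir = [[300.0, 310.0], [295.0, 400.0]]   # IR radiometric values (toy 2x2 image)
gray = [[0.2, 0.3], [0.1, 0.9]]         # secondary sensor values
fused = combine_image_data(ir, gray)
```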
METHOD FOR CONTROLLING A ROBOT AND ROBOT CONTROLLER
A method for controlling a robot. The method includes providing demonstrations for performing each of a plurality of skills; training, from the demonstrations, a robot trajectory model for each skill, wherein each trajectory model is a hidden semi-Markov model having one or more initial states and one or more final states; training, from the demonstrations, a precondition model for each skill comprising, for each initial state, a probability distribution of robot configurations before executing the skill, and a final condition model for each skill comprising, for each final state, a probability distribution of robot configurations after executing the skill; receiving a description of a task, the task including performing the skills of the plurality of skills in sequence and/or branches; generating a composed robot trajectory model; and controlling the robot according to the composed robot trajectory model to execute the task.
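The role of the precondition models in composition can be illustrated with 1-D Gaussians: each skill holds, per initial state, a distribution over robot configurations, and the current configuration is scored under each to pick the best entry point. This is a simplified stand-in (scalar configurations, hypothetical skill names), not the patent's exact construction:

```python
# Illustrative sketch: score the current robot configuration under each
# skill's per-initial-state precondition Gaussian and pick the best match.
import math

def gaussian_pdf(x, mean, std):
    """1-D Gaussian density, standing in for a configuration distribution."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Precondition models: skill -> list of (initial_state, mean, std).
PRECONDITIONS = {
    "reach": [("s0", 0.0, 0.1)],
    "grasp": [("s0", 0.5, 0.1), ("s1", 0.6, 0.2)],
}

def best_skill(config):
    """Return the (skill, initial state) whose precondition best matches."""
    scored = [
        (gaussian_pdf(config, mean, std), skill, state)
        for skill, entries in PRECONDITIONS.items()
        for state, mean, std in entries
    ]
    _, skill, state = max(scored)
    return skill, state
```

Chaining such checks against each skill's final condition model is what allows trajectory models to be stitched into a composed model for a sequenced or branching task.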
Skill template distribution for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from a user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides the skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for each demonstration subtask using local demonstration data to generate learned parameter values.
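A skill template as described, a state machine of tasks with some flagged as demonstration subtasks, can be sketched as a small data structure. All names below are hypothetical illustrations:

```python
# Hypothetical sketch of a skill template: an ordered set of subtasks, some
# of which require local demonstration data before the template can run.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    needs_demonstration: bool = False   # True: refine with local demo data

@dataclass
class SkillTemplate:
    name: str
    subtasks: list = field(default_factory=list)   # ordered state machine

    def demonstration_subtasks(self):
        """List the subtasks that still require local demonstration data."""
        return [t.name for t in self.subtasks if t.needs_demonstration]

template = SkillTemplate("connector_insertion", [
    Subtask("move_to_socket"),
    Subtask("insert", needs_demonstration=True),
])
```

In the distributed setting of the following abstract, the `demonstration_subtasks` set is exactly what an execution system would collect local data for and ship off for training.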
Distributed robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
Engagement Detection and Attention Estimation for Human-Robot Interaction
A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.