Patent classifications
G05B2219/40391
Machine learning device, robot system, and machine learning method for learning motion of robot engaged in task performed by human and robot in cooperation with each other
A machine learning device for learning a motion of a robot engaged in a task performed by a human and the robot in cooperation with each other, including: a state observation unit that observes a state variable indicating a state of the robot while the human and the robot perform the task in cooperation; a reward calculation unit that calculates a reward based on control data for controlling the robot, the state variable, and an action of the human; and a value function update unit that updates an action value function for controlling the motion of the robot based on the reward and the state variable.
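The reward-calculation and value-function-update units describe the standard reinforcement-learning loop: observe a state, compute a reward, update an action-value function. A minimal tabular Q-learning sketch of that loop (the states, actions, transition model, reward shaping, and gains are illustrative stand-ins, not taken from the patent):

```python
import random

random.seed(0)  # reproducible toy run

# Illustrative tabular Q-learning loop. States/actions are toy stand-ins
# for the patent's observed robot/human state variables and motion commands.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
STATES = range(5)          # discretized robot/human state
ACTIONS = range(3)         # discretized robot motions
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Stand-in for the reward calculation unit, e.g. rewarding motions
    # that reduce the burden on the cooperating human.
    return 1.0 if action == state % 3 else -0.1

def step(state, action):
    # Toy transition model standing in for the next observed state.
    return (state + action) % 5

def choose_action(state):
    # Epsilon-greedy exploration over the current action-value estimates.
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 0
for _ in range(2000):
    action = choose_action(state)
    r = reward(state, action)
    nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Value function update: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

# The greedy policy typically recovers the immediately rewarded action.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The update rule in the loop body is the "value function update unit" of the abstract; the `reward` function plays the role of the reward calculation unit.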
Position/force controller, and position/force control method and storage medium
A position/force controller includes a function-dependent force/speed distribution conversion unit that, on the basis of speed, position, and force information relating to a position based on an action of an actuator, and on control reference information, performs a conversion to distribute control energy to at least one of speed or position energy and force energy according to the function being realized. A control amount calculation unit calculates at least one of a speed or position control amount and a force control amount on the basis of at least one of the speed or position energy and the force energy distributed by the force/speed distribution conversion unit. An integration unit integrates the speed or position control amount with the force control amount, performs a reverse conversion on them, and determines the input returned to the actuator.
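A common way to realize this kind of distribution is to blend a position-control term and a force-control term with a function-dependent ratio, then recombine them into a single actuator input. A minimal one-degree-of-freedom sketch under that interpretation (the gains, the blending ratio, and all names are illustrative, not the patent's formulation):

```python
# 1-DoF sketch of distributing control effort between position and force
# channels, loosely analogous to the abstract's force/speed distribution
# conversion. Gains and the ratio schedule are illustrative.

def blended_command(pos, pos_ref, force, force_ref, ratio,
                    kp=5.0, kf=0.5):
    """ratio=1.0 -> pure position control, ratio=0.0 -> pure force control."""
    pos_cmd = kp * (pos_ref - pos)        # position control amount
    force_cmd = kf * (force_ref - force)  # force control amount
    # "Integration": weighted recombination into one actuator input.
    return ratio * pos_cmd + (1.0 - ratio) * force_cmd

# Free motion: favor position control.
print(blended_command(pos=0.0, pos_ref=1.0, force=0.0, force_ref=0.0, ratio=1.0))  # 5.0
# In contact: favor force control toward a 2 N force reference.
print(blended_command(pos=0.9, pos_ref=1.0, force=0.5, force_ref=2.0, ratio=0.0))  # 0.75
```

Scheduling `ratio` as a function of the task (free motion vs. contact) is what gives the distribution its "function-dependent" character.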
ROBOTIC MANIPULATION METHODS AND SYSTEMS FOR EXECUTING A DOMAIN-SPECIFIC APPLICATION IN AN INSTRUMENTED ENVIRONMENT WITH ELECTRONIC MINIMANIPULATION LIBRARIES
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment based at least in part on sensor data received from the one or more sensors and on an electronic multi-stage process recipe file, the recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
METHODS AND SYSTEMS FOR FOOD PREPARATION IN A ROBOTIC COOKING KITCHEN
The present disclosure is directed to methods, computer program products, and computer systems for instructing a robot to prepare a food dish by replicating the human chef's movements and actions. The human chef is monitored in an instrumented, application-specific setting, a standardized robotic kitchen in this instance, using sensors and computers to watch, monitor, record, and interpret the chef's motions and actions. The goal is to develop a robot-executable set of commands that is robust to variations and changes in the environment and allows a robotic or automated system in a robotic kitchen to prepare the same dish to the same standards and quality as the dish prepared by the human chef.
Action imitation method and robot and computer readable storage medium using the same
The present disclosure provides an action imitation method as well as a robot and a computer readable storage medium using the same. The method includes: collecting at least one two-dimensional image of a to-be-imitated object; obtaining two-dimensional coordinates of each key point of the to-be-imitated object in the two-dimensional image and a pairing relationship between the key points; converting the two-dimensional coordinates of the key points into corresponding spatial three-dimensional coordinates through a pre-trained first neural network model; and generating an action control instruction of a robot based on the three-dimensional coordinates of the key points and the pairing relationship between them, where the action control instruction is for controlling the robot to imitate an action of the to-be-imitated object.
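The pipeline the abstract describes is: 2D keypoints, a learned 2D-to-3D lifting step, then joint commands derived from the paired keypoints. A sketch of that shape, with the pre-trained lifting network replaced by a trivial stub (all keypoint names, the depth guess, and the angle computation are illustrative assumptions):

```python
import math

# Pipeline sketch: 2D keypoints -> 3D lifting (stubbed) -> joint-angle
# commands the robot can imitate. The real system uses a pre-trained
# neural network for the lifting step; a fixed depth guess stands in here.

KEYPOINT_PAIRS = [("shoulder", "elbow"), ("elbow", "wrist")]  # pairing relationship

def lift_to_3d(kp2d):
    """Stub for the pre-trained lifting network: append a depth estimate."""
    return {name: (x, y, 0.3) for name, (x, y) in kp2d.items()}

def joint_angle(p_parent, p_child):
    """Angle of the child keypoint relative to its parent, in the x-y plane."""
    dx = p_child[0] - p_parent[0]
    dy = p_child[1] - p_parent[1]
    return math.atan2(dy, dx)

def action_instruction(kp2d):
    kp3d = lift_to_3d(kp2d)
    # One angle per paired link becomes a joint target for the robot.
    return {f"{a}->{b}": joint_angle(kp3d[a], kp3d[b]) for a, b in KEYPOINT_PAIRS}

pose2d = {"shoulder": (0.0, 0.0), "elbow": (0.1, 0.1), "wrist": (0.2, 0.1)}
cmd = action_instruction(pose2d)
print(cmd)
```

The pairing relationship is what turns a bag of points into a kinematic chain: each pair defines a link whose orientation can be mapped onto a robot joint.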
Systems and methods for robotic process automation of mobile platforms
In some embodiments, a robotic process automation (RPA) design application provides a user-friendly graphical user interface that unifies the design of automation activities performed on desktop computers with the design of automation activities performed on mobile computing devices such as smartphones and wearable computers. Some embodiments connect to a model device acting as a substitute for an actual automation target device (e.g., smartphone of specific make and model) and display a model GUI mirroring the output of the respective model device. Some embodiments further enable the user to design an automation workflow by directly interacting with the model GUI.
Task and process mining by robotic process automations across a computing environment
Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor-executable code on a memory, and the code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded user tasks identifying user activity with respect to a computing environment and clustering the recorded user tasks into steps by processing and scoring each recorded user task. The method also includes extracting step sequences that identify similar or repeated combinations of the steps to mimic the user activity.
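The two stages the abstract names can be sketched simply: group recorded events into steps by a signature, then mine the step stream for repeated subsequences that are candidates for automation. A minimal sketch (the event schema, the signature-based "clustering", and the n-gram extraction are illustrative simplifications of the patented engine):

```python
from collections import Counter

# Stage 1: cluster recorded user events into steps.
# Stage 2: extract repeated step sequences (n-grams) worth automating.
events = [
    {"app": "mail", "action": "open"},
    {"app": "mail", "action": "copy"},
    {"app": "crm",  "action": "paste"},
    {"app": "mail", "action": "open"},
    {"app": "mail", "action": "copy"},
    {"app": "crm",  "action": "paste"},
]

def to_step(event):
    # Clustering stand-in: events with the same signature form one step.
    return f'{event["app"]}:{event["action"]}'

steps = [to_step(e) for e in events]

def repeated_sequences(steps, n=3):
    # Count every length-n window of steps; keep the ones that recur.
    grams = Counter(tuple(steps[i:i + n]) for i in range(len(steps) - n + 1))
    return [seq for seq, count in grams.items() if count > 1]

print(repeated_sequences(steps))
# [('mail:open', 'mail:copy', 'crm:paste')]
```

A recurring sequence like the one found here (copy from mail, paste into a CRM) is exactly the kind of pattern an RPA workflow would then be generated to replay.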
Visualization of a Robot Motion Path and Its Use in Robot Path Planning
A method of responsive robot path planning implemented in a robot controller, including: providing a plurality of potential motion paths of a robot manipulator, wherein the potential motion paths are functionally equivalent with regard to at least one initial or final condition, a transportation task and/or a workpiece processing task; causing an operator interface to visualize the potential motion paths, wherein the operator interface is associated with an operator sharing a workspace with the robot manipulator; obtaining operator behavior during the visualization; and selecting at least one preferred motion path based on the operator behavior. A method in an operator interface, including obtaining from a robot controller a plurality of potential motion paths of the robot manipulator; visualizing the potential motion paths; sensing operator behavior during the visualization; and making the operator behavior available to the robot controller.
WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE
A data collection system that performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to use the human-machine operation interface to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.