Patent classifications: G05B2219/40391
Engagement Detection and Attention Estimation for Human-Robot Interaction
A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
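The abstract above leaves the input encoding to the machine learning model open. A minimal sketch of one plausible encoding, assuming a fixed keypoint ordering, a sentinel value for occluded coordinates, and an explicit visibility flag (all of which are illustrative assumptions, not the patented method):

```python
# Hypothetical sketch: encode keypoints into a fixed-length feature vector,
# substituting a "not visible" indicator for keypoints absent from the 2D image.
NOT_VISIBLE = -1.0  # assumed sentinel; the abstract does not fix the encoding

def encode_keypoints(keypoints, visible):
    """keypoints: dict name -> (x, y) 2D image coordinates.
    visible: set of keypoint names detected in the 2D image (first subset);
    all other keypoints form the second, not-visible subset."""
    features = []
    for name in sorted(keypoints):          # fixed ordering for the model input
        if name in visible:
            x, y = keypoints[name]
            features.extend([x, y, 1.0])    # 2D coordinates + visibility flag
        else:
            features.extend([NOT_VISIBLE, NOT_VISIBLE, 0.0])  # occlusion indicator
    return features
```

The resulting vector would feed the engagement-estimation model; a fixed length is what lets one model handle varying occlusion patterns.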
Systems and methods for robotic process automation of mobile platforms
In some embodiments, a robotic process automation (RPA) design application provides a user-friendly graphical user interface that unifies the design of automation activities performed on desktop computers with the design of automation activities performed on mobile computing devices such as smartphones and wearable computers. Some embodiments connect to a model device acting as a substitute for an actual automation target device (e.g., smartphone of specific make and model) and display a model GUI mirroring the output of the respective model device. Some embodiments further enable the user to design an automation workflow by directly interacting with the model GUI.
MACHINE LEARNING DEVICE, ROBOT CONTROLLER, ROBOT SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING ACTION PATTERN OF HUMAN
A machine learning device for a robot that allows a human and the robot to work cooperatively, the machine learning device including: a state observation unit that observes a state variable representing a state of the robot during a period in which the human and the robot work cooperatively; a determination data obtaining unit that obtains determination data for at least one of a level of burden on the human and a working efficiency; and a learning unit that learns a training data set for setting an action of the robot based on the state variable and the determination data.
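The abstract does not name a learning algorithm. As one illustrative sketch (purely an assumption, not the claimed method), the determination data could drive a simple tabular value update, rewarding actions that reduce human burden and raise working efficiency:

```python
# Minimal sketch of a learning unit, assuming a tabular value update.
# The reward combining burden and efficiency is an illustrative choice.
def reward(burden, efficiency, w_burden=1.0, w_eff=1.0):
    """Higher efficiency is good; higher human burden is penalized."""
    return w_eff * efficiency - w_burden * burden

def update_value(values, state, action, burden, efficiency, lr=0.1):
    """values: dict (state, action) -> estimated value (the learned
    training data set). state is the observed state variable; burden and
    efficiency are the determination data for this step."""
    r = reward(burden, efficiency)
    old = values.get((state, action), 0.0)
    values[(state, action)] = old + lr * (r - old)  # move toward observed reward
    return values
```

Under this sketch, actions with consistently low burden and high efficiency accumulate higher values and would be preferred when setting the robot's action.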
ROBOT TASK MANAGEMENT METHOD, ROBOT USING THE SAME AND COMPUTER READABLE STORAGE MEDIUM
The present disclosure provides a task management method for a robot, a robot using the same, and a computer readable storage medium. The method includes: obtaining a current task of the robot in response to receiving a request for executing a new task of the robot; querying a preset state table according to the new task and the current task to determine whether to switch the robot from the current task to the new task; and switching the robot from the current task to the new task in response to determining to switch. In this way, the stability of the robot's operation can be improved, and the efficiency with which the robot executes tasks can be improved.
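The preset state table can be pictured as a lookup keyed by the (current task, new task) pair. A minimal sketch, with entirely hypothetical task names and switching rules:

```python
# Hypothetical preset state table: maps (current task, new task) to whether
# the robot should switch. The task names and rules are illustrative only.
STATE_TABLE = {
    ("idle", "deliver"): True,     # idle robot accepts new work
    ("deliver", "charge"): True,   # low-battery charging preempts delivery
    ("charge", "deliver"): False,  # finish charging before new work
}

def should_switch(current_task, new_task):
    # Defaulting to "do not switch" for unlisted pairs is an assumption;
    # it keeps the current task running when no rule applies.
    return STATE_TABLE.get((current_task, new_task), False)
```

Centralizing the decision in a table is what gives the claimed stability: every preemption is an explicitly enumerated transition rather than an ad-hoc interrupt.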
Systems, devices, articles, and methods for using trained robots
Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions, which causes a robot to autonomously perform the at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. The second set of autonomous robot control instructions, when executed, causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
TASK AND PROCESS MINING BY ROBOTIC PROCESS AUTOMATIONS ACROSS A COMPUTING ENVIRONMENT
Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor-executable code on a memory. The processor-executable code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded user tasks identifying user activity with respect to a computing environment and clustering the recorded user tasks into steps by processing and scoring each recorded user task. The method also includes extracting step sequences that identify similar or repeated combinations of the steps to mimic the user activity.
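The sequence-extraction stage can be illustrated with a simple n-gram count over the clustered steps. This is a sketch under strong assumptions (steps already reduced to labels; "repeated combinations" taken as n-grams above a frequency threshold), not the engine's actual scoring:

```python
from collections import Counter

# Sketch: find repeated combinations of steps by counting step n-grams.
# The n-gram view of "step sequences" is an illustrative assumption.
def extract_repeated_sequences(steps, n=2, min_count=2):
    """steps: list of step labels in recorded order.
    Returns n-grams occurring at least min_count times, with their counts."""
    grams = Counter(tuple(steps[i:i + n]) for i in range(len(steps) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}
```

Frequent step sequences recovered this way are the candidates an RPA designer would turn into automations that mimic the user activity.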
SYSTEM FOR TESTING AND TRAINING ROBOT CONTROL
A method for training and/or testing a robot control module. The method includes generating an instruction specified by a robot control module configured for robot training and/or testing, the instruction indicating how a human-driven robot task is to be performed when training and/or testing the robot control module; providing the instruction to a mixed reality device worn by a human data collector, the mixed reality device rendering the instruction in a manner that shows the human data collector how to perform the human-driven robot task; collecting performance data and environmental data in response to the human data collector attempting to perform the human-driven robot task using the data collection device; receiving feedback data in response to the human data collector attempting to perform the human-driven robot task specified by the instruction; and updating the robot control module using the feedback data and the collected performance and environmental data.
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
Provided are a device and a method for presenting an evaluation score of teaching data, and the teaching data still needed, to a user in an easy-to-understand manner in a configuration that performs learning processing using teaching data. A teaching data execution unit generates, as learning data, a camera-captured image corresponding to movement of a robot by a user operation based on the teaching data, together with movement position information of the robot; a learning processing unit executes machine learning on the learning data generated by the teaching data execution unit and generates, as learning result data, a teaching data set including an image and a robot behavior rule; and a feedback information generation unit evaluates the teaching data using the learning data generated by the teaching data execution unit and the learning result data generated by the learning processing unit, and generates and outputs numerical and visual feedback information based on the evaluation result.
Action learning method, medium, and electronic device
An action learning method, including: acquiring human body moving image data; determining three-dimensional human body pose action data corresponding to the human body moving image data; matching the three-dimensional human body pose action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body moving image data; performing action continuity stitching on all robot sub-actions in the robot action sequence data in sequence; and determining, from the stitched robot action sequence data, a continuous action learned by the robot.
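The matching step can be sketched as nearest-neighbor lookup against prototype poses. Everything here is an illustrative assumption (one prototype vector per atomic action, Euclidean distance, merging consecutive repeats as a crude stand-in for continuity stitching):

```python
import math

# Hypothetical atomic action library: each atomic action is represented by a
# single prototype pose vector; matching picks the nearest prototype per pose.
def match_atomic_actions(pose_sequence, library):
    """pose_sequence: list of pose vectors from the human moving image data.
    library: dict action_name -> prototype pose vector.
    Returns the robot action sequence with consecutive repeats merged."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    sequence = []
    for pose in pose_sequence:
        name = min(library, key=lambda k: dist(pose, library[k]))
        if not sequence or sequence[-1] != name:  # collapse repeats before stitching
            sequence.append(name)
    return sequence
```

A real system would match over pose *trajectories* rather than single frames, but the lookup-against-a-library structure is the same.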
POSITION/FORCE CONTROLLER, AND POSITION/FORCE CONTROL METHOD AND STORAGE MEDIUM
A position/force controller includes a function-dependent force/speed distribution conversion unit that, on the basis of speed, position, and force information relating to a position based on an action of an actuator, and on control reference information, performs a conversion that distributes control energy to at least one of speed or position energy and force energy according to the function being realized. A control amount calculation unit calculates at least one of a speed or position control amount and a force control amount on the basis of at least one of the speed or position energy and the force energy distributed by the force/speed distribution conversion unit. An integration unit integrates the speed or position control amount with the force control amount, performs a reverse conversion on them, and determines the input returned to the actuator.
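The distribution idea resembles classical hybrid position/force control, where a per-axis weight decides how much of the command comes from position error versus force error. A minimal sketch, assuming proportional laws and illustrative gains (the patent's function-dependent conversion is more general than this):

```python
# Simplified sketch of distributing control effort between position and
# force per axis. alpha[i] = 1.0 means pure position control on axis i,
# alpha[i] = 0.0 means pure force control; gains kp, kf are illustrative.
def hybrid_control(pos_err, force_err, alpha, kp=2.0, kf=0.5):
    """Return per-axis actuator commands blending position and force control."""
    return [a * kp * pe + (1.0 - a) * kf * fe
            for a, pe, fe in zip(alpha, pos_err, force_err)]
```

Choosing alpha per function (e.g. position-controlled motion along a surface, force-controlled pressing normal to it) mirrors how the conversion unit redistributes control energy as the realized function changes.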