Patent classifications
G05B2219/40391
NON-TRANSITORY STORAGE MEDIUM AND METHOD AND SYSTEM OF CREATING CONTROL PROGRAM FOR ROBOT
A non-transitory computer-readable storage medium storing a computer program that controls a processor to execute (a) processing of recognizing a worker motion from an image of one or more worker motions captured by an imaging apparatus, (b) processing of recognizing hand and finger positions in a specific hand and finger motion when the worker motion contains the specific hand and finger motion, (c) processing of recognizing a position of a workpiece after work, and (d) processing of generating a control program for a robot using the worker motion, the hand and finger positions, and the position of the workpiece.
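The abstract's four-step pipeline can be sketched as plain function stubs. This is a hypothetical illustration only: all names, coordinates, and the command-list format are invented, not taken from the patent.

```python
# Hypothetical sketch of the (a)-(d) pipeline from the abstract;
# data structures and values are illustrative, not from the patent.

def recognize_worker_motion(frames):
    # (a) classify the worker motion from captured frames (stubbed)
    return "pick_and_place"

def recognize_finger_positions(frames, motion):
    # (b) finger-level tracking only for specific hand/finger motions
    if motion == "pick_and_place":
        return [(0.10, 0.20, 0.05), (0.12, 0.20, 0.05)]  # thumb, index (m)
    return None

def recognize_workpiece_position(frames):
    # (c) workpiece position after the work is done
    return (0.50, 0.30, 0.00)

def generate_control_program(motion, fingers, workpiece):
    # (d) emit a simple command list for the robot controller
    program = [("move_to", workpiece)]
    if fingers is not None:
        program.append(("grasp", fingers))
    return program

frames = ["frame0", "frame1"]
motion = recognize_worker_motion(frames)
fingers = recognize_finger_positions(frames, motion)
workpiece = recognize_workpiece_position(frames)
program = generate_control_program(motion, fingers, workpiece)
```

The point of the structure is that step (b) is conditional: finger positions feed into the generated program only when the recognized motion is a specific hand-and-finger motion.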
ACTION LEARNING METHOD, MEDIUM, AND ELECTRONIC DEVICE
An action learning method, including: acquiring human body moving image data; determining three-dimensional human body pose action data corresponding to the human body moving image data; matching the three-dimensional human body pose action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body moving image data; performing action continuity stitching on all robot sub-actions in the robot action sequence data sequentially; and determining a continuous action learned by a robot from the robot action sequence data subjected to the action continuity stitching.
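A minimal sketch of the matching-and-stitching idea, assuming nearest-neighbour matching in a feature space: observed pose features are matched against prototype vectors in an atomic-action library, and consecutive duplicates are merged during stitching. The library contents, feature vectors, and distance metric are all assumptions.

```python
# Illustrative sketch: match observed pose features to a robot
# atomic-action library by nearest neighbour, then stitch the
# sub-actions into one continuous sequence. All values are invented.
import math

atomic_library = {
    "reach": [0.9, 0.1],
    "grasp": [0.2, 0.8],
    "lift":  [0.5, 0.5],
}

def match_atomic(pose_feature):
    # pick the atomic action whose prototype is closest in feature space
    return min(atomic_library,
               key=lambda name: math.dist(atomic_library[name], pose_feature))

def stitch(sub_actions):
    # continuity stitching here simply drops consecutive repeats
    out = []
    for action in sub_actions:
        if not out or out[-1] != action:
            out.append(action)
    return out

observed = [[0.85, 0.15], [0.88, 0.12], [0.25, 0.75], [0.5, 0.5]]
sequence = stitch([match_atomic(f) for f in observed])
```

In a real system the stitching step would also blend joint trajectories between sub-actions; deduplication stands in for that here.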
Systems And Methods For Robotic Process Automation Of Mobile Platforms
In some embodiments, a robotic process automation (RPA) design application provides a user-friendly graphical user interface that unifies the design of automation activities performed on desktop computers with the design of automation activities performed on mobile computing devices such as smartphones and wearable computers. Some embodiments connect to a model device acting as a substitute for an actual automation target device (e.g., smartphone of specific make and model) and display a model GUI mirroring the output of the respective model device. Some embodiments further enable the user to design an automation workflow by directly interacting with the model GUI.
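The model-device idea can be sketched as a recorder that turns direct interactions with a mirrored model GUI into workflow activities. The class names, screen map, and activity format below are invented for illustration; the patent does not specify them.

```python
# Hypothetical sketch: interacting with a model GUI that mirrors a
# target device records RPA workflow activities. All names are invented.

class ModelDevice:
    """Stands in for an actual target device of a specific make and model."""
    def __init__(self, make, model):
        self.make, self.model = make, model
        # element name -> screen coordinates on the mirrored GUI
        self.screen = {"login_button": (120, 480)}

class WorkflowRecorder:
    def __init__(self, device):
        self.device = device
        self.activities = []

    def tap(self, element):
        # a tap on the model GUI becomes an automation activity
        x, y = self.device.screen[element]
        self.activities.append({"action": "tap",
                                "element": element,
                                "at": (x, y)})

device = ModelDevice("AcmePhone", "X1")
recorder = WorkflowRecorder(device)
recorder.tap("login_button")
```

The design point is substitution: the workflow is authored against the model device, so the same recorded activities can later target a physical device of that make and model.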
Robotic kitchen systems and methods with one or more electronic libraries for executing robotic cooking operations
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment based at least in part on the sensor data received from the one or more sensors and an electronic multi-stage process recipe file, the electronic multi-stage process recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
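The library / interpreter / executor chain can be sketched as below. The minimanipulation entries, the "machine code" string format, and the step names are assumptions made for illustration; the patent defines none of these concretely in the abstract.

```python
# Illustrative sketch of library -> interpreter -> executor;
# entries and the command format are invented, not from the patent.

minimanipulation_library = {
    "crack_egg": [("move", "egg"), ("grip", 0.3), ("strike", "bowl_rim")],
    "stir":      [("move", "bowl"), ("rotate", 360)],
}

def interpret(step_name):
    # read minimanipulation steps and convert them to low-level commands
    return [f"{op}:{arg}" for op, arg in minimanipulation_library[step_name]]

def execute(recipe_steps):
    # run each minimanipulation in the order given by the recipe file
    command_log = []
    for step in recipe_steps:
        command_log.extend(interpret(step))
    return command_log

command_log = execute(["crack_egg", "stir"])
```

A real recipe file would also carry the associated timing data per minimanipulation; this sketch keeps only the ordering.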
Robot control method and apparatus and robot using the same
The present disclosure discloses a robot control method as well as an apparatus, and a robot using the same. The method includes: obtaining a human pose image; obtaining pixel information of preset key points in the human pose image; obtaining three-dimensional positional information of key points of a human arm according to the pixel information of the preset key points; obtaining a robotic arm kinematics model of a robot; obtaining an angle of each joint in the robotic arm kinematics model according to the three-dimensional positional information of the key points of the human arm and the robotic arm kinematics model; and controlling an arm of the robot to perform a corresponding action according to the angle of each joint. The control method does not require a three-dimensional stereo camera to collect three-dimensional coordinates of a human body, which reduces the cost to a certain extent.
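A minimal sketch of the key-point-to-joint-angle step for a single joint: given three arm key points (shoulder, elbow, wrist), the elbow angle is the angle between the two segments meeting at the elbow. A full implementation would run this through the whole arm kinematics model; the coordinates here are invented.

```python
# Minimal sketch: elbow joint angle from three 3D arm key points.
# A real system maps every joint through the arm kinematics model.
import math

def joint_angle(a, b, c):
    # angle at point b formed by the segments b->a and b->c
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

shoulder = (0.0, 0.0, 0.0)
elbow = (0.3, 0.0, 0.0)
wrist = (0.3, 0.3, 0.0)
elbow_angle = joint_angle(shoulder, elbow, wrist)
```

For this right-angle pose the computed elbow angle is 90 degrees; the resulting per-joint angles would then drive the robot arm directly.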
METHOD, SYSTEM AND NONVOLATILE STORAGE MEDIUM
Disclosed herein is a method, system, and non-volatile storage medium for simplifying the automation of a process flow. The method may include determining a machine-independent process model based on data representing a handling of a work tool for performing a process flow. The process flow may include a plurality of sub-processes and the process model may link a process activity with spatial information for each sub-process. The method may also include mapping the machine-independent process model to a machine-specific control model of a machine using a model of the machine. The machine-specific control model may define an operating point of the machine for each sub-process, and the operating point may correspond to the process activity and to the spatial information.
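The mapping step can be sketched as below, assuming the simplest possible machine model: a base-frame offset applied to each sub-process's spatial information. Activities, positions, and the offset are invented; a real machine model would also cover tooling, speeds, and reachability.

```python
# Hypothetical sketch: map a machine-independent process model
# (activity + spatial information per sub-process) to machine-specific
# operating points. All values are invented for illustration.

process_model = [
    {"activity": "grind",  "position": (0.2, 0.1, 0.0)},
    {"activity": "polish", "position": (0.4, 0.1, 0.0)},
]

def map_to_machine(process_model, machine_offset):
    # the "machine model" here is just a base-frame offset
    control_model = []
    for sub in process_model:
        x, y, z = sub["position"]
        ox, oy, oz = machine_offset
        control_model.append({
            "activity": sub["activity"],
            "operating_point": (x + ox, y + oy, z + oz),
        })
    return control_model

control_model = map_to_machine(process_model, machine_offset=(1.0, 0.0, 0.5))
```

Each resulting operating point still corresponds to its process activity and spatial information, which is the invariant the abstract describes.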
Verbal-based focus-of-attention task model encoder
Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input, parses the input to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
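The focus-of-attention idea can be sketched in two steps: parse the verbal cue for a task and a target object name, then spatially filter the demonstration to frames where the hand is near that object. The parsing scheme, frame format, and distance threshold are assumptions for illustration, not the model's actual mechanics.

```python
# Illustrative sketch of a verbal focus-of-attention filter.
# Cue parsing, frame layout, and the radius threshold are invented.
import math

def parse_cue(cue, known_tasks, known_objects):
    # recognize at least a task and a target object name in the input
    words = cue.lower().split()
    task = next(w for w in words if w in known_tasks)
    target = next(w for w in words if w in known_objects)
    return task, target

def foa_filter(frames, target, radius=0.2):
    # spatial filter: keep frames where the hand is near the target object
    return [f for f in frames
            if math.dist(f["hand"], f["objects"][target]) <= radius]

frames = [
    {"hand": (0.0, 0.0), "objects": {"cup": (0.9, 0.9)}},  # far: clutter
    {"hand": (0.8, 0.9), "objects": {"cup": (0.9, 0.9)}},  # near: relevant
]
task, target = parse_cue("Pick up the cup", {"pick"}, {"cup"})
focused = foa_filter(frames, target)
```

The temporal half of the spatio-temporal filter would additionally restrict attention to the time window in which the task unfolds; this sketch shows only the spatial half.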
MANIPULATOR AND METHOD FOR CONTROLLING THEREOF
A manipulator and a method for controlling the manipulator are disclosed. The manipulator includes: a plurality of links respectively corresponding to a user’s upper arm, forearm, and hand, a plurality of motors rotating the plurality of links, a communication interface comprising communication circuitry, a memory storing at least one instruction, and a processor configured to execute the at least one instruction, wherein the processor is configured to: based on first rotation angle information for motors corresponding to the upper arm and the forearm among the plurality of motors, obtain information for a body frame of a link corresponding to the forearm, obtain equilibrium angle information that positions the body frame in equilibrium with a specified reference frame, based on receiving a sensing value indicating the posture of the hand from an external sensor through the communication interface, obtain second rotation angle information for motors corresponding to the hand among the plurality of motors based on the sensing value and the equilibrium angle information, and control the motors corresponding to the hand based on the second rotation angle information.
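Reduced to a single rotation axis, the control chain in the abstract can be sketched as: derive the forearm body-frame orientation from the first rotation angles, compute the equilibrium angle against the reference frame, then combine it with the sensed hand posture to get the second rotation angle. Treating orientations as scalar angles is a simplification; all values are invented.

```python
# Hypothetical single-axis sketch of the wrist-control steps in the
# abstract; real frames are 3D rotations, and all angles are invented.

def forearm_frame_angle(upper_arm_deg, forearm_deg):
    # body-frame orientation of the forearm link, from the first
    # rotation angle information of the upper-arm and forearm motors
    return upper_arm_deg + forearm_deg

def equilibrium_angle(body_frame_deg, reference_deg=0.0):
    # rotation that puts the body frame in equilibrium with the reference frame
    return reference_deg - body_frame_deg

def hand_motor_angle(sensed_hand_deg, equilibrium_deg):
    # second rotation angle: sensed hand posture compensated by equilibrium
    return sensed_hand_deg + equilibrium_deg

body = forearm_frame_angle(30.0, 45.0)
eq = equilibrium_angle(body)
command = hand_motor_angle(20.0, eq)
```

The effect is that the commanded hand-motor angle tracks the externally sensed hand posture relative to the reference frame, regardless of how the upper arm and forearm are positioned.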
Robotic system having shuttle
A robotic system includes a robot having a picking arm to grasp an inventory item and a shuttle. The shuttle includes a platform adapted to receive the inventory item from the picking arm of the robot. The platform is moveable between a pick-up location located substantially adjacent to the robot and an end location spaced a distance apart from the pick-up location. The system improves efficiency as transportation of the item from the pick-up location to the end location is divided between the robot and the shuttle.