Patent classifications
G05B2219/36442
Controlling an automation assembly
The invention relates to a method for controlling an automation assembly which has a robot assembly with at least one robot (10) and a detection means assembly with at least one detection means (21-23), said method having the following at least partly automated steps: providing (S10) a first sequence of first ordinate data (q.sub.1, q.sub.2, dq.sub.2/dt, τ.sub.1, τ.sub.2) assigned to successive first abscissa points (t) on the basis of first training data (q.sub.1, q.sub.2, τ.sub.1, X.sub.2); identifying (S20) a first event point (t.sub.E) within the first abscissa points of the first sequence; and determining (S30) a first event criterion on the basis of the first sequence and the first event point.
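The three steps S10–S30 can be illustrated with a toy sketch: build a torque sequence over time from training data, locate an abrupt change as the event point, and derive a threshold criterion from the value at that point. All function names, the jump heuristic, and the margin are assumptions for illustration, not the patent's method.

```python
# Hypothetical sketch of steps S10-S30: provide an ordinate sequence,
# identify an event point within its abscissa points, and determine an
# event criterion from the sequence and the event point.

def identify_event_point(torques, jump=0.5):
    """S20: return the first index where the torque changes abruptly."""
    for i in range(1, len(torques)):
        if abs(torques[i] - torques[i - 1]) > jump:
            return i
    return None

def determine_event_criterion(torques, event_idx, margin=0.1):
    """S30: derive a threshold criterion from the value at the event point."""
    level = torques[event_idx]
    return lambda tau: abs(tau - level) <= margin

# S10: torque values at successive time steps (synthetic training data)
tau_1 = [0.10, 0.12, 0.11, 0.95, 0.97, 0.96]
t_E = identify_event_point(tau_1)                 # event point index
criterion = determine_event_criterion(tau_1, t_E) # e.g. a contact test
```

At run time such a criterion could be evaluated against live torque readings to detect the learned event.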
DISTRIBUTED ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
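The upload-train-execute loop described above can be sketched as follows, with an in-process stand-in for the cloud-based training system and a trivial "model" (the mean of each subtask's demonstration values). Every name and the choice of model are assumptions for illustration only.

```python
# Illustrative workflow: per-subtask local demonstration data is sent to a
# training service, which returns trained model parameters per subtask.

def cloud_train(demo_samples):
    """Stand-in for the cloud-based training system: fit a trivial
    'model' (the mean demonstrated value) to one set of local data."""
    return sum(demo_samples) / len(demo_samples)

def train_skill_template(template, local_demos):
    """Upload each demonstration subtask's local data and collect the
    resulting trained model parameters, keyed by subtask name."""
    params = {}
    for subtask in template["demonstration_subtasks"]:
        params[subtask] = cloud_train(local_demos[subtask])
    return params

template = {"skill": "insert_connector",
            "demonstration_subtasks": ["align", "push"]}
local_demos = {"align": [0.9, 1.1, 1.0], "push": [4.0, 6.0]}
trained = train_skill_template(template, local_demos)
# 'trained' would then parameterize execution of the skill on the robot.
```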
SKILL TEMPLATE DISTRIBUTION FOR ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from the user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides a skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for the demonstration subtask using local demonstration data to generate learned parameter values.
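A skill template of the kind described, a state machine over tasks with some tasks flagged as demonstration subtasks, might be modeled as a small data structure like the following. The class, task names, and transition encoding are assumptions for illustration, not the patent's representation.

```python
# Minimal sketch: a skill template as a state machine of tasks, marking
# which tasks are demonstration subtasks requiring local demonstration data.

class SkillTemplate:
    def __init__(self, tasks, transitions, demonstration_subtasks):
        self.tasks = tasks                                  # task names
        self.transitions = transitions                      # task -> next task
        self.demonstration_subtasks = set(demonstration_subtasks)

    def needs_demonstration(self, task):
        """True if the task must be refined with local demonstration data."""
        return task in self.demonstration_subtasks

    def ordered_tasks(self, start):
        """Walk the state machine from the start task to the end."""
        order, task = [], start
        while task is not None:
            order.append(task)
            task = self.transitions.get(task)
        return order

template = SkillTemplate(
    tasks=["move_to_part", "grasp", "insert"],
    transitions={"move_to_part": "grasp", "grasp": "insert"},
    demonstration_subtasks=["insert"],
)
```

A distribution system could serialize such an object and ship it to the execution site, which then supplies demonstration data only for the flagged subtasks.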
Product kitting systems and methods
The systems and methods provide an action recognition and analytics tool for use in manufacturing, health care services, shipping, retailing and other similar contexts. Machine learning action recognition can be utilized to determine cycles, processes, actions, sequences, objects and/or the like in one or more sensor streams. The sensor streams can include, but are not limited to, one or more video sensor frames, thermal sensor frames, infrared sensor frames, and/or three-dimensional depth frames. The analytics tool can provide for kitting products, including real-time verification of packing or unpacking by action and image recognition.
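One downstream use of recognized actions is segmenting a per-frame label stream into repeated work cycles. The sketch below shows one simple way to do that, splitting whenever the designated start action recurs; it is an assumed illustration, not the tool's algorithm.

```python
# Hedged sketch: segment a stream of per-frame action labels (as an
# action-recognition model might emit) into repeated work cycles, each
# cycle beginning at the designated start action.

def segment_cycles(actions, start_action):
    cycles, current = [], []
    for a in actions:
        if a == start_action and current:
            cycles.append(current)      # close the previous cycle
            current = []
        current.append(a)
    if current:
        cycles.append(current)          # keep the trailing partial cycle
    return cycles

stream = ["pick", "place", "seal", "pick", "place", "seal"]
cycles = segment_cycles(stream, "pick")  # two complete pick-place-seal cycles
```

Cycle boundaries like these could then feed kitting verification, e.g. checking that each cycle contains the expected packing actions.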
Robotic device, control method for robotic device, and program
A mode setting unit sets any one of operation modes in an operation mode group including at least a coaching mode and a learning mode. In the coaching mode, a control unit receives a posture instruction and controls a storage unit to store the posture instruction. In the learning mode, the control unit derives a control mode of a drive mechanism by learning while reflecting, in a posture of the robotic device, the posture instruction received in the coaching mode.
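The coaching/learning mode split described above can be sketched as a tiny controller: in the coaching mode, posture instructions are stored; in the learning mode, a control target is derived from the stored instructions (here, trivially, their average joint posture). The class, mode names, and averaging rule are illustrative assumptions.

```python
# Minimal sketch of the described mode handling.

class RoboticDeviceController:
    MODES = {"coaching", "learning"}

    def __init__(self):
        self.mode = None
        self.stored_postures = []   # storage unit for posture instructions

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def receive_posture(self, posture):
        """In the coaching mode, store the received posture instruction."""
        if self.mode == "coaching":
            self.stored_postures.append(posture)

    def derive_control(self):
        """In the learning mode, derive a control target (joint-wise mean)
        reflecting the posture instructions received while coaching."""
        if self.mode != "learning" or not self.stored_postures:
            return None
        n = len(self.stored_postures)
        return [sum(j) / n for j in zip(*self.stored_postures)]

ctrl = RoboticDeviceController()
ctrl.set_mode("coaching")
ctrl.receive_posture([0.0, 1.0])
ctrl.receive_posture([2.0, 3.0])
ctrl.set_mode("learning")
target = ctrl.derive_control()   # averaged two-joint posture
```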
Robot Teaching Device and Work Teaching Method
A motion is generated with which the robot can perform work within its movable range without interfering with surrounding structures. The robot teaching device is a work teaching device that teaches work to a robot that holds and moves a held object. The device includes a teaching pose measurement unit configured to measure and/or calculate a teaching pose that is a pose of the held object during teaching work, and a robot operation generation unit configured to generate a joint displacement sequence of the robot such that a pose of a held object of the same type as the held object whose teaching pose was measured becomes the same pose as the teaching pose.
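A joint displacement sequence toward a measured teaching pose might be generated as below: map the pose to target joint values with an inverse-kinematics routine (stubbed here), then interpolate linearly in joint space. The IK stub, step count, and interpolation scheme are assumptions for illustration only.

```python
# Sketch: teaching pose -> target joints (via stubbed IK) -> a joint
# displacement sequence interpolating from the current joint values.

def inverse_kinematics(pose):
    """Stub: pretend each pose coordinate maps directly to a joint angle."""
    return list(pose)

def joint_displacement_sequence(current_joints, teaching_pose, steps=4):
    target = inverse_kinematics(teaching_pose)
    seq = []
    for k in range(1, steps + 1):
        alpha = k / steps                      # interpolation fraction
        seq.append([c + alpha * (t - c)
                    for c, t in zip(current_joints, target)])
    return seq

seq = joint_displacement_sequence([0.0, 0.0], (1.0, 2.0))
# seq ends exactly at the joint values realizing the teaching pose
```

A real generator would additionally check each waypoint against the robot's movable range and a collision model of the surrounding structure.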
SYSTEM AND METHOD FOR EMBODIED AUTHORING OF HUMAN-ROBOT COLLABORATIVE TASKS WITH AUGMENTED REALITY
A system and method for authoring and performing Human-Robot-Collaborative (HRC) tasks is disclosed. The system and method adopt an embodied authoring approach in Augmented Reality (AR) for spatially editing the actions and programming the robots through demonstrative role-playing. The system and method utilize an intuitive workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. The system and method utilize a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration.
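The collaboration model above relies on dynamic time warping to align captured motion with authored actions. The following is a standard textbook DTW distance over 1-D motion sequences, not the patent's implementation; a full system would run it over multi-dimensional pose features.

```python
# Classic dynamic time warping (DTW) distance between two sequences.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]

# Identical sequences align at zero cost; a time-stretched copy of the
# same motion also aligns at zero cost, which is DTW's point.
d_same = dtw_distance([0, 1, 2, 1], [0, 1, 2, 1])
d_stretch = dtw_distance([0, 1, 2, 1], [0, 0, 1, 1, 2, 1])
```

Matching the live motion against each authored action by DTW distance and picking the minimum would yield the "map to previously authored human actions" step.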
Programming of a robotic arm using a motion capture system
An example method includes receiving position data indicative of position of a demonstration tool. Based on the received position data, the method further includes determining a motion path of the demonstration tool, wherein the motion path comprises a sequence of positions of the demonstration tool. The method additionally includes determining a replication control path for a robotic device, where the replication control path includes one or more robot movements that cause the robotic device to move a robot tool through a motion path that corresponds to the motion path of the demonstration tool. The method also includes providing for display of a visual simulation of the one or more robot movements within the replication control path.
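The pipeline described, position data to motion path to replication control path, can be sketched in one dimension as follows: filter received positions into a motion path, then express robot movements as successive displacements along it. The function names and the distance filter are illustrative assumptions.

```python
# Sketch: demonstration tool positions -> motion path -> replication
# control path of relative robot movements.

def motion_path(position_data, min_step=0.05):
    """Keep positions that moved at least min_step from the last kept one,
    turning raw position data into a sequence of path positions."""
    path = [position_data[0]]
    for p in position_data[1:]:
        if abs(p - path[-1]) >= min_step:
            path.append(p)
    return path

def replication_control_path(path):
    """Robot movements as successive displacements along the motion path."""
    return [b - a for a, b in zip(path, path[1:])]

positions = [0.0, 0.01, 0.1, 0.1, 0.3]   # raw demonstration tool positions
path = motion_path(positions)            # jitter and repeats filtered out
moves = replication_control_path(path)   # movements for the robot tool
```

Replaying `moves` through the robot tool reproduces the demonstrated path; a visual simulation step would render these movements before execution.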
Intelligent apparatus for guidance and data capture during physical repositioning of a patient on a sleep platform
A system for guiding and evaluating physical positioning, orientation and motion of the human body, comprising: a cloud computing-based subsystem including an artificial neural network and a spatial position analyzer, said cloud computing-based subsystem adapted for data storage, management and analysis; at least one motion sensing device wearable on the human body, said at least one motion sensing device adapted to detect changes in at least one of spatial position, orientation, and rate of motion; a mobile subsystem running an application program (app) that controls said at least one motion sensing device, said mobile subsystem adapted to capture activity data quantifying said changes in at least one of spatial position, orientation, and rate of motion, said mobile subsystem further adapted to transfer said activity data to said cloud computing-based subsystem, wherein said cloud computing-based subsystem processes, stores, and analyzes said activity data.
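The data path described, wearable sensor samples captured by the mobile app and shipped to the cloud for analysis, can be illustrated with a toy change detector: flag orientation changes above a threshold as repositioning events. The function, threshold, and event encoding are assumptions for illustration only.

```python
# Hedged sketch: flag large orientation changes in wearable-sensor samples
# as repositioning events; the event records would then be transferred to
# the cloud computing-based subsystem for storage and analysis.

def detect_repositioning(orientation_deg, threshold=30.0):
    """Return (sample index, delta) for each change above the threshold."""
    events = []
    for i in range(1, len(orientation_deg)):
        delta = orientation_deg[i] - orientation_deg[i - 1]
        if abs(delta) >= threshold:
            events.append((i, delta))
    return events

samples = [0.0, 2.0, 1.0, 85.0, 86.0]   # e.g. patient rolls onto their side
events = detect_repositioning(samples)
```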