G05B2219/36442

TELEMETRY HARVESTING AND ANALYSIS FROM EXTENDED REALITY STREAMING

A method for producing an optimized instruction set for guiding a robot through a service procedure includes fitting human operators with an XR headset and controllers, instructing the operators to perform the same service procedure as a series of individual steps, monitoring each operator's movements, and recording the XR telemetry data produced by the headset and controllers as the operator performs the series of steps within the service procedure. The XR telemetry data is analyzed, optimized, and translated into an optimized set of instructions that enables a robot to perform the service procedure. In some aspects, machine learning and neural networks are used to acquire, aggregate, analyze, and optimize the XR telemetry data.
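The record-then-optimize pipeline above can be sketched as follows. The sample schema, the `min_move` threshold, and the use of distance-based waypoint thinning as the "optimization" stage are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TelemetrySample:
    """One pose sample from the XR headset or a controller (hypothetical schema)."""
    t: float   # timestamp in seconds
    x: float
    y: float
    z: float

def optimize(samples: List[TelemetrySample], min_move: float = 0.05) -> List[TelemetrySample]:
    """Collapse near-stationary samples into a sparse waypoint list, a crude
    stand-in for the abstract's 'analyze, optimize, and translate' stage."""
    waypoints = [samples[0]]
    for s in samples[1:]:
        last = waypoints[-1]
        dist = ((s.x - last.x) ** 2 + (s.y - last.y) ** 2 + (s.z - last.z) ** 2) ** 0.5
        if dist >= min_move:   # keep only samples that represent real motion
            waypoints.append(s)
    return waypoints
```

The resulting waypoint list would then be mapped onto robot commands; that translation step is robot-specific and is not sketched here.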

Teaching apparatus for performing teaching operation for robot
10906176 · 2021-02-02

A teaching apparatus that includes a display device and performs a teaching operation for a robot comprises: a template storage section configured to store a plurality of templates corresponding to a plurality of programs of the robot; a program explanatory content storage section configured to store plural pieces of explanatory content explaining the respective programs; a template display section configured to display the stored templates on the display device; a template selection section configured to select one template from the displayed templates; and a program explanatory content display section configured to read out, from the program explanatory content storage section, the explanatory content of the program corresponding to the selected template and to display that explanatory content on the display device.
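The storage, display, and selection sections reduce to a simple lookup structure. A minimal sketch, with the display sections simulated by return values (class and method names are illustrative, not from the patent):

```python
class TeachingApparatus:
    """Sketch of the template/explanation sections from the abstract."""

    def __init__(self, templates, explanations):
        # template storage section and program explanatory content storage section
        self._templates = list(templates)
        self._explanations = dict(explanations)   # template -> explanatory content

    def display_templates(self):
        # template display section: show all stored templates
        return self._templates

    def select(self, template):
        # template selection + explanatory content display: read out and
        # "display" the explanatory content for the chosen template
        return self._explanations[template]
```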

Systems and methods for line balancing

In various embodiments, a method includes receiving one or more sensor streams with an engine. The engine identifies one or more actions performed at first and second stations of a plurality of stations within the sensor stream(s). The received sensor stream(s) and the identified actions performed at the first and second stations are stored in a data structure, and the identified actions are mapped to the sensor stream(s). The engine characterizes each of the identified actions performed at each of the first and second stations to produce determined characterizations. Based on one or more of the determined characterizations, the engine automatically produces a recommendation, either dynamically or post-facto, to move at least one of the identified actions from one of the stations to another station to reduce cycle time.
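The recommendation step can be approximated by a greedy heuristic: the cycle time is set by the most heavily loaded (bottleneck) station, so try moving each of its actions to the least-loaded station and keep the move that shrinks the cycle time most. This heuristic and all names are assumptions for illustration, not the patent's method:

```python
def recommend_move(stations):
    """stations: dict mapping station -> list of (action, duration) pairs.
    Returns (action, from_station, to_station) when moving one action off the
    bottleneck station would reduce the cycle time, else None."""
    totals = {s: sum(d for _, d in acts) for s, acts in stations.items()}
    bottleneck = max(totals, key=totals.get)   # station setting the cycle time
    target = min(totals, key=totals.get)       # least-loaded station
    best = None
    for action, dur in stations[bottleneck]:
        others = [t for s, t in totals.items() if s not in (bottleneck, target)]
        new_cycle = max([totals[bottleneck] - dur, totals[target] + dur] + others)
        if new_cycle < totals[bottleneck] and (best is None or new_cycle < best[3]):
            best = (action, bottleneck, target, new_cycle)
    return best[:3] if best else None
```

For example, with station S1 holding actions of 5 and 4 time units and S2 holding one of 2 units, moving the 4-unit action to S2 cuts the cycle time from 9 to 6.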

Traceability systems and methods

The systems and methods provide an action recognition and analytics tool for use in manufacturing, health care services, shipping, retailing, restaurants, and other similar contexts. Machine learning action recognition can be utilized to determine cycles, processes, actions, sequences, objects, and the like in one or more sensor streams. The sensor streams can include, but are not limited to, one or more video sensor frames, thermal sensor frames, infrared sensor frames, and/or three-dimensional depth frames. The analytics tool can provide for establishing traceability.

ROBOTIC DEVICE, CONTROL METHOD FOR ROBOTIC DEVICE, AND PROGRAM

A mode setting unit sets any one of operation modes in an operation mode group including at least a coaching mode and a learning mode. In the coaching mode, a control unit receives a posture instruction and controls a storage unit to store the posture instruction. In the learning mode, the control unit derives a control mode of a drive mechanism by learning while reflecting, in a posture of the robotic device, the posture instruction received in the coaching mode.
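The mode split above can be sketched as a small state machine. Representing posture instructions as pose vectors and using their average as a placeholder for the learned control mode are both assumptions made for illustration:

```python
class RoboticDevice:
    """Minimal sketch of the coaching/learning mode split from the abstract."""

    MODES = ("coaching", "learning")

    def __init__(self):
        self.mode = None
        self.stored_postures = []   # plays the role of the storage unit

    def set_mode(self, mode):
        # mode setting unit: select one mode from the operation mode group
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def receive_posture(self, posture):
        # coaching mode: posture instructions are received and stored
        if self.mode != "coaching":
            raise RuntimeError("posture instructions are only accepted in coaching mode")
        self.stored_postures.append(posture)

    def learn(self):
        # learning mode: derive a control mode from the stored posture
        # instructions; the average here is only a stand-in for learning
        if self.mode != "learning":
            raise RuntimeError("learning requires the learning mode")
        n = len(self.stored_postures)
        return [sum(vals) / n for vals in zip(*self.stored_postures)]
```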

Teaching device and control information generation method
10754307 · 2020-08-25

A teaching device capable of teaching not only movement work but also more detailed working content. The teaching device is provided with an input section for inputting work information, such as the work of pinching workpieces, which is carried out by a robot arm at a working position. When carrying out motion capture by moving a jig (an object that mimics the robot arm) provided with a marker section, the user manipulates the input section at the appropriate timing to input the working content to be performed by the robot arm as work information; it is thus possible to set fine working content of the robot arm in the teaching device. Accordingly, the teaching device is capable of linking the positional information of the jig and the like with the work information to generate control information for controlling the robot arm.
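The linking step above amounts to pairing each operator-entered work event with a captured jig pose. A sketch under the assumption that events are matched to the nearest pose in time (the matching rule and data shapes are illustrative, not from the patent):

```python
def link(positions, work_events):
    """positions: list of (t, (x, y, z)) poses captured from the marker-equipped jig.
    work_events: list of (t, work_info) entries made through the input section.
    Returns control records pairing each work event with its nearest-in-time pose."""
    records = []
    for wt, info in work_events:
        # find the captured pose closest in time to the work event
        t, pose = min(positions, key=lambda p: abs(p[0] - wt))
        records.append({"time": wt, "pose": pose, "work": info})
    return records
```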

METHOD FOR THE SURFACE TREATMENT OF AN ARTICLE

A method for the surface treatment of an article (2) by means of a robotic device (3) comprising a robotic arm (5) and a spraying head (4) fitted on the robotic arm (5); the method comprises a learning step, during which the operator moves the spraying head (4) by means of a handling device (9) and the movements made by the spraying head (4) are stored by a storage unit (8); and a reproduction step, which is subsequent to the learning step and during which the robotic arm (5) is operated so that the spraying head (4) repeats the movements stored by the storage unit (8).
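The learning/reproduction split above is a classic record-and-replay pattern. A minimal sketch, with the storage unit modeled as a list and the arm motion as a callback (all names are illustrative, not from the patent):

```python
class SprayTeacher:
    """Record spray-head poses during the learning step, then replay them."""

    def __init__(self):
        self._stored = []   # plays the role of the storage unit (8)

    def record(self, pose):
        # learning step: the operator moves the spraying head via the handling
        # device, and each resulting pose is stored
        self._stored.append(pose)

    def reproduce(self, move_head):
        # reproduction step: the robotic arm repeats the stored movements in order
        for pose in self._stored:
            move_head(pose)
```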

WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE

A data collection system that performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to use the human-machine operation interface to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.

Machine learning device for learning assembly operation and component assembly system

A machine learning device includes a state observation unit for observing state variables that include: at least one of the state of an assembly constituted of first and second components, an assembly time, and information on a force; the result of a continuity test on the assembly; and at least one of position and posture command values for at least one of the first and second components and direction, speed, and force command values for an assembly operation. The device also includes a learning unit for learning, in a related manner, at least one of the state of the assembly, the assembly time, and the information on the force; the result of the continuity test on the assembly; and at least one of the position and posture command values for at least one of the first and second components and the direction, speed, and force command values for the assembly operation.

INTELLIGENT APPARATUS FOR PATIENT GUIDANCE AND DATA CAPTURE DURING PHYSICAL THERAPY AND WHEELCHAIR USAGE
20190290209 · 2019-09-26

A system for guiding and evaluating the physical positioning, orientation, and motion of the human body, comprising: a cloud computing-based subsystem including an artificial neural network and a spatial position analyzer, said cloud computing-based subsystem adapted for data storage, management, and analysis; at least one motion sensing device wearable on the human body, said at least one motion sensing device adapted to detect changes in at least one of spatial position, orientation, and rate of motion; and a mobile subsystem running an application program (app) that controls said at least one motion sensing device, said mobile subsystem adapted to capture activity data quantifying said changes in at least one of spatial position, orientation, and rate of motion, and further adapted to transfer said activity data to said cloud computing-based subsystem, wherein said cloud computing-based subsystem processes, stores, and analyzes said activity data.