G05B2219/40413

Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
20210205986 · 2021-07-08

A system enables teleoperation of a robot based on a pose of a subject. The system includes an image capturing device and an operator system controller that are remotely located from a robotic system controller and a robot. The image capturing device captures images of the subject. The operator system controller maps a processed version of the captured image to a three-dimensional skeleton model of the subject and generates body pose information of the subject in the captured image. The robotic system controller communicates with the operator system controller over a network. The robotic system controller generates a plurality of kinematic parameters for the robot and causes the robot to take a pose corresponding to the pose of the subject in the captured image.
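As a rough illustration of the pose-to-kinematics mapping this abstract describes, the sketch below converts a few assumed 3D skeleton keypoints into joint angles that a robotic system controller could apply; the keypoint names and the two-joint arm model are illustrative assumptions, not the patented pipeline.

```python
# Minimal sketch (not the patented implementation): mapping a captured
# 3D skeleton pose of the operator to kinematic parameters for a robot arm.
import numpy as np

def angle_between(v1, v2):
    """Angle in radians between two 3D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def pose_to_kinematic_parameters(skeleton):
    """skeleton: dict of 3D keypoints produced by the operator-side pose estimator."""
    shoulder, elbow, wrist = (np.asarray(skeleton[k])
                              for k in ("shoulder", "elbow", "wrist"))
    upper_arm = elbow - shoulder
    forearm = wrist - elbow
    return {
        "shoulder_pitch": angle_between(upper_arm, np.array([0.0, 0.0, -1.0])),
        "elbow_flex": angle_between(upper_arm, forearm),
    }

# The operator system controller would send these parameters over the network;
# the robotic system controller then commands the robot to take the matching pose.
params = pose_to_kinematic_parameters({
    "shoulder": (0.0, 0.0, 1.5), "elbow": (0.3, 0.0, 1.3), "wrist": (0.5, 0.0, 1.5),
})
print(params)
```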

Robot control device and robot control method

A robot control device includes a memory and a processor. The processor is configured to acquire first environmental information regarding a surrounding environment of a robot; specify a first appropriate level associated with a first activity based on the first environmental information by referring to a control policy in which activity information on an activity previously conducted by the robot, environmental information at the time the activity was conducted, and an appropriate level determined based on a reaction to the activity are associated with each other; and, when the first appropriate level of the first activity does not satisfy a specific condition, deter the robot from conducting the first activity.
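A minimal sketch of how such a control policy lookup and gating might look, assuming a simple (activity, environment) → level table and a threshold as the "specific condition"; the data model and numbers are illustrative, not taken from the patent.

```python
# Sketch: a control policy associating past activities, the environmental
# information at the time, and an appropriateness level derived from reactions,
# used to deter activities whose level does not satisfy the condition.
control_policy = {
    # (activity, environment) -> appropriate level in [0, 1]
    ("vacuum_floor", "people_sleeping"): 0.1,
    ("vacuum_floor", "room_empty"): 0.9,
    ("play_music", "people_talking"): 0.3,
}

THRESHOLD = 0.5  # assumed form of the "specific condition"

def should_conduct(activity, environment):
    level = control_policy.get((activity, environment), 0.5)  # unknown: neutral
    return level >= THRESHOLD

def update_policy(activity, environment, reaction_score):
    """Update the appropriateness level from a reaction to the conducted activity."""
    old = control_policy.get((activity, environment), 0.5)
    control_policy[(activity, environment)] = 0.8 * old + 0.2 * reaction_score

if not should_conduct("vacuum_floor", "people_sleeping"):
    print("deterring activity: vacuum_floor")
```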

DETERMINING AND EVALUATING DATA REPRESENTING AN ACTION TO BE PERFORMED BY A ROBOT
20200278755 · 2020-09-03

In one embodiment, a processor accesses sensor input data received from one or more sensors. The sensor input data represents one or more gestures. The processor determines, based on the sensor input data representing the one or more gestures, action data representing an action to be performed by a robot. The action includes physical movements of the robot. The processor evaluates the action data representing the action to be performed by the robot in light of evaluation data.
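A hedged sketch of the described flow, with made-up gesture labels and joint limits: sensor input representing a gesture is mapped to action data (physical movements of the robot), and that action data is evaluated against evaluation data before execution.

```python
# Sketch only: gesture -> action data -> evaluation against evaluation data.
GESTURE_TO_ACTION = {
    "wave_right": {"joint": "arm_pan", "delta_deg": +20},
    "wave_left": {"joint": "arm_pan", "delta_deg": -20},
    "raise_hand": {"joint": "arm_lift", "delta_deg": +30},
}

EVALUATION_DATA = {"arm_pan": (-90, 90), "arm_lift": (0, 120)}  # allowed ranges, assumed

def determine_action(gesture_label):
    """Determine action data from sensor input that represents a gesture."""
    return GESTURE_TO_ACTION.get(gesture_label)

def evaluate_action(action, current_angles):
    """Return True if the resulting movement stays inside the allowed range."""
    low, high = EVALUATION_DATA[action["joint"]]
    target = current_angles[action["joint"]] + action["delta_deg"]
    return low <= target <= high

current = {"arm_pan": 80, "arm_lift": 10}
action = determine_action("wave_right")
print("execute" if action and evaluate_action(action, current) else "reject")
```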

Determining and evaluating data representing an action to be performed by a robot

In one embodiment, a processor accesses sensor input data received from one or more sensors. The sensor input data represents one or more gestures. The processor determines, based on the sensor input data representing the one or more gestures, action data representing an action to be performed by a robot. The action includes physical movements of the robot. The processor evaluates the action data representing the action to be performed by the robot in light of evaluation data.

A HUMAN INTENTION DETECTION SYSTEM FOR MOTION ASSISTANCE
20200170547 · 2020-06-04

A device and method for a human intention detection (HID) sensor band. In preferred embodiments, it makes use of an array of force-sensing resistors (FSRs) embedded inside a flexible band, which is capable of reading the muscle activity for different motion types and muscle forces in a human user. In one implementation of the invention, two such bands are attached to the forearm and the upper arm. From the readings of the sensors, the patterns for motion type and muscle force are then distinguished autonomously by machine learning, such as a Support Vector Machine (SVM) algorithm or a neural network. The method is advantageous, e.g., for detecting dexterous motion of the arms, from which an assistive exoskeleton can be controlled for motion assistance. The invention is also applicable to hand gesture recognition and bilateral rehabilitation, and can likewise be used to control a lower-body exoskeleton.
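The abstract names SVM classification of the band readings explicitly; the sketch below shows that step on synthetic data with an assumed 16-channel FSR layout (two bands of eight sensors), not the actual signal-processing pipeline.

```python
# Sketch: classify motion type from force-sensing-resistor readings with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_SENSORS = 16          # two bands x 8 FSRs (forearm + upper arm), assumed
MOTIONS = ["flexion", "extension", "pronation"]

# Synthetic training data standing in for recorded muscle-activity patterns.
X = rng.normal(size=(300, N_SENSORS))
y = rng.integers(0, len(MOTIONS), size=300)

clf = SVC(kernel="rbf").fit(X, y)

def detect_intention(fsr_sample):
    """Classify one window of FSR readings into a motion type."""
    return MOTIONS[int(clf.predict(np.asarray(fsr_sample).reshape(1, -1))[0])]

print(detect_intention(rng.normal(size=N_SENSORS)))
```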

Brain-Computer Interface Based Robotic Arm Self-Assisting System and Method
20190387995 · 2019-12-26

Disclosed are a brain-computer interface based robotic arm self-assisting system and method. The system comprises a sensing layer, a decision-making layer and an execution layer. The sensing layer comprises an electroencephalogram acquisition and detection module and a visual identification and positioning module, and is used for analyzing and identifying a user's intent and for identifying and locating the positions of a corresponding cup and the user's mouth based on that intent. The execution layer comprises a robotic arm control module that performs trajectory planning and control for a robotic arm based on an execution instruction received from a decision-making module. The decision-making layer comprises the decision-making module, which is connected to the electroencephalogram acquisition and detection module, the visual identification and positioning module and the robotic arm control module to implement the acquisition and transmission of the electroencephalogram signal, the located positions and the robotic arm status, and the sending of the execution instruction for the robotic arm. The system combines visual identification and positioning technology, a brain-computer interface and a robotic arm to help paralyzed patients drink water by themselves, improving their quality of life.
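A loose sketch of the decision-making layer only, with assumed message fields and a single "drink" intent label: it combines the decoded EEG intent, the located cup and mouth positions, and the arm status into an execution instruction for the arm control module.

```python
# Sketch: decision-making module of the described three-layer system.
def decide(eeg_intent, cup_position, mouth_position, arm_status):
    if arm_status != "idle":
        return None  # wait until the previous instruction has finished
    if eeg_intent == "drink":
        return {
            "instruction": "fetch_and_bring_to_mouth",
            "waypoints": [cup_position, mouth_position],
        }
    return None

instruction = decide(
    eeg_intent="drink",
    cup_position=(0.45, -0.10, 0.02),   # from the visual identification module
    mouth_position=(0.20, 0.05, 0.35),  # located relative to the arm base
    arm_status="idle",
)
print(instruction)
```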

Methods and systems for controlling a semiconductor fabrication process

Software for controlling processes in a heterogeneous semiconductor manufacturing environment may include a wafer-centric database, a real-time scheduler using a neural network, and a graphical user interface displaying simulated operation of the system. These features may be employed alone or in combination to offer improved usability and computational efficiency for real time control and monitoring of a semiconductor manufacturing process. More generally, these techniques may be usefully employed in a variety of real time control systems, particularly systems requiring complex scheduling decisions or heterogeneous systems constructed of hardware from numerous independent vendors.
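A very loose sketch of the described split between wafer-centric records and a real-time scheduler whose policy could be a neural network; the tiny untrained forward pass below is only a placeholder showing where such a model would sit, not the described software.

```python
# Sketch: wafer-centric records feeding a scheduler with a learned scoring policy.
import numpy as np

wafers = [  # wafer-centric records: state travels with each wafer
    {"id": "W1", "next_step": "etch", "priority": 0.9, "wait_time": 12.0},
    {"id": "W2", "next_step": "litho", "priority": 0.4, "wait_time": 30.0},
]

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=4)  # placeholder, untrained weights

def score(wafer):
    x = np.array([wafer["priority"], wafer["wait_time"] / 60.0,
                  float(wafer["next_step"] == "etch"),
                  float(wafer["next_step"] == "litho")])
    return float(W2 @ np.tanh(W1 @ x))  # scheduler policy: dispatch the highest score

next_wafer = max(wafers, key=score)
print("dispatch", next_wafer["id"])
```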

Method for automatic load compensation for a cobot or an upper limb exoskeleton

A control method for controlling an actuator (11) connected to a load (50) to be handled, the method comprising the steps of: detecting an intention to handle the load (50); applying an increasing command to the actuator (11) until a movement of the actuator (11) is detected; storing the value reached by the command when the movement of the actuator (11) is detected; using the stored value to determine an estimate of the opposing force exerted by the load (50) being handled; and controlling the actuator by means of a force servocontrol relationship that uses the estimate of the opposing force exerted by the load (50) in order to establish the commands to be applied to the actuator (11). A cobot (1) arranged to perform the method is also provided.
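The claimed sequence lends itself to a short sketch: ramp the command until motion is detected, keep the value reached as an estimate of the opposing force, then reuse it in a force servo loop. The actuator interface and the toy simulation below are assumptions, not the patented implementation.

```python
# Sketch of the claimed sequence under an assumed actuator/sensor interface.
import time

def estimate_opposing_force(actuator, step=0.05, motion_threshold=1e-3, dt=0.01):
    start = actuator.read_position()
    command = 0.0
    while abs(actuator.read_position() - start) < motion_threshold:
        command += step                # apply an increasing command
        actuator.apply_command(command)
        time.sleep(dt)
    return command                     # value reached when movement is detected

def force_servo_step(actuator, opposing_force_estimate, force_setpoint, gain=2.0):
    """One step of the force servocontrol relationship using the stored estimate."""
    error = force_setpoint - actuator.read_force()
    actuator.apply_command(opposing_force_estimate + gain * error)

class SimulatedActuator:
    """Toy stand-in: starts moving once the command exceeds the load's weight."""
    def __init__(self, load_force=1.0):
        self.load_force = load_force
        self.position = 0.0
        self.command = 0.0
    def apply_command(self, u):
        self.command = u
        if u > self.load_force:
            self.position += (u - self.load_force) * 0.01
    def read_position(self):
        return self.position
    def read_force(self):
        return self.command - self.load_force

act = SimulatedActuator(load_force=1.0)
estimate = estimate_opposing_force(act)
print("estimated opposing force:", round(estimate, 2))
```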

ERRONEOUS OPERATION-PREVENTABLE ROBOT, ROBOT CONTROL METHOD, AND RECORDING MEDIUM
20180376069 · 2018-12-27

A robot includes an operation unit, an imager, an operation controller, a determiner, and an imager controller. The imager is disposed at a predetermined part of the robot and captures an image of a subject. The operation controller controls the operation unit to move the predetermined part. The determiner determines whether or not the operation controller is moving the predetermined part while the imager captures the image of the subject. When the determiner determines that the operation controller is moving the predetermined part, the imager controller controls the imager, or the recording of the image captured by the imager, so as to prevent motion of the predetermined part from affecting the image of the subject and causing the operation unit to perform an erroneous operation.
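A hedged sketch of the gating logic described, with illustrative class and method names: frames captured while the part carrying the imager is being moved are suppressed so that motion of that part cannot trigger an erroneous operation.

```python
# Sketch: suppress use of captured frames while the imager-carrying part moves.
class OperationController:
    def __init__(self):
        self.moving = False
    def is_moving_predetermined_part(self):
        return self.moving

class ImagerController:
    def __init__(self, operation_controller):
        self.operation_controller = operation_controller

    def handle_frame(self, frame):
        if self.operation_controller.is_moving_predetermined_part():
            return None          # suppress capture/recording while moving
        return self.recognize_subject(frame)

    def recognize_subject(self, frame):
        # placeholder for the recognition result that drives the operation unit
        return {"subject_detected": frame is not None}

op = OperationController()
imager = ImagerController(op)
op.moving = True
print(imager.handle_frame(frame=object()))   # None: frame suppressed
op.moving = False
print(imager.handle_frame(frame=object()))   # processed normally
```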

DETERMINING AND EVALUATING DATA REPRESENTING AN ACTION TO BE PERFORMED BY A ROBOT
20180356895 · 2018-12-13

In one embodiment, a processor accesses sensor input data received from one or more sensors. The sensor input data represents one or more gestures. The processor determines, based on the sensor input data representing the one or more gestures, action data representing an action to be performed by a robot. The action includes physical movements of the robot. The processor evaluates the action data representing the action to be performed by the robot in light of evaluation data.