Patent classifications
G05B2219/40413
METHOD FOR AUTOMATIC LOAD COMPENSATION FOR A COBOT OR AN UPPER LIMB EXOSKELETON
A control method for controlling an actuator (11) connected to a load (50) to be handled, the method comprising the steps of: detecting an intention to handle the load (50); applying an increasing command to the actuator (11) until a movement of the actuator (11) is detected; storing the value reached by the command when the movement is detected; using the stored value to determine an estimate of the opposing force exerted by the load (50); and controlling the actuator by means of a force servocontrol relationship that uses the estimate of the opposing force exerted by the load (50) to establish the commands to be applied to the actuator (11).
A cobot (1) arranged to perform the method.
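The claimed steps amount to a ramp-until-motion calibration followed by force servocontrol. A minimal Python sketch of that logic (function names, the proportional servo law, and all thresholds are illustrative assumptions, not taken from the patent):

```python
def estimate_load_force(apply_command, movement_detected, step=0.5, max_command=100.0):
    """Ramp the actuator command upward until motion is detected;
    the last command value serves as an estimate of the opposing
    force exerted by the load."""
    command = 0.0
    while command <= max_command:
        apply_command(command)
        if movement_detected():
            return command  # stored value ~ opposing-force estimate
        command += step
    raise RuntimeError("no movement detected within command range")

def servo_command(force_estimate, target_force, measured_force, gain=1.0):
    """One hypothetical force servocontrol relationship: feed forward
    the load estimate plus a proportional correction on force error."""
    return force_estimate + gain * (target_force - measured_force)
```

With a simulated load that starts moving above a threshold command, the estimator returns the first ramp value at which motion occurred, and that estimate is then fed forward by the servo law.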
Brain-computer interface based robotic arm self-assisting system and method
Disclosed are a brain-computer interface based robotic arm self-assisting system and method. The system comprises a sensing layer, a decision-making layer and an execution layer. The sensing layer comprises an electroencephalogram acquisition and detection module and a visual identification and positioning module, and is used to analyze and identify the user's intent and, based on that intent, to identify and locate the positions of the corresponding cup and the user's mouth. The execution layer comprises a robotic arm control module that performs trajectory planning and control for a robotic arm based on an execution instruction received from a decision-making module. The decision-making layer comprises the decision-making module, which is connected to the electroencephalogram acquisition and detection module, the visual identification and positioning module and the robotic arm control module to implement the acquisition and transmission of electroencephalogram signal data, located positions and robotic arm status, and the sending of the execution instruction for the robotic arm. The system combines visual identification and positioning technology, a brain-computer interface and a robotic arm to enable paralyzed patients to drink water by themselves, improving their quality of life.
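The decision-making layer described above routes a decoded EEG intent plus located positions into an execution instruction. A minimal sketch of such a decision module (the intent label, instruction format and idle check are hypothetical, not from the abstract):

```python
def decide(intent, cup_pos, mouth_pos, arm_idle=True):
    """Hypothetical decision module: map a decoded EEG intent and the
    located cup/mouth positions to a robotic-arm instruction sequence.
    Returns an empty list when no action should be taken."""
    if intent == "drink" and arm_idle:
        # bring the cup to the user's mouth, step by step
        return [("move_to", cup_pos), ("grasp",), ("move_to", mouth_pos)]
    return []
```

The execution layer would then consume each tuple in order, performing trajectory planning for every `move_to` target.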
Predictive control method of a robot and related control system
This disclosure relates to a method of controlling a collaborative robot, or “cobot”. According to the disclosed method, the cobot is controlled so as to be ready to perform a task in collaboration with a human operator only when the operator is about to move into a work sector to carry out that task. The control method can be implemented by a control system comprising detection devices, such as one or more cameras or a sensor-equipped mat, which detect the position of the operator's hands or entire body in the workspace; a memory storing identification data of the work sectors occupied by the human operator, the dwell times in those sectors, and the successive work sectors to which the operator moves; and a microprocessor control unit which processes the data stored in the memory according to the method of this disclosure to predict into which work sector the operator will move his hands and when, and which controls the robot based on this prediction. The method can be implemented by means of software executed by a microprocessor unit.
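The prediction described here can be approximated from the stored sector history: first-order transition counts give the likely next sector, and mean dwell time gives when the move is expected. A sketch under those assumptions (the first-order model is an illustrative choice; the patent does not specify one):

```python
from collections import defaultdict

def predict_next_sector(history):
    """history: list of (sector, dwell_seconds) in visit order.
    Predict the operator's most frequent next sector after the current
    one, and the expected dwell time in the current sector."""
    transitions = defaultdict(lambda: defaultdict(int))
    dwell = defaultdict(list)
    for (sector, d), (nxt, _) in zip(history, history[1:]):
        transitions[sector][nxt] += 1
        dwell[sector].append(d)
    dwell[history[-1][0]].append(history[-1][1])  # current visit counts too
    current = history[-1][0]
    if not transitions[current]:
        return None, None  # never seen a move out of this sector
    nxt = max(transitions[current], key=transitions[current].get)
    mean_dwell = sum(dwell[current]) / len(dwell[current])
    return nxt, mean_dwell
```

The controller would ready the cobot in the predicted sector roughly `mean_dwell` seconds after the operator entered the current one.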
DETERMINING AND EVALUATING DATA REPRESENTING AN ACTION TO BE PERFORMED BY A ROBOT
In one embodiment, a processor accesses sensor input data received from one or more sensors. The sensor input data represents one or more gestures. The processor determines, based on the sensor input data representing the one or more gestures, action data representing an action to be performed by a robot. The action includes physical movements of the robot. The processor evaluates the action data representing the action to be performed by the robot in light of evaluation data.
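The pipeline in this abstract is gesture → action data → evaluation before execution. A minimal sketch of that flow (the dictionary action format and the speed-limit check stand in for the unspecified "evaluation data"):

```python
def gesture_to_action(gesture, action_map, evaluation_data):
    """Map a recognized gesture to action data for the robot, then
    evaluate that action (here: a hypothetical speed limit) before
    it is allowed to be performed. Returns None when rejected."""
    action = action_map.get(gesture)
    if action is None:
        return None  # unrecognized gesture
    if action["speed"] > evaluation_data["max_speed"]:
        return None  # rejected by the evaluation step
    return action
```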
Physical human-robot interaction (pHRI)
A robot for physical human-robot interaction may include a number of sensors, a processor, a controller, an actuator, and a joint. The sensors may receive a corresponding number of sensor measurements. The processor may reduce a dimensionality of the number of sensor measurements based on temporal sparsity associated with the number of sensors and spatial sparsity associated with the number of sensors and generate an updated sensor measurement dataset. The processor may receive an action associated with a human involved in pHRI with the robot. The processor may generate a response for the robot based on the updated sensor measurement dataset and the action. The controller may implement the response via an actuator within a joint of the robot.
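One plain reading of sparsity-based dimensionality reduction is: discard channels whose signal barely changes over time (temporal sparsity) and channels that merely duplicate an already-kept channel (spatial sparsity). A pure-Python sketch of that reading; the thresholds and the Pearson-correlation test are assumptions, not the patent's method:

```python
from statistics import pvariance

def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if flat)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def reduce_measurements(channels, temporal_thresh=1e-9, spatial_thresh=0.99):
    """channels: list of per-sensor time series. Drop near-constant
    channels (temporal sparsity) and channels highly correlated with an
    already-kept one (spatial sparsity); return the kept indices."""
    kept = []
    for i, ch in enumerate(channels):
        if pvariance(ch) <= temporal_thresh:
            continue  # temporally sparse: signal barely changes
        if any(abs(pearson(ch, channels[j])) > spatial_thresh for j in kept):
            continue  # spatially redundant with a kept channel
        kept.append(i)
    return kept
```

The robot's response generator would then operate on the reduced channel set rather than the full sensor array.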
System and method for instructing a device
A system and method of instructing a device is disclosed. The system includes a signal source for providing at least one visual signal where the at least one visual signal is substantially indicative of at least one activity to be performed by the device. A visual signal capturing element captures the at least one visual signal and communicates the at least one visual signal to the device where the device interprets the at least one visual signal and performs the activity autonomously and without requiring any additional signals or other information from the signal source.
Teleoperating of robots with tasks by mapping to human operator pose
A system enables teleoperation of a robot based on a pose of a subject. The system includes an image capturing device and an operator system controller that are remotely located from a robotic system controller and a robot. The image capturing device captures images of the subject. The operator system controller maps a processed version of the captured image to a three-dimensional skeleton model of the subject and generates body pose information of the subject in the captured image. The robotic system controller communicates with the operator system controller over a network. The robotic system controller generates a plurality of kinematic parameters for the robot and causes the robot to take a pose corresponding to the pose of the subject in the captured image.
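Mapping the subject's pose to the robot's kinematic parameters requires at least projecting each skeleton joint angle into the robot's joint range. A minimal sketch of that mapping step (joint names, the radian convention, and the simple clamping are illustrative assumptions):

```python
def map_pose_to_robot(human_angles, robot_limits):
    """human_angles: dict joint -> radians from the skeleton model.
    robot_limits: dict joint -> (lo, hi) joint range of the robot.
    Clamp each mapped angle into the robot's range, skipping joints
    the robot does not have, to produce kinematic parameters."""
    return {joint: min(max(angle, robot_limits[joint][0]), robot_limits[joint][1])
            for joint, angle in human_angles.items() if joint in robot_limits}
```

In the described system this would run on the robotic system controller each time body pose information arrives over the network.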