Patent classifications
B25J13/003
Device, method, and program
A device communicates with a human through voice recognition of the human's voice. The device includes: a drive mechanism that drives the device; and a processor. The processor controls the drive mechanism to drive the device to a waiting place where the device can contact the human, the waiting place being determined based on contact information, which is a history of contact between the device and the human.
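The abstract above leaves open how a waiting place is derived from contact history. A minimal sketch, assuming contact history is recorded as (place, hour) tuples and that the most frequent past contact location is a reasonable waiting place (both are assumptions for illustration, not from the patent):

```python
from collections import Counter

def choose_waiting_place(contact_history):
    """Pick the place where past human contact occurred most often.

    contact_history: list of (place, hour) tuples, one per recorded contact.
    Returns the most frequent place, or None if there is no history yet.
    """
    if not contact_history:
        return None
    counts = Counter(place for place, _hour in contact_history)
    return counts.most_common(1)[0][0]

history = [("kitchen", 8), ("living_room", 20), ("kitchen", 19), ("kitchen", 7)]
print(choose_waiting_place(history))  # kitchen
```

A real device would likely also condition on the time of day (the hour field kept above), so that the waiting place tracks the human's daily routine.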
Method for controlling robot based on brain-computer interface and apparatus for controlling meal assistance robot thereof
The present disclosure relates to technology that controls a robot based on a brain-computer interface. A robot control method acquires a first biosignal from a user indicating an intention to start operation of the robot; provides the user with visual stimulation at differently set signal cycles corresponding to a plurality of objects on which the robot can execute motions; acquires a second biosignal evoked by the visual stimulation to identify the object selected by the user; and acquires a third biosignal corresponding to a motion for the identified object, inducing the robot to execute that motion.
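The key step of identifying the selected object from the second biosignal can be sketched as frequency matching: each object flickers at its own cycle, and the evoked signal's dominant frequency is matched to the nearest stimulus. The object names, frequencies, and nearest-frequency rule are illustrative assumptions, not the patent's actual method:

```python
def identify_selected_object(evoked_freq_hz, stimulus_freqs):
    """Match the dominant frequency of the evoked biosignal to the object
    whose visual stimulus flickers at the nearest frequency (assumed rule)."""
    return min(stimulus_freqs, key=lambda obj: abs(stimulus_freqs[obj] - evoked_freq_hz))

# Hypothetical stimulation cycles for a meal-assistance scenario.
stimuli = {"cup": 8.0, "spoon": 10.0, "bowl": 12.0}
print(identify_selected_object(9.8, stimuli))  # spoon
```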
Surgical system user interface using cooperatively-controlled robot
According to some embodiments of the present invention, a cooperatively controlled robot includes a robotic actuator assembly comprising a tool holder and a force sensor, a control system adapted to communicate with the robotic actuator assembly and the force sensor, and an output system in communication with the control system. The tool holder is configured to receive a tool to be manipulated by a user. The control system is configured to receive an instruction from a user to switch from a robot control mode into a user interface control mode. The force sensor is configured to measure at least one of a force and a torque applied to the tool, and the control system is configured to receive an indication of the at least one of a force and a torque applied to the tool and manipulate the output system based on the indication.
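The mode switch described above, where the same tool-mounted force sensor drives either the robot or the user interface, can be sketched as a dispatcher. The mode names, force representation, and gain are hypothetical:

```python
def handle_tool_input(mode, force_xy, gain=0.5):
    """In user-interface control mode, reinterpret the force applied to the
    tool as a cursor displacement; otherwise treat it as a robot motion
    command (illustrative sketch, not the patented control law)."""
    fx, fy = force_xy
    if mode == "ui_control":
        return ("cursor_move", (fx * gain, fy * gain))
    return ("robot_move", (fx, fy))

print(handle_tool_input("ui_control", (2.0, -4.0)))  # ('cursor_move', (1.0, -2.0))
print(handle_tool_input("robot_control", (2.0, -4.0)))  # ('robot_move', (2.0, -4.0))
```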
System and method for semantic processing of natural language commands
A system, method, and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
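The tag-then-parse pipeline can be sketched with a toy lexicon. The abstract's real system is statistical; the dictionary lexicon, flat frame (standing in for the semantic tree), and RCL-like output format here are all illustrative assumptions:

```python
# Hypothetical tag lexicon; the patented system learns tags statistically.
TAGS = {"move": "ACTION", "the": "DET", "red": "COLOR",
        "block": "OBJECT", "left": "DIRECTION"}

def tag(sentence):
    """Stage 1: assign a semantic tag to each word."""
    return [(w, TAGS.get(w, "UNK")) for w in sentence.lower().split()]

def parse(tagged):
    """Stage 2: collapse the tag sequence into a semantic frame
    (a flat stand-in for the semantic tree)."""
    frame = {}
    for word, t in tagged:
        if t == "ACTION":
            frame["action"] = word
        elif t == "COLOR":
            frame.setdefault("object_attrs", []).append(word)
        elif t == "OBJECT":
            frame["object"] = word
        elif t == "DIRECTION":
            frame["direction"] = word
    return frame

def to_command(frame):
    """Stage 3: emit an RCL-like command string (format assumed)."""
    obj = " ".join(frame.get("object_attrs", []) + [frame["object"]])
    return f"(event: (action: {frame['action']}) (entity: {obj}) (destination: {frame.get('direction', '')}))"

frame = parse(tag("move the red block left"))
print(to_command(frame))
```

In the real system each stage would return several scored hypotheses, and spatial validation (checking the command against the actual scene) would re-rank them before one is selected.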
MASSAGE SYSTEM AND MASSAGE DEVICE
Provided is a massage system for a massage facility in which a robot arm is provided and a massage is performed on a user by the robot arm. The system includes a control unit configured to cause the robot arm to perform a massage operation in response to a program, and a setting unit configured to receive, from an operator, a setting operation for a setting position used in the massage operation. The control unit causes the robot arm to perform the massage operation on a treatment position based on the setting position.
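The relationship between the operator's setting position and the derived treatment positions can be sketched as applying program-defined offsets around the marked point. The 2-D coordinates and offset scheme are hypothetical, for illustration only:

```python
def treatment_positions(setting_position, program_offsets):
    """Derive treatment positions for a massage program from the operator's
    setting position by applying per-step offsets (assumed scheme)."""
    x0, y0 = setting_position
    return [(x0 + dx, y0 + dy) for dx, dy in program_offsets]

# Operator marks one setting position; the program strokes downward from it.
print(treatment_positions((100, 200), [(0, 0), (0, 30), (0, 60)]))
# [(100, 200), (100, 230), (100, 260)]
```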
ARTIFICIAL INTELLIGENCE ROBOT FOR PROVIDING VOICE RECOGNITION FUNCTION AND METHOD OF OPERATING THE SAME
An artificial intelligence robot for providing a voice recognition service includes a memory configured to store voice identification information, a microphone configured to receive a voice command, and a processor configured to extract voice identification information from a wake-up command that is included in the voice command and used to activate the voice recognition service, and to keep the voice recognition function in a deactivated state when the extracted voice identification information does not match the voice identification information stored in the memory.
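The gating logic — activate only when the wake-up command's voice signature matches the enrolled one — can be sketched as follows. Treating the voice identification as a comparable token is a simplification; a real system would compare speaker embeddings under a threshold:

```python
class VoiceRecognitionRobot:
    """Sketch of wake-up gating by voice identification (illustrative)."""

    def __init__(self, enrolled_voice_id):
        self.enrolled_voice_id = enrolled_voice_id
        self.service_active = False

    def on_wake_command(self, extracted_voice_id):
        # Activate the service only if the extracted voice identification
        # matches the enrolled one; otherwise it stays deactivated.
        self.service_active = (extracted_voice_id == self.enrolled_voice_id)
        return self.service_active

robot = VoiceRecognitionRobot("owner_voice")
print(robot.on_wake_command("stranger_voice"))  # False
print(robot.on_wake_command("owner_voice"))    # True
```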
ARTIFICIAL INTELLIGENCE ROBOT AND METHOD OF OPERATING THE SAME
An artificial intelligence robot includes a camera configured to acquire image data, a memory configured to store an object recognition model used to recognize objects from the image data, and a processor configured to acquire a speech command, determine whether the intention of the acquired speech command is object search, recognize an object from the image data based on the object recognition model while traveling when the intention is object search, and output a notification indicating that the object has been recognized when the recognized object is the intended object.
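The control flow — check the intent, then scan recognition results while traveling until the intended object appears — can be sketched as below. The intent label, detection format, and notification strings are assumptions:

```python
def handle_speech_command(intent, target_object, detections_per_step):
    """If the command intent is object search, scan the recognition results
    produced at each travel step and notify when the target is recognized.

    detections_per_step: list of object-name lists, one per travel step.
    """
    if intent != "object_search":
        return None  # other intents are handled elsewhere
    for step, objects in enumerate(detections_per_step):
        if target_object in objects:
            return f"Found {target_object} at step {step}"
    return f"{target_object} not found"

detections = [["chair"], ["chair", "keys"], ["table"]]
print(handle_speech_command("object_search", "keys", detections))  # Found keys at step 1
```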
ARTIFICIAL INTELLIGENCE (AI) ROBOT AND CONTROL METHOD THEREOF
Disclosed is a method of controlling a robot, comprising: switching to a surrounding-environment concentration mode according to a sound of the surrounding environment while in a display-off mode; searching for a user in the surrounding-environment concentration mode and switching to a user concentration mode when the user is found; switching from the user concentration mode to a user conversation mode according to a sound received from the user; and entering the display-off mode again, after passing through a play-alone mode, when the user is not found in the surrounding-environment concentration mode. Accordingly, by defining these various modes, the robot can operate in an optimal mode according to changes in the surrounding environment.
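The mode transitions described above form a small state machine, sketched here as a transition table. The mode and event names mirror the abstract but are paraphrased identifiers, not the patent's terminology:

```python
def next_mode(mode, event):
    """Return the robot's next mode for a (mode, event) pair; unknown
    pairs leave the mode unchanged (an assumed default)."""
    transitions = {
        ("display_off", "ambient_sound"): "environment_concentration",
        ("environment_concentration", "user_found"): "user_concentration",
        ("environment_concentration", "user_not_found"): "play_alone",
        ("user_concentration", "user_sound"): "user_conversation",
        ("play_alone", "done"): "display_off",
    }
    return transitions.get((mode, event), mode)

mode = "display_off"
for event in ["ambient_sound", "user_found", "user_sound"]:
    mode = next_mode(mode, event)
print(mode)  # user_conversation
```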
VERBAL-BASED FOCUS-OF-ATTENTION TASK MODEL ENCODER
Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task, allowing the robot to focus on the target object and the movements associated with it within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize "where and when" to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
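The parse-then-filter idea can be sketched as extracting a (task, target) pair from the verbal cue and keeping only the demonstration frames where the target object appears. The cue grammar (last word is the object name) and frame format are crude assumptions standing in for the model's learned parsing and spatio-temporal filtering:

```python
def focus_of_attention(cue, demo_frames, known_tasks=("pick", "place", "pour")):
    """Parse a verbal cue into (task, target) and keep only the demonstration
    frames containing the target object (a crude spatio-temporal filter)."""
    words = cue.lower().split()
    task = next((w for w in words if w in known_tasks), None)
    target = words[-1]  # assumption: the cue ends with the object name
    focused = [frame for frame in demo_frames if target in frame["objects"]]
    return task, target, focused

frames = [{"t": 0, "objects": ["cup", "box"]},
          {"t": 1, "objects": ["box"]},
          {"t": 2, "objects": ["cup"]}]
task, target, focused = focus_of_attention("pick up the cup", frames)
print(task, target, len(focused))  # pick cup 2
```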
HEALTHCARE ROBOT AND CONTROL METHOD THEREFOR
A robot is disclosed. The robot comprises: a body including a transport means; a head disposed on the body and including a plurality of sensors; and a processor which controls the head on the basis of at least one sensor among the plurality of sensors and acquires information related to a user's health through the sensor of the controlled head.