Patent classifications
B25J13/003
ROBOT AND CONTROL METHOD THEREOF
Disclosed is a robot. The robot comprises: a driving unit including a motor, and a processor configured to: in response to receiving a command to perform a task, determine a driving level of the robot based on surrounding environment information of the robot; determine, based on information about a maximum allowable torque and a maximum allowable speed preset for each driving level, the maximum allowable torque and maximum allowable speed corresponding to the driving level of the robot; calculate a maximum allowable acceleration of the robot based on the maximum allowable torque; control the driving unit so that the moving speed of the robot reaches the maximum allowable speed based on the maximum allowable acceleration; and control the robot to perform the task while the robot is moving at the maximum allowable speed.
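A minimal Python sketch of the control flow this abstract describes. The driving-level table, point-mass dynamics, mass, and wheel radius are illustrative assumptions, not values from the patent:

```python
# Hypothetical driving-level table: level -> (max allowable torque [N*m],
# max allowable speed [m/s]). Values are invented for illustration.
DRIVING_LEVELS = {
    "crowded": (2.0, 0.5),
    "open":    (6.0, 1.5),
}

def max_allowable_acceleration(torque, mass=10.0, wheel_radius=0.1):
    # Simplified point-mass model: drive force F = torque / r, then a = F / m.
    return torque / wheel_radius / mass

def speed_profile(level, dt=0.1):
    # Ramp the moving speed up to the level's max allowable speed,
    # limited by the max allowable acceleration derived from torque.
    torque, v_max = DRIVING_LEVELS[level]
    a_max = max_allowable_acceleration(torque)
    v, t, profile = 0.0, 0.0, []
    while v < v_max:
        v = min(v + a_max * dt, v_max)
        t += dt
        profile.append((round(t, 2), round(v, 3)))
    return profile
```

The robot would then execute the task once the profile reaches the speed cap for its current driving level.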
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
There is provided an information processing apparatus and an information processing method that can provide more useful information for an action plan of an autonomous mobile body, the information processing apparatus including an action recommendation unit configured to present a recommended action recommended to an autonomous mobile body, to the autonomous mobile body that performs an action plan based on situation estimation. The action recommendation unit determines the recommended action on the basis of an action history collected from a plurality of the autonomous mobile bodies, and on the basis of a situation summary received from a target autonomous mobile body that is a target of recommendation. The information processing method includes presenting, by a processor, a recommended action recommended to an autonomous mobile body, to the autonomous mobile body that performs an action plan based on situation estimation.
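A toy sketch of the recommendation step above: pick the action that other autonomous mobile bodies most often took in a situation matching the target body's situation summary. The situation keys and action names are invented; the patent does not specify this particular selection rule:

```python
from collections import Counter

def recommend_action(action_history, situation_summary):
    # action_history: (situation, action) pairs collected from a fleet
    # of autonomous mobile bodies.
    matching = [action for situation, action in action_history
                if situation == situation_summary]
    if not matching:
        return None  # no precedent for this situation
    # Recommend the most common action taken in the matching situation.
    return Counter(matching).most_common(1)[0][0]
```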
SYSTEMS AND METHODS FOR SCHEDULING MOBILE ROBOT MISSIONS
- Shannon Amelia Case ,
- Alex Wong ,
- Josua Gonzales-Neal ,
- David C. Palafox ,
- James Jackson ,
- Nick Cuneo ,
- Josie-Dee Seagren ,
- Victoria Liceaga ,
- Craig Michael Butterworth ,
- Orjeta Taka ,
- Christopher V. Jones ,
- Steven J. Baron ,
- David M. McSweeney ,
- Kenrick E. Drew ,
- Ryan Schneider ,
- Isaac Vandermeulen ,
- Michael Foster
Described herein are systems, devices, and methods for scheduling and controlling a mobile robot based on user location, user behavior, or other contextual information. In an example, a mobile cleaning robot comprises a drive system configured to move the mobile cleaning robot about an environment in a user's residence, and a controller circuit configured to receive an indication of a user entering or exiting a pre-defined geographical zone with respect to a location of the user's residence. Such indication may be detected using location and geofencing services of a mobile device. Based on the indication of the user entering or exiting the geofence, the controller circuit may generate a motion control signal to navigate the mobile cleaning robot to conduct a mission in the environment.
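A hedged sketch of the geofence trigger: a circular zone around the residence, with a mission command issued on an exit transition. The coordinates, radius, and command names are made up for illustration; a real implementation would use the mobile device's location and geofencing services:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mission_command(prev_inside, user_pos, home, radius_m=200.0):
    # Detect entry/exit transitions of the geofence around `home`.
    inside = haversine_m(*user_pos, *home) <= radius_m
    if prev_inside and not inside:
        return "start_mission", inside  # user left: clean while away
    if not prev_inside and inside:
        return "pause_mission", inside  # user returned
    return None, inside
```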
AUTOMATED MANIPULATION OF OBJECTS USING A VISION-BASED METHOD FOR DETERMINING COLLISION-FREE MOTION PLANNING
In accordance with various aspects and embodiments of the invention, a system and method are provided for manipulation and movement of objects. In accordance with one aspect of the invention, the system includes a robotic arm that grabs and manipulates objects along a collision-free path. The objects can be in a randomly arranged pile or in an orderly arranged location. In accordance with various aspects and embodiments of the invention, the objects are moved from an orderly location to a storage location.
ROBOT SWARM
There are provided robot swarms and methods of operation thereof. Such a swarm may include two or more robots each having a microphone, a speaker, and a communication terminal for sending or receiving an electromagnetic signal to or from a communication partner to effect electromagnetic communication with the communication partner. Upon detection of a disruption to the electromagnetic communication, a given robot of the two or more robots may switch from electromagnetic communication to acoustic communication by exchanging an acoustic signal between one of the speaker and the microphone of the given robot and one of a corresponding microphone and a corresponding speaker of a corresponding communication partner.
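The fallback behavior above can be sketched as a small state machine: a robot that detects a disrupted electromagnetic (radio) link switches its channel to the speaker/microphone pair. The class, channel names, and the boolean link-health check are illustrative assumptions:

```python
class SwarmRobot:
    def __init__(self, name):
        self.name = name
        self.channel = "electromagnetic"  # preferred link

    def send(self, message, radio_ok):
        # On detecting a disruption, fall back to the acoustic link
        # (speaker on the sender, microphone on the partner).
        if not radio_ok and self.channel == "electromagnetic":
            self.channel = "acoustic"
        return (self.channel, message)
```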
Verbal-based focus-of-attention task model encoder
Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input, parses the input to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
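A minimal sketch of the two FOA steps named above: parse an instruction into a task and a target object name, then keep only the demonstration frames involving that object. The task lexicon and frame format are assumptions for illustration, not the patent's actual statistical model:

```python
# Tiny illustrative task vocabulary.
TASKS = {"pick up", "wipe", "stack"}

def parse_instruction(text):
    # Recognize at least a task and a target object name from the input.
    text = text.lower()
    for task in TASKS:
        if text.startswith(task):
            target = text[len(task):].strip().removeprefix("the ").strip()
            return task, target
    return None, None

def filter_demonstration(frames, target):
    # Spatio-temporal filtering, reduced to: keep frames where the
    # target object appears. frames: dicts like {"t": 0, "objects": [...]}.
    return [f for f in frames if target in f["objects"]]
```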
Voice-Activated, Compact, and Portable Robotic System
In a joint movement device (100) for selective flexion and extension of a joint (20), a tendon (120) is disposed adjacent to the first and second joint members. A tendon securing device (112) is secured to the second joint member (12), the tendon (120) being secured to the tendon securing device (112). At least one phalange ring (110) is secured to a joint member and includes a tending routing mechanism (113) configured to route the tendon through the phalange ring (110). An actuator (140) is coupled to the tendon (120) and pulls the tendon (120) inwardly to cause the joint (20) to flex. An elastic member (130) is coupled to the phalange ring (110) and tendon securing device (112) and applies an extension force thereto, thereby causing the joint (20) to extend when the actuator (140) releases the tendon (120).
Method for controlling robot and robot device
A method for controlling a robot includes: establishing a reference coordinate system; capturing a user's gaze direction toward an indicated target; acquiring a sight line angle of the robot; acquiring a position of the robot; acquiring a linear distance between the robot and the user; calculating in real time a gaze plane in the user's gaze direction relative to the reference coordinate system based on the sight line angle of the robot, the position of the robot and the linear distance between the robot and the user; and smoothly scanning, by the robot, the gaze plane to search for the indicated target in the user's gaze direction.
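A simplified 2-D sketch of the geometry: recover the user's position from the robot's position, sight line angle, and the linear distance, then express the gaze direction as a line (the 2-D analogue of the gaze plane) and sample scan points along it. Frames, angles, and step sizes are illustrative assumptions:

```python
import math

def user_position(robot_pos, sight_angle_rad, distance):
    # Locate the user in the reference frame from the robot's
    # position, sight line angle, and the robot-user linear distance.
    x, y = robot_pos
    return (x + distance * math.cos(sight_angle_rad),
            y + distance * math.sin(sight_angle_rad))

def gaze_line(user_pos, gaze_angle_rad):
    # The user's gaze as (point, unit direction) in the reference frame.
    return user_pos, (math.cos(gaze_angle_rad), math.sin(gaze_angle_rad))

def scan_points(line, step=0.5, n=4):
    # Sample points along the gaze line for the robot to scan smoothly.
    (x0, y0), (dx, dy) = line
    return [(round(x0 + i * step * dx, 3), round(y0 + i * step * dy, 3))
            for i in range(1, n + 1)]
```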
System and method for semantic processing of natural language commands
A system, method and computer-readable storage devices are provided for processing natural language commands, such as commands to a robotic arm, using a Tag & Parse approach to semantic parsing. The system first assigns semantic tags to each word in a sentence and then parses the tag sequence into a semantic tree. The system can use a statistical approach for tagging, parsing, and reference resolution. Each stage can produce multiple hypotheses, which are re-ranked using spatial validation. The system then selects the most likely hypothesis after spatial validation, and generates or outputs a command. In the case of a robotic arm, the command is output in Robot Control Language (RCL).
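A toy sketch of the Tag & Parse pipeline: tag each word, then group the tag sequence into a flat command structure. The tag lexicon and the output fields are invented; the patented system uses statistical taggers/parsers with spatial validation rather than this lookup table:

```python
# Illustrative word -> semantic-tag lexicon.
LEXICON = {
    "pick": "ACTION", "up": "ACTION", "move": "ACTION",
    "the": "DET", "red": "COLOR", "blue": "COLOR",
    "block": "OBJECT", "cube": "OBJECT",
}

def tag(sentence):
    # Step 1: assign a semantic tag to each word.
    return [(w, LEXICON.get(w, "UNK")) for w in sentence.lower().split()]

def parse_to_command(tags):
    # Step 2: collapse the tag sequence into a shallow command
    # (a stand-in for the semantic tree and RCL output).
    action = " ".join(w for w, t in tags if t == "ACTION")
    color = next((w for w, t in tags if t == "COLOR"), None)
    obj = next((w for w, t in tags if t == "OBJECT"), None)
    return {"action": action, "object": obj, "modifier": color}
```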
Robot and method for controlling the same
A robot and an operation method thereof are disclosed. The robot according to the present disclosure may include a sensor, a microphone, and a controller. The robot may execute an artificial intelligence (AI) algorithm and/or a machine learning algorithm, and may communicate with other electronic devices in a 5G communication environment. An embodiment may include detecting a movement of the robot to a location; detecting an obstacle within a predetermined range from the robot; estimating an occupation area of the obstacle in space; and identifying a sound signal received from the estimated occupation area of the obstacle from among a plurality of sound signals received by a plurality of microphones of the robot at the location.
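The last two steps above can be sketched in 2-D: estimate the obstacle's occupation area as an angular extent seen from the robot, then keep only the sound signals whose direction of arrival falls inside it. The point-cloud input and signal records are made-up examples, not the patent's representation:

```python
import math

def occupation_area_deg(robot_pos, obstacle_points):
    # Angular extent (degrees) of the obstacle as seen from the robot.
    angles = [math.degrees(math.atan2(py - robot_pos[1], px - robot_pos[0]))
              for px, py in obstacle_points]
    return min(angles), max(angles)

def signals_from_area(signals, area_deg):
    # signals: (direction_of_arrival_deg, payload) pairs from the mic array.
    lo, hi = area_deg
    return [s for s in signals if lo <= s[0] <= hi]
```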