AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING DEVICE
20220355470 · 2022-11-10

The present technology relates to an autonomous mobile body, an information processing method, a program, and an information processing device that enable a user to experience disciplining the autonomous mobile body. The autonomous mobile body includes: a recognition unit that recognizes a given instruction; an action planning unit that plans an action on the basis of the recognized instruction; and an operation control unit that controls execution of the planned action. The action planning unit changes a detail of a predetermined action as the action instruction (the instruction for that predetermined action) is repeated. The present technology can be applied to a robot, for example.
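The repetition-driven planning described above can be sketched as a planner that counts how often the same action instruction has been given and changes a detail of the action accordingly. The class name, the counting rule, and the "reluctant/prompt/eager" levels are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of repetition-based action planning: the plan for a
# predetermined action changes as its instruction is repeated ("discipline").

class ActionPlanner:
    """Plans an action whose detail changes as the same instruction repeats."""

    def __init__(self):
        self._counts = {}  # instruction -> number of times it was given

    def plan(self, instruction: str) -> dict:
        # Count how often this action instruction has been repeated.
        n = self._counts.get(instruction, 0) + 1
        self._counts[instruction] = n
        # Change a detail of the predetermined action with repetition,
        # e.g. the robot responds more readily as it is "trained".
        detail = "reluctant" if n < 3 else "prompt" if n < 6 else "eager"
        return {"action": instruction, "repetitions": n, "detail": detail}


planner = ActionPlanner()
for _ in range(4):
    result = planner.plan("sit")
print(result)  # {'action': 'sit', 'repetitions': 4, 'detail': 'prompt'}
```

An operation control unit would then execute the planned action, with the `detail` field selecting, for instance, response latency or motion amplitude.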

METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR CONTROLLING A ROBOT
20220357721 · 2022-11-10

A method for controlling a robot is provided. The method includes the steps of: acquiring at least one of sound information and action information for a robot from a user in a serving place; determining identification information on the user on the basis of at least one of the sound information and the action information; and determining an operation to be performed by the robot on the basis of the identification information.
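The three steps above can be sketched as a small pipeline: match sound and/or action information against known user profiles, then pick an operation from that identity. The voiceprint/gesture matching rule and the preference lookup are assumptions for illustration only.

```python
# Illustrative sketch of the serving-robot control flow described above.

def identify_user(sound_info, action_info, known_users):
    """Determine identification info on the basis of sound and/or action information."""
    for user_id, profile in known_users.items():
        if sound_info and sound_info == profile.get("voiceprint"):
            return user_id
        if action_info and action_info == profile.get("gesture"):
            return user_id
    return None  # unknown user

def determine_operation(user_id, preferences):
    """Determine the operation the robot performs, based on the identification."""
    if user_id is None:
        return "ask_for_order"
    return preferences.get(user_id, "greet")

users = {"u1": {"voiceprint": "vp-42", "gesture": "wave"}}
prefs = {"u1": "serve_usual_drink"}
uid = identify_user("vp-42", None, users)
print(determine_operation(uid, prefs))  # serve_usual_drink
```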

Handling assembly comprising a handling device for carrying out at least one work step, method, and computer program

A handling assembly having a handling device for carrying out at least one working step with and/or on a workpiece in a working region of the handling device, wherein stations are situated in the working region. The assembly includes at least one monitoring sensor for optically monitoring the working region and providing the result as monitoring data, and a localization module designed to recognize the stations and to determine a station position for each of the stations.

System and method for piece-picking or put-away with a mobile manipulation robot

A method and system for piece-picking or piece put-away within a logistics facility. The system includes a central server and at least one mobile manipulation robot. The central server is configured to communicate with the robots to send and receive piece-picking data which includes a unique identification for each piece to be picked, a location within the logistics facility of the pieces to be picked, and a route for the robot to take within the logistics facility. The robots can then autonomously navigate and position themselves within the logistics facility by recognition of landmarks by at least one of a plurality of sensors. The sensors also provide signals related to detection, identification, and location of a piece to be picked or put-away, and processors on the robots analyze the sensor information to generate movements of a unique articulated arm and end effector on the robot to pick or put-away the piece.
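The piece-picking data the central server sends (unique piece identification, location, route) can be sketched as a small record, with a naive waypoint lookup standing in for the robot's landmark-based navigation. The field names and the route representation are assumptions, not taken from the patent.

```python
# Minimal sketch of the piece-picking data exchanged with the central server.

from dataclasses import dataclass, field

@dataclass
class PickOrder:
    piece_id: str                  # unique identification for the piece
    location: tuple                # position of the piece within the facility
    route: list = field(default_factory=list)  # waypoints for the robot

def next_waypoint(order: PickOrder, position):
    """Return the next waypoint after the robot's current position."""
    if position in order.route:
        i = order.route.index(position)
        if i + 1 < len(order.route):
            return order.route[i + 1]
    return order.location  # off-route or route exhausted: head to the piece

order = PickOrder("SKU-123", (4, 7), route=[(0, 0), (2, 3), (4, 7)])
print(next_waypoint(order, (0, 0)))  # (2, 3)
```

In the described system the robot would localize itself against landmarks via its sensors rather than trust an exact `position`, and its processors would drive the articulated arm and end effector once the piece is detected.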

VERBAL-BASED FOCUS-OF-ATTENTION TASK MODEL ENCODER

Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
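The FOA idea above — parse an utterance for a task and a target object name, then restrict attention to demonstration frames containing that target — can be sketched as follows. The task vocabulary, the string-based parser, and the frame format are assumptions for illustration; a real system would use a learned parser and perceptual object detection.

```python
# Hedged sketch of verbal-based Focus-of-Attention (FOA) filtering.

TASKS = {"pick up", "push", "stack"}  # assumed toy task vocabulary

def parse_cue(utterance: str):
    """Recognize at least a task and a target object name in the input."""
    for task in TASKS:
        if task in utterance:
            target = utterance.split(task, 1)[1].strip(" .").removeprefix("the ")
            return task, target
    return None, None

def focus_of_attention(frames, target):
    """Spatio-temporally filter a demonstration to frames showing the target."""
    return [f for f in frames if target in f["objects"]]

task, target = parse_cue("pick up the red cup")
demo = [{"t": 0, "objects": ["table"]},
        {"t": 1, "objects": ["table", "red cup"]},
        {"t": 2, "objects": ["red cup"]}]
print(task, target, [f["t"] for f in focus_of_attention(demo, target)])
# pick up red cup [1, 2]
```

The filtered frames tell the robot "where and when" to pay attention, so clutter outside the target's neighborhood is ignored during learning.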

System and method for piece picking or put-away with a mobile manipulation robot

A method and system for picking or put-away within a logistics facility. The system includes a central server and at least one mobile manipulation robot. The central server is configured to communicate with the robots to send and receive picking data which includes a unique identification for each item to be picked, a location within the logistics facility of the items to be picked, and a route for the robot to take within the logistics facility. The robots can then autonomously navigate and position themselves within the logistics facility by recognition of landmarks by at least one of a plurality of sensors. The sensors also provide signals related to detection, identification, and location of an item to be picked or put-away, and processors on the robots analyze the sensor information to generate movements of a unique articulated arm and end effector on the robot to pick or put-away the item.

Conversational systems and methods for robotic task identification using natural language

This disclosure relates generally to human-robot interaction (HRI) to enable a robot to execute tasks that are conveyed in natural language. State-of-the-art approaches are unable to capture the human intent, implicit assumptions, and ambiguities present in natural language well enough to enable effective robotic task identification. The present disclosure provides accurate task identification using classifiers trained to understand linguistic and semantic variations. A mixed-initiative dialogue is employed to resolve ambiguities and address the dynamic nature of a typical conversation. In accordance with the present disclosure, the dialogues are minimal and directed to the goal so that the human experience is not degraded. The method of the present disclosure is also implemented in a context-sensitive manner to make the task identification effective.
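The mixed-initiative behavior described above — classify the utterance, and only when the intent is ambiguous ask a short, goal-directed question — can be sketched as below. The confidence scores, threshold, and margin rule are illustrative assumptions; the patent's classifiers are trained models, not the toy lambda used here.

```python
# Sketch of classifier-based task identification with minimal dialogue.

def identify_task(utterance, classify, threshold=0.75, margin=0.1):
    """Return a task, or a clarifying question when the intent is ambiguous."""
    scores = classify(utterance)            # task -> confidence
    best = max(scores, key=scores.get)
    runner_up = sorted(scores.values())[-2] if len(scores) > 1 else 0.0
    if scores[best] < threshold or scores[best] - runner_up < margin:
        # Mixed-initiative: a single directed question instead of guessing.
        options = sorted(scores, key=scores.get, reverse=True)[:2]
        return {"dialogue": f"Did you mean '{options[0]}' or '{options[1]}'?"}
    return {"task": best}

ambiguous_classifier = lambda u: {"fetch": 0.48, "navigate": 0.45}
print(identify_task("get me near the box", ambiguous_classifier))
# asks a clarifying question rather than committing to a wrong task
```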

Method for waking up robot and robot thereof
11276402 · 2022-03-15

A method for waking up a robot includes: acquiring sight range information when a voice command issuer issues a voice command; if the sight range information of the voice command issuer when issuing the voice command is acquired, determining, based on the sight range information, whether the voice command issuer gazes at the robot when the voice command is issued; and determining that the robot is called if the voice command issuer gazes at the robot.
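The wake-up decision can be sketched with a simple geometric stand-in for "sight range": treat the issuer's gaze as a cone and check whether the robot falls inside it. The cone model, field-of-view width, and 2-D coordinates are assumed simplifications of whatever sight range information the method actually acquires.

```python
# Sketch of the gaze-gated wake-up decision described above.

import math

def gazes_at_robot(issuer_pos, gaze_dir_deg, robot_pos, fov_deg=30.0):
    """True if the robot lies within the issuer's sight range (a cone)."""
    dx = robot_pos[0] - issuer_pos[0]
    dy = robot_pos[1] - issuer_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular difference wrapped to [-180, 180).
    diff = (bearing - gaze_dir_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def robot_is_called(sight_info, robot_pos):
    """Determine that the robot is called if the issuer gazes at it."""
    if sight_info is None:          # no sight range information acquired
        return False
    return gazes_at_robot(sight_info["pos"], sight_info["dir"], robot_pos)

print(robot_is_called({"pos": (0, 0), "dir": 45.0}, (1, 1)))   # True
print(robot_is_called({"pos": (0, 0), "dir": 180.0}, (1, 1)))  # False
```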

Voice control of components of a facility

Devices, methods, and systems for voice control of components of a facility are described herein. One computing device includes a memory and a processor configured to execute executable instructions stored in the memory to receive a voice command or voice query from a user, determine location context information associated with the computing device, and determine which component or components of the facility are associated with the voice command or voice query based, at least in part, on the location context information associated with the computing device.
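The location-based resolution step can be sketched as a lookup that scopes candidate components to the device's location before matching the command. The room names, component identifiers, and keyword matching are assumptions for demonstration; a real system would use proper intent parsing.

```python
# Illustrative sketch: resolve which facility component(s) a voice command
# targets, based at least in part on the device's location context.

COMPONENTS = {
    "lobby": ["lobby_lights", "lobby_hvac"],
    "lab_2": ["lab_2_lights", "fume_hood_2"],
}

def resolve_components(command: str, device_location: str):
    """Determine the component(s) associated with the command and location."""
    candidates = COMPONENTS.get(device_location, [])
    if "light" in command:
        return [c for c in candidates if "lights" in c]
    return candidates  # no keyword match: all components at this location

print(resolve_components("turn off the lights", "lab_2"))  # ['lab_2_lights']
```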