G05D1/2285

Voice controlled material handling mobile robotic system

An AMU system includes an Autonomous Mobile Unit (AMU), base station, lanyard, and Warehouse Management System (WMS) configured to communicate with one another over a network. The AMU includes a microphone configured to receive verbal commands from an individual. The individual can also provide verbal commands through the base station, or through the lanyard when it is worn. The lanyard can further establish a geo-fence around the individual within which the AMU slows down to enhance safety.
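The geo-fence behavior can be sketched as a simple speed cap keyed to the distance between the AMU and the lanyard wearer. The radius and speed values below are illustrative assumptions, not figures from the abstract:

```python
import math

GEOFENCE_RADIUS_M = 3.0   # assumed geo-fence radius around the lanyard wearer
SLOW_SPEED_MPS = 0.3      # assumed reduced speed inside the geo-fence
NORMAL_SPEED_MPS = 1.5    # assumed normal travel speed

def amu_target_speed(amu_xy, lanyard_xy):
    """Return the AMU's speed cap given the lanyard wearer's position."""
    dist = math.dist(amu_xy, lanyard_xy)
    return SLOW_SPEED_MPS if dist <= GEOFENCE_RADIUS_M else NORMAL_SPEED_MPS
```

In practice the lanyard's position would come from a localization beacon rather than known coordinates; the distance check is the essential part.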

ROBOT AND METHOD FOR CONTROLLING THE ROBOT

A robot and a method for controlling the robot are provided. The robot includes: at least one sensor; a speaker; a microphone; a driver; at least one memory storing one or more instructions; and at least one processor configured to execute the one or more instructions, wherein the one or more instructions, when executed by the at least one processor, cause the robot to: generate a map comprising information regarding a plurality of objects based on sensing information obtained through the at least one sensor; generate ultrasonic waves toward each of the plurality of objects through the speaker; obtain reflectivity information regarding the plurality of objects based on reflected sounds received through the microphone, the reflected sounds being at least a portion of the ultrasonic waves reflected from each of the objects, and store the reflectivity information; based on receiving a user voice through the microphone, obtain information on an intensity of the user voice for each of a plurality of directions; obtain information on a plurality of candidate directions from which the user voice is received from among the plurality of directions based on the information on the intensity of the user voice for each of the plurality of directions; obtain priority order information for the plurality of candidate directions based on a position of the robot and the stored reflectivity information; and obtain information on a direction in which the user voice is uttered from among the plurality of candidate directions based on the priority order information.
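The direction-resolution flow in the claim — per-direction intensity, candidate filtering, then reflectivity-based prioritization — can be sketched as follows. The threshold and the scoring rule (treating a highly reflective surface in a direction as evidence of an echo rather than the speaker) are illustrative assumptions, not details from the claim:

```python
def resolve_voice_direction(intensity_by_dir, reflectivity_by_dir, threshold=0.5):
    """Pick the likeliest utterance direction from per-direction voice intensity.

    Directions whose intensity clears the threshold become candidates; candidates
    are then ranked so that high intensity raises priority and high stored
    reflectivity lowers it (a reflective surface suggests a reflection, not the
    true source). Returns None if no direction qualifies.
    """
    candidates = [d for d, i in intensity_by_dir.items() if i >= threshold]
    if not candidates:
        return None
    return max(candidates,
               key=lambda d: intensity_by_dir[d] - reflectivity_by_dir.get(d, 0.0))
```

For example, a direction with slightly lower intensity but much lower stored reflectivity can outrank the loudest direction, which is the point of the priority-order step.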

Stationary service appliance for a poly functional roaming device
12282342 · 2025-04-22

A method for autonomously servicing a first cleaning component of a battery-operated mobile device, including: inferring, with a processor of the mobile device, a value of at least one environmental characteristic based on sensor data captured by a sensor disposed on the mobile device; actuating, with a controller of the mobile device, a first actuator interacting with the first cleaning component to at least one of: turn on, turn off, reverse direction, or increase or decrease in speed, such that the first cleaning component engages or disengages based on the value of the at least one environmental characteristic or at least one user input received by an application of a smartphone paired with the mobile device; and dispensing, by a maintenance station, water from a clean water container of the maintenance station for washing the first cleaning component when the mobile device is docked at the maintenance station.

Mobile cleaning robot artificial intelligence for situational awareness

A mobile cleaning robot includes a cleaning head configured to clean a floor surface in an environment, and at least one camera having a field of view that extends above the floor surface. The at least one camera is configured to capture images that include portions of the environment above the floor surface. The robot includes a recognition module configured to recognize objects in the environment based on the images captured by the at least one camera, in which the recognition module is trained at least in part using the images captured by the at least one camera. The robot includes a storage device configured to store a map of the environment. The robot includes a control module configured to control the mobile cleaning robot to navigate in the environment using the map and operate the cleaning head to perform cleaning tasks taking into account the objects recognized by the recognition module.
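One way a control module can "take into account" recognized objects is to exclude map cells that contain objects the recognizer flags. The cell representation and the avoid-labels are illustrative assumptions, not from the abstract:

```python
def plan_cleaning_cells(map_cells, recognized_objects,
                        avoid_labels=frozenset({"cable", "pet_bowl"})):
    """Filter map cells for cleaning, skipping any cell that contains an
    object the camera-trained recognizer labeled as one to avoid."""
    blocked = {obj["cell"] for obj in recognized_objects
               if obj["label"] in avoid_labels}
    return [cell for cell in map_cells if cell not in blocked]
```

A fuller implementation would also adjust cleaning behavior per object class (e.g., edge-clean around furniture) rather than only skipping cells.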

AUTONOMOUS MOBILE ROBOT CONTROL METHOD AND APPARATUS, DEVICE AND READABLE STORAGE MEDIUM

An autonomous mobile robot control method and apparatus, a device, and a readable storage medium are provided. An autonomous mobile robot determines a sound source direction according to a voice signal from a user and detects moving objects around itself. The autonomous mobile robot determines, from the moving objects, a target object located in the sound source direction, determines a working area according to the target object, moves to the working area, and executes a task.

Autonomous versatile vehicle system

Provided is a system for robotic collaboration, including a first robotic chassis including a memory storing instructions that, when executed by a processor, effectuate operations including capturing data of an environment and data indicative of movement, generating a first map of the environment based on the captured data, and the first robotic chassis executing a first part of a task. The system also includes a second robot including a memory storing instructions that, when executed by a processor, effectuate operations including capturing data of the environment and the second robot executing a second part of the task after the first robotic chassis completes the first part of the task, the completion of the first part of the task being indicated by a signal received by the processor of the second robot.
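The handoff the claim describes — the second robot starting its part only after receiving a completion signal from the first — can be sketched with a shared channel standing in for whatever link the robots use. The queue and the signal payload are illustrative assumptions:

```python
import queue

def run_collaboration(first_part, second_part):
    """Run a two-part task: the first chassis executes its part and posts a
    completion signal; the second robot waits on that signal, then executes
    its part. Returns the second part's result."""
    signal = queue.Queue()  # stand-in for the inter-robot link

    def first_robot():
        first_part()
        signal.put("first_part_done")   # completion signal for the second robot

    def second_robot():
        # Block until the first part's completion signal arrives.
        assert signal.get(timeout=5) == "first_part_done"
        return second_part()

    first_robot()
    return second_robot()
```

In a real deployment the two operations would run on separate processors with the signal crossing a network, but the ordering constraint is the same.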