Patent classifications
G05D1/2285
MOBILE CLEANING ROBOT ARTIFICIAL INTELLIGENCE FOR SITUATIONAL AWARENESS
A mobile cleaning robot includes a cleaning head configured to clean a floor surface in an environment, and at least one camera having a field of view that extends above the floor surface. The at least one camera is configured to capture images that include portions of the environment above the floor surface. The robot includes a recognition module configured to recognize objects in the environment based on the images captured by the at least one camera, in which the recognition module is trained at least in part using the images captured by the at least one camera. The robot includes a storage device configured to store a map of the environment. The robot includes a control module configured to control the mobile cleaning robot to navigate in the environment using the map and operate the cleaning head to perform cleaning tasks taking into account the objects recognized by the recognition module.
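For illustration only, a minimal Python sketch of the perceive-plan-clean loop the abstract describes: camera frames feed a recognition step, and navigation and cleaning-head control take the recognized objects into account alongside a stored map. All class names, object labels, and thresholds below are hypothetical assumptions for the example, not taken from the patent.

```python
# Illustrative sketch only; names and structure are hypothetical, not the patented design.
from dataclasses import dataclass


@dataclass
class RecognizedObject:
    label: str                      # e.g. "sock", "charging cable"
    position: tuple                 # (x, y) in map coordinates


class MobileCleaningRobot:
    def __init__(self):
        self.env_map = {}                             # stored map of the environment
        self.keep_out = {"charging cable", "pet bowl"}

    def capture_frame(self):
        """Camera with a field of view extending above the floor surface."""
        return None                                   # placeholder image frame

    def recognize(self, frame):
        """Recognition module, trained in part on this camera's own images."""
        return [RecognizedObject("charging cable", (1.2, 0.4))]   # stub detection

    def plan(self, hazards):
        """Pick the next waypoint from the map, steering clear of hazards."""
        return (0.0, 0.0) if hazards else (2.0, 1.0)

    def step(self):
        objects = self.recognize(self.capture_frame())
        hazards = [o for o in objects if o.label in self.keep_out]
        waypoint = self.plan(hazards)
        cleaning_on = not hazards                     # disengage head near recognized hazards
        return waypoint, cleaning_on


print(MobileCleaningRobot().step())
```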
Mobile object control device, mobile object control method, training device, training method, generation device, and storage medium
A mobile object control device including a storage medium and a processor connected to the storage medium is presented. The processor acquires a photographed image, which is obtained by photographing surroundings of a mobile object by a camera mounted on the mobile object, and an input instruction sentence, which is input by a user of the mobile object; detects a stop position of the mobile object corresponding to the input instruction sentence in the photographed image by inputting at least the photographed image and the input instruction sentence into a trained model including a pre-trained visual-language model, the trained model being trained so as to receive input of at least an image and an instruction sentence to output a stop position of the mobile object corresponding to the instruction sentence in the image; and causes the mobile object to travel to the stop position.
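Purely as an illustration of the described pipeline (image plus instruction sentence in, stop position out, then travel to it), here is a toy Python sketch. The StubVLM class, the pixel-to-ground projection, and all method names are assumptions made for the example, not the patented model or API.

```python
# Illustrative sketch only; all names are hypothetical, not the patented design.
class StubVLM:
    """Stands in for a trained model built on a pre-trained visual-language model."""
    def predict_stop_position(self, image, instruction):
        return (320, 410)                      # (u, v) pixel of the stop position


class MobileObject:
    def image_to_ground(self, u, v):
        return (u * 0.01, v * 0.01)            # toy pixel-to-ground projection

    def travel_to(self, target):
        print(f"traveling to {target}")


def drive_to_instructed_stop(image, instruction, model, vehicle):
    u, v = model.predict_stop_position(image, instruction)   # stop position in the image
    target = vehicle.image_to_ground(u, v)
    vehicle.travel_to(target)
    return target


drive_to_instructed_stop(image=None, instruction="pull over by the blue mailbox",
                         model=StubVLM(), vehicle=MobileObject())
```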
Mobile object control device, mobile object control method, and storage medium
A mobile object control device acquires a captured image, obtained by capturing an image of surroundings of a mobile object by a camera mounted on the mobile object, and an input directive sentence input by a user of the mobile object; inputs the captured image and the input directive sentence into a learned model trained, when an image and a directive sentence are input, to output one or more objects corresponding to the directive sentence in the image together with corresponding degrees of certainty, thereby detecting the one or more objects and the corresponding degrees of certainty; sequentially selects the one or more objects based on at least the degrees of certainty and makes an inquiry to the user of the mobile object; and causes the mobile object to travel to a position indicated in the input directive sentence, which is specified based on a result of the inquiry.
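The certainty-ranked inquiry loop can be illustrated with a short Python sketch, assuming the learned model returns (object, position, certainty) tuples for one directive sentence; the function names and the yes/no dialogue are hypothetical stand-ins, not the patented interface.

```python
# Illustrative sketch only; detector, dialogue, and drive interfaces are hypothetical.
def resolve_and_go(detections, ask_user, travel_to):
    """Confirm detections with the user in order of certainty, then travel.

    detections: list of (object_name, position, certainty) tuples.
    ask_user:   poses a yes/no question to the user of the mobile object.
    travel_to:  moves the mobile object to the confirmed position.
    """
    for name, position, certainty in sorted(detections, key=lambda d: d[2], reverse=True):
        if ask_user(f"Did you mean the {name}? (certainty {certainty:.2f})"):
            travel_to(position)
            return position
    return None                                  # no candidate confirmed


# Toy usage with stub callbacks.
cands = [("red mailbox", (3.0, 1.5), 0.92), ("fire hydrant", (2.0, 4.0), 0.61)]
print(resolve_and_go(cands, ask_user=lambda q: True, travel_to=lambda p: print("go", p)))
```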
NAVIGATION USER INTERFACES
The present disclosure generally relates to navigation user interfaces.
STATIONARY SERVICE APPLIANCE FOR A POLY FUNCTIONAL ROAMING DEVICE
A method for autonomously servicing a first cleaning component of a battery-operated mobile device, including: inferring, with a processor of the mobile device, a value of at least one environmental characteristic based on sensor data captured by a sensor disposed on the mobile device; actuating, with a controller of the mobile device, a first actuator interacting with the first cleaning component to at least one of: turn on, turn off, reverse direction, and increase or decrease in speed such that the first cleaning component engages or disengages based on the value of at least one environmental characteristic or at least one user input received by an application of a smartphone paired with the mobile device; and dispensing, by a maintenance station, water from a clean water container of the maintenance station for washing the first cleaning component when the mobile device is docked at the maintenance station.
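As a rough Python sketch of the described behavior, assuming the cleaning component is actuated from an inferred environmental characteristic or an app override, and washed at the dock; the floor types, thresholds, and water amounts are invented for the example only.

```python
# Illustrative sketch only; thresholds and names are hypothetical.
def actuate_cleaning_component(floor_type, debris_level, user_override=None):
    """Decide how the first actuator drives the cleaning component.

    The environmental characteristic (here floor type and debris level) is
    inferred from on-board sensor data; a paired smartphone app may override it.
    """
    if user_override in {"on", "off"}:
        return user_override
    if floor_type == "carpet":
        return "off"                    # e.g. disengage a wet component on carpet
    return "high_speed" if debris_level > 0.5 else "low_speed"


def dock_service(robot_docked, clean_water_ml):
    """Maintenance station dispenses water to wash the component when docked."""
    return min(200.0, clean_water_ml) if robot_docked else 0.0


print(actuate_cleaning_component("hardwood", 0.7), dock_service(True, 500.0))
```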
FOOT CONTACT PATTERN(S) AS INTERFACE FOR LANGUAGE TO CONTROL ROBOT(S)
Various implementations are provided which include receiving an instance of natural language (NL) text input indicating a task for a multi-legged robot to perform in an environment. In many implementations, the system can process the NL text input using a large language model (LLM) to generate a foot contact pattern, indicating a sequence of leg positions of the robot relative to a surface, where one or more of the legs of the robot are in contact with the surface. Additionally or alternatively, the system can generate low-level robot control output by processing the foot contact pattern using a locomotion controller.
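A toy Python sketch of the described interface, in which a (stubbed) LLM emits a foot contact pattern as lines of 0/1 contact flags and a locomotion controller consumes it; the prompt wording, pattern encoding, and controller API are assumptions for illustration, not the implementation in the filing.

```python
# Illustrative sketch only; the LLM prompt and controller interface are hypothetical.
def text_to_robot_commands(nl_task, llm, locomotion_controller):
    """NL task -> foot contact pattern -> low-level control output."""
    prompt = (
        "Translate the task into a foot contact pattern, one line per timestep, "
        "four 0/1 digits (front-left, front-right, rear-left, rear-right):\n"
        f"Task: {nl_task}\n"
    )
    pattern_text = llm.generate(prompt)
    pattern = [[int(c) for c in line.strip()]
               for line in pattern_text.splitlines() if line.strip()]
    # The locomotion controller turns each contact vector into joint-level commands.
    return [locomotion_controller.track(contacts) for contacts in pattern]


class ToyLLM:
    def generate(self, prompt):
        return "1111\n1010\n0101\n1111"          # trot-like toy pattern


class ToyController:
    def track(self, contacts):
        return {"contacts": contacts, "joint_torques": [0.0] * 12}


print(text_to_robot_commands("trot forward slowly", ToyLLM(), ToyController()))
```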
Control method for drone inspection of chemical production plant, electronic device and storage medium
Provided is a control method, system, and apparatus for drone inspection of a chemical production plant, relating to the field of intelligent chemical fiber technology. The control method includes: determining a first drone for performing a preset inspection task in a target scene, wherein the first drone is responsible for the preset inspection task of the chemical production plant; obtaining data of a target object collected by the first drone in the target scene; determining a state of the target object in the target scene based on the data of the target object; and outputting prompt information matching a preset state of the target scene when the state indicates that the target object is in the preset state.
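A minimal Python sketch of the described flow, assuming the steps are: select the drone responsible for the preset inspection task, collect data on the target object, determine its state, and output prompt information when that state matches a preset state. The state names, temperature threshold, and data fields are invented for illustration.

```python
# Illustrative sketch only; state names, thresholds, and field names are hypothetical.
def inspect_target(readings, preset_state="abnormal"):
    """Determine the target object's state from drone-collected data and
    emit prompt information when it matches the preset state."""
    state = "abnormal" if readings.get("temperature_c", 0.0) > 80.0 else "normal"
    if state == preset_state:
        return f"ALERT: target object is {state} (readings: {readings})"
    return None


def run_inspection(drones, collect, preset_task="reactor_area_patrol"):
    """Pick the first drone responsible for the preset inspection task, then inspect."""
    first_drone = next(d for d in drones if d["task"] == preset_task)
    return inspect_target(collect(first_drone))


drones = [{"id": "uav-1", "task": "reactor_area_patrol"}]
print(run_inspection(drones, collect=lambda d: {"temperature_c": 93.5}))
```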