Patent classification: G05D2111/20
AUTONOMOUS MOBILE ROBOT CONTROL METHOD AND APPARATUS, DEVICE AND READABLE STORAGE MEDIUM
An autonomous mobile robot control method and apparatus, a device, and a readable storage medium. An autonomous mobile robot determines a sound source direction according to a voice signal from a user, and detects moving objects in its surroundings. The robot then determines, from among the moving objects, a target object located in the sound source direction, determines a working area according to the target object, moves to the working area, and executes a task.
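For illustration, a minimal sketch of the target-selection step, assuming the robot already has a direction-of-arrival estimate for the voice and a list of tracked moving objects with bearings (the object schema and angular tolerance are assumptions, not from the patent):

```python
def select_target(sound_bearing_deg, objects, tolerance_deg=15.0):
    """Pick the tracked moving object closest to the voice bearing.

    sound_bearing_deg: direction of arrival of the voice signal, robot frame.
    objects: list of dicts with 'id' and 'bearing_deg' (hypothetical schema).
    Returns the best match, or None if nothing lies within the tolerance.
    """
    best, best_err = None, tolerance_deg
    for obj in objects:
        # Wrap the angular difference into [-180, 180) before comparing.
        err = abs((obj["bearing_deg"] - sound_bearing_deg + 180.0) % 360.0 - 180.0)
        if err <= best_err:
            best, best_err = obj, err
    return best

if __name__ == "__main__":
    movers = [{"id": "a", "bearing_deg": 40.0}, {"id": "b", "bearing_deg": 170.0}]
    print(select_target(35.0, movers))  # -> {'id': 'a', 'bearing_deg': 40.0}
```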
Intelligent voice controlled pipeline and air duct video inspection robotic system
An intelligent voice-controlled pipeline and air duct video inspection robotic system comprises an image acquisition unit, a control unit, a voice control device, and an in-pipe crawling device. The image acquisition unit is mounted on the in-pipe crawling device, and the in-pipe crawling device is connected to the control unit through a cable. The in-pipe crawling device is configured to move in a pipeline, collect images of the pipeline interior using the image acquisition unit, and transmit the collected image information to the control unit through the cable. The voice control device is connected to the control unit.
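As a rough sketch of how the voice control device might drive the crawler through the control unit (the command table and the `send_to_crawler` callable are hypothetical, not the patent's interface):

```python
# Hypothetical command table mapping recognized voice phrases to crawler motions.
COMMANDS = {
    "forward": ("drive", +1),
    "back": ("drive", -1),
    "stop": ("drive", 0),
}

def dispatch(phrase, send_to_crawler):
    """Translate a recognized phrase into a motion command for the crawler.

    send_to_crawler: callable that writes a (verb, value) tuple over the
    cable link (stand-in for the real control-unit interface).
    """
    cmd = COMMANDS.get(phrase.strip().lower())
    if cmd is None:
        return False  # unrecognized phrase: ignore rather than guess
    send_to_crawler(cmd)
    return True

dispatch("Forward", print)  # -> prints ('drive', 1)
```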
Robotic assisted wall painting apparatus and method
The embodiments herein disclose a semi-autonomous mobile robotic apparatus (100) that can apply primers and paints and perform other operations, such as wall sanding and drawing abstract wall art, on the interior walls of buildings. The disclosed apparatus (100) comprises at least thirteen type-1 (403) and at least two type-2 (405) ultrasonic sensors coupled to the apparatus (100), at least two Light Detection and Ranging (LiDAR) sensors (401 and 402) coupled to the apparatus (100), and a human machine interface module (102) adapted to receive one or more inputs from a user (101), provide the data to the microprocessor (103) for processing inside the apparatus (100), and provide output to one or more modules (104 and 105) to perform the relevant painting, sanding, putty application, or abstract wall art drawing operations.
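A minimal sketch of the HMI-to-module dispatch described above, assuming a simple registry keyed by operation name (the names and callables are placeholders, not the apparatus's actual modules):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Operation:
    name: str
    run: Callable[[], None]

def make_registry() -> Dict[str, Operation]:
    # Stand-ins for the painting/sanding/putty/art modules named in the abstract.
    return {
        "paint": Operation("paint", lambda: print("spraying paint")),
        "sand":  Operation("sand",  lambda: print("sanding wall")),
        "putty": Operation("putty", lambda: print("applying putty")),
        "art":   Operation("art",   lambda: print("drawing abstract art")),
    }

def handle_user_input(choice: str, registry: Dict[str, Operation]) -> None:
    """Route a user selection from the HMI to the matching operation module."""
    op = registry.get(choice)
    if op is None:
        print(f"unknown operation: {choice!r}")
    else:
        op.run()

handle_user_input("paint", make_registry())  # -> spraying paint
```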
Indoor positioning and navigation systems and methods
Indoor positioning and navigation systems and methods are described herein. In one embodiment, a system for inspecting or maintaining a storage tank includes a vehicle having at least one sensor for determining properties of the storage tank and a navigation system. The navigation system includes an acoustic transmitter carried by the vehicle and an inertial measurement unit (IMU) sensor configured to at least partially determine a location of the vehicle with respect to the storage tank. The vehicle also includes a propulsion unit configured to move the vehicle within the storage tank, and the system further includes an acoustic receiver fixed with respect to the storage tank. The vehicle moves inside the storage tank in concentric arcs with respect to the acoustic receiver.
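The concentric-arc motion can be sketched as waypoint generation about the fixed receiver; this assumes simple circular arcs in a 2D plane and does not model the IMU/acoustic fusion itself:

```python
import math

def arc_waypoints(center, radii, step_deg=10.0):
    """Generate waypoints along concentric arcs about the acoustic receiver.

    center: (x, y) of the fixed receiver; radii: iterable of arc radii in m.
    Yields (x, y) points; a real planner would also fuse IMU dead reckoning
    with acoustic ranging to hold each radius (an assumption here, not the
    patent's exact method).
    """
    cx, cy = center
    for r in radii:
        for k in range(int(360.0 / step_deg)):
            a = math.radians(k * step_deg)
            yield (cx + r * math.cos(a), cy + r * math.sin(a))

pts = list(arc_waypoints((0.0, 0.0), [1.0, 2.0], step_deg=90.0))
print(pts[:4])  # first arc: (1,0), (0,1), (-1,0), (0,-1) up to float rounding
```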
Carpet recognition method applicable to robot cleaner
A carpet recognition method for a robot cleaner. The robot cleaner comprises a sleeve and an ultrasonic sensor, wherein the ultrasonic sensor is fixed in the sleeve. The recognition method comprises: controlling the ultrasonic sensor to vertically transmit an ultrasonic signal toward the current ground and to receive an actual echo signal reflected by the current ground; and determining whether the actual echo signal differs from a standard echo signal of normal ground, and if so, recognizing the current ground as a carpet surface.
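A toy version of the echo-comparison test, assuming echoes are sampled waveforms and using total echo energy as the difference measure (the patent does not specify the comparison metric):

```python
import numpy as np

def is_carpet(echo, baseline, rel_tol=0.3):
    """Flag carpet when the echo deviates from the hard-floor baseline.

    Carpet absorbs and scatters ultrasound, so its echo is typically weaker;
    here we compare total echo energy against a stored baseline (a simple
    stand-in for the patent's echo-difference test).
    """
    e = float(np.sum(np.square(echo)))
    b = float(np.sum(np.square(baseline)))
    return abs(e - b) / b > rel_tol

hard = np.ones(64)            # synthetic hard-floor echo
soft = 0.5 * np.ones(64)      # synthetic attenuated echo
print(is_carpet(soft, hard))  # -> True
print(is_carpet(hard, hard))  # -> False
```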
Mobile Robot with Audio Perception System
A mobile robot includes a microphone array with a set of microphones. The microphone array is at least partially disposed on the mobile robot. The mobile robot receives audio signals from the microphone array. Audio feature data of acoustic activity is extracted from the audio signals. Direction of arrival (DOA) data of the acoustic activity is generated based on the audio signals. A machine learning model is configured to generate audio event data using the audio feature data. The audio event data identifies at least one sound source of the audio feature data. A knowledge graph is queried using the audio event data to obtain entity data. The entity data has a predetermined relation with the audio event data. Semantic audio scene data is generated using the audio event data, the DOA data, and the entity data. The mobile robot performs an action based on the semantic audio scene data.
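The final fusion step might look like the following sketch, where the classifier and knowledge-graph lookup are stand-in callables (all names here are assumptions, not the patent's components):

```python
from dataclasses import dataclass

@dataclass
class SemanticAudioScene:
    event: str        # e.g. "doorbell" from the classifier
    doa_deg: float    # direction of arrival of the acoustic activity
    entity: str       # related entity returned by the knowledge graph

def build_scene(features, doa_deg, classify, kg_lookup):
    """Combine classifier output, DOA, and knowledge-graph data into a scene.

    classify(features) -> event label; kg_lookup(event) -> related entity.
    Both callables are placeholders for the model and graph in the abstract.
    """
    event = classify(features)
    entity = kg_lookup(event)
    return SemanticAudioScene(event, doa_deg, entity)

scene = build_scene(
    features=[0.1, 0.7],
    doa_deg=42.0,
    classify=lambda f: "doorbell",
    kg_lookup=lambda e: "front door",
)
print(scene)  # SemanticAudioScene(event='doorbell', doa_deg=42.0, entity='front door')
```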
AUTONOMOUS DEVICES AND METHODS OF USE
An unmanned device for a marine environment comprises a location sensor configured to gather location data corresponding to the unmanned device, at least one propulsion system, a sonar transducer, a transmitter, a processor, and memory including computer program code. The computer program code is configured to, when executed, cause the processor to: cause the propulsion system to propel the unmanned device in a pattern along a body of water; cause the sonar transducer to emit one or more sonar beams into the body of water; receive sonar return data corresponding to sonar returns; and generate a sonar image corresponding to the sonar return data. Further, the computer program code is configured to cause the processor to detect an object within the sonar image, assign a score to the object indicating the likelihood that the object is a desired object type, and send an alert to a remote electronics device upon assignment of the score.
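A minimal sketch of the scoring-and-alert logic, assuming an upstream detector that emits per-type probabilities (the detection schema, threshold, and `send_alert` callable are assumptions):

```python
def score_and_alert(detections, desired_type, threshold, send_alert):
    """Score each sonar detection and alert when it may be the desired type.

    detections: list of dicts with 'type_probs' mapping type -> probability
    (hypothetical output of an upstream detector); send_alert: callable that
    forwards a message to the remote electronics device.
    """
    for det in detections:
        score = det["type_probs"].get(desired_type, 0.0)
        det["score"] = score
        if score >= threshold:
            send_alert(f"possible {desired_type}: score={score:.2f}")

score_and_alert(
    [{"type_probs": {"fish": 0.9}}, {"type_probs": {"fish": 0.2}}],
    desired_type="fish", threshold=0.5, send_alert=print,
)  # -> possible fish: score=0.90
```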
SPATIAL BLIND SPOT MONITORING SYSTEMS AND RELATED METHODS OF USE
Embodiments of the present disclosure provide a system and a method of controlling a robot for autonomous navigation. The method includes receiving a set of point values defining LIDAR data from a LIDAR sensor scanning a 2D omnidirectional plane, and receiving a sensor value from an ultrasonic sensor having a 3D field of view that excludes the plane. The method resolves an observable field of view for the LIDAR sensor, where the observable field of view includes a blind spot of the LIDAR sensor. When the sensor value is less than one or more point values corresponding to a portion of the plane extending along the observable field of view, indicating that an object is located in the blind spot, the LIDAR data is modified using the sensor value. The modified LIDAR data indicates the object as being detected by the LIDAR sensor even though the object lies outside the scanned 2D plane.
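The substitution rule reads roughly as follows in code, assuming the ultrasonic cone projects onto a known angular sector of the scan plane (the sector bounds and data layout are assumptions):

```python
def patch_blind_spot(lidar_ranges, angles_deg, fov_deg, us_range):
    """Overwrite LIDAR ranges where the ultrasonic sensor sees a closer object.

    lidar_ranges/angles_deg: the 2D scan; fov_deg: (lo, hi) sector covered by
    the ultrasonic cone as projected onto the scan plane; us_range: ultrasonic
    reading in m. Follows the abstract's rule: within the overlapping sector,
    if the ultrasonic value is less than the LIDAR point values, substitute it
    so downstream planners see the blind-spot object.
    """
    lo, hi = fov_deg
    out = list(lidar_ranges)
    for i, (r, a) in enumerate(zip(lidar_ranges, angles_deg)):
        if lo <= a <= hi and us_range < r:
            out[i] = us_range
    return out

print(patch_blind_spot([5.0, 5.0, 5.0], [0.0, 10.0, 20.0], (5.0, 15.0), 0.4))
# -> [5.0, 0.4, 5.0]
```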
UNMANNED MOVING OBJECT, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
A drone, which is an unmanned moving object according to an embodiment, includes a flight controller that controls the driving of the drone, a first communication unit that communicates with an operation device that remotely controls the drone, and a second communication unit that receives unique information, sent out from another moving object, for identifying the presence and/or position of the other moving object.
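A sketch of how the second communication unit's receive path might decode such broadcasts, assuming a JSON payload format that the abstract does not specify:

```python
import json

def parse_broadcast(frame: bytes):
    """Decode a (hypothetical) JSON identity broadcast from another aircraft.

    The abstract only says the second communication unit receives unique
    information identifying the other moving object's presence and/or
    position; a JSON payload with 'id', 'lat', 'lon' is an assumed format.
    """
    try:
        msg = json.loads(frame.decode("utf-8"))
        return msg["id"], (msg.get("lat"), msg.get("lon"))
    except (ValueError, KeyError, UnicodeDecodeError):
        return None  # malformed frame: report nothing rather than guess

frame = b'{"id": "UAV-7", "lat": 35.68, "lon": 139.77}'
print(parse_broadcast(frame))  # -> ('UAV-7', (35.68, 139.77))
```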
Obstacle to path assignment and performance of control operations based on assignment for autonomous systems and applications
In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information (e.g., obstacle locations, path locations) from other operations of the system, and use it to generate more detailed inputs for the DNN to increase the accuracy of the obstacle-to-path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignments for each object. Additionally, a machine may perform control operations based at least on the path assignments.
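One way to realize the bounding-shape-to-channel comparison is to measure each box's overlap with per-path mask channels; the mask representation and overlap metric here are assumptions, not the patent's exact procedure:

```python
import numpy as np

def assign_paths(path_masks, boxes):
    """Assign each obstacle box to the path channel it overlaps most.

    path_masks: array of shape (num_paths, H, W), one binary mask per DNN
    output channel; boxes: list of (x0, y0, x1, y1) pixel boxes. Overlap is
    measured as the fraction of box pixels covered by each path mask (a
    simple stand-in for comparing bounding shapes to the output channels).
    """
    assignments = []
    for x0, y0, x1, y1 in boxes:
        region = path_masks[:, y0:y1, x0:x1]           # crop each channel
        overlap = region.reshape(len(path_masks), -1).mean(axis=1)
        assignments.append(int(np.argmax(overlap)))
    return assignments

masks = np.zeros((2, 8, 8))
masks[0, :, :4] = 1.0   # path 0 occupies the left half of the image
masks[1, :, 4:] = 1.0   # path 1 occupies the right half
print(assign_paths(masks, [(0, 0, 3, 3), (5, 5, 8, 8)]))  # -> [0, 1]
```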