Patent classifications
G05D2111/20
Cleaning robot and remote controller included therein
A cleaning robot includes a navigator to move a main body, a remote controller to output a modulated infrared ray in accordance with a control command of a user and to form a light spot, a light receiver to receive the infrared ray from the remote controller, and a controller to control the navigator such that the main body tracks the light spot when the modulated infrared ray is received in accordance with the control command. Because the cleaning robot tracks a position indicated by the remote controller, a user may conveniently move the cleaning robot.
UNDERWATER CLEANING ROBOT
An underwater cleaning robot contains a movement device for moving the underwater cleaning robot under water, a cleaning device for cleaning an object located under water, a control device for controlling the movement device and/or the cleaning device, and a communication device for receiving signals from outside the underwater cleaning robot and/or transmitting signals to the outside. The communication device contains a first ultrasonic transducer for receiving ultrasonic signals transmitted under water and is designed to transmit electrical signals, corresponding to the received ultrasonic signals, to the control device.
COMMUNICATION DELAY COMPENSATION METHOD AND SYSTEM BASED ON AUTONOMOUS ROBOT
The disclosure provides a communication delay compensation method and a communication delay compensation system based on an autonomous robot. The method includes the following steps: establishing a state equation based on a system model of an AUV (autonomous underwater vehicle) positioning system; acquiring the included angle between the direction vector from the AUV to an observation station and the velocity vector of the AUV based on the system model; establishing an observation equation according to the state equation and the included angle; establishing an extended Kalman filter equation based on the system model, the included angle and the observation equation; and calculating a predicted value of the position information at the current time by using the extended Kalman filter equation to complete the communication delay compensation.
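The delay-compensation idea of the abstract above, predicting the current position forward through the sample intervals lost to the communication delay, can be sketched with the prediction step of a Kalman filter. A constant-velocity system model and the state layout `[px, py, vx, vy]` are assumptions for illustration; the patent's full extended Kalman filter would also propagate the covariance (P = F P Fᵀ + Q) and incorporate the included-angle observation.

```python
def mat_vec(F, x):
    """Multiply transition matrix F (list of rows) by state vector x."""
    return [sum(f * v for f, v in zip(row, x)) for row in F]

def predict_state(x, F, n_steps):
    """Propagate the state estimate forward n_steps sample intervals
    using transition matrix F. During the communication delay no
    measurements arrive, so only the prediction step runs."""
    for _ in range(n_steps):
        x = mat_vec(F, x)
    return x

dt = 1.0
# constant-velocity model, state [px, py, vx, vy]
F = [[1, 0, dt, 0],
     [0, 1, 0, dt],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
x0 = [0.0, 0.0, 2.0, 1.0]            # at the origin, moving (2, 1) m/s
x3 = predict_state(x0, F, n_steps=3)  # compensate a 3-sample delay
print(x3[:2])  # position predicted 3 s ahead: [6.0, 3.0]
```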
AUTONOMOUS ASCENT OF AN UNDERWATER VEHICLE
There is provided a computerized method of controlling ascent of an underwater vehicle (UV) from a safety depth to a water surface, the method comprising: at safety depth, controlling the UV to collect, from a passive sonar associated with the UV, first data indicative of first locations of surface targets within a first surface area of interest; controlling ascent of the UV to an intermediate depth in accordance with the first data; at the intermediate depth, controlling the UV to collect second data indicative of second locations of surface targets within a second surface area of interest, wherein the second data comprises one or more of: data from a passive sonar, data from one or more magnetic sensors, data from an active sonar, data from a light detection and ranging (LIDAR) scanner; and controlling ascent of the UV to a periscope depth in accordance with the second data.
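The staged ascent claimed above can be read as two gates: the first (passive-sonar) data gates the climb from safety depth to intermediate depth, and the second (multi-sensor) data gates the climb to periscope depth. The sketch below assumes, purely for illustration, that each data set reduces to surface-target positions and that the surface area of interest is a disc; the depth labels follow the abstract.

```python
def safe_to_ascend(target_positions, area_center, area_radius):
    """True when no detected surface target lies inside the surface
    area of interest (modelled here as a disc, an assumption)."""
    cx, cy = area_center
    return all((x - cx) ** 2 + (y - cy) ** 2 > area_radius ** 2
               for x, y in target_positions)

def ascent_plan(first_data, second_data, area, radius):
    """Two-stage ascent gate: first data (passive sonar) gates the climb
    to intermediate depth; second data (passive/active sonar, magnetic
    sensors, LIDAR) gates the climb to periscope depth."""
    if not safe_to_ascend(first_data, area, radius):
        return "hold at safety depth"
    if not safe_to_ascend(second_data, area, radius):
        return "hold at intermediate depth"
    return "ascend to periscope depth"

# both data sets place all surface targets well outside the area of interest
print(ascent_plan([(500, 500)], [(200, 10)], area=(0, 0), radius=100))
# → ascend to periscope depth
```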
Automatic working system, automatic walking device, and method for controlling same, and computer-readable storage medium
A self-working system, a self-walking device (1), a method for controlling the same, and a computer-readable storage medium. The control method comprises: acquiring a captured image; processing the captured image to acquire a processed image; segmenting the processed image into at least one sub-region; calculating the size A_n of each sub-region; and counting the number of sub-regions with A_n > V in the processed image, denoted the number N_b of special sub-regions, wherein V is a preset threshold. If N_b ≤ 1, the captured image is judged to belong to a lawn region; if N_b > 1, the captured image is judged to belong to a non-complete lawn region. If a captured image is judged to belong to a non-complete lawn region, it can be determined that there is a large obstacle, a boundary (2), or the like. Whether the self-walking device (1) has encountered an obstacle or a boundary (2) can thus be determined by analyzing a captured image, making operation easier and control more sensitive and effective.
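The counting rule in the abstract above can be sketched end to end: label connected sub-regions in a binary mask, count those with area A_n above the threshold V as N_b, and classify. The mask semantics (1 = "non-grass-like" pixel) and the 4-connected flood-fill segmentation are stand-in assumptions; the patent does not specify how the processed image is segmented.

```python
def label_regions(mask):
    """4-connected component labeling on a binary mask (list of lists).
    Returns the list of region sizes A_n; a minimal stand-in for the
    patent's segmentation step."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                stack, size = [(i, j)], 0       # iterative flood fill
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

def classify(mask, V):
    """Count special sub-regions (A_n > V) as N_b; N_b <= 1 means lawn,
    N_b > 1 means non-complete lawn (obstacle or boundary likely)."""
    n_b = sum(1 for a in label_regions(mask) if a > V)
    return "lawn" if n_b <= 1 else "non-complete lawn"

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 1, 1]]
print(classify(mask, V=3))  # two regions of size 4 > 3, so "non-complete lawn"
```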
APPARATUS CONTROL DEVICE, APPARATUS CONTROL METHOD, AND RECORDING MEDIUM
An apparatus control device for controlling an apparatus includes at least one processor configured to determine a characteristic of performance sound around a robot that is the apparatus, determine a situation of the robot or a situation around the robot, and when causing the robot to execute a performance coordinated action that is coordinated with the performance sound based on the determined characteristic of the performance sound, reflect the determined situation to the performance coordinated action.
APPARATUS CONTROL DEVICE, APPARATUS CONTROL METHOD, AND RECORDING MEDIUM
An apparatus control device for controlling an apparatus includes at least one processor configured to determine, as a characteristic of a performance sound around a robot that is the apparatus, constancy of a performance speed of the performance sound or a key of a performance based on the performance sound, and change a pseudo-emotion of the apparatus in accordance with the determined characteristic of the performance sound.
Spatial blind spot monitoring systems and related methods of use
Embodiments of the present disclosure provide a system and a method of controlling a robot for autonomous navigation. The method includes receiving a set of point values defining LIDAR data from a LIDAR sensor scanning a 2D omnidirectional plane, receiving a sensor value from an ultrasonic sensor having a 3D field of view excluding that plane, and resolving an observable field of view for the LIDAR sensor, where the observable field of view includes a blind spot of the LIDAR sensor. The LIDAR data is modified using the sensor value when an object located in the blind spot is indicated by the sensor value being less than one or more point values corresponding to a portion of the plane extending along the observable field of view; the modified LIDAR data indicates the object as being detected by the LIDAR sensor even though the object lies outside the 2D field of view.
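The modification rule described above amounts to overwriting LIDAR range readings with the ultrasonic reading, within the ultrasonic sensor's angular sector, whenever the ultrasonic value is smaller. The flat-list data layout (one range per scan angle, sector as index list) is an assumption for illustration.

```python
def patch_blind_spot(lidar, sector, ultra):
    """Overwrite LIDAR range readings with the ultrasonic sensor value
    where the ultrasonic sensor reports an object closer than the LIDAR
    does, i.e. the object sits in the LIDAR's blind spot.

    lidar  : list of ranges, one per scan angle (assumed layout)
    sector : scan indices covered by the ultrasonic 3D field of view
    ultra  : single range from the ultrasonic sensor
    """
    patched = list(lidar)                 # leave the original scan intact
    for i in sector:
        if ultra < patched[i]:            # object closer than LIDAR sees
            patched[i] = ultra
    return patched

scan = [5.0, 5.0, 5.0, 5.0]
print(patch_blind_spot(scan, sector=[1, 2], ultra=1.2))  # [5.0, 1.2, 1.2, 5.0]
```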
MOWER OBSTACLE AVOIDANCE SYSTEM
A mower obstacle avoidance system includes a pair of sensors mounted on a robotic mower, each sensor emitting ultrasonic signals in a trajectory in front of the robotic mower and receiving reflected signals from the other sensor. A controller commands a traction drive system to stop and turn the mower if either of the pair of sensors receives an ultrasonic signal reflected from an object at a stop distance in front of the robotic mower and the object is also within a window of passage based on a height of cut and the width and height of the robotic mower.
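The stop-and-turn condition above combines two tests: the echo must come from within the stop distance, and the object must fall inside the "window of passage" defined by the height of cut and the mower's width and height. The geometry below (lateral offset from the mower centreline, a single object height) is an illustrative assumption, not the patent's exact formulation.

```python
def should_stop(obj_dist, obj_lateral, obj_height,
                stop_dist, cut_height, mower_width, mower_height):
    """Stop-and-turn decision for the robotic mower.

    obj_dist    : range to the reflecting object (m)
    obj_lateral : lateral offset from the mower centreline (m)
    obj_height  : object height above ground (m)
    Remaining parameters define the stop distance and the window of
    passage; all names are assumptions for illustration.
    """
    within_stop = obj_dist <= stop_dist
    in_window = (abs(obj_lateral) <= mower_width / 2     # in the mower's path
                 and cut_height < obj_height <= mower_height)
    return within_stop and in_window

# a 25 cm rock, 40 cm ahead, near the centreline: stop and turn
print(should_stop(0.4, 0.1, 0.25, stop_dist=0.5, cut_height=0.05,
                  mower_width=0.6, mower_height=0.3))
# grass shorter than the height of cut is ignored
print(should_stop(0.4, 0.1, 0.04, stop_dist=0.5, cut_height=0.05,
                  mower_width=0.6, mower_height=0.3))
```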
ROBOT AND METHOD FOR CONTROLLING THE ROBOT
A robot and a method for controlling the robot are provided. The robot includes: at least one sensor; a speaker; a microphone; a driver; at least one memory storing one or more instructions; and at least one processor configured to execute the one or more instructions. The one or more instructions, when executed by the at least one processor, cause the robot to: generate a map comprising information regarding a plurality of objects based on sensing information obtained through the at least one sensor; generate ultrasonic waves toward each of the plurality of objects through the speaker; obtain reflectivity information regarding the plurality of objects based on reflected sounds received through the microphone, the reflected sounds being at least a portion of the ultrasonic waves reflected from each of the objects, and store the reflectivity information; based on receiving a user voice through the microphone, obtain information on the intensity of the user voice for each of a plurality of directions; obtain information on a plurality of candidate directions from which the user voice may have been received based on the intensity information; obtain priority order information for the plurality of candidate directions based on a position of the robot and the stored reflectivity information; and obtain information on the direction in which the user voice was uttered from among the plurality of candidate directions based on the priority order information.
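The last steps of the abstract, selecting candidate directions by voice intensity and then ordering them by stored reflectivity, can be sketched as follows. The specific rule used (low-reflectivity directions first, on the reasoning that sound arriving from a highly reflective object such as a mirror or glass pane is likely an echo rather than the speaker) is an assumption for illustration; the patent only states that the priority order depends on the robot's position and the reflectivity information.

```python
def rank_directions(intensity, reflectivity, threshold):
    """Select candidate directions whose voice intensity meets the
    threshold, then rank them with low stored reflectivity first
    (an assumed echo-rejection heuristic).

    intensity    : direction -> measured voice intensity
    reflectivity : direction -> stored ultrasonic reflectivity
    """
    candidates = [d for d, i in intensity.items() if i >= threshold]
    # lower reflectivity => less likely an echo => higher priority
    return sorted(candidates, key=lambda d: reflectivity.get(d, 0.0))

intensity = {"N": 0.9, "E": 0.8, "S": 0.2, "W": 0.7}
reflectivity = {"N": 0.6, "E": 0.1, "W": 0.9}   # e.g. W faces a mirror
print(rank_directions(intensity, reflectivity, threshold=0.5))
# → ['E', 'N', 'W']: S is dropped (too quiet), E is preferred (least reflective)
```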