G05D1/20

Method of predicting occupancy of unseen areas for path planning, associated device, and network training method

A method of predicting occupancy of unseen areas in a region of interest (ROI) includes obtaining a depth image of the ROI, the depth image being captured from a first height; generating an occupancy map based on the obtained depth image, the occupancy map comprising an array of cells corresponding to locations in the ROI; and generating an inpainted map by inputting the occupancy map into a trained inpainting network, the inpainted map comprising an array of cells corresponding to the ROI, and wherein the inpainting network is trained by comparing an output of the inpainting network, based on inputting a training depth image taken from the first height, to a ground truth map, the ground truth map being based on a combination of the training depth image and a depth image taken at a height different from the first height.
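The pipeline in this abstract — rasterize a depth image into an occupancy grid with unknown cells, then let a network fill in the unseen areas — can be sketched as follows. This is a minimal illustration, not the patented method: the point projection and cell size are hypothetical, and a naive neighbour-vote fill stands in for the trained inpainting network.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def depth_to_occupancy(points_xy, grid_shape, cell_size):
    """Rasterize observed (x, y) obstacle points into an occupancy grid.
    Cells never hit by a measurement stay UNKNOWN (the 'unseen areas')."""
    grid = np.full(grid_shape, UNKNOWN, dtype=np.int8)
    for x, y in points_xy:
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            grid[i, j] = OCCUPIED
    return grid

def naive_inpaint(grid):
    """Stand-in for the trained inpainting network: fill each UNKNOWN cell
    with the majority label of its known 8-neighbours, defaulting to FREE."""
    out = grid.copy()
    h, w = grid.shape
    for i in range(h):
        for j in range(w):
            if grid[i, j] != UNKNOWN:
                continue
            nb = grid[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            known = nb[nb != UNKNOWN]
            out[i, j] = OCCUPIED if known.size and known.mean() > 0.5 else FREE
    return out
```

In the claimed training scheme, the ground truth played by the neighbour vote here would instead come from merging the first-height depth image with one taken from a different height, which reveals areas unseen from the first viewpoint.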

Neural networks for object detection and characterization
11928866 · 2024-03-12

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting locations in an environment of a vehicle where objects are likely centered and determining properties of those objects. One of the methods includes receiving an input characterizing an environment external to a vehicle. For each of a plurality of locations in the environment, a respective first object score that represents a likelihood that a center of an object is located at the location is determined. Based on the first object scores, one or more locations from the plurality of locations are selected as locations in the environment at which respective objects are likely centered. Object properties of the objects that are likely centered at the selected locations are also determined.
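The selection step described above — score every candidate location for how likely an object is centered there, then keep the highest-scoring ones — reduces to a top-k filter over a score map. A minimal sketch, with the `top_k` and `threshold` values as assumptions (the abstract does not specify how many locations are selected):

```python
import numpy as np

def select_object_centers(center_scores, top_k=3, threshold=0.5):
    """Pick up to top_k locations whose center-likelihood score clears the
    threshold; returns (row, col) indices sorted by descending score."""
    flat = center_scores.ravel()
    order = np.argsort(flat)[::-1][:top_k]      # best candidates first
    keep = order[flat[order] >= threshold]       # drop weak candidates
    return [tuple(int(c) for c in np.unravel_index(i, center_scores.shape))
            for i in keep]
```

In the claimed system, object properties would then be predicted only at the selected locations, rather than densely at every location.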

Based on detected start of picking operation, resetting stored data related to monitored drive parameter

A method for operating a materials handling vehicle is provided and comprises: monitoring, by a controller, a first vehicle drive parameter during a manual operation of the vehicle by an operator; storing, by the controller, data related to the monitored first vehicle drive parameter. The controller is configured to use the stored data for implementing a semi-automated driving operation of the vehicle subsequent to the manual operation of the vehicle. The method further comprises: detecting, by the controller, operation of the vehicle indicative of a start of a pick operation occurring during the manual operation of the vehicle; and based on detecting the start of the pick operation, resetting, by the controller, the stored data related to the monitored first vehicle drive parameter.
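The control flow in this abstract — accumulate drive-parameter history during manual operation, but discard it when a pick operation begins — can be sketched with a small state holder. The class name, the averaging, and the use of travel speed as the first drive parameter are illustrative assumptions, not details from the claim:

```python
class DriveParameterMonitor:
    """Accumulates samples of a monitored drive parameter (e.g. travel
    speed) during manual operation; a detected pick-start resets the
    history so the subsequent semi-automated driving operation is based
    only on data gathered after the pick began."""

    def __init__(self):
        self.samples = []

    def record(self, value):
        self.samples.append(value)

    def on_pick_start(self):
        # Reset stored data, as claimed, when a pick operation is detected.
        self.samples.clear()

    def learned_setpoint(self):
        """Value the semi-automated mode would reuse; None if no data yet."""
        return sum(self.samples) / len(self.samples) if self.samples else None
```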

NEURAL NETWORKS FOR OBJECT DETECTION AND CHARACTERIZATION
20190279005 · 2019-09-12

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting locations in an environment of a vehicle where objects are likely centered and determining properties of those objects. One of the methods includes receiving an input characterizing an environment external to a vehicle. For each of a plurality of locations in the environment, a respective first object score that represents a likelihood that a center of an object is located at the location is determined. Based on the first object scores, one or more locations from the plurality of locations are selected as locations in the environment at which respective objects are likely centered. Object properties of the objects that are likely centered at the selected locations are also determined.

Apparatus, system, and method of using depth assessment for autonomous robot navigation
12019452 · 2024-06-25

An apparatus, system, and method of operating an autonomous mobile robot having a height of at least one meter. The robot includes: a robot body; at least two three-dimensional depth camera sensors affixed to the robot body proximate to the height, wherein the sensors are directed toward a floor surface and, in combination, provide a substantially 360-degree field of view of the floor surface around the robot body; and a processing system for receiving pixel data within the field of view of the sensors; identifying missing or erroneous pixels in the pixel data; comparing the missing or erroneous pixels to a template, wherein the template comprises at least an indication of ones of the missing or erroneous pixels indicative of the robot body and a shadow of the robot body; and outputting an indication of obstacles in or near the field of view based on the comparing.
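The template comparison above rests on a simple idea: the robot body and its shadow always produce invalid depth pixels in known places, so invalid pixels *outside* that template are the interesting ones. A minimal sketch, with the invalidity test (non-finite or non-positive depth) as an assumption:

```python
import numpy as np

def classify_missing_pixels(depth, template):
    """Compare the invalid-pixel mask of a depth frame against a template
    of pixels expected to be invalid (robot body and its shadow).
    Invalid pixels outside the template are flagged as potential obstacles."""
    invalid = ~np.isfinite(depth) | (depth <= 0)   # missing or erroneous
    unexpected = invalid & ~template               # not explained by the body
    return unexpected
```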

Remote operation system
12038770 · 2024-07-16

A remote operation system for a moving body, includes: a control device provided on the moving body; and an operation terminal configured to receive input from a user and to communicate with the control device. The moving body includes an external environment sensor that acquires surrounding information of the moving body and a display device. The control device creates a surrounding image including the moving body based on the surrounding information, makes the display device display the surrounding image, and in a case where communication between the control device and the operation terminal is performed, makes the display device display the surrounding image in which at least a part of a communication status display showing a status of communication with the control device is superimposed on an image of the moving body included in the surrounding image.
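The communication status display superimposed on the moving body implies some mapping from link metrics to a displayed status. As a hedged illustration only — the abstract does not specify which metrics or thresholds are used — one such mapping might be:

```python
def communication_badge(latency_ms, packet_loss):
    """Map link metrics to a status badge to superimpose on the image of
    the moving body. The metric choices and thresholds are assumptions."""
    if packet_loss > 0.2 or latency_ms > 500:
        return "poor"
    if packet_loss > 0.05 or latency_ms > 150:
        return "fair"
    return "good"
```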

METHOD OF CONTROLLING CLEANING ROBOT TO AVOID LIQUID OBJECT, AND CLEANING ROBOT THEREFOR

A cleaning robot including a cleaning module; a traveling module to move the cleaning robot on a surface to be cleaned; a light emission unit; an infrared light sensor; a visible light sensor; and at least one processor configured to execute instructions to control the light emission unit to emit infrared light toward a detection region, control the infrared light sensor to receive the emitted infrared light that is reflected from the detection region, control the visible light sensor to receive visible light reflected from the detection region, determine, based on the reflected visible light and an intensity of the reflected infrared light, whether a liquid object is present in the detection region, control, based on determining that the liquid object is present, the traveling module to move the cleaning robot to avoid the liquid object, and control the cleaning module to clean the surface while the cleaning robot moves.
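The sensor fusion described above — combine the intensity of reflected infrared light with the reflected visible light to decide whether a liquid is present — can be sketched as a two-condition test. This is an illustrative heuristic, not the claimed decision logic: the normalized inputs and both thresholds are assumptions, motivated by liquids tending to return weak infrared while appearing glossy in visible light.

```python
def is_liquid(ir_intensity, visible_reflectance, ir_low=0.3, gloss_high=0.7):
    """Heuristic fusion of the two sensor readings (both assumed
    normalized to [0, 1]): flag a liquid when the infrared return is
    weak but the visible-light reflection is strong (glossy surface)."""
    return ir_intensity < ir_low and visible_reflectance > gloss_high
```

On a positive detection, the claimed robot would steer the traveling module around the flagged region while the cleaning module continues cleaning elsewhere.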