Patent classifications
G01S17/89
SYSTEMS AND METHODS FOR DETERMINING ROAD TRAVERSABILITY USING REAL TIME DATA AND A TRAINED MODEL
Embodiments of the disclosed systems and methods provide for determination of roadway traversability by an autonomous vehicle using real-time data and a trained traversability-determination machine learning model. Consistent with aspects of the disclosed embodiments, the model may be trained using annotated bird's-eye-view perspective data obtained using vehicle vision sensor systems (e.g., LiDAR and/or camera systems). During operation of a vehicle, vision sensor data may be used to construct bird's-eye-view perspective data, which may be provided to the trained model. The model may label and/or otherwise annotate the vision sensor data based on relationships identified during model training to identify associated road boundary and/or lane information. Local vehicle control systems may compute control actions and issue commands to associated vehicle control systems to ensure the vehicle travels within a desired path.
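The bird's-eye-view construction step described above can be sketched as rasterizing LiDAR returns into a top-down grid that a traversability model could consume. This is a minimal illustrative sketch, assuming an N x 3 point array in vehicle coordinates; the function name, grid extents, and cell size are hypothetical, not from the patent.

```python
import numpy as np

def build_bev_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Rasterize LiDAR points (N x 3 array of x, y, z) into a bird's-eye-view
    height grid, a common input representation for traversability models."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    # Keep only points that fall inside the grid extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    # Record the maximum observed height per cell (unbuffered scatter-max).
    np.maximum.at(grid, (ix, iy), pts[:, 2])
    return grid
```

The resulting grid (here an 80 x 80 height map) would be fed to the trained model, which labels cells as traversable road surface, boundary, or lane marking.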
AUTONOMOUS TRANSPORT VEHICLE WITH VISION SYSTEM
An autonomous guided vehicle includes a frame, a drive section, a payload handler, a sensor system, and a supplemental sensor system. The sensor system has electromagnetic sensors, each responsive to the interaction or interface of a sensor-emitted or sensor-generated electromagnetic beam or field with a physical characteristic; the beam or field is disturbed by that interaction or interface, and the detected disturbance effects sensing of the physical characteristic. The sensor system generates sensor data embodying at least one of vehicle navigation pose or location information and payload pose or location information. The supplemental sensor system supplements the sensor system and is, at least in part, a vision system with cameras disposed to capture image data informing the at least one of vehicle navigation pose or location and payload pose or location, supplemental to the information of the sensor system.
OBJECT DETECTION METHOD AND OBJECT TRACKING DEVICE USING LIDAR SENSOR
An object detection method using a lidar sensor, in an embodiment, includes determining, on the basis of shape information of a target object obtained by the lidar sensor, whether the box of the target object is one from which an overlapping object present within it can be deleted, and generating a box track of the target object after removing the overlapping object according to the determination result.
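The overlap-removal step can be illustrated with a simple containment test on axis-aligned bounding boxes. This is a hedged sketch, not the patent's actual method: the box representation (xmin, ymin, xmax, ymax), the margin parameter, and both function names are assumptions for illustration.

```python
def box_contains(outer, inner, margin=0.0):
    """Return True if axis-aligned box `inner` (xmin, ymin, xmax, ymax)
    lies entirely inside `outer`, within an optional tolerance margin."""
    return (inner[0] >= outer[0] - margin and inner[1] >= outer[1] - margin and
            inner[2] <= outer[2] + margin and inner[3] <= outer[3] + margin)

def remove_overlapping(target_box, detections):
    """Drop detections fully contained in the target object's box, so the
    subsequent box track is built from the target object alone."""
    return [d for d in detections if not box_contains(target_box, d)]
```

A real implementation would additionally gate this deletion on the target's shape information (e.g., only deleting boxes inside solid objects, not inside hollow ones such as bridges), as the abstract describes.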
ROBOTIC CLEANER
A robotic cleaning system may include a robotic cleaner configured to generate a map of an environment and a mobile device configured to communicatively couple to the robotic cleaner, the robotic cleaner configured to communicate the map to the mobile device. The mobile device may include a camera configured to generate an image of the environment, the image comprising a plurality of pixels; a display configured to display the image and to receive a user input while displaying the image, the user input being associated with one or more of the plurality of pixels; a depth sensor configured to generate depth data that is associated with each pixel of the image; an orientation sensor configured to generate orientation data that is associated with each pixel of the image; and a mobile controller configured to localize the mobile device within the map using the depth data and the orientation data.
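The localization step above relies on combining per-pixel depth with device orientation. One common way to do this, shown as an illustrative sketch rather than the patent's method, is to back-project a user-selected pixel into map coordinates using the camera intrinsics and the device pose; all names and parameters here are assumptions.

```python
import numpy as np

def pixel_to_map(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project image pixel (u, v) with measured depth (meters) into map
    coordinates, given camera intrinsics (fx, fy, cx, cy), the device
    orientation R (3x3 rotation), and device position t in the map frame."""
    # Pinhole model: recover the point in the camera frame from depth.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = np.array([x, y, depth])
    # Transform into the robot's map frame using the device pose.
    return R @ p_cam + t
```

With this mapping, a tap on the displayed image (e.g., marking a keep-out zone) can be anchored to a physical location in the robotic cleaner's map.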
Dual lens receive path for LiDAR system
A dual lens assembly positioned along an optical receive path within a LiDAR system is provided. The dual lens assembly is constructed to reduce a numerical aperture of a returned light pulse and reduce a walk-off error associated with one or more mirrors of the LiDAR system.
Solid state pulse steering in lidar systems
LiDAR systems and methods discussed herein use a dispersion element or optic that has a refraction gradient that causes a light pulse to be redirected to a particular angle based on its wavelength. The dispersion element can be used to control a scanning path for light pulses being projected as part of the LiDAR's field of view. The dispersion element enables redirection of light pulses without requiring the physical movement of a medium such as a mirror or other reflective surface, and in effect further enables at least a portion of the LiDAR's field of view to be managed through solid-state control. The solid-state control can be performed by selectively adjusting the wavelength of the light pulses to control their projection along the scanning path.
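The wavelength-to-angle relationship can be illustrated with the standard grating equation, shown here as a generic physics sketch rather than the patent's specific dispersion optic; the groove spacing and incidence angle are illustrative values.

```python
import math

def exit_angle_deg(wavelength_nm, groove_spacing_nm=1600.0,
                   incidence_deg=0.0, order=1):
    """Exit angle of a pulse from a dispersive grating (grating equation:
    m * lambda = d * (sin(theta_out) + sin(theta_in))). Tuning the pulse
    wavelength steers the beam angle with no moving parts."""
    s = order * wavelength_nm / groove_spacing_nm - math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))
```

Sweeping the laser wavelength (e.g., 905 nm to 940 nm) thus sweeps the exit angle, which is what lets a portion of the scan pattern be driven electronically instead of mechanically.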
Systems and methods for detecting and correcting data density during point cloud generation
A point cloud capture system is provided to detect and correct data density during point cloud generation. The system obtains data points that are distributed within a space and that collectively represent one or more surfaces of an object, scene, or environment. The system computes the different densities with which the data points are distributed in different regions of the space, and presents an interface with a first representation for a first region of the space in which a first subset of the data points are distributed with a first density, and a second representation for a second region of the space in which a second subset of the data points are distributed with a second density.
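The per-region density computation described above can be sketched by binning points into uniform cubic cells and dividing counts by cell volume. This is a minimal illustrative sketch under assumed conventions (N x 3 float array, meter units); the function name and cell size are not from the patent.

```python
import numpy as np

def region_densities(points, cell=1.0):
    """Compute point density (points per unit volume) for each occupied
    cubic cell of side `cell`, given an N x 3 array of point coordinates."""
    # Assign each point to a cell by flooring its coordinates.
    idx = np.floor(points / cell).astype(int)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    volume = cell ** 3
    return {tuple(c): n / volume for c, n in zip(cells, counts)}
```

An interface like the one described could then render cells whose density falls below some threshold in a distinct color, prompting the operator to re-scan under-sampled regions.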