Patent classification: G05D1/0248
Robot and control method therefor
A robot is provided. The robot includes a depth camera, a light detection and ranging (LIDAR) sensor, and at least one processor. The at least one processor acquires a first depth image including first depth information by using the depth camera, acquires second depth information corresponding to a first area of the first depth image by using the LIDAR sensor, and acquires a depth difference between the second depth information and the first depth information included in the first area. The processor then identifies an area to be corrected around the first area, acquires information regarding a filter for correcting the first depth information on the basis of the depth difference, and acquires a second depth image by correcting the first depth information corresponding to the area to be corrected, on the basis of the first depth information, the second depth information, and the information regarding the filter.
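The abstract above does not specify the filter, but the data flow it describes can be sketched as follows. This is a minimal illustration, assuming a hypothetical weighting filter whose strength depends on the camera/LIDAR depth difference; the function name, the blending rule, and the square correction neighbourhood are all assumptions, not the patent's method.

```python
import numpy as np

def correct_depth(depth_image, lidar_depth, area, radius=1):
    """Correct camera depth (first depth information) around `area` using a
    LIDAR measurement (second depth information) for that area.

    depth_image : 2-D array of camera depths (metres).
    lidar_depth : scalar LIDAR depth corresponding to `area`.
    area        : (row, col) pixel the LIDAR measurement maps to.
    radius      : half-width of the area to be corrected around `area`.
    """
    r, c = area
    diff = lidar_depth - depth_image[r, c]      # depth difference
    # Hypothetical filter: blend weight shrinks as the sensors disagree more.
    weight = 1.0 / (1.0 + abs(diff))
    corrected = depth_image.copy()
    r0, r1 = max(r - radius, 0), min(r + radius + 1, depth_image.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, depth_image.shape[1])
    corrected[r0:r1, c0:c1] += weight * diff    # pull neighbourhood toward LIDAR
    return corrected
```

A camera reading of 2.0 m with a LIDAR reading of 3.0 m for the same area gives a difference of 1.0 m and a weight of 0.5, so pixels in the corrected area move to 2.5 m while pixels outside it are untouched.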
Mobile robots to generate occupancy maps
An example control system includes a memory and at least one processor to obtain image data from a given region and perform image analysis on the image data to detect a set of objects in the given region. For each object of the set, the example control system may classify the object as being one of multiple predefined classifications of object permanency, including (i) a static and fixed classification, (ii) a static and unfixed classification, and/or (iii) a dynamic classification. The control system may generate at least a first layer of an occupancy map for the given region that depicts each detected object of the static and fixed classification and excludes each detected object that is either of the static and unfixed classification or of the dynamic classification.
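The filtering step of the abstract above can be sketched directly. This is an assumed representation (an enum of the three permanency classes and a list of detection dicts); the names `Permanency` and `first_map_layer` are illustrative, not from the patent.

```python
from enum import Enum

class Permanency(Enum):
    """Predefined classifications of object permanency."""
    STATIC_FIXED = "static and fixed"
    STATIC_UNFIXED = "static and unfixed"
    DYNAMIC = "dynamic"

def first_map_layer(detections):
    """First layer of the occupancy map: depict only static-and-fixed objects,
    excluding static-and-unfixed and dynamic ones."""
    return [d for d in detections if d["permanency"] is Permanency.STATIC_FIXED]
```

A wall would survive into the first layer, while a chair (movable) or a person (dynamic) would be excluded.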
Discovering and plotting the boundary of an enclosure
Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing a robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
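One iteration of that process can be sketched as a map-update loop. This is a simplified sketch, assuming the map is a dict of cell -> discovered flag and that a caller-supplied `sense` callback stands in for moving the robot and sensing; both assumptions are illustrative.

```python
def explore_step(map_known, sense):
    """One exploration step: select a first undiscovered area, sense it,
    and return an updated map covering a larger discovered area.

    map_known : dict mapping cell id -> True if already discovered.
    sense     : callable(cell) -> iterable of cells observed from there
                (stands in for moving the robot to a position/orientation).
    """
    undiscovered = [c for c, seen in sorted(map_known.items()) if not seen]
    if not undiscovered:
        return map_known            # workspace fully mapped
    target = undiscovered[0]        # select a first undiscovered area
    updated = dict(map_known)       # first version of the map is kept intact
    for cell in sense(target):
        updated[cell] = True        # updated version maps a larger area
    return updated
```

Repeating the step until `undiscovered` is empty yields the full boundary of the enclosure in this toy model.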
Cargo trailer sensor assembly
A sensor assembly can include a housing that includes a view pane and a mounting feature configured to replace a trailer light of a cargo trailer of a semi-trailer truck. The sensor assembly can also include a lighting element mounted within the housing to selectively generate light, and a sensor mounted within the housing and having a field of view through the view pane. The sensor assembly can also include a communication interface configured to transmit sensor data from the sensor to a control system of the semi-trailer truck's self-driving tractor.
Braking control behaviors for autonomous vehicles
A method and system are provided for controlling braking of a vehicle in an autonomous driving mode. For instance, the vehicle is controlled in the autonomous driving mode according to a first braking control mode, using a first model to adjust the position of the vehicle relative to an expected position on a current trajectory of the vehicle. A second model predicts how close to a maximum deviation threshold the vehicle would come if a maximum braking strength for the vehicle were applied. The maximum deviation threshold provides an allowed forward deviation from the current trajectory. Based on the prediction, the vehicle is controlled in the autonomous driving mode according to a second braking control mode by automatically applying the maximum braking strength.
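The mode-switch decision described above can be sketched as a threshold check. This is a minimal illustration, assuming the second model has already produced a predicted forward overshoot (in metres) under maximum braking; the 0.9 safety margin and the mode names are assumptions, not values from the patent.

```python
def choose_braking_mode(predicted_overshoot_m, max_deviation_m, margin=0.9):
    """Select the braking control mode.

    predicted_overshoot_m : second model's prediction of how far past the
                            expected position the vehicle would travel if
                            maximum braking strength were applied now.
    max_deviation_m       : allowed forward deviation from the trajectory.
    margin                : assumed safety fraction of the threshold.
    """
    if predicted_overshoot_m >= margin * max_deviation_m:
        return "max_braking"   # second mode: apply maximum braking strength
    return "nominal"           # first mode: adjust position with first model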
Autonomous driving system
An autonomous driving system acquires information concerning the vehicle density in an adjacent lane, that is, a lane adjacent to the lane on which the own vehicle is traveling, when the own vehicle travels on a road having a plurality of lanes. The autonomous driving system selects the adjacent lane as the own-vehicle travel lane when the vehicle density in the adjacent lane, as calculated from the acquired information, is lower than a threshold density determined in accordance with relations between the own vehicle and surrounding vehicles. The autonomous driving system performs a lane change to the adjacent lane autonomously, or proposes a lane change to the adjacent lane to the driver, when the adjacent lane is selected as the own-vehicle travel lane.
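The density comparison can be sketched as follows. This is an assumed formulation (vehicles per kilometre over an observed window of the adjacent lane); the abstract does not define how density is computed, so the units and function names here are illustrative.

```python
def vehicle_density(vehicle_positions_m, window_m):
    """Vehicle density (vehicles per km) in an observed window of the
    adjacent lane, given longitudinal positions of detected vehicles."""
    return len(vehicle_positions_m) / (window_m / 1000.0)

def select_travel_lane(adjacent_positions_m, window_m, threshold_per_km):
    """Select the adjacent lane when its density is below the threshold
    density (which the patent derives from relations between the own
    vehicle and surrounding vehicles)."""
    density = vehicle_density(adjacent_positions_m, window_m)
    return "adjacent" if density < threshold_per_km else "current"
```

Three vehicles over a 200 m window give 15 vehicles/km, so the adjacent lane is selected against a 20 vehicles/km threshold but rejected against a 10 vehicles/km one.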
Continuous convolution and fusion in neural networks
Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
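The core operation, a convolution with a continuous filter over an arbitrary support domain, can be sketched at a single query point. This is a toy illustration: where the patent's parametric continuous kernel is a learned function (e.g., an MLP) of the continuous offset, a fixed Gaussian stands in for it here.

```python
import numpy as np

def continuous_conv(points, feats, query, kernel):
    """Continuous convolution at one query location:
    out = sum_j kernel(query - y_j) * f(y_j),
    where y_j are support-domain points and f(y_j) their features.
    `kernel` is a function of the continuous offset (learned in practice)."""
    offsets = points - query                       # continuous offsets
    weights = np.array([kernel(o) for o in offsets])
    return (weights[:, None] * feats).sum(axis=0)  # weighted feature sum

# Stand-in for a parametric kernel (the real one would be an MLP).
gaussian = lambda o: np.exp(-np.dot(o, o))
```

Unlike a grid convolution, nothing requires `points` to lie on a lattice, which is what makes the operation usable on point clouds.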
Methods and systems for computer-based determining of presence of objects
A computer-implemented method is provided for processing 3-D point cloud data and associated image data to enrich the 3-D point cloud data with relevant portions of the image data. The method comprises generating a 3-D point cloud data tensor representative of information contained in the 3-D point cloud data and generating an image tensor representative of information contained in the image data; and then analyzing the image tensor to identify a relevant data portion of the image information relevant to at least one object candidate. The method further includes amalgamating the 3-D point cloud data tensor with a relevant portion of the image tensor associated with the relevant data portion of the image information to generate an amalgamated tensor associated with the surrounding area, and storing the amalgamated tensor to be used by a machine learning algorithm (MLA) to determine the presence of the object in the surrounding area.
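The amalgamation step can be sketched as a per-point gather-and-concatenate. This is a simplified sketch assuming the camera projection has already been computed (given here as pixel indices per point); the abstract does not specify how the tensors are combined, so concatenation along the feature axis is an illustrative choice.

```python
import numpy as np

def amalgamate(pc_tensor, img_tensor, pixel_index):
    """Amalgamate per-point LIDAR features with the image features each
    point projects to.

    pc_tensor   : (N, F_pc) point cloud data tensor.
    img_tensor  : (H, W, F_img) image tensor.
    pixel_index : (N, 2) row/col each 3-D point projects to (assumed
                  precomputed from the camera calibration).
    """
    rows, cols = pixel_index[:, 0], pixel_index[:, 1]
    relevant = img_tensor[rows, cols]                  # relevant image portion
    return np.concatenate([pc_tensor, relevant], axis=1)
```

The resulting `(N, F_pc + F_img)` amalgamated tensor is what a downstream MLA would consume to decide object presence.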
Obstacle to path assignment for autonomous systems and applications
In various examples, one or more output channels of a deep neural network (DNN) may be used to determine assignments of obstacles to paths. To increase the accuracy of the DNN, the input to the DNN may include an input image, one or more representations of path locations, and/or one or more representations of obstacle locations. The system may thus repurpose previously computed information—e.g., obstacle locations, path locations, etc.—from other operations of the system, and use them to generate more detailed inputs for the DNN to increase accuracy of the obstacle to path assignments. Once the output channels are computed using the DNN, computed bounding shapes for the objects may be compared to the outputs to determine the path assignments for each object.
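The final comparison step, matching each object's bounding shape against the DNN's path output channels, can be sketched on a grid. This is a toy illustration assuming each path channel is a set of activated grid cells and each obstacle a 2-D bounding box; the overlap-count rule is an assumption, not the patent's exact criterion.

```python
def assign_obstacle_to_path(box, path_masks):
    """Assign an obstacle's bounding box to the path whose output-channel
    mask it overlaps most.

    box        : (x0, y0, x1, y1) bounding shape of the obstacle.
    path_masks : dict path_id -> iterable of (x, y) cells the DNN marked
                 as belonging to that path.
    Returns the best path_id, or None if there is no overlap.
    """
    x0, y0, x1, y1 = box
    best, best_overlap = None, 0
    for path_id, mask in path_masks.items():
        overlap = sum(1 for (x, y) in mask
                      if x0 <= x <= x1 and y0 <= y <= y1)
        if overlap > best_overlap:
            best, best_overlap = path_id, overlap
    return best
```

Running this per detected object yields the obstacle-to-path assignment for the whole scene.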
Plurality of autonomous mobile robots and controlling method for the same
A plurality of autonomous mobile robots includes a first mobile robot and a second mobile robot. The first mobile robot is provided with a transmitting optical sensor for outputting laser light, and a first module for transmitting and receiving an Ultra-Wideband (UWB) signal. The second mobile robot is provided with a receiving optical sensor for receiving the laser light and a plurality of second modules for transmitting and receiving the UWB signal. A control unit of the second mobile robot determines a relative position of the first mobile robot based on the received UWB signal and a determination of whether the laser light is received by the optical sensor.
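The relative-position determination from ranges to a plurality of UWB modules can be sketched as planar two-circle intersection. This is a geometric illustration only: it assumes two modules at known positions on the second robot and noiseless ranges, and it returns one of the two mirror solutions (in the patent, the laser-detection result would disambiguate which side the first robot is on).

```python
import math

def relative_position(anchor_a, anchor_b, d_a, d_b):
    """Estimate the first robot's planar position from UWB ranges d_a, d_b
    to two modules (anchors) mounted on the second robot."""
    ax, ay = anchor_a
    bx, by = anchor_b
    base = math.hypot(bx - ax, by - ay)            # anchor baseline length
    # Distance along the baseline from anchor A to the foot of the target.
    x = (d_a**2 - d_b**2 + base**2) / (2.0 * base)
    # Perpendicular offset (clamped to 0 for slightly inconsistent ranges).
    y = math.sqrt(max(d_a**2 - x**2, 0.0))
    ux, uy = (bx - ax) / base, (by - ay) / base    # baseline unit vector
    px, py = -uy, ux                               # perpendicular unit vector
    return (ax + x * ux + y * px, ay + x * uy + y * py)
```

With anchors at (0, 0) and (2, 0) and both ranges equal to sqrt(2), the estimate is (1, 1); the mirrored solution (1, -1) is the one the laser check would rule out.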