Patent classifications
G05D1/0238
Multi-task multi-sensor fusion for three-dimensional object detection
Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations.
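The point-wise fusion described above can be illustrated with a toy sketch (an assumption for illustration, not the disclosed architecture): project each LIDAR point into the camera image, sample the image feature at the corresponding pixel, and concatenate it with the point's own feature to form a fused per-point representation. The function name, shapes, and pinhole projection are all assumptions.

```python
import numpy as np

def fuse_pointwise(points_xyz, point_feats, image_feats, intrinsics):
    """Illustrative point-wise multi-sensor fusion.

    points_xyz: (N, 3) LIDAR points, already in the camera frame.
    point_feats: (N, Cp) per-point LIDAR features.
    image_feats: (H, W, Ci) image feature map.
    intrinsics: (3, 3) pinhole camera matrix.
    Returns (N, Cp + Ci) fused features.
    """
    h, w, _ = image_feats.shape
    # Project points to pixel coordinates (pinhole model).
    proj = points_xyz @ intrinsics.T
    uv = proj[:, :2] / proj[:, 2:3]
    # Nearest-neighbor sampling, clamped to the image bounds.
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    sampled = image_feats[v, u]          # (N, Ci) image features per point
    # Concatenate LIDAR and image features point by point.
    return np.concatenate([point_feats, sampled], axis=1)
```

ROI-wise fusion in the abstract would operate analogously but pool features over a region of interest rather than a single pixel.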
Bounding box estimation and lane vehicle association
Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
Attract-repel path planner system for collision avoidance
A system for determining a travel direction that avoids objects when a vehicle travels from a current location to a target location is provided. The system determines a travel direction based on an attract-repel model. The system assigns a repel value to each object location and an attract value to the target location. A repel value represents the magnitude of a directional repulsive force, and the attract value represents the magnitude of a directional attractive force. The system calculates an attract-repel field, having an attract-repel magnitude and an attract-repel direction at the current location, based on the repel values and their directions and the attract value and its direction. The system then determines the travel direction for the vehicle to be the direction of the attract-repel field at the current location.
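The attract-repel field computation resembles a classical potential-field planner and can be sketched as follows (a minimal sketch, assuming a unit attractive vector toward the target and inverse-square repulsion from each obstacle; the function name and weights are illustrative, not from the patent):

```python
import math

def attract_repel_direction(current, target, obstacles,
                            attract=1.0, repel=2.0):
    """Return the travel direction (radians) at `current`.

    current, target: (x, y) tuples; obstacles: list of (x, y) tuples.
    `attract` and `repel` weight the attractive and repulsive magnitudes.
    """
    # Attractive component: unit vector toward the target, scaled.
    tx, ty = target[0] - current[0], target[1] - current[1]
    dist_t = math.hypot(tx, ty) or 1e-9
    fx, fy = attract * tx / dist_t, attract * ty / dist_t

    # Repulsive components: point away from each obstacle, with
    # magnitude falling off with the square of the distance.
    for ox, oy in obstacles:
        rx, ry = current[0] - ox, current[1] - oy
        dist_o = math.hypot(rx, ry) or 1e-9
        scale = repel / (dist_o ** 2)
        fx += scale * rx / dist_o
        fy += scale * ry / dist_o

    # The travel direction is the direction of the summed field.
    return math.atan2(fy, fx)
```

With no obstacles the direction points straight at the target; an obstacle off to one side deflects the field away from it.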
Information processing apparatus, information processing method, information processing system, and storage medium
An information processing apparatus for determining control values for controlling a position of a vehicle for conveying a cargo includes an acquisition unit configured to acquire first information for identifying a three-dimensional shape of the cargo based on a captured first image of the cargo, and second information for identifying, based on a captured second image of an environment where the vehicle moves, a distance between an object in the environment and the vehicle, and a determination unit configured to, based on the first information and the second information, determine the control values for preventing the cargo and the object from coming closer than a predetermined distance.
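The determination step above can be sketched in a simplified form (an assumption for illustration only): given the cargo's extent toward an object (derived from the first information) and the sensed vehicle-to-object distance (the second information), clamp the commanded speed so the cargo never comes closer to the object than the predetermined distance. The function name, parameters, and proportional control law are hypothetical.

```python
def control_speed(object_distance, cargo_extent, margin,
                  max_speed, gain=0.5):
    """Return a forward-speed control value in [0, max_speed].

    object_distance: sensed distance from the vehicle to the object.
    cargo_extent: how far the cargo protrudes toward the object.
    margin: the predetermined minimum clearance to maintain.
    """
    clearance = object_distance - cargo_extent - margin
    if clearance <= 0:
        return 0.0          # stop: the cargo would violate the margin
    # Slow down proportionally as the remaining clearance shrinks.
    return min(max_speed, gain * clearance)
```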
Lidar system with integrated circulator
A vehicle, a Lidar system, and a method of detecting an object are disclosed. The Lidar system includes a photonic chip having an aperture, one or more photodetectors, and a circulator. A transmitted light beam generated within the photonic chip exits the photonic chip via the aperture, and a reflected light beam enters the photonic chip via the aperture, the reflected light beam being a reflection of the transmitted light beam from the object. The one or more photodetectors measure a parameter of the object from at least the reflected light beam. The circulator, integrated into the photonic chip, directs the transmitted light beam toward the aperture and directs the reflected light beam from the aperture to the one or more photodetectors. A navigation system navigates the vehicle with respect to the object based on the parameter of the object.
AUTOMATED INSPECTION OF AUTONOMOUS VEHICLE EQUIPMENT
An equipment inspection system receives data captured by a sensor of an autonomous vehicle (AV). The captured data describes a current state of equipment for servicing the AV. The equipment inspection system compares the captured data to a model describing an expected state of the equipment. The equipment inspection system determines, based on the comparison, that the equipment differs from the expected state. The equipment inspection system may transmit data describing the current state of the equipment to an equipment manager. The equipment manager may schedule maintenance for the equipment based on the current state of the equipment.
Systems and methods for managing a semantic map in a mobile robot
Described herein are systems, devices, and methods for maintaining a valid semantic map of an environment for a mobile robot. A mobile robot comprises a drive system, a sensor circuit to sense occupancy information, a memory, a controller circuit, and a communication system. The controller circuit can generate a first semantic map corresponding to a first robot mission using first occupancy information and first semantic annotations, and transfer the first semantic annotations to a second semantic map corresponding to a subsequent second robot mission. The controller circuit can generate the second semantic map to include second semantic annotations based on the transferred first semantic annotations. User feedback on the first or the second semantic map can be received via the communication system. The controller circuit can update the first semantic map and use it to navigate the mobile robot in a future mission.
SYSTEM AND METHOD TO COMBINE INPUT FROM MULTIPLE SENSORS ON AN AUTONOMOUS WHEELCHAIR
The invention discloses a system for controlling the movement of a personal mobility vehicle. The system includes a processing unit that receives and processes location data of one or more obstacles over a period of time, determines a frequency of change of location of the obstacles during that period, and generates a movement state categorization of each obstacle as either a dynamic obstacle or a static obstacle. The processing unit further determines the distance traveled by a dynamic obstacle during the period and the velocity of the dynamic obstacle. Based on the frequency of change of location, the processing unit determines movement probability data relating to the probability that a static obstacle will move. Based on the velocity of dynamic obstacles during various time intervals of the period, the processing unit determines velocity prediction data relating to the predicted velocity of a dynamic obstacle.
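The categorization step described above can be illustrated with a minimal sketch (assumed, not the patented implementation): classify an obstacle from its timestamped location history by the fraction of observation intervals in which it moved, then estimate a dynamic obstacle's velocity over the period. The threshold values and function name are assumptions.

```python
def categorize_obstacle(locations, timestamps, move_threshold=0.1):
    """Classify an obstacle from timestamped (x, y) observations.

    Returns ("dynamic", velocity) or ("static", change_frequency),
    where change_frequency is the fraction of intervals in which the
    obstacle moved more than `move_threshold`.
    """
    moves = 0
    total_dist = 0.0
    for (x0, y0), (x1, y1) in zip(locations, locations[1:]):
        step = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        total_dist += step
        if step > move_threshold:
            moves += 1
    intervals = max(len(locations) - 1, 1)
    change_frequency = moves / intervals
    if change_frequency > 0.5:
        # Dynamic: report average velocity over the whole period.
        duration = timestamps[-1] - timestamps[0]
        return "dynamic", total_dist / duration
    # Static: the change frequency feeds the movement-probability step.
    return "static", change_frequency
```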
MOBILE OBJECT CONTROL DEVICE, MOBILE OBJECT, LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM
A mobile object control device includes a route determiner configured to determine a route of a mobile object according to the number of obstacles existing around the mobile object; and a controller configured to move the mobile object along the route determined by the route determiner.
SYSTEMS AND METHODS FOR MANAGING MULTIPLE AUTONOMOUS VEHICLES
Control system and method for managing transport of vehicles in a warehouse. A network of cameras provides coverage over the route network by capturing images and sending image data to a central control unit, which processes the images and generates signals to control the movement of robot slaves. The control system also includes a calibration mechanism to calibrate a map of the network of routes and an obstruction matrix function. The robot slaves include a safety override mechanism that controls a robot slave autonomously and independently upon detecting an obstacle or an unexpected hazard in the path of its movement along a route of the warehouse network.