Patent classifications
G06V20/56
USER SAFETY AND SUPPORT IN SEARCH AND RESCUE MISSIONS
Locating, aiding, and communicating with users and personnel in emergency situations by traversing a defined path utilizing an unmanned vehicle, detecting a user within a threshold distance of the defined path, logging a geolocation of the user within the unmanned vehicle, and determining whether to dispatch assistance to the user.
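The "within a threshold distance of the defined path" test in the abstract above is essentially a point-to-polyline distance check. A minimal sketch, assuming the path is a list of 2D waypoints and using hypothetical function names (the patent does not specify an implementation):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all 2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                     # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so the foot stays on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def detect_user_near_path(user_pos, path_waypoints, threshold):
    """Return (True, geolocation to log) if the user lies within
    `threshold` of any segment of the defined path, else (False, None)."""
    for a, b in zip(path_waypoints, path_waypoints[1:]):
        if point_segment_distance(user_pos, a, b) <= threshold:
            return True, user_pos
    return False, None
```

In a real system the positions would be geodetic coordinates and the distance computation would account for the Earth's curvature; planar geometry is used here only to keep the sketch short.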
IDENTIFICATION OF SPURIOUS RADAR DETECTIONS IN AUTONOMOUS VEHICLE APPLICATIONS
The described aspects and implementations enable fast and accurate verification of radar detection of objects in autonomous vehicle (AV) applications using combined processing of radar data and camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining radar data characterizing the intensity of radar reflections from an environment of the AV, identifying, based on the radar data, a candidate object, obtaining a camera image depicting a region where the candidate object is located, and processing the radar data and the camera image using one or more machine-learning models to obtain a classification measure representing a likelihood that the candidate object is a real object.
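The fusion step described above can be sketched as pooling each modality into a feature vector, concatenating, and mapping to a likelihood. This toy version stands in for the "one or more machine-learning models" of the abstract; the pooling choices, weights, and function names are illustrative assumptions, not the patented architecture:

```python
import numpy as np

def fused_classification_measure(radar_patch, camera_crop, w, b):
    """Toy fusion head producing a classification measure in (0, 1).
    radar_patch : 2D array of reflection intensities around the candidate.
    camera_crop : 3D array (H, W, 3) of the image region where the
                  candidate object is located.
    w, b        : weights/bias of a logistic layer (in practice learned)."""
    # Hand-rolled pooling stands in for learned feature extractors.
    radar_feat = np.array([radar_patch.mean(), radar_patch.max(), radar_patch.std()])
    cam_feat = camera_crop.reshape(-1, 3).mean(axis=0)   # mean colour per channel
    features = np.concatenate([radar_feat, cam_feat])    # 6-dim fused feature
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))     # logistic likelihood
```

A production system would replace the pooling with convolutional encoders for each modality, but the shape of the computation (per-modality features, fusion, scalar likelihood) is the same.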
Robotic Source Detection Device And Method
An autonomous robotic vehicle is capable of detecting, identifying, and locating the source of gas leaks such as methane. Because of the number of operating components within the vehicle, it may also be considered a robotic system. The robotic vehicle can be remotely operated or can move autonomously within a jobsite. The vehicle selectively deploys a source detection device that precisely locates the source of a leak. The vehicle relays data to stakeholders and remains powered that enables operation of the vehicle over an extended period. Monitoring and control of the vehicle is enabled through a software interface viewable to a user on a mobile communications device or personal computer.
CONTEXT BASED LANE PREDICTION
A method for context based lane prediction, the method may include obtaining sensed information regarding an environment of a vehicle; providing the sensed information to a second trained machine learning process; and locating one or more lane boundaries by the second trained machine learning process. The second trained machine learning process is generated by: performing a self-supervised training process, using a first dataset, of a first machine learning process to provide a first trained machine learning process; wherein the first trained machine learning process comprises a first encoder portion and a first decoder portion; replacing the first decoder portion by a second decoder portion to provide a second machine learning process; and performing an additional training process, using a second dataset that is associated with lane boundary metadata, of the second machine learning process to provide a second trained machine learning process.
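The two-stage training recipe above (self-supervised pretraining, then swapping the decoder and fine-tuning on labeled data) can be sketched with linear stand-ins. Here a closed-form linear autoencoder plays the role of the first machine learning process, and a least-squares head plays the second decoder; all data, dimensions, and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: self-supervised training on the first (unlabeled) dataset.
# A linear autoencoder, fitted in closed form via SVD, stands in for the
# first machine learning process.
X = rng.normal(size=(200, 8))          # first dataset: unlabeled sensed info
U, S, Vt = np.linalg.svd(X, full_matrices=False)
encoder = Vt[:3].T                     # first encoder portion: 8 -> 3
first_decoder = Vt[:3]                 # first decoder portion: 3 -> 8 (will be replaced)

# Stage 2: replace the decoder with a lane-prediction head and train it on
# the second dataset (associated with lane boundary metadata), reusing the
# pretrained encoder.
X2 = rng.normal(size=(100, 8))                 # second dataset: sensed info
Y2 = X2 @ rng.normal(size=(8, 2))              # toy lane-boundary labels
Z2 = X2 @ encoder                              # features from frozen encoder
second_decoder, *_ = np.linalg.lstsq(Z2, Y2, rcond=None)  # new decoder: 3 -> 2

def locate_lane_boundaries(sensed):
    """The 'second trained machine learning process': encoder + new decoder."""
    return sensed @ encoder @ second_decoder
```

In practice both stages would use deep networks and gradient descent; the point of the sketch is the structure: pretrain, discard the pretraining decoder, attach and train a task-specific one.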
Hyper planning based on object and/or region
A vehicle computing system may implement techniques to predict behavior of objects detected by a vehicle operating in the environment. The techniques may include determining a feature with respect to a detected object (e.g., likelihood that the detected object will impact operation of the vehicle) and/or a location of the vehicle and determining based on the feature a model to use to predict behavior (e.g., estimated states) of proximate objects (e.g., the detected object). The model may be configured to use one or more algorithms, classifiers, and/or computational resources to predict the behavior. Different models may be used to predict behavior of different objects and/or regions in the environment. Each model may receive sensor data as an input, and output predicted behavior for the detected object. Based on the predicted behavior of the object, a vehicle computing system may control operation of the vehicle.
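The core idea above, routing each detected object to a prediction model based on a per-object feature, can be sketched as a simple dispatcher. The model names, the single feature, and the threshold are all hypothetical; the patent covers far richer selection criteria:

```python
def select_prediction_model(impact_likelihood, threshold=0.5):
    """Pick a behavior-prediction model for a detected object based on a
    feature (here, the likelihood the object impacts vehicle operation)."""
    if impact_likelihood >= threshold:
        # High-relevance object: spend more compute on a richer model.
        return "learned_trajectory_model"
    # Low-relevance object: cheap kinematic extrapolation suffices.
    return "constant_velocity_model"

def predict_behaviors(detections):
    """detections: list of (object_id, impact_likelihood) pairs.
    Returns the model assigned to each object."""
    return {oid: select_prediction_model(p) for oid, p in detections}
```

The design payoff is budget allocation: expensive predictors run only on the objects (or regions) most likely to matter, while everything else gets a lightweight model.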
Leveraging machine vision and artificial intelligence in assisting emergency agencies
A system for locating according to a data description includes an interface and a processor. The interface is configured to receive the data description. The processor is configured to create a model-based item identification job based at least in part on the data description; provide the model-based item identification job to a set of vehicle event recorder systems, wherein the model-based item identification job uses a model to identify sensor data resembling the data description; receive the sensor data from the set of vehicle event recorder systems; and store the sensor data associated with the model-based item identification job.
Efficient road coordinates transformations library
A system and method operate an autonomous vehicle. A sensor senses a road and an object. A processor determines, in a Cartesian reference frame, a representation of the road and a source point representative of the object, samples a first waypoint and a second waypoint from the representation of the road, determines a linear projection of the source point to a line connecting the first waypoint and the second waypoint, determines a first estimate of a longitudinal component of the source point in a road-based reference frame based on the linear projection, the first estimate being on a curve representing the road between the first waypoint and the second waypoint, determines a second estimate of the longitudinal component from the first estimate, determines a coordinate of the source point in the road-based reference frame from the second estimate and operates the vehicle with respect to the object using the coordinate.
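The "first estimate of a longitudinal component" described above is a projection onto the chord between two sampled waypoints, with the waypoints' known arc lengths interpolated. A minimal 2D sketch, assuming each waypoint carries its arc length along the road (the patent's refinement of this first estimate is omitted):

```python
import math

def longitudinal_estimate(source, wp1, s1, wp2, s2):
    """Project a Cartesian source point onto the line through two sampled
    waypoints and interpolate their arc lengths s1, s2 to get a first
    estimate of the longitudinal component in the road-based frame.
    Also returns the signed lateral offset (positive = left of travel)."""
    ax, ay = wp1; bx, by = wp2; px, py = source
    dx, dy = bx - ax, by - ay
    # Linear projection parameter of the source point onto the chord.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    s = s1 + t * (s2 - s1)                       # longitudinal first estimate
    lateral = ((py - ay) * dx - (px - ax) * dy) / math.hypot(dx, dy)
    return s, lateral
```

On a curved road this chord-based estimate is only approximate, which is why the method goes on to compute a second, refined estimate before producing the final road-frame coordinate.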
Method and apparatus for de-biasing the detection and labeling of objects of interest in an environment
Described herein are methods of generating learning data to facilitate de-biasing the labeled location of an object of interest within an image. Methods may include: receiving sensor data, where the sensor data is a first image; determining reference corner locations of an object in the first image using image processing; generating observed corner locations of the object in the first image from the determined reference corner locations; generating a bias transformation based, at least in part, on a difference between the reference corner locations and the observed corner locations of the object in the first image; receiving sensor data from another image sensor of a second image; receiving observed corner locations of an object in the second image from a user; and applying the bias transformation to the observed corner locations of the object in the second image to generate de-biased corners for the object in the second image.
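The bias-transformation idea above can be sketched with the simplest possible model: a mean translation between reference corners (from image processing) and observed corners (from a human labeler), applied in reverse to de-bias labels on later images. The translation-only model is an assumption for brevity; the patent permits richer transformations:

```python
import numpy as np

def estimate_bias(reference_corners, observed_corners):
    """Bias transformation estimated as the mean offset between
    reference corner locations and observed (labeled) corner locations.
    Both inputs: arrays of shape (n_corners, 2)."""
    return np.mean(np.asarray(observed_corners, float)
                   - np.asarray(reference_corners, float), axis=0)

def debias(observed_corners, bias):
    """Apply the inverse of the bias to observed corners from a second
    image, yielding de-biased corner locations."""
    return np.asarray(observed_corners, float) - bias
```

The workflow matches the abstract: fit the bias on an image where both reference and observed corners exist, then apply it to human-labeled corners on new images where no reference is available.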
High-definition city mapping
A vehicle generates a city-scale map. The vehicle includes one or more Lidar sensors configured to obtain point clouds at different positions, orientations, and times, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the vehicle to perform registering, in pairs, a subset of the point clouds based on respective surface normals of each of the point clouds; determining loop closures based on the registered subset of point clouds; determining a position and an orientation of each of the subset of the point clouds based on constraints associated with the determined loop closures; and generating a map based on the determined position and the orientation of each of the subset of the point clouds.
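Pairwise registration "based on respective surface normals" is typically built from point-to-plane error terms. The sketch below solves only the translation part of one such alignment for pre-matched correspondences; rotation estimation and the loop-closure pose graph from the abstract are omitted, and the function name is illustrative:

```python
import numpy as np

def point_to_plane_translation(src, dst, dst_normals):
    """Find the translation t minimizing sum_i ((src_i + t - dst_i) . n_i)^2,
    the point-to-plane error for matched correspondences, where n_i is the
    surface normal at dst_i. Inputs: arrays of shape (n, 3)."""
    src, dst, n = (np.asarray(a, float) for a in (src, dst, dst_normals))
    A = n                                       # gradient of each residual wrt t
    b = np.einsum("ij,ij->i", dst - src, n)     # target: (dst_i - src_i) . n_i
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

Using normals this way lets flat surfaces (roads, building facades) constrain the alignment even when exact point matches do not exist, which is what makes normal-based registration effective on urban Lidar scans.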