Patent classifications
B60W2554/4029
NETWORK ARCHITECTURE FOR THE JOINT LEARNING OF MONOCULAR DEPTH PREDICTION AND COMPLETION
Systems, methods, and other embodiments described herein relate to determining depths of a scene from a monocular image. In one embodiment, a method includes generating depth features from sensor data according to whether the sensor data includes sparse depth data. The method includes selectively injecting the depth features into a depth model. The method includes generating a depth map from at least a monocular image using the depth model that is guided by the depth features when injected. The method includes providing the depth map as depth estimates of objects represented in the monocular image.
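The selective-injection idea above can be sketched in a few lines. This is a hypothetical illustration, not the patented architecture: the encoder, the additive fusion, and all function names are assumptions made for clarity.

```python
# Hypothetical sketch of selective depth-feature injection (names and the
# additive fusion are illustrative, not from the patent). When sparse depth
# is available, depth features are computed and injected to guide the model;
# otherwise the depth model runs on the monocular image alone.
import numpy as np

def encode_sparse_depth(sparse_depth):
    """Toy depth-feature encoder: normalize valid (non-zero) depths."""
    valid = sparse_depth > 0
    feats = np.zeros_like(sparse_depth, dtype=float)
    if valid.any():
        feats[valid] = sparse_depth[valid] / sparse_depth[valid].max()
    return feats

def predict_depth(image_feats, sparse_depth=None):
    """Guide the (toy) depth head with injected depth features if present."""
    if sparse_depth is not None and (sparse_depth > 0).any():
        depth_feats = encode_sparse_depth(sparse_depth)
        fused = image_feats + depth_feats   # injection via additive fusion
    else:
        fused = image_feats                 # monocular-only path
    return fused  # stand-in for the decoded depth map
```

In a real network the fusion would happen between convolutional feature maps inside the model; the branch on whether sparse depth exists is the point being illustrated.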
REGION OF INTEREST SELECTION FOR OBJECT DETECTION
An object detection system may generate regions of interest (ROIs) from an input image that can be processed by a wide range of object detectors. According to the techniques described herein, an image is processed by a light-weight neural network (e.g., a heatmap network) that outputs object center and object scale heatmaps. The heatmaps are processed to define ROIs that are likely to include objects. Overlapping ROIs are then merged to reduce the aggregate size of the ROIs, and merged ROIs are downscaled to a reduced set of pre-defined resolutions. Fully-convolutional, high-accuracy object detectors may then operate on the downscaled ROIs to output accurate detections at a fraction of the computational cost by operating on a reduced image. For example, fully-convolutional, high-accuracy object detectors may operate on a subset of the entire image (e.g., cropped images based on ROIs), thus reducing computations otherwise performed over the entire image.
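The ROI-merging step described above can be illustrated with a simple IoU-based union. This is an assumed sketch, not the patented algorithm: the threshold, single-pass greedy strategy, and box format are illustrative choices.

```python
# Illustrative sketch (not the patented method): merge overlapping ROIs by
# replacing boxes whose IoU exceeds a threshold with their bounding union,
# reducing the aggregate area a downstream detector must process.
# Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_rois(rois, thresh=0.1):
    """Greedy single pass: fold each ROI into the first overlapping merge."""
    merged = []
    for roi in rois:
        for i, m in enumerate(merged):
            if iou(roi, m) > thresh:
                merged[i] = (min(roi[0], m[0]), min(roi[1], m[1]),
                             max(roi[2], m[2]), max(roi[3], m[3]))
                break
        else:
            merged.append(roi)
    return merged
```

For example, `merge_rois([(0, 0, 10, 10), (5, 5, 15, 15), (100, 100, 110, 110)])` folds the first two overlapping boxes into `(0, 0, 15, 15)` while leaving the distant box untouched.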
SYSTEMS AND METHODS FOR RECONSTRUCTION OF A VEHICULAR CRASH
A system for notifying emergency services of a vehicular crash may (i) receive sensor data of a vehicular crash from at least one mobile device associated with a user; (ii) generate a scenario model of the vehicular crash based upon the received sensor data; (iii) store the scenario model; and/or (iv) transmit a message to one or more emergency services based upon the scenario model. As a result, the speed and accuracy of deploying emergency services to the vehicular crash location is increased. The system may also utilize vehicle occupant positional data, and internal and external sensor data to detect potential imminent vehicle collisions, take corrective actions, automatically engage autonomous or semi-autonomous vehicle features, and/or generate virtual reconstructions of the vehicle collision.
DRIVER BEHAVIOR RISK ASSESSMENT AND PEDESTRIAN AWARENESS
Driver behavior risk assessment and pedestrian awareness may include receiving an input stream of images of an environment including one or more objects within the environment, estimating an intention of an ego vehicle based on the input stream of images and a temporal recurrent network (TRN), generating a scene representation based on the input stream of images and a graph neural network (GNN), generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.
Detecting sensor degradation by actively controlling an autonomous vehicle
Methods and systems are disclosed for determining sensor degradation by actively controlling an autonomous vehicle. Determining sensor degradation may include obtaining sensor readings from a sensor of an autonomous vehicle, and determining baseline state information from the obtained sensor readings. A movement characteristic of the autonomous vehicle, such as speed or position, may then be changed. The sensor may then obtain additional sensor readings, and second state information may be determined from these additional sensor readings. Expected state information may be determined from the baseline state information and the change in the movement characteristic of the autonomous vehicle. A comparison of the expected state information and the second state information may then be performed. Based on this comparison, a determination may be made as to whether the sensor has degraded.
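The comparison loop described above can be sketched compactly. This is a hedged illustration under a toy 1-D state model; the tolerance, the static-landmark assumption, and all names are inventions for clarity, not details from the disclosure.

```python
# Hedged sketch of the degradation check (state model and tolerance are
# illustrative assumptions): predict the expected reading from the baseline
# and the commanded motion change, then compare against the new reading.
def expected_range(baseline_range, delta_position):
    """Expected distance to a static landmark after moving delta_position
    meters directly toward it (toy 1-D state model)."""
    return baseline_range - delta_position

def sensor_degraded(baseline_range, delta_position, new_range, tol=0.5):
    """Flag degradation when the new reading deviates from the expected
    reading by more than an assumed tolerance (meters)."""
    return abs(new_range - expected_range(baseline_range, delta_position)) > tol
```

Here a reading of 40.2 m after closing 10 m on a 50 m baseline is within tolerance, while a reading of 45 m is flagged: the sensor failed to track the vehicle's own commanded motion.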
DRIVING ASSISTANCE APPARATUS
A driving assistance apparatus is configured to determine whether a preset deceleration assistance start condition is satisfied, start deceleration assistance for a driver's vehicle against a first deceleration-triggering object, determine whether a preset deceleration assistance termination condition is satisfied, issue a deceleration assistance termination notification to a driver of the driver's vehicle, determine whether a second deceleration-triggering object is detected ahead of the driver's vehicle, and determine whether a preset quick resumption condition is satisfied. The driving assistance apparatus is configured not to issue the deceleration assistance termination notification when the driving assistance apparatus determines that the quick resumption condition is satisfied, even in a case where the driving assistance apparatus determines that the deceleration assistance termination condition for the first deceleration-triggering object is satisfied.
METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS
The present disclosure relates to methods and systems for spatiotemporal graph modelling of road users in observed frames of an environment in which an autonomous vehicle operates (i.e., a traffic scene), clustering of the road users into categories, and providing the spatiotemporal graph to a trained graph neural network (GNN) to predict a future pedestrian action. The future pedestrian action is one of: the pedestrian will cross the road, or the pedestrian will not cross the road. The spatiotemporal graph provides a richer representation of the observed frames (i.e., the traffic scene).
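One way to picture the spatiotemporal graph is as nodes of (frame, road-user) pairs joined by two edge types. The construction below is an assumed illustration (input format, distance threshold, and edge rules are not taken from the disclosure): spatial edges connect nearby users within a frame, and temporal edges link the same user across consecutive frames.

```python
# Illustrative spatiotemporal graph construction (formats assumed, not from
# the disclosure). Nodes are (frame_index, user_id) pairs; spatial edges
# connect users within a distance threshold in the same frame; temporal
# edges link the same user across consecutive frames.
from math import dist

def build_st_graph(frames, spatial_thresh=5.0):
    """frames: list of dicts mapping user_id -> (x, y) position."""
    edges = []
    for t, users in enumerate(frames):
        ids = list(users)
        # spatial edges within the current frame
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                if dist(users[ids[i]], users[ids[j]]) <= spatial_thresh:
                    edges.append(((t, ids[i]), (t, ids[j])))
        # temporal edges to the next frame
        if t + 1 < len(frames):
            for uid in users:
                if uid in frames[t + 1]:
                    edges.append(((t, uid), (t + 1, uid)))
    return edges
```

A trained GNN would then aggregate messages over these edges to classify the pedestrian node as crossing or not crossing.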
Vehicle control device, method and computer program product
A vehicle control device includes a crossing vehicle detection sensor configured to detect a crossing vehicle approaching an own vehicle while the own vehicle is traveling in an intersecting lane, the intersecting lane being a lane that intersects an own vehicle lane at an intersection at a time the own vehicle approaches the intersection, the crossing vehicle being a vehicle travelling in the intersecting lane; and a controller configured to automatically brake the own vehicle to avoid a collision between the own vehicle and the crossing vehicle under a condition that the own vehicle enters the intersecting lane. The controller is configured to set, between the own vehicle and the crossing vehicle, a virtual area that moves with the crossing vehicle and that extends in an advancing direction of the crossing vehicle, and automatically brake the own vehicle to prevent the own vehicle from contacting the virtual area.
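The virtual area described above can be modeled geometrically. The sketch below is a hypothetical rendering (the rectangle dimensions and the frame transform are assumptions, not claim language): the area is a rectangle extending ahead of the crossing vehicle along its advancing direction, and the own vehicle brakes if its predicted position falls inside it.

```python
# Geometric sketch of the virtual area (dimensions are assumptions, not
# from the disclosure): a rectangle of assumed length/width extending ahead
# of the crossing vehicle along its advancing direction. The own vehicle
# would be braked if its position lies inside the rectangle.
def in_virtual_area(own_pos, crossing_pos, heading, length=20.0, half_width=1.5):
    """heading: unit vector (dx, dy) of the crossing vehicle's advance."""
    rx = own_pos[0] - crossing_pos[0]
    ry = own_pos[1] - crossing_pos[1]
    # longitudinal / lateral offsets in the crossing vehicle's frame
    lon = rx * heading[0] + ry * heading[1]
    lat = -rx * heading[1] + ry * heading[0]
    return 0.0 <= lon <= length and abs(lat) <= half_width
```

Because the rectangle is defined relative to `crossing_pos` and `heading`, it moves with the crossing vehicle, matching the "moves with the crossing vehicle" behavior in the abstract.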
Vehicle control systems
Apparatuses, systems, and methods are provided for the utilization of vehicle control systems to cause a vehicle to take preventative action responsive to the detection of a near-term adverse driving scenario. A vehicle control system may receive information corresponding to vehicle operation data and ancillary data. Based on the received vehicle operation data and the received ancillary data, a multi-dimension risk score module may calculate risk scores associated with the received vehicle operation data and the received ancillary data. Subsequently, the vehicle control systems may cause the vehicle to perform at least one of a close call detection action and a close call detection alert to lessen the risk associated with the received vehicle operation data and the received ancillary data.
Safe state to safe state navigation
Systems and methods are provided for vehicle navigation. In one implementation, a system may comprise an interface to obtain sensing data of an environment of the host vehicle. A processing device may be configured to determine a planned navigational action for the host vehicle; identify, from the sensing data, a target vehicle in the environment of the host vehicle; predict a distance between the host vehicle and the target vehicle that would result if the planned navigational action were taken; determine a host vehicle braking distance based on a braking capability, acceleration capability, and speed of the host vehicle; determine a target vehicle braking distance based on a speed and braking capability of the target vehicle; and implement the planned navigational action when the predicted distance of the planned navigational action is greater than a safe longitudinal distance calculated based on the host vehicle and target vehicle braking distances.
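The safety comparison above can be worked as a short calculation. An RSS-style formulation is assumed here; the reaction-time term and all parameter values are illustrative, and the exact terms in the patent may differ.

```python
# Hedged sketch of the safe-longitudinal-distance check (an RSS-style
# formulation is assumed; the patent's exact terms may differ). The action
# is allowed only when the predicted gap exceeds the host's worst-case
# stopping distance (including a reaction interval at maximum acceleration)
# minus the distance the target can cover while braking at full capability.
def braking_distance(speed, max_brake):
    """Distance to stop from `speed` (m/s) at deceleration `max_brake` (m/s^2)."""
    return speed * speed / (2.0 * max_brake)

def safe_longitudinal_distance(v_host, a_host_accel, a_host_brake,
                               v_target, a_target_brake, reaction=0.5):
    v_peak = v_host + a_host_accel * reaction            # speed after reaction
    host_stop = (v_host * reaction
                 + 0.5 * a_host_accel * reaction ** 2    # distance while reacting
                 + braking_distance(v_peak, a_host_brake))
    target_stop = braking_distance(v_target, a_target_brake)
    return max(host_stop - target_stop, 0.0)

def action_allowed(predicted_gap, *args, **kwargs):
    return predicted_gap > safe_longitudinal_distance(*args, **kwargs)
```

With assumed values (host at 20 m/s, 2 m/s² acceleration, 8 m/s² braking; target at 15 m/s with 6 m/s² braking), the safe distance works out to about 19.1 m, so a 25 m predicted gap permits the action while a 15 m gap does not.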