
Vehicle information provision device

A vehicle information provision device includes a travel state detection unit; a surroundings situation detection unit; a potential hazard detection unit that detects a potential hazard based on the situation detected by the surroundings situation detection unit; a driver state detection unit that detects the state of the driver during self-driving of a vehicle; a driver state determination unit configured to determine, based on the state of the driver detected by the driver state detection unit, whether or not the driver is observing the situation in the vehicle surroundings; and an information control unit that provides information regarding the potential hazard to the driver when the driver is observing the situation in the vehicle surroundings, and restricts provision of that information when the driver is not observing the situation in the vehicle surroundings during self-driving of the vehicle.
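The gating behavior described above can be sketched in a few lines. This is an illustrative Python sketch, not the patent's implementation; the gaze-fraction threshold rule and all names are assumptions.

```python
def is_observing(gaze_on_road_fraction: float, threshold: float = 0.6) -> bool:
    """Hypothetical determination rule: the driver counts as observing the
    surroundings when the recent fraction of gaze samples on the road
    meets a threshold."""
    return gaze_on_road_fraction >= threshold


def provide_hazard_information(driver_observing: bool, hazards: list) -> list:
    """Gate hazard notifications on the driver state during self-driving:
    pass them through when the driver is observing the surroundings,
    restrict (suppress) them when not."""
    if driver_observing:
        return hazards
    return []  # provision restricted while the driver is not observing
```

The point of the restriction is that a driver who is not monitoring the scene cannot act on a hazard notice, so presenting it adds distraction without benefit.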

System, vehicle and method for adapting a driving condition of a vehicle upon detecting an event in an environment of the vehicle

Methods and systems are provided for adapting a driving condition of a vehicle. The system includes a non-transitory computer-readable medium having stored thereon a pre-programmed driving maneuver of the vehicle. The system also includes a processor configured to obtain audio data of an acoustic source in the environment of the vehicle. The processor is further configured to determine a receiving direction of the acoustic source based on the audio data, and to determine whether the acoustic source is located within the driving maneuver of the vehicle based on the pre-programmed or updated driving maneuvers and the determined receiving direction. Furthermore, the processor is configured to determine a range between the vehicle and the acoustic source in order to determine whether the acoustic source is located within the driving path of the vehicle.
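Given a receiving direction and a range, the in-path test reduces to placing the source in the vehicle frame and checking it against a corridor around the planned maneuver. A minimal geometric sketch, assuming a polyline path in the vehicle frame and an illustrative corridor half-width; none of the names come from the patent:

```python
import math


def point_segment_distance(px, py, ax, ay, bx, by):
    """Shortest distance from point (px, py) to the segment (a, b)."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)


def source_in_driving_path(bearing_deg, range_m, path, half_width_m=2.0):
    """Place the acoustic source from its receiving direction (bearing) and
    range, then test whether it lies within a corridor of the planned
    maneuver path (a polyline of (x, y) points in the vehicle frame)."""
    sx = range_m * math.cos(math.radians(bearing_deg))
    sy = range_m * math.sin(math.radians(bearing_deg))
    return any(
        point_segment_distance(sx, sy, *path[i], *path[i + 1]) <= half_width_m
        for i in range(len(path) - 1)
    )
```

A source directly ahead on the path triggers the check; one well off to the side does not, which is why the range estimate matters in addition to the bearing.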

Methods and apparatuses for operating a self-driving vehicle

Aspects of the present disclosure may include methods, apparatuses, and computer readable media for receiving one or more images having a plurality of objects, receiving a notification from an occupant of the self-driving vehicle, generating an attention map highlighting the plurality of objects based on at least one of the one or more images and the notification, and providing at least one of a steering control or a velocity control to operate the self-driving vehicle based on the attention map and the notification.
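One way to read the attention-map step is as reweighting detected objects by whether the occupant's notification mentions them, then deriving controls from the highest-weight object. The following is a toy Python sketch under that reading; the weights, substring matching, and control rule are all illustrative assumptions:

```python
def build_attention_map(objects, notification, base_weight=0.1, boost_weight=1.0):
    """Assign high attention to detected objects named in the occupant's
    notification and low attention to the rest. `objects` is a list of
    dicts with 'label' and 'x' (lateral offset of the object)."""
    return [
        {**obj, "attention": boost_weight if obj["label"] in notification else base_weight}
        for obj in objects
    ]


def controls_from_attention(attended, steer_gain=0.1, slow_threshold=0.5):
    """Toy control rule: steer away from the most-attended object and
    reduce velocity when its attention exceeds a threshold."""
    top = max(attended, key=lambda o: o["attention"])
    steering = -steer_gain * top["x"] if top["attention"] >= slow_threshold else 0.0
    velocity = 0.5 if top["attention"] >= slow_threshold else 1.0
    return steering, velocity
```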

Learning in Lane-Level Route Planner
20220274624 · 2022-09-01

Lane-level route planning includes obtaining lane-level information of a road, where the road includes a first lane and a second lane and the lane-level information includes first lane information related to the first lane and second lane information related to the second lane; converting the lane-level information to probabilities for a state transition function; receiving a destination; and obtaining a policy as a solution to a model that uses the state transition function.
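Once lane-level information is converted to transition probabilities, the policy can be obtained by solving the resulting model. As a minimal sketch, here is a two-lane backward-induction solver where the only stochastic transition is a lane change that succeeds with some probability; the cost table and success probability are illustrative stand-ins for the converted lane information:

```python
def lane_policy(costs, p_success=0.9):
    """Two-lane backward induction: costs[seg][lane] is the cost of
    traversing segment `seg` in `lane`; action 0 = keep lane, 1 = attempt
    a lane change that succeeds with probability p_success. Returns a
    per-segment, per-lane action table and expected costs from the start."""
    values = [0.0, 0.0]  # expected cost-to-go past the last segment
    policy = []
    for seg_costs in reversed(costs):
        new_values, actions = [0.0, 0.0], [0, 0]
        for lane in (0, 1):
            other = 1 - lane
            stay = seg_costs[lane] + values[lane]
            change = (seg_costs[lane]
                      + p_success * values[other]
                      + (1 - p_success) * values[lane])
            if change < stay:
                new_values[lane], actions[lane] = change, 1
            else:
                new_values[lane], actions[lane] = stay, 0
        values = new_values
        policy.insert(0, actions)
    return policy, values
```

With an expensive final segment in lane 0, the solved policy attempts the change one segment early, which is the kind of lookahead a lane-level planner buys over greedy lane choice.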

TECHNOLOGIES FOR IMAGE SIGNAL PROCESSING AND VIDEO PROCESSING
20220277164 · 2022-09-01

Systems, methods, and computer-readable media are provided for efficient control and data utilization between processing components of a system. A method can include obtaining image data captured by an image sensor; prior to a first computing component performing a first set of operations on the image data and a second computing component performing a second set of operations on the image data, determining one or more common operations included in the first set of operations and the second set of operations, wherein the first set of operations is different than the second set of operations; performing the one or more common operations on the image data; and generating an output of the one or more common operations for use by the first computing component to perform the first set of operations and the second computing component to perform the second set of operations.
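The factoring described above amounts to running shared work once and fanning the intermediate result out to both components. A small Python sketch, assuming (purely for illustration) that the common operations form a shared prefix of both pipelines; the operation names are invented and a call counter stands in for real work:

```python
CALLS = {"denoise": 0}


def denoise(frame):
    CALLS["denoise"] += 1
    return [v + 1 for v in frame]  # stand-in for a real filter


def white_balance(frame):
    return [v * 2 for v in frame]


def sharpen(frame):
    return [v + 10 for v in frame]


def run_pipelines(frame, ops_a, ops_b):
    """Run the operations common to both components once on the shared
    image data, then hand the intermediate result to each component's
    remaining operations."""
    common = []
    for op_a, op_b in zip(ops_a, ops_b):
        if op_a is op_b:
            common.append(op_a)
        else:
            break
    shared = frame
    for op in common:
        shared = op(shared)
    out_a, out_b = shared, shared
    for op in ops_a[len(common):]:
        out_a = op(out_a)
    for op in ops_b[len(common):]:
        out_b = op(out_b)
    return out_a, out_b
```

The call counter confirms the common operation executes once, not once per component, which is the efficiency claim of the abstract.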

System and method for predicting the movement of pedestrians

A system and related method for predicting movement of a plurality of pedestrians may include one or more processors and a memory. The memory includes an initial trajectory module, an exit point prediction module, a path planning module, and an adjustment module. The modules include instructions that when executed by the one or more processors cause the one or more processors to obtain trajectories of the plurality of pedestrians, predict future exit points for the plurality of pedestrians from a scene based on the trajectories of the plurality of pedestrians, determine trajectory paths of the plurality of pedestrians based on the future exit points and at least one scene element of a map, and adjust the trajectory paths based on at least one predicted interaction between at least two of the plurality of pedestrians.
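The exit-point prediction step can be illustrated with constant-velocity extrapolation to the scene boundary. This is a simplified sketch, not the module described in the patent; the rectangle scene model and the assumption that the last observed point lies inside it are mine:

```python
def predict_exit_point(trajectory, bounds):
    """Extrapolate the last observed velocity of a pedestrian until the
    track leaves the scene rectangle bounds = (xmin, ymin, xmax, ymax);
    the crossing point is the predicted exit point. Assumes the last
    point is inside the bounds."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = x1 - x0, y1 - y0
    xmin, ymin, xmax, ymax = bounds
    times = []
    if vx > 0:
        times.append((xmax - x1) / vx)
    if vx < 0:
        times.append((xmin - x1) / vx)
    if vy > 0:
        times.append((ymax - y1) / vy)
    if vy < 0:
        times.append((ymin - y1) / vy)
    if not times:
        return None  # stationary pedestrian: no exit predicted
    t = min(times)  # first boundary crossed
    return (x1 + vx * t, y1 + vy * t)
```

A real implementation would then bend the straight path around scene elements and adjust for predicted pedestrian interactions, per the remaining modules.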

Maintaining road safety when there is a disabled autonomous vehicle
11447067 · 2022-09-20

The technology relates to autonomous vehicles suffering a breakdown along a roadway. Onboard systems may utilize various proactive operations to alert specific vehicles or other objects on or near the roadway about the breakdown. This can be done alternatively or in addition to turning on the hazard lights or calling for remote assistance. The disabled vehicle is able to detect nearby and approaching objects. The detection may be performed in combination with a determination of the type of object or predicted behavior for that object, which enables the vehicle to generate a targeted alert that can be transmitted or otherwise presented to that particular object. This approach provides the other object, such as a vehicle, bicyclist or pedestrian, sufficient time and information about the breakdown to take appropriate corrective action. Different communication options are available and may be selected based on the particular object, environmental conditions and other factors.
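Targeting an alert by object type and urgency can be sketched as a simple selection rule. The channel names and the 5-second urgency threshold below are illustrative assumptions, not values from the patent:

```python
def targeted_alert(object_type, distance_m, closing_speed_mps):
    """Pick an alert channel suited to the approaching object and grade
    urgency by its time-to-arrival at the disabled vehicle."""
    time_to_arrival = (distance_m / closing_speed_mps
                       if closing_speed_mps > 0 else float("inf"))
    channel = {
        "vehicle": "V2V broadcast",
        "bicyclist": "external speaker",
        "pedestrian": "external display",
    }.get(object_type, "hazard lights")  # fall back to generic signaling
    urgency = "urgent" if time_to_arrival < 5.0 else "advisory"
    return channel, urgency
```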

SYSTEM AND METHOD FOR COMPLETING TRAJECTORY PREDICTION FROM AGENT-AUGMENTED ENVIRONMENTS
20220153307 · 2022-05-19

A system and method for completing trajectory prediction from agent-augmented environments that includes receiving image data associated with a surrounding environment of an ego agent and processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The system and method also include processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. The system and method further include predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.
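A "set of spatial graphs over an observation horizon" can be pictured as one proximity graph per timestep. The sketch below builds such graphs from tracked positions; the radius threshold and data layout are assumptions for illustration, not the patent's construction:

```python
import math


def spatial_graphs(tracks, radius=5.0):
    """Build one proximity graph per observed timestep: an edge joins two
    agents whose positions are within `radius`. `tracks` maps agent id
    to a list of (x, y) positions over the observation horizon."""
    horizon = len(next(iter(tracks.values())))
    ids = sorted(tracks)
    graphs = []
    for t in range(horizon):
        edges = set()
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                ax, ay = tracks[a][t]
                bx, by = tracks[b][t]
                if math.hypot(ax - bx, ay - by) <= radius:
                    edges.add((a, b))
        graphs.append(edges)
    return graphs
```

An edge appearing over the horizon (two agents converging) is exactly the interaction signal a downstream trajectory predictor would consume.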

TRAINING OF JOINT DEPTH PREDICTION AND COMPLETION
20220148203 · 2022-05-12

Systems, methods, and other embodiments described herein relate to training a depth model for joint depth completion and prediction. In one arrangement, a method includes generating depth features from sparse depth data according to a sparse auxiliary network (SAN) of a depth model. The method includes generating a first depth map from a monocular image and a second depth map from the monocular image and the depth features using the depth model. The method includes generating a depth loss from the second depth map and the sparse depth data and an image loss from the first depth map and the sparse depth data. The method includes updating the depth model including the SAN using the depth loss and the image loss.
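The two losses share the same sparse supervision: one is computed on the image-only prediction, the other on the prediction that also consumed SAN depth features. A minimal Python sketch of that loss structure (an L1 penalty on valid pixels is my assumption; the abstract does not specify the loss form):

```python
def sparse_l1_loss(pred, sparse):
    """Mean absolute error on pixels where a sparse depth measurement
    exists (encoded here as a value > 0)."""
    total, count = 0.0, 0
    for pred_row, sparse_row in zip(pred, sparse):
        for p, s in zip(pred_row, sparse_row):
            if s > 0:
                total += abs(p - s)
                count += 1
    return total / max(count, 1)


def joint_depth_loss(pred_image_only, pred_with_depth, sparse,
                     w_image=1.0, w_depth=1.0):
    """Combine the image loss (first depth map, from the monocular image
    alone) with the depth loss (second depth map, which also used the
    SAN depth features), both supervised by the same sparse data."""
    image_loss = sparse_l1_loss(pred_image_only, sparse)
    depth_loss = sparse_l1_loss(pred_with_depth, sparse)
    return w_image * image_loss + w_depth * depth_loss
```

Training on both terms pushes the shared model to predict depth well with and without sparse input, which is what "joint completion and prediction" refers to.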

Sparse Auxiliary Network for Depth Completion
20220148202 · 2022-05-12

Systems, methods, and other embodiments described herein relate to determining depths of a scene from a monocular image. In one embodiment, a method includes generating depth features from depth data using a sparse auxiliary network (SAN) by i) sparsifying the depth data, ii) applying sparse residual blocks of the SAN to the depth data, and iii) densifying the depth features. The method includes generating a depth map from the depth features and a monocular image that corresponds with the depth data according to a depth model that includes the SAN. The method includes providing the depth map as depth estimates of objects represented in the monocular image.
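The three SAN stages, sparsify, apply residual blocks, densify, can be caricatured on plain grids. This sketch only mirrors the data flow; the scalar residual update stands in for learned sparse residual blocks and is entirely illustrative:

```python
def sparsify(depth_grid):
    """Keep only valid measurements (> 0) as (row, col, value) triples."""
    return [(i, j, v)
            for i, row in enumerate(depth_grid)
            for j, v in enumerate(row) if v > 0]


def sparse_residual(points, scale=0.1):
    """Residual update applied only at valid points, standing in for the
    SAN's learned sparse residual blocks: value + f(value)."""
    return [(i, j, v + scale * v) for i, j, v in points]


def densify(points, shape, fill=0.0):
    """Scatter the refined sparse features back onto a dense grid."""
    rows, cols = shape
    grid = [[fill] * cols for _ in range(rows)]
    for i, j, v in points:
        grid[i][j] = v
    return grid
```

Operating only on valid points is the reason for the sparse formulation: LiDAR-style depth covers a small fraction of pixels, so dense convolutions would mostly process zeros.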