G06V20/584

AUTONOMOUS DRIVING METHOD FOR AVOIDING STOPPED VEHICLE AND APPARATUS FOR THE SAME
20230008458 · 2023-01-12

Disclosed herein are an autonomous driving method for avoiding a stopped vehicle and an apparatus for the same. The method is performed by an autonomous driving control apparatus provided in an autonomous vehicle and includes: obtaining taillight recognition information for a stopped vehicle identified ahead of the autonomous vehicle; determining, in consideration of the taillight recognition information, whether the stopped vehicle is to be avoided; when it is determined that the stopped vehicle is to be avoided, setting an avoidance method in consideration of whether lane returning is to be performed, which is determined based on an autonomous driving task; and setting an avoidance time point corresponding to the avoidance method and controlling the autonomous vehicle to avoid the stopped vehicle by traveling along an avoidance path generated in conformity with the avoidance time point.
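The decision flow in the abstract can be sketched roughly as follows. All names, the taillight interpretation, and the method labels are illustrative assumptions, not taken from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class TaillightInfo:
    hazard_lights_on: bool  # assumed to indicate a long-term stop
    brake_lights_on: bool   # assumed to indicate the vehicle may resume moving

def should_avoid(taillight: TaillightInfo) -> bool:
    """Hypothetical rule: avoid only vehicles that appear parked/broken down."""
    return taillight.hazard_lights_on and not taillight.brake_lights_on

def plan_avoidance(taillight: TaillightInfo, lane_return_required: bool) -> str:
    """Pick an avoidance method based on whether the driving task
    requires returning to the original lane afterwards."""
    if not should_avoid(taillight):
        return "wait"
    return "bypass_and_return" if lane_return_required else "lane_change"
```

A planner would then set an avoidance time point for the chosen method and generate the corresponding path.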

Connected camera system for vehicles

Methods and systems for sharing image data between vehicles. The system includes an image sensor of a first vehicle configured to detect image data of an environment around the first vehicle as it traverses a road. The system also includes a transceiver of the first vehicle configured to communicate the detected image data. The system also includes a transceiver of a second vehicle configured to receive the detected image data. The system also includes a display screen of the second vehicle configured to display a view of the environment around the road based on the detected image data.

Providing a GUI to enable analysis of time-synchronized data sets pertaining to a road segment

Techniques for collecting, synchronizing, and displaying various types of data relating to a road segment enable, via one or more local or remote processors, servers, transceivers, and/or sensors, (i) enhanced and contextualized analysis of vehicle events by way of synchronizing different data types, relating to a monitored road segment, collected via various different types of data sources; (ii) enhanced and contextualized analysis of filed insurance claims pertaining to a vehicle incident at a road segment; (iii) advantageous machine learning techniques for predicting a level of risk assumed for a given vehicle event or a given road segment; (iv) techniques for accounting for region-specific driver profiles when controlling autonomous vehicles; and/or (v) improved techniques for providing a GUI to display collected data in a meaningful and contextualized manner.

Data augmentation for vehicle control

This application is directed to augmenting training data used for vehicle driving modelling. A computer system obtains a first image of a road and identifies a drivable area of the road within the first image. The computer system obtains an image of an object and generates a second image from the first image by overlaying the image of the object over the drivable area. The second image is added to a corpus of training images to be used by a machine learning system to generate a model for facilitating driving of a vehicle (e.g., at least partial autonomously). In some embodiments, the computer system applies machine learning to train a model using the corpus of training images and distributes the model to one or more vehicles. In use, the model processes road images captured by the one or more vehicles to facilitate vehicle driving.
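The overlay step described above amounts to masked compositing of an object image onto a region of the road image. A minimal sketch, assuming the insertion point has already been chosen inside the identified drivable area:

```python
import numpy as np

def overlay_object(road_img: np.ndarray, object_img: np.ndarray,
                   object_mask: np.ndarray, top_left: tuple) -> np.ndarray:
    """Composite object_img (H, W, 3) onto a copy of road_img wherever
    object_mask (H, W, bool) is True, starting at top_left = (row, col).
    top_left is assumed to lie within a previously identified drivable area."""
    out = road_img.copy()
    y, x = top_left
    h, w = object_mask.shape
    # Boolean indexing copies only the object's pixels, keeping the road
    # visible around the object's silhouette.
    out[y:y+h, x:x+w][object_mask] = object_img[object_mask]
    return out
```

The resulting image would then be appended to the training corpus alongside labels for the inserted object.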

AUTOMATIC HIGH BEAM CONTROL FOR AUTONOMOUS MACHINE APPLICATIONS
20230211722 · 2023-07-06 ·

In various examples, high beam control for vehicles may be automated using a deep neural network (DNN) that processes sensor data received from vehicle sensors. The DNN may process the sensor data to output pixel-level semantic segmentation masks in order to differentiate actionable objects (e.g., vehicles with front or back lights lit, bicyclists, or pedestrians) from other objects (e.g., parked vehicles). Resulting segmentation masks output by the DNN(s), when combined with one or more post processing steps, may be used to generate masks for automated high beam on/off activation and/or dimming or shading—thereby providing additional illumination of an environment for the driver while controlling downstream effects of high beam glare for active vehicles.
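The post-processing step can be sketched as converting the DNN's per-pixel class mask into a beam-permission mask. The class IDs below are assumed for illustration; a real system would use the network's actual label map:

```python
import numpy as np

# Assumed semantic class IDs for "actionable" objects
# (e.g. lit vehicle, bicyclist, pedestrian); parked vehicles excluded.
ACTIONABLE_CLASSES = [1, 2, 3]

def high_beam_mask(seg: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True where full high beam is allowed,
    i.e. wherever no actionable object was segmented."""
    actionable = np.isin(seg, ACTIONABLE_CLASSES)
    return ~actionable
```

Downstream logic could dim or shade beam segments wherever the mask is False, rather than switching the high beams off entirely.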

LIGHT EMITTING DIODE FLICKER MITIGATION
20230215189 · 2023-07-06

Systems and methods are provided for detecting a flashing light on one or more traffic signal devices. The method includes capturing a series of images of one or more traffic signal elements in a traffic signal device over a length of time. The method further includes, for each traffic signal element, analyzing the series of images to determine one or more time periods at which the traffic signal element is in an on state or an off state, and analyzing the time periods to determine one or more distinct on states and one or more distinct off states. The method further includes identifying one or more cycles correlating to a distinct on state immediately followed by a distinct off state, or a distinct off state immediately followed by a distinct on state, and, upon identifying a threshold number of adjacent cycles, classifying the traffic signal element as a flashing light.
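The cycle-counting logic can be sketched as collapsing the per-frame on/off observations into distinct periods and counting adjacent transitions. The threshold value is an assumption for illustration:

```python
def count_cycles(states: list) -> int:
    """states: per-frame booleans (True = element lit).
    Collapse consecutive identical frames into distinct on/off periods,
    then count each adjacent on->off or off->on pair as one cycle."""
    periods = []
    for s in states:
        if not periods or periods[-1] != s:
            periods.append(s)
    return max(len(periods) - 1, 0)

def is_flashing(states: list, threshold: int = 4) -> bool:
    """Classify as flashing once the (assumed) cycle threshold is met."""
    return count_cycles(states) >= threshold
```

In practice the frame analysis would also have to tolerate LED PWM flicker in individual frames, which is the mitigation problem the title refers to.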

SYSTEM AND METHOD FOR GENERATING CONTEXT-RICH PARKING EVENTS
20230215188 · 2023-07-06

A method of generating a context-rich parking event for a target vehicle observed by a patrol vehicle, including: obtaining a plate read event identifying an identifier of the target vehicle; initiating collection of a first context image of a first view of the target vehicle; obtaining geolocation information; obtaining temporal information; verifying whether at least one condition is met by calculating whether at least one of a temporal constraint threshold is reached using the temporal information and a position constraint threshold is reached using the geolocation information; initiating collection by the patrol vehicle of a second context image of a second view of the target vehicle; and causing an association between the second context image and the plate read event to generate the context-rich parking event.
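The condition check can be sketched as a simple disjunction over the two constraints. The threshold values and the "reached" semantics (enough elapsed time or enough patrol-vehicle displacement before the second image is taken) are assumptions for illustration:

```python
def should_collect_second_view(elapsed_s: float, moved_m: float,
                               t_thresh: float = 5.0,
                               d_thresh: float = 8.0) -> bool:
    """At least one constraint threshold must be reached before the
    patrol vehicle collects the second context image (assumed rule):
    elapsed_s  - seconds since the first context image,
    moved_m    - meters the patrol vehicle has moved since then."""
    return elapsed_s >= t_thresh or moved_m >= d_thresh
```

Requiring a time or position offset between the two views helps ensure the second image adds context (a different angle on the target vehicle) rather than duplicating the first.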

Yield behavior modeling and prediction

Techniques for determining a vehicle action and controlling a vehicle to perform it can include determining a vehicle action, such as a lane change action, for a vehicle to perform in an environment. The vehicle can detect, based at least in part on sensor data, an object associated with a target lane associated with the lane change action. In some instances, the vehicle may determine attribute data associated with the object and input the attribute data to a machine-learned model that can output a yield score. Based on such a yield score, the vehicle may determine whether it is safe to perform the lane change action.
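The final decision step can be sketched as thresholding the model's yield score. The attribute names, the toy stand-in model, and the threshold are illustrative assumptions, not the patent's:

```python
def decide_lane_change(attributes: dict, model, threshold: float = 0.7) -> bool:
    """attributes: per-object features for the vehicle in the target lane.
    model: any callable mapping attributes -> yield score in [0, 1].
    Returns True if the lane change is judged safe (assumed rule)."""
    score = model(attributes)
    return score >= threshold

def toy_yield_model(a: dict) -> float:
    """Stand-in for the machine-learned model: larger gaps and lower
    closing speeds make yielding more likely."""
    score = 0.5 + 0.1 * a["gap_m"] - 0.05 * a["closing_speed_mps"]
    return max(0.0, min(1.0, score))
```

In a deployed system the model would be trained on observed yield/no-yield outcomes; the threshold trades off assertiveness against safety margin.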

TRAFFIC LIGHT ORIENTED NETWORK

A navigation system for a host vehicle may include at least one processor comprising circuitry and a memory. The memory may include instructions that, when executed by the circuitry, cause the at least one processor to: receive, from an image capture device associated with the host vehicle, a captured image representative of an environment of the host vehicle; identify a first segment of the captured image associated with a traffic light; provide the first segment to a first trained network configured to generate a first output indicative of a state of the traffic light; identify a second segment of the captured image that includes contextual information associated with the traffic light; provide the second segment to a second trained network configured to generate a second output indicative of a proposed navigational action for the host vehicle relative to the traffic light; determine, based on both the first output and the second output, a planned navigational action for the host vehicle; and cause the host vehicle to take the planned navigational action.
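The fusion of the two network outputs can be sketched as a conservative rule over their labels. The label strings and the specific fusion policy are assumptions for illustration:

```python
def plan_action(light_state: str, proposed_action: str) -> str:
    """Combine the first network's traffic-light state with the second
    network's context-based proposal (assumed labels). Conservative
    fusion: the light state can veto the proposal, never the reverse."""
    if light_state == "red":
        return "stop"
    if light_state == "yellow" and proposed_action == "proceed":
        return "slow"
    return proposed_action
```

Keeping the light-state network as a veto means a misleading context crop (e.g. a light governing an adjacent lane) cannot cause the vehicle to run a red light.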

Brake light detection

Systems, methods, and devices for detecting brake lights are disclosed herein. A system includes a mode component, a vehicle region component, and a classification component. The mode component is configured to select a night mode or day mode based on a pixel brightness in an image frame. The vehicle region component is configured to detect a region corresponding to a vehicle based on data from a range sensor when in a night mode or based on camera image data when in the day mode. The classification component is configured to classify a brake light of the vehicle as on or off based on image data in the region corresponding to the vehicle.
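The mode selection and classification components can be sketched as follows. The brightness threshold, the redness rule, and the RGB channel ordering are illustrative assumptions, not the patent's:

```python
import numpy as np

DAY_THRESHOLD = 90.0  # mean 8-bit pixel brightness; assumed value

def select_mode(frame: np.ndarray) -> str:
    """Mode component: pick 'day' or 'night' from mean frame brightness."""
    return "day" if frame.mean() >= DAY_THRESHOLD else "night"

def classify_brake_light(region: np.ndarray, red_thresh: int = 150,
                         frac: float = 0.2) -> str:
    """Classification component (toy rule): brake light is 'on' if enough
    pixels in the vehicle region (H, W, 3 RGB) are strongly red."""
    red = region[..., 0].astype(int)
    blue = region[..., 2].astype(int)
    red_dominant = (red > red_thresh) & (red > blue + 40)
    return "on" if red_dominant.mean() >= frac else "off"
```

The vehicle-region component would supply `region` from camera data in day mode, or from a crop around a range-sensor detection in night mode, matching the abstract's sensor switch.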