G06V20/58

Temporal information prediction in autonomous machine applications

In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
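As an illustrative sketch (not the patented method), a TTC can be derived from a predicted world-space relative position and velocity of a detected object; the function name and interface below are assumptions for illustration only.

```python
import math

def time_to_collision(rel_pos, rel_vel):
    """Return TTC in seconds, or None if the object is not closing.

    rel_pos: (x, y) object position relative to the ego-vehicle, in metres.
    rel_vel: (vx, vy) object velocity relative to the ego-vehicle, in m/s.
    """
    range_m = math.hypot(rel_pos[0], rel_pos[1])
    # Radial velocity component: negative means the object is approaching.
    radial_vel = (rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / range_m
    if radial_vel >= 0:
        return None  # holding range or moving away
    return range_m / -radial_vel
```

For example, an object 30 m ahead closing at 10 m/s yields a TTC of 3 s.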

Incorporating rules into complex automated decision making
11579614 · 2023-02-14

A set of input conditions is obtained. A plurality of potential decisions is obtained based at least in part on the set of input conditions. A rule-based system is used to process the plurality of potential decisions and obtain a set of one or more updated potential decisions, wherein: the rule-based system specifies a plurality of rules; a rule specifies a rule condition and a corresponding action, wherein when the rule condition is met, the corresponding action is to be performed; and using the rule-based system to process the plurality of potential decisions includes: for a selected potential decision in the plurality of potential decisions, determining whether the rule condition is met for a selected rule among the plurality of rules, wherein the selected rule condition is dependent on, at least in part, the selected potential decision; and in response to the selected rule condition being met, performing the corresponding action. The set of one or more updated potential decisions to be executed is output.
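The rule-processing loop described above can be sketched as follows. This is a minimal hypothetical interpretation, assuming each rule pairs a condition on a candidate decision with an action that may modify it (or veto it by returning None); all names are illustrative, not from the patent.

```python
from collections import namedtuple

Rule = namedtuple("Rule", ["condition", "action"])

def apply_rules(potential_decisions, rules):
    """Process each potential decision through every rule whose
    condition it meets, returning the updated set of decisions."""
    updated = []
    for decision in potential_decisions:
        for rule in rules:
            if decision is not None and rule.condition(decision):
                decision = rule.action(decision)
        if decision is not None:  # decision survived all rules
            updated.append(decision)
    return updated
```

For instance, a rule capping speed at 30 would rewrite `{"speed": 50}` to `{"speed": 30}` while leaving `{"speed": 20}` untouched.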

System and method for future forecasting using action priors

A system and method for future forecasting using action priors include receiving image data associated with a surrounding environment of an ego vehicle and dynamic data associated with dynamic operation of the ego vehicle. The system and method also include analyzing the image data to detect actions associated with agents located within the surrounding environment of the ego vehicle, and analyzing the dynamic data to process an ego motion history of the ego vehicle. The system and method further include predicting future trajectories of the agents located within the surrounding environment of the ego vehicle and a future ego motion of the ego vehicle within that environment.
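As a point of reference only: the simplest trajectory predictor, which the action-prior approach improves upon, is constant-velocity extrapolation of an agent's observed track. The sketch below shows just that baseline interface; it does not use action priors and all names are assumptions.

```python
def extrapolate_track(track, horizon, dt=1.0):
    """track: past (x, y) positions sampled at a fixed interval dt;
    returns `horizon` predicted future positions at the same interval."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    # Velocity estimated from the last two observations.
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]
```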

Systems and methods for updating navigational maps
11579627 · 2023-02-14

Systems and methods for updating navigational maps using at least one sensor are provided. In one aspect, a control system for an autonomous vehicle includes a processor and a computer-readable memory configured to cause the processor to: receive output from at least one sensor located on the autonomous vehicle indicative of a driving environment of the autonomous vehicle, retrieve a navigational map used for driving the autonomous vehicle, and detect one or more inconsistencies between the output of the at least one sensor and the navigational map. The computer-readable memory is further configured to cause the processor to: in response to detecting the one or more inconsistencies, trigger mapping of the driving environment based on the output of the at least one sensor, update the navigational map based on the mapped driving environment, and drive the autonomous vehicle using the updated navigational map.
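One hypothetical way to realize the inconsistency-detection step is a nearest-neighbour check between sensed features and map features; a sensed feature with no map counterpart within a tolerance radius would trigger re-mapping. The names and tolerance below are illustrative assumptions.

```python
import math

def find_inconsistencies(sensed, mapped, tol=0.5):
    """sensed/mapped: lists of (x, y) feature positions in metres; returns
    the sensed features with no map counterpart within `tol` metres."""
    return [
        (sx, sy)
        for sx, sy in sensed
        if not any(math.hypot(sx - mx, sy - my) <= tol for mx, my in mapped)
    ]
```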

Localization and mapping method and moving apparatus

A localization and mapping method is for localizing and mapping a moving apparatus in a moving process. The localization and mapping method includes an image capturing step, a feature point extracting step, a flag object identifying step, and a localizing and mapping step. The image capturing step includes capturing an image frame at one of a plurality of time points in the moving process by a camera unit. The flag object identifying step includes identifying whether the image frame includes a flag object among the plurality of feature points in accordance with a flag database. The flag database includes a plurality of dynamic objects, and the flag object corresponds to one of the dynamic objects. The localizing and mapping step includes performing localization and mapping in accordance with the image frames captured in the moving process and the flag objects thereof.
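A common use of such flag identification, sketched here under the assumption that flagged features are excluded before localization, is to drop feature points belonging to dynamic object classes. The class list stands in for the patent's flag database and is purely illustrative.

```python
# Illustrative stand-in for the flag database of dynamic object classes.
DYNAMIC_CLASSES = {"car", "pedestrian", "cyclist"}

def filter_static_features(labelled_points):
    """labelled_points: (point, class_label) pairs; keep only points
    whose label is not a known dynamic object."""
    return [pt for pt, label in labelled_points if label not in DYNAMIC_CLASSES]
```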

Using mapped elevation to determine navigational parameters

Systems and methods for navigating a host vehicle. The system may perform operations including receiving, from an image capture device, at least one image representative of an environment of the host vehicle; analyzing the at least one image to identify an object in the environment of the host vehicle; determining a location of the host vehicle; receiving map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle; determining a distance from the host vehicle to the object based on at least the elevation information; and determining a navigational action for the host vehicle based on the determined distance.
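The distance-from-elevation step can be sketched with simple ground-plane geometry, assuming the mapped elevation supplies the height of the object's ground point relative to the ego's road surface; the camera height, angle convention, and names below are illustrative assumptions, not the patent's formulation.

```python
import math

def range_from_elevation(cam_height, obj_elev, angle_below_horizon):
    """cam_height: camera height above the ego's road surface, metres.
    obj_elev: elevation of the object's ground point relative to the
    ego's elevation (from the map), metres.
    angle_below_horizon: camera ray angle to the object's base, radians."""
    # Effective vertical drop from the camera to the object's ground point.
    drop = cam_height - obj_elev
    return drop / math.tan(angle_below_horizon)
```

On flat ground (obj_elev = 0) this reduces to the classic pinhole ground-plane range; a raised or lowered ground point shifts the estimate accordingly.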

Depth image acquiring apparatus, control method, and depth image acquiring system

It is intended to enhance the performance of depth image acquisition. A depth image acquiring apparatus includes a light emitting diode, a TOF sensor, and a filter. The light emitting diode emits modulated light toward a detection area, i.e., the area in which a depth image is to be acquired for distance detection. The TOF sensor receives the light emitted from the light emitting diode after it is reflected by an object in the detection area, and outputs a signal used to produce the depth image. The filter passes more of the light incident on the TOF sensor at wavelengths within a predetermined pass band than at wavelengths outside that band. At least one of the light emitting diode, the TOF sensor, or the arrangement of the filter is controlled in accordance with a temperature of the light emitting diode or the TOF sensor. The present technique can be applied, for example, to a system for acquiring a depth image using a TOF method.
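For background: a continuous-wave TOF sensor of this kind recovers depth from the phase shift between the emitted modulated light and the received reflection, d = c · Δφ / (4π · f_mod). The sketch below shows that standard relation only; it is not the patented temperature-compensation scheme, and the names are illustrative.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from a continuous-wave TOF phase measurement:
    d = c * delta_phi / (4 * pi * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

A half-cycle phase shift (π radians) at a 20 MHz modulation frequency corresponds to a depth of roughly 3.75 m.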