Patent classifications
G06V20/56
System and method for providing unsupervised domain adaptation for spatio-temporal action localization
A system and method for providing unsupervised domain adaptation for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaptation model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
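The abstract's final step, combining losses, can be sketched as a weighted sum of supervised localization and classification terms on the labeled source domain plus an unsupervised domain-alignment term on the unlabeled target domain. The function name and the weighting factor `lam` below are illustrative assumptions, not details taken from the patent.

```python
def combined_loss(source_loc_loss: float,
                  source_cls_loss: float,
                  domain_align_loss: float,
                  lam: float = 0.1) -> float:
    """Total loss as a weighted sum: supervised terms on the source domain
    plus an unsupervised domain-alignment term; `lam` trades off how strongly
    the model is pushed to align source and target feature distributions."""
    return source_loc_loss + source_cls_loss + lam * domain_align_loss
```

In adversarial adaptation schemes the alignment term is typically a domain-classifier loss trained through a gradient-reversal layer; this sketch only shows the scalar combination.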
Method for size estimation by image recognition of specific target using given scale
The present invention relates to a method for size estimation by image recognition of a specific target using a given scale. First, a reference object is recognized in an image and the corresponding scale is established. Then the specific target is searched for, and its size is estimated according to the acquired scale.
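Once both objects' pixel extents are known, the two-step procedure (establish a scale from the recognized reference object, then apply it to the target) reduces to simple arithmetic. The sketch below assumes the reference object's real-world size is given; the function name is illustrative.

```python
def estimate_target_size(ref_size_px: float,
                         ref_size_real: float,
                         target_size_px: float) -> float:
    """Estimate a target's real-world size from its pixel extent, using the
    scale (real-world units per pixel) established from a reference object
    of known size recognized in the same image."""
    units_per_px = ref_size_real / ref_size_px
    return target_size_px * units_per_px
```

For example, a reference object spanning 100 px and known to be 50 cm wide gives a scale of 0.5 cm/px, so a target spanning 200 px is estimated at 100 cm. This assumes both objects lie at comparable depth from the camera.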
Unsupervised learning of metric representations from slow features
A method of unsupervised learning of a metric representation, and a corresponding system, determine metric position information for a mobile device from an environmental representation. The mobile device comprises at least one sensor for acquiring sensor data and an odometer system configured to acquire displacement data of the mobile device. An environmental representation is generated based on the acquired sensor data by applying an unsupervised learning algorithm. The mobile device moves along a trajectory, and the displacement data and the sensor data are acquired while the mobile device is moving along the trajectory. A set of mapping parameters is calculated based on the environmental representation and the displacement data. A metric position estimation is determined based on a further environmental representation and the calculated set of mapping parameters.
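One plausible reading of "calculating a set of mapping parameters based on the environmental representation and the displacement data" is a linear regression from representation space to metric space: fit parameters so that learned features map onto the metric positions accumulated from odometry. The least-squares sketch below is an assumption for illustration, not the patented algorithm.

```python
import numpy as np

def fit_mapping_parameters(representations: np.ndarray,
                           metric_positions: np.ndarray) -> np.ndarray:
    """Least-squares mapping W such that representations @ W approximates the
    metric positions accumulated from the odometer's displacement data."""
    W, *_ = np.linalg.lstsq(representations, metric_positions, rcond=None)
    return W

def estimate_position(representation: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Metric position estimate for a further environmental representation."""
    return representation @ W
```

After fitting on a training trajectory, `estimate_position` localizes the device from features alone, without further odometry.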
Temporal information prediction in autonomous machine applications
In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
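A classical image-only route to time-to-collision, which a sequential network operating on images alone could in principle approximate, is the expansion-rate formula TTC ≈ s / (ds/dt) applied to an object's apparent size across frames. The sketch below is illustrative and is not taken from the patent.

```python
def time_to_collision(size_prev: float, size_curr: float, dt: float) -> float:
    """TTC from the relative expansion rate of an object's apparent size:
    TTC = s / (ds/dt), with ds/dt estimated by a finite difference over one
    frame interval of length dt (seconds)."""
    expansion_rate = (size_curr - size_prev) / (size_prev * dt)
    if expansion_rate <= 0.0:
        return float("inf")  # object not approaching on a collision course
    return 1.0 / expansion_rate
```

For example, an apparent size growing from 10 px to 11 px over 0.1 s gives an expansion rate of 1.0/s and hence a TTC of about 1 s. In practice a learned model smooths over the noise that makes such two-frame estimates fragile.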
Recording apparatus, recording method, and non-transitory computer-readable medium
A recording apparatus includes: a captured data acquisition unit configured to acquire captured data captured by a camera that captures an image of an outside of a vehicle; an event detection unit configured to detect an event with respect to the vehicle; an attachment/detachment detection unit configured to detect an attachment/detachment state of the recording apparatus with respect to the vehicle; and a recording controller configured to: store, when the event detection unit has detected the event, captured data for a predetermined period of time based on the detected event as first event recording data; and, when the attachment/detachment detection unit detects that the recording apparatus has been detached from the vehicle, invalidate detection of the event by the event detection unit after the detection of the detachment and store captured data after the detection of the detachment as second event recording data.
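The recording controller's behavior reduces to a small state machine: events detected before detachment produce first event recording data, while after a detachment is detected, event detection is invalidated and subsequent captured data is stored as second event recording data. The class and method names below are illustrative assumptions, not the patent's terminology.

```python
class RecordingController:
    """Sketch of the recording logic: events before detachment yield first
    event recording data; after detachment is detected, event detection is
    invalidated and captured data goes to second event recording data."""

    def __init__(self):
        self.detached = False
        self.first_event_data = []
        self.second_event_data = []

    def on_detachment_detected(self):
        # Invalidate event detection from this point on.
        self.detached = True

    def on_captured_data(self, clip, event_detected=False):
        if self.detached:
            self.second_event_data.append(clip)
        elif event_detected:
            self.first_event_data.append(clip)
```

The second recording path guards against tampering: footage captured after the apparatus is removed from the vehicle is preserved separately even though ordinary event triggers are ignored.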
Method and system for generating and updating digital maps
A method and control system for generating and updating digital maps using a plurality of passages along a road portion by at least one road vehicle is provided. The method comprises obtaining positioning data and sensor data of each passage from the at least one road vehicle. Further, the method comprises forming a sub-map representation of the surrounding environment at each obtained longitudinal position based on the obtained sensor data, and estimating a longitudinal error for each obtained longitudinal position within each segment. Furthermore, the method comprises determining a new plurality of longitudinal positions of each road vehicle for each passage by applying the estimated longitudinal error to each corresponding obtained longitudinal position, and applying the determined new plurality of longitudinal positions to associated sensor data in order to generate a first layer of a map representation of the surrounding environment along the road portion.
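Estimating a longitudinal error per segment can be illustrated as finding the 1-D shift that best aligns a passage's sub-map profile against a reference profile, then subtracting that shift from the obtained positions. The cross-correlation approach below is an assumption made for illustration, not the patented estimator.

```python
import numpy as np

def estimate_longitudinal_shift(reference: np.ndarray,
                                observed: np.ndarray) -> int:
    """Shift (in samples) by which `observed` lags `reference` along the
    road; positive means the observed sub-map sits further along the road
    than the reference profile."""
    corr = np.correlate(observed, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

def correct_positions(positions: np.ndarray,
                      shift: int,
                      spacing: float) -> np.ndarray:
    """Apply the estimated longitudinal error (shift * sample spacing, in
    meters) to the obtained longitudinal positions."""
    return positions - shift * spacing
```

Aggregating corrected positions and sensor data across many passages would then yield the first map layer described in the abstract.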