G06T2207/30261

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20230206596 · 2023-06-29 ·

The present technique relates to an information processing device, an information processing method, and a program that can improve tracking performance.

A feature information extracting unit extracts feature information about an object for each frame image, and a tracking unit tracks a vehicle in the frame image by using the feature information. The present technique is applicable to a driving support device with an onboard camera, for example.
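As a rough sketch of the extract-then-track structure the abstract describes, the following Python (not from the publication; the FeatureExtractor and Tracker classes, the histogram feature, and the matching threshold are all assumptions) associates per-frame feature vectors with tracks by nearest-feature matching.

import numpy as np

class FeatureExtractor:
    def extract(self, frame, box):
        # box is (x, y, width, height) in pixel coordinates.
        x, y, w, h = box
        patch = frame[y:y + h, x:x + w]
        # Coarse intensity histogram as the per-object "feature information".
        hist, _ = np.histogram(patch, bins=16, range=(0, 255), density=True)
        return hist

class Tracker:
    def __init__(self, match_threshold=0.05):
        self.tracks = {}      # track_id -> last feature vector
        self.next_id = 0
        self.match_threshold = match_threshold

    def update(self, features):
        # Associate each detection's feature vector with the closest track,
        # or start a new track when nothing matches well enough.
        for feat in features:
            best_id, best_dist = None, self.match_threshold
            for track_id, prev_feat in self.tracks.items():
                dist = np.linalg.norm(feat - prev_feat)
                if dist < best_dist:
                    best_id, best_dist = track_id, dist
            if best_id is None:
                best_id, self.next_id = self.next_id, self.next_id + 1
            self.tracks[best_id] = feat
        return self.tracks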

ADAPTIVE OBJECT TRACKING ALGORITHM FOR AUTONOMOUS MACHINE APPLICATIONS
20230206651 · 2023-06-29 ·

In various examples, lane location criteria and object class criteria may be used to determine a set of objects in an environment to track. For example, lane information, freespace information, and/or object detection information may be used to filter out or discard non-essential objects (e.g., objects that are not in an ego-lane or adjacent lanes) from objects detected using an object detection algorithm. Further, objects corresponding to non-essential object classes may be filtered out to generate a final filtered set of objects to be tracked that may be of a lower quantity than the actual number of detected objects. As a result, object tracking may only be executed on the final filtered set of objects, thereby decreasing compute requirements and runtime of the system without sacrificing object tracking accuracy and reliability with respect to more pertinent objects.
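A minimal sketch of the two-stage filtering, assuming simple detection records with a lane label and a class name; the field names, lane labels, and the ESSENTIAL_* sets below are illustrative rather than taken from the disclosure.

from dataclasses import dataclass

@dataclass
class Detection:
    class_name: str
    lane: str          # e.g. "ego", "adjacent_left", "adjacent_right", "other"

ESSENTIAL_LANES = {"ego", "adjacent_left", "adjacent_right"}
ESSENTIAL_CLASSES = {"car", "truck", "pedestrian", "cyclist"}

def filter_for_tracking(detections):
    # Keep only objects located in the ego lane or adjacent lanes...
    in_relevant_lane = [d for d in detections if d.lane in ESSENTIAL_LANES]
    # ...and only object classes the tracker should spend compute on.
    return [d for d in in_relevant_lane if d.class_name in ESSENTIAL_CLASSES]

detections = [Detection("car", "ego"), Detection("sign", "ego"),
              Detection("car", "other")]
print(filter_for_tracking(detections))   # only the ego-lane car remains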

METHODS AND SYSTEMS FOR DETECTING FOREIGN OBJECTS ON A LANDING SURFACE
20230206646 · 2023-06-29 ·

Disclosed are methods, systems, and computer-implemented methods for detecting foreign objects on a landing surface. For instance, the method may include capturing an image from one or more cameras associated with a vehicle, detecting the landing surface present in the captured image, retrieving a reference image for the detected landing surface, and extracting a plurality of feature points present in both the captured image and the reference image. The method may further include determining a transformation between the captured image and the reference image by correlating the plurality of feature points between the captured image and the reference image, creating a virtual image by applying the transformation to one of the captured image or the reference image, and comparing the virtual image to the other one of the captured image or the reference image that was not transformed to detect one or more foreign objects.
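The described pipeline maps onto a standard feature-matching and homography workflow; the OpenCV-based sketch below is one plausible realization (ORB features, RANSAC, and the difference threshold are assumptions, and the inputs are assumed to be grayscale views of the same landing surface).

import cv2
import numpy as np

def detect_foreign_objects(captured, reference, diff_threshold=40):
    # Extract feature points present in both the captured and reference images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(captured, None)
    kp2, des2 = orb.detectAndCompute(reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Transformation between the captured image and the reference image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # "Virtual image": the captured view warped into the reference frame.
    h, w = reference.shape[:2]
    virtual = cv2.warpPerspective(captured, H, (w, h))

    # Compare the virtual image to the untransformed reference image.
    diff = cv2.absdiff(virtual, reference)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return mask   # non-zero regions are candidate foreign objects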

SYSTEMS AND METHODS FOR UTILIZING MODELS TO DETECT DANGEROUS TRACKS FOR VEHICLES

A device may receive accelerometer data and video data for a vehicle and may identify bounding boxes and object classes for objects near the vehicle. The device may identify tracks for the objects and may filter out tracks that are not associated with vehicles or vulnerable road users to generate one or more tracks or an indication of no tracks. The device may generate a collision cone identifying a drivable area of the vehicle to identify objects more likely to be involved in a collision and may filter out tracks from the one or more tracks, based on the bounding boxes, to generate a subset of tracks or another indication of no tracks. The device may determine scores for the subset of tracks and may identify the track of the subset with the highest score. The device may perform actions based on the identified track.
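Read procedurally, the abstract chains class filtering, a collision-cone (drivable-area) filter, scoring, and an argmax. The hypothetical sketch below illustrates that chain; the track fields, the corridor-style collision-cone test, and the score function are placeholders, not from the disclosure.

VULNERABLE_OR_VEHICLE = {"car", "truck", "bus", "pedestrian", "cyclist"}

def in_collision_cone(box, cone_length=60.0, cone_half_width=2.0):
    # Placeholder drivable-area test in the vehicle frame (x forward, y left):
    # the box center must lie in a corridor ahead of the ego vehicle.
    x, y = box["center_x"], box["center_y"]
    return 0.0 < x < cone_length and abs(y) < cone_half_width

def score_track(track):
    # Placeholder score: closer, faster-approaching objects score higher.
    return track.get("closing_speed", 0.0) / max(track.get("distance", 1.0), 1.0)

def select_most_dangerous_track(tracks):
    # Keep only tracks associated with vehicles or vulnerable road users.
    candidates = [t for t in tracks if t["class"] in VULNERABLE_OR_VEHICLE]
    # Keep only tracks whose bounding box falls inside the collision cone.
    candidates = [t for t in candidates if in_collision_cone(t["bounding_box"])]
    if not candidates:
        return None                      # indication of no tracks
    # Score the remaining tracks and return the highest-scoring one.
    return max(candidates, key=score_track)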

APPARATUS AND METHOD WITH DRIVING CONTROL
20230206648 · 2023-06-29 ·

Disclosed are apparatuses and methods for controlling driving of a vehicle, the method including obtaining a first image captured by a first ultra-wide-angle lens disposed on a first position in a vehicle, obtaining a second image captured by a second ultra-wide-angle lens disposed on a second position in the vehicle, monitoring a state of a driver and an occupant of the vehicle based on the first image and the second image, detecting an object in a blind spot area based on a matching of the first image and the second image, and generating information for control of the vehicle based on a result of the monitoring and a result of the detecting of the object.
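At the highest level the method combines two analyses of the same pair of wide-angle images; the schematic Python below only illustrates that flow, and every helper is a trivial placeholder standing in for the real processing.

def monitor_occupants(first_image, second_image):
    # Placeholder for monitoring the driver and occupant state.
    return {"driver_attentive": True, "occupant_present": False}

def detect_blind_spot_objects(first_image, second_image):
    # Placeholder for matching the two ultra-wide-angle views and
    # detecting objects in the blind spot area they jointly cover.
    return []

def generate_control_info(occupant_state, blind_spot_objects):
    # Combine both results into control information for the vehicle.
    brake = bool(blind_spot_objects) and not occupant_state["driver_attentive"]
    return {"request_braking": brake}

def control_step(first_image, second_image):
    state = monitor_occupants(first_image, second_image)
    objects = detect_blind_spot_objects(first_image, second_image)
    return generate_control_info(state, objects)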

Method and system for detecting moving objects

A moving objects detection method is disclosed. The method may include: identifying a plurality of feature points based on a plurality of video frames; selecting, from the plurality of feature points, a first group and a second group of feature points based on correlations between the plurality of feature points; and identifying, in at least one video frame, two segments based on the first and second groups of feature points, respectively, as detected moving objects, where a correlation between two feature points may include a distance component and a movement difference component, where the distance component is related to a distance between the two feature points, and the movement difference component is related to a difference between corresponding movements of the two feature points. A moving objects detection system is also provided.
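The pairwise correlation with a distance component and a movement-difference component can be sketched directly; the exponential weighting, the scale parameters, and the greedy grouping below are assumptions about one way to combine and use such a correlation.

import numpy as np

def correlation(p1, p2, sigma_distance=50.0, sigma_motion=2.0):
    # Distance component: how close the two feature points are in the image.
    distance = np.linalg.norm(p1["position"] - p2["position"])
    # Movement-difference component: how similarly they move between frames.
    motion_difference = np.linalg.norm(p1["motion"] - p2["motion"])
    return np.exp(-distance / sigma_distance) * np.exp(-motion_difference / sigma_motion)

def group_feature_points(points, threshold=0.5):
    # Greedily place each feature point into the first group whose members
    # it correlates with strongly enough; each group is one candidate
    # moving object (segment).
    groups = []
    for point in points:
        for group in groups:
            if all(correlation(point, member) > threshold for member in group):
                group.append(point)
                break
        else:
            groups.append([point])
    return groups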

Image processing apparatus
09852502 · 2017-12-26 ·

An image processing apparatus includes an image processing section that periodically performs image processing for a periodically captured image, a diagnosing section that compares an image processing result obtained from diagnostic image data with expected value data indicating a reference for a normal processing result of image processing of the diagnostic image data and determines whether the image processing result obtained from the diagnostic image data is normal, by making the image processing section perform image processing for the diagnostic image data, which is directly accessible by the image processing section, in parallel with the image processing periodically performed for the captured image, and an output controlling section that outputs the processing result of image processing for the captured image as valid to a control section on condition that the image processing result obtained from the diagnostic image data is determined to be normal by the diagnosing section.
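In outline, the diagnosis works by processing known diagnostic data alongside the captured image and gating the output on a comparison with expected value data; the sketch below uses a trivial stand-in for the image processing, and the diagnostic image, expected result, and tolerance are assumptions.

import numpy as np

DIAGNOSTIC_IMAGE = np.full((8, 8), 128, dtype=np.uint8)
EXPECTED_RESULT = 128.0          # reference for a normal processing result

def process_image(image):
    # Stand-in for the periodic image processing (here: mean brightness).
    return float(image.mean())

def periodic_step(captured_image, tolerance=1e-3):
    captured_result = process_image(captured_image)

    # Diagnostic processing runs alongside the normal processing path.
    diagnostic_result = process_image(DIAGNOSTIC_IMAGE)
    is_normal = abs(diagnostic_result - EXPECTED_RESULT) <= tolerance

    # Output the captured-image result as valid only when diagnosis passes.
    return captured_result if is_normal else None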

Three dimensional bounding box estimation from two dimensional images

A three dimensional bounding box is determined from a two dimensional image. A two dimensional bounding box is calculated based on a detected object within the image. A three dimensional bounding box is parameterized as having a yaw angle, dimensions, and a position. The yaw angle is defined as the angle between a ray passing through a center of the two dimensional bounding box and an orientation of the three dimensional bounding box. The yaw angle and dimensions are determined by passing the portion of the image within the two dimensional bounding box through a trained convolutional neural network. The three dimensional bounding box is then positioned such that the projection of the three dimensional bounding box into the image aligns with the two dimensional bounding box previously detected. Characteristics of the three dimensional bounding box are then communicated to an autonomous system for collision and obstacle avoidance.
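A heavily simplified sketch of the positioning step, assuming a pinhole camera and a placeholder network: the predicted box height fixes the depth along the ray through the 2D box center, which approximates the projection-alignment step described above (the full method solves for a position so the projected 3D box matches the 2D box on all sides).

import numpy as np

def predict_yaw_and_dims(crop):
    # Placeholder for the trained convolutional neural network; returns
    # (yaw, (height, width, length)) for the object in the cropped image.
    return 0.1, (1.5, 1.8, 4.2)

def position_3d_box(box_2d, crop, fx, fy, cx, cy):
    x_min, y_min, x_max, y_max = box_2d
    yaw, (h, w, l) = predict_yaw_and_dims(crop)

    # Depth at which an object of height h projects to the 2D box height.
    pixel_height = y_max - y_min
    z = fy * h / pixel_height

    # Back-project the 2D box center along the camera ray to that depth.
    u = (x_min + x_max) / 2.0
    v = (y_min + y_max) / 2.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    return {"center": np.array([x, y, z]), "dims": (h, w, l), "yaw": yaw}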

Video object detection

A method for video object detection includes detecting an object in a first video frame, and selecting a first interest point and a second interest point of the object. The first interest point is in a first region of interest located at a first corner of a box surrounding the object. The second interest point is in a second region of interest located at a second corner of the box. The second corner is diagonally opposite the first corner. A first optical flow of the first interest point and a second optical flow of the second interest point are determined. A location of the object in a second video frame is estimated by determining, in the second video frame, a location of the first interest point based on the first optical flow and a location of the second interest point based on the second optical flow.
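The corner-based tracking step maps naturally onto sparse optical flow; the sketch below uses OpenCV's pyramidal Lucas-Kanade flow (the abstract does not name a specific optical-flow method), taking one interest point from a region at each of two diagonally opposite corners of the box and shifting each corner by its point's displacement.

import cv2
import numpy as np

def estimate_box(prev_gray, next_gray, box):
    x, y, w, h = box
    margin = (max(4, w // 10), max(4, h // 10))   # size of each corner region

    def interest_point(x_lo, y_lo):
        # Pick one strong corner inside the region of interest.
        roi = prev_gray[y_lo:y_lo + margin[1], x_lo:x_lo + margin[0]]
        corners = cv2.goodFeaturesToTrack(roi, maxCorners=1,
                                          qualityLevel=0.01, minDistance=3)
        px, py = corners[0].ravel()
        return np.float32([x_lo + px, y_lo + py])

    # One interest point near the top-left corner and one near the
    # diagonally opposite bottom-right corner.
    pts = np.stack([interest_point(x, y),
                    interest_point(x + w - margin[0], y + h - margin[1])])
    pts = pts.reshape(-1, 1, 2)

    # Optical flow of both interest points into the next frame
    # (status checking omitted for brevity).
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)

    # Shift each box corner by its interest point's displacement.
    (dx1, dy1), (dx2, dy2) = (new_pts - pts).reshape(-1, 2)
    x_min, y_min = x + dx1, y + dy1
    x_max, y_max = x + w + dx2, y + h + dy2
    return int(x_min), int(y_min), int(x_max - x_min), int(y_max - y_min)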

Systems and methods for aligning map data

Systems, methods, and non-transitory computer-readable media can receive a geometric map and a semantic map associated with a geographic area, the semantic map comprising semantic data associated with vehicle navigation. A first semantic position estimate associated with a first piece of semantic data contained in the semantic map is generated based on semantic data location information associated with the first piece of semantic data. A final position for the first semantic position estimate is received. One or more three-dimensional semantic labels are applied to the geometric map based on the final position of the first semantic position estimate. A warped semantic map is generated. Generating the warped semantic map comprises warping the semantic map based on the one or more three-dimensional semantic labels.
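The warping step can be sketched as interpolating, over the semantic map's geometry, the displacements between each semantic element's initial position estimate and its final aligned position; the inverse-distance weighting below is an assumption about how such a warp might be interpolated.

import numpy as np

def warp_semantic_map(points, anchor_initial, anchor_final, eps=1e-6):
    # points: (N, 3) float array of semantic-map geometry to warp.
    # anchor_initial / anchor_final: (M, 3) positions of labeled semantic
    # elements before and after alignment to the geometric map.
    displacements = anchor_final - anchor_initial
    warped = np.empty_like(points)
    for i, p in enumerate(points):
        # Weight each anchor's displacement by inverse distance to the point.
        d = np.linalg.norm(anchor_initial - p, axis=1)
        weights = 1.0 / (d + eps)
        weights /= weights.sum()
        warped[i] = p + weights @ displacements
    return warped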