G06V10/803

Data Processing Method and Apparatus, and Related Device
20230162008 · 2023-05-25

A data processing method includes obtaining first data and second data, where the first data and the second data are adjacent sequence data, and a sequence of the first data is prior to a sequence of the second data; padding third data between the first data and the second data according to a preset rule to obtain fourth data, where the third data isolates the first data from the second data; and performing data processing on the fourth data using a convolutional neural network.
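A minimal sketch of the padding step described above: the "third data" can be a short run of neutral values inserted between the two adjacent segments so that a convolution kernel never mixes them. The padding value and length here are illustrative assumptions; the abstract only requires that the third data isolate the first data from the second.

```python
def pad_between(first, second, pad_value=0.0, pad_len=2):
    """Insert isolating padding ('third data') between two adjacent
    sequence segments, yielding the 'fourth data' to be convolved.

    pad_value and pad_len are hypothetical choices for illustration.
    """
    third = [pad_value] * pad_len
    return list(first) + third + list(second)

fourth = pad_between([1.0, 2.0], [3.0, 4.0])
# fourth == [1.0, 2.0, 0.0, 0.0, 3.0, 4.0]
```

With a kernel no wider than the padding run, no single convolution window spans both the first and the second data.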

ADVANCED DRIVER ASSIST SYSTEM, METHOD OF CALIBRATING THE SAME, AND METHOD OF DETECTING OBJECT IN THE SAME
20230110116 · 2023-04-13

An advanced driver assist system (ADAS) includes a processing circuit and a memory storing instructions executable by the processing circuit. The processing circuit executes the instructions to cause the ADAS to: obtain, from a vehicle, a video sequence including a plurality of frames captured while driving the vehicle, where each of the frames corresponds to a stereo image including a first viewpoint image and a second viewpoint image; determine depth information in the stereo image based on reflected signals received while driving the vehicle; fuse the stereo image and the depth information to generate fused information; and detect at least one object included in the stereo image based on the fused information.
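One common way to realize the fusion step is to append the reflected-signal depth as an extra per-pixel channel alongside the stereo colour data; the abstract does not fix a particular fusion operator, so the channel-concatenation below is an illustrative assumption.

```python
def fuse_frame(stereo_pixels, depths):
    """Fuse per-pixel stereo colour with depth derived from reflected
    signals by appending depth as a fourth channel (one plausible
    fusion strategy; a detector would then consume the fused tuples).
    """
    return [(r, g, b, d) for (r, g, b), d in zip(stereo_pixels, depths)]

fused = fuse_frame([(10, 20, 30), (40, 50, 60)], [1.5, 2.5])
# fused == [(10, 20, 30, 1.5), (40, 50, 60, 2.5)]
```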

METHOD AND APPARATUS FOR RECOGNIZING VEHICLE LANE CHANGE TREND
20230110730 · 2023-04-13

Example methods and apparatus for recognizing a vehicle lane change trend are described. One example method includes obtaining laser point cloud data of a detected target vehicle. A first distance relationship value between a center line of a lane in which a current vehicle is located and the target vehicle is obtained based on the laser point cloud data. A second distance relationship value between the center line of the lane and the target vehicle is obtained. A first confidence of the first distance relationship value and a second confidence of the second distance relationship value are calculated, and a fusion distance relationship value is then calculated based on the first confidence and the second confidence. It is determined, based on the fusion distance relationship value, whether the target vehicle has a lane change trend.
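A confidence-weighted average is one plausible realization of the fusion step, and a shrinking fused distance over successive frames is one plausible trend rule; both the weighting scheme and the threshold below are illustrative assumptions, not details given in the abstract.

```python
def fuse_distance(d1, c1, d2, c2):
    """Fuse two lane-center distance estimates by normalized-confidence
    weighting (one reading of the 'fusion distance relationship value')."""
    return (c1 * d1 + c2 * d2) / (c1 + c2)

def lane_change_trend(fused_distances, threshold=0.5):
    """Flag a lane change trend when the fused distance to the lane
    center line shrinks by more than a (hypothetical) threshold."""
    return fused_distances[-1] - fused_distances[0] < -threshold

d = fuse_distance(1.0, 0.8, 2.0, 0.2)   # ≈ 1.2: leans toward the
                                        # higher-confidence estimate
```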

METHOD AND SYSTEM FOR AUTOMATIC EXTRINSIC CALIBRATION OF SPAD LIDAR AND CAMERA PAIRS
20230115660 · 2023-04-13

A method of calibrating a camera sensor and a SPAD LiDAR includes extracting identified features in each of a selected camera image and an ambient-intensity (A-I) image, generating a set of keypoints based on the identified features extracted for each of the images to provide a set of 2D camera keypoint locations and a set of 2D A-I keypoint locations, determining matched keypoints based on the set of 2D A-I keypoint locations and the set of 2D camera keypoint locations to provide a set of 2D A-I matched pixel locations and a set of 2D camera matched pixel locations, interpolating 3D point cloud data with the set of 2D A-I matched pixel locations to obtain a set of 3D LiDAR matched pixel locations, and determining and storing extrinsic parameters that transform the set of 3D LiDAR matched pixel locations to the set of 2D camera matched pixel locations.
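The keypoint-matching stage can be sketched as a greedy nearest-neighbour pairing of the two 2D keypoint sets. Real pipelines match feature descriptors rather than raw pixel distances; the distance-based rule and `max_dist` cutoff below are simplifying assumptions for illustration.

```python
def match_keypoints(cam_kps, ai_kps, max_dist=5.0):
    """Greedily pair each 2D camera keypoint with its nearest unused
    2D ambient-intensity keypoint within max_dist pixels.

    Returns (camera_index, ai_index) pairs, i.e. the matched pixel
    location sets fed to the extrinsic solver.
    """
    matches, used = [], set()
    for i, (cx, cy) in enumerate(cam_kps):
        best, best_d = None, max_dist
        for j, (ax, ay) in enumerate(ai_kps):
            if j in used:
                continue
            d = ((cx - ax) ** 2 + (cy - ay) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

pairs = match_keypoints([(0, 0), (10, 10)], [(1, 0), (10, 11)])
# pairs == [(0, 0), (1, 1)]
```

The matched A-I pixels are then lifted to 3D by interpolating the LiDAR point cloud, and the extrinsics are solved from the resulting 3D–2D correspondences (e.g. with a PnP solver).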

Machine learning to predict cognitive image composition

An automatic method of determining an image composition procedure that generates a new image visualization based on aggregations and variations of input images. A set of input images is received. Visual features are extracted from the input images. Context associated with the input images is received. Based on the extracted visual features and the context associated with the input images, a composition procedure comprising a set of image operations to apply on the set of input images is learned. One or more image operations in the composition procedure are determined to present to a user. A difference visualization image associated with the input images may be generated by executing the one or more image operations.

Verification system
11625466 · 2023-04-11

A device includes memory and a processor. The device receives biometric information. The device receives location information. The device compares the received biometric information with stored biometric information. The device compares the received location information with stored location information. The device determines whether the received biometric information matches the stored biometric information. The device determines whether the received location information matches the stored location information. The device sends an electronic communication that indicates whether the received biometric information matches the stored biometric information and whether the received location information matches the stored location information.
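The compare-and-report flow reduces to two independent match checks whose results are bundled into one outgoing message. The exact-equality comparison and the dictionary message format below are hypothetical stand-ins; a real system would use fuzzy biometric matching and a defined wire format.

```python
def verify(received_bio, stored_bio, received_loc, stored_loc):
    """Compare received biometric and location information against
    stored records and return the electronic communication payload
    (exact-match comparison is an illustrative simplification)."""
    return {
        "biometric_match": received_bio == stored_bio,
        "location_match": received_loc == stored_loc,
    }

msg = verify("fingerprint-A", "fingerprint-A", "40.7,-74.0", "34.0,-118.2")
# msg == {"biometric_match": True, "location_match": False}
```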

TARGET DETECTION METHOD AND APPARATUS
20230072289 · 2023-03-09

A target detection method and apparatus are provided. A first image of a target scenario collected by an image sensor is analyzed to obtain one or more first 2D detection boxes of the target scenario, and a three-dimensional point cloud of the target scenario collected by a laser sensor is analyzed to obtain one or more second 2D detection boxes of the target scenario in one or more views (for example, a bird's-eye view (BEV) and/or a perspective view (PV)). Then, comprehensive analysis is performed on a matching degree and confidence of the one or more first 2D detection boxes, and a matching degree and confidence of the one or more second 2D detection boxes, to obtain a 2D detection box of a target. Finally, a 3D model of the target is obtained based on a three-dimensional point corresponding to the 2D detection box of the target.
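One simple reading of the "comprehensive analysis" step is to match camera boxes against point-cloud boxes by intersection-over-union (a standard matching degree) and keep the higher-confidence box of each matched pair. The IoU threshold and the keep-the-winner rule are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_boxes(img_boxes, pc_boxes, iou_thresh=0.5):
    """For each matched (IoU >= threshold) camera/point-cloud box pair,
    keep the higher-confidence box as the target's 2D detection box.

    Each entry is ((x1, y1, x2, y2), confidence).
    """
    fused = []
    for ib, ic in img_boxes:
        for pb, pc in pc_boxes:
            if iou(ib, pb) >= iou_thresh:
                fused.append(ib if ic >= pc else pb)
    return fused
```

The surviving 2D box then selects the corresponding 3D points from which the target's 3D model is built.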

Geographic object detection apparatus and geographic object detection method

A geographic object recognition unit (120) recognizes, using image data (192) obtained by photographing in a measurement region where a geographic object exists, a type of the geographic object from an image that the image data (192) represents. A position specification unit (130) specifies, using three-dimensional point cloud data (191) indicating a three-dimensional coordinate value of each of a plurality of points in the measurement region, a position of the geographic object.

Combined track confidence and classification model

Techniques are disclosed for a combined machine learned (ML) model that may generate a track confidence metric associated with a track and/or a classification of an object. Techniques may include obtaining a track. The track, which may include object detections from one or more sensor data types and/or pipelines, may be input into a machine-learning (ML) model. The model may output a track confidence metric and a classification. In some examples, if the track confidence metric does not satisfy a threshold, the ML model may cause the suppression of the output of the track to a planning component of an autonomous vehicle.
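The suppression behaviour can be sketched as a simple gate on the model's two outputs: forward the track and its classification to the planner only when the confidence metric satisfies the threshold. The function names, output format, and threshold value are hypothetical.

```python
def gate_track(track_confidence, classification, threshold=0.5):
    """Suppress a track whose confidence metric fails the threshold so it
    is not output to the planning component; otherwise forward both the
    classification and the confidence (illustrative representation)."""
    if track_confidence < threshold:
        return None  # suppressed: planner never sees this track
    return {"classification": classification, "confidence": track_confidence}

out = gate_track(0.92, "pedestrian")
# out == {"classification": "pedestrian", "confidence": 0.92}
```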

TRANSFORMATION OF DATA SAMPLES TO NORMAL DATA

A device includes at least one processing logic configured to obtain an input vector representing an input data sample and, until a stop criterion is met, perform successive iterations of: using an autoencoder, trained on a set of reference vectors, to encode the input vector into a compressed vector and decode the compressed vector into a reconstructed vector; calculating a reconstruction loss between the reconstructed vector and the input vector, and a gradient of the reconstruction loss; and updating the input vector for the subsequent iteration using the gradient.
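The iteration amounts to gradient descent on the input itself, pulling it toward the autoencoder's learned "normal" manifold. In the sketch below, the reconstruction is treated as constant so the gradient of the squared loss with respect to the input is simply `2 * (x - recon)`; that simplification, the toy autoencoder, and the learning rate are all illustrative assumptions.

```python
def transform_to_normal(x, autoencoder, lr=0.1, tol=1e-3, max_iter=100):
    """Iteratively update an input vector by descending its autoencoder
    reconstruction loss until the loss falls below tol (stop criterion)
    or max_iter is reached."""
    for _ in range(max_iter):
        recon = autoencoder(x)
        loss = sum((xi - ri) ** 2 for xi, ri in zip(x, recon))
        if loss < tol:  # stop criterion met
            break
        grad = [2 * (xi - ri) for xi, ri in zip(x, recon)]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# Toy 'autoencoder' that always reconstructs the reference mean [0, 0]:
# the anomalous input is pulled toward that normal point.
normal = transform_to_normal([5.0, -3.0], lambda v: [0.0, 0.0])
```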