G06T7/80

Homography error correction

An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
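
The check described above can be sketched in a few lines. This is an illustrative sketch only: the 3x3 homography, the pixel location, and the threshold value are made-up examples, not values from the patent.

```python
# Hedged sketch of the homography error check: apply the homography to
# a pixel location, compare the estimated (x, y) against the known
# actual sensor location, and flag recomputation if the difference
# exceeds a threshold.

def apply_homography(H, px, py):
    """Map a pixel (px, py) to global-plane (x, y) via a 3x3 homography."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

def needs_recalibration(H, pixel, actual_xy, threshold):
    """Compare estimated vs. actual sensor location; True => recompute H."""
    ex, ey = apply_homography(H, *pixel)
    ax, ay = actual_xy
    diff = ((ex - ax) ** 2 + (ey - ay) ** 2) ** 0.5
    return diff > threshold

# Identity-like homography: pixels map straight onto global coordinates.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(needs_recalibration(H, (100, 50), (100.0, 50.0), 0.5))  # False
print(needs_recalibration(H, (100, 50), (103.0, 54.0), 0.5))  # True
```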

Method for calibrating a photodetector array, a calibration device, and an associated imaging system
11557063 · 2023-01-17

A method for calibrating a photodetector array supplying a video stream includes: a determination step, wherein an offset table is determined for each current image of the video stream based on at least two corrections from among the following: a first correction from a comparison of the current image to a corresponding predetermined reference table; a second correction from a calculation of a column error of the current image; and a third correction from a high-pass temporal filtering of the video stream; and a calculation step, wherein a current value of an offset table, equal to a sum between a previous value of the offset table and a weighted sum of at least two corrections, is calculated, with each coefficient of the offset table being associated with a respective photodetector of the array.
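
The calculation step above (new table = previous table + weighted sum of at least two corrections) can be sketched as follows. The weights and correction values are illustrative assumptions, not figures from the patent.

```python
# Hedged sketch of the offset-table update: each photodetector's
# coefficient is advanced by a weighted sum of correction tables
# (e.g. reference-table comparison, column-error correction).

def update_offset_table(prev_table, corrections, weights):
    """prev_table: per-photodetector offsets.
    corrections: correction tables, each the same length as prev_table.
    weights: one weight per correction table."""
    new_table = list(prev_table)
    for corr, w in zip(corrections, weights):
        for i, c in enumerate(corr):
            new_table[i] += w * c
    return new_table

prev = [0.0, 0.1, -0.2]        # one coefficient per photodetector
ref_corr = [0.5, -0.5, 0.0]    # e.g. from reference-table comparison
col_corr = [0.1, 0.1, 0.1]     # e.g. from column-error calculation
table = update_offset_table(prev, [ref_corr, col_corr], [0.5, 1.0])
print(table)  # approximately [0.35, -0.05, -0.1]
```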

Method and apparatus for processing video frame

Embodiments of the present disclosure provide a method and apparatus for processing a video frame, and relate to the field of computer vision technology. The method may include: acquiring a plurality of candidate first-order radial distortion parameters preset for a to-be-processed video frame, and acquiring a specified value of a specified radial distortion parameter; performing radial distortion correction on the to-be-processed video frame to obtain a first initial corrected video frame; selecting the first initial corrected video frame in which a local region, excluding the center region, includes the largest number of straight line segments after distortion correction; and determining the candidate first-order radial distortion parameter corresponding to the selected first initial corrected video frame for use as a target first-order radial distortion parameter of the to-be-processed video frame.
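
The parameter-selection step amounts to an argmax over candidate parameters. In this sketch, the correction-and-line-counting step is stubbed out as a callable (a real pipeline might detect segments with something like OpenCV's HoughLinesP); the candidate values and counts are invented for illustration.

```python
# Hedged sketch: try each candidate first-order radial distortion
# parameter k1, count straight line segments in the off-center region
# of the corrected frame, and keep the candidate with the highest count.

def select_k1(candidates, count_straight_segments):
    """Return the candidate k1 whose corrected frame yields the most
    straight segments outside the center region."""
    best_k1, best_count = None, -1
    for k1 in candidates:
        n = count_straight_segments(k1)  # correct frame, then count
        if n > best_count:
            best_k1, best_count = k1, n
    return best_k1

# Toy scoring: pretend k1 = -0.12 straightens the most lines.
scores = {-0.20: 4, -0.12: 9, -0.05: 6, 0.0: 2}
print(select_k1(scores, scores.get))  # -0.12
```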

Systems and methods for determining a target field angle of an image capturing device

The present disclosure relates to systems and methods for automatically determining a target field angle of an image capturing device. The method may include obtaining, by the image capturing device, at least two images for determining a target field angle of the image capturing device. The method may also include obtaining a field angle range of the image capturing device. Further, the method may include determining the target field angle by matching the at least two images based on the field angle range.
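
A matching-based field-angle search typically relies on the pinhole relation between field angle and focal length: for image width w and horizontal field angle theta, f = w / (2 · tan(theta / 2)) in pixels. The patent does not give this formula explicitly; it is a standard assumption shown here for illustration.

```python
import math

# Hedged sketch: convert between a horizontal field angle (degrees)
# and a pinhole focal length (pixels) for an image of a given width.

def focal_from_field_angle(width_px, fov_deg):
    """f = w / (2 * tan(theta / 2)), theta in degrees."""
    return width_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def field_angle_from_focal(width_px, f_px):
    """Inverse: theta = 2 * atan(w / (2 * f)), returned in degrees."""
    return math.degrees(2.0 * math.atan(width_px / (2.0 * f_px)))

f = focal_from_field_angle(1920, 90.0)             # about 960 px
print(round(field_angle_from_focal(1920, f), 3))   # 90.0
```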

Vehicle sensor calibration and verification
11594037 · 2023-02-28

Systems and methods for automated vehicle sensor calibration and verification are provided. One example method involves monitoring a vehicle using one or more external sensors of a vehicle calibration facility. The resulting sensor data may be indicative of a relative position of the vehicle in the vehicle calibration facility. The method also involves causing the vehicle to navigate in an autonomous driving mode, based on the sensor data, from a current position of the vehicle to a first calibration position in the vehicle calibration facility. The method also involves causing a first sensor of the vehicle to perform a first calibration measurement while the vehicle is at the first calibration position. The method also involves calibrating the first sensor based on at least the first calibration measurement.

Image processing system and method thereof for generating projection images based on inward or outward multiple-lens camera
11595574 · 2023-02-28

An image processing system is disclosed, comprising: an M-lens camera, a compensation device and a correspondence generator. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to conduct calibration for vertices to define vertex mappings, horizontally and vertically scan each lens image to determine texture coordinates of its image center, and determine texture coordinates of control points, including P1 control points in each overlap region of the projection image, according to the vertex mappings; and determine two adjacent control points and a blending weight for each vertex in each lens image according to the texture coordinates of the control points and the image center in each lens image to generate the first vertex list, where M>=2.
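
One plausible way to assign each vertex its two adjacent control points and a weight is by angle around the lens image center. The patent does not spell out this formula; the angular-interpolation scheme, the coordinates, and the wrap-around fallback below are all assumptions for illustration.

```python
import math

# Heavily hedged sketch: pick, for a vertex, the two control points
# whose angles around the image center bracket the vertex's angle, and
# compute a linear interpolation weight between them.

def adjacent_controls_and_weight(vertex, center, control_points):
    """control_points: (x, y) texture coordinates, assumed sorted by
    angle around the image center. Returns (i, j, t) where t in [0, 1]
    is 0 at control i and 1 at control j."""
    angles = [math.atan2(cy - center[1], cx - center[0])
              for cx, cy in control_points]
    va = math.atan2(vertex[1] - center[1], vertex[0] - center[0])
    for i in range(len(angles) - 1):
        if angles[i] <= va <= angles[i + 1]:
            t = (va - angles[i]) / (angles[i + 1] - angles[i])
            return i, i + 1, t
    return len(angles) - 1, 0, 0.0  # wrap-around fallback

center = (0.0, 0.0)
controls = [(1.0, 0.0), (0.0, 1.0)]  # control points at 0 and 90 degrees
print(adjacent_controls_and_weight((1.0, 1.0), center, controls))
# vertex at 45 degrees -> controls 0 and 1 with weight 0.5
```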
