G06T7/85

Calibration for multi-camera and multisensory systems

A method and apparatus for calibrating an image capture device are provided. The method includes capturing one or more single-view or multi-view image sets by the image capture device, detecting one or more calibration features in each set by a processor, initializing each of one or more calibration parameters to a corresponding default value, extracting one or more relevant calibration parameters, computing an individual cost term for each of the identified relevant calibration parameters, and scaling each of the relevant cost terms. The method continues with combining all the cost terms once each of the calculated relevant cost terms has been scaled, determining whether the combination of the cost terms has been minimized, adjusting the calibration parameters if it is determined that the combination of the cost terms has not been minimized, and returning to the step of extracting one or more of the relevant calibration parameters.
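The loop described in this abstract (initialize to defaults, compute and scale cost terms, combine, adjust until minimized) can be sketched as a simple coordinate search. All names, the summation used to combine terms, and the step-size scheme are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of the calibration loop: initialize parameters to
# defaults, compute scaled cost terms, combine them, and adjust parameters
# until the combined cost stops decreasing.

def calibrate(cost_terms, scales, defaults, step=0.01, max_iters=1000, tol=1e-9):
    params = list(defaults)  # initialize each parameter to its default value

    def combined(p):
        # compute each relevant cost term, scale it, then combine (here: sum)
        return sum(s * f(p) for f, s in zip(cost_terms, scales))

    best = combined(params)
    for _ in range(max_iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):  # adjust one parameter at a time
                trial = list(params)
                trial[i] += delta
                c = combined(trial)
                if c < best - tol:
                    params, best, improved = trial, c, True
        if not improved:  # combination of cost terms has been minimized
            break
    return params, best
```

A real implementation would use a nonlinear least-squares solver rather than this naive search, but the control flow mirrors the abstract's minimize/adjust/repeat structure.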

METHOD AND SYSTEM FOR PERFORMING AUTOMATIC CAMERA CALIBRATION
20230025684 · 2023-01-26

A system and method for performing automatic camera calibration is provided. The system receives a calibration image, and determines a plurality of image coordinates for representing respective locations at which a plurality of pattern elements of a calibration pattern appear in a calibration image. The system determines, based on the plurality of image coordinates and defined pattern element coordinates, an estimate for a first lens distortion parameter of a set of lens distortion parameters, wherein the estimate for the first lens distortion parameter is determined while estimating a second lens distortion parameter of the set of lens distortion parameters to be zero, or is determined without estimating the second lens distortion parameter. The system determines, after the estimate of the first lens distortion parameter is determined, an estimate for the second lens distortion parameter based on the estimate for the first lens distortion parameter.
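The staged estimation described above (estimate a first distortion parameter while treating a second as zero, then estimate the second from what the first leaves unexplained) can be sketched with a simple radial model. The model r_d = r(1 + k1·r² + k2·r⁴) and the closed-form fits are illustrative assumptions, not the patent's formulation:

```python
# Hedged sketch: estimate k1 with k2 held at zero, then estimate k2 from the
# residual remaining after applying the k1 estimate.

def estimate_k1_then_k2(r_undistorted, r_distorted):
    # Stage 1: fit k1 alone (k2 assumed zero): r_d - r ~ k1 * r^3
    num = sum((rd - r) * r**3 for r, rd in zip(r_undistorted, r_distorted))
    den = sum(r**6 for r in r_undistorted)
    k1 = num / den
    # Stage 2: fit k2 to what the k1 estimate leaves unexplained: residual ~ k2 * r^5
    num2 = sum((rd - r - k1 * r**3) * r**5 for r, rd in zip(r_undistorted, r_distorted))
    den2 = sum(r**10 for r in r_undistorted)
    k2 = num2 / den2
    return k1, k2
```

Fitting the parameters sequentially rather than jointly keeps each stage a one-dimensional linear fit, at the cost of some bias that the second stage partially absorbs.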

Self-supervised training of a depth estimation model using depth hints

A method for training a depth estimation model with depth hints is disclosed. For each image pair: for a first image, a depth prediction is determined by the depth estimation model and a depth hint is obtained; the second image is projected onto the first image once to generate a synthetic frame based on the depth prediction and again to generate a hinted synthetic frame based on the depth hint; a primary loss is calculated with the synthetic frame; a hinted loss is calculated with the hinted synthetic frame; and an overall loss is calculated for the image pair based on a per-pixel determination of whether the primary loss or the hinted loss is smaller, wherein if the hinted loss is smaller than the primary loss, then the overall loss includes the primary loss and a supervised depth loss between depth prediction and depth hint. The depth estimation model is trained by minimizing the overall losses for the image pairs.
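The per-pixel loss selection in this abstract can be sketched directly: every pixel contributes the primary loss, and pixels where the hinted loss is smaller additionally contribute a supervised loss between prediction and hint. The flat per-pixel lists and the absolute-difference supervised loss are illustrative simplifications:

```python
# Hedged sketch of the overall-loss rule: where the depth hint reconstructs
# the pixel better than the prediction does, add a supervised depth loss
# pulling the prediction toward the hint.

def overall_loss(primary, hinted, depth_pred, depth_hint):
    total = 0.0
    for lp, lh, dp, dh in zip(primary, hinted, depth_pred, depth_hint):
        total += lp  # the primary (photometric) loss always contributes
        if lh < lp:  # the hint produced a better reconstruction at this pixel
            total += abs(dp - dh)  # supervised depth loss toward the hint
    return total
```

The effect is that hints only supervise the model at pixels where they demonstrably help, which is what makes noisy hints usable.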

IMAGE PROCESSING DEVICE
20230231965 · 2023-07-20

An image processing device includes a rotation processor and an image processor. The rotation processor receives an input image and generates a temporary image according to the input image. The image processor is coupled to the rotation processor and outputs a processed image according to the temporary image, wherein the image processor has a predetermined image processing width, a width of the input image is larger than the predetermined image processing width, and a width of the temporary image is less than the predetermined image processing width.
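One way to read this abstract is that the rotation processor turns a too-wide input into a temporary image whose width fits the processor's fixed limit, and the result is rotated back afterward. That reading, and all names below, are assumptions for illustration:

```python
# Minimal sketch, assuming "rotation" means a 90-degree rotation so that a
# too-wide image's height becomes the temporary image's width.

def rotate_90(image):
    # image is a list of rows; a 90-degree rotation swaps width and height
    return [list(row) for row in zip(*image[::-1])]

def process_with_width_limit(image, max_width, process_row):
    width = len(image[0])
    if width > max_width:
        temp = rotate_90(image)  # temporary image: width now <= max_width
        processed = [process_row(r) for r in temp]
        # three more 90-degree rotations undo the first one
        return rotate_90(rotate_90(rotate_90(processed)))
    return [process_row(r) for r in image]
```

This lets a fixed-width hardware pipeline handle images wider than its line buffer, provided the height fits.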

Multi-sensor sequential calibration system

Techniques for performing sensor calibration using sequential data are disclosed. An example method includes receiving, from a first camera located on a vehicle, a first image comprising at least a portion of a road comprising lane markers, where the first image is obtained by the first camera at a first time; obtaining a calculated value of a position of an inertial measurement (IM) device at the first time; obtaining an optimized first extrinsic matrix of the first camera by adjusting a function of a first actual pixel location of a location of a lane marker in the first image and an expected pixel location of the location of the lane marker; and performing autonomous operation of the vehicle using the optimized first extrinsic matrix of the first camera when the vehicle is operated on another road or at another time.
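The optimization step (adjust the extrinsics so the expected pixel location of a lane marker matches the actual observed pixel) can be sketched with a one-parameter toy. A real extrinsic matrix has six degrees of freedom; the single scalar parameter, the linear projection, and the search scheme below are illustrative assumptions:

```python
# Hedged toy sketch: adjust one extrinsic parameter until the projected
# (expected) pixel location of a lane marker matches the actual pixel
# location observed in the image.

def optimize_extrinsic(actual_px, project, theta0, step=1e-3, iters=5000):
    # project(theta) -> expected pixel location under extrinsic parameter theta
    theta = theta0
    err = abs(project(theta) - actual_px)
    for _ in range(iters):
        for cand in (theta + step, theta - step):
            e = abs(project(cand) - actual_px)
            if e < err:
                theta, err = cand, e
        if err == 0:
            break
    return theta, err
```

In practice this residual would be summed over many lane markers and frames and minimized with a proper solver, but the adjusted quantity and the error being driven down are the same.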

Dynamic calibration of 3D acquisition systems

Embodiments are directed to a sensing system that employs beams to scan paths across an object such that sensors may detect the beams reflected by the scanned object. Events may be provided based on the detected signals and the paths such that each event may be associated with a sensor and event metrics. Crossing points for each sensor may be determined based on where the paths intersect the scanned object such that events associated with each sensor are associated with the crossing points for each sensor. Each crossing point of each sensor may be compared to each corresponding crossing point of each other sensor. Actual crossing points may be determined based on the comparison and the crossing points for each sensor. Position information for each sensor may be determined based on the actual crossing points.
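One plausible reading of the comparison step is that the "actual" crossing point is a consensus of each sensor's estimate of the same crossing point, from which each sensor's positional offset follows. That formulation (mean as consensus, mean displacement as offset) is an assumption for illustration:

```python
# Illustrative sketch: take the mean of corresponding per-sensor crossing
# points as the actual crossing point, then derive each sensor's positional
# offset as its average displacement from those actual points.

def actual_crossing_points(per_sensor_points):
    # per_sensor_points: {sensor: [(x, y), ...]}, aligned by crossing index
    n = len(next(iter(per_sensor_points.values())))
    actual = []
    for i in range(n):
        xs = [pts[i][0] for pts in per_sensor_points.values()]
        ys = [pts[i][1] for pts in per_sensor_points.values()]
        actual.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return actual

def sensor_offsets(per_sensor_points, actual):
    offsets = {}
    for s, pts in per_sensor_points.items():
        dx = sum(p[0] - a[0] for p, a in zip(pts, actual)) / len(pts)
        dy = sum(p[1] - a[1] for p, a in zip(pts, actual)) / len(pts)
        offsets[s] = (dx, dy)
    return offsets
```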

System and method for runtime determination of camera miscalibration

This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids the need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose whether the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as an indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allow for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration.
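The statistics idea above (accumulate alignment scores over time and flag likely miscalibration when current scores drift from the historical baseline) can be sketched as a running monitor. The window sizes, the mean-drop test, and the threshold are illustrative assumptions:

```python
# Hedged sketch: accumulate runtime alignment scores and flag probable
# miscalibration when the recent mean falls well below the historical mean.

class MiscalibrationMonitor:
    def __init__(self, window=20, drop_threshold=0.1):
        self.scores = []              # historical alignment scores
        self.window = window          # size of the "recent" window
        self.drop_threshold = drop_threshold

    def add_score(self, score):
        self.scores.append(score)

    def is_miscalibrated(self):
        if len(self.scores) < 2 * self.window:
            return False  # not enough history accumulated yet
        baseline = sum(self.scores[:-self.window]) / (len(self.scores) - self.window)
        recent = sum(self.scores[-self.window:]) / self.window
        return baseline - recent > self.drop_threshold
```

Because only scores the system already computes during normal alignment are consumed, the check runs without stopping the production line, matching the abstract's motivation.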

SYSTEM AND METHOD FOR COLLABORATIVE SENSOR CALIBRATION
20230213939 · 2023-07-06

The present teaching relates to a method, system, medium, and implementations for sensor calibration. An ego vehicle determines whether a sensor deployed on the ego vehicle to facilitate its autonomous driving needs to be calibrated and, if so, sends a request for assistance in collaborative calibration of the sensor, together with a first position of the ego vehicle or a first configuration of the sensor with respect to the ego vehicle. When a response to the request is received, an assisting vehicle is directed to travel near the ego vehicle to facilitate the calibration of the sensor by collaborating with the moving ego vehicle, and the ego vehicle coordinates with the assisting vehicle so that the sensor can acquire information about a target present on the assisting vehicle for the collaborative calibration of the sensor.
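The request/response flow above can be sketched as a pair of message handlers. All message fields, the target name, and the rendezvous logic are assumptions for illustration, not details from the patent:

```python
# Toy sketch of the collaborative-calibration protocol: an ego vehicle that
# decides a sensor needs calibration broadcasts a request with its position
# and sensor configuration; an accepting assisting vehicle is directed to
# rendezvous near the ego vehicle bearing a known calibration target.

def make_calibration_request(ego_position, sensor_config):
    return {"type": "calib_request",
            "position": ego_position,
            "sensor": sensor_config}

def handle_response(request, response):
    # Once an affirmative response arrives, direct the assisting vehicle
    # to travel near the requesting ego vehicle.
    if response.get("accepted"):
        return {"type": "rendezvous",
                "meet_near": request["position"],
                "target": "calibration_board"}
    return None
```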

CAMERA ARRAY CALIBRATION IN A FARMING MACHINE

The calibration system of the farming machine receives images from each camera of the camera array. The images comprise visual information representing a view of a portion of an area surrounding the farming machine. To calibrate a pair of cameras including a first camera and a second camera, the calibration system determines a relative pose between the pair of cameras by extracting relative position and orientation characteristics from visual information in both an image received from the first camera and an image received from the second camera. The calibration system identifies a calibration error for the pair of cameras based on a comparison of the relative pose with an expected pose between the pair of cameras. The calibration system transmits a notification to an operator of the farming machine that describes the calibration error and instructions for remedying the calibration error.
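The comparison step (measured relative pose versus expected pose, producing an operator-readable calibration error) can be sketched as below. The pose representation (a single yaw angle plus a translation vector), the thresholds, and the message format are illustrative assumptions:

```python
import math

# Minimal sketch: compare a measured relative pose between two cameras
# against the expected pose and report a calibration error when either the
# rotation or the translation deviates beyond a tolerance.

def pose_error(measured, expected):
    # poses given as (yaw_radians, (tx, ty, tz)) relative transforms
    dyaw = abs(measured[0] - expected[0])
    dt = math.dist(measured[1], expected[1])
    return dyaw, dt

def check_calibration(measured, expected, max_yaw=0.02, max_trans=0.05):
    dyaw, dt = pose_error(measured, expected)
    if dyaw > max_yaw or dt > max_trans:
        return (f"calibration error: yaw off by {dyaw:.3f} rad, "
                f"translation off by {dt:.3f} m")
    return None  # pair is within tolerance; no notification needed
```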

ON-VEHICLE SPATIAL MONITORING SYSTEM

A vehicle control system includes a spatial monitoring system with on-vehicle cameras that capture images, from which a plurality of three-dimensional (3D) points are recovered. A left ground plane normal vector is determined for a left image, a center ground plane normal vector is determined for a front image, and a right ground plane normal vector is determined for a right image. A first angle difference between the left ground plane normal vector and the center ground plane normal vector is determined, and a second angle difference between the right ground plane normal vector and the center ground plane normal vector is determined. An uneven ground surface is determined based upon one of the first angle difference or the second angle difference, and an alignment compensation factor for the left camera or the right camera is determined. A bird's eye view image is determined based upon the alignment compensation factor.
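The angle test above (compare the left and right ground-plane normals against the center normal, and flag uneven ground when either angle difference is large) can be sketched directly; the 5-degree threshold is an illustrative assumption:

```python
import math

# Sketch of the ground-plane angle test: angle between two plane normals via
# the dot product, then a threshold on the left-vs-center and right-vs-center
# angle differences.

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # clamp to guard against rounding pushing the ratio outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def uneven_ground(left_n, center_n, right_n, threshold=math.radians(5)):
    first = angle_between(left_n, center_n)    # first angle difference
    second = angle_between(right_n, center_n)  # second angle difference
    return first > threshold or second > threshold
```

On flat ground all three normals agree and both differences are near zero; a tilted left or right plane trips the threshold and would trigger the alignment compensation described in the abstract.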