G06T7/85

SYSTEMS AND METHODS FOR MEASUREMENT OF 3D ATTRIBUTES USING COMPUTER VISION
20230066820 · 2023-03-02 ·

A system including a computing device and a camera is disclosed; the system is configured for measuring three-dimensional attributes and associated performance measurements of a mechanical device. Some embodiments comprise a camera configured to capture images of the mechanical device and a computing device in communication with the camera. In some embodiments, the computing device is configured to access a first set of pixels associated with a plurality of fiducials to calibrate a spatial resolution of the camera. A second image from the camera can be converted into a second set of pixels associated with each of the plurality of fiducials, which are attached to the mechanical device. The computing device can be further configured to compare the first and second sets of pixels to determine the locations of the plurality of fiducials on the mechanical device.
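As an illustration of the fiducial-based calibration described above, the sketch below (function names are my own, not from the application) derives a millimetres-per-pixel scale from two fiducials at a known physical spacing, then uses that scale to convert a pixel displacement between the first and second sets of pixels into a physical displacement:

```python
import math

def spatial_resolution_mm_per_px(p1_px, p2_px, spacing_mm):
    # Calibrate spatial resolution from two fiducial centroids whose
    # physical separation is known.
    return spacing_mm / math.dist(p1_px, p2_px)

def fiducial_displacement_mm(ref_px, cur_px, mm_per_px):
    # Compare the calibrated (first) and current (second) pixel
    # locations of a fiducial to measure how far it has moved.
    return math.dist(ref_px, cur_px) * mm_per_px

# Fiducials 50 mm apart imaged 100 px apart -> 0.5 mm per pixel
scale = spatial_resolution_mm_per_px((100, 200), (200, 200), 50.0)
# The same fiducial shifts 8 px between frames -> 4 mm of travel
travel = fiducial_displacement_mm((100, 200), (108, 200), scale)
```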

Method And System For The Calibration Of An Object Reconstruction Device

The present invention relates to a method and system for the optical and geometric calibration of a device configured for reconstructing the three-dimensional shape of an object. The object is reconstructed from a set of images thereof acquired by a plurality of cameras, where said plurality of cameras must necessarily be calibrated so that reconstruction is carried out without errors. In the context of the invention, calibration consists of obtaining the extrinsic parameters of said plurality of cameras; that is, the positions of their optical centers and the spatial orientations of their optical axes.
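The extrinsic parameters named above can be made concrete with a minimal sketch (assumed notation, not the application's): with world-to-camera rotation R encoding the optical-axis orientation and optical centre C, a world point X_w maps into the camera frame as X_c = R(X_w − C):

```python
def world_to_camera(X_w, R, C):
    # Extrinsic parameters: R orients the optical axis (world -> camera
    # rotation) and C is the optical centre in world coordinates.
    # A world point maps into the camera frame as X_c = R @ (X_w - C).
    d = [X_w[i] - C[i] for i in range(3)]
    return [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
```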

DEPTH IMAGE PROCESSING METHOD, SMALL OBSTACLE DETECTION METHOD AND SYSTEM, ROBOT, AND MEDIUM
20230063535 · 2023-03-02 ·

Provided are a depth image processing method, and a small obstacle detection method and system. The method comprises sensor calibration, distortion and epipolar rectification, data alignment, and sparse stereo matching. The depth image processing method and the small obstacle detection method and system of the present invention require execution of sparse stereo matching only on hole portions of a structured light depth image, and do not require stereo matching of the entire image, thereby significantly reducing the overall computational load for processing a depth image and enhancing the robustness of the system.
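The hole-only matching strategy might be sketched as follows for a single rectified scanline (a toy 1-pixel SAD cost and hypothetical function names; the patented pipeline is more elaborate):

```python
def fill_holes_sparse(depth_row, left_row, right_row, max_disp, f_times_b):
    # Run 1-D matching along the rectified epipolar line only where the
    # structured-light depth is missing (hole == 0); valid depths pass through.
    out = list(depth_row)
    for x, d in enumerate(depth_row):
        if d != 0:
            continue  # structured-light depth already valid: skip matching
        best_cost, best_disp = float("inf"), None
        for disp in range(1, max_disp + 1):
            if x - disp < 0:
                break
            cost = abs(left_row[x] - right_row[x - disp])  # SAD, 1-px window
            if cost < best_cost:
                best_cost, best_disp = cost, disp
        if best_disp is not None:
            out[x] = f_times_b / best_disp  # depth = f * B / disparity
    return out
```

Because the inner loop runs only at hole pixels, the cost scales with the number of holes rather than the full image area, which is the computational saving the abstract claims.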

AUTOMATICALLY DETERMINING EXTRINSIC PARAMETERS OF MODULAR EDGE COMPUTING DEVICES
20230120944 · 2023-04-20 ·

Implementations are disclosed for automatic commissioning, configuring, calibrating, and/or coordinating sensor-equipped modular edge computing devices that are mountable on agricultural vehicles. In various implementations, neighbor modular edge computing device(s) that are mounted on a vehicle nearest a given modular edge computing device may be detected based on sensor signal(s) generated by contactless sensor(s) of the given modular edge computing device. Based on the detected neighbor modular edge computing device(s), an ordinal position of the given modular edge computing device may be determined relative to a plurality of modular edge computing devices mounted on the agricultural vehicle. Based on the sensor signal(s), distance(s) to the neighbor modular edge computing device(s) may be determined. Extrinsic parameters of the given modular edge computing device may be determined based on the ordinal position of the given modular edge computing device and the distance(s).

SYSTEMS, DEVICES, AND METHODS FOR IMAGING AND MEASUREMENT

A portable, handheld system for target measurement is provided. The system comprises an imaging assembly comprising first and second camera sensors, separated from one another by a fixed separation distance; and a processor operably coupled to the imaging assembly, the processor being configured to: activate the imaging assembly to capture a primary image of the target with the first camera sensor and to capture a secondary image of the target with the second camera sensor, wherein the target is in a field of view of each of the first and second camera sensors; analyze the captured primary and secondary images to determine a pixel shift value for the target; calculate a parallax value between the primary and secondary images using the determined pixel shift value; compute measurement data related to the target based on the calculated parallax value; and output the measurement data to a display of the imaging system.
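The pixel-shift-to-measurement chain admits a compact sketch using the standard stereo triangulation formula Z = fB/d (the exact computation in the application may differ; names here are illustrative):

```python
def target_measurements(pixel_shift, focal_px, baseline_m, width_px):
    # Range from parallax: Z = f * B / d (d = disparity in pixels),
    # then physical width from the pinhole model: W = Z * w_px / f.
    if pixel_shift <= 0:
        raise ValueError("target must exhibit positive parallax")
    z_m = focal_px * baseline_m / pixel_shift
    width_m = z_m * width_px / focal_px
    return z_m, width_m

# 800 px focal length, 0.10 m sensor separation, 20 px shift -> 4.0 m range,
# and a 100 px wide target at that range spans 0.5 m
z, w = target_measurements(20.0, 800.0, 0.10, 100.0)
```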

Auto-calibration of vehicle sensors

A system for automatically calibrating sensors of a vehicle includes an electronic control unit, a projector communicatively coupled to the electronic control unit, a first sensor communicatively coupled to the electronic control unit, and a second sensor communicatively coupled to the electronic control unit. The electronic control unit is configured to project, with the projector, a calibration pattern onto a surface, capture, with the first sensor, a first portion of the calibration pattern, capture, with the second sensor, a second portion of the calibration pattern, and calibrate the first sensor and the second sensor based on at least one feature sensed within the first portion of the calibration pattern and the second portion of the calibration pattern.

System and method for coordinating landmark based collaborative sensor calibration
11630454 · 2023-04-18 ·

The present teaching relates to methods, systems, media, and implementations for sensor calibration. A request is received from an ego vehicle in motion on a route for assistance in collaborative calibration of a sensor deployed on the ego vehicle. The request specifies a position of the ego vehicle and a configuration of the sensor with respect to the ego vehicle. A collaborative means along the route is identified based on the ego vehicle's position, the configuration of the sensor, the route, and the position associated with the collaborative means. A calibration assistance package is generated in response to the request and sent to the ego vehicle. The calibration assistance package includes information about the collaborative means that can be used to identify the collaborative means while in motion along the route for calibrating the sensor while the ego vehicle is in the vicinity of the collaborative means.

Stereo calibration method for movable vision system

A stereo calibration method for a movable vision system. The movable vision system involved in the method comprises at least two photography components, at least one calculation component, and at least one control component. The method comprises: placing a calibration template in front of the photography components; rotating the photography components about their degrees of freedom of motion; obtaining one or more groups of images containing calibration template features, along with the corresponding position information at the degrees of freedom of motion when the images are obtained; computing a calibration result relating the photography components and the degrees of freedom of motion, i.e., a rotation matrix and a translation matrix of the rotation axis at each degree of freedom of motion with respect to the photography components; and then obtaining a stereo calibration result in real time by combining the obtained calibration result with the position information at the degrees of freedom of motion in the vision system. The method implements stereo calibration of the photography components of a movable multiocular system in a motion state, offers good real-time computational performance, eliminates errors caused by machining or assembly tolerances, and has broad application prospects.
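One plausible reading of "combining the obtained calibration result and the position information" is a chain of homogeneous transforms: the fixed axis-to-camera calibration composed with the live joint angle. The sketch below assumes that composition (my interpretation, not the patent's stated formula):

```python
import math

def rot_z(theta):
    # Homogeneous 4x4 rotation about the z-axis (the calibrated rotation axis).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def live_extrinsics(T_c1_axis, theta, T_axis_c2):
    # Chain the offline axis calibration with the live joint reading:
    # T_c1_c2(theta) = T_c1_axis . Rz(theta) . T_axis_c2
    # (an assumed composition; the patented chain may differ).
    return matmul(matmul(T_c1_axis, rot_z(theta)), T_axis_c2)

# With identity axis transforms, the live extrinsic is pure Rz(theta)
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
T = live_extrinsics(I4, math.pi / 2, I4)
```

Because the axis calibration is estimated once offline, each new joint reading costs only two matrix products at run time, which matches the method's claimed real-time behaviour.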

METHOD OF CALIBRATING CAMERAS
20230162398 · 2023-05-25 ·

A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capture, the method comprising a step of initial calibration before the scene capture. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capture the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation if needed, in order to obtain improved scene capture after the further calibration.
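The drift check in the further-calibration step could look like the following small-angle sketch (the tolerance, names, and pinhole conversion are assumptions, not claimed by the application):

```python
import math

def pan_tilt_correction(ref_uv, cur_uv, focal_px, tol_px=1.0):
    # Drift of the stationary reference object's image point between the
    # reference frame and a captured scene frame, converted to small
    # pan/tilt corrections (radians) via the pinhole model.
    du = cur_uv[0] - ref_uv[0]
    dv = cur_uv[1] - ref_uv[1]
    if max(abs(du), abs(dv)) <= tol_px:
        return None  # within tolerance: no adaptation of the camera needed
    return math.atan(du / focal_px), math.atan(dv / focal_px)
```

Returning `None` when the reference image has not measurably moved mirrors the "if needed" qualifier in the abstract: only cameras whose reference drift exceeds tolerance are re-posed.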

OPTICAL TRACKING DEVICE WITH BUILT-IN STRUCTURED LIGHT MODULE

A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.