Patent classifications
G06T2207/10012
Multichannel, multi-polarization imaging for improved perception
In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array with a first filter pattern. The first filter pattern includes a number of first filter types. The method also includes accessing second image data generated by a second image sensor having a second filter array with a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types, and the second filter types and the first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data based on a portion of the first image data associated with the filter type in common.
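The core idea, matching pixels across two sensors using only the channel their filter arrays share, can be sketched as follows. This is an illustrative toy, not the patented method: the function name, the sum-of-absolute-differences cost, and the 1-D rows of shared-channel intensities are all assumptions for the example.

```python
def match_on_shared_channel(row_a, row_b, window=1):
    """For each pixel in row_a, find the best-matching pixel in row_b by
    minimising sum-of-absolute-differences on the common filter type's
    channel (e.g. green, if both patterns include green)."""
    matches = []
    for i in range(window, len(row_a) - window):
        patch_a = row_a[i - window:i + window + 1]
        best_j, best_cost = None, float("inf")
        for j in range(window, len(row_b) - window):
            patch_b = row_b[j - window:j + window + 1]
            cost = sum(abs(a - b) for a, b in zip(patch_a, patch_b))
            if cost < best_cost:
                best_cost, best_j = cost, j
        matches.append((i, best_j))
    return matches

# Toy shared-channel rows: row_b is row_a shifted right by one pixel,
# so interior pixels should match one position to the right.
row_a = [10, 50, 200, 50, 10, 10]
row_b = [10, 10, 50, 200, 50, 10]
print(match_on_shared_channel(row_a, row_b))
```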
Vanishing point stereoscopic image correction
Three-dimensional image calibration and presentation is described for stereoscopic imaging systems, such as eyewear, that include a first camera and a second camera. The calibration and presentation include obtaining a calibration offset from vanishing points in images captured by the first and second cameras, to accommodate rotation of the cameras with respect to one another; adjusting a three-dimensional rendering offset by the obtained calibration offset; and presenting the stereoscopic images using the adjusted three-dimensional rendering offset.
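One plausible reading of this correction can be sketched in a few lines: estimate each camera's orientation from the angle of the ray from its principal point to a shared vanishing point, take the angular difference as the calibration offset, and add it to the rendering offset. The function names, the default principal point, and the single-vanishing-point formulation are assumptions of this sketch, not details from the patent.

```python
import math

def vp_angle(vp, principal_point):
    """Angle of the ray from the principal point to the vanishing point."""
    dx = vp[0] - principal_point[0]
    dy = vp[1] - principal_point[1]
    return math.atan2(dy, dx)

def calibration_offset(vp_left, vp_right, pp=(320.0, 240.0)):
    """Relative rotation (radians) between the two cameras, estimated from
    the same scene vanishing point observed in both images."""
    return vp_angle(vp_right, pp) - vp_angle(vp_left, pp)

def adjusted_rendering_offset(base_offset, vp_left, vp_right):
    """Adjust the 3-D rendering offset by the vanishing-point calibration."""
    return base_offset + calibration_offset(vp_left, vp_right)
```

Identical vanishing points in both views yield a zero offset, i.e. no relative rotation to compensate.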
Methods and systems for digital mammography imaging
Various methods and systems are provided for tracking a biopsy target across one or more images. In one example, a method includes determining a position of a biopsy target in a selected image of a patient based on an image registration process with a reference image of the patient, and displaying a graphical representation of the position of the biopsy target on the selected image.
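Once registration has produced a transform between the reference image and the selected image, placing the biopsy target reduces to applying that transform to the target's coordinates. A minimal sketch, assuming a 2x3 affine transform as the registration output (the patent does not specify the transform model):

```python
def map_target(position, affine):
    """Map a biopsy target position from the reference image into the
    selected image using a 2x3 affine transform ((a, b, tx), (c, d, ty))
    produced by an image registration process."""
    (a, b, tx), (c, d, ty) = affine
    x, y = position
    return (a * x + b * y + tx, c * x + d * y + ty)

# Pure translation: the patient shifted 5 px right and 3 px up
# between the reference and selected images.
print(map_target((10, 20), ((1, 0, 5), (0, 1, -3))))
```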
Depth-based image stitching for handling parallax
A solution to the problem of image and video stitching is disclosed that compensates for the effects of lens distortion, camera misalignment, and parallax in combining multiple images. The disclosed image stitching technique includes depth or disparity estimation, alignment, and blending processes configured to be computationally efficient and to produce high-quality results by limiting the presence of noticeable seams and artifacts in the final stitched image. An inter-frame approach applies image stitching to video frames to maintain temporal continuity between successive frames across a stitched video output having a 360-degree viewing perspective. A temporal adjustment is configured to improve temporal continuity between a subsequent frame and a previous frame in a sequence of video frames.
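The parallax-compensated blend can be illustrated on 1-D rows: shift the second image by its estimated disparity so the overlapping content aligns, then feather-blend the overlap to hide the seam. This is a deliberately simplified sketch, with a single global disparity instead of a per-pixel map; all names and the toy data are illustrative.

```python
def stitch_rows(row_a, row_b, disparity, overlap):
    """Toy 1-D stitch: align row_b to row_a using the estimated disparity
    (parallax compensation), then linearly feather-blend the overlap
    region so no hard seam appears in the output."""
    shifted_b = row_b[disparity:]          # disparity-based alignment
    out = list(row_a[:-overlap])
    for k in range(overlap):
        w = (k + 1) / (overlap + 1)        # feather weight ramps a -> b
        out.append((1 - w) * row_a[len(row_a) - overlap + k]
                   + w * shifted_b[k])
    out.extend(shifted_b[overlap:])
    return out

# row_b sees the same scene one pixel to the left (disparity = 1).
print(stitch_rows([1, 2, 3, 4], [0, 3, 4, 5, 6], disparity=1, overlap=2))
```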
Dense depth computations aided by sparse feature matching
A system for dense depth computation aided by sparse feature matching generates a first image using a first camera, a second image using a second camera, and a third image using a third camera. The system generates a sparse disparity map using the first image and the third image by (1) identifying a set of feature points within the first image and a set of corresponding feature points within the third image, and (2) identifying feature disparity values based on the set of feature points and the set of corresponding feature points. The system also applies the first image, the second image, and the sparse disparity map as inputs for generating a dense disparity map.
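The sparse disparity map construction in step (2) can be sketched directly: each matched feature pair yields one disparity value at the feature's image location. The dictionary representation and the assumption of horizontally aligned (rectified) cameras are choices of this sketch, not requirements stated in the abstract.

```python
def sparse_disparity(features_first, features_third):
    """Build a sparse disparity map {(row, col): disparity} from matched
    feature points in the first and third images, assuming rectified
    cameras so disparity is the horizontal shift between matches."""
    disparities = {}
    for (x1, y1), (x3, y3) in zip(features_first, features_third):
        disparities[(y1, x1)] = x1 - x3
    return disparities

# Two matched feature pairs between the first and third images.
print(sparse_disparity([(10, 5), (20, 8)], [(6, 5), (15, 8)]))
```

The resulting sparse map, together with the first and second images, would then feed the dense disparity stage.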
Calibration for multi-camera and multi-sensor systems
A method and apparatus for calibrating an image capture device are provided. The method includes capturing one or more single-view or multi-view image sets by the image capture device, detecting one or more calibration features in each set by a processor, initializing each of one or more calibration parameters to a corresponding default value, extracting one or more relevant calibration parameters, computing an individual cost term for each of the identified relevant calibration parameters, and scaling each of the relevant cost terms. The method continues with combining all the cost terms once each of the calculated relevant cost terms has been scaled, determining whether the combination of the cost terms has been minimized, adjusting the calibration parameters if it is determined that the combination has not been minimized, and returning to the step of extracting one or more of the relevant calibration parameters.
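The iterate-until-minimized loop described above can be sketched generically: compute each cost term, scale it, combine, and nudge parameters until the combined cost stops decreasing. The coordinate-descent update and all function names here are assumptions of the sketch; the patent does not prescribe a particular optimizer.

```python
def calibrate(params, cost_terms, scales, step=0.1, tol=1e-6, max_iter=1000):
    """Sketch of the described loop: scale and combine per-parameter cost
    terms, then adjust the calibration parameters (here by simple
    coordinate descent) until the combined cost is minimized."""
    def combined(p):
        return sum(s * f(p) for f, s in zip(cost_terms, scales))

    cost = combined(params)
    for _ in range(max_iter):
        improved = False
        for key in params:
            for delta in (step, -step):
                trial = dict(params, **{key: params[key] + delta})
                c = combined(trial)
                if c < cost - tol:
                    params, cost, improved = trial, c, True
        if not improved:               # combination of cost terms minimized
            break
    return params, cost

# Toy usage: recover a focal-length parameter whose true value is 1.0,
# with a single quadratic cost term and unit scale.
best, cost = calibrate({"f": 0.0}, [lambda p: (p["f"] - 1.0) ** 2], [1.0],
                       step=0.25)
print(best, cost)
```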
DISTANCE DETERMINATION METHOD, APPARATUS AND SYSTEM
The present disclosure provides a distance determination method, apparatus and system, relating to the technical field of image processing. The method includes the following steps: acquiring a master visual image photographed by a master camera and an original auxiliary visual image photographed by an auxiliary camera; acquiring initial matching point pairs between the master visual image and the original auxiliary visual image through feature extraction and feature matching; correcting the original auxiliary visual image sequentially, based on the initial matching point pairs and different constraints, so as to obtain a target auxiliary visual image, wherein the different constraints include a constraint of a minimum rotation angle and a constraint of a minimum parallax; and determining a focusing distance according to the master visual image and the target auxiliary visual image. The focusing distance can thus be determined more accurately.
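Once the auxiliary image has been corrected into the target auxiliary image, the final step reduces to standard stereo triangulation: depth follows from the residual horizontal disparity of a matched point pair via Z = f * B / d. The function below is a sketch under that assumption; focal length in pixels and baseline in meters are illustrative parameter choices.

```python
def focusing_distance(focal_px, baseline_m, x_master, x_aux):
    """Depth of a matched point after rectification (minimum rotation
    angle, then minimum parallax): Z = f * B / d, where d is the
    residual horizontal disparity between master and auxiliary views."""
    disparity = x_master - x_aux
    if disparity <= 0:
        raise ValueError("non-positive disparity; point pair unusable")
    return focal_px * baseline_m / disparity

# 1000 px focal length, 5 cm baseline, 10 px disparity -> 5 m.
print(focusing_distance(1000.0, 0.05, 510.0, 500.0))
```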
FEATURE POINT POSITION DETECTION METHOD AND ELECTRONIC DEVICE
The disclosure provides a feature point position detection method and an electronic device. The method includes: obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to a first image capturing element; obtaining a plurality of second relative positions of the plurality of feature points on the specific object relative to a second image capturing element; and in response to determining that the first image capturing element is unreliable, estimating a current three-dimensional position of each feature point based on a historical three-dimensional position and the plurality of second relative positions of each feature point.
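One simple way to realize the fallback, and it is only an illustrative guess at the estimation step, is to keep the range implied by the historical 3-D position while taking the direction from the second image capturing element's observation. The bearing-vector representation and the function name are assumptions of this sketch.

```python
import math

def estimate_position(historical_xyz, second_bearing):
    """When the first image capturing element is unreliable, re-project
    the feature point along the second element's bearing, keeping the
    range implied by the historical 3-D position."""
    rng = math.sqrt(sum(c * c for c in historical_xyz))
    norm = math.sqrt(sum(c * c for c in second_bearing))
    return tuple(rng * c / norm for c in second_bearing)

# Historical position 2 m straight ahead; second element still sees the
# point straight ahead, so the estimate stays at 2 m.
print(estimate_position((0.0, 0.0, 2.0), (0.0, 0.0, 1.0)))
```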
PLANAR OBJECT SEGMENTATION
Robots might interact with planar objects (e.g., garments) for process automation, quality control, to perform sewing operations, or the like. It is recognized herein that robots interacting with such planar objects can pose particular problems, for instance problems related to detecting the planar object and estimating the pose of the detected planar object. A system can be configured to detect or segment planar objects, such as garments. The system can include a three-dimensional (3D) sensor positioned to detect a planar object along a transverse direction. The system can further include a first surface that supports the planar object. The first surface can be positioned such that the planar object is disposed between the first surface and the 3D sensor along the transverse direction. In various examples, the 3D sensor is configured to detect the planar object without detecting the first surface.
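Because the garment lies between the supporting surface and the 3D sensor along the transverse direction, a depth threshold separates the two: anything measurably closer than the surface belongs to the object. The sketch below assumes a per-pixel depth map and a fixed noise margin; both are illustrative choices, not details from the text.

```python
def segment_planar_object(depth_map, surface_depth, margin=0.005):
    """Keep pixels whose depth along the transverse direction is closer
    to the 3-D sensor than the supporting first surface (minus a noise
    margin); the surface itself is excluded from the detection."""
    return {
        (r, c)
        for r, row in enumerate(depth_map)
        for c, z in enumerate(row)
        if z < surface_depth - margin
    }

# Surface at 1.00 m; one pixel of garment sits 2 cm above it.
depth = [[1.00, 0.98],
         [1.00, 1.00]]
print(segment_planar_object(depth, surface_depth=1.00))
```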
Systems and methods for hybrid depth regularization
Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras, a processor, and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map and a confidence map using a first depth estimation process; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second, different depth estimation process, and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
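The confidence-based selection that produces the composite depth map can be sketched as a per-pixel switch: trust the raw estimate where its confidence is high, fall back to the secondary estimate elsewhere. The threshold value and list-of-lists representation are assumptions of this sketch.

```python
def composite_depth(raw, secondary, confidence, threshold=0.5):
    """Per-pixel selection between two depth maps: keep the raw depth
    estimate where the confidence map is high, otherwise take the
    secondary (regularized) estimate."""
    return [
        [r if conf >= threshold else s
         for r, s, conf in zip(raw_row, sec_row, conf_row)]
        for raw_row, sec_row, conf_row in zip(raw, secondary, confidence)
    ]

# First pixel is high-confidence (keep raw 1.2 m), second is not
# (take the secondary estimate, 1.8 m).
print(composite_depth([[1.2, 9.9]], [[1.3, 1.8]], [[0.9, 0.1]]))
```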