Patent classifications
G06T2207/20228
VEHICLE EXTERNAL ENVIRONMENT RECOGNITION APPARATUS
A vehicle external environment recognition apparatus includes at least one processor, and at least one memory coupled to the at least one processor. The at least one processor is configured to operate in cooperation with at least one program stored in the at least one memory to execute processing. The processing includes generating a distance image from luminance images, specifying, by using semantic segmentation, a floating matter class in the luminance images, and invalidating parallax associated with floating pixels that are included in the distance image and belong to the floating matter class.
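The invalidation step described above can be sketched with array masking. A minimal illustration, assuming NumPy arrays and a hypothetical class index for the floating-matter class (the abstract does not specify the class taxonomy or the invalid-value convention):

```python
import numpy as np

# Hypothetical label for "floating matter" (e.g. fog, spray, exhaust);
# the actual class index is an assumption for illustration.
FLOATING_MATTER_CLASS = 7

def invalidate_floating_parallax(disparity, seg_labels, invalid_value=-1.0):
    """Invalidate parallax (disparity) values at pixels whose semantic
    segmentation label belongs to the floating-matter class."""
    disparity = disparity.copy()
    disparity[seg_labels == FLOATING_MATTER_CLASS] = invalid_value
    return disparity

# Toy example: a 2x3 distance (disparity) image and matching labels.
disp = np.array([[10.0, 12.0, 11.0],
                 [ 9.0,  7.0,  8.0]])
seg  = np.array([[0, 7, 0],
                 [7, 0, 0]])
cleaned = invalidate_floating_parallax(disp, seg)
print(cleaned)
```

Downstream distance estimation would then skip pixels carrying the invalid value rather than treating floating matter as a solid obstacle.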
METHODS, SYSTEMS AND APPARATUS TO OPTIMIZE PIPELINE EXECUTION
Methods, apparatus, systems, and articles of manufacture to optimize pipeline execution are disclosed. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to execute the machine readable instructions to determine a value associated with a first location of a first pixel of a first image and a second location of a second pixel of a second image by calculating a matching cost between the first location and the second location, generate a disparity map including the value, and determine, based on the disparity map, a minimum value corresponding to the difference in horizontal coordinates between the first location and the second location.
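The matching-cost and minimum-search steps are the core of classical stereo block matching. A minimal sketch, assuming a sum-of-absolute-differences cost over a small window (the abstract does not name a specific cost function):

```python
import numpy as np

def matching_cost(left, right, row, col, d, window=1):
    """SAD cost between a window around (row, col) in the left image and
    the window shifted d columns leftward in the right image."""
    r0, r1 = max(row - window, 0), row + window + 1
    c0, c1 = max(col - window, 0), col + window + 1
    return np.abs(left[r0:r1, c0:c1] - right[r0:r1, c0 - d:c1 - d]).sum()

def best_disparity(left, right, row, col, max_d, window=1):
    """Scan candidate disparities and return the one with minimum cost,
    i.e. the horizontal-coordinate difference between matched pixels."""
    c0 = max(col - window, 0)
    costs = [matching_cost(left, right, row, col, d, window)
             for d in range(min(max_d, c0) + 1)]
    return int(np.argmin(costs))

# Toy pair: the right image is the left image shifted 2 columns.
base = np.array([1.0, 5.0, 2.0, 9.0, 3.0, 7.0, 4.0, 8.0])
left = np.tile(base, (5, 1))
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]
d_hat = best_disparity(left, right, row=2, col=4, max_d=4)
```

Computing `best_disparity` for every pixel would populate the disparity map the abstract refers to; a real pipeline would vectorize this over a cost volume rather than loop per pixel.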
ELECTRONIC DEVICE COMPRISING CAMERA MODULE FOR OBTAINING DEPTH INFORMATION
An electronic device includes a processor; a first camera module comprising a camera including a first lens assembly having a first field of view (FOV); and a second camera module comprising a camera spaced apart from the first camera module and including a second lens assembly having a second FOV narrower than the first FOV. The first camera module includes an image sensor and a filter, the filter including a glass plate spaced apart from the image sensor and disposed over the image sensor, and a layer disposed on the glass plate and configured to absorb a portion of the infrared light among the light transmitted through the first lens assembly. The processor is configured to obtain depth information about a subject located within the second FOV based on data about the light passing through the filter, which data is obtained through the image sensor.
PLANAR SURFACE DETECTION APPARATUS AND METHOD
Provided is a method and apparatus for detecting a planar surface, the method including acquiring, based on a pixelwise disparity of an input image estimated in a first network, a pixelwise plane parameter of the input image, determining a pixelwise segment matching probability of the input image based on a second network trained to perform a segmentation of an image, acquiring a segment-wise plane parameter based on the pixelwise plane parameter and the pixelwise segment matching probability, and detecting a planar surface in the input image based on the segment-wise plane parameter.
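The aggregation step — combining pixelwise plane parameters with per-segment membership probabilities into segment-wise parameters — can be sketched as a probability-weighted average. This is an assumption for illustration; the abstract does not fix the exact aggregation rule:

```python
import numpy as np

def segmentwise_plane_params(pixel_params, seg_probs):
    """Aggregate pixelwise plane parameters into one parameter vector per
    segment, weighting each pixel by its segment-membership probability.

    pixel_params: (H, W, 3) pixelwise plane parameters
    seg_probs:    (H, W, S) per-pixel probability over S segments
    returns:      (S, 3) one plane parameter per segment
    """
    H, W, S = seg_probs.shape
    p = pixel_params.reshape(-1, 3)          # (N, 3) flattened pixels
    w = seg_probs.reshape(-1, S)             # (N, S) membership weights
    num = w.T @ p                            # (S, 3) weighted sums
    den = w.sum(axis=0, keepdims=True).T     # (S, 1) total weight per segment
    return num / np.maximum(den, 1e-8)       # guard against empty segments

# Toy case: two pixels, two segments with hard (0/1) memberships.
params = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])   # (1, 2, 3)
probs  = np.array([[[1.0, 0.0], [0.0, 1.0]]])             # (1, 2, 2)
seg_params = segmentwise_plane_params(params, probs)
```

With soft probabilities from the segmentation network, each segment's plane parameter becomes a blend of the pixels most likely to belong to it.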
METHOD AND SYSTEM OF INTEGRITY MONITORING FOR VISUAL ODOMETRY
A method of integrity monitoring for visual odometry comprises capturing a first image at a first time epoch with stereo vision sensors, capturing a second image at a second time epoch, and extracting features from the images. A temporal feature matching process is performed to match the extracted features, using a feature mismatching limiting discriminator. A range, or depth, recovery process is performed to provide stereo feature matching between two images taken by the stereo vision sensors at the same time epoch, using a range error limiting discriminator. An outlier rejection process is performed using a modified RANSAC technique to limit feature moving events. Feature error magnitude and fault probabilities are characterized using overbounding Gaussian models. A state vector estimation process with integrity check is performed using solution separation to determine changes in rotation and translation between images, determine error statistics, detect faults, and compute protection level or integrity risk.
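The outlier-rejection step can be illustrated with a minimal RANSAC loop. This sketch estimates a 2D translation between matched feature sets and rejects outlier matches — a stand-in for the modified RANSAC the abstract mentions, whose specific modifications are not given:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_translation(pts_a, pts_b, n_iters=100, thresh=0.5):
    """Hypothesize a translation from one random match, count inliers,
    keep the best hypothesis, then refit on the inlier set."""
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts_a))             # 1-point hypothesis
        t = pts_b[i] - pts_a[i]
        resid = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the translation on the consensus set.
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers

# Synthetic matches: a pure (2, 1) translation plus one bad match.
pts_a = np.array([[0, 0], [1, 0], [2, 1], [3, 3], [4, 1],
                  [5, 2], [6, 0], [7, 3], [8, 2], [9, 1]], dtype=float)
pts_b = pts_a + np.array([2.0, 1.0])
pts_b[0] += np.array([5.0, -3.0])                # inject a feature-moving event
t, inliers = ransac_translation(pts_a, pts_b)
```

In the patented method the consensus model would be the full rotation-and-translation state, and the surviving inliers feed the solution-separation integrity check.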
METHOD OF CALIBRATING CAMERAS
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capturing the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to the position of the reference image of the stationary reference object within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation, if needed, in order to obtain improved scene capturing after the further calibration.
MOUNTING CALIBRATION OF STRUCTURED LIGHT PROJECTOR IN MONO CAMERA STEREO SYSTEM
An apparatus includes an interface and a processor. The interface may be configured to receive pixel data. The processor may be configured to (i) generate a reference image and a target image from said pixel data, (ii) perform disparity operations on the reference image and the target image, and (iii) build a disparity angle map in response to the disparity operations. The disparity operations may comprise (a) selecting a plurality of grid pixels, (b) measuring a disparity angle for each grid pixel, (c) calculating a plurality of coefficients by resolving a surface formulation for a disparity angle map of the grid pixels, and (d) generating values in a disparity angle map for the pixel data utilizing the coefficients.
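The coefficient-fitting and map-generation steps can be sketched with a least-squares fit. A first-order (planar) surface is assumed here for illustration; the abstract does not specify the order of its "surface formulation":

```python
import numpy as np

def fit_disparity_angle_surface(grid_xy, angles):
    """Fit a surface a*x + b*y + c to disparity angles measured at the
    selected grid pixels, returning the coefficients (a, b, c)."""
    x, y = grid_xy[:, 0], grid_xy[:, 1]
    A = np.stack([x, y, np.ones_like(x)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, angles, rcond=None)
    return coeffs

def eval_surface(coeffs, width, height):
    """Generate a disparity-angle value for every pixel from the fitted
    coefficients, building the full disparity angle map."""
    ys, xs = np.mgrid[0:height, 0:width]
    return coeffs[0] * xs + coeffs[1] * ys + coeffs[2]

# Toy grid measurements drawn from a known surface 0.1*x - 0.2*y + 3.0.
grid = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
measured = 0.1 * grid[:, 0] - 0.2 * grid[:, 1] + 3.0
coeffs = fit_disparity_angle_surface(grid, measured)
angle_map = eval_surface(coeffs, width=3, height=2)
```

Fitting at a sparse grid and evaluating densely is what lets the apparatus build a full-resolution map from a handful of measured disparity angles.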
IMAGING SYSTEM AND METHOD
In exemplary illustrative embodiments, a method of generating a digital image and/or modified depth information may include obtaining, via a first electronic sensor, a plurality of images of a target within a time period; selecting one or more pixels in a first image of the plurality of images; identifying corresponding pixels, that correspond to the one or more selected pixels, in one or more other images of the plurality of images, the one or more selected pixels and the corresponding pixels defining sets of reference pixels; identifying two or more images of the plurality of images having respective sets of reference pixels with optimal disparity; generating modified depth information; and/or generating a final digital image via the plurality of images and the modified depth information.
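The pair-selection step can be sketched as a search over frame pairs. "Optimal disparity" is not defined in the abstract; closeness of the reference-pixel displacement to a target value is assumed here purely for illustration:

```python
from itertools import combinations

def pick_optimal_pair(ref_pixel_positions, target_disp=8.0):
    """Return the pair of frame indices whose reference pixels differ in
    position by an amount closest to the (assumed) target disparity."""
    best, best_err = None, float("inf")
    for i, j in combinations(range(len(ref_pixel_positions)), 2):
        d = abs(ref_pixel_positions[j] - ref_pixel_positions[i])
        err = abs(d - target_disp)
        if err < best_err:
            best, best_err = (i, j), err
    return best

# Horizontal positions of one tracked reference pixel across four frames.
pair = pick_optimal_pair([0.0, 3.0, 8.5, 20.0])
```

The selected pair would then drive the modified depth information used to render the final digital image.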
METHOD AND SYSTEM FOR AUTOMATICALLY OPTIMIZING 3D STEREOSCOPIC PERCEPTION, AND MEDIUM
Provided by the present invention are a method and system for automatically optimizing 3D stereoscopic perception, and a medium. The method comprises the following steps, executed successively: step 1: given the current left and right images, calculating the stereo disparity to generate a disparity map; step 2: calculating the depth value corresponding to each pixel from the calculated disparity; step 3: calculating the depth distance of a target to be observed; step 4: obtaining corresponding left- and right-image displacement values from the depth distance calculated in step 3; and step 5: applying the obtained image displacement values to a 3D display. A beneficial effect of the present invention is that the method for automatically optimizing 3D stereoscopic perception alleviates the fatigue and dizziness that readily arise during use of a 3D endoscope.
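Steps 2 through 4 can be sketched with the standard pinhole stereo relation Z = f·B/d (focal length f in pixels, baseline B). The depth-to-displacement rule is not specified in the abstract, so a simple linear mapping around a comfortable viewing depth is assumed here:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo depth: Z = f * B / d (step 2)."""
    return focal_px * baseline_m / disparity_px

def depth_to_shift(depth_m, comfort_depth_m=0.5, gain_px=20.0):
    """Hypothetical step-4 rule: shift the left/right images in proportion
    to the target's deviation from a comfortable viewing depth."""
    return gain_px * (depth_m - comfort_depth_m)

# Endoscope-scale example: 64 px disparity, 800 px focal, 4 mm baseline.
depth = disparity_to_depth(64.0, 800.0, 0.004)   # 0.05 m
shift = depth_to_shift(depth)                    # pixels to apply in step 5
```

A negative shift here would pull the converged depth toward the viewer, which is the kind of adjustment step 5 applies to the 3D display.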
Image data transmission method, content processing apparatus, head-mounted display, relay apparatus and content processing system
Disclosed herein is an image data transmission method including, by an image generation apparatus, generating an image to be merged with a display image and data of an α value representative of a transparency of a pixel of the image to be merged, generating data for merging representing the image to be merged and the data of the α value on one image plane, and transmitting the data for merging to an apparatus that generates the display image.
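The merging and packing described above follow standard alpha compositing. A minimal sketch, assuming NumPy images and a side-by-side packing of the merge image and its α values on one image plane (the exact layout is not specified in the abstract):

```python
import numpy as np

def alpha_merge(display_img, merge_img, alpha):
    """Composite the merge image over the display image using a per-pixel
    alpha (transparency) channel: the standard 'over' rule."""
    a = alpha[..., None]                      # (H, W, 1), broadcasts over RGB
    return a * merge_img + (1.0 - a) * display_img

def pack_for_merging(merge_img, alpha):
    """Pack the merge image and its alpha data side by side on one image
    plane for transmission (layout is an assumption for illustration)."""
    alpha_plane = np.repeat(alpha[..., None], 3, axis=2)
    return np.concatenate([merge_img, alpha_plane], axis=1)

# Toy 2x2 RGB images: 50% alpha blends the two images to the midpoint.
display = np.zeros((2, 2, 3))
overlay = np.ones((2, 2, 3))
alpha = np.full((2, 2), 0.5)
merged = alpha_merge(display, overlay, alpha)
packed = pack_for_merging(overlay, alpha)
```

Packing both signals on one plane lets a relay apparatus or head-mounted display recover the α values without a separate transmission channel.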