Patent classifications
H04N2013/0077
Three-dimensional image reconstruction using multi-layer data acquisition
A camera, including: two imaging systems each comprising a different optical path corresponding to a different viewing angle of an object; one or more illumination sources; a mask disposed with multiple pairs of apertures, wherein each aperture of each aperture pair corresponds to a different one of the imaging systems; at least one detector configured to acquire multiple image pairs of the object from the two imaging systems via the multiple pairs of apertures; and a processor configured to produce from the multiple acquired image pairs a multi-layer three-dimensional reconstruction of the object.
SYSTEMS AND METHODS OF CREATING A THREE-DIMENSIONAL VIRTUAL IMAGE
Embodiments of the present invention create a three-dimensional virtual model by a user identifying a three-dimensional object, capturing a plurality of two-dimensional images of said object in succession, said plurality of images being captured from different orientations, recording said plurality of images on a storage medium, determining the relative change in position of said plurality of images by comparing two subsequent images, wherein the relative change is determined by a difference in color intensity values between the pixels of one image and another image, generating a plurality of arrays from the difference determined and generating a computer image from said plurality of arrays, wherein said computer image represents said three-dimensional object.
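The capture-and-compare procedure in the abstract above could be sketched as follows. This is a minimal illustration, assuming simple signed per-channel differences between consecutive frames; the function name `frame_difference_arrays` and the exact difference formula are assumptions, as the abstract does not specify them:

```python
import numpy as np

def frame_difference_arrays(frames):
    """Compute per-pixel color-intensity difference arrays between each
    pair of consecutive frames (hypothetical helper; signed int16 math
    avoids uint8 wraparound)."""
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append(curr.astype(np.int16) - prev.astype(np.int16))
    return diffs

# Two tiny 2x2 RGB frames captured "in succession"
frames = [np.zeros((2, 2, 3), dtype=np.uint8),
          np.full((2, 2, 3), 10, dtype=np.uint8)]
d = frame_difference_arrays(frames)
print(d[0][0, 0])  # [10 10 10]
```

Downstream steps (building the model from the difference arrays) would depend on details the abstract leaves open.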
METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL (3D) ROAD MODEL
A method for generating a three-dimensional (3D) lane model, the method including calculating a free space indicating a driving-allowed area based on a driving image captured from a vehicle camera, generating a dominant plane indicating plane information of a road based on either or both of depth information of the free space and a depth map corresponding to a front of the vehicle, and generating a 3D short-distance road model based on the dominant plane.
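The "dominant plane" step above amounts to fitting a planar road surface to 3D points recovered from the depth map. A minimal sketch, assuming a least-squares fit of z = a·x + b·y + c (the patent does not specify the fitting method, and `fit_dominant_plane` is a hypothetical name):

```python
import numpy as np

def fit_dominant_plane(points):
    """Least-squares fit of z = a*x + b*y + c to Nx3 road points
    (one plausible way to estimate a dominant road plane)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Points lying on the flat plane z = 5
pts = np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5], [1, 1, 5]], dtype=float)
a, b, c = fit_dominant_plane(pts)
```

A robust estimator such as RANSAC would typically replace plain least squares on real driving imagery, where non-road points contaminate the depth map.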
Image processing apparatus
Provided is an image processing apparatus that: acquires a depth map that includes information that indicates a distance up to a subject in an actual space, the depth map including, for each of one or a plurality of areas in the depth map, information regarding the distance up to a subject portion that appears in the area and regarding a color component of the subject portion; and generates a composite image in which a virtual object is arranged in a scene image that represents a scene of the actual space. The image processing apparatus determines a display color of the virtual object on the basis of the distance up to the subject portion that appears in the depth map and the color component thereof.
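One plausible reading of the display-color determination above is a distance-weighted blend between the virtual object's base color and the nearby scene color. The linear blend and the `max_dist` parameter below are assumptions for illustration, not taken from the abstract:

```python
def shade_virtual_object(obj_color, scene_color, distance, max_dist=50.0):
    """Blend the virtual object's base color toward the scene color as
    the subject distance grows (hypothetical shading rule)."""
    t = min(distance / max_dist, 1.0)  # 0 = near, 1 = far
    return tuple(round((1 - t) * o + t * s)
                 for o, s in zip(obj_color, scene_color))

near = shade_virtual_object((255, 0, 0), (0, 0, 255), distance=0.0)
far = shade_virtual_object((255, 0, 0), (0, 0, 255), distance=100.0)
```

At zero distance the object keeps its own color; beyond `max_dist` it takes on the scene's color entirely.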
Faster state transitioning for continuous adjustable 3Deeps filter spectacles using multi-layered variable tint materials
An electrically controlled spectacle includes a spectacle frame and optoelectronic lenses housed in the frame. The lenses include a left lens and a right lens, each of the optoelectronic lenses having a plurality of states, wherein the state of the left lens is independent of the state of the right lens. The electrically controlled spectacle also includes a control unit housed in the frame, the control unit being adapted to control the state of each of the lenses independently.
POLARIZATION CAPTURE DEVICE, SYSTEM, AND METHOD
A device includes a first lens. The device also includes a first polarized image sensor coupled with the first lens and configured to capture, from a first perspective, a first set of image data in a plurality of polarization orientations. The device also includes a second lens disposed apart from the first lens. The device further includes a second polarized image sensor coupled with the second lens and configured to capture, from a second perspective different from the first perspective, a second set of image data in the plurality of polarization orientations.
Computer-generated image processing including volumetric scene reconstruction to replace a designated region
An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in various spectra and forms, and capture information related to pixel color values for multiple depths of a scene, which can be processed to reconstruct an image, including replacing a designated region in the image.
Camera image fusion methods
A method including providing a digital image from a camera imaging sensor, wherein the image comprises both low resolution multispectral and panchromatic information; interpreting the digital image to obtain a low resolution multispectral digital image; interpreting the digital image to obtain a high resolution monochromatic digital image; and fusing the low resolution multispectral digital image and the high resolution monochromatic digital image to produce a high resolution colour image.
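The fusion step described above is a form of pansharpening. A minimal sketch using Brovey-style intensity rescaling — an illustrative choice, since the abstract does not name a specific fusion algorithm — with the multispectral image assumed already upsampled to the panchromatic resolution:

```python
import numpy as np

def fuse_pansharpen(ms_lowres, pan_highres):
    """Brovey-style fusion sketch: rescale each multispectral band by
    the ratio of the panchromatic intensity to the multispectral mean.
    ms_lowres: (h, w, 3) uint8, upsampled to pan resolution
    pan_highres: (h, w) uint8"""
    ms = ms_lowres.astype(np.float64)
    intensity = ms.mean(axis=2) + 1e-9            # avoid divide-by-zero
    ratio = pan_highres.astype(np.float64) / intensity
    fused = np.rint(ms * ratio[..., None])        # per-band rescale
    return np.clip(fused, 0, 255).astype(np.uint8)

ms = np.full((2, 2, 3), 100, dtype=np.uint8)
pan = np.full((2, 2), 100, dtype=np.uint8)
fused = fuse_pansharpen(ms, pan)
```

When the panchromatic intensity already matches the multispectral mean, the fusion leaves the colors unchanged; where the panchromatic channel carries finer detail, that detail modulates all three bands.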
Dirty lens image correction
Systems and methods for correcting images that include artifacts due to dirty camera lenses of an electronic device are disclosed. Correction of images by the systems and methods includes obtaining a first raw pixel image of a scene captured with a first camera, obtaining a second raw image of the scene captured with a second camera separated from the first camera in a camera baseline direction, rectifying the first and second raw pixel images to create respective first and second rectified pixel images, determining disparity correspondence between corresponding image pixel pairs of the first and second rectified images in the camera baseline direction, mapping the first and second rectified images into the same domain using the determined disparity, detecting image artifact regions within each domain-mapped image by comparing corresponding regions of the domain-mapped images, determining correction factors for each detected image artifact region, and correcting the rectified first and second images by applying the determined correction factors.
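The detect-and-correct stages of the pipeline above could be sketched as follows. This assumes the two views have already been rectified and disparity-mapped into a common domain; the threshold value and the strategy of borrowing pixels from the other view are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def detect_artifact_regions(img_a, img_b, threshold=20):
    """Flag pixels where two domain-mapped views of the same scene
    disagree strongly; large disagreement suggests a lens artifact in
    one view (threshold is an assumed tuning parameter)."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return diff > threshold

def correct_with_other_view(img, mask, other):
    """Replace flagged pixels with the corresponding pixels from the
    other (presumed clean) view -- one plausible correction factor."""
    out = img.copy()
    out[mask] = other[mask]
    return out

clean = np.full((2, 2), 100, dtype=np.uint8)
dirty = clean.copy()
dirty[0, 0] = 200                      # simulated smudge
mask = detect_artifact_regions(clean, dirty)
corrected = correct_with_other_view(dirty, mask, clean)
```

A real implementation would operate on regions rather than single pixels and blend corrections smoothly, but the comparison-then-correction structure matches the claimed steps.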
Automatic System for Production-Grade Stereo Image Enhancements
Systems and methods for automatically optimizing pairs of stereo images are presented. Pixels in an image pair are reprojected from their raw format to a common pixel-space coordinate system. The image pair is masked to remove pixels that do not appear in both images. Each image undergoes a local enhancement process. The local enhancements are processed into a global enhancement for each image of the image pair. A factor is calculated to balance the left and right images with each other. A brightness factor for the image pair is calculated. The enhancements, including the balancing and brightness factors, are then mapped to their respective images.
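The left/right balancing step described above can be illustrated with a simple luminance-matching gain. This is a minimal sketch assuming a multiplicative gain toward the pair's shared mean; the actual patented enhancement pipeline is considerably more elaborate:

```python
import numpy as np

def balance_factors(left, right):
    """Compute per-image gains that pull both images of a stereo pair
    toward their shared mean luminance (illustrative balancing rule)."""
    ml, mr = left.mean(), right.mean()
    target = 0.5 * (ml + mr)
    return target / ml, target / mr

def apply_gain(img, gain):
    """Apply a brightness gain and clamp back to 8-bit range."""
    return np.clip(img.astype(np.float64) * gain, 0, 255).astype(np.uint8)

left = np.full((2, 2), 100.0)
right = np.full((2, 2), 200.0)
gl, gr = balance_factors(left, right)
balanced_right = apply_gain(right, gr)
```

After applying the gains, both images sit at the same mean brightness, which is the stated goal of the balancing factor before the global brightness factor is applied.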