Patent classifications
H04N13/271
Camera system and method for hair segmentation
A method for operating an image processing device coupled to a color camera and a depth camera is provided. The method includes receiving a color image of a 3-dimensional scene from the color camera, receiving a depth map of the 3-dimensional scene from the depth camera, generating an aligned 3-dimensional face mesh from the depth map and from a plurality of color images received from the color camera indicating movement of a subject's head within the 3-dimensional scene, determining a head region based on the depth map, segmenting the head region into a plurality of facial sections based on the color image, the depth map, and the aligned 3-dimensional face mesh, and overlaying the plurality of facial sections on the color image.
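The "determining a head region based on the depth map" step could be sketched as follows. This is a minimal illustration, not the patented method: it assumes the subject is the nearest valid object in the depth map and that the head fits within a fixed depth extent behind that nearest point (both the function name and the 0.3 m default are assumptions for illustration).

```python
import numpy as np

def head_region_from_depth(depth_map, max_head_extent=0.3):
    """Estimate a head-region mask from a depth map: keep valid pixels
    within max_head_extent metres behind the nearest valid depth,
    assuming the subject is closest to the camera."""
    valid = depth_map > 0                     # zero marks missing depth
    nearest = depth_map[valid].min()          # assumed subject distance
    return valid & (depth_map <= nearest + max_head_extent)

# Toy 4x4 depth map (metres): subject at ~0.8 m, background at ~3 m,
# one invalid (zero) pixel.
depth = np.array([[3.0, 0.8, 0.9, 3.0],
                  [3.0, 0.8, 0.8, 3.0],
                  [3.0, 0.9, 0.8, 3.0],
                  [0.0, 3.0, 3.0, 3.0]])
mask = head_region_from_depth(depth)
```

A real implementation would refine this mask with the color image and face mesh before segmenting facial sections.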
DEVICE AND METHOD OF DIMENSIONING USING DIGITAL IMAGES AND DEPTH DATA
A device and method of dimensioning using digital images and depth data is provided. The device includes a camera and a depth sensing device whose fields of view generally overlap. Segments of shapes belonging to an object in a digital image from the camera are identified. Based on respective depth data, from the depth sensing device, associated with each of the segments of the shapes belonging to the object, it is determined whether each of the segments is associated with a same shape belonging to the object. Once all the segments are processed to determine their respective associations with the shapes of the object in the digital image, dimensions of the object are computed based on the respective depth data and the respective associations of the shapes.
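The segment-association test might be approximated as below. This is a simplified stand-in, assuming segments belong to the same shape when their median depths agree within a tolerance; the abstract does not specify the actual association criterion, and the names and tolerance here are illustrative only.

```python
import numpy as np

def associate_segments(segment_depths, tol=0.05):
    """Group segments whose median depths agree within tol (metres),
    as a toy proxy for the 'same shape' determination."""
    groups = []
    for i, d in enumerate(segment_depths):
        for g in groups:
            # Compare against the first member of an existing group.
            if abs(np.median(segment_depths[g[0]]) - np.median(d)) < tol:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Three segments: two lie on the same box face (~1.00 m), one on
# another face (~1.40 m).
depths = [np.array([1.00, 1.01, 0.99]),
          np.array([1.00, 1.02, 1.00]),
          np.array([1.40, 1.41, 1.39])]
groups = associate_segments(depths)
```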
IMAGE PROCESSING APPARATUS, IMAGING DEVICE, MOVING OBJECT DEVICE CONTROL SYSTEM, AND IMAGE PROCESSING METHOD
An image processing apparatus includes a processor configured to generate vertical-direction distribution data, which is a distribution of distance values along the vertical direction of a distance image, from a distance image having distance values according to the distance of a road surface in captured images. For each of the distance values, a pixel having the highest frequency value is extracted from pixels in a search area. The road surface is detected based on each extracted pixel.
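The vertical-direction distribution described here resembles the well-known v-disparity technique: for each image row, histogram the distance (disparity) values, then pick the most frequent value per row. The sketch below assumes a disparity-valued image and omits the search-area restriction; names are illustrative.

```python
import numpy as np

def v_disparity(disp_img, max_disp):
    """Build the vertical-direction distribution: for each image row,
    a frequency histogram over disparity values (zero = invalid)."""
    rows = disp_img.shape[0]
    vmap = np.zeros((rows, max_disp + 1), dtype=int)
    for r in range(rows):
        for d in disp_img[r]:
            if d > 0:
                vmap[r, d] += 1
    return vmap

def road_profile(vmap):
    """Per row, extract the disparity with the highest frequency —
    the pixels from which the road surface is then detected."""
    return vmap.argmax(axis=1)

# Toy disparity image: a flat road yields disparity decreasing with
# increasing distance (lower rows are nearer, higher disparity).
disp = np.array([[3, 3, 3, 3],
                 [2, 2, 2, 3],
                 [1, 1, 1, 1]])
profile = road_profile(v_disparity(disp, 3))
```

Fitting a line through the resulting per-row maxima would then model the road surface.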
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
According to one embodiment, an image processing device includes a synthesis processing unit. The synthesis processing unit synthesizes a plurality of depth maps. A depth map is generated based on images that are mutually different in viewpoint. The plurality of depth maps are mutually different in focal length. The depth map includes distance data in a distance range set in accordance with the focal length.
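The synthesis of depth maps with different focal lengths could work as sketched below. The selection rule (take each pixel from the first map whose reliable distance range covers its value) is an assumption for illustration; the abstract does not specify how maps are combined.

```python
import numpy as np

def synthesize_depth(depth_maps, ranges):
    """Merge depth maps whose reliable distance ranges differ with the
    focal length: each output pixel takes its value from the first map
    whose (lo, hi) range covers it. Zero marks 'not yet assigned'."""
    out = np.zeros_like(depth_maps[0])
    for dm, (lo, hi) in zip(depth_maps, ranges):
        sel = (dm >= lo) & (dm < hi) & (out == 0)
        out[sel] = dm[sel]
    return out

near = np.array([[0.5, 0.6], [4.0, 4.2]])  # short focal length map
far  = np.array([[0.4, 0.7], [4.1, 4.0]])  # long focal length map
fused = synthesize_depth([near, far], [(0.0, 2.0), (2.0, 10.0)])
```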
Encoding device, encoding method, decoding device, and decoding method for encoding multiple viewpoints for compatibility with existing mode allowing fewer viewpoints
An encoding device and method, and a decoding device and method, are provided that are capable of encoding and decoding a multi-viewpoint image in accordance with a mode having compatibility with an existing mode. A compatible encoder generates a compatible stream by encoding an image that is a compatible image. An image converting unit converts the resolution of images that are auxiliary images. An auxiliary encoder generates an encoded stream of the auxiliary images by encoding the auxiliary images of which the resolution is converted. A compatibility information generating unit generates, as compatibility information, information that designates the image as a compatible image. A multiplexing unit transmits the compatible stream, the encoded stream of the auxiliary images, and the compatibility information. The encoding device can encode a 3D image of the multi-viewpoint mode.
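The pipeline of compatible encoder, image converter, auxiliary encoder, compatibility-information generator, and multiplexer might be sketched as below. The `encode` and `downscale` callables are hypothetical stand-ins for real codec operations, and the dictionary layout is an assumption for illustration.

```python
def build_multiplex(compatible_img, aux_imgs, encode, downscale):
    """Sketch of the described pipeline: encode the compatible image
    as-is, downscale and encode the auxiliary images, and attach
    compatibility information naming the compatible image."""
    compatible_stream = encode(compatible_img)
    aux_streams = [encode(downscale(img)) for img in aux_imgs]
    compatibility_info = {"compatible_image": compatible_img["id"]}
    return {"compatible": compatible_stream,
            "aux": aux_streams,
            "info": compatibility_info}

# Toy stand-ins for the codec and resolution converter.
enc = lambda img: ("enc", img["id"])
half = lambda img: {"id": img["id"], "w": img["w"] // 2}

mux = build_multiplex({"id": "view0", "w": 1920},
                      [{"id": "view1", "w": 1920},
                       {"id": "view2", "w": 1920}],
                      enc, half)
```

A legacy decoder would read only the compatible stream; a multi-viewpoint decoder would use the compatibility information to reassemble all views.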
Building a three-dimensional composite scene
The capture and alignment of multiple 3D scenes is disclosed. Three-dimensional capture device data from different locations is received, allowing for different perspectives of 3D scenes. An algorithm uses the data to determine potential alignments between different 3D scenes via coordinate transformations. Potential alignments are evaluated for quality and subsequently aligned, subject to the existence of sufficiently high relative or absolute quality. A global alignment of all or most of the input 3D scenes into a single coordinate frame may be achieved. Areas around a particular hole or holes are presented, allowing the user to capture the requisite 3D scene containing areas within the hole or holes, as well as part of the surrounding area, using, for example, the 3D capture device. The newly captured 3D scene is aligned with existing 3D scenes and/or 3D composite scenes.
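The evaluation of potential alignments could be sketched as below: score each candidate rigid transform by the mean nearest-neighbor residual between the transformed source points and the destination scene, and accept only sufficiently high quality (low error). The threshold and scoring rule are assumptions for illustration.

```python
import numpy as np

def alignment_error(src, dst, R, t):
    """Mean distance from each transformed source point to its nearest
    destination point, as a quality score for a candidate transform."""
    moved = src @ R.T + t
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1).mean()

def best_alignment(src, dst, candidates, max_error=0.1):
    """Pick the lowest-error candidate (R, t); reject all candidates
    if none reaches sufficiently high quality."""
    scored = [(alignment_error(src, dst, R, t), R, t) for R, t in candidates]
    err, R, t = min(scored, key=lambda s: s[0])
    return (R, t) if err < max_error else None

# Two points of the same scene, shifted by +1 along x in the second scan.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dst = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
identity = np.eye(3)
cands = [(identity, np.zeros(3)), (identity, np.array([1.0, 0.0, 0.0]))]
result = best_alignment(src, dst, cands)
```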
IMAGE RECORDING AND 3D INFORMATION ACQUISITION
Two or more images are taken, wherein a focal sweep is performed during the image taking. The exposure intensity is modulated during the focal sweep, and modulated differently for each of the images. This modulation provides for a watermarking of depth information in the images: the difference in exposure during the sweep watermarks the depth information differently in each image. By comparing the images, a depth map for the images can be calculated. A camera system has a lens, a sensor, a means for performing a focal sweep, and a means for modulating the exposure intensity during the focal sweep. Modulating the exposure intensity can be done by modulating a light source, by modulating the focal sweep, or by modulating the transparency of a transparent medium in the light path.
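A toy version of the comparison step is sketched below. It assumes one image is taken with exposure ramping linearly up over the sweep and the other with constant exposure, so their per-pixel ratio approximates the normalized sweep position at which each pixel was in focus, which maps linearly to depth. The abstract does not give the actual modulation law; this linear model and all names are assumptions.

```python
import numpy as np

def depth_from_modulated_pair(img_up, img_const, z_near, z_far):
    """Recover depth watermarked by exposure modulation: the ratio of
    the ramped-exposure image to the constant-exposure image gives the
    normalized focus position, mapped linearly to [z_near, z_far]."""
    pos = np.clip(img_up / np.maximum(img_const, 1e-9), 0.0, 1.0)
    return z_near + pos * (z_far - z_near)

# Synthesize a pair consistent with the toy model: true depth 1..3 m.
true_depth = np.array([[1.0, 2.0], [3.0, 1.5]])
pos = (true_depth - 1.0) / 2.0              # normalized focus position
img_const = np.full_like(true_depth, 0.8)   # constant-exposure image
img_up = img_const * pos                    # ramped-exposure image
depth = depth_from_modulated_pair(img_up, img_const, 1.0, 3.0)
```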
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND SOLID-STATE IMAGING DEVICE
The present disclosure relates to an image processing device, an image processing method, and a solid-state imaging device capable of detecting image information with high accuracy by using images with different shooting conditions. A difference detection unit detects difference information of a short exposure image of a current frame and a past frame shot with a short exposure time, and detects difference information of a long exposure image of a current frame and a past frame shot with a long exposure time. A combining unit combines the difference information of the short exposure image and the difference information of the long exposure image on the basis of the short exposure image or the long exposure image, and generates a motion vector of the current frame on the basis of the combined difference information. The present disclosure may be applied to an image processing apparatus and the like, for example.
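One plausible way to combine the two difference maps "on the basis of the long exposure image" is sketched below: where the long exposure is saturated (and its difference unreliable), trust the short-exposure difference; elsewhere, trust the less noisy long-exposure difference. This weighting rule is an assumption, not taken from the abstract.

```python
import numpy as np

def combine_differences(diff_short, diff_long, long_img, sat=0.95):
    """Per-pixel selection between the two difference maps, guided by
    saturation of the long-exposure image (values in [0, 1])."""
    return np.where(long_img >= sat, diff_short, diff_long)

long_img = np.array([[0.5, 1.0], [0.3, 0.99]])  # two saturated pixels
diff_s = np.array([[1, 2], [3, 4]])
diff_l = np.array([[10, 20], [30, 40]])
combined = combine_differences(diff_s, diff_l, long_img)
```

The combined difference map would then feed the motion-vector estimation for the current frame.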
IMAGING DEVICE ASSEMBLY, THREE-DIMENSIONAL SHAPE MEASURING DEVICE, AND MOTION DETECTING DEVICE
An imaging device assembly includes a light source, an imaging device formed with a plurality of imaging elements, and a control device. Each imaging element (10) includes a light receiving portion (21), a first charge storage portion (22) and a second charge storage portion (24), and a first charge transfer control means (23) and a second charge transfer control means (24). Under the control of the control device, the imaging element (10) captures an image of an object on the basis of high-intensity light and stores first image signal charge into the first charge storage portion (22) during a first period, and captures an image of the object on the basis of low-intensity light and stores second image signal charge into the second charge storage portion (24) during a second period. The control device obtains an image signal on the basis of the difference between the first image signal charge and the second image signal charge.
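The charge-difference computation cancels the illumination component common to both periods. Assuming, for illustration, that the first period captures source light plus ambient light and the second captures ambient light alone, subtracting the stored charges leaves only the source-dependent signal:

```python
import numpy as np

def source_only_signal(charge_high, charge_low):
    """Image signal obtained from the difference between the first and
    second stored charges, cancelling the common ambient term."""
    return charge_high - charge_low

ambient = np.array([[10.0, 12.0], [11.0, 10.0]])
source = np.array([[5.0, 0.0], [2.0, 7.0]])
q1 = ambient + source   # first period: high-intensity illumination
q2 = ambient            # second period: low-intensity illumination
img = source_only_signal(q1, q2)
```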
THREE-DIMENSIONAL AUTO-FOCUSING DISPLAY METHOD AND SYSTEM THEREOF
A 3D auto-focusing display method comprises: executing an eye-tracking step on a 3D image to obtain focal point coordinates (x1, y1) of a viewer of the image; mapping the focal point coordinates (x1, y1) to a coordinate location of a display to obtain display coordinates (x2, y2), which define the coordinate location of the display corresponding to a depth diagram of the 3D image; determining a region where the image is located by using the display coordinates (x2, y2) as an input parameter together with the depth diagram of the image; determining whether the image in the region is a 3D stereoscopic image; executing a depth map step to revise the 3D image, based on the image and a plurality of depth data of the region, so that the display coordinates (x2, y2) appear as a focused image; and outputting the revised focused image to the display.
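The depth-map step could be approximated as below: look up the depth at the gaze-mapped display coordinates and derive a per-pixel blur amount proportional to each pixel's depth difference from the focused depth. The proportional rule and all names are assumptions; the abstract only states that the image is revised to reflect the focused coordinates.

```python
import numpy as np

def refocus_weights(depth_map, display_xy, scale=1.0):
    """Per-pixel blur amount: zero at the focused depth (read from the
    display coordinates), growing with depth difference."""
    x2, y2 = display_xy
    focus_depth = depth_map[y2, x2]
    return np.abs(depth_map - focus_depth) * scale

# Toy depth diagram: near object (1 m), mid object (2 m), far wall (3 m).
depth = np.array([[1.0, 1.0, 2.0],
                  [1.0, 1.0, 2.0],
                  [3.0, 3.0, 3.0]])
blur = refocus_weights(depth, (0, 0))   # viewer looks at the near object
```

A rendering stage would then apply a spatially varying blur with these weights before outputting the revised image to the display.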