Patent classifications
G06T2207/20228
Method and system of increasing integer disparity accuracy for camera images with a diagonal layout
A system, article, and method of increasing integer disparity accuracy for camera images with a diagonal layout.
Systems and methods for 3D facial modeling
In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparities between corresponding key feature points of the located faces within the plurality of images, calculate depths of the key feature points based on the disparities, and generate a 3D model of the face using the depths of the key feature points.
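The disparity-to-depth step in the abstract above can be sketched with the standard pinhole stereo relation Z = f·B/d; the focal length, baseline, and function name below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def keypoint_depths(disparities_px, focal_px, baseline_m):
    """Convert per-keypoint disparities to depths via the standard
    pinhole stereo relation Z = f * B / d (names are illustrative)."""
    d = np.asarray(disparities_px, dtype=float)
    return focal_px * baseline_m / d

# Example: three facial keypoints with disparities of 40, 50 and 80 px,
# assuming a 1000 px focal length and a 6 cm camera baseline.
depths = keypoint_depths([40.0, 50.0, 80.0], focal_px=1000.0, baseline_m=0.06)
# depths ~ [1.5, 1.2, 0.75] metres
```

Larger disparities map to nearer points, which is why the near keypoint (80 px) comes out at 0.75 m while the far one (40 px) is at 1.5 m.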
TRANSITION BETWEEN BINOCULAR AND MONOCULAR VIEWS
An image processing system is designed to generate a canvas view that has a smooth transition between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for the left and right views of a user. To realize a smooth transition between binocular views and monocular views, the image processing system first warps the top/bottom images onto the corresponding synthetic side images to generate warped top/bottom images, which realizes the smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. Based on the blended images, the image processing system creates the canvas view, which has a smooth transition between binocular and monocular views in terms of both image shape and color.
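The morphing step above amounts to blending a warped top/bottom image with the corresponding synthetic side image as a function of position in the transition band. A minimal sketch, assuming a simple linear (alpha) morph; the weighting scheme and names are illustrative, not the patent's specification:

```python
import numpy as np

def blend_views(warped_top, synthetic_side, alpha):
    """Linearly morph a warped top/bottom image onto the corresponding
    synthetic side image. alpha=0 gives the pure side (binocular) view,
    alpha=1 the pure top/bottom (monocular) view. Illustrative only."""
    a = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - a) * synthetic_side + a * warped_top

side = np.full((2, 2, 3), 100.0)   # stand-in synthetic side image
top = np.full((2, 2, 3), 200.0)    # stand-in warped top/bottom image
mid = blend_views(top, side, 0.5)  # halfway blend -> 150 everywhere
```

In a real canvas, alpha would ramp from 0 to 1 across the vertical band where binocular coverage ends, so color blends as smoothly as the warped shape.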
3D depth sensor and projection system and methods of operating thereof
A diffractive optical element includes: a first facet configured to perform an expansion optical function; and a second facet configured to perform a collimation optical function and a pattern generation function.
Image processing device, image processing method, and program
There is provided an image processing device including a matching degree calculation unit configured to calculate a matching degree between a pixel value of a target pixel in a standard image of a current frame and a pixel value of a corresponding pixel in a reference image of the current frame, and an estimation unit configured to estimate a disparity between the standard image and the reference image based on a result obtained by calculating the matching degree. The matching degree calculation unit calculates the matching degree using a disparity estimated for the standard image and the reference image of a previous frame.
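The temporal idea in this abstract, seeding the current frame's disparity search with the previous frame's estimate, can be sketched as a small windowed search around the prior disparity. The focus on a single pixel, the absolute-difference cost, and the search radius are assumptions for illustration:

```python
import numpy as np

def estimate_disparity(standard, reference, x, y, prev_disp, radius=2):
    """Pick the disparity minimizing the absolute pixel difference,
    searching a small window around the disparity estimated for the
    previous frame (a sketch of temporal coherence; parameters are
    illustrative, not the patented matching-degree calculation)."""
    best_d, best_cost = prev_disp, float("inf")
    for d in range(max(0, prev_disp - radius), prev_disp + radius + 1):
        if x - d < 0:
            continue  # candidate falls outside the reference image
        cost = abs(float(standard[y, x]) - float(reference[y, x - d]))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Toy one-row images where the true disparity is 3 pixels.
std = np.array([[0, 0, 0, 0, 0, 9, 0, 0]], dtype=float)
ref = np.array([[0, 0, 9, 0, 0, 0, 0, 0]], dtype=float)
d = estimate_disparity(std, ref, x=5, y=0, prev_disp=2)
# d == 3
```

Restricting the search to a neighborhood of the previous frame's disparity both speeds up matching and suppresses flicker between frames.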
Disparity-to-Depth Calibration for Plenoptic Imaging Systems
A procedure to calibrate a depth-disparity mapping for a plenoptic imaging system. In one aspect, one or more test objects located at known field positions and known depths are presented to the plenoptic imaging system. The plenoptic imaging system captures plenoptic images of the test objects. The plenoptic images include multiple images of the test objects captured from different viewpoints. Disparities for the test objects are calculated based on the multiple images taken from the different viewpoints. Since the field positions and depths of the test objects are known, a mapping between depth and disparity as a function of field position can be determined.
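At a single field position, the calibration described above reduces to fitting a curve through (disparity, depth) pairs measured from the test objects. A sketch assuming a simple depth ≈ a/disparity + b model; the patent does not specify this functional form, and all names are illustrative:

```python
import numpy as np

def fit_depth_vs_disparity(disparities, depths):
    """Fit depth ~ a / disparity + b from test-object measurements at
    one field position (model form assumed for illustration)."""
    inv_d = 1.0 / np.asarray(disparities, dtype=float)
    a, b = np.polyfit(inv_d, np.asarray(depths, dtype=float), 1)
    return a, b

# Synthetic calibration data following depth = 50/disparity + 0.1
disp = np.array([5.0, 10.0, 25.0, 50.0])
z = 50.0 / disp + 0.1
a, b = fit_depth_vs_disparity(disp, z)
# recovers a ~ 50, b ~ 0.1
```

Repeating the fit at each known field position yields the mapping between depth and disparity as a function of field position that the abstract describes.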
Systems and methods for persona identification using combined probability maps
Disclosed herein are systems and methods for persona identification using combined probability maps. An embodiment takes the form of a method that includes obtaining at least one frame of pixel data; processing the at least one frame of pixel data to generate a hair-identification probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map.
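The extraction step, keeping frame pixels whose probability of belonging to the persona (e.g. hair) is high enough, can be sketched as a thresholded mask; the threshold value and the omission of combining multiple probability maps are simplifying assumptions:

```python
import numpy as np

def extract_persona(frame, prob_map, threshold=0.5):
    """Keep pixels whose (e.g. hair-identification) probability exceeds
    a threshold; write zeros elsewhere. A minimal sketch of map-guided
    extraction; combining several probability maps is omitted."""
    mask = prob_map > threshold
    persona = np.zeros_like(frame)
    persona[mask] = frame[mask]
    return persona

frame = np.array([[10, 20], [30, 40]], dtype=np.uint8)
probs = np.array([[0.9, 0.2], [0.8, 0.4]])
out = extract_persona(frame, probs)
# out == [[10, 0], [30, 0]]
```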
METHOD AND APPARATUS FOR PROCESSING IMAGE CONTENT
A method and system are provided for processing image content. The method comprises receiving information about a content image captured by at least one camera. The content includes a multi-view representation of an image containing both distorted and undistorted areas. The camera parameters and image parameters are then obtained and used to determine which areas of the image are distorted and which are undistorted. This information is used to calculate a depth map of the image. A final stereoscopic image is then rendered using the distorted and undistorted areas and the calculated depth map.
GENERATING CONTENT FOR A VIRTUAL REALITY SYSTEM
The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data with a processor-based computing device programmed to perform the generating, providing the virtual reality content to a user, detecting a location of the user's gaze at the virtual reality content, and suggesting an advertisement based on the location of the user's gaze. Another example includes providing virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data to a first user with a processor-based computing device programmed to perform the providing, generating a social network for the first user, and generating a social graph that includes user interactions with the virtual reality content.
Display apparatus and method for estimating depth
A display apparatus and method may be used to estimate a depth distance from an external object to a display panel of the display apparatus. The display apparatus may acquire a plurality of images by detecting light that arrives from an external object and passes through apertures formed in the display panel, may generate one or more refocused images, and may calculate a depth from the external object to the display panel using the plurality of acquired images and the one or more refocused images.
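One common way to turn a stack of refocused images into depth is to pick, per pixel, the refocus depth at which the image is locally sharpest. The sketch below uses gradient magnitude as the focus measure; this is an illustrative depth-from-refocus approach, not necessarily the patented method:

```python
import numpy as np

def depth_from_refocus(refocused_stack, depths):
    """Assign each pixel the candidate depth whose refocused image is
    locally sharpest (gradient magnitude as a simple focus measure;
    an illustrative depth-from-refocus sketch)."""
    stack = np.asarray(refocused_stack, dtype=float)
    # Focus measure: absolute horizontal + vertical differences per slice.
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    sharpness = gx + gy
    best = np.argmax(sharpness, axis=0)           # index of sharpest slice
    return np.asarray(depths, dtype=float)[best]  # map index -> depth

flat = np.full((3, 3), 5.0)                 # defocused: no detail anywhere
edge = np.array([[0.0, 9.0, 0.0]] * 3)      # in focus: a vertical edge
depth_map = depth_from_refocus([flat, edge], depths=[1.0, 2.0])
# pixels on the edge are assigned depth 2.0; featureless pixels fall
# back to the first candidate depth
```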