Patent classification: H04N13/232
CAPTURING AND ALIGNING PANORAMIC IMAGE AND DEPTH DATA
This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
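The geometric idea in this abstract can be sketched as follows: units (cameras or depth components) placed at evenly spaced azimuths collectively span 360° horizontally when their per-unit fields of view tile the circle. The function names and even-spacing assumption are illustrative, not taken from the patent.

```python
# Illustrative sketch: evenly spaced azimuth orientations around a center
# point, and a coverage check for a collective 360-degree horizontal FOV.
# (Assumed model: n identical units, each with the same horizontal FOV.)

def azimuths(n: int) -> list[float]:
    """Evenly spaced azimuth orientations (degrees) for n units on a housing."""
    return [i * 360.0 / n for i in range(n)]

def covers_360(n: int, fov_deg: float) -> bool:
    """True if n units with the given horizontal FOV can have a collective
    field-of-view spanning the full 360 degrees (adjacent FOVs meet or overlap)."""
    return n * fov_deg >= 360.0
```

For example, four units with 90° lenses exactly tile the circle, while four 80° lenses leave gaps.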
Method and apparatus for restoring image
Provided is a method and apparatus for restoring an image, the apparatus including a plurality of lenses configured to pass a plurality of rays, a sensor including a target sensing element configured to receive a target ray passing a first lens among the plurality of lenses, and a second sensing element configured to receive a second ray passing a second lens among the plurality of lenses, the first lens being different from the second lens, and a processor configured to determine the second sensing element based on a difference between a direction of the target ray and a direction of the second ray, and to restore color information corresponding to the target sensing element based on color information detected by the second sensing element.
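The selection step described here, choosing a second sensing element based on the difference between its ray direction and the target ray's direction, can be sketched as a nearest-direction search. The angular-distance criterion and all names below are assumptions for illustration; the patent does not specify this metric.

```python
import math

def restore_color(target_dir, candidates):
    """Pick the sensing element whose ray direction differs least from the
    target ray's direction, and reuse its detected color for the target
    sensing element. `candidates` is a list of (direction_vector, color)
    pairs for elements behind other lenses. Illustrative sketch only."""
    def angle(u, v):
        # Angle between two direction vectors, clamped for float safety.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    _, color = min(candidates, key=lambda c: angle(target_dir, c[0]))
    return color
```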
Multi-view collimated display
A method of displaying a light field to at least one viewer of a light field display device, the light field based on a 3D model, the light field display device comprising a plurality of spatially distributed display elements, the method including the steps of: (a) determining the viewpoints of the eyes of the at least one viewer relative to the display device; (b) for each eye viewpoint and each of a plurality of the display elements, rendering a partial view image representing a view of the 3D model from the eye viewpoint through the display element; and (c) displaying, via each display element, the set of partial view images rendered for that display element.
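Steps (a) through (c) of this method can be sketched as a nested loop over eye viewpoints and display elements. The `render` and `show` callables below are assumed stand-ins for the renderer and the per-element display driver; they are not named in the patent.

```python
def display_light_field(viewers, display_elements, render, show):
    """Sketch of steps (a)-(c): determine eye viewpoints, render a partial
    view of the 3D model from each viewpoint through each display element,
    then display each element's set of partial view images."""
    # (a) collect the eye viewpoints of all viewers relative to the display
    eye_viewpoints = [eye for viewer in viewers for eye in viewer]
    # (b) render one partial view image per (eye viewpoint, display element)
    partials = {e: [render(eye, e) for eye in eye_viewpoints]
                for e in display_elements}
    # (c) display, via each element, the partial views rendered for it
    for e, images in partials.items():
        show(e, images)
    return partials
```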
Combining light-field data with active depth data for depth map generation
Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
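The combination of light-field depth data with active-sensor depth data can be sketched as a per-pixel confidence-weighted average. The weighting scheme is an illustrative assumption; the abstract only states that the two sources are used together to generate the depth map.

```python
def fuse_depth(lf_depth, sensor_depth, lf_conf, sensor_conf):
    """Confidence-weighted fusion of per-pixel light-field depth and active
    depth-sensor depth into a single depth map (here, flat lists of floats).
    Pixels with no confident source are marked NaN. Illustrative sketch."""
    fused = []
    for lf, s, wl, ws in zip(lf_depth, sensor_depth, lf_conf, sensor_conf):
        total = wl + ws
        fused.append((wl * lf + ws * s) / total if total > 0 else float('nan'))
    return fused
```

A pixel where both sources are equally trusted lands midway between them; a pixel with zero light-field confidence simply takes the sensor's value.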
SYSTEM AND METHOD FOR LIGHTFIELD CAPTURE
A system for generating holographic images or videos includes a camera array, a plurality of processors, and a central computing system. A method for generating holographic images can include receiving a set of images and processing the images.
DEVICE AND METHOD FOR RAPID THREE-DIMENSIONAL CAPTURE OF IMAGE DATA
A device includes a detection path, along which detection radiation is guided, and a means for splitting the detection radiation between first and second detection paths. A detector has detector elements in each detection path. A microlens array is disposed upstream of each detector in a pupil. The first and second detectors have a substantially identical spatial resolution. The detector elements of the first detector are arranged line by line in a first line direction, while the detector elements of the second detector are arranged line by line in a second line direction. The first and second detectors are arranged relative to the image to be captured such that the first and second line directions are inclined relative to one another. A readout unit for reading out the image data of the detectors is configured for selectively reading those detector elements arranged line by line which form an image line.
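The relation between the two detectors' line directions, inclined relative to one another, can be modeled as a planar rotation: a point read line-by-line on the first detector maps onto the second detector's line grid via a rotation by the inclination angle. This rotation model and the function below are illustrative assumptions, not the patent's description.

```python
import math

def map_line_direction(points, incline_deg):
    """Map (x, y) detector-element coordinates from the first detector's
    line direction into the second detector's frame by rotating through the
    inclination angle between the two line directions. Illustrative sketch."""
    a = math.radians(incline_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]
```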
Wide viewing angle stereo camera apparatus and depth image processing method using the same
Disclosed are a wide viewing angle stereo camera apparatus and a depth image processing method using the same. A stereo camera apparatus includes a receiver configured to receive a first image and a second image of a subject captured through a first lens and a second lens that are provided in a vertical direction; a converter configured to convert the received first image and second image using a map projection scheme; and a processor configured to extract a depth of the subject by performing stereo matching, in a height direction, on the first image and the second image converted using the map projection scheme.
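The depth-extraction step for a vertically stacked stereo pair can be sketched with the standard pinhole triangulation relation, with disparity measured along the height axis rather than the usual horizontal axis. This is an assumed simplification: the abstract's map projection conversion is omitted here, and the parameter names are illustrative.

```python
def vertical_disparity_depth(disparity, baseline, focal):
    """Depth from a vertical stereo pair: with the two lenses provided in a
    vertical direction, matching runs along the height axis, and depth
    follows the usual baseline * focal_length / disparity relation.
    `disparity` is in pixels, `baseline` in meters, `focal` in pixels;
    the returned depth is in meters. Illustrative pinhole model."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return baseline * focal / disparity
```

For instance, a 2-pixel vertical disparity with a 0.1 m baseline and a 500-pixel focal length gives a depth of 25 m.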