Patent classifications
H04N2013/0088
Image Processing as a Function of Deformable Electronic Device Geometry and Corresponding Devices and Methods
An electronic device includes one or more sensors detecting a geometry of the electronic device. At least one imager, disposed to a first side of a deformable portion of the electronic device, captures at least one image while at least one other imager, disposed to a second side of the deformable portion of the electronic device, captures at least one other image. The at least one image and at least one other image can each be any of a single image, a sequence of images, or video. One or more processors combine the at least one image and the at least one other image to create a composite image as a function of the geometry of the electronic device.
Information processing device and positional information obtaining method
An information processing device extracts an image of a marker from a photographed image, and obtains a position of a representative point of the marker in a three-dimensional space. Meanwhile, a position and an attitude of a target object at the time the image was photographed are estimated on the basis of an output value of a sensor included in the target object. On the basis of that estimation, a weight is given to the positional information of each marker using a target object model, and positional information of the target object is calculated. Final positional information is then obtained by synthesizing the calculated positional information and the estimated positional information at a predetermined ratio, and the final positional information is output and fed back for the next estimation.
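The final synthesis step in the abstract, blending the marker-derived position with the sensor-based estimate at a predetermined ratio, can be sketched as a complementary blend. This is a minimal illustration, not the patented method; the ratio value and array shapes are assumptions:

```python
import numpy as np

def fuse_positions(marker_position, sensor_position, ratio=0.8):
    """Blend a marker-derived 3D position with a sensor-estimated one.

    `ratio` is the weight given to the marker-derived position; the
    remainder goes to the sensor-based estimate. The fused result is
    what would be output and fed back for the next estimation step.
    """
    marker_position = np.asarray(marker_position, dtype=float)
    sensor_position = np.asarray(sensor_position, dtype=float)
    return ratio * marker_position + (1.0 - ratio) * sensor_position

# Marker tracking reports (1, 0, 2); the sensor-based estimate is (1.2, 0.1, 2.1).
fused = fuse_positions([1.0, 0.0, 2.0], [1.2, 0.1, 2.1], ratio=0.8)
```

Feeding the fused position back as the starting point of the next estimate is what keeps the two information sources from drifting apart over time.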
METHODS AND APPARATUS FOR RECEIVING AND/OR USING REDUCED RESOLUTION IMAGES
Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission one or more images of an environment are captured. Based on image content, motion detection and/or user input a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating a UV map corresponding to the selected resolution allocation that should be used by the playback device for rendering the communicated image. By changing the resolution allocation used and which UV map is used by the playback device different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders the individual images with the UV map corresponding to the resolution allocation used to generate the individual images.
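One way to picture the constant-pixel-count, variable-allocation idea above is a UV map whose horizontal texture coordinates give a priority region a larger share of a fixed-width texture. The half-and-half split and the share value below are illustrative assumptions, not details from the patent:

```python
import numpy as np

def uv_for_allocation(n=64, front_share=2 / 3):
    """Return horizontal texture coordinates (u) for n equally spaced
    viewing directions around an environment, giving the front half of
    the scene `front_share` of the texture width and the back half the
    remainder.

    The total number of texels stays constant; only their allocation
    changes. A playback device would render with the u-mapping that
    matches the resolution allocation used to encode the image.
    """
    angles = np.linspace(0.0, 1.0, n, endpoint=False)  # normalized azimuth
    return np.where(
        angles < 0.5,
        angles * 2 * front_share,                          # front half -> [0, front_share)
        front_share + (angles - 0.5) * 2 * (1 - front_share),  # back half -> [front_share, 1)
    )

u = uv_for_allocation(4)
```

Switching allocations then only requires telling the playback device which of several precomputed UV maps to use, while the transmitted image size never changes.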
Methods, systems, and media for rendering immersive video content with foveated meshes
Methods, systems, and media for rendering immersive video content with foveated meshes are provided. In some embodiments, the method comprises: receiving a video content item; determining, using a hardware processor, whether the video content item meets at least one criterion; in response to determining that the video content item meets the at least one criterion, generating, using the hardware processor, a foveated mesh in accordance with a foveation ratio parameter on which frames of the video content item are to be projected, wherein the foveated mesh has a non-uniform position map that increases pixel density in a central portion of each frame of the video content item in comparison with peripheral portions of each frame of the video content item; and storing the video content item in a file format that includes the generated foveated mesh, wherein the immersive video content is rendered by applying the video content item as a texture to the generated foveated mesh.
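The non-uniform position map described above can be sketched in one dimension with a warp that clusters mesh vertices near the center of the frame. The cubic warp and the use of the foveation ratio as its strength parameter are illustrative assumptions:

```python
import numpy as np

def foveated_positions(n_vertices=9, foveation_ratio=0.6):
    """One axis of a foveated mesh position map.

    Uniform parameter samples t in [-1, 1] are warped by a cubic so
    that vertices cluster near the center (t = 0). Applied to both mesh
    axes, this yields a non-uniform position map that spends more of
    the projected video frame's pixels on the central portion of each
    frame than on the periphery. `foveation_ratio` here controls warp
    strength, standing in for the patent's foveation ratio parameter.
    """
    t = np.linspace(-1.0, 1.0, n_vertices)
    return (1.0 - foveation_ratio) * t + foveation_ratio * t ** 3

pos = foveated_positions()
# Spacing between vertices is smaller near the center than near the
# edges, i.e., central pixel density is higher after projection.
```

Because the warp is baked into the mesh stored with the file, rendering remains an ordinary texture mapping of each frame onto that mesh.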
Layered scene decomposition CODEC with layered depth imaging
A system and methods are provided for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depth as the distance between a given layer and the display surface increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode the light field corresponding to each data layer. The resulting compressed, layered core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge-adaptive interpolation, to reconstruct pixel arrays in stages (e.g., columns then rows) from reference elemental images.
Methods and apparatus for processing content based on viewing information and/or communicating content
Methods and apparatus for collecting user feedback information from viewers of content are described. Feedback information is received from viewers of content. The feedback indicates, based on head tracking information in some embodiments, where users are looking in a simulated environment during different times of a content presentation, e.g., different frame times. The feedback information is used to prioritize different portions of an environment represented by the captured image content. Resolution allocation is performed based on the feedback information, and the content is re-encoded based on the resolution allocation. The resolution allocation may, and normally does, change as the priorities of different portions of the environment change.
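The prioritization step above can be sketched as aggregating head-tracking samples into per-region attention shares that an encoder could use when allocating resolution. The region labels and sample format are illustrative assumptions:

```python
from collections import Counter

def region_priorities(gaze_samples):
    """Aggregate viewer head-tracking samples into per-region priorities.

    `gaze_samples` is an iterable of region labels, one per
    (viewer, frame-time) sample, reporting where that viewer was
    looking. Returns a dict mapping region -> fraction of total
    attention, which an encoder could use to allocate resolution
    before re-encoding the content.
    """
    counts = Counter(gaze_samples)
    total = sum(counts.values())
    return {region: c / total for region, c in counts.items()}

priorities = region_priorities(["front", "front", "front", "left", "sky"])
```

Recomputing these shares over time is what lets the resolution allocation track shifts in where viewers actually look.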
Panoramic image generating method and apparatus
The present application discloses a method and apparatus for generating a panoramic image. A cuboid three-dimensional image of each local space is determined according to an original two-dimensional image of that local space in the overall space and a preset cuboid model corresponding to that local space, and a three-dimensional panoramic image of the overall space is then generated from all of the determined cuboid three-dimensional images. Because the three-dimensional panoramic image of the overall space is generated from the cuboid three-dimensional images of all the local spaces, the overall space can be viewed from a three-dimensional angle of view, achieving a three-dimensional real-scene effect.
Image processing method and device, and three-dimensional imaging system
Disclosed are an image processing method and device, and a three-dimensional imaging system. The method comprises the following steps: acquiring a two-dimensional image to be processed; aligning the two-dimensional image to a grid template; performing mapping processing on the two-dimensional image using a grid mapping table to acquire a first image, wherein the grid mapping table represents the mapping relationship between grid images; mirroring the first image to acquire a second image; and synthesizing the first image and the second image to acquire a superimposed image of the two. In the method, the grid template and the grid mapping table are used to map the two-dimensional image so as to simulate the left-eye and right-eye images perceived by human eyes. Because the same two-dimensional image needs to be mapped only once to acquire both the left-eye image and the right-eye image, the number of image processing steps is reduced and the processing time is shortened, providing favorable conditions for subsequent real-time conversion of the superimposed two-dimensional image into a three-dimensional image.
THREE-DIMENSIONAL NOISE REDUCTION
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a current image of a sequence of images from an image sensor; combining the current image with a recirculated image to obtain a noise reduced image, where the recirculated image is based on one or more previous images of the sequence of images from the image sensor; determining a noise map for the noise reduced image, where the noise map is determined based on estimates of noise levels for pixels in the current image, a noise map for the recirculated image, and a set of mixing weights; recirculating the noise map with the noise reduced image to combine the noise reduced image with a next image of the sequence of images from the image sensor; and storing, displaying, or transmitting an output image that is based on the noise reduced image.
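One step of the recirculating temporal (3D) noise reduction described above can be sketched as a per-pixel blend plus a matching noise-map update. The fixed mixing weight is an illustrative assumption; a real pipeline would vary the weights per pixel (e.g., based on motion):

```python
import numpy as np

def temporal_denoise(current, recirculated, noise_map_prev,
                     current_noise_est, alpha=0.2):
    """One step of recirculating temporal noise reduction.

    `current` is the new frame from the sensor; `recirculated` is the
    previous noise-reduced output. Returns the noise-reduced frame and
    an updated noise map that combines the per-pixel noise estimate for
    the current frame with the recirculated noise map using the same
    mixing weights.
    """
    denoised = alpha * current + (1.0 - alpha) * recirculated
    # Variances of independent noise sources add with squared weights.
    noise_map = np.sqrt((alpha * current_noise_est) ** 2
                        + ((1.0 - alpha) * noise_map_prev) ** 2)
    return denoised, noise_map

out, nm = temporal_denoise(np.full((2, 2), 10.0), np.full((2, 2), 8.0),
                           noise_map_prev=np.full((2, 2), 0.5),
                           current_noise_est=np.ones((2, 2)))
```

Both `out` and `nm` would then be recirculated to process the next frame, so the noise estimate stays consistent with how much averaging each pixel has actually received.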
Stereo infrared imaging for head mounted devices
A stereoscopic system that employs stereo infrared imaging to improve resolution and field of view (FOV) is disclosed. In embodiments, the stereoscopic system includes first and second infrared cameras that are configured to detect scenery in the same FOV. The stereoscopic system further includes at least one display and at least one controller configured to render imagery from the first infrared camera and imagery from the second infrared camera via a display (or multiple displays) to generate stereoscopic infrared imagery.