Patent classifications
H04N13/144
Light field display metrology
Examples of a light field metrology system for use with a display are disclosed. The light field metrology may capture images of a projected light field, and determine focus depths (or lateral focus positions) for various regions of the light field using the captured images. The determined focus depths (or lateral positions) may then be compared with intended focus depths (or lateral positions), to quantify the imperfections of the display. Based on the measured imperfections, an appropriate error correction may be performed on the light field to correct for the measured imperfections. The display can be an optical display element in a head mounted display, for example, an optical display element capable of generating multiple depth planes or a light field display.
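The measurement loop described above (capture the projected light field at several focus settings, find where each region is sharpest, and compare that to the intended focus depth) can be sketched as follows. This is a minimal illustration, not the patent's method: the variance-of-Laplacian focus metric, the region naming, and the focus-stack representation are all assumptions introduced here.

```python
import numpy as np

def sharpness(region):
    # Variance of a simple 4-neighbor Laplacian response as a focus metric;
    # sharper (in-focus) content yields a larger variance.
    lap = (-4 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return lap.var()

def estimate_focus_depth(focus_stack, depths):
    # The capture depth at which the region is sharpest is taken as the
    # measured focus depth for that region.
    scores = [sharpness(img) for img in focus_stack]
    return depths[int(np.argmax(scores))]

def depth_error_map(region_stacks, depths, intended_depths):
    # Per-region difference between measured and intended focus depth;
    # a display error correction would be driven by a map like this.
    return {name: estimate_focus_depth(stack, depths) - intended_depths[name]
            for name, stack in region_stacks.items()}
```

A region whose error is nonzero indicates a display imperfection at that location, which the correction stage would then compensate.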
Imaging system for a vehicle and method for obtaining an anti-flickering super-resolution image
An imaging system for a vehicle for obtaining an anti-flickering super-resolution image includes an image sensor adapted to obtain a sequence of images, and an image processor adapted to receive the sequence of images, compare image information of a most recent image of the sequence of images to a reference image to detect at least one image region of mismatch in the most recent image, remove the detected image region from image information of the most recent image to obtain adjusted image information, and add the adjusted image information of the most recent image to a super-resolution image.
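The accumulation step described in the abstract (detect regions where the newest frame disagrees with a reference, drop those regions, and add the rest to the super-resolution image) can be sketched as below. This is an illustrative reading, assuming a simple absolute-difference mismatch test and a running-average accumulation; the threshold and blend weight are invented parameters.

```python
import numpy as np

def accumulate_antiflicker(sr_image, new_image, reference,
                           threshold=0.2, weight=0.25):
    # Regions where the new frame disagrees with the reference (for example
    # a flickering pulsed-LED traffic sign) are treated as mismatches.
    mismatch = np.abs(new_image - reference) > threshold
    # Running-average accumulation into the super-resolution image.
    blended = (1 - weight) * sr_image + weight * new_image
    # Mismatched regions are excluded: the prior super-resolution content
    # is kept there, so the flicker never enters the accumulated image.
    return np.where(mismatch, sr_image, blended)
```

In a real pipeline the frames would first be registered (aligned at sub-pixel precision) before accumulation; that step is omitted here.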
Method and apparatus for photographing and projecting moving images in three dimensions
A digital cinematographic and projection process that provides 3D stereoscopic imagery that is not adversely affected by the standard frame rate of 24 frames per second, as is the convention in the motion picture industry worldwide. A method for photographing and projecting moving images in three dimensions includes recording a moving image with a first and a second camera simultaneously and interleaving a plurality of frames recorded by the first camera with a plurality of frames recorded by the second camera. The step of interleaving includes retaining odd numbered frames recorded by the first camera and deleting the even numbered frames, retaining even numbered frames recorded by the second camera and deleting the odd numbered frames, and creating an image sequence by alternating the retained images from the first and second camera.
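The interleaving step above reduces to a simple sequence operation: keep the odd-numbered frames from the first camera, the even-numbered frames from the second, and alternate them. A minimal sketch (frame numbering starting at 1, as in the abstract):

```python
def interleave_stereo(cam1_frames, cam2_frames):
    # Odd-numbered frames (1, 3, 5, ...) are retained from the first camera;
    # even-numbered frames (2, 4, 6, ...) from the second camera.
    kept1 = cam1_frames[0::2]
    kept2 = cam2_frames[1::2]
    sequence = []
    # Alternate the retained frames to build the projected image sequence.
    for a, b in zip(kept1, kept2):
        sequence.extend([a, b])
    return sequence
```

The combined sequence still runs at the original 24 frames per second, which is the point of the technique: each eye's stream is effectively 12 distinct frames per second, but the alternation preserves the standard projection rate.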
Methods and apparatus for processing and or encoding images with negative parallax
Encoding and streaming methods and apparatus are described. Objects in negative parallax in frame pairs, e.g., pairs of left and right eye images forming a stereoscopic image, are identified. An amount of negative parallax reduction implemented depends, in some embodiments, on the data rate being used for encoding and/or the amount of negative parallax detected in the frame pair to be encoded. The lower the supported data rate the greater the reduction in negative parallax in some embodiments. In some, but not all, embodiments objects in positive parallax, e.g., objects appearing to go into the page, are not subject to parallax reduction. When a lowest supported data rate is used mono encoding is used and parallax reduction steps are skipped. The same frame pair is encoded multiple times at different data rates. Different amounts of negative parallax reduction are performed for at least some of the different supported data rates.
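The rate-dependent reduction described above can be sketched as a per-object disparity adjustment. The tier boundaries and scale factors below are entirely hypothetical; the abstract only fixes the ordering (lower rate, greater reduction; lowest rate, mono; positive parallax untouched in some embodiments). Negative disparity here denotes an object in negative parallax, i.e., appearing to pop out of the screen.

```python
def adjusted_disparity(disparity, rate_kbps):
    # Hypothetical rate tiers (kbps) and reduction factors.
    if rate_kbps < 500:
        # Lowest supported rate: mono encoding, so no parallax at all.
        return 0.0
    if disparity >= 0:
        # Positive parallax (objects going "into the page") is left untouched.
        return disparity
    # The lower the rate, the stronger the negative-parallax reduction.
    scale = 0.25 if rate_kbps < 2000 else 0.5 if rate_kbps < 5000 else 1.0
    return disparity * scale
```

Encoding the same frame pair at several rates then simply means running this adjustment once per target rate before each encode.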
Dynamic Focus 3D Display
A direct retinal projector system that provides dynamic focusing for virtual reality (VR) and/or augmented reality (AR) is described. A direct retinal projector system scans images, pixel by pixel, directly onto the subject's retinas. This allows individual pixels to be optically affected dynamically as the images are scanned to the subject's retinas. Dynamic focusing components and techniques are described that may be used in a direct retinal projector system to dynamically and correctly focus each pixel in VR images as the images are being scanned to a subject's eyes. This allows objects, surfaces, etc. that are intended to appear at different distances in a scene to be projected to the subject's eyes at the correct depths.
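Because each pixel is scanned individually, a per-pixel focus command can be derived from the scene's intended depth at that pixel. One natural control variable, assumed here for illustration (the abstract does not specify one), is vergence in diopters, the reciprocal of the intended viewing distance:

```python
def focus_commands(depth_map_m, min_depth_m=0.1):
    # Convert each pixel's intended depth in meters to a focus command in
    # diopters (1 / distance), clamping very small depths to avoid division
    # blow-up. An optical element would be driven by these values per pixel.
    return [[1.0 / max(d, min_depth_m) for d in row] for row in depth_map_m]
```

Pixels intended at 1 m get a 1.0 D command, pixels at 2 m get 0.5 D, and so on, so objects at different scene distances are projected at their correct depths.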
IDENTIFYING REGIONS OF VISIBLE MEDIA DATA THAT BELONG TO A TRIGGER CONTENT TYPE
A computing system includes a storage device and processing circuitry. The processing circuitry is configured to obtain an image frame that comprises a plurality of pixels that form a pixel array. Additionally, the processing circuitry is configured to determine that a region of the image frame belongs to a trigger content type. Based on determining that the region of the image frame belongs to the trigger content type, the processing circuitry is configured to modify the region of the image frame to adjust a luminance of pixels of the region of the image frame based in part on an ambient light level in a viewing area of the user; and output, for display by a display device in the viewing area of the user, a version of the image frame that contains the modified region.
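Given a mask marking the trigger-content region, the luminance adjustment can be sketched as an ambient-dependent dimming. The mapping from lux to scale factor below is an assumption for illustration (the abstract only says the adjustment depends in part on ambient light); the intuition is that in a dark viewing area a bright or flashing region is dimmed more aggressively than in a well-lit one.

```python
import numpy as np

def dim_trigger_region(frame, region_mask, ambient_lux,
                       dark_scale=0.4, bright_scale=0.9):
    # Hypothetical mapping: interpolate the dimming factor between a
    # dark-room scale and a bright-room scale; 300 lux is taken here as
    # typical indoor lighting (an assumption, not from the source).
    t = min(ambient_lux / 300.0, 1.0)
    scale = dark_scale + t * (bright_scale - dark_scale)
    out = frame.astype(float).copy()
    # Only pixels inside the detected trigger-content region are modified.
    out[region_mask] *= scale
    return out
```

The frame returned is the version output for display: unchanged outside the region, luminance-adjusted inside it.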