Patent classifications
G06T3/0025
FOVEATED VIDEO RENDERING
Techniques are described for generating and rendering video content based on an area of interest (also referred to as foveated rendering) to allow 360 video or virtual reality to be rendered with relatively high pixel resolution even on hardware not specifically designed to render at such high pixel resolution. Processing circuitry may be configured to keep the pixel resolution within a first portion of an image of one view at the relatively high pixel resolution, but reduce the pixel resolution through the remaining portions of the image of the view based on an eccentricity map and/or user eye placement. A device may receive the images of these views and process the images to generate viewable content (e.g., perform stereoscopic rendering or interpolation between views). Processing circuitry may also make use of future frames within a video stream and base predictions on those future frames.
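The eccentricity-map idea above can be sketched in a few lines: pixels near the gaze point keep full resolution, while the periphery is replaced with a coarser rendering. This is a minimal illustration only, assuming a simple distance-based eccentricity map and a 4x block downsample for the low-resolution region; the function name and parameters are hypothetical.

```python
import numpy as np

def foveate(image, gaze_xy, radius):
    """Keep full resolution within `radius` of the gaze point; render the
    periphery at a quarter of the linear resolution. A minimal sketch of
    eccentricity-map-based resolution reduction (names are illustrative)."""
    h, w = image.shape[:2]
    # Low-resolution periphery: 4x downsample, then nearest-neighbor upsample.
    low = image[::4, ::4]
    low = np.repeat(np.repeat(low, 4, axis=0), 4, axis=1)[:h, :w]
    # Eccentricity map: distance of each pixel from the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    fovea = ecc <= radius
    if image.ndim == 3:
        fovea = fovea[..., None]
    return np.where(fovea, image, low)
```

A real renderer would use several resolution levels with a smooth falloff rather than a single hard threshold, but the selection logic is the same.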
Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
Systems and methods for modifying image distortion (curvature) for viewing distance in post capture. Presentation of imaging content on a content display device may be characterized by a presentation field of view (FOV). The presentation FOV may be configured based on the screen dimensions of the display device and the distance between the viewer and the screen. Imaging content may be obtained by an activity capture device characterized by a wide capture field of view lens (e.g., fish-eye). Images may be transformed into a rectilinear representation for viewing. When viewing images using a presentation FOV that may be narrower than the capture FOV, transformed rectilinear images may appear distorted. A transformation operation may be configured to account for the mismatch between the presentation FOV and the capture FOV. In some implementations, the transformation may include a fish-eye to rectilinear transformation characterized by a transformation strength that may be configured based on a ratio of the presentation FOV to the capture FOV.
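The strength-scaled transformation described above can be sketched as a blend between the fish-eye and rectilinear projections, weighted by the FOV ratio. This is a hypothetical illustration, assuming an equidistant fish-eye model (radius proportional to angle) and a linear blend; the actual patented mapping may differ.

```python
import math

def rectilinear_strength(presentation_fov_deg, capture_fov_deg):
    """Transformation strength as the ratio of presentation FOV to capture
    FOV, clamped to [0, 1]: full correction when the FOVs match, weaker
    correction for narrower on-screen views (illustrative sketch)."""
    ratio = presentation_fov_deg / capture_fov_deg
    return max(0.0, min(1.0, ratio))

def corrected_radius(theta, strength):
    """Blend between the equidistant fish-eye projection (r proportional to
    theta) and the rectilinear projection (r proportional to tan(theta))
    according to the transformation strength."""
    return (1.0 - strength) * theta + strength * math.tan(theta)
```

At strength 0 the image is left in its fish-eye form; at strength 1 it is fully rectilinear; intermediate ratios yield a partial correction that avoids over-stretching the edges of a narrow presentation view.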
FOVEATED RENDERING USING EYE MOTION
A method for providing imagery to a user on a display includes receiving eye tracking data. The method also includes determining a current gaze location and a relative distance between the current gaze location and an edge of the display using the eye tracking data. The method also includes defining a first tile centered at the current gaze location and multiple tiles that surround the first tile using the current gaze location and the relative distance between the current gaze location and the edge of the display. The method includes providing a foveated rendered image using the first tile and the multiple tiles.
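The tile-definition step above can be sketched as follows: a center tile at the gaze location plus a ring of surrounding tiles, each clipped against the display edges so that tiles near the border shrink or disappear. A simplified sketch with hypothetical names; the patented method's tile sizing and layout may be more elaborate.

```python
def make_tiles(gaze, display_w, display_h, tile):
    """Return a 3x3 grid of (x0, y0, x1, y1) tiles centered at the gaze
    location, clipped to the display; fully off-screen tiles are dropped."""
    cx, cy = gaze
    tiles = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            x0 = cx + dx * tile - tile // 2
            y0 = cy + dy * tile - tile // 2
            x1, y1 = x0 + tile, y0 + tile
            # Clip each tile to the display bounds.
            x0, y0 = max(0, x0), max(0, y0)
            x1, y1 = min(display_w, x1), min(display_h, y1)
            if x0 < x1 and y0 < y1:
                tiles.append((x0, y0, x1, y1))
    return tiles
```

The distance from the gaze point to the display edge then determines how many surrounding tiles survive clipping, which is what makes the layout edge-aware.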
Image processing devices, methods for controlling an image processing device, and computer-readable media
According to various embodiments, an image processing device may be provided. The image processing device may include: an input circuit configured to receive display data; a splitting circuit configured to split the display data into a first output and a second output; a first output circuit configured to output the first output for displaying with a first spatial resolution; and a second output circuit configured to output the second output for displaying with a second spatial resolution.
Foveated rendering using eye motion
A method for providing imagery to a user on a display includes receiving eye tracking data. The method also includes determining a gaze location on the display and at least one of a confidence factor of the gaze location, or a speed of the change of the gaze location using the eye tracking data. The method also includes establishing multiple tiles using the gaze location and at least one of the confidence factor or the speed of the change of the gaze location. The method also includes providing a foveated rendered image using the multiple tiles.
Systems and methods for fusing images
A method performed by an electronic device is described. The method includes obtaining a first image from a first camera, the first camera having a first focal length and a first field of view. The method also includes obtaining a second image from a second camera, the second camera having a second focal length and a second field of view disposed within the first field of view. The method further includes aligning at least a portion of the first image and at least a portion of the second image to produce aligned images. The method additionally includes fusing the aligned images based on a diffusion kernel to produce a fused image. The diffusion kernel indicates a threshold level over a gray level range. The method also includes outputting the fused image. The method may be performed for each of a plurality of frames of a video feed.
Foveated image capture for power efficient video see-through
Generating an image stream may include obtaining image data from a camera, identifying a first subset of the image data including a region of interest, identifying a second subset of the image data different than the first subset of the image data, processing the first subset of image data by a first processing pipeline to obtain a first processed set of image data, processing the second subset of image data by a second processing pipeline to obtain a second processed set of image data, wherein the second processing pipeline processes at a lower quality than the first processing pipeline, and combining the first processed set of image data and the second processed set of image data to obtain a processed image frame.
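The two-pipeline structure above can be sketched directly: the region of interest passes through a high-quality path, the remainder through a cheaper one, and the two results are recombined. In this minimal sketch the "low-quality pipeline" is stood in for by a 2x downsample/upsample and the "high-quality pipeline" by the untouched full-resolution data; the function and its arguments are illustrative, not the patented pipelines.

```python
import numpy as np

def process_frame(frame, roi):
    """Combine a high-quality region of interest with a low-quality
    periphery into one processed frame (dual-pipeline sketch)."""
    x0, y0, x1, y1 = roi
    h, w = frame.shape[:2]
    # Low-quality pipeline: 2x downsample then nearest-neighbor upsample
    # stands in for reduced processing on the periphery.
    low = frame[::2, ::2]
    out = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)[:h, :w].copy()
    # High-quality pipeline: full-resolution data for the region of interest.
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out
```

The power saving in a real device comes from the second pipeline doing genuinely less work (lower clock, smaller buffers, simpler demosaic), not merely from downsampling after the fact.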
Electronic device with central and peripheral displays
An electronic device such as a head-mounted device may have a display that is viewable by a user from eye boxes. The electronic device may have a gaze tracking system that monitors a user's eyes in the eye boxes to gather gaze direction information. The display may have a central portion and a peripheral portion. The peripheral portion may have a lower resolution than the central portion and may be used in displaying content that is viewable in a user's peripheral vision. During operation, control circuitry in the electronic device may adjust peripheral content on the peripheral portion to correct for parallax-induced mismatch between the peripheral content and central content on the central portion of the display. The control circuitry may also depower peripheral pixels that are determined to be unviewable based on the gaze direction. Diffusers may be used to hide seams between the central and peripheral display portions.
Image capture device providing warped previews
An image capture device may capture visual content through a front-facing optical element. A portion of the visual content may be enlarged for presenting on a front-facing display of the image capture device. The extent of the visual content within the portion may be warped to increase the size of the depiction within the portion.

SYSTEMS AND METHODS FOR GENERATING OBJECT DETECTION LABELS USING FOVEATED IMAGE MAGNIFICATION FOR AUTONOMOUS DRIVING
Systems and methods for processing high resolution images are disclosed. The methods include generating a saliency map of a received high-resolution image using a saliency model. The saliency map includes a saliency value associated with each of a plurality of pixels of the high-resolution image. The method then includes using the saliency map for generating an inverse transformation function that is representative of an inverse mapping of one or more first pixel coordinates in a warped image to one or more second pixel coordinates in the high-resolution image, and implementing an image warp for converting the high-resolution image to the warped image using the inverse transformation function. The warped image is a foveated image that includes at least one region having a higher resolution than one or more other regions of the warped image.
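The inverse-mapping step above can be sketched as follows: for each pixel coordinate in the warped output, an inverse transformation function returns the source coordinate in the high-resolution image, which is then sampled. This is a minimal nearest-neighbor sketch with hypothetical names; a saliency-driven system would derive the inverse map from the saliency values rather than take it as a given callable.

```python
import numpy as np

def warp_with_inverse_map(image, inverse_map):
    """Warp `image` given an inverse transformation: inverse_map(x, y)
    returns the source (sx, sy) in the input for each output pixel
    (nearest-neighbor sampling, clamped to the image bounds)."""
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            sx, sy = inverse_map(x, y)
            sx = min(max(int(round(sx)), 0), w - 1)
            sy = min(max(int(round(sy)), 0), h - 1)
            out[y, x] = image[sy, sx]
    return out
```

An inverse map that compresses distances around a salient point (e.g. sampling source coordinates at half spacing near it) magnifies that region in the output while demagnifying the rest, which is exactly the foveated-warp effect the abstract describes.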