H04N13/128

Transferring graphic objects between non-augmented reality and augmented reality media domains
11562544 · 2023-01-24 ·

A display of an augmented-reality-enabled (AR) device, such as a mobile phone, can be used to transfer a graphical object between a secondary display, such as a computer monitor captured by a camera of the AR device, and AR space, where the object is visible only through the AR interface of the AR device. A graphical object can be selected through the AR interface and, for example, moved around on a canvas of the secondary display by the user of the AR device. When the AR interface is used to move an enabled object near an edge of the canvas or a physical boundary of the secondary display, the object shown on the secondary display can be made to disappear and be replaced by a virtual object, shown only on the AR interface, in a similar location.
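
The handoff described above hinges on a boundary test: while the object stays inside the canvas it is rendered by the secondary display, and once it nears an edge it is swapped for an AR-space counterpart. A minimal sketch of that decision, where the `Canvas` model, the `margin` threshold, and the function name are all hypothetical illustrations rather than the patented mechanism:

```python
from dataclasses import dataclass


@dataclass
class Canvas:
    """Pixel bounds of the secondary display's canvas (hypothetical model)."""
    width: int
    height: int


def handoff_domain(x: float, y: float, canvas: Canvas, margin: float = 10.0) -> str:
    """Return which media domain should render the object.

    When the object's position is within `margin` pixels of a canvas edge,
    the secondary display hides it and an AR-space counterpart is shown at
    a similar location; otherwise it stays on the secondary display.
    """
    near_edge = (
        x < margin
        or y < margin
        or x > canvas.width - margin
        or y > canvas.height - margin
    )
    return "ar_space" if near_edge else "secondary_display"
```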

Depth image processing method, depth image processing apparatus and electronic device

Disclosed are a depth image processing method, a depth image processing apparatus (10), and an electronic device (100). The depth image processing method is applied in the electronic device (100) including a depth image capturing apparatus (20) configured to capture an initial depth image. The depth image processing method includes: obtaining (01) target depth data for a number of regions of interest of the initial depth image; determining (02) whether the number of regions of interest is greater than a predetermined value; grouping (03), in response to the number of regions of interest being greater than the predetermined value, the number of regions of interest based on the target depth data to obtain a target depth of field; obtaining (04) a target blurring intensity based on the target depth of field; and blurring (05) the initial depth image based on the target blurring intensity to obtain a blurred depth image.
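
The numbered steps (01)–(05) can be pictured as a small pipeline. Everything concrete below — the metre units, the single min/max grouping rule, and the linear mapping from depth-of-field span to blur intensity — is an illustrative assumption, not the claimed method:

```python
def process_depth_image(roi_depths, predetermined_count=3,
                        group_width=0.5, max_blur=15.0):
    """Sketch of steps (01)-(05): group ROI depths into a target depth of
    field, derive a blurring intensity, and report it.

    roi_depths: target depth (in metres, assumed) of each region of
    interest in the initial depth image (step 01).
    """
    # Step (02): only group when there are more ROIs than the threshold.
    if len(roi_depths) <= predetermined_count:
        return None  # too few ROIs; fall back to per-ROI handling
    # Step (03): group the ROIs into one depth-of-field band (assumption:
    # a single group spanning the nearest and farthest target depths).
    near, far = min(roi_depths), max(roi_depths)
    target_dof = (near, far)
    # Step (04): derive a blurring intensity; here simply proportional
    # to the depth-of-field span, capped at max_blur (assumption).
    intensity = min(max_blur, (far - near) / group_width)
    # Step (05): this intensity would parameterize the blur kernel
    # applied to the initial depth image to obtain the blurred image.
    return {"depth_of_field": target_dof, "blur_intensity": intensity}
```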

Depth-based image stabilization
11704776 · 2023-07-18 ·

Depth information can be used to assist with image processing functionality, such as image stabilization and blur reduction. In at least some embodiments, depth information obtained from stereo imaging or distance sensing, for example, can be used to distinguish a foreground object from background object(s) in an image or frame of video. The foreground object can then be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined and accounted for by adjusting the subsequent frames or images. Such an approach provides image stabilization for at least a foreground object while simplifying processing and reducing power consumption. Similar processes can be used to reduce blur for an identified foreground object in a series of images, where the blur of the identified object is analyzed.
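
One way to read the offset-compensation step: track the depth-segmented foreground object's centre between frames, and shift the new frame to cancel small displacements while letting large (intentional) motion pass through. A toy sketch, where the centre positions, the `max_shift` threshold, and the foreground tracking itself are assumed inputs:

```python
def stabilize_offset(prev_pos, curr_pos, max_shift=8):
    """Compute the compensating shift for a tracked foreground object.

    prev_pos / curr_pos: (x, y) centre of the foreground object in the
    previous and current frames (e.g. from a depth-based segmentation).
    Small offsets are treated as camera jitter and cancelled by shifting
    the current frame the opposite way; large offsets are treated as
    intentional motion and left alone.
    """
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    if abs(dx) > max_shift or abs(dy) > max_shift:
        return (0, 0)  # deliberate movement: do not stabilize
    return (-dx, -dy)  # shift to apply to the frame to cancel the jitter
```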


IMAGE SENSORS AND SENSING METHODS TO OBTAIN TIME-OF-FLIGHT AND PHASE DETECTION INFORMATION
20230232130 · 2023-07-20 ·

Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode; a single microlens covering the plurality of sub-pixels; and a read-out circuit for extracting the i-ToF phase signals of each sub-pixel individually.

Multi-channel depth estimation using census transforms

A depth estimation system is described that can determine depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
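
A census transform encodes each pixel as a bit pattern of comparisons against its neighbours, so matching costs between candidate pixel pairs become Hamming distances between patterns. A single-channel, single-scan-direction sketch of the idea (the multi-channel and multi-direction aggregation in the abstract is omitted, and the window size and winner-take-all cost rule are assumptions):

```python
def census_transform(img, x, y, win=1):
    """Census descriptor for the (2*win+1)^2 window centred at (x, y).

    img is a 2-D list of intensities. Each neighbour contributes one
    bit: 1 when the neighbour is darker than the centre pixel.
    """
    centre = img[y][x]
    bits = 0
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < centre else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two census descriptors."""
    return bin(a ^ b).count("1")


def best_disparity(left, right, x, y, max_disp=16, win=1):
    """Pick the disparity with the lowest census cost along one scanline."""
    cl = census_transform(left, x, y, win)
    best, best_cost = 0, float("inf")
    for d in range(min(max_disp, x - win) + 1):
        cost = hamming(cl, census_transform(right, x - d, y, win))
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

Winner-take-all over per-scanline costs, as here, is the simplest correspondence rule; the described system additionally aggregates the comparison over multiple scan directions and light channels before producing the depth map.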

Methods and systems for reprojection in augmented-reality displays
11704883 · 2023-07-18 ·

Methods and systems are provided for a reprojection engine for augmented-reality devices. The augmented-reality device projects virtual content within a real-world environment. The augmented-reality device tracks a six-degrees-of-freedom headpose of the augmented-reality device, depth information of the virtual content, motion vectors that correspond to movement of the virtual content, and a color buffer for a reprojection engine. The reprojection engine generates a reprojection of the virtual content, defined by an extrapolation of a first frame, using the headpose, the depth information, the motion vectors, and the color buffer. The reprojected virtual content continues to appear as if positioned within the real-world environment regardless of changes in the headpose of the augmented-reality device or motion of the virtual content.
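
Per pixel, the extrapolation combines two effects: the content's own screen-space motion vector, and parallax from the headpose change, which scales with inverse depth so nearer content shifts more. A toy 2-D sketch of that combination (lateral head translation only; the pinhole model, focal length, and sign conventions are assumptions, not the patented engine):

```python
def reproject_pixel(x, y, depth, motion_vec, headpose_delta, focal=500.0):
    """Extrapolate one rendered pixel to the next displayed frame.

    motion_vec: screen-space motion of the virtual content (px/frame),
    from the tracked motion vectors.
    headpose_delta: (tx, ty) lateral head translation since the rendered
    frame, in the same units as depth.
    """
    # Content's own motion, extrapolated one frame forward.
    x_new = x + motion_vec[0]
    y_new = y + motion_vec[1]
    # Parallax correction from the headpose change: inverse-depth scaled.
    x_new -= focal * headpose_delta[0] / depth
    y_new -= focal * headpose_delta[1] / depth
    return (x_new, y_new)
```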

Depth map re-projection on user electronic devices

A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
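
The re-warping step can be pictured as dead-reckoning on the cached feature points: each point carries the movement and position information from the depth map, and when the stream from the external device drops, positions are extrapolated forward by the number of missed frames. A hypothetical sketch (the feature-point dictionary layout and linear extrapolation are assumptions):

```python
def rewarp_features(feature_points, frames_missed):
    """Extrapolate cached depth-map feature points across an interruption.

    Each feature point holds a 3-D position and a per-frame velocity
    (the abstract's movement/position information). The extrapolated
    points would drive the partial re-rendering of the tracked objects.
    """
    warped = []
    for f in feature_points:
        x, y, z = f["pos"]
        vx, vy, vz = f["vel"]
        warped.append({
            "pos": (x + vx * frames_missed,
                    y + vy * frames_missed,
                    z + vz * frames_missed),
            "vel": f["vel"],
        })
    return warped
```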