G06T7/536

DETERMINING IMAGE FEATURE HEIGHT DISPARITY
20230222678 · 2023-07-13

A device to determine a height disparity between features of an image includes a memory storing instructions and processing circuitry. The processing circuitry is configured by the instructions to obtain an image including a first repetitive feature and a second repetitive feature. The processing circuitry is further configured by the instructions to determine a distribution of pixels in a first area of the image, where the first area includes an occurrence of the repetitive features, and to determine a distribution of pixels in a second area of the image, where the second area includes another occurrence of the repetitive features. The processing circuitry is further configured by the instructions to evaluate the distribution of pixels in the first area and the distribution of pixels in the second area to determine a height difference between the first repetitive feature and the second repetitive feature.
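The core of the abstract — summarizing the pixel distribution of two areas, each containing an occurrence of a repetitive feature, and comparing the summaries — can be sketched as below. This is an illustrative reading only: it assumes pixel intensity encodes height (e.g., a depth map), and the comparison of distribution means stands in for whatever evaluation the claims actually cover.

```python
import numpy as np

def height_disparity(image, area_a, area_b):
    """Compare the pixel distributions of two regions, each containing an
    occurrence of a repetitive feature, and return a height-difference proxy.
    `area_a` and `area_b` are (row_slice, col_slice) pairs.

    Assumption: pixel intensity encodes height, so the difference of the
    regions' distribution means approximates the height disparity.
    """
    pixels_a = image[area_a].ravel()
    pixels_b = image[area_b].ravel()
    # Summarize each region's pixel distribution by its mean value.
    return pixels_b.mean() - pixels_a.mean()

# Synthetic depth-like image: left feature at height 10, right at 14.
img = np.full((8, 8), 10.0)
img[:, 4:] = 14.0
d = height_disparity(img, (slice(0, 8), slice(0, 4)),
                          (slice(0, 8), slice(4, 8)))
print(d)  # 4.0
```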

Image processing apparatus, control method for an image processing apparatus, and medium
11557081 · 2023-01-17

A reduction in quality of a virtual viewpoint image is suppressed. An image processing apparatus specifies, from among a plurality of images obtained by a plurality of imaging apparatuses and based on pixel values of the plurality of images, an image in which a specific position on a target object is not occluded by another object; determines, based on the specified image, a value of a pixel corresponding to the specific position in the virtual viewpoint image to be generated from the plurality of images; and generates the virtual viewpoint image including the target object based on the determined pixel value.
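The per-pixel selection step — using only those camera images in which the specific position is not occluded — can be sketched as follows. This is a simplified stand-in: the occlusion flags are assumed given, whereas the patent derives occlusion from the images' pixel values, and averaging the visible observations is one plausible combination rule, not the claimed one.

```python
def resolve_pixel(observations):
    """Pick a value for a virtual-viewpoint pixel from multi-camera
    observations. Each observation is (pixel_value, occluded), where
    `occluded` flags that the specific position on the target object
    is hidden by another object in that camera's image.

    Simplification: occlusion flags are assumed precomputed, and the
    unoccluded observations are simply averaged.
    """
    visible = [v for v, occluded in observations if not occluded]
    if not visible:
        raise ValueError("position occluded in every camera image")
    return sum(visible) / len(visible)

# Camera 2 sees an occluder in front of the target position.
obs = [(120, False), (250, True), (124, False)]
print(resolve_pixel(obs))  # 122.0
```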

Vanishing point extraction devices and methods of extracting vanishing point
11557052 · 2023-01-17

Vanishing point extraction includes obtaining a straight line including a vanishing point of a first image; obtaining a plurality of sample points in the first image by processing the first image according to an object included in the first image and the straight line including the vanishing point, such that the sample points are pixels in the first image whose coordinates overlap with coordinates of pixels of both the straight line and the object; obtaining at least one matching point, in a second image generated after the first image, that corresponds to at least one of the sample points in the first image; and obtaining a vanishing point of the second image based on the at least one matching point.
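The sample-point condition — pixels lying on both the straight line and the object — is straightforward to sketch. The correspondence search in the second image is reduced here to a fixed one-pixel shift, a placeholder for whatever matching the method actually performs; the final vanishing-point estimation from the matches is omitted.

```python
import numpy as np

def sample_points(line_pixels, object_mask):
    """Keep the pixels whose coordinates lie on BOTH the straight line
    through the first image's vanishing point and the detected object,
    mirroring the overlap condition in the abstract."""
    return [(r, c) for (r, c) in line_pixels if object_mask[r, c]]

# Toy first image: a diagonal line, and an object occupying rows 2-4.
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, :] = True
line = [(i, i) for i in range(6)]   # pixel coordinates of the line
pts = sample_points(line, mask)
print(pts)  # [(2, 2), (3, 3), (4, 4)]

# Matching points in the later frame (a 1-pixel shift stands in for
# the actual correspondence search between the two images).
matches = [(r, c + 1) for (r, c) in pts]
```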

System and method for digital hologram synthesis and process using deep learning

A system and method for hologram synthesis and processing capable of synthesizing holographic 3D data and displaying (or reconstructing) a full 3D image at high speed using a deep learning engine. The system synthesizes or generates a digital hologram from a light field refocus image input using the deep learning engine. That is, RGB-depth map data is acquired at high speed using the deep learning engine, such as a convolutional neural network (CNN), from real 360° multi-view color image information and the RGB-depth map data is used to produce hologram content. In addition, the system interlocks hologram data with user voice recognition and gesture recognition information to display the hologram data at a wide viewing angle and enables interaction with the user.
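Once the CNN has produced RGB-depth map data, a hologram can be computed from the depth channel. The sketch below uses a textbook point-source (Fresnel) method as a stand-in for the patent's hologram-generation pipeline; the deep learning engine that predicts the depth map from 360° multi-view color images, and the voice/gesture interaction, are outside its scope. All parameter values are illustrative.

```python
import numpy as np

def depth_to_hologram(depth, wavelength=532e-9, pitch=8e-6):
    """Turn the depth channel of an RGB-depth map into a phase-only
    hologram by summing spherical-wave contributions, one per scene
    point. A standard point-source method, used here only to illustrate
    the depth-map-to-hologram step; not the patented pipeline.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float) * pitch  # plane coordinates
    field = np.zeros((h, w), dtype=complex)
    k = 2 * np.pi / wavelength                          # wavenumber
    for r in range(h):
        for c in range(w):
            z = depth[r, c]
            if z <= 0:
                continue  # no scene point behind this pixel
            # Distance from the scene point to every hologram pixel.
            rr = np.sqrt((ys - r * pitch) ** 2
                         + (xs - c * pitch) ** 2 + z * z)
            field += np.exp(1j * k * rr) / rr
    return np.angle(field)  # phase-only hologram in [-pi, pi]

depth = np.zeros((16, 16))
depth[8, 8] = 0.05  # one point 5 cm behind the hologram plane
phase = depth_to_hologram(depth)
print(phase.shape)  # (16, 16)
```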

Transparent display system, parallax correction method and image outputting method

A parallax correction method for a transparent display system is provided. The transparent display system includes a transparent display device located between a background object and a user. The parallax correction method includes the following steps. A gaze point is displayed on the transparent display device. An image including the transparent display device, the background object and the user is captured. At least two display anchor points and at least two corresponding background object anchor points are detected according to the image. The display anchor points are located on the transparent display device, and the background object anchor points are located on the background object. A plurality of visual extension lines extending from the display anchor points and the corresponding background object anchor points are obtained. An equivalent eye position of the ocular dominance of the user is obtained according to an intersection of the visual extension lines.
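The final step — intersecting the visual extension lines to recover the equivalent eye position — reduces to a small least-squares problem. The 2D sketch below assumes the anchor points have already been detected from the captured image; with noisy anchors the lines will not meet exactly, which is why a least-squares intersection rather than an exact one is used.

```python
import numpy as np

def equivalent_eye_position(display_pts, background_pts):
    """Least-squares intersection of the visual extension lines, each
    line running from a background object anchor point through its
    corresponding display anchor point toward the user. 2D sketch of
    the geometry; capture and anchor detection are assumed done.
    """
    A, b = [], []
    for d, g in zip(np.asarray(display_pts, float),
                    np.asarray(background_pts, float)):
        u = d - g
        u /= np.linalg.norm(u)          # line direction
        n = np.array([-u[1], u[0]])     # 2D normal to the line
        A.append(n)                      # constraint: n . p = n . g
        b.append(n @ g)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Two extension lines that both pass through the eye at the origin.
eye = equivalent_eye_position([(1.0, 1.0), (1.0, -1.0)],
                              [(2.0, 2.0), (2.0, -2.0)])
print(np.round(eye, 6))
```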

Systems and methods for generating augmented reality environments from two-dimensional drawings

Systems and methods for generating augmented reality environments from 2D drawings are provided. The system performs a camera calibration process to determine how a camera transforms images from the real world into a 2D image plane. The system calculates a camera pose and determines an object position and an object orientation relative to a known coordinate system. The system detects and processes a 2D drawing/illustration and generates a 3D model from the 2D drawing/illustration. The system performs a rendering process, wherein the system generates an augmented reality environment which includes the 3D model superimposed on an image of the 2D drawing/illustration. The system can generate the augmented reality environment in real time, allowing the system to provide immediate feedback to the user. The images processed by the system can be from a video, from multiple image photography, etc.
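The rendering step relies on standard projective geometry: given the intrinsic matrix from camera calibration and the calculated pose, the generated 3D model's vertices are projected into the image of the 2D drawing. The pinhole projection below illustrates that geometric core; all matrix values are illustrative, and the calibration, pose estimation, and 2D-to-3D model generation stages are assumed already done.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole projection of 3D model vertices into the image plane:
    the geometric core of superimposing the generated 3D model on the
    2D drawing. K is the intrinsic matrix from camera calibration;
    (R, t) is the calculated camera pose.
    """
    cam = (R @ np.asarray(points_3d, float).T).T + t  # world -> camera
    uvw = (K @ cam.T).T                               # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]                   # perspective divide

K = np.array([[800.0,   0.0, 320.0],   # focal length and principal point
              [  0.0, 800.0, 240.0],   # (illustrative values)
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera looking down +Z
t = np.array([0.0, 0.0, 2.0])          # model 2 m in front of the camera
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]
print(project_points(verts, K, R, t))
# [[320. 240.]
#  [360. 240.]]
```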