THREE-DIMENSIONAL AUTO-FOCUSING DISPLAY METHOD AND SYSTEM THEREOF
20170257614 · 2017-09-07
Inventors
CPC classification (ELECTRICITY)
H04N13/383
H04N13/122
H04N2013/0081
H04N2213/002
H04N13/271
International classification
Abstract
A 3D auto-focusing display method comprises: executing an eye-tracking step on a 3D image to obtain focal point coordinates (x1, y1) of viewers of the image; mapping the focal point coordinates (x1, y1) of viewers to a coordinate location of a display to obtain display coordinates (x2, y2), which define the coordinate location of the display corresponding to a depth diagram of the 3D image; determining a region where the image is located by using the display coordinates (x2, y2) as an input parameter together with the depth diagram of the image; determining, according to the region, whether the image is a 3D stereoscopic image, and executing a depth map step to revise the 3D image based on the image and a plurality of depth data of the region so as to reflect the display coordinates (x2, y2) as a focused image; and outputting the revised focused image to the display.
Claims
1. A three-dimensional (3D) auto-focusing display method, comprising: providing an image; executing an eye-tracking step on the image to obtain focal point coordinates (x1, y1) of viewers of the image; mapping the focal point coordinates (x1, y1) of viewers to a coordinate location of a display to obtain display coordinates (x2, y2) defining the coordinate location of the display corresponding to a depth diagram of the image; determining a region where the image is located by using the display coordinates (x2, y2) as an input parameter and by use of the depth diagram of the image; determining whether the image is a three-dimensional (3D) stereoscopic image according to the region where the image is located, and executing a depth map step to revise the 3D image based on the image and a plurality of depth data of the region so as to reflect the display coordinates (x2, y2) as a focused image; and outputting the revised focused image to the display to display 3D stereoscopic images on the display.
2. The three-dimensional (3D) auto-focusing display method as claimed in claim 1, wherein the image is one of a landscape, a portrait, or physical goods.
3. The three-dimensional (3D) auto-focusing display method as claimed in claim 1, wherein the depth diagram of the image is a combination of a plurality of segments of different regions.
4. The three-dimensional (3D) auto-focusing display method as claimed in claim 3, wherein each segment is defined as a set of pixels of the image having the same depth value or depth values within a common range.
5. A three-dimensional (3D) auto-focusing display system, comprising: a front viewer image capturing sensor module used for performing an eye-tracking function on an image to obtain focal point coordinates (x1, y1) of viewers of the image; a rear viewer image capturing sensor module used for capturing the image; an image processing module used for processing the image to obtain display coordinates (x2, y2) corresponding to the image and to display the image as a 3D stereoscopic image; and a display module used for displaying the 3D stereoscopic image.
6. The three-dimensional (3D) auto-focusing display system as claimed in claim 5, wherein the front viewer image capturing sensor module is a camera apparatus with sensors or a web camera apparatus with a pupil-detection function.
7. The three-dimensional (3D) auto-focusing display system as claimed in claim 5, wherein the rear viewer image capturing sensor module is one of a time-of-flight camera apparatus, a stereoscopic camera apparatus, and a web camera apparatus with image depth generating function.
8. The three-dimensional (3D) auto-focusing display system as claimed in claim 5, wherein the image processing module further uses a two-dimensional (2D) image and information of a depth diagram corresponding to the 2D image to form the 3D stereoscopic images.
9. The three-dimensional (3D) auto-focusing display system as claimed in claim 5, wherein the image processing module further executes a number of image analysis and filtering algorithms on the 3D stereoscopic images, and corrects the 3D stereoscopic images by use of image data and depth diagram data.
10. The three-dimensional (3D) auto-focusing display system as claimed in claim 5, wherein the image processing module further extrapolates the focal point coordinates (x1, y1) of viewers, thereby executing auto-focusing; translates the focal point coordinates (x1, y1) of viewers into the display coordinates (x2, y2) with respect to the display module; and confirms segments of the image in order to reflect the display coordinates (x2, y2), form a suitably gained stereoscopic image, and confirm that the displayed 3D stereoscopic image is in focus on the display module.
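The steps recited in claims 1 and 10 form a pipeline: eye tracking yields (x1, y1), a mapping yields display coordinates (x2, y2), the depth diagram locates a region, and the image is revised so that region is in focus. A minimal sketch follows; every function here is a simplified stand-in invented for illustration, not the patented implementation.

```python
def track_eyes():
    """Stand-in eye tracker: returns viewer focal coordinates (x1, y1)
    in a normalized [0, 1] space (a fixed value for this sketch)."""
    return (0.5, 0.25)

def map_to_display(x1, y1, width, height):
    """Map normalized focal coordinates to display pixel coordinates."""
    return (int(x1 * (width - 1)), int(y1 * (height - 1)))

def locate_region(depth_map, x2, y2):
    """Return the depth value at (x2, y2); the pixels sharing this value
    form the segment (region) the viewer is looking at."""
    return depth_map[y2][x2]

def refocus(image, depth_map, focus_depth, blur=0.5):
    """Toy revision step: keep pixels of the focused region unchanged
    and attenuate all others, standing in for the depth-based
    correction of the 3D image."""
    return [[px if depth_map[y][x] == focus_depth else px * blur
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]

# 4x4 grayscale image and its depth diagram (two segments: depth 1 and 2)
image = [[100, 100, 200, 200] for _ in range(4)]
depth = [[1, 1, 2, 2] for _ in range(4)]

x1, y1 = track_eyes()
x2, y2 = map_to_display(x1, y1, 4, 4)       # -> (1, 0)
focus_depth = locate_region(depth, x2, y2)  # viewer gazes at depth-1 region
focused = refocus(image, depth, focus_depth)
```

The real system would replace `track_eyes` with the front viewer image capturing sensor module and `refocus` with the auto-focusing gain and correction procedures of the image processing module.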
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
[0019] Referring to
[0020] Next, in step 13, an image processing step is performed by using a three-dimensional (3D) image auto-focusing step to process the display coordinates (x2, y2) relative to the image, the 3D stereoscopic images, and the depth maps of the images, so as to form new, focused, and corrected 3D stereoscopic images relative to the display coordinates (x2, y2). In step 15, a sub-pixel mapping step is executed to further focus and correct the 3D stereoscopic images. Finally, in step 17, the focused and corrected 3D stereoscopic image is output to the display to reflect the display coordinates (x2, y2).
[0021] In more detail, the three-dimensional auto-focusing display method in accordance with the present invention integrates two systems. The method first comprises a step of displaying images of 3D stereoscopic content, such as step 11, wherein a viewer's eyes are focused on a specific point in physical space. Next, an eye-tracking step, such as step 111, is performed to obtain and determine the focal point coordinates (x1, y1) of viewers by using an eye-tracking system. Then, a step of mapping the focal point coordinates (x1, y1) of viewers to display coordinates (x2, y2), such as step 113, is performed, and the display coordinates (x2, y2) are represented by pixel coordinates of a display. A depth map step, such as step 121, is then executed on the image to obtain a depth diagram corresponding to the image. The depth diagram can be obtained by using hardware components or by using depth structural algorithms to process 3D stereoscopic images. The display coordinates (x2, y2) relative to the image are used as input parameters of an image processing module 25 of the three-dimensional (3D) stereoscopic display system 2 of the present invention. The image processing module 25 determines in which coordinate region the image is located from the input display coordinates (x2, y2) and the image depth diagram. The image depth diagram is an identifying factor of the regions of the image; in other words, the depth diagram is a combination of segments of different regions. Each segment is defined as a set of pixels of the image having the same depth value or depth values within a common range. The image processing module 25 uses a combination of images and depth data to correct the 3D stereoscopic images and to reflect the display coordinates (x2, y2) as focuses.
Then, the image processing module 25 forms a sub-pixel (RGB) pattern from the corrected and focused image and outputs it to the display for convenient viewing.
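The depth diagram described above is "a combination of segments of different regions", each segment being a set of pixels sharing a depth value or a range of depth values. A short sketch of that segmentation and of the region lookup by display coordinates (x2, y2) follows; the binning by `bin_size` is an assumed way of expressing "a range of the same depth value".

```python
from collections import defaultdict

def segment_depth_map(depth_map, bin_size=1):
    """Group pixels into segments: each segment is the set of pixel
    coordinates whose depth falls in the same bin (the same depth
    value, or a range of values when bin_size > 1)."""
    segments = defaultdict(set)
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            segments[d // bin_size].add((x, y))
    return dict(segments)

def region_of(segments, x2, y2):
    """Return the segment containing display coordinates (x2, y2)."""
    for key, pixels in segments.items():
        if (x2, y2) in pixels:
            return key, pixels
    raise ValueError("coordinates outside the depth diagram")

# Tiny depth diagram with three depth levels (10, 20, 30)
depth = [[10, 10, 30],
         [10, 20, 30],
         [20, 20, 30]]
segments = segment_depth_map(depth, bin_size=10)
key, pixels = region_of(segments, 2, 0)   # viewer looks at the right column
```

The returned segment is exactly the "set of pixels ... which has a same depth value" that the image processing module would then use to correct and focus the stereoscopic image.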
[0022] Next, referring to
[0023] The rear viewer image capturing sensor module 23 is used for capturing the image and acts as the source of stereoscopic images in the present invention. In a preferred embodiment of the present invention, the rear viewer image capturing sensor module 23 can be a stereo camera module with a time-of-flight sensor. The camera module can capture stereoscopic images by itself, or capture images together with a depth diagram by using the time-of-flight sensor. Another example of the rear viewer image capturing sensor module 23 comprises a stereo camera apparatus without any time-of-flight sensor, and a two-dimensional (2D) image sensor. The rear viewer image capturing sensor module 23 is not limited to the above examples. The modules mentioned above can establish stereoscopic images and depth diagrams for output by using image processing of stereoscopic or 2D images. The rear viewer image capturing sensor module 23 can also be one of a time-of-flight camera apparatus, a stereoscopic camera apparatus, and a web camera apparatus with an image-depth-generating function.
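For the stereo camera apparatus without a time-of-flight sensor, the text only says a depth diagram is established "by using image processing". One standard way to do this (an assumption here, not spelled out in the patent) is the pinhole stereo relation, which recovers depth from the disparity between the two views:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation Z = f * B / d, where d is the
    horizontal disparity in pixels, f the focal length in pixels, and
    B the baseline between the two cameras in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline, 35 px disparity
z = depth_from_disparity(35, 700.0, 0.06)   # depth of 1.2 meters
```

Applying this per pixel over a disparity map yields the depth diagram that the image processing module consumes.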
[0024] The image processing module 25 is used for executing an image processing step. The image processing step comprises identifying stereoscopic images and the depth diagrams corresponding to them, and establishing an image data set comprising a stereoscopic image and its corresponding depth diagram. The image processing module 25 processes the focal point coordinates (x1, y1) of viewers, maps the focal point coordinates (x1, y1) of viewers to display coordinates (x2, y2) relative to the display module 27, and executes auto-focusing gain and correction procedures to reflect the display coordinates (x2, y2) on the display module 27.
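The mapping from viewer focal coordinates (x1, y1) to display coordinates (x2, y2), together with the extrapolation of gaze samples mentioned later in the description, might look like the following sketch; the linear extrapolation model and the normalized gaze space are assumptions, since the patent does not specify them.

```python
def extrapolate_gaze(samples):
    """Linearly extrapolate the next focal point (x1, y1) from the two
    most recent eye-tracking samples, given in normalized [0, 1]
    coordinates."""
    (xa, ya), (xb, yb) = samples[-2], samples[-1]
    return (2 * xb - xa, 2 * yb - ya)

def to_display_coords(x1, y1, width, height):
    """Translate normalized focal coordinates into display pixel
    coordinates (x2, y2), clamped to the panel bounds."""
    x2 = min(max(int(x1 * (width - 1)), 0), width - 1)
    y2 = min(max(int(y1 * (height - 1)), 0), height - 1)
    return (x2, y2)

# Two recent gaze samples drifting rightward across the panel
samples = [(0.40, 0.50), (0.45, 0.50)]
x1, y1 = extrapolate_gaze(samples)
x2, y2 = to_display_coords(x1, y1, 1920, 1080)
```

The extrapolation step lets the auto-focusing correction anticipate where the viewer's focus will land rather than lag behind it.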
[0025] After processing by the image processing module 25, the focused and corrected three-dimensional (3D) stereoscopic images reflect the display coordinates (x2, y2) and are transmitted to the display module 27, which can display 3D stereoscopic images. The transmitted 3D stereoscopic images are displayed to viewers with specific focusing of image sections.
[0026] The above disclosed three-dimensional (3D) auto-focusing display system 2 can display stereoscopic content with 3D stereoscopic auto-focusing characteristics without any need for glasses. In detail, the 3D auto-focusing display system 2 comprises a 3D auto-stereoscopic display module 27; a front viewer image capturing sensor module 21 (or eye-tracking camera) used for direct execution of the eye-tracking function to obtain the focal point coordinates (x1, y1) of viewers; and a rear viewer image capturing sensor module 23 (or stereoscopic depth camera) used for capturing stereoscopic images and/or capturing 2D images along with a depth diagram of an image. The system also comprises a plurality of image processing modules 25, used for forming, gaining, and outputting three-dimensional (3D) stereoscopic images for display. The 3D stereoscopic images are formed by using 2D images and the depth diagram information corresponding to the 2D images. Gain of the 3D stereoscopic images is processed by executing a number of image analysis and filtering algorithms on the 3D stereoscopic images, and by correcting them by use of image data and depth diagram data. Another image processing module 25 extrapolates the focal point coordinates (x1, y1) of viewers, thereby executing auto-focusing, and translates the focal point coordinates (x1, y1) of viewers into display coordinates (x2, y2) (also named second coordinates) with respect to the display module 27. Then, segments of the image are confirmed in order to reflect the display coordinates (x2, y2) and to form a suitably gained stereoscopic image, confirming that the displayed stereoscopic image is in focus.
The last image processing module 25 receives and gains the stereoscopic images, and then executes an RGB sub-pixel algorithm to output the stereoscopic images to the display module 27.
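The patent does not detail the RGB sub-pixel algorithm, but glasses-free autostereoscopic panels commonly interleave the views at sub-pixel (R, G, B) granularity, so that a lenticular lens or parallax barrier routes alternate sub-pixel columns to different eyes. A toy two-view interleaver under that assumed scheme:

```python
def subpixel_interleave(left, right):
    """Toy two-view sub-pixel mapping: each R, G, B sub-pixel is taken
    from the left or right view in alternation by sub-pixel column
    index. Images are rows of (R, G, B) tuples of equal size."""
    out = []
    for lrow, rrow in zip(left, right):
        row = []
        for x, (lp, rp) in enumerate(zip(lrow, rrow)):
            # sub-pixel column index = 3*x + channel; even columns come
            # from the left view, odd columns from the right view
            row.append(tuple(lp[c] if (3 * x + c) % 2 == 0 else rp[c]
                             for c in range(3)))
        out.append(row)
    return out

# One-row example: all-white left view, all-black right view
left  = [[(255, 255, 255), (255, 255, 255)]]
right = [[(0, 0, 0), (0, 0, 0)]]
mixed = subpixel_interleave(left, right)
```

A real panel's mapping depends on its lens slant and view count, so the alternating pattern here is only illustrative of the sub-pixel pattern (RGB pattern) formation described above.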
[0027] Based on the above, further referring to
[0028] Referring to
[0030] Further referring to
[0031] Although only the preferred embodiments of the present invention are described above, the practicing scope of the present invention is not limited to the disclosed embodiments. Any simple equivalent changes or adjustments made to the present invention based on the following claims and the content of the above description may still be covered within the scope of the following claims.