METHOD FOR MITIGATING 3D CROSSTALK AND 3D DISPLAY
20230179752 · 2023-06-08
Assignee
Inventors
CPC classification
H04N13/378
ELECTRICITY
H04N13/305
ELECTRICITY
International classification
Abstract
The disclosure provides a method for mitigating 3D crosstalk and a 3D display. The method includes: detecting first and second eye positions of a user, and determining a viewing angle of the user and a rotation angle of a head of the user accordingly; estimating a first reference position and a first midpoint position between first and second eyes of the user based on the first and second eye positions of the user; obtaining a second reference position, and estimating a difference between the first and second reference positions; correcting the first midpoint position to a second midpoint position based on the rotation angle of the user and the difference; and determining a first pixel for projecting to the first eye and a second pixel for projecting to the second eye among the pixels of the 3D display based on the second midpoint position.
Claims
1. A method for mitigating 3D crosstalk, adapted for a 3D display, comprising: detecting a first eye position and a second eye position of a user, and determining a viewing angle of the user and a rotation angle of a head of the user according to the first eye position and the second eye position of the user; estimating a first reference position and a first midpoint position between a first eye and a second eye of the user based on the first eye position and the second eye position of the user; obtaining a second reference position, and estimating a difference between the first reference position and the second reference position; correcting the first midpoint position to a second midpoint position based on the rotation angle of the user and the difference; and determining at least one first pixel adapted to project a light to the first eye of the user and at least one second pixel adapted to project a light to the second eye of the user, among a plurality of pixels of the 3D display, based on the second midpoint position.
2. The method of claim 1, wherein obtaining the second reference position comprises: setting the second reference position as the first reference position in response to determining that the viewing angle of the user complies with a predetermined condition.
3. The method of claim 2, comprising: determining that the viewing angle of the user complies with the predetermined condition in response to determining that the viewing angle of the user is 90 degrees.
4. The method of claim 1, wherein obtaining the second reference position comprises: obtaining a first historical reference position corresponding to the viewing angle of the user compliant with a predetermined condition last time in response to determining that the viewing angle of the user does not comply with the predetermined condition, and taking the first historical reference position as the second reference position.
5. The method of claim 1, wherein an x-axis coordinate and a z-axis coordinate of the first eye position are respectively represented as x.sub.L and z.sub.L , an x-axis coordinate and a z-axis coordinate of the second eye position are respectively represented as x.sub.R and z.sub.R, and the rotation angle of the head of the user is represented as θ, and θ=tan.sup.−1 [(z.sub.R−z.sub.L)/(x.sub.R−x.sub.L)].
6. The method of claim 1, wherein the rotation angle of the head of the user is represented as θ, and the viewing angle of the user is represented as δ, and δ=90°−θ.
7. The method of claim 1, wherein an x-axis coordinate and a z-axis coordinate of the first eye position are respectively represented as x.sub.L and z.sub.L, an x-axis coordinate and a z-axis coordinate of the second eye position are respectively represented as x.sub.R and z.sub.R, and the first midpoint position is represented as X.sub.mid, and X.sub.mid=(x.sub.R+x.sub.L)/2.
8. The method of claim 1, wherein the first midpoint position is represented as X.sub.mid, the difference is represented as Z.sub.diff, the second midpoint position is represented as X.sub.mod, and the rotation angle of the head of the user is represented as θ, and X.sub.mod=X.sub.mid+Z.sub.diff×tan(θ/2).
9. The method of claim 1, wherein a z-axis coordinate of the first eye position is represented as z.sub.L, a z-axis coordinate of the second eye position is represented as z.sub.R, and the first reference position is represented as Z.sub.δ, and Z.sub.δ=(z.sub.R+z.sub.L)/2.
10. The method of claim 1, wherein after determining the at least one first pixel adapted to project the light to the first eye of the user and the at least one second pixel adapted to project the light to the second eye of the user, among the plurality of pixels of the 3D display, based on the second midpoint position, the method further comprises: finding out at least one potential error pixel from the at least one first pixel according to the first eye position and the second eye position of the user; obtaining an angle difference between the viewing angle of the user and a reference angle, and determining an attenuation coefficient according to the angle difference, wherein the attenuation coefficient is negatively correlated with the angle difference; and reducing an intensity of a projection light of each potential error pixel based on the attenuation coefficient.
11. A 3D display, comprising: an eye tracking device, which detects a first eye position and a second eye position of a user; a processor, configured to: determine a viewing angle of the user and a rotation angle of a head of the user according to the first eye position and the second eye position; estimate a first reference position and a first midpoint position between a first eye and a second eye of the user based on the first eye position and the second eye position of the user; obtain a second reference position, and estimate a difference between the first reference position and the second reference position; correct the first midpoint position to a second midpoint position based on the rotation angle of the user and the difference; and determine at least one first pixel adapted to project a light to the first eye of the user and at least one second pixel adapted to project a light to the second eye of the user, among a plurality of pixels of the 3D display, based on the second midpoint position.
12. The 3D display of claim 11, wherein the processor performs: setting the second reference position as the first reference position in response to determining that the viewing angle of the user complies with a predetermined condition.
13. The 3D display of claim 12, wherein the processor performs: determining that the viewing angle of the user complies with the predetermined condition in response to determining that the viewing angle of the user is 90 degrees.
14. The 3D display of claim 11, wherein the processor performs obtaining a first historical reference position corresponding to the viewing angle of the user compliant with a predetermined condition last time in response to determining that the viewing angle of the user does not comply with the predetermined condition, and taking the first historical reference position as the second reference position.
15. The 3D display of claim 11, wherein an x-axis coordinate and a z-axis coordinate of the first eye position are respectively represented as x.sub.L and z.sub.L, an x-axis coordinate and a z-axis coordinate of the second eye position are respectively represented as x.sub.R and z.sub.R, and the rotation angle of the head of the user is represented as θ, and θ=tan.sup.−1 [(z.sub.R−z.sub.L)/(x.sub.R−x.sub.L)].
16. The 3D display of claim 11, wherein the rotation angle of the head of the user is represented as θ, and the viewing angle of the user is represented as δ, and δ=90°−θ.
17. The 3D display of claim 11, wherein an x-axis coordinate and a z-axis coordinate of the first eye position are respectively represented as x.sub.L and z.sub.L, an x-axis coordinate and a z-axis coordinate of the second eye position are respectively represented as x.sub.R and z.sub.R, and the first midpoint position is represented as X.sub.mid, and X.sub.mid=(x.sub.R+x.sub.L)/2.
18. The 3D display of claim 11, wherein the first midpoint position is represented as X.sub.mid, the difference is represented as Z.sub.diff, the second midpoint position is represented as X.sub.mod, and the rotation angle of the head of the user is represented as θ, and X.sub.mod=X.sub.mid+Z.sub.diff×tan(θ/2).
19. The 3D display of claim 11, wherein a z-axis coordinate of the first eye position is represented as z.sub.L, a z-axis coordinate of the second eye position is represented as z.sub.R, and the first reference position is represented as Z.sub.δ, and Z.sub.δ=(z.sub.R+z.sub.L)/2.
20. The 3D display of claim 11, wherein after determining the at least one first pixel adapted to project the light to the first eye of the user and the at least one second pixel adapted to project the light to the second eye of the user, among the plurality of pixels of the 3D display, based on the second midpoint position, the processor further performs: finding out at least one potential error pixel from the at least one first pixel according to the first eye position and the second eye position of the user; obtaining an angle difference between the viewing angle of the user and a reference angle, and determining an attenuation coefficient according to the angle difference, wherein the attenuation coefficient is negatively correlated with the angle difference; and reducing an intensity of a projection light of each potential error pixel based on the attenuation coefficient.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
DESCRIPTION OF THE EMBODIMENTS
[0020] Referring to
[0021] Taking
[0022] However, if the first midpoint position X.sub.mid can be corrected to a second midpoint position X.sub.mod through an appropriate mechanism, the light path R.sub.1 may be correctly assigned to correspond to the right eye, thereby mitigating the 3D crosstalk situation.
[0023] In view of this, the disclosure provides a method for mitigating 3D crosstalk, which is adapted to improve the above-mentioned technical issue.
[0024] Referring to
[0025] As shown in
[0026] In some embodiments, the 3D display 300 may have, for example, the aforementioned pixels corresponding to the left eye and the right eye, a 3D lens element, a 3D weaver, etc., but components of the 3D display 300 may not be limited thereto.
[0027] The processor 304 is coupled to the eye tracking device 302, and may be a general-purpose processor, a special-purpose processor, a traditional processor, a digital signal processor, multiple microprocessors, one or more microprocessors combined with a core of the digital signal processor, a controller, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), any other types of integrated circuits, a state machine, a processor based on an advanced RISC machine (ARM) and the like.
[0028] In the embodiment of the disclosure, the processor 304 may access specific modules and program codes to realize the method for mitigating 3D crosstalk provided by the disclosure, details of which are described as follows.
[0029] Referring to
[0030] First of all, in step S410, the first eye position and the second eye position of the user are detected by the eye tracking device 302. In one embodiment, an x-axis coordinate and a z-axis coordinate of the first eye position (such as a left eye position) may be respectively represented as x.sub.L and z.sub.L, and an x-axis coordinate and a z-axis coordinate of the second eye position (such as a right eye position) may be respectively represented as x.sub.R and z.sub.R.
[0031] Next, in step S420, a viewing angle δ of the user and a rotation angle θ of the user's head are determined by the processor 304 according to the first eye position and the second eye position. In one embodiment, the rotation angle θ is obtained based on a formula “θ=tan.sup.−1 [(z.sub.R−z.sub.L)/(x.sub.R−x.sub.L)]”, for example. Besides, the viewing angle δ is obtained based on a formula “δ=90°−θ”, for example.
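For illustration, the angle estimation of step S420 may be sketched in Python as follows; the function and parameter names are illustrative assumptions, not part of the disclosure, and the eye coordinates are assumed to share one consistent unit:

```python
import math

def head_angles(x_l, z_l, x_r, z_r):
    """Estimate the head rotation angle theta and the viewing angle delta,
    both in degrees, from the x/z coordinates of the two eye positions.
    Implements θ = tan⁻¹[(z_R − z_L)/(x_R − x_L)] and δ = 90° − θ."""
    theta = math.degrees(math.atan2(z_r - z_l, x_r - x_l))
    delta = 90.0 - theta
    return theta, delta

# When the user faces the display, both eyes lie at the same depth,
# so θ evaluates to 0° and δ to 90°, matching paragraph [0032].
theta, delta = head_angles(-3.2, 0.0, 3.2, 0.0)
```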
[0032] In the embodiment of the disclosure, when the user faces the 3D display 300, the rotation angle θ of the user's head may be set as, for example, 0 degrees, and the viewing angle δ may be correspondingly estimated to be, for example, 90 degrees.
[0033] Then, in step S430, a first reference position Z.sub.δ and the first midpoint position X.sub.mid between a first eye (such as the left eye) and a second eye (such as the right eye) of the user are estimated by the processor 304 according to the first eye position and the second eye position of the user.
[0034] In one embodiment, the processor 304 may select any point from the z-axis coordinate of the first eye position to the z-axis coordinate of the second eye position as the first reference position Z.sub.δ, for example. In one embodiment, the first reference position Z.sub.δ is obtained by the processor 304 based on, for example, a formula “Z.sub.δ=(z.sub.R+z.sub.L)/2”, but the disclosure is not limited thereto.
[0035] Moreover, the processor 304 may select any point from the x-axis coordinate of the first eye position to the x-axis coordinate of the second eye position as the first midpoint position X.sub.mid, for example. In one embodiment, the first midpoint position X.sub.mid is obtained by the processor 304 based on, for example, a formula “X.sub.mid=(x.sub.R+x.sub.L)/2”, but the disclosure is not limited thereto.
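One possible realization of step S430, taking the coordinate midpoints given by the two example formulas above, may be sketched as follows (names are illustrative assumptions):

```python
def first_reference_and_midpoint(x_l, z_l, x_r, z_r):
    """Compute the first reference position Z_δ and the first midpoint
    position X_mid from the left/right eye coordinates, using the
    midpoint formulas Z_δ = (z_R + z_L)/2 and X_mid = (x_R + x_L)/2."""
    z_ref = (z_r + z_l) / 2.0
    x_mid = (x_r + x_l) / 2.0
    return z_ref, x_mid
```

As the embodiment notes, any point between the two eyes could serve instead of the exact midpoint; the halves here are simply one concrete choice.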
[0036] In step S440, a second reference position Z.sub.90 is obtained by the processor 304, and a difference Z.sub.diff between the first reference position Z.sub.δ and the second reference position Z.sub.90 is estimated. In one embodiment, whether the viewing angle δ complies with a predetermined condition may be first determined by the processor 304 in a process of obtaining the second reference position Z.sub.90. If so, the second reference position Z.sub.90 is set as the first reference position Z.sub.δ by the processor 304; if not, a first historical reference position corresponding to the viewing angle of the user compliant with the predetermined condition last time may be obtained by the processor 304 as the second reference position.
[0037] In one embodiment, whether the viewing angle δ equals 90 degrees may, for example, be determined by the processor 304 in a process of determining whether the viewing angle δ complies with the predetermined condition. If so, the processor 304 determines that the viewing angle δ complies with the predetermined condition; otherwise, the processor 304 determines that the viewing angle δ does not comply with the predetermined condition, but the disclosure is not limited thereto. That is to say, the processor 304 may determine whether the user faces the 3D display 300; if so, the viewing angle δ is determined compliant with the predetermined condition, the second reference position Z.sub.90 is further set as the current first reference position Z.sub.δ, and then subsequent calculations are performed.
[0038] On the other hand, if the processor 304 determines that the viewing angle δ does not comply with the predetermined condition (for example, the user does not face the 3D display 300), then the processor 304 may take the first reference position obtained when the user faces the 3D display 300 last time as the first historical reference position, and further set the second reference position Z.sub.90 as the first historical reference position.
[0039] In one embodiment, it is assumed that the corresponding viewing angle δ may be estimated by the processor 304 according to the current first and second eye positions at different time points. Assuming that at a t−i time point, the current viewing angle (indicated as δ.sub.t−i) has been determined compliant with the predetermined condition by the processor 304 according to the current first and second eye positions of the user, then the first reference position (indicated as Z.sub.δ.sup.t−i) obtained at that moment may be taken as the second reference position Z.sub.90 by the processor 304. Next, assuming that the viewing angle δ.sub.t−i+1 obtained at a t−i+1 time point does not comply with the predetermined condition, then Z.sub.δ.sup.t−i may be adopted as the second reference position Z.sub.90 when step S440 corresponding to the t−i+1 time point is performed by the processor 304, and the difference (indicated as Z.sub.diff.sup.t−i+1) between the current first reference position (indicated as Z.sub.δ.sup.t−i+1) and the second reference position (i.e., Z.sub.δ.sup.t−i) is further estimated accordingly.
[0040] If none of the viewing angles obtained from a t−i+2 time point to the t−1 time point complies with the predetermined condition, the processor 304 may estimate the corresponding difference based on the above teachings.
[0041] Next, assuming that the viewing angle (indicated as δ.sub.t) obtained at a t time point complies with the predetermined condition again, the first reference position obtained currently (indicated as Z.sub.δ.sup.t) may be taken as the second reference position Z.sub.90 by the processor 304. Then, assuming that the viewing angle δ.sub.t+1 obtained at a t+1 time point does not comply with the predetermined condition, then Z.sub.δ.sup.t may be adopted as the second reference position Z.sub.90 when step S440 corresponding to the t+1 time point is performed by the processor 304, and the difference (indicated as Z.sub.diff.sup.t+1) between the current first reference position (indicated as Z.sub.δ.sup.t+1) and the second reference position (i.e., Z.sub.δ.sup.t) is further estimated accordingly.
[0042] In other embodiments, a designer may also set a determining mechanism adapted to determine whether the viewing angle δ complies with the predetermined condition based on needs of the designer. For example, the processor 304 may also determine that the viewing angle δ complies with the predetermined condition when the viewing angle δ falls within a certain range (for instance, from 90−k to 90+k, where k is an arbitrary value), but the disclosure is not limited thereto.
[0043] After the second reference position Z.sub.90 is obtained based on the above teachings, the difference Z.sub.diff may be obtained by the processor 304 based on, for example, “Z.sub.diff=Z.sub.δ−Z.sub.90”, but the disclosure is not limited thereto.
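The time-point bookkeeping of step S440 described in paragraphs [0036] to [0043] may be sketched as a small stateful helper; the `state` dictionary, the `tolerance_deg` parameter, and all names are assumptions introduced for illustration:

```python
def second_reference_and_diff(z_ref, delta, state, tolerance_deg=0.0):
    """Return (Z_90, Z_diff) for the current time point.
    While the predetermined condition holds (δ within tolerance of 90°),
    the historical reference stored in `state` tracks the current Z_δ;
    otherwise the last stored value is reused as Z_90, and
    Z_diff = Z_δ − Z_90."""
    if abs(delta - 90.0) <= tolerance_deg:
        state["z_90"] = z_ref          # condition met: record Z_δ as Z_90
    z_90 = state.get("z_90", z_ref)    # condition not met: reuse history
    return z_90, z_ref - z_90
```

With `tolerance_deg` greater than zero, this also covers the variant of paragraph [0042] in which any viewing angle from 90−k to 90+k degrees is deemed compliant.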
[0044] Then, in step S450, the first midpoint position X.sub.mid is corrected to the second midpoint position X.sub.mod by the processor 304 based on the rotation angle θ of the user and the difference Z.sub.diff. In one embodiment, the second midpoint position X.sub.mod may be obtained by the processor 304 based on a formula “X.sub.mod=X.sub.mid+Z.sub.diff×tan(θ/2)”, but the disclosure is not limited thereto.
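The correction of step S450 reduces to a one-line computation; the function name and the degree-valued input are illustrative assumptions:

```python
import math

def correct_midpoint(x_mid, z_diff, theta_deg):
    """Correct the first midpoint position to the second midpoint
    position per X_mod = X_mid + Z_diff × tan(θ/2), with θ in degrees."""
    return x_mid + z_diff * math.tan(math.radians(theta_deg) / 2.0)
```

When the head is not rotated (θ = 0°) or the user has not moved in depth (Z_diff = 0), the correction term vanishes and X_mod equals X_mid, consistent with the embodiments above.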
[0045] In step S460, at least one first pixel adapted to project a light to the first eye of the user (for example, the left eye) and at least one second pixel adapted to project a light to the second eye of the user (for example, the right eye), among the pixels of the 3D display 300, may be determined by the processor 304 based on the second midpoint position X.sub.mod.
[0046] In one embodiment, different from conventional techniques that regard the first midpoint position X.sub.mid as the reference point, the processor 304 may regard the second midpoint position X.sub.mod as the reference point instead, and accordingly further determine the pixels (i.e., the first pixels) for projecting a light to the first eye of the user and the pixels (i.e., the second pixels) for projecting a light to the second eye of the user, among the pixels of the 3D display 300.
[0047] In this way, as shown in
[0048] Furthermore, the user generally rotates his or her head with either the first eye or the second eye as the axis, and the displacement amounts of the left and right eyes relative to Z.sub.90 may be used to estimate whether the rotation axis of the head of the user is closer to the left eye or the right eye. The eye with a smaller z-displacement amount is regarded as the rotation axis: the z-displacement of the right eye is z.sub.diff,R=z.sub.R−z.sub.R,90, and the z-displacement of the left eye is z.sub.diff,L=z.sub.L−z.sub.L,90. If |z.sub.diff,R|<|z.sub.diff,L|, the rotation axis falls on the right eye; otherwise, the rotation axis falls on the left eye. The method of the disclosure is suitable for both situations. For making the above concepts more understandable, the following descriptions are supplemented with
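The rotation-axis estimate described in paragraph [0048] may be sketched as follows (function and parameter names are illustrative assumptions; `z_l_90` and `z_r_90` denote each eye's z-coordinate recorded when δ was 90 degrees):

```python
def rotation_axis(z_l, z_r, z_l_90, z_r_90):
    """Estimate which eye serves as the head's rotation axis: the eye
    whose |z-displacement| relative to its position at δ = 90° is
    smaller. Ties fall to the left eye, per the 'otherwise' branch."""
    z_diff_l = z_l - z_l_90   # z_diff,L = z_L − z_L,90
    z_diff_r = z_r - z_r_90   # z_diff,R = z_R − z_R,90
    return "right" if abs(z_diff_r) < abs(z_diff_l) else "left"
```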
[0049] Referring to
[0050] Further referring to
[0051] Referring to
[0052] In the embodiment of the disclosure, 3D crosstalk is mitigated by the method described in the above embodiments, but some pixels may inevitably project the light to the wrong eye when the viewing angle δ is too large.
[0053] Referring to
[0054] In view of this situation, the disclosure further provides a corresponding processing mechanism, which is able to attenuate the light projected by those pixels, thereby mitigating 3D crosstalk.
[0055] In one embodiment, the processor 304 finds out at least one potential error pixel from the aforementioned first pixels (corresponding to the first eye) according to the first eye position and the second eye position of the user. In the embodiment of disclosure, the potential error pixel is, for example, a pixel that may project the light to the wrong eye, like the pixel R.sub.k.
[0056] Typically, in a manufacturing process of the 3D display 300, the pixels used by the 3D display 300 to project lights and the corresponding light projection angles when the user is located at a certain position in front of the 3D display 300 may be learned through simulation, and the relative positions between the light projected by each pixel and the 3D lens element 102 may be known beforehand. In other words, when the eye positions of the user are known, which pixels project the light to the eye and the angles thereof may all be known beforehand through simulation.
[0057] Therefore, which pixels may result in the situation of the pixel R.sub.k as shown in
[0058] Next, the processor 304 obtains an angle difference between the viewing angle δ of the user and a reference angle and further determines an attenuation coefficient accordingly, and the attenuation coefficient may be negatively correlated with the above angle difference (i.e., the greater the angle difference is, the smaller the attenuation coefficient is, and vice versa). Then, the processor 304 reduces the intensity of a projection light of each potential error pixel based on the attenuation coefficient. In one embodiment, the attenuation coefficient is, for example, a value less than 1, and the processor 304 multiplies the intensity of the projection light of each potential error pixel by the attenuation coefficient to reduce the intensity of the projection light corresponding to each potential error pixel.
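The attenuation step may be sketched as follows. The linear falloff and the constant `k` are assumptions introduced for illustration; the disclosure only requires a coefficient less than 1 that is negatively correlated with the angle difference:

```python
def attenuate(intensity, delta, reference_angle=90.0, k=0.02):
    """Scale a potential error pixel's projection-light intensity by an
    attenuation coefficient in [0, 1] that shrinks as the viewing angle
    delta moves away from the reference angle (both in degrees)."""
    angle_diff = abs(delta - reference_angle)
    coeff = max(0.0, 1.0 - k * angle_diff)  # larger difference -> smaller coefficient
    return intensity * coeff
```

At the reference angle the coefficient is 1 and the intensity is unchanged; as δ departs from 90 degrees the projected light of each potential error pixel is progressively dimmed.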
[0059] In one embodiment, the aforementioned reference angle may be set as, for example, 90 degrees (i.e., the viewing angle with the lowest degree of crosstalk). In this case, the processor 304 obtains an angle difference between the viewing angle δ and 90 degrees, for example. As shown in
[0060] Since the intensity of the projection light of each potential error pixel has been reduced through the above-mentioned mechanism, even if the light projected by each potential error pixel enters the wrong eye, the eye is less affected. In this way, the user's experience of viewing the 3D display is improved accordingly.
[0061] In short, according to the embodiments of the disclosure, the first midpoint position is corrected to the second midpoint position based on the rotation angle of the head of the user and the difference between the first and the second reference positions, and then the second midpoint position is taken as the reference point to determine which pixels in the 3D display are adapted to project the light to the left eye of the user and which pixels in the 3D display are adapted to project the light to the right eye of the user. Compared with the conventional techniques that take the first midpoint position as the reference point, the embodiments of the disclosure are able to correspondingly mitigate 3D crosstalk, thereby improving the user's viewing experience of the 3D display.
[0062] In addition, when the viewing angle of the user is too large and causes some pixels to unavoidably project the light to the wrong eye, the embodiments of the disclosure are able to reduce the intensity of the projection light of these pixels, thereby reducing the interference of these pixels caused to the user.
[0063] Although the disclosure has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit and scope of the disclosure. Accordingly, the scope of the disclosure will be defined by the attached claims and not by the above detailed descriptions.