Dynamic convergence adjustment in augmented reality headsets
11736674 · 2023-08-22
Inventors
- Yu-Jen Lin (Orlando, FL, US)
- Patrick John Goergen (Orlando, FL, US)
- Martin Evan Graham (Clermont, FL, US)
CPC classification
H04N13/383
ELECTRICITY
H04N13/117
ELECTRICITY
H04N13/371
ELECTRICITY
G02B2027/0187
PHYSICS
International classification
G02B27/00
PHYSICS
G06T19/00
PHYSICS
Abstract
Systems and methods are disclosed that dynamically and laterally shift each virtual object displayed by an augmented reality headset by a respective distance as the respective virtual object is displayed to change virtual depth from a first virtual depth to a second virtual depth. The respective distance may be determined based on a lateral distance between a first convergence vector of a user's eye with the respective virtual object at the first virtual depth and a second convergence vector of the user's eye with the respective virtual object at the second virtual depth along the display, and may be based on an interpupillary distance. In this manner, display of the virtual object may be adjusted such that the gazes of the user's eyes may converge where the virtual object appears to be.
Claims
1. An augmented reality system comprising: an augmented reality headset configured to display virtual imagery; and one or more processors configured to: generate a first virtual image for display via the augmented reality headset, wherein the first virtual image comprises a virtual object at a first virtual depth; receive input image data comprising an indication that the virtual object is to be displayed as moving from the first virtual depth to a second virtual depth; determine a lateral adjustment to be applied to the virtual object based on an interpupillary distance of a user of the augmented reality headset and the indication that the virtual object is to be displayed as moving from the first virtual depth to the second virtual depth; and generate a second virtual image for display via the augmented reality headset based on the input image data, wherein the second virtual image comprises the virtual object at the second virtual depth and with the lateral adjustment applied to the virtual object.
2. The augmented reality system of claim 1, wherein the one or more processors are configured to: determine a first gaze line associated with the virtual object at the first virtual depth based on the interpupillary distance; and determine a second gaze line associated with the virtual object at the second virtual depth based on the interpupillary distance, wherein the lateral adjustment is based on the first gaze line and the second gaze line.
3. The augmented reality system of claim 1, wherein the first virtual depth and the second virtual depth comprise virtual depths of the virtual object in a simulated augmented reality environment or in a simulated virtual reality environment.
4. The augmented reality system of claim 1, wherein the augmented reality headset comprises a plurality of displays configured to display the first virtual image and the second virtual image.
5. The augmented reality system of claim 4, wherein the one or more processors are configured to determine the lateral adjustment along a display line passing through the plurality of displays.
6. The augmented reality system of claim 4, wherein the one or more processors are configured to determine the lateral adjustment based on a distance between a pupil of the user and a display of the plurality of displays.
7. The augmented reality system of claim 4, wherein the plurality of displays comprise semi-transparent displays configured to overlay the first virtual image and the second virtual image on a real world environment.
8. The augmented reality system of claim 1, wherein the augmented reality headset comprises a pupil tracking sensor configured to detect and provide an indication of a pupil position of the user.
9. The augmented reality system of claim 8, wherein the one or more processors are configured to determine the interpupillary distance based on the indication of the pupil position received from the pupil tracking sensor.
10. The augmented reality system of claim 1, wherein the lateral adjustment comprises a lateral shift of a center of the virtual object.
11. A tangible, non-transitory, computer-readable medium, comprising instructions for adjusting display of a virtual object that, when executed by one or more processors, cause the one or more processors to: generate a first virtual image for display via an augmented reality headset, wherein the first virtual image comprises the virtual object at a first virtual depth; receive an indication that the virtual object is to be displayed as moving from the first virtual depth to a second virtual depth; determine a lateral adjustment to be applied to the virtual object based on an interpupillary distance of a user of the augmented reality headset and the indication that the virtual object is to be displayed as moving from the first virtual depth to the second virtual depth; and generate a second virtual image for display via the augmented reality headset, wherein the second virtual image comprises the virtual object at the second virtual depth and with the lateral adjustment applied to the virtual object.
12. The tangible, non-transitory, computer-readable medium of claim 11, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: divide half of the interpupillary distance by a first virtual distance between a center point between pupils of the user and a reference point of the virtual object at the first virtual depth to determine a first quotient; and multiply the first quotient by a distance between a pupil of the pupils of the user and a display of the augmented reality headset to determine a first lateral pupil distance.
13. The tangible, non-transitory, computer-readable medium of claim 12, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: divide half of the interpupillary distance by a second virtual distance between the center point between the pupils and the reference point of the virtual object at the second virtual depth to determine a second quotient; and multiply the second quotient by an additional distance between another pupil of the pupils of the user and another display of the augmented reality headset to determine a second lateral pupil distance.
14. The tangible, non-transitory, computer-readable medium of claim 13, wherein the lateral adjustment comprises a difference between the first lateral pupil distance and the second lateral pupil distance.
15. The tangible, non-transitory, computer-readable medium of claim 11, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: receive input image data comprising the virtual object; and generate the first virtual image based on the input image data.
16. A method for adjusting display of virtual imagery comprising: generating a first virtual image for display via an augmented reality headset, wherein the first virtual image comprises a virtual object at a first virtual depth; receiving input image data comprising an indication that the virtual object is to be displayed as moving from the first virtual depth to a second virtual depth; determining a lateral adjustment to be applied to the virtual object based on an interpupillary distance of a user of the augmented reality headset and the indication that the virtual object is to be displayed as moving from the first virtual depth to the second virtual depth; and generating a second virtual image for display via the augmented reality headset based on the input image data, wherein the second virtual image comprises the virtual object at the second virtual depth and with the lateral adjustment applied to the virtual object.
17. The method of claim 16, comprising: determining a first gaze line associated with the virtual object at the first virtual depth based on the interpupillary distance; determining a second gaze line associated with the virtual object at the second virtual depth based on the interpupillary distance; and determining the lateral adjustment based on the first gaze line and the second gaze line.
18. The method of claim 16, wherein the lateral adjustment comprises a lateral shift of a center of the virtual object.
19. The method of claim 16, comprising determining the lateral adjustment based on a distance between a pupil of the user and a display of the augmented reality headset.
20. The method of claim 16, comprising determining the interpupillary distance of the user based on one or more indications of pupil positions of the user.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings, in which like characters represent like parts throughout the drawings.
DETAILED DESCRIPTION
(10) In the real world, when a person views an object directly in front of them, the person simultaneously moves their eyes in opposite directions toward one another such that the gazes of the eyes converge on the object or the pupil of each eye is in line with the object (a process referred to as vergence), and changes the optical power of their eyes to maintain a clear image of or focus on the object (a process referred to as accommodation). As such, a person is used to pointing their gaze at the same fixed point at which they are simultaneously focusing their eyes to maintain a clear image. If the person views the object as it moves closer, the gazes of the eyes converge further together, and the optical power of the eyes changes to maintain a clear image of the object. If the person views the object as it moves further away, the gazes of the eyes diverge, and the optical power of the eyes changes to maintain a clear image of the object. An augmented reality headset typically uses a display that simulates a depth of field. In particular, the display may be divided into a right display for the right eye to view and a left display for the left eye to view. Assuming the displays are generally rectangular, the augmented reality headset may display a virtual image having a virtual object directly in front of the user by displaying the virtual image having the virtual object on each of the right and left displays (e.g., a right virtual image having a right virtual object and a left virtual image having a left virtual object), with respective reference points (e.g., centers or approximate centers) of the respective virtual objects closer to the inside edge of each display than the outside edge. Moreover, the respective reference points of the virtual objects may be equal distances away from the inside edges of each display. This is because, when viewing a real world object, the gazes of a person's eyes converge on the object the person is viewing.
(11) To make the virtual object appear to be closer to the user, the augmented reality headset may enlarge the virtual objects on the displays, while maintaining the equal distances from the respective reference points of the virtual objects to the inside edges of each display. To make the virtual object appear to be further from the user, the augmented reality headset may shrink the virtual objects on the displays, while maintaining the equal distances from the respective reference points of the virtual objects to the inside edges of each display. However, it is now recognized that, because the respective reference points of the virtual objects maintain equal distances to the inside edges of each display when appearing to move closer and further from the user, the point at which a user's eyes converge may not be where the virtual object appears to be. That is, the point at which a user's eyes converge may be in front of or behind where the virtual object appears to be. This may cause a blurring or double image effect when viewing the virtual object, resulting in a negative user experience.
(12) At the same time, the user's focus may be directed at where the virtual object appears to be. As such, a user may point their gaze at a different point than where they focus their eyes to maintain a clear image. This may create a vergence-accommodation conflict, which may lead to discomfort, fatigue, persisting headaches, and/or nausea.
(13) In accordance with present embodiments, a display of an augmented reality headset may present (display) virtual objects. Reference points of such objects (e.g., geometric center points along a particular dimension of the objects) may be utilized to describe operational features of present embodiments. In particular, distances between such reference points and features of an augmented reality headset are controlled in accordance with present embodiments to improve user experiences. For example, instead of maintaining a distance from centers of virtual objects to an inside edge of a display of an augmented reality headset when the virtual objects are presented as changing from a first virtual depth to a second virtual depth, present embodiments dynamically and laterally shift each virtual object by a respective distance as the respective virtual object is presented as changing from the first virtual depth to the second virtual depth. The respective distance may be dynamically determined based on a lateral distance, along the display, between a first convergence vector of a user's eye with the respective virtual object at the first virtual depth and a second convergence vector of the user's eye with the respective virtual object at the second virtual depth, and may be based on an interpupillary distance. In this manner, display of the virtual object may be adjusted such that the gazes of the user's eyes converge where the virtual object appears to be. As such, the user may point their gaze at the same point as where they focus their eyes to maintain a clear image. Thus, the presently disclosed systems and methods may reduce or eliminate the vergence-accommodation conflict when displaying a change in virtual depth of a virtual object, reducing or avoiding possible blurring or double image effects, discomfort, fatigue, persistent headaches, and/or nausea, and resulting in a better user experience.
(14) While the present disclosure discusses the use of augmented reality and augmented reality headsets, it should be understood that the disclosed techniques may also apply to virtual reality, mixed reality, or any other suitable interactive computer-generated experience taking place within a simulated environment. Moreover, use of the term “depth” with reference to a virtual object should be understood to refer to a virtual depth of the virtual object. That is, the terms “depth” and “virtual depth” refer to a depth that the virtual object appears to be located or disposed at (e.g., from the user's perspective) based on viewing the virtual object through an augmented reality headset.
(18) The convergence adjustment system 40 may also include an interpupillary distance determination engine 48 that dynamically determines an interpupillary distance of the user 10. The interpupillary distance may be a distance between the user's pupils. In some embodiments, the interpupillary distance determination engine 48 may receive a signal from the pupil tracking sensor 26 of the augmented reality headset 12 indicative of the interpupillary distance. The interpupillary distance determination engine 48 may then determine the interpupillary distance based on the received signal.
(19) In additional or alternative embodiments, the interpupillary distance determination engine 48 may estimate the interpupillary distance based on a calibration process. That is, when the user 10 first equips the augmented reality headset 12, the controller 42 of the convergence adjustment system 40 may perform the calibration process. The calibration process may include showing a number of virtual objects at different virtual depths, and prompting the user 10 to respond when the user 10 sees a single image versus a double image of a respective virtual object. The user's responses corresponding to seeing the single image may be used to estimate positions of the user's eyes by triangulating the estimated positions of the user's eyes with the different virtual depths at which the virtual objects are displayed. The interpupillary distance determination engine 48 may determine a set of interpupillary distances at the different virtual depths based on the estimated positions of the user's eyes. As such, the interpupillary distance determinations may be saved, and the interpupillary distance determination engine 48 may perform regression analysis or any other suitable form of estimation analysis to generate a mathematical model or expression that predicts the interpupillary distance depending on a virtual depth of a virtual object based on the set of saved interpupillary distance determinations. The interpupillary distance determination engine 48 may dynamically determine or estimate the interpupillary distance of the user 10, as the interpupillary distance may change as the user 10 views different objects (virtual or real). As such, it may be useful to update the interpupillary distance of the user 10 constantly, periodically, or at certain times or points of interest (e.g., when a different virtual object is displayed or a real object comes into view). It should be understood that the term “engine,” as used in the present disclosure, may include hardware (such as circuitry), software (such as instructions stored in the memory device 46 for execution by the processor 44), or a combination of the two. For example, the interpupillary distance determination engine 48 may include pupil tracking sensors 26 and circuitry coupled to the pupil tracking sensors 26 that receives pupil tracking information from the pupil tracking sensors 26 and determines the interpupillary distance of the user 10 based on the pupil tracking information.
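As a concrete illustration of the calibration-based estimation described above, the following is a minimal sketch, not the patent's implementation, of fitting a model to saved interpupillary distance determinations; the sample values, function names, and the choice of a fit that is linear in inverse depth are all assumptions for illustration:

```python
import numpy as np

# Hypothetical calibration samples: virtual depth (meters) at which a test
# object was shown, and the interpupillary distance (millimeters) estimated
# from the user's "single image" responses at that depth.
calib_depths_m = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
calib_ipds_mm = np.array([59.8, 61.5, 62.6, 63.2, 63.5])

# Fit a simple model of IPD as a function of inverse depth (vergence demand),
# an assumed model form that tends to be closer to linear than IPD vs. depth.
coeffs = np.polyfit(1.0 / calib_depths_m, calib_ipds_mm, deg=1)

def estimate_ipd_mm(virtual_depth_m: float) -> float:
    """Predict the effective IPD for an object at the given virtual depth."""
    return float(np.polyval(coeffs, 1.0 / virtual_depth_m))

print(estimate_ipd_mm(1.5))  # e.g., ~62 mm
```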
(20) The convergence adjustment system 40 may further include a display adjustment engine 50 that adjusts display of a virtual object and/or provides an adjustment to the display of the virtual object based on an interpupillary distance of the user 10. In particular, the display adjustment engine 50 may receive input image data 52, which may include one or more virtual objects 54. Each virtual object 54 may be displayed at a respective virtual depth. The display adjustment engine 50 may also receive the interpupillary distance as determined by the interpupillary distance determination engine 48. The display adjustment engine 50 may then adjust display of each virtual object 54 based on the interpupillary distance.
(21) In some cases, the convergence adjustment system 40 may be part of the augmented reality headset 12. In additional or alternative embodiments, the convergence adjustment system 40 may be external to the augmented reality headset 12, and communicate with the augmented reality headset 12 via any suitable communication network and/or protocol. For example, each of the augmented reality headset 12 and the convergence adjustment system 40 may include a communication interface, and the communication interfaces may connect to a communication network. The communication network may be wired and/or wireless, such as a mobile network, WiFi, LAN, WAN, Internet, and/or the like, and enable the augmented reality headset 12 and the convergence adjustment system 40 to communicate with one another.
(23) The interpupillary distance between the user's pupils 72, 74 is indicated along an interpupillary line 76 passing through the user's pupils 72, 74 as IPD. A center point C between the user's pupils 72, 74 is separated from a reference point (e.g., the center) 71 of the virtual object 54 at a first position 68 by a first virtual distance D_OBJ1, and either of the user's pupils 72, 74 is separated from a display line 80 passing through the displays 22, 24 by a display distance D_DISP. A first gaze line or convergence vector 78 between either of the user's pupils 72, 74 and the reference point 71 makes an angle θ_1 with a center line 70 passing through the center point C. A distance between a display distance line 82 and the gaze line 78 along the display line 80 is indicated as X_1 (which may be referred to as a first lateral pupil distance to view the virtual object 54 at the position 68), and may be determined based on the rule of similar triangles. In particular, X_1 may be determined using the equation below:
$$X_1 = \frac{\mathrm{IPD}/2}{D_{\mathrm{OBJ1}}} \times D_{\mathrm{DISP}} \quad \text{(Equation 1)}$$
(26) As the virtual object 54 changes depth, the first gaze line or convergence vector 78 between either of the user's pupils 72, 74 and the reference point 71 of the virtual object 54 at the first virtual distance D_OBJ1, which makes the angle θ_1 with the center line 70, changes to a second gaze line or convergence vector 130 between either of the user's pupils 72, 74 and the reference point 132 of the virtual object 54 at the second virtual distance D_OBJ2, which makes an angle θ_2 with the center line 70. A distance between the display distance line 82 and the second gaze line 130 along the display line 80 is indicated as X_2 (which may be referred to as a second lateral pupil distance to view the virtual object 54 at the position 128), and may be determined based on the rule of similar triangles. In particular, X_2 may be determined using the equation below:
$$X_2 = \frac{\mathrm{IPD}/2}{D_{\mathrm{OBJ2}}} \times D_{\mathrm{DISP}} \quad \text{(Equation 2)}$$
(27) As such, the distance that the pupil moves at the display line 80 due to the change in depth of the virtual object 54 may be expressed as the difference between X_1 (the distance between the display distance line 82 and the gaze line 78 along the display line 80) and X_2 (the distance between the display distance line 82 and the second gaze line 130 along the display line 80), which may be referred to as X_DIFF and determined using the equation below:
$$X_{\mathrm{DIFF}} = \lvert X_1 - X_2 \rvert \quad \text{(Equation 3)}$$
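To make the arithmetic concrete, the following is a minimal sketch of Equations 1-3; the function names and numeric values are assumptions for illustration, not part of the patent:

```python
def lateral_pupil_distance(ipd: float, d_obj: float, d_disp: float) -> float:
    """Equations 1 and 2: distance from the display distance line to a gaze
    line along the display line, by the rule of similar triangles."""
    return ((ipd / 2.0) / d_obj) * d_disp

def lateral_shift(ipd: float, d_obj1: float, d_obj2: float, d_disp: float) -> float:
    """Equation 3: magnitude of the per-eye lateral shift X_DIFF."""
    x1 = lateral_pupil_distance(ipd, d_obj1, d_disp)
    x2 = lateral_pupil_distance(ipd, d_obj2, d_disp)
    return abs(x1 - x2)

# Assumed example values, all in millimeters: 63 mm IPD, an object receding
# from 1 m to 2 m, and a display plane 30 mm from the pupils.
print(lateral_shift(63.0, 1000.0, 2000.0, 30.0))  # 0.4725 mm per eye
```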
(28) Because the illustrated example of changing depth occurs along the center line 70, the geometry is symmetric with respect to the user's pupils 72, 74, and the same difference X_DIFF may be applied to the virtual object displayed on each of the displays 22, 24.
(30) To determine the distances Y_2 from the inside edges 106, 112 of each display 22, 24 to the reference points 144, 146 of the left and right virtual objects 100, 102, the controller 42 may determine the distance Y_1 from the reference points 104, 110 of the virtual objects 100, 102 to the inside edges 106, 112 of each display 22, 24. The controller 42 may also determine the difference X_DIFF between the distance X_1 between the display distance line 82 and the gaze line 78 along the display line 80, and the distance X_2 between the display distance line 82 and the second gaze line 130 along the display line 80. In particular, in this case, where the controller 42 makes the virtual object 54 seem further away than when the virtual object 54 was at the original position 68, the controller 42 may move each of the left and right virtual objects 100, 102 toward the outer edges 108, 114 of each display 22, 24. As such, the controller 42 may add the difference X_DIFF to the distance Y_1 to determine the distance Y_2. In cases where the controller 42 makes the virtual object 54 seem closer than when the virtual object 54 was at the original position 68, the controller 42 may move each of the left and right virtual objects 100, 102 toward the inner edges 106, 112 of each display 22, 24. As such, the controller 42 may subtract the difference X_DIFF from the distance Y_1 to determine the distance Y_2.
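The sign convention in this paragraph can be summarized in a short sketch; the function name and the receding flag are assumptions for illustration:

```python
def updated_edge_distance(y1: float, x_diff: float, receding: bool) -> float:
    """Compute Y_2, the new distance from a display's inside edge to the
    virtual object's reference point: shift toward the outer edge when the
    object recedes (Y_2 = Y_1 + X_DIFF), and toward the inner edge when it
    approaches (Y_2 = Y_1 - X_DIFF)."""
    return y1 + x_diff if receding else y1 - x_diff
```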
(31) Thus, the controller 42 may display the virtual objects 100, 102 shifted by the difference X_DIFF from their original distances Y_1 from the inside edges 106, 112 of each display 22, 24, to make the virtual object 54 appear to be at the second virtual position 128, at the convergence point of the gaze lines or convergence vectors 130 of the user's pupils 72, 74.
(32) The controller 42 may determine the difference X_DIFF between the distance between the display distance line 82 and the gaze line 78 corresponding to the closer depth along the display line 80 (e.g., X_1) and the distance between the display distance line 82 and the second gaze line 130 corresponding to the further depth along the display line 80 (e.g., X_2) for multiple virtual objects 54, and display changes in depths of the multiple virtual objects 54 based on each respective difference X_DIFF. Indeed, in some circumstances, if multiple virtual objects 54 at different depths change their respective depths and a respective difference X_DIFF is not determined and applied for each virtual object 54 (e.g., the same difference X_DIFF is applied to each virtual object 54), then the user 10 may experience a “jumping” effect of at least some of the multiple virtual objects 54 due to unnatural and unrealistic shifting. As such, the controller 42 may dynamically determine the difference X_DIFF for each virtual object 54 separately as the controller 42 receives an indication that the respective virtual object 54 is changing depth.
(33) Moreover, as illustrated, the controller 42 shrinks the virtual objects 100, 102 at the second positions 140, 142 to make the virtual objects 100, 102 seem further away than when the virtual objects 100, 102 were at their respective original positions 101, 103. In the case where the controller 42 makes the virtual objects 100, 102 appear to move from a further depth to a closer depth, the controller 42 may instead enlarge the virtual objects 100, 102 relative to when the virtual objects 100, 102 were at their respective original positions 101, 103.
(34) The determination of the lateral distance X_DIFF may depend on the center line 70 and the display line 80 being perpendicular to the interpupillary line 76, as it may be assumed that the user 10 primarily looks forward to view the virtual objects 54. That is, instead of moving their eyes to look at other objects, it may be assumed that the user 10 turns his or her head to look at the other objects. Therefore, all virtual objects at the same depth may laterally shift by the same lateral distance X_DIFF in the same direction. In cases where the user 10 is not assumed to primarily look forward to view the virtual objects 54, the controller 42 may apply a shifting deformation to the display of the virtual object 54, or progressively adjust it, based on and/or to compensate for the different focal lengths of the virtual objects 54.
(36) As illustrated, in process block 162, the processor 44 receives an indication that one or more displayed objects are to be displayed as moving from a first depth to a second depth. For example, the processor 44 may determine that the display 20 is displaying one or more virtual objects 54. The processor 44 may receive the input image data 52, which may include the one or more virtual objects 54 changing depth. As a result, the processor 44 may determine that the one or more virtual objects 54 are changing their respective depths from a respective first depth to a respective second depth. In additional or alternative embodiments, the processor 44 may receive one or more input signals (e.g., one or more change depth indication signals) that directly indicate that the one or more virtual objects 54 are changing their respective depths.
(37) In process block 164, the processor 44 determines an interpupillary distance of a user. For example, the processor 44 may receive pupil position information from the pupil tracking sensor 26 and determine the interpupillary distance based on the received pupil position information.
(38) In process block 166, the processor 44 determines a lateral distance between a first convergence vector associated with a displayed object at the first depth and a second convergence vector associated with the displayed object at the second depth along a display based on the interpupillary distance.
(39) The lateral distance X_DIFF is the difference between the first convergence vector 78 and the second convergence vector 130 along the display line 80. The processor 44 may determine the lateral distance X_DIFF by determining the distance X_1 between the display distance line 82 and the first convergence vector 78 along the display line 80. In particular, the processor 44 may determine X_1 by dividing half of the interpupillary distance (IPD/2) by the first virtual distance (D_OBJ1) between the center point C between the user's pupils 72, 74 and the reference point (e.g., the center) 71 of the virtual object 54 at the first position 68, and multiplying the result by the display distance D_DISP between either of the user's pupils 72, 74 and the display line 80, as expressed in Equation 1 above. The processor 44 may determine X_2 by dividing half of the interpupillary distance (IPD/2) by the second virtual distance (D_OBJ2) between the center point C between the user's pupils 72, 74 and the reference point (e.g., the center) 132 of the virtual object 54 at the second position 128, and multiplying the result by the display distance D_DISP between either of the user's pupils 72, 74 and the display line 80, as expressed in Equation 2 above. In some embodiments, the absolute value of the difference between X_1 and X_2 may be taken to ensure a positive value, as shown in Equation 3 above. The processor 44 may save the lateral distance X_DIFF in any suitable memory or storage device, such as the memory device 46.
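As a worked example with assumed values (illustrative only): for $\mathrm{IPD} = 64$ mm, $D_{\mathrm{OBJ1}} = 2000$ mm, $D_{\mathrm{OBJ2}} = 500$ mm, and $D_{\mathrm{DISP}} = 25$ mm,

$$X_1 = \frac{32}{2000} \times 25 = 0.40 \text{ mm}, \qquad X_2 = \frac{32}{500} \times 25 = 1.60 \text{ mm}, \qquad X_{\mathrm{DIFF}} = \lvert 0.40 - 1.60 \rvert = 1.20 \text{ mm},$$

so each displayed object would shift 1.20 mm toward the inner edge of its display as the object appears to approach the user.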
(40) In decision block 168, the processor 44 determines whether there is another displayed object to be displayed as moving from a respective first depth to a respective second depth. If so, the processor 44 repeats process block 166 to determine the lateral distance X_DIFF between a first convergence vector associated with the additional virtual object at the respective first depth and a second convergence vector associated with the additional virtual object at the respective second depth along the display 20 based on the interpupillary distance. As such, the processor 44 may dynamically determine the difference X_DIFF for each virtual object 54 separately, such that each virtual object 54 may correspond to a different lateral distance X_DIFF value.
(41) If the processor 44 determines that there is not another displayed object to be displayed as moving from a respective first depth to a respective second depth, the processor 44, in process block 170, displays each displayed object as moving from the respective first depth to the respective second depth based on a respective lateral distance. In particular, the processor 44 may shift the reference points 144, 146 of each virtual object 100, 102 displayed on the displays 22, 24 by the respective lateral distance X_DIFF.
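Putting process blocks 162-170 together, the following is a minimal end-to-end sketch under stated assumptions: the data structure, function names, object identifiers, and numeric values are hypothetical, and the interpupillary distance is taken as a fixed input rather than tracked per frame.

```python
from dataclasses import dataclass

def lateral_shift(ipd: float, d_obj1: float, d_obj2: float, d_disp: float) -> float:
    # Equations 1-3 combined: X_DIFF = |X_1 - X_2|.
    return abs((ipd / 2.0) / d_obj1 - (ipd / 2.0) / d_obj2) * d_disp

@dataclass
class DepthChange:
    d_obj1: float  # first virtual depth
    d_obj2: float  # second virtual depth
    y1: float      # current reference-point distance from the display's inside edge

def apply_depth_changes(changes: dict, ipd: float, d_disp: float) -> dict:
    """Blocks 162-170: compute each changing object's new inside-edge distance Y_2."""
    y2 = {}
    for obj_id, c in changes.items():
        x_diff = lateral_shift(ipd, c.d_obj1, c.d_obj2, d_disp)  # block 166, per object
        # Receding objects shift toward the outer edges (Y_2 = Y_1 + X_DIFF);
        # approaching objects shift toward the inner edges (Y_2 = Y_1 - X_DIFF).
        y2[obj_id] = c.y1 + x_diff if c.d_obj2 > c.d_obj1 else c.y1 - x_diff
    return y2

# Example with assumed values (millimeters): one object recedes, one approaches.
changes = {
    "orb": DepthChange(d_obj1=1000.0, d_obj2=2000.0, y1=12.0),
    "sign": DepthChange(d_obj1=3000.0, d_obj2=1500.0, y1=10.0),
}
print(apply_depth_changes(changes, ipd=63.0, d_disp=30.0))
# {'orb': 12.4725, 'sign': 9.685}
```

Each object receives its own X_DIFF from its own depth pair, consistent with paragraph (32)'s point that applying one shared shift to objects at different depths would produce a "jumping" effect.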
(42) The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).