Method, computer program product and binocular headset controller
11650425 · 2023-05-16
Assignee
Inventors
- Geoffrey Cooper (Danderyd, SE)
- Rickard Lundahl (Danderyd, SE)
- Erik Lindén (Danderyd, SE)
- Maria Gordon (Danderyd, SE)
CPC classification
G02B2027/0198
PHYSICS
G06F3/011
PHYSICS
G02B27/0093
PHYSICS
International classification
G02B27/00
PHYSICS
G06F3/14
PHYSICS
Abstract
Computer-generated image data is presented on first and second displays of a binocular headset presuming that a user's left and right eyes are located at first and second positions relative to the first and second displays respectively. At least one updated version of the image data is presented, which is rendered presuming that at least one of the user's left and right eyes is located at a position different from the first and second positions respectively in at least one spatial dimension. In response thereto, a user-generated feedback signal is received expressing either: a quality measure of the updated version of the computer-generated image data relative to computer-generated image data presented previously; or a confirmation command. The steps of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal are repeated until the confirmation command is received. The first and second positions are defined based on the user-generated feedback signal.
Claims
1. A method performed in at least one processor, the method comprising: presenting computer-generated image data on first and second displays of a binocular headset, the computer-generated image data being rendered under a presumption that a user's left eye is located at a first position relative to the first display and the user's right eye is located at a second position relative to the second display, and the computer-generated image data comprising at least one graphical element shown on the first and the second displays respectively; presenting at least one updated version of the computer-generated image data that is rendered under the presumption that at least one of the user's left and right eyes is located at a position being different from the first and the second positions respectively in at least one spatial dimension; receiving a user-generated feedback signal comprising either: a quality measure of the updated version of the computer-generated image data relative to the computer-generated image data presented previously on the first and the second displays, or a confirmation command; iterating presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal until the user-generated feedback signal comprising the confirmation command is received, and defining the first and the second positions based on the user-generated feedback signal.
2. The method according to claim 1, comprising: presenting two or more updated versions of the computer-generated image data before receiving the user-generated feedback signal.
3. The method according to claim 1, wherein a subsequent presenting of the at least one updated version of the computer-generated image data is based on the quality measure comprised in the user-generated feedback signal such that the subsequent presenting is expected to result in a later received user-generated feedback signal comprising either (i) a further improved quality measure, (ii) the confirmation command, or (iii) a lower quality measure.
4. The method according to claim 1, further comprising: assigning an estimated left eye position for the user based on the latest first position presumed when rendering the computer-generated image data before receiving the user-generated feedback signal comprising the confirmation command; and assigning an estimated right eye position for the user based on the latest second position presumed when rendering the computer-generated image data before receiving the user-generated feedback signal comprising the confirmation command.
5. The method according to claim 4, comprising: iterating presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal until a user-generated feedback signal comprising the confirmation command is received for one eye of the user's left and right eyes before starting to iterate presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal for the other eye of the user's left and right eyes.
6. The method according to claim 4, wherein the computer-generated image data is presented for the user's left and right eyes (i) in a temporal parallel manner, or (ii) in a temporal interleaved manner, wherein: at least one iteration of presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for a first eye of the user's left and right eyes, thereafter at least one iteration of presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for a second eye of the user's left and right eyes, and then at least one iteration of presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for said first eye.
7. The method according to claim 1, wherein presenting the at least one updated version of the computer-generated image data comprises: presenting a graphical element at a position being different from the first and the second positions respectively in a first spatial dimension, and preferably presenting the at least one updated version of the computer-generated image data comprises: presenting a graphical element at a position being different from the first and the second positions respectively in a second spatial dimension being orthogonal to the first spatial dimension.
8. The method according to claim 1, wherein the step of presenting the computer-generated image data on the first and the second displays comprises: presenting a first graphical element at a first focal plane on the first and the second displays respectively, and presenting a second graphical element at a second focal plane on the first and the second displays respectively, and wherein the quality measure comprised in the user-generated feedback signal is configured to indicate when the presenting of the first and the second graphical elements at the first and the second focal planes respectively is perceived by the user as a change in position of the first and the second graphical elements.
9. The method according to claim 1, wherein the at least one graphical element has a rectilinear shape extending in two dimensions on each of the first and the second displays respectively, and preferably a number of graphical elements are presented as elements in at least one array, or as elements in a geometric symbol being mirror symmetric about at least two mutually orthogonal axes.
10. The method according to claim 1, wherein the at least one graphical element comprises a number of identical graphical elements distributed over an area.
11. The method according to claim 1, after having received the user-generated feedback signal comprising the confirmation command, the method further comprising: calculating an estimated interpupillary distance between the estimated left and right eye positions for the user as an absolute distance between the first and the second coordinates, the first coordinate expressing the first position of a pupil of the user's left eye relative to the first display and the second coordinate expressing the second position of a pupil of the user's right eye relative to the second display.
12. The method according to claim 11, wherein presenting the computer-generated image data on the first and the second displays comprises: presenting a two-dimensional pattern of graphical elements at a same first focal distance on the first and the second displays, the two-dimensional pattern being presented under a presumption that, for at least one of the user's left and right eyes, a center-pupil distance separates a position of an eye rotation center from a position of a pupil of the user's eye, the quality measure comprised in the user-generated feedback signal reflecting a degree of mismatch perceived by the user between the two-dimensional pattern presented on the first display and the two-dimensional pattern presented on the second display when the user focuses his/her gaze at a predefined point in the two-dimensional pattern; wherein in response to the quality measure, presenting the updated version of the computer-generated image data comprises: presenting the two-dimensional pattern of graphical elements under the presumption that the center-pupil distance is different from a previously assigned measure for this distance; and after having received the user-generated feedback signal comprising the confirmation command, the method further comprises: assigning an estimated center-pupil distance for the at least one of the user's left and right eyes to the center-pupil distance presumed latest before receiving the user-generated feedback signal comprising the confirmation command.
13. The method according to claim 12, wherein the computer-generated image data is presented for the user's left and right eyes in a temporal parallel manner.
14. The method according to claim 12, comprising presenting the computer-generated image data for the user's left and right eyes in a temporal interleaved manner, wherein: at least one iteration of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for a first eye of the user's left and right eyes, thereafter at least one iteration of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for a second eye of the user's left and right eyes, and then at least one iteration of presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal is completed for the first eye.
15. The method according to claim 12, wherein presenting the computer-generated image data on the first and the second displays comprises: presenting a two-dimensional pattern of graphical elements at a same second focal distance on the first and the second displays, which same second focal distance is different from the same first focal distance, the two-dimensional pattern being presented under a presumption that, for at least one of the user's left and right eyes, the assigned estimated center-pupil distance separates the position of the eye rotation center from the position of the pupil of the user's eye; wherein in response to the quality measure, presenting the updated version of the computer-generated image data comprises: presenting the two-dimensional pattern of graphical elements under the presumption that the center-pupil distance is different from a previously assigned estimated center-pupil distance; and after having received the user-generated feedback signal comprising the confirmation command, the method further comprises: assigning an enhanced estimated center-pupil distance for the at least one of the user's left and right eyes to the assigned center-pupil distance presumed latest before receiving the user-generated feedback signal comprising the confirmation command.
16. The method according to claim 1 further comprising rendering the at least one updated version of the computer-generated image data under the presumption that at least one of the user's left and right eyes is located at a position being different from the first and the second positions respectively in two or more spatial dimensions.
17. The method according to claim 1, wherein the quality measure reflected by the user-generated feedback signal expresses at least one of: a degree of misalignment between the computer-generated image data and the updated version thereof, and a degree of skewedness between the computer-generated image data and the updated version thereof.
18. A computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions being executable by a processor to cause the processor to: present computer-generated image data on first and second displays of a binocular headset, the computer-generated image data being rendered under a presumption that a user's left eye is located at a first position relative to the first display and the user's right eye is located at a second position relative to the second display, and the computer-generated image data comprising at least one graphical element shown on the first and the second displays respectively; present at least one updated version of the computer-generated image data that is rendered under the presumption that at least one of the user's left and right eyes is located at a position being different from the first and the second positions respectively in at least one spatial dimension; receive a user-generated feedback signal comprising either: a quality measure of the updated version of the computer-generated image data relative to the computer-generated image data presented previously on the first and the second displays, or a confirmation command; iterate presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal until the user-generated feedback signal comprising the confirmation command is received, and define the first and the second positions based on the user-generated feedback signal.
19. A binocular headset controller comprising at least one processor configured to: present computer-generated image data on first and second displays of a binocular headset, the computer-generated image data being rendered under a presumption that a user's left eye is located at a first position relative to the first display and the user's right eye is located at a second position relative to the second display, and the computer-generated image data comprising at least one graphical element shown on the first and the second displays respectively; present at least one updated version of the computer-generated image data that is rendered under the presumption that at least one of the user's left and right eyes is located at a position being different from the first and the second positions respectively in at least one spatial dimension; receive a user-generated feedback signal comprising either: a quality measure of the at least one updated version of the computer-generated image data relative to the computer-generated image data presented previously on the first and the second displays, or a confirmation command; iterate presenting the at least one updated version of the computer-generated image data and receiving the user-generated feedback signal until the user-generated feedback signal comprising the confirmation command is received, and define the first and the second positions based on the user-generated feedback signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention is now to be explained more closely by means of preferred embodiments, which are disclosed as examples, and with reference to the attached drawings.
DETAILED DESCRIPTION
(17) In any case, the binocular headset 100 has first and second displays 110 and 120 configured to present visual information to a user's U left and right eye respectively. The visual information, in turn, may be based on computer-generated image data as described below.
(19) Referring now to
(20) As will be discussed in further detail below, the computer-generated image data contains at least one graphical element that is shown on the first and second displays 110 and 120 respectively.
(21) The at least one processor 220 is further configured to present at least one updated version of the computer-generated image data D.sub.L and D.sub.R. The updated version is rendered under the presumption that at least one of the user's left and right eyes is located at a position being different from the first and second positions P.sub.LE, and P.sub.RE respectively in at least one spatial dimension x, y and/or z.
(22) The at least one processor 220 is also configured to receive a user-generated feedback signal s.sub.UFB, for example over a wireless interface as illustrated in
(23) The quality measure expresses how the user U experiences the quality of the updated version of the computer-generated image data D.sub.L and D.sub.R relative to computer-generated image data D.sub.L and D.sub.R presented previously on the first and second displays 110 and 120. More precisely, the quality measure may express a degree of misalignment between the computer-generated image data D.sub.L and D.sub.R and the updated version thereof. Alternatively, or additionally, the quality measure may express a degree of skewedness between the computer-generated image data D.sub.L and D.sub.R and the updated version thereof.
(24) The user-generated feedback signal s.sub.UFB, as well as any updates thereof, may be caused by user input produced in response to manipulation of a button, a key or a touch sensor, controlling a pointing device, interaction with a gesture interface, and/or interaction with a voice interface.
(25) The at least one processor 220 is configured to iterate the steps of presenting the updated version of the computer-generated image data D.sub.L and D.sub.R and receiving the user-generated feedback signal s.sub.UFB until the confirmation command is received as part of the user-generated feedback signal s.sub.UFB. The user U is instructed to produce the confirmation command when he/she experiences that the computer-generated image data D.sub.L and D.sub.R is optimal, or at least reaches a quality standard acceptable by the user.
(26) In some cases, it may be advantageous to present two or more updated versions of the computer-generated image data D.sub.L and D.sub.R before receiving the user-generated feedback signal s.sub.UFB. For example, the at least one processor 220 may repeatedly generate different versions of the computer-generated image data D.sub.L and D.sub.R. The user U then enters the confirmation command when he/she finds the quality of the image data acceptable.
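The present-and-evaluate iteration described in paragraphs (25) and (26) can be sketched as a simple loop. This is an illustrative sketch only; the names render_views, get_user_feedback, and propose_update are assumptions, not identifiers from the disclosure.

```python
# Sketch of the calibration loop: re-render under trial presumed eye
# positions until the user's feedback signal contains a confirmation
# command, then return the latest presumed positions.
def calibrate_eye_positions(render_views, get_user_feedback,
                            initial_left, initial_right, propose_update):
    left, right = initial_left, initial_right
    render_views(left, right)                 # initial presentation
    while True:
        feedback = get_user_feedback()        # quality measure or confirmation
        if feedback["type"] == "confirm":
            return left, right                # latest presumed positions kept
        # derive new presumed positions from the reported quality measure
        left, right = propose_update(left, right, feedback["quality"])
        render_views(left, right)             # present the updated version
```

The same loop structure covers the parallel, interleaved, and one-eye-at-a-time variants discussed below; only the update rule inside propose_update differs.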
(27) According to one embodiment of the invention, the computer-generated image data D.sub.L and D.sub.R is presented for the user's U left and right eyes in a temporal parallel manner, i.e. the user U is prompted to evaluate the perceived quality of both the sets of image data D.sub.L and D.sub.R in parallel.
(28) According to another embodiment of the invention, the computer-generated image data D.sub.L and D.sub.R is presented for the user's U left and right eyes in a temporal interleaved manner. This means that at least one iteration of presenting the updated version of the computer-generated image data D.sub.L and receiving the user-generated feedback signal s.sub.UFB is completed for one of the user's U eyes, say his/her left eye. Thereafter at least one iteration of presenting the updated version of the computer-generated image data D.sub.R and receiving the user-generated feedback signal s.sub.UFB is completed for the user's U other eye, say his/her right eye.
(29) Then, at least one iteration of presenting the updated version of the computer-generated image data D.sub.L and receiving the user-generated feedback signal s.sub.UFB is completed for the left eye. Naturally, after this, another round of iterations may follow in which at least one iteration of presenting the updated version of the computer-generated image data D.sub.R and receiving the user-generated feedback signal s.sub.UFB is completed for the right eye, and so on.
(30) In response to receiving the confirmation command, the at least one processor 220 is configured to define the first and second positions P.sub.LE and P.sub.RE. Preferably, these positions are assigned equal to the latest presumed eye positions before receiving the confirmation command. According to one embodiment of the invention, an estimated left eye position for the user U is assigned based on the latest first position P.sub.LE presumed when rendering the computer-generated image data D.sub.L before receiving the user-generated signal s.sub.UFB containing the confirmation command. Analogously, an estimated right eye position for the user U is assigned based on the latest second position P.sub.RE presumed when rendering the computer-generated image data D.sub.R before receiving the user-generated signal s.sub.UFB containing the confirmation command.
(31) Nevertheless, if the user U indicates an experienced image quality via the user-generated feedback signal s.sub.UFB, it is preferable if, in a subsequent presenting of the updated version of the computer-generated image data D.sub.L and D.sub.R, this data is based on the quality measure comprised in the user-generated feedback signal s.sub.UFB in such a way that the subsequent presenting is expected to result in a later received user-generated feedback signal s.sub.UFB comprising a further improved quality measure, or even the confirmation command. For example, if it has been found that reducing the horizontal distance x has led to gradually improved quality measures, a following update of the computer-generated image data D.sub.L and D.sub.R is rendered under the presumption of a somewhat further reduced horizontal distance x.
(32) Conversely, in some cases, it may be advantageous to, in a subsequent presenting of the updated version of the computer-generated image data D.sub.L and D.sub.R, base the rendering on the quality measure in such a way that the subsequent presenting is expected to result in a later received user-generated feedback signal s.sub.UFB comprising a lower quality measure. Namely, thereby it can be concluded which position P.sub.LE and/or P.sub.RE is optimal in one or more dimensions.
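The strategy of paragraphs (31) and (32) — continue stepping while the reported quality improves, and treat a received lower quality measure as evidence that the optimum has been bracketed — can be sketched as a one-dimensional search over, for example, the presumed horizontal position x. The function quality_of stands in for the user's reported quality measure; all names here are illustrative assumptions.

```python
# One-dimensional search over a presumed eye-position coordinate:
# keep stepping while quality improves; once a lower quality measure
# comes back, reverse direction with a halved step to home in.
def search_optimal_x(quality_of, x0, step=1.0, min_step=0.01):
    x, best_q = x0, quality_of(x0)
    while abs(step) > min_step:
        q = quality_of(x + step)
        if q > best_q:            # quality improved: keep going this way
            x, best_q = x + step, q
        else:                     # lower quality: optimum was bracketed
            step = -step / 2      # reverse and refine
    return x
```

In practice each call to quality_of corresponds to one present/feedback iteration with the user.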
(33) Moreover, according to embodiments of the invention, the order in which the optimal, or good enough, positions P.sub.LE and/or P.sub.RE are determined may be varied.
(34) For instance, one of the positions can be assigned before starting to find the other one. This may mean that the steps of presenting the updated version of the computer-generated image data D.sub.L and receiving the user-generated feedback signal s.sub.UFB are iterated until a user-generated feedback signal s.sub.UFB containing the confirmation command is received for one eye of the user's U eyes, say the left eye, before starting to iterate the steps of presenting the updated version of the computer-generated image data D.sub.R and receiving the user-generated feedback signal s.sub.UFB for the other eye, say the right eye.
(35) Alternatively, the positions P.sub.LE and P.sub.RE can be assigned in a temporal parallel manner. This means that the steps of presenting the updated version of the computer-generated image data D.sub.L and D.sub.R and receiving the user-generated feedback signal s.sub.UFB are iterated until a user-generated feedback signal s.sub.UFB containing the confirmation command is received for both of the user's U eyes.
(36) Of course, a hybrid approach may likewise be applied in which the computer-generated image data D.sub.L and D.sub.R is presented for the user's U left and right eyes in a temporal interleaved manner. Specifically, this may mean that at least one iteration of presenting the updated version of the computer-generated image data D.sub.R and receiving the user-generated feedback signal s.sub.UFB is completed for a first eye, say the right eye. Thereafter, at least one iteration of presenting the updated version of the computer-generated image data D.sub.L and receiving the user-generated feedback signal s.sub.UFB is completed for a second eye, say the left eye. Then, at least one iteration of presenting the updated version of the computer-generated image data D.sub.R and receiving the user-generated feedback signal s.sub.UFB is again completed for the first eye, i.e. here the right eye.
(37) Referring to
(38) Here, let us assume that the first cube 500 is rendered at a shorter distance and the second cube 501 is rendered at a longer distance, and the sizes and angular positions of the cubes 500 and 501 and said distances are such that the first cube 500 would be perceived to overlap the second cube 501 perfectly if the virtual camera positions P.sub.LE and P.sub.RE were correct for the user's U eyes.
(39) However, if one or both of the positions of the virtual cameras is/are incorrect in a horizontal direction x in relation to the respective display 110 and/or 120, the user will experience that there is a horizontal misalignment d.sub.x between the first and second cubes 500 and 501 as illustrated in
(40) Analogously, if at least one of the virtual cameras' positions is incorrect in a vertical direction y in relation to the respective display 110 and/or 120, the user will experience that there is a vertical misalignment d.sub.y between the virtual graphics objects 600 and 601 as illustrated in
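A minimal pinhole-projection sketch illustrates why an incorrect virtual camera position produces the misalignments d.sub.x and d.sub.y described in paragraphs (38)-(40): a camera offset shifts near and far objects by different screen-space amounts, so objects that should overlap no longer do. The function name and the focal_len scale factor are illustrative assumptions, not values from the disclosure.

```python
# Residual screen-space misalignment between two objects rendered at
# different depths when the virtual camera is displaced by delta_x
# from the true eye position (simple pinhole model).
def perceived_misalignment(delta_x, z_near, z_far, focal_len=1.0):
    shift_near = focal_len * delta_x / z_near   # projection shift, near object
    shift_far = focal_len * delta_x / z_far     # projection shift, far object
    return shift_near - shift_far               # perceived offset d_x
```

Note that the residual grows with the depth gap between the objects, which is why rendering elements at clearly different distances makes the camera-position error visible to the user.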
(41) Referring to
(42) In
(43) In
(44) According to one embodiment of the invention, a misalignment between the position of the eye and the virtual camera in the depth direction z in a multi-focal plane implementation is estimated and compensated for by applying the following procedure.
(45) The step of presenting the computer-generated image data D.sub.L and D.sub.R on the first and second displays 110 and 120 involves presenting a first graphical element 701 at a first focal plane FP1 on the first and second displays 110 and 120 respectively. A second graphical element 702 is presented at a second focal plane FP2 on the first and second displays 110 and 120 respectively. Here, the quality measure in the user-generated feedback signal s.sub.UFB is configured to indicate if the presenting of the first and second graphical elements 701 and 702 at the first and second focal planes FP1 and FP2 respectively is perceived by the user U as a change in position of the first and second graphical elements 701 and 702. For example, the quality measure may indicate a magnitude and/or a direction of any movement occurring. If the user U perceives no, or an acceptably small, movement he/she generates a feedback signal s.sub.UFB containing the confirmation command.
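A toy model of the multi-focal-plane test in paragraph (45): under a depth error delta_z in the presumed eye position, an off-axis element appears laterally shifted by different amounts on the two focal planes FP1 and FP2, which the user perceives as a change in position between the planes. The formula below is a simplified illustration under an assumed similar-triangles geometry, not the patent's model.

```python
# Lateral shift of an off-axis element (offset x_off) on a focal plane
# at distance focal_dist, given a depth error delta_z in the presumed
# eye position; simplified similar-triangles approximation.
def apparent_shift(delta_z, x_off, focal_dist):
    return x_off * delta_z / (focal_dist + delta_z)

# Perceived position change between the two focal planes; zero when the
# presumed depth is correct (delta_z == 0) or the element is on-axis.
def perceived_jump(delta_z, x_off, fp1, fp2):
    return apparent_shift(delta_z, x_off, fp1) - apparent_shift(delta_z, x_off, fp2)
```

This is consistent with the feedback rule above: the user confirms once the perceived jump between focal planes is zero or acceptably small.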
(48) If, however, the assumed distance d.sub.CP is assigned to an incorrect value, i.e. the assumed distance d.sub.CP is too long or too short, the user U will experience misalignments in the peripheral view-field. This is illustrated in
(49) According to one embodiment of the invention, the step of presenting the computer-generated image data D.sub.L and D.sub.R on the first and second displays 110 and 120 respectively therefore involves presenting a two-dimensional pattern 800 of graphical elements at a same first focal distance FP1 on the first and second displays 110 and 120 respectively. The two-dimensional pattern 800 is presented under a presumption that, for at least one of the user's U left and right eyes, a center-pupil distance d.sub.CP separates the position P.sub.ERC of the eye rotation center from the position P.sub.PC of a pupil of the user's U eye. Here, the quality measure in the user-generated feedback signal s.sub.UFB reflects a degree of mismatch perceived by the user U between the two-dimensional pattern 800 presented on the first display 110 and the two-dimensional pattern 800 presented on the second display 120 when the user U focuses his/her gaze at a predefined point GP in the two-dimensional pattern 800.
(50) In response to the quality measure, the step of presenting the updated version of the computer-generated image data D.sub.L and D.sub.R involves presenting the two-dimensional pattern 800 of graphical elements under the presumption that the center-pupil distance d.sub.CP is different from a previously assigned measure for this distance. Then, after having received the user-generated feedback signal s.sub.UFB containing the confirmation command, the method further involves assigning an estimated center-pupil distance d.sub.CP for the at least one of the user's U left and right eyes to the center-pupil distance d.sub.CP presumed latest before receiving the confirmation command.
(51) According to another embodiment of the invention, the step of presenting the computer-generated image data D.sub.L and D.sub.R on the first and second displays 110 and 120 respectively further involves presenting a two-dimensional pattern 800 of graphical elements, for example in the form of squares, at a same second focal distance FP2 on the first and second displays 110 and 120 respectively. The same second focal distance FP2 is here different from the same first focal distance FP1. The two-dimensional pattern 800 is presented under a presumption that, for at least one of the user's U left and right eyes, the above assigned estimated center-pupil distance d.sub.CP separates the position P.sub.ERC of the eye rotation center from the position P.sub.PC of the pupil of the user's U eye.
(52) In response to the quality measure, the step of presenting the updated version of the computer-generated image data D.sub.L and D.sub.R on the displays 110 and 120 respectively involves presenting the two-dimensional pattern 800 of graphical elements under the presumption that the center-pupil distance d.sub.CP is different from a previously assigned estimated center-pupil distance d.sub.CP.
(53) Finally, after having received the user-generated feedback signal s.sub.UFB containing the confirmation command, the method involves assigning an enhanced estimated center-pupil distance d.sub.CP for the at least one of the user's U left and right eyes to the assigned center-pupil distance d.sub.CP presumed latest before receiving the confirmation command. Thereby, the user's center-pupil distance d.sub.CP can be determined very accurately in an efficient and straightforward manner.
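One simple way to realize the d.sub.CP refinement of paragraphs (49)-(53) is to scan a set of candidate center-pupil distances and keep the one for which the user reports the smallest perceived mismatch. This is a hedged sketch; render_pattern, get_mismatch, and the candidate list are assumed names, not part of the disclosure.

```python
# Scan candidate center-pupil distances; for each, present the 2-D
# pattern rendered under that presumption and record the user's
# perceived degree of mismatch. Return the least-mismatched candidate.
def estimate_d_cp(render_pattern, get_mismatch, candidates):
    best, best_mismatch = None, float("inf")
    for d_cp in candidates:
        render_pattern(d_cp)           # present pattern under trial d_CP
        mismatch = get_mismatch()      # user's reported mismatch measure
        if mismatch < best_mismatch:
            best, best_mismatch = d_cp, mismatch
    return best
```

The same scan can then be repeated at the second focal distance FP2, starting from the estimate obtained at FP1, to yield the enhanced estimate of paragraph (53).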
(54) To enable quick and convenient estimation of the key parameters for providing high-quality computer-graphics rendering in, for example, a VR system, AR system or mixed reality (MR) system, it is useful to render different graphics objects or whole scenes with different choices of virtual camera positions during the calibration process, and allow the user U to choose the graphics object/scene that has the best perceived quality, i.e. the one with the minimal degree of perceived misalignment and/or movement between different focus planes and view-points. Said key parameters comprise the spatial coordinates x.sub.LE, y.sub.LE, z.sub.LE and x.sub.RE, y.sub.RE, z.sub.RE for the first and second positions P.sub.LE and P.sub.RE respectively. Preferably, the distances d.sub.CP between the eyes' 400 rotation centers P.sub.ERC and the pupils P.sub.PC are also included in the key parameters, as well as an interpupillary distance d.sub.IP that will be described below with reference to
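Consistent with claim 11, the interpupillary distance can be computed as the absolute (Euclidean) distance between the two estimated pupil coordinates. The helper below is a hypothetical illustration of that calculation.

```python
import math

# Interpupillary distance as the absolute distance between the
# estimated left- and right-eye pupil coordinates (same unit as inputs).
def interpupillary_distance(p_left, p_right):
    return math.dist(p_left, p_right)
```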
(55) Different subsets of the aspects of the virtual camera position may be presented to the user U, and different choices for the different aspects may be offered to the user U. Typically, a combination of choices made by the user U for the different aspects of the virtual camera position provides a desired/adjusted/calibrated virtual camera position for the user U in question.
(56) A certain order of the presentation of the virtual graphics objects/scenes may be imposed. Moreover, one and the same adjusted value(s) may be assigned for both eyes in a common procedure, or the eyes may be calibrated separately by for example blacking out one of the displays 110 or 120 and only considering one eye at a time. Of course, optionally, the eyes may be calibrated separately by presenting two different virtual camera positions and varying the virtual-camera parameters for the two eyes independently.
(57) According to embodiments of the invention, many different types of virtual graphics objects may be used for the above-mentioned calibration purposes.
(58) One example of a basic virtual graphics object is a cube that is rendered with a certain virtual camera position. This basic virtual graphics object may be rendered at a centered position in the user's U field of view, and the user U can be offered different choices of the horizontal component x and the vertical component y for the virtual camera position. Further, the virtual graphics object can be rendered off center in the user's U field of view to offer the user U different choices of the depth component z for the virtual camera position. Namely, in general, offsets in the depth direction are more noticeable at off center positions for the virtual graphics objects than at center positions.
(59) A number of graphical elements, for example in the form of cubes, may be rendered using different virtual camera positions, where the graphical elements are presented in an organized fashion on the displays 110 and 120. Thereby, multiple choices for virtual camera positions may be presented to the user U at once, thus assisting the user U to quickly identify a virtual camera position that he/she finds the best in comparison to the other alternatives.
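One way to realize this multiple-choice presentation is to generate a set of candidate virtual camera positions around the current estimate and render one graphical element per candidate. The Python sketch below follows the scheme of the preceding paragraphs: the horizontal and vertical components are varied for a centered element, and the depth component for an off-center element, where depth offsets are more noticeable. The function name, step size and candidate count are all illustrative assumptions:

```python
def candidate_positions(current, step=1.0, n=1):
    """Generate candidate (x, y, z) virtual camera positions around the
    current estimate, varying one spatial component at a time.

    Returns a list of (component, position) pairs; 'x' and 'y' candidates
    would be shown via a centered graphical element, 'z' candidates via an
    off-center one. Units and step size are illustrative.
    """
    x, y, z = current
    offsets = [step * k for k in range(-n, n + 1)]
    candidates = []
    for dx in offsets:  # horizontal choices (centered element)
        candidates.append(('x', (x + dx, y, z)))
    for dy in offsets:  # vertical choices (centered element)
        candidates.append(('y', (x, y + dy, z)))
    for dz in offsets:  # depth choices (off-center element)
        candidates.append(('z', (x, y, z + dz)))
    return candidates
```

The user's pick among the simultaneously displayed elements then selects one candidate, which becomes the current estimate for the next round.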
(60) The horizontal position x.sub.LE and/or x.sub.RE and the vertical position y.sub.LE and/or y.sub.RE may be determined independently from one another by for example first presenting a virtual graphics object extending in the horizontal direction, and then presenting a virtual graphics object extending in the vertical direction, or vice versa.
(61) According to one embodiment of the invention, the virtual graphics object is represented by at least one graphical element having a rectilinear shape extending in two dimensions on each of the first and second displays 110 and 120 respectively.
(62) According to one embodiment of the invention, the at least one graphical element contains a number of graphical elements each having a rectilinear shape and extending in two dimensions, for example as illustrated by the ring of cubes 1301 in
(63) In many cases, it is desirable if the at least one graphical element includes a number of graphical elements, which each has a rectilinear shape and extends in two dimensions. The method of determining the virtual camera position may thus involve presenting a number of graphical elements as elements in a geometric symbol as exemplified by the ring of cubes 1301 in
(64) However, the virtual graphics object does not need to fulfil any such geometric requirements. On the contrary, according to embodiments of the invention, the at least one graphical element may contain a number of, preferably identical, graphical elements of any shape that are distributed over an area on the displays 110 and 120, for instance as exemplified by the arrangements of circles 1401 and stars 1402 in
(65) Referring now to
(66)
(67) If, however, the interpupillary distance dip between the user's pupils P.sub.LEe and P.sub.REe is incorrect, a jumping effect in the depth direction will be experienced by the user U when shifting focus between the first focal planes FP1.sub.L and FP1.sub.R and the second focal planes FP2.sub.L and FP2.sub.R respectively, as illustrated in
(68) To correct such a misalignment of the virtual camera position, according to one embodiment of the invention, after having received the user-generated feedback signal s.sub.UFB containing the confirmation command, the method further involves calculating an estimated interpupillary distance dip between the estimated left and right eye positions for the user U. The interpupillary distance dip is calculated as an absolute distance between first and second coordinates, where the first coordinate expresses the first position P.sub.LE of a pupil P.sub.PC of the user's U left eye relative to the first display 110 and the second coordinate expresses the second position P.sub.RE of a pupil P.sub.PC of the user's U right eye relative to the second display 120.
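Assuming the two pupil coordinates have first been expressed in a common coordinate frame (they are nominally given relative to their respective displays, so the fixed display-to-display offset must be accounted for), the absolute distance is simply the Euclidean distance between the two points. A minimal sketch:

```python
import math


def interpupillary_distance(p_le, p_re):
    """Absolute distance between the estimated left and right pupil
    positions, each given as an (x, y, z) triple in a common frame."""
    return math.dist(p_le, p_re)
```

For example, with pupils at (-31.5, 0, 0) and (31.5, 0, 0) the estimate would be 63 (a typical adult interpupillary distance in millimetres).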
(69)
(70)
(71) In order to sum up, and with reference to the flow diagram in
(72) In a first step 1510 computer-generated image data is presented on first and second displays of a binocular headset, for example forming part of a VR system, an AR system, an MR system, or some combination thereof. The computer-generated image data is rendered under a presumption that a user's left eye is located at a first position relative to the first display and the user's right eye is located at a second position relative to the second display. The computer-generated image data contains at least one graphical element, which is shown on both the first and second displays.
(73) In a subsequent step 1520, at least one updated version of the computer-generated image data is presented on the first and second displays. The at least one updated version is rendered under the presumption that one, or both, of the user's eyes is located at a position being different from the first and second positions in step 1510 in at least one spatial dimension.
(74) Thereafter, in a step 1530, a user-generated feedback signal is received, which either contains a quality measure or a confirmation command. The quality measure expresses the user's experience of the at least one updated version of the computer-generated image data relative to computer-generated image data presented previously on the first and second displays, for example presented in step 1510 or in an earlier instance of step 1520. The confirmation command indicates that the user is satisfied with the quality of the at least one updated version of the computer-generated image data. A subsequent step 1540 checks whether the user-generated feedback signal contains the confirmation command. If so, a step 1550 follows; and otherwise, the procedure loops back to step 1520.
(75) In step 1550, the first and second positions for the left and right eyes respectively are defined based on the user-generated feedback signal. Specifically, this preferably means that the first and second positions are set to the positions presumed when rendering a latest version of computer-generated image data before receiving the confirmation command. Thereafter, the procedure ends.
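The flow of steps 1510 to 1550 can be sketched as the following loop, where `propose`, `present` and `get_feedback` stand in for system-specific operations (the function names and signatures are hypothetical, not from the patent):

```python
def calibrate(initial_left, initial_right, propose, present, get_feedback):
    """Iterate the present/feedback cycle of steps 1510-1550.

    propose(left, right, feedback) -> new candidate (left, right) positions,
        differing from the current ones in at least one spatial dimension;
    present(left, right): renders and shows image data presuming the eyes
        are at those positions relative to the first and second displays;
    get_feedback() -> either ('quality', measure) or ('confirm', None).
    """
    left, right = initial_left, initial_right       # step 1510: initial presumption
    present(left, right)
    feedback = None
    while True:
        left, right = propose(left, right, feedback)  # step 1520: updated version
        present(left, right)
        kind, value = get_feedback()                  # step 1530: feedback signal
        if kind == 'confirm':                         # step 1540: confirmation?
            return left, right                        # step 1550: define positions
        feedback = value  # quality measure guides the next proposal
```

The returned pair corresponds to the positions presumed when rendering the latest version of the image data before the confirmation command was received.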
(76) All of the process steps, as well as any sub-sequence of steps, described with reference to
(77) The term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components. However, the term does not preclude the presence or addition of one or more additional features, integers, steps or components or groups thereof.
(78) The invention is not restricted to the described embodiments in the figures but may be varied freely within the scope of the claims.