WIDE BASELINE STEREO FOR LOW-LATENCY RENDERING
20170272729 · 2017-09-21
Assignee
Inventors
CPC classification
G06F3/011
PHYSICS
G02B2027/011
PHYSICS
G06F3/002
PHYSICS
H04N13/117
ELECTRICITY
H04N13/279
ELECTRICITY
International classification
H04N13/00
ELECTRICITY
G06F3/00
PHYSICS
Abstract
A virtual image generation system and method of operating same are provided. A left synthetic image and a right synthetic image of a three-dimensional scene are rendered respectively from a first left focal center and a first right focal center relative to a first viewpoint. The first left and first right focal centers are spaced from each other a distance greater than the inter-ocular distance of an end user. The left synthetic image and the right synthetic image are warped respectively to a second left focal center and a second right focal center relative to a second viewpoint different from the first viewpoint. The second left and right focal centers are spaced from each other a distance equal to the inter-ocular distance of the end user. A frame is constructed from the left and right warped synthetic images, and displayed to the end user.
Claims
1. A method of operating a virtual image generation system, the method comprising: rendering a left synthetic image and a right synthetic image of a three-dimensional scene respectively from a first left focal center and a first right focal center relative to a first viewpoint, the first left and first right focal centers being spaced from each other a distance greater than an inter-ocular distance of an end user; warping the left synthetic image and the right synthetic image respectively to a second left focal center and a second right focal center relative to a second viewpoint different from the first viewpoint, the second left and right focal centers spaced from each other a distance equal to the inter-ocular distance of the end user; constructing a frame from the left and right warped synthetic images; and displaying the frame to the end user.
2. The method of claim 1, wherein the left and right synthetic images are rendered in three dimensions and warped in two dimensions.
3. The method of claim 1, further comprising generating left depth data and right depth data respectively for the left synthetic image and right synthetic image, wherein the left synthetic image and the right synthetic image are respectively warped using the left depth data and the right depth data.
4. The method of claim 3, wherein the left synthetic image and the right synthetic image are rendered based on a first look-at point in the three-dimensional scene, and the left synthetic image and the right synthetic image are warped based on a second look-at point in the three-dimensional scene.
5. The method of claim 1, further comprising detecting the inter-ocular distance of the user.
6. The method of claim 1, further comprising predicting an estimate of the first viewpoint and detecting the second viewpoint.
7. The method of claim 1, further comprising detecting each of the first and second viewpoints.
8. The method of claim 1, wherein the three-dimensional scene includes at least a portion of a virtual object that is not visible from the second left and right focal centers relative to the first viewpoint, and is visible from the second left and right focal centers relative to the second viewpoint.
9. A virtual image generation system for use by an end user having an inter-ocular distance, comprising: memory storing a three-dimensional scene; a control subsystem configured for rendering a left synthetic image and a right synthetic image of the three-dimensional scene respectively from a first left focal center and a first right focal center relative to a first viewpoint, the first left and first right focal centers being spaced from each other a distance greater than the inter-ocular distance of the end user, the control subsystem further configured for warping the left synthetic image and the right synthetic image respectively to a second left focal center and a second right focal center relative to a second viewpoint different from the first viewpoint, the second left and right focal centers spaced from each other a distance equal to the inter-ocular distance of the end user, the control subsystem further configured for constructing a frame from the left and right warped synthetic images; and a display system configured for displaying the frame to the end user.
10. The virtual image generation system of claim 9, wherein the display system is configured for being positioned in front of the eyes of the end user.
11. The virtual image generation system of claim 9, wherein the display system includes a projection subsystem and a partially transparent display surface, the projection subsystem configured for projecting the frame onto the partially transparent display surface, and the partially transparent display surface configured for being positioned in the field of view between the eyes of the end user and an ambient environment.
12. The virtual image generation system of claim 9, further comprising a frame structure configured for being worn by the end user, the frame structure carrying the display system.
13. The virtual image generation system of claim 9, wherein the control subsystem comprises a graphics processing unit (GPU).
14. The virtual image generation system of claim 9, wherein the left and right synthetic images are rendered in three dimensions and warped in two dimensions.
15. The virtual image generation system of claim 9, wherein the control subsystem is further configured for generating left depth data and right depth data respectively for the left synthetic image and right synthetic image, wherein the left synthetic image and the right synthetic image are respectively warped using the left depth data and the right depth data.
16. The virtual image generation system of claim 15, wherein the left synthetic image and the right synthetic image are rendered based on a first look-at point in the three-dimensional scene, and the left synthetic image and the right synthetic image are warped based on a second look-at point in the three-dimensional scene.
17. The virtual image generation system of claim 9, further comprising one or more sensors configured for detecting the inter-ocular distance of the user.
18. The virtual image generation system of claim 9, further comprising a patient orientation module configured for predicting an estimate of the first viewpoint and detecting the second viewpoint.
19. The virtual image generation system of claim 9, further comprising a patient orientation module configured for detecting each of the first and second viewpoints.
20. The virtual image generation system of claim 9, wherein the three-dimensional scene includes at least a portion of a virtual object that is not visible from the second left and right focal centers relative to the first viewpoint, and is visible from the second left and right focal centers relative to the second viewpoint.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The drawings illustrate the design and utility of preferred embodiments of the present invention, in which similar elements are referred to by common reference numerals. In order to better appreciate how the above-recited and other advantages and objects of the present inventions are obtained, a more particular description of the present inventions briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
DETAILED DESCRIPTION
[0036] The description that follows relates to display systems and methods to be used in virtual reality and/or augmented reality systems. However, it is to be understood that, while the invention lends itself well to applications in virtual reality, the invention, in its broadest aspects, may not be so limited.
[0037] Referring to
[0038] The virtual image generation system 100, and the various techniques taught herein, may be employed in applications other than augmented reality and virtual reality systems. For example, various techniques may be applied to any projection or display system. For example, the various techniques described herein may be applied to pico projectors where movement may be made by an end user's hand rather than the head. Thus, while often described herein in terms of an augmented reality system or virtual reality system, the teachings should not be limited to such systems or such uses.
[0039] At least for augmented reality applications, it may be desirable to spatially position various virtual objects relative to respective physical objects in a field of view of the end user 50. Virtual objects, also referred to herein as virtual tags or call outs, may take any of a large variety of forms, basically any variety of data, information, concept, or logical construct capable of being represented as an image. Non-limiting examples of virtual objects may include: a virtual text object, a virtual numeric object, a virtual alphanumeric object, a virtual tag object, a virtual field object, a virtual chart object, a virtual map object, a virtual instrumentation object, or a virtual visual representation of a physical object.
[0040] To this end, the virtual image generation system 100 comprises a frame structure 102 worn by an end user 50, a display system 104 carried by the frame structure 102, such that the display system 104 is positioned in front of the eyes 52 of the end user 50, and a speaker 106 carried by the frame structure 102, such that the speaker 106 is positioned adjacent the ear canal of the end user 50 (optionally, another speaker (not shown) is positioned adjacent the other ear canal of the end user 50 to provide for stereo/shapeable sound control). The display system 104 is designed to present the eyes 52 of the end user 50 with photo-based radiation patterns that can be comfortably perceived as augmentations to physical reality, with high-levels of image quality and three-dimensional perception, as well as being capable of presenting two-dimensional content. The display system 104 presents a sequence of frames at high frequency that provides the perception of a single coherent scene.
[0041] In the illustrated embodiment, the display system 104 comprises a projection subsystem 108 and a partially transparent display surface 110 on which the projection subsystem 108 projects images. The display surface 110 is positioned in the end user's 50 field of view between the eyes 52 of the end user 50 and an ambient environment. In the illustrated embodiment, the projection subsystem 108 includes one or more optical fibers 112 (e.g. single mode optical fiber), each of which has one end 112a into which light is received and another end 112b from which light is provided to the partially transparent display surface 110. The projection subsystem 108 may also include one or more light sources 114 that produce the light (e.g., emit light of different colors in defined patterns), and communicatively couple the light to the end 112a of the optical fiber(s) 112. The light source(s) 114 may take any of a large variety of forms, for instance, a set of RGB lasers (e.g., laser diodes capable of outputting red, green, and blue light) operable to respectively produce red, green, and blue coherent collimated light according to defined pixel patterns specified in respective frames of pixel information or data. Laser light provides high color saturation and is highly energy efficient.
[0042] The display system 104 may further comprise a scanning device 116 that scans the optical fiber(s) 112 in a predetermined pattern in response to control signals. For example, referring to
[0043] Referring back to
[0044] For example, in one embodiment, the virtual image generation system 100 comprises a head worn transducer system 126 that includes one or more inertial transducers to capture inertial measures indicative of movement of the head 54 of the end user 50. Such may be used to sense, measure, or collect information about the head movements of the end user 50. For instance, such may be used to detect movements, speeds, accelerations, and/or positions of the head 54 of the end user 50. The virtual image generation system 100 may further comprise a forward facing camera 128. Such may be used to capture information about the environment in which the end user 50 is located. Such may be used to capture information indicative of distance and orientation of the end user 50 with respect to that environment and specific objects in that environment. When head worn, the forward facing camera 128 is particularly suited to capture information indicative of distance and orientation of the head 54 of the end user 50 with respect to the environment in which the end user 50 is located and specific objects in that environment. Such may, for example, be employed to detect head movements, and the speeds and/or accelerations of those movements. Such may, for example, be employed to detect or infer a center of attention of the end user 50, for example, based at least in part on an orientation of the head 54 of the end user 50. Orientation may be detected in any direction (e.g., up/down, left/right with respect to the reference frame of the end user 50).
[0045] The virtual image generation system 100 further comprises a patient orientation detection module 130. The patient orientation module 130 detects the instantaneous position of the head 54 of the end user 50 and predicts the position of the head 54 of the end user 50 based on position data received from the sensor(s). In one embodiment, the patient orientation module 130 predicts the position of the head 54 based on predicting the end user's 50 shift in focus. For example, the patient orientation module 130 may select a virtual object based at least on input indicative of attention of the end user 50, and determine the location of appearance of a virtual object in a field of view of the end user 50 relative to the frame of reference of the end user 50. As another example, the patient orientation module 130 may employ estimated speed and/or estimated changes in speed or estimated acceleration to predict the position of the head 54 of the end user 50. As still another example, the patient orientation module 130 may employ historical attributes of the end user 50 to predict the position of the head 54 of the end user 50. Further details describing predicting the head position of an end user 50 are set forth in U.S. Patent Application Ser. No. 61/801,219 (Attorney Docket No. ML-30006-US), which has previously been incorporated herein by reference.
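By way of illustration only, the following sketch shows one simple way the position prediction of paragraph [0045] could be realized, using a constant-velocity extrapolation from two recent head-pose samples. The function name, its arguments, and the constant-velocity model are assumptions made for the sketch; the patient orientation module 130 may equally rely on acceleration estimates or historical attributes of the end user 50.

```python
import numpy as np

def predict_head_position(p_prev, p_curr, dt_hist, dt_ahead):
    """Extrapolate a future head position from two recent pose samples (illustrative only).

    p_prev, p_curr : np.ndarray, shape (3,) -- recent head positions (meters)
    dt_hist        : float -- time between the two samples (seconds)
    dt_ahead       : float -- how far ahead of p_curr to predict (seconds)
    """
    velocity = (p_curr - p_prev) / dt_hist   # estimated head speed from the recent samples
    return p_curr + velocity * dt_ahead      # constant-velocity extrapolation (an assumption)

# Example: head moving 10 cm/s to the right, predicted 15 ms ahead of the latest sample
p_est = predict_head_position(np.array([0.0, 0.0, 0.0]),
                              np.array([0.001, 0.0, 0.0]),
                              dt_hist=0.01, dt_ahead=0.015)
```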
[0046] The virtual image generation system 100 further comprises a control subsystem that may take any of a large variety of forms. The control subsystem includes a number of controllers, for instance one or more microcontrollers, microprocessors or central processing units (CPUs), digital signal processors, graphics processing units (GPUs), other integrated circuit controllers, such as application specific integrated circuits (ASICs), programmable gate arrays (PGAs), for instance field PGAs (FPGAs), and/or programmable logic controllers (PLCs).
[0047] In the illustrated embodiment, the virtual image generation system 100 comprises a central processing unit (CPU) 132, a graphics processing unit (GPU) 134, and one or more frame buffers 136. The CPU 132 controls overall operation, while the GPU 134 renders frames (i.e., translates a three-dimensional scene into a two-dimensional image) from three-dimensional data stored in the remote data repository 150 and stores these frames in the frame buffer(s) 136. While not illustrated, one or more additional integrated circuits may control the reading into and/or reading out of frames from the frame buffer(s) 136 and operation of the scanning device of the display system 104. Reading into and/or out of the frame buffer(s) 136 may employ dynamic addressing, for instance, where frames are over-rendered. The virtual image generation system 100 further comprises a read only memory (ROM) 138 and a random access memory (RAM) 140. The virtual image generation system 100 further comprises a three-dimensional database 142 from which the GPU 134 can access three-dimensional data of one or more scenes for rendering frames.
[0048] The various processing components of the virtual image generation system 100 may be physically contained in a distributed system. For example, as illustrated in
[0049] The local processing and data module 144 may comprise a power-efficient processor or controller, as well as digital memory, such as flash memory, both of which may be utilized to assist in the processing, caching, and storage of data captured from the sensors and/or acquired and/or processed using the remote processing module 148 and/or remote data repository 150, possibly for passage to the display system 104 after such processing or retrieval. The remote processing module 148 may comprise one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. The remote data repository 150 may comprise a relatively large-scale digital data storage facility, which may be available through the internet or other networking configuration in a “cloud” resource configuration. In one embodiment, all data is stored and all computation is performed in the local processing and data module 144, allowing fully autonomous use from any remote modules.
[0050] The couplings 146, 152, 154 between the various components described above may include one or more wired interfaces or ports for providing wired or optical communications, or one or more wireless interfaces or ports, such as via RF, microwave, and IR for providing wireless communications. In some implementations, all communications may be wired, while in other implementations all communications may be wireless. In still further implementations, the choice of wired and wireless communications may be different from that illustrated in
[0051] In the illustrated embodiment, the patient orientation module 130 is contained in the local processing and data module 144, while the CPU 132 and GPU 134 are contained in the remote processing module 148, although in alternative embodiments, the CPU 132, GPU 134, or portions thereof may be contained in the local processing and data module 144. The 3D database 142 can be associated with the remote data repository 150.
[0052] Significant to the present inventions, the GPU 134, based on the head position and head movements of the end user 50 obtained from the transducer system 126 via the local processing and data module 144, renders and warps frames in a manner that minimizes latency (i.e., the elapsed time between when the end user 50 moves his or her head and the time when the frame is updated and displayed to the end user 50), while also reducing the frequency and size of holes in the warped images.
[0053] In particular, and with reference to
[0054] Because the image points move different amounts, depending on their depth, points in the three-dimensional scene 70 not visible from the old focal points P.sub.L(x, y, z) and P.sub.R(x, y, z) may be visible from new focal points P.sub.L′(x, y, z) and P.sub.R′(x, y, z). These points (the disoccluded points) are problematic, because they create “holes” in the newly warped synthetic images I.sub.L′(u, v) and I.sub.R′(u, v). All existing methods of filling those holes are computationally expensive and/or potentially create artifacts. For example, consider a convex object, such as a sphere 72, in the three-dimensional scene 70 illustrated in
[0055] With reference to
[0056] The focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) are spaced from each other a wider rendering distance greater than the inter-ocular distance of the end user 50 (in the exemplary case, greater than 2d). For example, the focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) can be selected in accordance with the equations:
W.sub.L(x, y, z)=P.sub.R(x, y, z)+k(P.sub.L(x, y, z)−P.sub.R(x, y, z)); and [1]
W.sub.R(x, y, z)=P.sub.L(x, y, z)+k(P.sub.R(x, y, z)−P.sub.L(x, y, z)); [2]
where k>1 to set the spacing between the focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) to be greater than the inter-ocular distance of the end user 50. The values for the focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) can be selected to strike a compromise between minimizing the size and number of holes in the synthetic image and maintaining the overall quality of the synthetic image. That is, the size and number of the holes in the synthetic image will decrease as the distance between the focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) increases; however, the general quality of the synthetic image will also decrease as that distance increases.
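As a minimal sketch of equations [1] and [2], the following code widens the rendering baseline from nominal eye focal centers. The 64 mm inter-ocular distance and the value k=1.5 used in the example are illustrative assumptions, not values prescribed by the description.

```python
import numpy as np

def wide_focal_centers(p_left, p_right, k):
    """Widen the rendering baseline per equations [1] and [2]; requires k > 1."""
    w_left = p_right + k * (p_left - p_right)    # equation [1]
    w_right = p_left + k * (p_right - p_left)    # equation [2]
    return w_left, w_right

# Example with an assumed 64 mm inter-ocular distance and k = 1.5:
p_l = np.array([-0.032, 0.0, 0.0])
p_r = np.array([+0.032, 0.0, 0.0])
w_l, w_r = wide_focal_centers(p_l, p_r, k=1.5)
# |w_l - w_r| = (2k - 1) * 0.064 m = 0.128 m, i.e., twice the inter-ocular distance
```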
[0057] Assuming again that the head position of the end user 50 changes, such that the viewpoint changes from the position V to the position V′, the left eye 52a is now located at a new focal point W.sub.L′(x, y, z), and the right eye 52b is now located at a new focal point W.sub.R′(x, y, z), which now point at a different look-at point P.sub.LA′. The synthetic images I.sub.L(u, v) and I.sub.R(u, v) are two-dimensionally warped using parallax in a conventional manner, using the depth buffers D.sub.L(u, v) and D.sub.R(u, v) and the old and new look-at points P.sub.LA and P.sub.LA′, to create new synthetic images I.sub.L′(u, v) and I.sub.R′(u, v) of the three-dimensional scene 70 for the new focal points W.sub.L′(x, y, z) and W.sub.R′(x, y, z).
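For illustration, the following sketch performs a depth-based two-dimensional parallax warp of the general kind described above: each pixel of a rendered image is back-projected using its depth buffer value, re-projected toward the new focal center, and written with a nearest-surface test, so that pixels never written remain as holes. The pinhole intrinsics K and explicit camera poses are assumptions (the description does not fix a camera model), and a practical system would perform this warp on the GPU 134 rather than in Python.

```python
import numpy as np

def depth_warp(image, depth, K, pose_src, pose_dst):
    """Forward-warp a rendered image to a new focal center using its depth buffer (sketch).

    image    : (H, W, 3) color rendered at the old focal center
    depth    : (H, W) depth buffer D(u, v), camera-space depth at each pixel
    K        : (3, 3) pinhole intrinsics (an assumed camera model)
    pose_src : (4, 4) camera-to-world transform at the old focal center
    pose_dst : (4, 4) camera-to-world transform at the new focal center
    """
    H, W = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)                  # nearest-surface test for overlapping splats
    relative = np.linalg.inv(pose_dst) @ pose_src   # old camera frame -> new camera frame

    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                   # back-project to 3D
    pts = relative @ np.vstack([pts, np.ones((1, pts.shape[1]))])         # move into the new frame
    proj = K @ pts[:3]
    u2 = np.round(proj[0] / proj[2]).astype(int)                          # re-projected pixel column
    v2 = np.round(proj[1] / proj[2]).astype(int)                          # re-projected pixel row

    ok = (proj[2] > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    for su, sv, du, dv, z in zip(us.reshape(-1)[ok], vs.reshape(-1)[ok],
                                 u2[ok], v2[ok], proj[2][ok]):
        if z < zbuf[dv, du]:                        # keep the nearest surface at each target pixel
            zbuf[dv, du] = z
            out[dv, du] = image[sv, su]
    return out                                      # pixels never written remain as holes
```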
[0058] Significantly, a larger portion 72e of the sphere 72 is visible from old focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) compared to the portion 72a of the sphere 72 seen from old focal points P.sub.L(x, y, z) and P.sub.R(x, y, z), while a smaller portion 72f of the sphere 72 remains invisible from the old focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) compared to the portion 72b of the sphere 72 invisible from old focal points P.sub.L(x, y, z) and P.sub.R(x, y, z). As such, the locus of sphere points visible from focal points W.sub.L(x, y, z) and W.sub.R(x, y, z) is greater than the locus of sphere points visible from focal points P.sub.L(x, y, z) and P.sub.R(x, y, z). As a result, when looking at convex objects, it can be expected that fewer and smaller holes exist after a two-dimensional image warp used to compute new images I.sub.L′(u, v) and I.sub.R′(u, v) of the three-dimensional scene 70 for new focal points W.sub.L′(x, y, z) and W.sub.R′(x, y, z). For example, a different portion 72g of the sphere 72 is visible from the new focal points P.sub.L′(x, y, z) and P.sub.R′(x, y, z), some 72h of which is included in the relatively large portion 72f of the sphere 72 invisible from the old focal points P.sub.L(x, y, z) and P.sub.R(x, y, z). That is, a smaller portion 72h of the sphere 72 compared to the portion 72d of the sphere 72 has been disoccluded when the eyes are moved from the old focal points P.sub.L(x, y, z) and P.sub.R(x, y, z) to the new focal points P.sub.L′(x, y, z) and P.sub.R′(x, y, z).
[0059] Referring now to
[0060] Next, the CPU 132 selects the wider rendering spacing (i.e., the distance between the focal points W.sub.L(x, y, z) and W.sub.R(x, y, z)) (step 204). For example, the wider rendering spacing may be manually entered into the virtual image generation system 100. The wider rendering spacing may be selected in accordance with equations [1] and [2].
[0061] In the method illustrated in
[0062] Accordingly, the CPU 132 instructs the patient orientation module 130, using the associated head worn transducer system 126 and forward facing camera 128, to predict an estimated position of the head 54 of the end user 50, and thus an estimated viewpoint V, at the next time that a frame is to be displayed to the end user 50 (step 206). The prediction of the position of the head 54 may be derived from the immediately previous detected actual position of the head 54 and other parameters, such as speed, acceleration, and historical attributes of the end user 50. The CPU 132 then instructs the GPU 134 to render the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) of the three-dimensional scene 70 respectively from the wider left and right focal centers W.sub.L(x, y, z) and W.sub.R(x, y, z) relative to the estimated viewpoint V (step 208). In the preferred method, the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) are rendered in three dimensions, and may be rendered based on a look-at point P.sub.LA in the three-dimensional scene 70. The CPU 132 then stores the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) in memory (step 210). Steps 206-210 are repeated to continually render and store the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) at each estimated position of the head 54.
[0063] As the GPU 134 renders and stores these images, the CPU 132 determines whether it is time to display the next frame to the end user 50 (step 212). If so, the CPU 132 instructs the patient orientation module 130, using the associated head worn transducer system 126 and forward facing camera 128, to detect the actual position of the head 54 of the end user 50, and thus the actual viewpoint V′ (step 214).
[0064] The CPU 132 then instructs the GPU 134 to retrieve the most recent complete left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) rendered at the wider left and right focal centers W.sub.L(x, y, z) and W.sub.R(x, y, z) from the memory (step 216), and to warp the retrieved left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) respectively to the narrower left and right focal centers P.sub.L′(x, y, z) and P.sub.R′(x, y, z) relative to the actual viewpoint V′ (step 218). In the preferred method, the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) are warped in two dimensions, and may be warped based on a different look-at point P.sub.LA′ in the three-dimensional scene. The left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) may be warped using left depth data and right depth data generated during the rendering of the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v). The CPU 132 then instructs the GPU 134 to construct a frame from the left and right warped synthetic images I.sub.L′(u, v) and I.sub.R′(u, v) (step 220), and then instructs the display system 104 to display the frame to the end user 50 (step 222). The CPU 132 returns to step 212 to periodically determine whether it is time to display the next frame to the end user 50.
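The control flow of steps 202-222 may be summarized by the sketch below. The `system` object and its method names are hypothetical stand-ins for the CPU 132, GPU 134, patient orientation module 130, and display system 104; in the described embodiment the rendering of steps 206-210 and the display check of step 212 proceed concurrently, whereas the sketch serializes them for readability.

```python
def run_render_then_warp_loop(system):
    """Illustrative control flow for steps 202-222; `system` and its methods are hypothetical."""
    ipd = system.detect_interocular_distance()                   # step 202
    k = system.select_wider_rendering_spacing()                  # step 204 (e.g., per equations [1], [2])
    while True:
        v_est = system.predict_viewpoint()                       # step 206: predicted head position
        wide = system.render_wide_baseline(v_est, k)             # step 208: render at W_L, W_R with depth
        system.store(wide)                                       # step 210
        if system.time_for_next_frame():                         # step 212
            v_actual = system.detect_viewpoint()                 # step 214: actual head position
            wide = system.latest_complete_render()               # step 216
            left, right = system.warp_to_eyes(wide, v_actual, ipd)  # step 218: 2D warp to P_L', P_R'
            frame = system.construct_frame(left, right)          # step 220
            system.display(frame)                                # step 222
```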
[0065] Referring now to
[0066] The method 300 generally differs from the method 200 in that frames are displayed at a rate greater than the rate at which the images of the three-dimensional scene 70 are rendered only when the end user 50 moves his or her head 54. For example, if the maximum rate at which the images are rendered is once every 15 ms, the frames may be displayed to the end user 50 once every 15 ms when the head 54 of the end user 50 is stable, and may be displayed to the end user 50 once every 5 ms when the head 54 of the end user 50 is moving.
[0067] To this end, the CPU 132 determines the inter-ocular distance of the end user 50 (step 302), and selects the wider rendering spacing (step 304), which can be accomplished in the manner described above with respect to steps 202 and 204 of the method 200. Next, the CPU 132 instructs the patient orientation module 130, using the associated head worn transducer system 126 and forward facing camera 128, to detect the actual position of the head 54 of the end user 50, and thus a baseline viewpoint V (step 306). The CPU 132, then, instructs the GPU 134 to render the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) of the three-dimensional scene 70 respectively from the wider left and right focal centers W.sub.L(x, y, z) and W.sub.R(x, y, z) relative to the baseline viewpoint V (step 308). In the preferred method, the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) are rendered in three dimensions, and may be rendered based on a look-at point P.sub.LA in the three-dimensional scene 70. The CPU 132 then stores the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) in memory (step 310). Steps 306-310 are repeated to continually render and store the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v).
[0068] As the GPU 134 renders and stores these images, the CPU 132 instructs the patient orientation module 130, using the associated head worn transducer system 126 and forward facing camera 128, to determine whether an actual or anticipated movement of the head 54 of the end user 50 occurs (step 312). If actual or anticipated movement does occur, the CPU 132 instructs the patient orientation module 130, using the associated head worn transducer system 126 and forward facing camera 128, to detect the new position of the head 54 of the end user 50, and thus the new viewpoint V′ (step 314).
[0069] Next, the CPU 132 determines whether there is a substantive difference between the baseline viewpoint V and the new viewpoint V′ (step 316). If there is a substantive difference between the baseline viewpoint V and the new viewpoint V′, the CPU 132 instructs the GPU 134 to retrieve the most recent complete left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) rendered at the wider left and right focal centers W.sub.L(x, y, z) and W.sub.R(x, y, z) (step 318), and to warp the retrieved left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) respectively to the narrower left and right focal centers P.sub.L′(x, y, z) and P.sub.R′(x, y, z) relative to the actual viewpoint V′ (step 320). In the preferred method, the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) are warped in two dimensions, and may be warped based on a different look-at point P.sub.LA′ in the three-dimensional scene. The left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) may be warped using left depth data and right depth data generated during the rendering of the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v). The CPU 132 then instructs the GPU 134 to construct a frame from the left and right warped synthetic images I.sub.L′(u, v) and I.sub.R′(u, v) (step 322), and then instructs the display system 104 to display the frame to the end user 50 (step 324).
[0070] If at step 312 there was a determination that no actual or anticipated movement of the head 54 of the end user 50 occurs, or if at step 316 there was a determination that there is no substantive difference between the baseline viewpoint V and the new viewpoint V′, the CPU 132 determines whether it is time to display the next frame to the end user 50 (step 326). If so, the CPU 132 then instructs the GPU 134 to retrieve the most recent complete left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) rendered at the wider left and right focal centers W.sub.L(x, y, z) and W.sub.R(x, y, z) from the memory (step 328), and to warp the retrieved left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) respectively to the narrower left and right focal centers P.sub.L′(x, y, z) and P.sub.R′(x, y, z) relative to the baseline viewpoint V (step 330). In the preferred method, the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) are warped in two dimensions, and may be warped based on a different look-at point P.sub.LA′ in the three-dimensional scene. The left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v) may be warped using left depth data and right depth data generated during the rendering of the left and right synthetic images I.sub.L(u, v) and I.sub.R(u, v). The CPU 132 then instructs the GPU 134 to construct a frame from the left and right warped synthetic images I.sub.L′(u, v) and I.sub.R′(u, v) (step 322), and then instructs the display system 104 to display the frame to the end user 50 (step 324). The CPU 132 returns to step 312 to determine whether actual or anticipated movement of the head 54 of the end user 50 occurs.
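The decision logic of the method 300 (steps 312-330) may likewise be sketched as follows, again with hypothetical method names: the stored wide-baseline images are warped to the newly detected viewpoint whenever the head has moved substantively, and are otherwise warped to the baseline viewpoint at the normal frame cadence.

```python
def display_decision_step(system, v_baseline, ipd):
    """One pass of the variant method's display logic (steps 312-330); methods are hypothetical."""
    if system.head_movement_detected_or_anticipated():              # step 312
        v_new = system.detect_viewpoint()                           # step 314
        if system.substantively_different(v_baseline, v_new):       # step 316
            wide = system.latest_complete_render()                  # step 318
            left, right = system.warp_to_eyes(wide, v_new, ipd)     # step 320: warp to the new viewpoint
            system.display(system.construct_frame(left, right))     # steps 322, 324
            return
    if system.time_for_next_frame():                                # step 326
        wide = system.latest_complete_render()                      # step 328
        left, right = system.warp_to_eyes(wide, v_baseline, ipd)    # step 330: warp to the baseline viewpoint
        system.display(system.construct_frame(left, right))         # steps 322, 324
```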
[0071] In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.