COLLECTING AND PROCESSING STEREOSCOPIC DIGITAL IMAGE DATA TO PRODUCE A PARALLAX CORRECTED TILTED HEAD VIEW
20170237963 · 2017-08-17
CPC classification (section H: ELECTRICITY)
H04N13/378
H04N13/117
H04N2013/0081
H04N13/122
H04N13/243
Abstract
An apparatus for capturing digital stereoscopic images of a scene. The apparatus comprises a first pair of separated camera lens oriented such that a first imaginary line between the first pair of lens is substantially parallel with a horizon line in a scene, wherein digital image data is capturable through the first pair of camera lens and storable in two separate digital image data bases corresponding to a left-eye horizontal view and a right-eye horizontal view respectively. The apparatus comprises a second pair of separated camera lens oriented such that a second imaginary line between the second pair of lens is substantially non-parallel with the horizon line, wherein digital image data is capturable through the second pair of camera lens and storable in two separate digital image data bases corresponding to a left-eye off-horizontal view and a right-eye off-horizontal view respectively.
Claims
1. An apparatus for capturing stereoscopic digital images of a scene, comprising: a first pair of separated camera lens oriented such that a first imaginary line between the first pair of lens is substantially parallel with a horizon line in a scene, wherein digital image data is capturable through the first pair of camera lens and storable in two separate digital image data bases corresponding to a left-eye horizontal view and a right-eye horizontal view respectively; and a second pair of separated camera lens oriented such that a second imaginary line between the second pair of lens is substantially non-parallel with the horizon line, wherein digital image data is capturable through the second pair of camera lens and storable in two separate digital image data bases corresponding to a left-eye off-horizontal view and a right-eye off-horizontal view respectively.
2. The apparatus of claim 1, wherein the first pair of separated camera lens are separated from each other along the first line by a fixed distance and the second pair of separated camera lens are separated from each other along the second line by a same fixed distance.
3. The apparatus of claim 1, wherein the first pair of separated camera lens and the second pair of separated camera lens are located in a same plane that is substantially perpendicular to the horizon line.
4. The apparatus of claim 1, wherein the first line and the second line pass through a common center point and the second line is substantially perpendicular to the first line.
5. The apparatus of claim 1, wherein: a first segment of the second line is located between one of the second pair of lens and a center point of the second line, a first segment of the first line is located between one of the first pair of lens and a center point of the first line, and the first segment of the second line forms an off-horizontal angle of about +90 degrees with the first segment of the first line.
6. The apparatus of claim 5, wherein: a second segment of the second line is located between the other one of the second pair of lens and the center point of the second line, a second segment of the first line is located between the other one of the first pair of lens and the center point of the first line, and the second segment of the second line forms another off-horizontal angle of about −90 degrees with the second segment of the first line.
7. The apparatus of claim 1, further including a third pair of separated camera lens oriented such that a third imaginary line between the third pair of lens is substantially non-parallel with the first line and the second line.
8. The apparatus of claim 7, wherein: the first line, second line and the third line pass through a common center point, a first segment of the second line located between one of the second pair of lens and the center point, forms an about 90 degree off-horizontal angle with a first segment of the first line located between one of the first pair of lens and the center point, and a first segment of the third line located between one of the third pair of lens and the center point, forms an about 45 degree off-horizontal angle with the same first segment of the first line.
9. The apparatus of claim 7, further including a fourth pair of separated camera lens oriented such that a fourth imaginary line between the fourth pair of lens is substantially non-parallel with the first line, the second line and the third line.
10. The apparatus of claim 9, wherein: the first line, second line, the third line and the fourth line pass through a common center point, a first segment of the second line located between one of the second pair of lens and the center point, forms an about +90 degree off-horizontal angle with a first segment of the first line located between one of the first pair of lens and the center point, a first segment of the third line located between one of the third pair of lens and the center point, forms an about +45 degree off-horizontal angle with the same first segment of the first line, and a first segment of the fourth line located between one of the fourth pair of lens and the center point, forms an about −45 degree off-horizontal angle with the same first segment of the first line.
11. The apparatus of claim 1, wherein: a first segment of the second line is located between one of the second pair of lens and a center point of the second line, a first segment of the first line is located between one of the first pair of lens and a center point of the first line, and the first segment of the second line forms an off-horizontal angle with the first segment of the first line, the off-horizontal angle in a range from about +20 to +70 degrees.
12. The apparatus of claim 1, further including separate depth-detection sensors near each one of the camera lens of the first and second pairs of separated camera lens, wherein the separate depth-detection sensors are configured to collect depth values corresponding to each of the pixels in a full field of view of the digital image data bases captured by the nearest one of the camera lens.
13. A method of processing stereoscopic digital images of a scene, for presentation on a head mounted stereoscopic display unit, comprising: loading, from a data store of an electronic computing device, separate digital image data bases of images corresponding to a first pair of left-eye and right-eye horizontal views, and a second pair of left-eye and right-eye off-horizontal views; selecting, in an electronic processing unit of the electronic computing device, a blend of pixels from the first pair of left-eye and right-eye horizontal views and the second pair of left-eye and right-eye off-horizontal views, wherein the blend is proportionate to a tilt angle of the head mounted stereoscopic display unit; and morphing, in the electronic processing unit, the blend of the pixels of the left-eye horizontal view with the pixels of the left-eye off-horizontal view to produce a left-eye image for presentation on a left screen side of the head mounted stereoscopic display unit, and, morphing the blend of the pixels of the right-eye horizontal view with the pixels of the right-eye off-horizontal view to produce a right-eye image for presentation on a right screen side of the head mounted stereoscopic display unit.
14. The method recited in claim 13, wherein the digital image data bases corresponding to the left-eye horizontal view and the right-eye horizontal view were captured, respectively, by a first pair of separated camera lens oriented such that a first imaginary line between the first pair of lens is substantially parallel with a horizon line in the scene, and, the digital image data bases corresponding to the left-eye off-horizontal view and the right-eye off-horizontal view, were captured, respectively, by a second pair of separated camera lens oriented such that a second imaginary line between the second pair of lens is substantially non-parallel with the horizon line.
15. The method recited in claim 13, wherein the digital image data bases corresponding to the left-eye horizontal view and the right-eye horizontal view were generated, respectively, from a first pair of separated virtual camera lens oriented such that a first imaginary line between the first pair of lens is substantially parallel with the horizon line in the scene generated as a computer graphics-generated scene, and, the digital image data bases corresponding to the left-eye off-horizontal view and the right-eye off-horizontal view, were generated, respectively, from a second pair of separated virtual camera lens oriented such that a second imaginary line between the second pair of lens is substantially non-parallel with the horizon line in the computer graphics-generated scene.
16. The method recited in claim 13, wherein the selected blend equals 100×θ1/θ2 percent of an intensity of the pixels of the second pair of rotated left-eye and right-eye off-horizontal views and 100×(θ2−θ1)/θ2 percent of an intensity of the pixels of the first pair of left-eye and right-eye horizontal views, when the tilt angle equals θ1, an off-horizontal angle, θ2, is formed between a first imaginary line between left-eye and right-eye horizontal camera view image data bases and a second imaginary line between left-eye and right-eye off-horizontal camera view image data bases, and θ1 is less than or equal to θ2.
17. The method recited in claim 16, wherein the morphing includes: producing separate depth map data bases, each of the depth map data bases holding sets of depth values, D1, D2, D3, and D4, corresponding to each of the pixels of one of the first pair of left-eye and right-eye horizontal view data bases and the second pair of left-eye and right-eye off-horizontal view data bases, respectively; calculating a weighted sum of intensities for each one of the pixels of the left-eye horizontal view data base and a corresponding one of the pixels of the left-eye off-horizontal view data base, wherein the intensity of the pixels of the left-eye horizontal view data base has a weighting proportional to ((θ2−θ1)/θ2)/D1 and the intensity of the pixels of the left-eye off-horizontal view data base has a weighting proportional to (θ1/θ2)/D3; and calculating a weighted sum of intensities for each one of the pixels of the right-eye horizontal view data base and a corresponding one of the pixels of the right-eye off-horizontal view data base, wherein the intensity of the pixels of the right-eye horizontal view data base has a weighting proportional to ((θ2−θ1)/θ2)/D2 and the intensity of the pixels of the right-eye off-horizontal view data base has a weighting proportional to (θ1/θ2)/D4.
18. The method recited in claim 17, wherein producing the separate depth map data bases includes calculating, in the electronic processing unit, the depth values from an amount of location shift between corresponding pixels from the left-eye horizontal view data base versus the right-eye horizontal view data base, and, calculating the depth values from an amount of location shift between corresponding pixels from the left-eye off-horizontal view data base versus the right-eye off-horizontal view data base.
19. The method recited in claim 17, wherein producing the separate depth map data bases includes retrieving, from the data store, depth values from a depth map data base collected from depth-detection sensors located nearby camera lens used to capture images stored in the first pair of left-eye and right-eye horizontal view data bases and the second pair of left-eye and right-eye off-horizontal view data bases.
20. An electronic computing image processing system for processing stereoscopic digital images of a scene, comprising: an electronic computing device, the electronic computing device including: a data store configured to hold separate digital image data bases of images corresponding to a first pair of left-eye and right-eye horizontal views, and a second pair of left-eye and right-eye off-horizontal views; and graphical processing and central processing units configured to: select a blend of pixels from the first pair of left-eye and right-eye horizontal views and the second pair of left-eye and right-eye off-horizontal views, wherein the blend is proportionate to a tilt angle of a head mounted stereoscopic display unit; and morph the blend of the pixels of the left-eye horizontal view with the pixels of the left-eye off-horizontal view to produce a left-eye image for presentation on a left screen side of the head mounted stereoscopic display unit, and, morph the blend of the pixels of the right-eye horizontal view with the pixels of the right-eye off-horizontal view to produce a right-eye image for presentation on a right screen side of the head mounted stereoscopic display unit.
Description
BRIEF DESCRIPTION
[0006] Reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION
[0014] To view digitally captured virtual reality stereoscopic images (e.g., fixed images or movies) of a scene, a person typically wears a head-mounted display unit. The head-mounted display unit can provide a 360 degree view of the captured image of the scene as the person rotates their head. If, however, the person tilts their head, and the head-mounted display unit, to the left or right, then the displayed image may become distorted to such an extent that the stereoscopic viewing effect is lost. The distortion can be so severe that the person may have trouble continuing to view the image. Consequently, the viewing experience loses realism because the viewer must keep their head upright to avoid these distortions.
[0015] As part of the present invention, the inventor has recognized that such distortions are due to the failure of present stereoscopic image production methods to reproduce the parallax experience of human vision when a viewer tilts their head.
[0016] Parallax refers to the difference in the apparent position of an object viewed along two different lines of sight, such as from the left eye versus the right eye. For example, consider a person sitting or standing upright while looking at a scene in reality with both eyes, such that an imaginary line between the eyes is parallel to a horizon line in the scene. The different views experienced by each eye of the person will correspond to a parallax between the eyes that is substantially horizontal. If, however, the person then tilts their head sideways while looking at the same scene, such that the line between the eyes is perpendicular to the horizon line in the scene, then the different views experienced by each eye will correspond to a parallax between the eyes that is substantially vertical. If the person views the scene with their head partially tilted by less than a perpendicular amount, then the different views experienced by the eyes will correspond to a parallax that is intermediate between the horizontal and vertical parallax effects.
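For illustration only (not part of the claimed subject matter), the rotation of the inter-eye baseline, and hence of the parallax direction, with head tilt can be sketched in Python; the function name and the inter-pupillary distance value are assumptions:

```python
import math

def eye_baseline_vector(tilt_deg, ipd=0.064):
    """Direction of the inter-eye baseline for a given head tilt angle.

    tilt_deg = 0  -> baseline parallel to the horizon (horizontal parallax).
    tilt_deg = 90 -> baseline perpendicular to the horizon (vertical parallax).
    Intermediate tilts yield a parallax between the two extremes.
    ipd is an assumed inter-pupillary distance in meters.
    """
    t = math.radians(tilt_deg)
    return (ipd * math.cos(t), ipd * math.sin(t))

# Upright head: the parallax is purely horizontal.
print(eye_baseline_vector(0))    # (0.064, 0.0)
# Fully tilted head: the parallax is (to floating-point precision) purely vertical.
print(eye_baseline_vector(90))
# Half-tilted head: an equal mix of horizontal and vertical parallax.
print(eye_baseline_vector(45))
```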
[0017] Existing stereoscopic image processing methods fail to provide proper left-eye and right-eye views when a viewer's head, and head-mounted display unit, is tilted, because the vertical parallax experience either is not collected as part of the captured image data or is not considered when producing the images presented as the viewer tilts their head while wearing a head-mounted display unit.
[0018] This invention addresses such deficiencies by providing an apparatus constructed to capture stereoscopic image data for each eye corresponding to a horizontal, zero-degree-angled, un-tilted orientation, and to off-horizontal, non-zero-degree-angled, tilted orientations of the eyes, relative to a horizon in a scene. This invention further addresses such deficiencies by providing a method of processing such captured data such that the rendered images sent to either eye have a blend of horizontal and off-horizontal parallax effects to account for the depth of objects in the image and the extent to which a viewer's head is tilted while viewing the image.
[0020] With continuing reference to
[0021] The apparatus 100 also comprises a second pair of separated camera lens 120, 122 oriented such that a second imaginary line 125 between the second pair of lens 120, 122 is substantially non-parallel with the horizon line 112. The digital image data is capturable through the second pair of camera lens 120, 122 and storable in two separate digital image data bases 127, 129 corresponding to a left-eye off-horizontal view and a right-eye off-horizontal view, respectively.
[0022] One skilled in the pertinent arts would understand how the pixels of a digital image could be stored in a digital image data base, e.g., as binary data in bitmap or pixmap formats in a data store 130. In some embodiments, the data store 130 can be part of the apparatus 100, while in other embodiments the data store 130 can be located remotely from the apparatus 100. Non-limiting examples of data stores include random access memory (RAM), hard disk drives, solid state drives, removable storage drives such as floppy disk drives, magnetic tape drives, or compact disk drives, or other storage familiar to those skilled in the pertinent arts. One skilled in the art would understand how such binary data could be communicated over a computer network (e.g., a cloud network) via a transmission control protocol/internet protocol, or similar protocols, to a remotely located data store.
[0023] As further illustrated in
[0024]
[0025] As illustrated in
[0026] Although subsequent embodiments of the apparatus 100 are described in the context of the front-side 202 camera lens 105, 107, 120, 122 or additional camera lens on the front-side 202, any of these embodiments would be equally applicable to the back-side 302 camera lens 305, 307, 320, 322 or additional lens on the back-side 302. Images captured from the back-side 302 camera lens would be stored in separate digital image data bases analogous to that described for the front-side 202 camera lens.
[0027] As further illustrated in
[0028] As further illustrated in
[0029] As further illustrated in
[0030] As illustrated in
[0031] In some embodiments, a second segment 270 of the second line 125 is located between the other one of the second pair of lens (e.g., lens 122) and the center point 255 of the second line 125, and, a second segment 272 of the first line 110 is located between the other one of the first pair of lens (e.g., lens 107) and the center point 255 of the first line 110. In some such embodiments, the second segment 270 of the second line 125 forms another off-horizontal angle 275 of about −90 degrees with the second segment 272 of the first line 110.
[0032] In some embodiments, to provide more image data that may more accurately account for off-horizontal parallax effects, additional separate pairs of camera lens can be positioned at different locations on the front-side 202 or back-side 302. For instance,
[0033] As illustrated in
[0034] As further illustrated in
[0035] In some such embodiments, as described in the context of
[0036] In some embodiments, to provide image data that can more accurately account for off-horizontal parallax effects, the second pair of separated camera lens 120, 122 can be oriented such that a second imaginary line between the second pair of lens forms a non-parallel line that is commensurate with a viewer's expected maximum head tilt, which, e.g., may be substantially less than +90 or −90 degrees.
[0037] In such embodiments, the second pair of lens 120, 122 can be used to account for off-horizontal parallax effects experienced by the left eye for head tilt angles within the same off-horizontal angle 265 ranges as described above.
[0038] Additionally, to account for off-horizontal parallax effects experienced by the right eye, the apparatus 100 can further include a third pair of lens 520, 522, e.g., oriented such that a third imaginary line 525 between the third pair of lens 520, 522 is substantially non-parallel with the first line 110 and the second line 125 and passing through the same center point 255.
[0039] Analogous to that described above, a first segment 560 of the third line 525 can be located between one of the third pair of lens 520 and the center point 255 of the third line 525. In some such embodiments, the first segment 560 of the third line 525 can form an off-horizontal angle 570 with the first segment 262 of the first line 110, the off-horizontal angle 570 in a range, e.g., from about −20 to −70 degrees, and in some embodiments, from about −35 to −55 degrees. For instance, as illustrated in
[0040] As further illustrated in
[0041] Such depth-detection sensors 140, 141, 142, 143 can be positioned as described above on both the front-side 202 and back-side 302 of the apparatus 100. In some embodiments, each of the separate depth-detection sensors includes, or is, a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, or an ultrasonic detection and ranging sensor. One skilled in the pertinent arts would understand how to configure such sensors to emit signals (e.g., infrared or visible light, radio frequency pulses or sound wave signals) that reflect off of the surfaces of objects in the scene 102 back to the sensor. The detected reflected signals, in turn, can be used to calculate a depth value for the part of an object's surface that the signals reflected off of, and to relate that depth value to the depth values of a pixel or pixels in the digital image data bases 115, 117, 127, 129 representing that object's surface.
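The round-trip ranging principle that such LIDAR, RADAR, or ultrasonic sensors rely on can be illustrated with a short sketch (illustrative only; the function name and example numbers are assumptions, not from the specification):

```python
def depth_from_round_trip(round_trip_s, wave_speed_m_s):
    """Depth of a reflecting surface from a round-trip signal time.

    The emitted pulse travels to the surface and back, so the one-way
    distance is half of speed times time.  wave_speed_m_s is about 3.0e8 m/s
    for LIDAR and RADAR, and about 343 m/s for an ultrasonic sensor in air.
    """
    return wave_speed_m_s * round_trip_s / 2.0

# A light pulse returning after 20 ns implies a surface about 3 m away.
print(depth_from_round_trip(20e-9, 3.0e8))
# An ultrasonic echo returning after 10 ms implies a surface about 1.7 m away.
print(depth_from_round_trip(10e-3, 343.0))
```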
[0042] Other aspects of the invention include embodiments of a method of processing stereoscopic digital images of a scene, for presentation on a head mounted stereoscopic display unit and a system for processing such digital image data.
[0043]
[0044] With continuing reference to
[0045] The method 600 also includes selecting in step 610, in an electronic processing unit (e.g., one or both of GPU 710 and CPU 715), a blend of pixels from the first pair of left-eye and right-eye horizontal views and the second pair of left-eye and right-eye off-horizontal views. The blend is proportionate to the off-horizontal tilt angle (e.g., tilt angle 735) of the head mounted stereoscopic display unit 730.
[0046] The method 600 also includes morphing in step 615, in the electronic processing unit, the blend of the pixels of the left-eye horizontal view with the pixels of the left-eye off-horizontal view to produce a left-eye image for presentation on a left screen side (e.g., left screen 740) of the head mounted stereoscopic display unit 730, and, morphing also as part of step 615, the blend of the pixels of the right-eye horizontal view with the pixels of the right-eye off-horizontal view to produce a right-eye image for presentation on a right screen side (e.g., right screen 745) of the display unit 730.
[0047] Similar to that discussed in the context of the apparatus 100 depicted in
[0048] Alternatively, in other embodiments of the method 600, the digital image data bases corresponding to the left-eye horizontal view and the right-eye horizontal view loaded in step 605 were respectively generated, in step 622, from a first pair of separated virtual camera lens oriented such that a first imaginary line between the first pair of lens is substantially parallel with the horizon line in the scene generated as a computer graphics-generated scene. The digital image data bases corresponding to the left-eye off-horizontal view and the right-eye off-horizontal view, were generated respectively, as part of step 622, from a second pair of separated virtual camera lens oriented such that a second imaginary line between the second pair of lens is substantially non-parallel with the horizon line in the computer graphics-generated scene. One skilled in the pertinent arts would be familiar with computer graphics rendering procedures to generate such artificial scenes as well as how to generate left-eye and right-eye views of such artificial scenes from the different perspectives of horizontally and off-horizontally positioned virtual camera lens.
[0049] In some embodiments, as part of step 610, the selected blend of pixels equals 100 percent of an intensity of the pixels of the second pair of left-eye and right-eye off-horizontal views and 0 percent of an intensity of the pixels of the first pair of left-eye and right-eye horizontal views. Such a blend is used when the head-mounted display unit off-horizontal tilt angle 735 is substantially equal (e.g., within about ±1 degree) to an off-horizontal angle formed between a first imaginary line (e.g., line 110) between left-eye and right-eye horizontal camera view image data bases (e.g., data bases 115 and 117, respectively) and a second imaginary line (e.g., line 125) between left-eye and right-eye off-horizontal camera view image data bases (e.g., data bases 127 and 129, respectively).
[0050] For example, when the tilt angle 735 is substantially equal to the 90 degree off-horizontal angle 265 of the apparatus 100 configured as depicted in
[0051] For example, when the tilt angle 735 is substantially equal to a 45 degree off-horizontal angle 425, or the 45 degree off-horizontal angle 265, then for the selected blend (step 610), 100 percent of an intensity of the pixels in the data bases 127, 129 (corresponding to the images captured from camera lens 120, 122) is selected and 0 percent of an intensity of the pixels in data bases 115, 117 (corresponding to the images captured from camera lens 105, 107) is selected.
[0052] In other embodiments, if the tilt angle 735 is greater than zero but less than the off-horizontal angle (e.g., less than angles 265, 425, depending on the configuration of the apparatus 100), then less than 100 percent of an intensity of the pixels in the data bases 127, 129 and greater than 0 percent of an intensity of the pixels in data bases 115, 117 are selected as part of step 610.
[0053] For instance, consider embodiments where the tilt angle 735 equals θ1, the off-horizontal angle 265, 425, equals θ2, and θ1 is less than or equal to θ2. In such embodiments, the selected blend equals 100×θ1/θ2 percent of an intensity of the pixels of the second pair of rotated left-eye and right-eye off-horizontal views (e.g., data bases 127, 129) and 100×(θ2−θ1)/θ2 percent of an intensity of the pixels of the first pair of left-eye and right-eye horizontal views (e.g., data bases 115, 117).
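The blend fractions described in this paragraph (and recited in claim 16) can be expressed as a small sketch (illustrative only; the function name is an assumption):

```python
def blend_fractions(theta1_deg, theta2_deg):
    """Fractions of pixel intensity taken from each view pair.

    theta1_deg: tilt angle of the head mounted display unit.
    theta2_deg: off-horizontal angle of the second camera-lens pair.
    Requires 0 <= theta1 <= theta2.  Returns
    (off_horizontal_fraction, horizontal_fraction), which sum to one:
    theta1/theta2 and (theta2 - theta1)/theta2, respectively.
    """
    if not 0 <= theta1_deg <= theta2_deg:
        raise ValueError("tilt angle must lie between 0 and the off-horizontal angle")
    off = theta1_deg / theta2_deg
    return off, 1.0 - off

# Head upright: only the horizontal pair contributes.
print(blend_fractions(0, 90))   # (0.0, 1.0)
# Head tilted half-way toward the off-horizontal pair: a 50/50 blend.
print(blend_fractions(45, 90))  # (0.5, 0.5)
# Head tilted to the full off-horizontal angle: only the off-horizontal pair.
print(blend_fractions(90, 90))  # (1.0, 0.0)
```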
[0054] In some embodiments, morphing in step 615 includes a step 625 of producing separate depth map data bases, each of the depth map data bases holding sets of depth values, D1, D2, D3, and D4, corresponding to each of the pixels of one of the first pair of left-eye and right-eye horizontal view data bases 115, 117 and the second pair of left-eye and right-eye off-horizontal view data bases 127, 129, respectively. As used herein, the sets of depth values, D1, D2, D3, and D4 refer to pixel relative depths, which range from an arbitrary maximum value (e.g., 100 arbitrary depth units) for those pixels that are associated with the most distant objects in the scene 102, to a minimum value (e.g., 1 arbitrary depth unit) for those pixels that are associated with the closest objects in the scene 102.
[0055] In some embodiments, as part of step 625, producing the separate depth map data bases includes calculating in step 630, in the electronic processing unit, the depth values D1, D2, from an amount of location shift between corresponding pixels from the left-eye horizontal view data base 115 versus the right-eye horizontal view data base 117, and, calculating the depth values D3, D4, from an amount of location shift between corresponding pixels from the left-eye off-horizontal view data base 127 versus the right-eye off-horizontal view data base 129.
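One common way to turn the location shift (disparity) between corresponding pixels into a depth value is the standard pinhole-stereo relation, depth = focal length × baseline / disparity; the specification does not name a specific formula, so the following is an illustrative sketch with assumed rig parameters:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth from the location shift (disparity) of a pixel between two views.

    Pinhole-stereo relation: depth = focal_length * baseline / disparity.
    A larger shift between corresponding pixels means a closer object.
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: treat the point as at infinity
    return focal_px * baseline_m / disparity_px

# Assumed rig: 6.4 cm lens separation, 1000 px focal length.
# A 32 px shift then corresponds to a depth of about 2 m.
print(depth_from_disparity(32.0, 0.064, 1000.0))
```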
[0056] Alternatively, in some embodiments, as part of step 625, producing the separate depth map data bases includes retrieving in step 635, from the data store 720, depth values from a depth map data base collected from depth-detection sensors 140, 141, 142, 143 located near the camera lens (e.g., lens 105, 107, 120, 122) used to capture images stored in the first pair of left-eye and right-eye horizontal view data bases 115, 117 and the second pair of left-eye and right-eye off-horizontal view data bases 127, 129.
[0057] Morphing in step 615 can also include a step 640 of calculating, in the electronic processing unit, a weighted sum of intensities for each one of the pixels of the left-eye horizontal view data base 115 and a corresponding one of the pixels of the left-eye off-horizontal view data base 127. The intensity of the pixels of the left-eye horizontal view data base 115 has a weighting proportional to ((θ2−θ1)/θ2)/D1 and the intensity of the pixels of the left-eye off-horizontal view data base 127 has a weighting proportional to (θ1/θ2)/D3.
[0058] Morphing in step 615 can also include a step 645 of calculating, in the electronic processing unit, a weighted sum of intensities for each one of the pixels of the right-eye horizontal view data base 117 and a corresponding one of the pixels of the right-eye off-horizontal view data base 129. The intensity of the pixels of the right-eye horizontal view data base 117 has a weighting proportional to ((θ2−θ1)/θ2)/D2 and the intensity of the pixels of the right-eye off-horizontal view data base 129 has a weighting proportional to (θ1/θ2)/D4.
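Steps 640 and 645 can be sketched per pixel as follows (illustrative only; the weighting follows the claim 16 blend fractions, in which the horizontal views' share is (θ2−θ1)/θ2, divided by each pixel's depth value, and normalizing the weights so they sum to one is an added assumption):

```python
def morph_pixel(i_horiz, i_off, d_horiz, d_off, theta1_deg, theta2_deg):
    """Depth-weighted blend of one pixel from a horizontal view and the
    corresponding pixel from an off-horizontal view.

    The horizontal view's weight is proportional to
    ((theta2 - theta1)/theta2) / d_horiz and the off-horizontal view's
    weight to (theta1/theta2) / d_off, where d_horiz and d_off are the
    per-pixel depth values.  The weights are normalized to sum to one.
    """
    w_h = ((theta2_deg - theta1_deg) / theta2_deg) / d_horiz
    w_o = (theta1_deg / theta2_deg) / d_off
    total = w_h + w_o
    if total == 0.0:
        return 0.0
    return (w_h * i_horiz + w_o * i_off) / total

# Equal depths and a half-way tilt give an even blend of the two intensities.
print(morph_pixel(100.0, 200.0, 1.0, 1.0, 45.0, 90.0))  # 150.0
# With no tilt, only the horizontal view's pixel contributes.
print(morph_pixel(100.0, 200.0, 1.0, 1.0, 0.0, 90.0))   # 100.0
```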
[0059] Those skilled in the art to which this application relates will appreciate that other and further additions, deletions, substitutions and modifications may be made to the described embodiments.