Multiscopic image capture system
11558600 · 2023-01-17
Inventors
- Jonathan Sean Karafin (Morgan Hill, CA, US)
- Miller H. Schuck (Erie, CO)
- Douglas J. McKnight (Boulder, CO)
- Mrityunjay Kumar (Ventura, CA)
- Wilhelm Taylor (Boulder, CO)
CPC classification
- H04N 13/111 (ELECTRICITY)
- H04N 13/282 (ELECTRICITY)
International classification
- H04N 13/282 (ELECTRICITY)
- H04N 13/111 (ELECTRICITY)
Abstract
Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, T.sub.d, the plurality of intermediate views being extrapolated from the captured views.
Claims
1. A multiscopic content system, comprising: a plurality of modules each comprising: at least one device configured to at least sense electromagnetic energy and generate image data; and at least one energy directing element operable to at least direct electromagnetic energy to the at least one device; wherein the at least one device is configured to be positioned along at least a first direction by an offset distance, the first direction being perpendicular to an energy propagation axis of the at least one energy directing element; and wherein the offset distances of the at least one device of the plurality of modules are determined so that frustums of the at least one device of the plurality of modules substantially overlap to define a convergence volume; wherein the at least one device of the plurality of modules are positioned in a first plane and wherein the convergence volume comprises a frustum width at a perpendicular distance from the first plane.
2. The system of claim 1, wherein the perpendicular distance from the first plane is ((D.sub.Inf−D.sub.Max)*CA %)+D.sub.Max, wherein CA % is a percent between 0 and 100%, D.sub.Max is a distance between the first plane and a closest object in a scene, and D.sub.Inf is a distance where less than 1 pixel of disparity is possible between adjacent modules of the plurality of modules.
3. The system of claim 1, wherein an adjustment to the offset distance of the at least one device of the plurality of modules alters the frustum width of the convergence volume.
4. The system of claim 1, wherein the at least one device of the plurality of modules are configured to be positioned along a second direction, the second direction being perpendicular to the first direction.
5. The system of claim 4, wherein the at least one device of the plurality of modules are configured to be positioned along a third direction, the third direction being orthogonal to both the first and second directions.
6. The system of claim 5, wherein the second or third direction is parallel to the energy propagation axis of the respective at least one energy directing element of the plurality of modules.
7. A holographic content system, comprising: first and second clusters of modules, each module comprising: at least one device configured to at least sense electromagnetic energy and generate image data; and at least one energy directing element operable to at least direct electromagnetic energy to the at least one device; and wherein the at least one device is configured to be positioned along at least a first direction by an offset distance, the first direction being perpendicular to an energy propagation axis of the at least one energy directing element; wherein the offset distances of the at least one device of the first cluster of modules are determined so that frustums of the at least one device of the first cluster of modules overlap to define a first convergence volume; and wherein the offset distances of the at least one device of the second cluster of modules are determined so that frustums of the at least one device of the second cluster of modules overlap to define a second convergence volume; wherein the at least one device of the first cluster of modules are positioned in a first plane and wherein the first convergence volume comprises a frustum width at a first cluster perpendicular distance from the first plane.
8. The system of claim 7, wherein the first cluster perpendicular distance from the first plane is ((D.sub.Inf−D.sub.Max)*CA %)+D.sub.Max, wherein CA % is a percent between 0 and 100%, D.sub.Max is a distance between the first plane and a closest object in a scene, and D.sub.Inf is a distance where less than 1 pixel of disparity is possible between adjacent modules of the first cluster of modules.
9. The system of claim 7, wherein an adjustment to the offset distance of the at least one device of the first cluster of modules alters the frustum width of the first convergence volume.
10. The system of claim 7, wherein the at least one device of the first cluster of modules are configured to be positioned along a second direction, the second direction being perpendicular to the first direction.
11. The system of claim 10, wherein the at least one device of the first cluster of the modules are configured to be positioned along a third direction, the third direction being orthogonal to both the first and second directions.
12. The system of claim 11, wherein the second or third direction is parallel to the energy propagation axis of the respective at least one energy directing element of the first cluster of the modules.
13. The system of claim 7, wherein the at least one device of the second cluster of modules are positioned in the first plane and wherein the second convergence volume comprises a frustum width at a second cluster perpendicular distance from the first plane.
14. The system of claim 13, wherein the second cluster perpendicular distance from the first plane is ((D.sub.Inf−D.sub.Max)*CA %)+D.sub.Max, wherein CA % is a percent between 0 and 100%, D.sub.Max is a distance between the first plane and a closest object in a scene, and D.sub.Inf is a distance where less than 1 pixel of disparity is possible between adjacent modules of the second cluster of modules.
15. The system of claim 13, wherein an adjustment to the offset distance of the at least one device of the second cluster of modules alters the frustum width of the second convergence volume.
16. The system of claim 7, wherein the at least one device of the second cluster of modules are configured to be positioned along a second direction, the second direction being perpendicular to the first direction.
17. The system of claim 16, wherein the at least one device of the first cluster of the modules are configured to be positioned along a third direction, the third direction being orthogonal to both the first and second directions.
18. The system of claim 17, wherein the second or third direction is parallel to the energy propagation axis of the respective at least one energy directing element of the first cluster of the modules.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments are illustrated by way of example in the accompanying figures, in which like reference numbers indicate similar parts.
DETAILED DESCRIPTION
(17) In an embodiment, the capture system 400 may include at least S number of optical modules, such as optical modules 402, 404, and 406. While only optical modules 402, 404, and 406 are shown, the capture system 400 may include additional optical modules.
(18) The capture system 400 may be configured so that the number S is greater than or equal to (T.sub.D/M.sub.D)+1, in which S may be a number rounded up to the nearest integer. The ratio T.sub.D/M.sub.D may be greater than 1. The imaging sensors 472, 474, 476 may pair with an adjacent imaging sensor to define a maximum effective disparity, M.sub.D, which may be less than or equal to (D.sub.IA*P.sub.X*F.sub.L)/(S.sub.w*D.sub.MAX), in which: S.sub.w is an effective sensor width of the imaging sensors 472, 474, 476, the S.sub.w being defined along a first direction 490; P.sub.X is an effective pixel resolution of the imaging sensors 472, 474, 476 along the first direction 490; F.sub.L is an effective focal length of the lenses 482, 484, 486 of the optical modules 402, 404, 406; D.sub.IA is an interaxial distance between optical centers 422, 424, 426 of adjacent optical modules 402, 404, 406; and D.sub.MAX is a distance between the closest object 108 in the scene and the optical center 424 of the optical module 404 closest to the closest object 108.
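The sizing rule above can be sketched in Python. The parameter values below (interaxial distance, pixel resolution, focal length, sensor width, and closest-object distance) are hypothetical, chosen only to illustrate the arithmetic, and are not taken from the disclosure.

```python
import math

def max_effective_disparity(d_ia: float, p_x: int, f_l: float,
                            s_w: float, d_max: float) -> float:
    """Upper bound on M_D in pixels: (D_IA * P_X * F_L) / (S_w * D_MAX)."""
    return (d_ia * p_x * f_l) / (s_w * d_max)

def min_module_count(t_d: float, m_d: float) -> int:
    """Smallest S with S >= (T_D / M_D) + 1, rounded up to an integer."""
    return math.ceil(t_d / m_d + 1)

# Hypothetical rig: 50 mm interaxial, 4096-pixel effective resolution,
# 25 mm focal length, 36 mm sensor width, closest object at 2000 mm.
m_d = max_effective_disparity(d_ia=50.0, p_x=4096, f_l=25.0,
                              s_w=36.0, d_max=2000.0)   # ~71.1 px
s = min_module_count(t_d=200.0, m_d=m_d)                # 4 modules
```

For these inputs the disparity bound is about 71 pixels, so a target disparity range T.sub.D of 200 pixels calls for four modules.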
(19) In an embodiment, the imaging sensors 472, 474, 476 may have pixels that are not active for any reason, such as digital scaling, and the effective sensor width, S.sub.w, and the effective pixel resolution, P.sub.X, of the imaging sensors 472, 474, 476 may be understood to be defined by the active pixels only.
(20) It is to be appreciated that M.sub.D allows for the determination of the S number of sensors in the capture system 400 to allow for the interpolation of intermediate views from the captured views within the T.sub.D substantially without artifacts. Additionally, various physical configurations of the capture system 400 may be adjusted to achieve a combination of D.sub.IA, P.sub.X, F.sub.L, and S.sub.w that achieves M.sub.D, thereby allowing for the interpolation of intermediate views from the captured views within the T.sub.D substantially without artifacts. In an embodiment, to allow for the interpolation of intermediate views from the captured views within the T.sub.D substantially without artifacts, M.sub.D may be less than a percentage of a pixel resolution of a first intermediate view. For an intermediate view having a pixel resolution of at least 1K in one dimension, the percentage may be about 25%, or preferably about 10%, or most preferably about 1%.
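The percentage criterion in paragraph (20) amounts to a simple comparison; the M.sub.D value and view width below are hypothetical examples, not values from the disclosure.

```python
def disparity_within_budget(m_d: float, view_width_px: int,
                            percent: float) -> bool:
    """True when M_D is below the given percentage of the
    intermediate view's pixel resolution in one dimension."""
    return m_d < (percent / 100.0) * view_width_px

# For a hypothetical 4096-pixel-wide intermediate view and M_D ~ 71 px:
loose = disparity_within_budget(71.1, 4096, 25)  # 25% budget (1024 px)
tight = disparity_within_budget(71.1, 4096, 1)   # 1% budget (~41 px)
```

The same disparity bound can satisfy the 25% criterion while failing the most preferable 1% criterion, which is what drives tighter interaxial spacing or more modules.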
(21) The first direction 490 along which the effective pixel resolution is defined may be referred to as the x-direction, and a second direction 492 orthogonal to the first direction 490 may be referred to as the y-direction. In this geometry, the optical centers 422, 424, 426 may be fixed in both the x- and y-directions and define an array of the optical modules 402, 404, and 406. In an embodiment, the modules 402, 404, and 406 may have optical axes 432, 434, 436, respectively, extending along a third direction 494 referred to as the z-direction, which is orthogonal to the x- and y-directions and perpendicular to the surface of imaging sensors 472, 474, 476, respectively.
(22) In an embodiment, the surfaces of the imaging sensors 472, 474, 476 may be configured to be parallel to each other. For reasons to be discussed below in greater detail, in an embodiment, the imaging sensors 472, 474, 476 may be configured to translate along the x-, y-, or z-direction. In an embodiment, the lenses 482, 484, 486 may be rotatable about the x-, y-, or z-direction, resulting in three degrees of freedom plus focal adjustment.
(23) In an embodiment, the lenses of capture system 400 may have a maximum optical distortion of <OMax %, in which OMax may be a maximum distortion value to ensure rectilinear or near-rectilinear image acquisition. OMax may be user defined, automatically calculated, or predefined. The lenses may also have a maximum focal length differential of <TMax %, in which TMax may be a maximum differential value between the lenses' fields of view as captured during image acquisition, such that the optical characteristics of each individual module are corrected optomechanically within the tolerances established below. TMax may be user defined, automatically calculated, or predefined. The resulting captured images, given the above tolerances, may be individually calibrated (if necessary) through the use of calibration targets (or similar), including individual optical distortion correction displacement maps per module, to ensure rectilinear image output. Images may be calibrated optomechanically and/or through hardware and/or software image processing to ensure all captured perspective images contain the lowest possible distortion and variance. In an embodiment, the captured pixels may be aligned, before and/or after image processing calibration, within a tolerance of +/−TMax % (represented as a percent of pixel width of frame) at the corners of each frame at a distance greater than D.sub.Inf about the X image axis. D.sub.Inf may be the distance where less than 1 pixel of disparity is possible between any two adjacent optical modules and may be calculated as (F.sub.L*D.sub.IA*P.sub.X)/S.sub.W.
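D.sub.Inf, together with the convergence-plane distance recited in claim 2, can be sketched as follows. The inputs reuse the same hypothetical rig values as the earlier sketch, all in millimeters.

```python
def d_inf(f_l: float, d_ia: float, p_x: int, s_w: float) -> float:
    """Distance beyond which adjacent modules can see less than one
    pixel of disparity: D_Inf = (F_L * D_IA * P_X) / S_W."""
    return (f_l * d_ia * p_x) / s_w

def convergence_distance(d_inf_val: float, d_max: float,
                         ca_percent: float) -> float:
    """Perpendicular distance of the convergence plane per claim 2:
    ((D_Inf - D_Max) * CA%) + D_Max, with CA% between 0 and 100."""
    return (d_inf_val - d_max) * (ca_percent / 100.0) + d_max

# Hypothetical: 25 mm focal length, 50 mm interaxial,
# 4096-pixel width, 36 mm sensor width, closest object at 2000 mm.
di = d_inf(f_l=25.0, d_ia=50.0, p_x=4096, s_w=36.0)   # ~142222 mm
conv = convergence_distance(di, d_max=2000.0, ca_percent=50.0)
```

With CA % at 0 the convergence plane sits at the closest object (D.sub.Max); at 100 it sits at D.sub.Inf; the 50% setting above lands midway between the two.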
(24) The captured pixels may further be aligned, before and/or after image processing calibration, within a tolerance of +/−(TMax/TC) % (represented as a percent of pixel width of frame) at the center of each frame at a distance greater than D.sub.Inf, and within a tolerance of +/−TYMax % (represented as a percent of pixel width of frame) at the corners of each frame at a distance greater than D.sub.Inf about the Y image axis, in which: TMax %=PMax/P.sub.X; and TYMax %=TMax %*(P.sub.Y/P.sub.X). P.sub.Y may be the effective pixel resolution along the Y axis produced by the imaging sensor; PMax may be a number of pixels; and TC may be a threshold divisor.
(28) It is to be appreciated that the limiting factors for configuring D.sub.IA, P.sub.X, F.sub.L, and S.sub.w to achieve M.sub.D may include sensor module and/or electronics board width, lens outer diameter, sensor offset (for convergence or alignment), and mechanical design and other hardware considerations. This limiting factor can be expressed as the minimum distance possible between each optical module, DMin, where DMin=max(SPW+DS (or DS2), HW, LD). SPW refers to the sensor package width including all components, cables, connectors, boards and/or electronics; DS refers to the distance of maximum travel required for viewpoint convergence; HW refers to the maximum required width of the mechanical design for each individual capture module; LD refers to the lens outer diameter and includes any other optical components necessary for practical use.
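The spacing constraint can be expressed directly; the millimeter values below are hypothetical stand-ins for a particular module's hardware envelope, not values from the disclosure.

```python
def min_module_spacing(spw: float, ds: float, hw: float, ld: float) -> float:
    """Minimum interaxial spacing between optical modules:
    DMin = max(SPW + DS, HW, LD), i.e., the largest of sensor package
    width plus convergence travel, mechanical module width, and lens
    outer diameter."""
    return max(spw + ds, hw, ld)

# Hypothetical: 30 mm sensor package, 5 mm convergence travel,
# 40 mm mechanical width, 32 mm lens outer diameter.
d_min = min_module_spacing(spw=30.0, ds=5.0, hw=40.0, ld=32.0)  # 40.0 mm
```

Whichever of the three terms dominates (here the mechanical width) sets the floor on D.sub.IA, which in turn bounds the achievable M.sub.D through the relation in paragraph (18).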
(29) Referring to the exemplary capture system 800 as shown in
(30) In an embodiment as shown in the capture system 1000 in
(33) It should be noted that embodiments of the present disclosure may be used in a variety of optical systems and projection systems. The embodiment may include or work with a variety of projectors, projection systems, optical components, computer systems, processors, self-contained projector systems, visual and/or audiovisual systems and electrical and/or optical devices. Aspects of the present disclosure may be used with practically any apparatus related to optical and electrical devices, optical systems, presentation systems or any apparatus that may contain any type of optical system. Accordingly, embodiments of the present disclosure may be employed in optical systems, devices used in visual and/or optical presentations, visual peripherals and so on and in a number of computing environments including the Internet, intranets, local area networks, wide area networks and so on.
(34) Additionally, it should be understood that the embodiment is not limited in its application or creation to the details of the particular arrangements shown, because the embodiment is capable of other variations. Moreover, aspects of the embodiments may be set forth in different combinations and arrangements to define embodiments unique in their own right. Also, the terminology used herein is for the purpose of description and not of limitation.
(35) As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. Such an industry-accepted tolerance ranges from zero percent to ten percent and corresponds to, but is not limited to, component values, angles, et cetera. Such relativity between items ranges from approximately zero percent to ten percent. While various embodiments in accordance with the principles disclosed herein have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with any claims and their equivalents issuing from this disclosure. Furthermore, the above advantages and features are provided in described embodiments, but shall not limit the application of such issued claims to processes and structures accomplishing any or all of the above advantages.
Additionally, the section headings herein are provided for consistency with the suggestions under 37 CFR 1.77 or otherwise to provide organizational cues. These headings shall not limit or characterize the embodiment(s) set out in any claims that may issue from this disclosure. Specifically, and by way of example, although the headings refer to a “Technical Field,” the claims should not be limited by the language chosen under this heading to describe the so-called field. Further, a description of a technology in the “Background” is not to be construed as an admission that certain technology is prior art to any embodiment(s) in this disclosure. Neither is the “Summary” to be considered as a characterization of the embodiment(s) set forth in issued claims. Furthermore, any reference in this disclosure to “invention” in the singular should not be used to argue that there is only a single point of novelty in this disclosure. Multiple embodiments may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the embodiment(s), and their equivalents, that are protected thereby. In all instances, the scope of such claims shall be considered on their own merits in light of this disclosure, but should not be constrained by the headings set forth herein.