Plenoptic imaging device
09936188 · 2018-04-03
Assignee
- UNIVERSITÄT DES SAARLANDES (Saarbrücken, DE)
- Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (München, DE)
Inventors
- Oliver KLEHM (Saarbrücken, DE)
- Ivo Ihrke (Isle-St-Georges, FR)
- John Restrepo (Bordeaux, FR)
- Alkhazur Manakov (Talence, FR)
- Ramon HEGEDÜS (Talence, FR)
CPC classification
H04N23/55
ELECTRICITY
G02B27/0075
PHYSICS
International classification
Abstract
A plenoptic imaging device according to the invention comprises an image multiplier (130) for obtaining a multitude of optical images of an object or scene and a pick-up system (140) for imaging at least some of the multitude of images to a common image sensor (170) during the same exposure of the sensor.
Claims
1. An imaging device, insertable between a lens and a body of a camera, the camera comprising an image sensor, the imaging device comprising: an image multiplier using reflection for obtaining a multitude of optical images of an object or scene; a pick-up system for imaging at least some of the multitude of optical images to the image sensor during the same exposure of the image sensor, wherein the pickup system has a stoppable aperture, characterized in that when the imaging device is inserted, the pick-up system is arranged between the image multiplier and the image sensor of the camera; and said stoppable aperture of said pickup system regulates a depth of field of individual optical images of said multitude of optical images.
2. The imaging device of claim 1, further comprising: an array of optical filters for filtering the multitude of optical images imaged by the pick-up system.
3. The imaging device of claim 2, further comprising an imaging system for imaging the filtered images onto the imaging sensor.
4. The imaging device of claim 1, further comprising a pupil-matching lens to adjust for external optical device properties.
5. The imaging device of claim 2, wherein the filters comprise neutral density, multispectral, polarization or temporal filters.
6. The imaging device of claim 1, further comprising a diffuser screen.
7. A photo and/or video camera, comprising an imaging device according to claim 1.
8. A camera add-on, comprising an imaging device according to claim 1.
9. A system, comprising an imaging device according to claim 1 and an objective lens.
10. A computer-implemented method, comprising the steps of: receiving at least some images or image sequences acquired using an imaging device according to claim 1; reconstructing a digital image or video, based on the at least some images or image sequences; and outputting the digital image or video.
11. An imaging device, insertable between a lens and a body of a camera, the camera comprising an image sensor, the imaging device comprising: a prism coated with a reflective material for obtaining a multitude of optical images of an object or scene; a pick-up system for imaging at least some of the multitude of optical images to the image sensor during the same exposure of the image sensor, wherein the pickup system has a stoppable aperture, characterized in that when the imaging device is inserted, the pick-up system is arranged between the image multiplier and the image sensor of the camera; and characterized in that said stoppable aperture of said pickup system regulates a depth of field of individual optical images of said multitude of optical images.
12. The imaging device of claim 11, further comprising: an array of optical filters for filtering the multitude of optical images imaged by the pick-up system.
13. The imaging device of claim 12, further comprising an imaging system for imaging the filtered images onto the image sensor.
14. The imaging device of claim 12, wherein the filters comprise neutral density, multispectral, polarization or temporal filters.
15. The imaging device of claim 11, further comprising a pupil-matching system to adjust for external optical device properties.
16. The imaging device of claim 11, further comprising a diffuser screen.
17. A photo and/or video camera, comprising an imaging device according to claim 11.
18. A camera add-on, comprising an imaging device according to claim 11.
19. A system, comprising an imaging device according to claim 11 and an objective lens.
Description
BRIEF SUMMARY OF THE FIGURES
(1) These and other aspects and advantages of the invention will become more apparent when considering the following detailed description of various embodiments of the invention, in conjunction with the annexed drawing in which
DETAILED DESCRIPTION
(16) The original image of the standard camera lens 110 is projected onto a diffuser screen 120 placed in the location that would usually be occupied by the camera's sensor. This diffuser 120 is observed through an image multiplier 130, e.g. a mirror arrangement, which produces a number of copies of the original image that still carry the physical information of the plenoptic function, except for directional light variation. A pick-up imaging system 140 projects the information exiting the mirror system onto a filter plane 150. This projected image on the filter plane 150 has the dimensions of the original sensor, but contains spatially separated copies of the original image. These copies can be individually modulated by optical filters placed in the filter plane, thereby enabling, among other applications, snapshot high dynamic range, multispectral, and polarization imaging. It would be possible to place a custom sensor in this plane with the corresponding filters attached to its surface. In order to obtain a reversible add-on, the filtered results are instead projected onto the original camera sensor 170 by employing a 1:1 imaging system 160.
(17) The main lens 110 is imaging the scene onto a plane that would typically contain the camera sensor. According to the invention, a diffuser 120 is placed at this location. Its size matches what the main optics are optimized for, as important imaging characteristics like the field of view directly depend on it. The diffuser 120 acts as a rear-projection screen, i.e., observing it from the left shows the image that would be observed by a sensor at this location. Intuitively, this image appears identical when viewed from different directions, as the diffuser of the present embodiment removes all directional variation via its bidirectional transmittance distribution function (BTDF), but otherwise all physical attributes of the plenoptic function are maintained.
(18) The image multiplier 130 uses multiplexing to transfer image content from the diffuser into the directional component of the plenoptic function. It is important that the diffuser lobe is wide enough to accommodate the different viewing directions that create the image copies; otherwise, vignetting occurs. However, if the lobe is too wide, stray light is spread into the system. The diffuser scattering profile should therefore be adapted to the maximum observation angle (see figure) for best performance and light efficiency of the system.
(19) In addition, a pupil matching lens may be used to adapt the entrance pupil of the image multiplier to the exit pupil of the main lens. In the present embodiment, this lens homogenizes the illumination picked up from the entrance plane in the case of a weak diffuser for which strong and directionally varying illumination may otherwise manifest itself in a non-uniform transmission of the system.
(20) Once the image is visible on the diffuser screen 120, the image multiplier 130 copies it, e.g. by means of mirror reflections. A kaleidoscope with parallel walls is a suitable multiplier, resulting in a virtual plane of image copies.
(21) Since the width and the height of the image multiplier are defined by the sensor size, the only variable is its length along the optical axis. This length is determined by the 1:N minification that is to be achieved by the pickup imaging system and by its focal length f_ps. The effect of the pickup imaging system is that N×N views of the diffuser are compressed to the size of a standard sensor image and made accessible as a real image in the filter plane. Following geometrical optics, the relation between image multiplier length N·z, number of image copies N, and focal length of the pickup system f_ps is approximately given by the thin lens formula
(22) 1/f_ps = 1/(N·z) + 1/z. (1)
(23) In practice, this means that a short focal length f_ps and a low image multiplication factor N lead to short lengths of the image multiplier.
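As an informal illustration (a reader's sketch, not part of the patent; the function names are ours), the thin-lens relation above can be solved in both directions: for the pickup focal length given the multiplier length, and for the multiplier length given the focal length.

```python
def pickup_focal_length(multiplier_length_mm: float, n: int) -> float:
    """Solve 1/f_ps = 1/(N*z) + 1/z for f_ps, where the object distance is
    the multiplier length N*z and z is the image distance of the pickup system."""
    z = multiplier_length_mm / n
    return (n * z) / (n + 1)  # f_ps = N*z / (N+1)

def multiplier_length(f_ps_mm: float, n: int) -> float:
    """Invert the same relation: N*z = (N+1) * f_ps."""
    return (n + 1) * f_ps_mm
```

With the prototype's 300 mm kaleidoscope and N = 3, this gives a 75 mm pickup focal length; as paragraph (23) states, a shorter f_ps or a lower N shortens the multiplier.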
(24) Another aspect of the design is the aperture of the pickup lens. In conjunction with the diffuser lobe, it determines the light efficiency of the system; hence, it should be chosen as large as possible. A large aperture for the pickup system does not involve a loss of image quality in the case of a planar object, i.e. the entrance plane. In general, however, it is difficult to obtain a large aperture for short focal length lenses, as they become bulky and have a strong curvature, leading to significant deviations from the geometric optics model. When setting the image multiplier length, a tradeoff exists between the aperture and the focal length of the pickup system. An additional effect of the length parameter is the observation angle under which the different copies of the entrance plane are seen. A larger length leads to smaller observation angles and therefore to weaker diffusion requirements.
(25) For example, the field of view of the pickup system may be dimensioned according to how many copies need to be seen, depending on the sensor size and the focal length of the pickup system. Then, the remaining parameter, the diffuser density, may be set accordingly.
(27) More specifically, the maximum observation angle is given by
(28) tan θ_max = (N·l_f + a_ps) / (2·l_mt),
where l_f is the original sensor (and therefore the diffuser) size, N is the targeted number of image copies, a_ps the aperture of the pickup lens, and l_mt the length of the image multiplier. The angle can be reduced by a longer image multiplier, a low number of image copies, a smaller sensor size, and, to a minor effect, by reducing the aperture of the pickup system.
(29) For the best optical quality and geometric accuracy, the multiplier can be made from glass, utilizing the effect of total internal reflection to create the mirror images. In this case, its length is approximately multiplied by the refractive index of the glass, which can be derived by considering two planar air-glass and glass-air interfaces. The condition on the maximum observation angle does not change: since the diffusion lobe refracts into the image multiplier, it narrows by the same amount as the maximum observation angle.
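The dependencies described above can be checked numerically. This sketch is ours and assumes the reconstructed form tan θ_max = (N·l_f + a_ps)/(2·l_mt) for the observation angle, together with Snell's law for the narrowing inside a glass multiplier; it is an illustration, not the patent's derivation.

```python
import math

def max_observation_angle_deg(l_f_mm: float, n: int,
                              a_ps_mm: float, l_mt_mm: float) -> float:
    """Assumed form of the maximum observation angle: grows with sensor size
    l_f and copy count N, shrinks with multiplier length l_mt; the pickup
    aperture a_ps contributes only a minor term."""
    return math.degrees(math.atan((n * l_f_mm + a_ps_mm) / (2.0 * l_mt_mm)))

def angle_inside_glass_deg(theta_deg: float, n_glass: float = 1.5) -> float:
    """Snell refraction at the air/glass interface: the diffusion lobe and the
    observation angle both narrow by the same factor inside the glass."""
    return math.degrees(math.asin(math.sin(math.radians(theta_deg)) / n_glass))
```

The monotonic behavior matches paragraph (28): doubling the multiplier length roughly halves the angle, while stopping down the pickup aperture changes it only slightly.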
(30) The system generates a real image of N×N copies of the physical image that a standard camera would have captured and it makes these copies accessible in the filter plane where an array of optical filters allows gaining access to the different plenoptic dimensions.
(31) In order to prevent the image in the filter plane from diverging in the direction of the sensor, causing vignetting, the exit pupil of the image multiplier system may be adapted to the entrance pupil of the 1:1 imaging system. In the present embodiment, a pair of plano-convex lenses is inserted at the filter plane that together form an additional optical relay system between the aperture plane of the pickup system and that of the 1:1 imaging system.
(32) The 1:1 imaging system 160 projects the N×N optically pre-filtered copies of the diffuser-plane image onto the sensor 170 that integrates the incoming photons. Since 1:1 imaging occurs at two focal lengths, the system is dimensioned with respect to the focal length f of the 1:1 imaging lens. The choice of placing the pickup system 140 at a distance of 2f from the filter plane is determined by keeping all imaging planes of the system equal in size to the original sensor dimensions. The overall length of the system is therefore (6+2N)·f and the individual lens components have focal lengths of 2f for the pair of the plano-convex lenses and 2N/(N+1)·f for the pickup lens.
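The dimensioning rules of this paragraph collapse into a few lines of arithmetic. The following sketch (our naming, not the patent's) tabulates the component focal lengths and total length for the filter-based design:

```python
def filter_design_dimensions(f_mm: float, n: int) -> dict:
    """Component layout of the filter-based design for N x N copies and a
    1:1 imaging focal length f, following the stated relations (all in mm)."""
    return {
        "total_length": (6 + 2 * n) * f_mm,    # overall optical system length
        "planoconvex_f": 2 * f_mm,             # each filter-plane pupil-matching lens
        "pickup_f": 2.0 * n / (n + 1) * f_mm,  # pickup lens focal length
    }
```

With f = 25 mm and N = 3 this reproduces the 300 mm system length and the 50 mm / 37.5 mm lens focal lengths quoted in the dimensioning discussion later in the description.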
(34) By omitting the diffuser component, one preserves the directionality of the plenoptic function on the entrance plane and can sample it in the sub-images. A difficulty is the divergent nature of the image cast by the main lens onto the entrance plane (see figure).
(35) This problem is circumvented by introducing a pupil-matching system that images the aperture plane of the main lens onto the aperture plane of the pickup system. The mirror operation introduced by the image multiplier generates virtual viewpoints through the mirrored pickup apertures, which are imaged onto stable regions of the main lens aperture (see figure).
(36) An additional modification is to equip the pickup system with an aperture. This way, the depth-of-field of the individual light field views can be regulated at the expense of light efficiency. This option is not available in any existing integrated light-field camera design; e.g., in a lenslet-based light-field camera [Ng et al. 2005] this effect cannot be controlled, since each of the micro-lenses would have to be equipped with an individual aperture, all of which would have to be moved in a synchronized manner.
(39) The diffuser of the prototype has a thickness of 1 mm and polarization-preserving properties, since it was designed for polarization-based 3D rear-projection screens (ScreenTech GmbH, material type ST-Professional-DCF). The diffuser scattering profile falls to 50% transmittance at about 20° off-axis, which is well above the maximum observation angle of the system (12.95° for a 15 mm pickup lens aperture). The requirements for the elimination of the directional light variation are thus met.
(40) To create copies of the imaged scene, a rectangular kaleidoscope, 36 mm × 24 mm × 300 mm in size, was employed. It was made from optical front-surface mirrors and constructed by an artisan kaleidoscope maker (Kaleidoskope GmbH). Since an individual pixel covers about 18 μm of diffuser surface, a perfectly parallel arrangement of the mirrors is necessary. Due to misalignment, the kaleidoscope suffers from some imaging imperfections that most prominently show in the corner views of the kaleidoscope. In an alternative implementation, a rectangular prism utilizing total internal reflection can be used as an accurate image multiplier.
(41) While an ideal design features two plano-convex lenses with the filter array placed in the aperture of the resulting effective bi-convex lens, in practice this arrangement is more easily implemented by a single bi-convex lens at a small distance from the filter array. Shifting the filter array out of the aperture has the additional benefit of masking imperfections in the optical filters themselves: manual construction of the filter array results in unavoidable scratches and other minor deviations from a perfectly planar optical filter of infinite width. If the filter array were placed directly into the aperture of the filter-plane pupil-matching arrangement, these imperfections would readily become apparent in the recorded images, whereas they are now blurred and less noticeable.
(42) The 1:1 imaging system of the prototype is implemented using a Canon 100 mm, f/2.8 macro lens. This results in a distance of about 300 mm between the lens and the filter plane. In addition, the distance between the pickup system and the filter plane has to be adjusted to this length to ensure 1:1 imaging, preserving the overall width and height of the imaging system to match that of a full-frame sensor, i.e. 36 mm × 24 mm. This leads to an overall system length of about 1000 mm for the prototype system, including the camera and the main lens.
(43) The pre-processing procedure consists of registering the 3×3 sub-images recorded by the sensor with one another. Since the images are located in the entrance plane and are coincident, a single geometric calibration procedure suffices for all applications presented below. The sub-images suffer from misregistration on the sensor primarily for two reasons: an imperfect arrangement of the mirror planes due to manual construction, and geometric/chromatic aberrations induced by the prototypical optical setup.
(44) These imperfections are addressed in two steps. While keeping the diffuser in place and removing the main lens, a transparency slide with a checkerboard pattern is placed at a close distance to the diffuser. The slide is then illuminated with a far-away point light source, thereby projecting the pattern onto the diffuser. This pattern captures the distortions introduced by misalignments of the mirrors. The corner images of the 3×3 matrix of views undergo two levels of reflection; these images show a noticeable disagreement along their diagonals, and each half of these images is therefore compensated separately.
(45) This first compensation is performed by estimating homographies between the outer and the central views and aligning all images to the central view.
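The homography estimation behind this compensation step can be sketched with a plain direct linear transform (DLT). This is our illustration of the standard technique, not the patent's implementation, and a production pipeline would use point detection and robust estimation on top of it.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H * src,
    from at least 4 point correspondences given as (N, 2) arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pts):
    """Map (N, 2) points through h with perspective division."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ h.T
    return p[:, :2] / p[:, 2:3]
```

In the described procedure, one homography per outer view would be estimated from checkerboard corners and used to warp that view onto the central one.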
(46) Residual registration imperfections are caused by geometrical and chromatic aberrations that are addressed with a wavelet noise pattern [Cook and DeRose 2005], using a transparency slide as for the checkerboard. The distortions are estimated via an optical flow [Horn and Schunck 1981] with a strong smoothness prior.
(47) To register the outer views to the central image, the composite of all displacements, i.e. homography-based warping and residual optical-flow compensation, is used. In the filter-based system, this process matches the sub-images. In the light-field imaging case, the views are brought into agreement as expected from the geometrical construction: horizontally neighboring views show only horizontal parallax, vertically neighboring views only vertical parallax, and diagonally displaced views show combinations thereof.
(48) The system also shows radiometric distortions, i.e. vignetting is observable throughout the geometrically registered images. To measure the effect, the main lens is reintroduced into the system and a strong diffuser is added, which is illuminated by a far-away diffuse light source. The resulting image is used to divide out vignetting effects caused by the optical system.
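The flat-field division described here is a one-liner in practice. A minimal sketch (our naming), assuming the calibration image has already been registered to the sub-images:

```python
import numpy as np

def devignette(image, flat, eps=1e-6):
    """Divide out the vignetting profile measured from a uniformly lit,
    strongly diffused scene; 'flat' is the calibration image."""
    gain = flat / float(flat.max())          # normalize to peak transmission
    return image / np.maximum(gain, eps)     # avoid division by (near) zero
```

Pixels that received only half the light in the flat field are boosted by a factor of two, restoring a radiometrically uniform image stack.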
(49) As a result of these pre-processing steps, a stack of images I_i is obtained that are optically pre-filtered, as if taken in a time-sequential manner:
s^j = ∫∫∫ l^j(x, y, λ, ν, t) f_i(λ, ν) dλ dν dt, j = 0 . . . 3,
I_i = {[1 0 0 0]·M_i·[s^0, s^1, s^2, s^3]^T}_0^1. (2)
(50) The formulation includes polarization parameters. The plenoptic function l consists of four parts, the four Stokes parameters s^j, with the following definitions:
l^1 = E_X E_X* + E_Y E_Y*, l^2 = E_X E_X* − E_Y E_Y*, l^3 = E_X E_Y* + E_Y E_X*, l^4 = i(E_X E_Y* − E_Y E_X*),
where E_X and E_Y are the two orthogonal plane-wave components of the electric field E and * signifies complex conjugation. The optical filters are denoted by sets {M_i, f_i}, consisting of a standard optical filter f_i and a Mueller matrix M_i. For the plenoptic dimensions, wavelength is denoted as λ, directions as ν, and time as t. Multiplication by the [1 0 0 0] vector extracts the irradiance measurement that is registered by the sensor. The clamping operation {·}_0^1 models the saturation limit imposed by a real sensor. Not all of the filter dimensions (wavelength, polarization, and direction) are used simultaneously in the following; rather, each of the described application areas uses one dimension at a time.
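The measurement model of Eq. 2 can be exercised numerically. The sketch below (ours, for illustration) uses the textbook Mueller matrix of an ideal linear polarizer as an example filter and applies the [1 0 0 0] projection and sensor clamp:

```python
import numpy as np

def linear_polarizer_mueller(theta_rad):
    """Mueller matrix of an ideal linear polarizer at angle theta."""
    c, s = np.cos(2 * theta_rad), np.sin(2 * theta_rad)
    return 0.5 * np.array([
        [1,     c,     s, 0],
        [c, c * c, c * s, 0],
        [s, c * s, s * s, 0],
        [0,     0,     0, 0],
    ])

def measured_intensity(stokes, mueller):
    """Eq. 2 for one filter: filter the Stokes vector, project onto
    [1 0 0 0], and clamp to the sensor's [0, 1] range."""
    return float(np.clip((mueller @ stokes)[0], 0.0, 1.0))
```

Unpolarized light loses half its irradiance through any linear polarizer, and horizontally polarized light is extinguished by a vertical polarizer, as expected.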
(52) For HDR imaging, the filter array consists of 3×3 neutral density filters, and the optical filters in Eq. 2 become {1, c_i}, i = 1 . . . 9, with a unit Mueller matrix and constant spectral filters f_i(λ) = c_i. A set with transmittance values of {1.0, 0.5, 0.25, 0.126, 0.063, 0.032, 0.016, 0.008, 0.004} is chosen, yielding a dynamic range improvement of about 8 f-stops over the sensor dynamic range. These images have a verified linear response and can be merged by standard mechanisms [Debevec and Malik 1997]. For video operation, the camera applies an adaptive response curve. The radiometric response is estimated by a variant of Mitsunaga and Nayar's [1999] polynomial technique that estimates the response from a series of photographs of a MacBeth color checker while enforcing curvature constraints on the final curve [Ihrke 2012].
(53) For multispectral imaging, the system is equipped with 33 broadband spectral filters as manufactured by Rosco Labs (Roscolux swatchbook). The filters in equation 2 become {1, c.sub.i}, i=1 . . . 9.
(54) Broadband spectral filters are used because the imaging system features a color filter array. Imaging 9 spectral filters through three different Bayer filters each results in an overall measurement of 27 broadband filtered images. Using narrow band filters would only yield 9 different measurements since the Bayer filters are largely orthogonal. The filters in the array are {Cyan #4360, Yellow #4590, Red #26, Orange #23, Green #89, Blue-Green #93, Lavender #4960, Blue #80, Magenta #4760}. Their spectral response was measured using a spectrometer (Thorlabs CCS 200).
(55) For spectral calibration of the Bayer filters, a scene containing a MacBeth color checker is illuminated with a high pressure mercury vapor lamp with a previously acquired spectrum s.sub.mv. In the multispectral imaging case, Eq. 2 can be simplified to
(56)
where f.sup.r|g|b() denotes the spectral sensitivity of the camera for the R, G, and B channels, f.sub.i() are the known spectra of the Roscolux filters, and s is the spectrum of the light source. In this case, the plenoptic function l.sub.(, y, ) only depends on the spectral scene reflectance whose spectrum l.sub.(, y, ) is known through collected measurements obtained from http://www.babelcolor.com/main_level/ColorChecker.htm. The spectrum of the light source is s.sub.mv. Therefore, all components of the integral in Eq. 3 except for the Bayer filter responses f.sup.r|g|b() are known and can be estimated by an expansion into basis functions similar to Toyooka and Hayasaka [Toyooka and Hayasaka 1997]. A set of 50 overlapping Gaussians distributed in the range between 400 and 700 nm is chosen as basis. The optimization problem uses images through all 116 Roscolux filters and enforces a non-negativity constraint via quadratic programming.
(57) Once the spectral response of the sensor is known, arbitrary scenes can be imaged. If the spectrum of the light source s() is known, a neutral reflectance spectrum can be recovered, otherwise, only the product l.sub.(, y, )s() is accessible. The scene spectra are recovered similar to spectral calibration of the sensor, except that now the spectral sensitivities f.sup.r|g|b() of the sensor are known whereas the scene spectrum l.sub.(, y, ) or its product with the illumination spectrum l.sub.80 (, y , )s() are estimated. In this case, spectral white balancing, similar to RGB white balancing can be performed by dividing all spectra by the spectrum of a known white scene patch.
(58) In contrast to the spectral calibration step, image spectra have to be estimated for every pixel and quadratic programming becomes too costly. Instead, the non-negativity constraint may be dropped and a least squares problem solved per-pixel and negative values clamped to zero. For improved regularization, a PCA basis as in [Toyooka and Hayasaka 1997] is used. The performance of the prototypical multispectral imaging pipeline was verified by imaging a Gretag Macbeth color checker under known illumination. The reconstructed spectral reflectance agrees well with collected data (babelcolor), see
(59)
(60)
(61)
(62) Hereby, the camera is made sensitive to the polarization state of light and acts as a pixel-by-pixel polarization state analyzer. To this end, at least three independent measurements have to be carried out and four if the full polarization state that also includes the circular polarization component is to be retrieved [Goldstein 2003].
(63) The scope of the prototype was restricted to linear polarization imaging, since, apart from some special cases of, e.g., circular dichroism and circularly-polarized luminescence, significant levels of circular polarization are rarely encountered in nature [Hegedus et al. 2006].
(64) For this purpose, five linear sheet polarizers with differently orientated transmission axes {0, 36, 72, 108, 144} were placed into the filter array of the system. In terms of Eq. 2, the filters become {M.sub.i, 1}, i=1 . . . 5, where 1 denotes an optical filter f.sub.i=1. The four corners of the array were left empty and the corresponding sub-images were ignored. The setup still provides more measurements per pixel than needed. Those images produced by second-order reflections are avoided, which are more prone to optical aberrations and complex polarization modulation.
(65) When only linear polarization is measured, the first three Stokes components s.sup.j, j=0 . . . 2 can be retrieved and the fourth circular component s.sup.3, if any, is considered as part of the unpolarized component s.sup.0 of the light. Correspondingly, 33 Mueller matrices are employed, which is a common procedure in linear polarimetry [Neumann et al. 2008]. To determine the Stokes vectors, the 35 matrix W is constructed whose consecutive rows are identical to the upper row of the respective Mueller matrices M.sub.i, i=0 . . . 4.
(66) For each pixel, the measured intensities through the five polarization filters are stored in a vector p, the Stokes vector s=(s.sup.0, s.sup.1, s.sup.2) is obtained by least-squares regression
s=(W.sup.TW).sup.1W.sup.T.sub.p. (4)
(67) Some additional care is needed because the filter array is placed inside the optical system, whose reflections and scattering affect the polarization state of light. The total influence of the system including that of the polarization filters can be characterized by an effective Mueller matrix M.sub.sys(, y), which is spatially dependent. The most prominent effect is caused by the mirrors of the image multiplier. This pixel-wise Mueller matrix is determined by a calibration procedure that uses a ground truth polarimeter to obtain the Stokes vectors of 6 scenes with homogenous (i.e. spatially non-dependent) polarization states and relating these values to the ones observed by the system. The linear relation s.sub.sys.sup.(i)(, y)=M.sub.sys(, y)s.sub.gt.sup.i, i=1 . . . 6 is then solved in a least-squares sense for M.sub.sys. Here, s.sub.sys.sup.(i) are the Stokes parameters measured by the system, whereas s(i) gt are the Stokes parameters measured by the ground truth polarimeter. In practice, 30 differently polarizer/analyzer pair images are used to perform the polarization calibration.
(68) The low angular resolution of the 33 light fields necessitates an angular up-sampling scheme in order to perform convincing refocusing and view-point changes at a reasonable distance outside the focal plane of the system. In practice, the observed parallax in the system can exceed 100 pixels. The spatial resolution of the images returned by the system is, however, large with a resolution of approx. 18001200 pixels for each sub-view. The angular interpolation problem may be addressed by first performing a depth estimate, and a parallax-based morphing operation. This morphing also makes view extrapolation possible, which enables an aperture synthesis beyond the limits of the main lens.
(69) Optical flow techniques and the adaptation of Horn-Schunck [Horn and Schunck 1981] are applied to estimate depth.
(70) Modifications consist in introducing a coupling between the flow variables of different views. It is well known that optical flow suffers from the so-called aperture problem, i.e. two variables are sought at every image location, but only a single constraint is available. In the case of light-field imaging, the flow is known to be constrained to the directions of the epipolar lines between views.
(71) Moreover, the structure of these epipolar lines is very regular due to the fixed spacing between the virtual views. The optical-flow vectors are therefore replaced by depth estimates d(, y) that couple the flow estimates in all surrounding light field views via the depth-induced parallax d(, y).Math.[u.sub.i, v.sub.i], where the vector [u.sub.i, v.sub.i] is a constant for every view I.sub.i and describes the slope of the epipolar lines. Due to the constraints of the prototyped setup, one can safely assume the epipolar lines to be parallel in every sub-view.
(72) This depth estimation is based on the optical flow brightness constancy assumption and, therefore, does not estimate the real scene depth. However, it computes an estimate of apparent depth. Since one is not interested in depth per se, but in its view interpolating properties, this approach is reasonable for angular light-field upsampling.
(73) The view interpolation and extrapolation, a depth map is estimated for each of the sub-views, which allows to generate a new view by morphing the sub-views I.sub.i according to the parallax displacement d.Math.[u.sub.i, v.sub.i].sup.T. The main challenges for a high-quality interpolation are a proper handling of the occlusion boundaries, the handling of multiple pixels of an input view mapping to the same destination pixel, and the avoidance of interpolation holes by forward warping. The proposed morphing uses forward and backward warping steps followed by a blending procedure.
(74) Each of the nine sub-views may contain exclusive information not available in any other subview but valuable for the interpolated view. However, warping all views can lead to blur because the depth estimation is only approximate. Using the four neighboring views of the interpolated position on the main lens aperture is a good tradeoff. A similar scheme can be used for extrapolation; using the two (for extrapolation in u or v) or the one closest view (for extrapolation in u and v).
(75)
(76) The figure shows that the inventive extrapolation solution allow to virtually extend the aperture of the main lens to generate increased parallax and extremely shallow depth-of field effects.
(77) The size of the overall system is determined by the distance between the sensor and the first imaging element of the optical design. In SLR type cameras, this distance is bounded from below by the moving mirror of these cameras and can be assumed to be around 50 mm for a full-frame sensor. In the filter-based design, this requirement determines the focal length f of the 1:1 imaging system and with it the overall length of the optical system as (6+2N).Math.f if NN copies are to be imaged.
(78) The focal length f is therefore fixed to 25 mm. With N=3 (9 sub-images), a length of 300 mm is needed. The diameters of the 1:1 imaging lens and the pickup lens determine the maximum pickup aperture and are therefore uncritical. The pupil matching lenses in the filter plane and in the entrance plane, however, have to cover the full sensor size. Fortunately, these lenses have focal lengths of 50 mm each, given the previous considerations of a full-frame sensor and a 50 mm distance between the sensor and the optical system. All required lenses would therefore be available as stock parts with a reasonable optical performance.
(79) For the light field design, the 1:1 imaging and the filter plane optics can be omitted. The minimum distance of the system is now determined by the closest position that the pickup lens can assume.
(80) Given these considerations, z in Eq. 1 equals 50 mm and the overall system length is 4.Math.z=200 mm for 33 copies.
(81) Overall, the system is suitable for imaging a low number of copies with its size increasing linearly for a larger number. The system size also scales linearly with the sensor size of the camera being employed. Smaller units could thus be designed for smaller sensors.
(82) It is also possible to remove the mirror in SLR cameras, since an optical viewfinder is not strictly necessary for computational cameras, miniaturizing the design even further.