Rainbow particle imaging velocimetry for dense 3D fluid velocity imaging
10782311 · 2020-09-22
Assignee
Inventors
- Wolfgang Heidrich (Thuwal, SA)
- Jinhui Xiong (Thuwal, SA)
- Xiong Dun (Thuwal, SA)
- Ramzi Idoughi (Thuwal, SA)
- Sigurdur Tryggvi Thoroddsen (Thuwal, SA)
- Andres A. Aguirre-Pablo (Thuwal, SA)
- Abdulrahman B. Aljedaani (Thuwal, SA)
- Erqiang Li (Thuwal, SA)
CPC classification
- G01S17/58 (PHYSICS)
- G01P5/26 (PHYSICS)
International classification
- G01P3/36 (PHYSICS)
- G01P5/00 (PHYSICS)
- G01P5/26 (PHYSICS)
Abstract
Imaging of complex, non-stationary three dimensional (3D) flow velocities is achieved by encoding depth into color. A flow volume 22 is illuminated with a continuum 40 of light planes 42, whereby each depth corresponds to a respective light plane 14 having a specific wavelength of light. A diffractive component 46 in the optics of the camera 24, which records the trajectories of illuminated particles 20 within the flow volume 22, ensures that all light planes 42 are in focus simultaneously. The setup permits a user to track 3D trajectories of particles 20 within the flow volume 22 by combining two dimensional (2D) spatial and one dimensional (1D) color information. For reconstruction, an image formation model for recovering stationary 3D particle positions is provided. 3D velocity estimation is achieved with a variant of a 3D optical flow approach that accounts for both physical constraints and the color (rainbow) image formation model.
Claims
1. A particle image velocimetry system comprising: a light source configured to emit white light; a filter configured to receive the white light from the light source and to generate a plurality of light sheets into a flow volume containing a quantity of fluid and at least one particle, each light sheet having a single wavelength, wherein each one of the light sheets has a wavelength that is different from a wavelength of another light sheet; a detection apparatus configured to detect reflections, which are associated with the plurality of light sheets, from the at least one particle within the flow volume; and a hybrid diffractive-refractive element located between the flow volume and the detection apparatus, the hybrid diffractive-refractive element being configured to focus the reflections associated with the plurality of light sheets, from the flow volume, to a same plane in the detection apparatus.
2. The particle image velocimetry system of claim 1 wherein the hybrid diffractive-refractive element is associated with the detection apparatus for providing a wavelength selective focus for the detection apparatus, wherein the detection apparatus is in focus with the wavelengths of all of the light sheets simultaneously.
3. The system of claim 1, further comprising: a collimator for directing the white light from the light source to the hybrid diffractive-refractive element.
4. The system of claim 1 wherein the wavelength of at least one light sheet varies linearly from the wavelength of an adjacently positioned light sheet.
5. The system of claim 1 wherein the plurality of light sheets are arranged to form a continuum of light sheets.
6. The system of claim 5 wherein each of the light sheets is associated with a respective wavelength of light and the wavelengths of the individual light sheets dimensionally vary linearly over the continuum of light sheets.
7. The system of claim 1 wherein the detection apparatus also includes means for recording the reflections of light from the at least one particle.
8. The system of claim 7 wherein the detection apparatus is a camera.
9. The system of claim 1, wherein the hybrid diffractive-refractive element includes a Fresnel phase plate.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
(17) As illustrated in the accompanying drawings, applicant's Rainbow PIV system provides dense three dimensional (3D) imaging of fluid velocities.
(19) A comparison between the setups for a regular PIV and the Rainbow PIV is shown in the drawings.
(20) In contrast to the conventional PIV arrangement, applicant's Rainbow PIV system, shown to the right in the comparison figure, illuminates the flow volume 22 with a continuum 40 of light planes 42, so that each depth corresponds to a specific wavelength of light.
(21) More specifically, in the aspect of applicant's invention shown in the drawings, a light source 14 emits white light, which is collimated by a collimator 16 and split into a continuum of light sheets 42 that illuminate the flow volume 22.
(22) A second part of the optical system, namely the diffractive optical element 46, may include a Fresnel phase plate 50, to provide a wavelength-selective focus. An example of such a Fresnel phase plate is shown in the drawings.
(23) Turning now to the method for processing the data extracted from operation of the hardware described above, the method involves two primary steps. In the first step, a time-sequenced series of images of a flow volume containing at least one particle resident within a fluid is provided, wherein depth information of the at least one particle within the flow volume is color coded into the images as a consequence of the particular wavelength of the light reflected from the at least one particle. In the second step, a three dimensional (3D) trajectory of the at least one particle is obtained by analyzing the two dimensional (2D) spatial motion of the at least one particle in the plane of the images and the change in color of the light reflected from the at least one particle. The step of providing a time-sequenced series of images is a consequence of operating the setup described above over a predetermined period of time, with photographs of the flow volume being taken at selected time intervals. Once the user has obtained the images, analyzing them involves two primary tasks: first, a reconstruction task to estimate each particle 20's position from the observed color associated with that particle 20, and second, tracking those particles over time to obtain a 3D velocity field. The inventive method permits a full 3D, three-component (3D-3C) measurement. The steps of the inventive method are shown in greater detail in the drawings.
(24) This method is made more complicated by the fact that the camera 24 captures only red-green-blue (RGB) information, and not a full hyperspectral image, which makes the position reconstruction less robust. To tackle this problem, an iterative approach is employed. An initial position estimate for each time step can be used to obtain a first estimate of the time-dependent velocity field. This velocity field can be used to refine the position estimate by adding physical priors that tie together all the time steps. These two iterative steps are described in detail below.
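As a concrete illustration, the alternation between these two iterative steps can be sketched as follows. This is a minimal sketch only; the function names reconstruct_positions and estimate_velocity are hypothetical stand-ins for the ADMM solvers developed below.

```python
# Minimal sketch of the outer refinement loop described above.
# reconstruct_positions and estimate_velocity are hypothetical stand-ins
# for the solvers of Algorithms 1 and 2 below.
def rainbow_piv(images, n_outer=3):
    flows = None  # no velocity prior is available on the first pass
    for _ in range(n_outer):
        # estimate occupancy volumes p_t from the RGB frames; an existing
        # flow estimate enters through the particle motion consistency prior
        volumes = [reconstruct_positions(img, flows) for img in images]
        # estimate the time-dependent velocity fields from the volumes
        flows = estimate_velocity(volumes)
    return volumes, flows
```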
(25) To estimate the positions of particles within a flow volume, an inverse problem is proposed for recovering particle locations in the 3D spatial domain. We start by introducing an image formation model that relates the particle positions to the observed image. Three regularization terms are then added to formulate an optimization problem, which can be efficiently solved with guaranteed convergence, thereby addressing our ill-posed inverse problem.
(26) As mentioned above, the illumination in the measurement volume is designed to include a continuum of light sheets 42 (shown in the drawings), whereby each depth within the volume is illuminated by light of a distinct wavelength.
(27) Since we are operating with incoherent light, the imaging process of the optical system can be modeled as a set of point spread functions (PSFs), one for each color channel: $g_C(x, \lambda)$, where $C \in \{\text{red}, \text{green}, \text{blue}\}$. With these definitions, the image formation model is:
$$i_C(x) = \int_{\Lambda}\!\int_{X} g_C(x - x', \lambda)\; i_r(x', \lambda)\; P(x', \lambda)\; dx'\, d\lambda, \tag{1}$$
where $i_C(x)$ are the color channels of the captured RGB image, and $i_r(x', \lambda)$ is the corresponding spectral distribution incident on the image sensor. The spatial integral corresponds to a convolution representing a potentially imperfect focus, while the wavelength integral represents the conversion from a spectral image to an RGB image encoding 3D particle positions.
(28) After discretization, we can formulate the convolution of PSFs and reflected light intensity as a matrix $A \in \mathbb{R}^{3N \times NL}$, where $N$ is the number of image pixels, $L$ is the number of discretization levels along the wavelength coordinate, and the factor of 3 refers to the three color channels. Moreover, $i_t \in \mathbb{R}^{3N}$ represents the observed image at time $t$, and $p_t \in \mathbb{R}^{NL}$ is the occupancy probability of a specific voxel, given $i_t$. Hence, the distribution of particles at each time step of a video can be retrieved by solving the linear system
$$A\,p_t = i_t. \tag{2}$$
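In practice the matrix $A$ need not be formed explicitly; it can be applied as a per-channel, per-depth convolution. The following sketch illustrates this matrix-free forward operator under the assumptions that the occupancy volume is stored as one slice per wavelength level and that the PSFs $g_C$ have been calibrated; the array layout is an assumption, not prescribed by the patent.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_model(p, psfs):
    """Matrix-free application of A from Equation 2 (a sketch).

    p    : (L, H, W) occupancy volume, one slice per wavelength level
    psfs : (3, L, h, w) calibrated PSFs g_C, one per color channel and level
    returns the simulated (3, H, W) RGB image i = A p
    """
    C, L = psfs.shape[:2]
    img = np.zeros((C,) + p.shape[1:])
    for c in range(C):
        for l in range(L):
            # each wavelength layer is blurred by its channel-specific PSF
            # and accumulated into the corresponding color channel
            img[c] += fftconvolve(p[l], psfs[c, l], mode='same')
    return img
```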
(29) However, this inverse problem is challenging, as we have compressed the full spectral information encoding the particle position into just three color channels. To handle it, some prior knowledge of the distribution of particles is introduced in the form of regularization terms, resulting in the following minimization problem:
(30)
$$\min_{\{p_t\}}\ \sum_t \Big(\ \tfrac{1}{2}\,\lVert A\,p_t - i_t\rVert_2^2$$
$$+\ \lambda_1\,\lVert \mathrm{diag}(w)\,p_t\rVert_1\ +\ \Theta_{[0,1]}(p_t)$$
$$+\ \tfrac{\lambda_2}{2}\,\lVert p_{t+1}(u_t, t) - p_t\rVert_2^2\ \Big), \tag{3}$$
where the first line is a least-squares data-fitting term, corresponding to Equation 2. The second line defines a weighted $L_1$ term that encourages sparse distributions of particles in the volume, and the indicator function forces occupancy probabilities to be between zero and one.
(31) Finally, the term in the third line provides temporal coherence by mandating that occupancy probabilities of successive time frames be consistent with advection under a previously estimated flow field $U_t = (u_t, v_t, w_t)$. We call this term the particle motion consistency term; it allows position estimates to be refined once a velocity field has been estimated, and it ties the reconstruction of all frames together into a single optimization problem. The particle motion consistency term is discussed in more detail below. The above optimization problem is non-smooth because of the $L_1$ term and the indicator function, and therefore cannot be solved by general-purpose optimization tools such as gradient descent. The strategy for tackling this kind of issue is to decouple the non-smooth terms from the original optimization problem, such that the distinct parts can be handled separately. We apply this strategy using the ADMM framework, which is systematically discussed in [Boyd et al. 2011].
(32) TABLE-US-00001
Algorithm 1 — ADMM Framework for Computing the Particle Distribution
1: procedure COMPUTEPARTICLELOCATION(F1, H1)
2:   for j from 1 to the maximum ADMM iteration do
3:     // p-minimization step
4:     p^{j+1} ← prox_{ρ1 F1}(z^j − q^j)
5:     // z-minimization step
6:     z^{j+1} ← prox_{ρ1 H1}(p^{j+1} + q^j)
7:     // dual update
8:     q^{j+1} ← q^j + p^{j+1} − z^{j+1}
(33) The pseudo-code for solving Equation 3 using ADMM is shown in Algorithm 1, where $j$ is the iteration number, $z$ is a slack variable, and $q$ is a dual variable (Lagrange multiplier). $\mathrm{prox}_{\rho_1 F_1}$ and $\mathrm{prox}_{\rho_1 H_1}$ are proximal operators [Parikh et al. 2014] based on $F_1$ and $H_1$, respectively. $F_1$ and $H_1$ are defined as:
(34)
$$F_1(p_t) = \tfrac{1}{2}\,\lVert A\,p_t - i_t\rVert_2^2 + \tfrac{\lambda_2}{2}\,\lVert \hat{p}_{t+1} - p_t\rVert_2^2,$$
$$H_1(p_t) = \lambda_1\,\lVert \mathrm{diag}(w)\,p_t\rVert_1 + \Theta_{[0,1]}(p_t).$$
(35) The derivation of the proximal operators in Algorithm 1 follows. To simplify the notation, we denote $z^j - q^j$ as $d^j$, $p^{j+1} + q^j$ as $e^j$, $u_t^{j+1} + q^j$ as $h^j$, and $p_{t+1}(u_t, t)$ as $\hat{p}_{t+1}$. For Algorithm 1:
(36)
$$p^{j+1} = \mathrm{prox}_{\rho_1 F_1}(d^j) = \arg\min_{p}\ F_1(p) + \tfrac{\rho_1}{2}\,\lVert p - d^j\rVert_2^2,$$
$$z^{j+1} = \mathrm{prox}_{\rho_1 H_1}(e^j) = \Pi_{[0,1]}\!\big(\mathcal{S}_{\lambda_1 w/\rho_1}(e^j)\big).$$
The first proximal operator characterizes $p^{j+1}$ through a linear system (the normal equations of the quadratic terms), which is solved by Conjugate Gradients. The second is a point-wise soft shrinkage operation $\mathcal{S}$ followed by a projection $\Pi$ onto the domain $[0, 1]$.
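For illustration, the second proximal operator can be written in a few lines. This is a sketch only; the array layout and parameter names are assumptions.

```python
import numpy as np

def prox_H1(e, w, lam1, rho):
    """Point-wise soft shrinkage followed by projection onto [0, 1] (sketch).

    e    : (L, H, W) argument of the proximal operator (e^j in the text)
    w    : (L,) depth-dependent weights of the weighted L1 term
    lam1 : sparsity weight; rho : ADMM penalty parameter
    """
    tau = (lam1 / rho) * w[:, None, None]                  # per-depth threshold
    shrunk = np.sign(e) * np.maximum(np.abs(e) - tau, 0.0) # soft shrinkage
    return np.clip(shrunk, 0.0, 1.0)                       # projection onto [0, 1]
```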
(37) The $L_1$ penalty term ensures a sparse distribution of particles in the volume. It is further weighted by a diagonal matrix $\mathrm{diag}(w)$. Unlike the algorithm proposed in [Candes et al. 2008], which iteratively changes the weight coefficients based on previous results to enhance sparsity, the weights in our approach are fixed during iterations but vary with particle depth. The motivation is to compensate for the different sensitivities of the camera to different wavelengths. For example, wavelengths in the yellow or in the blue-green part of the spectrum elicit a strong response in two or even three color channels, while wavelengths in the far blue or far red parts trigger only one channel. This can result in a non-uniform particle distribution, where particles are more likely to be placed at certain preferred depths. The weighting term allows us to eliminate this bias by compensating for the photometric non-uniformity.
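One simple way to obtain such depth-dependent weights, assuming the per-level RGB response of the camera has been measured during calibration, is sketched below. The exact weighting rule is not prescribed in the text, so this is only an illustration.

```python
import numpy as np

def depth_weights(response):
    """Depth-dependent weights for the weighted L1 term (an illustration).

    response : (L, 3) measured camera RGB response at each wavelength level.
    Depths whose wavelengths excite the sensor strongly receive larger
    weights, discouraging the solver from preferentially placing particles
    there (the concrete rule here is an assumption).
    """
    strength = np.linalg.norm(response, axis=1)  # overall response per depth
    return strength / strength.mean()            # normalized weights w
```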
(38) As previously mentioned, particle motion consistency ensures that estimated particle locations in successive frames are consistent with advection through a previously estimated flow field. This transforms the position estimation process from a set of independent problems, one for each time step, to a single joint estimation problem for the whole sequence. This term can be improved by adding a mask to suppress the impact of low confidence flow estimates.
(39) Having discussed a method for estimating the particle locations, we now describe how to estimate the fluid flow vectors from the reconstructed 3D particle distributions in a video frame. First, we introduce the physical properties of fluid flow as formulated in the Navier-Stokes equations, and then formulate an optimization problem constructed by combining conventional optical flow with those physical constraints.
(40) Incompressible flow can be described by a solenoidal flow vector field $u_{sol}$, which is divergence-free:
$$\nabla \cdot u_{sol} = 0. \tag{8}$$
(41) Based on the Helmholtz decomposition, an arbitrary vector field $u$ (in our case an intermediate flow estimate that does not satisfy the divergence-free constraint) can be decomposed into a solenoidal (divergence-free) part and an irrotational (curl-free) part. The irrotational part is the gradient of a scalar function (the pressure $P$ in our case); hence, we can express the Helmholtz decomposition as
$$u = u_{sol} + \nabla P / \rho, \tag{9}$$
where $\rho$ denotes the density. Taking the divergence of both sides, we obtain
$$\nabla \cdot u = \nabla^2 P / \rho \quad (\text{since } \nabla \cdot u_{sol} = 0). \tag{10}$$
With the intermediate vector field u, the scalar function P can be computed by solving the above Poisson equation, and then the solenoidal flow vector field can be simply retrieved as
$$u_{sol} = u - \nabla P / \rho. \tag{11}$$
(42) Equations 10 and 11 represent a pressure projection operation $\Pi_{C_{DIV}}$ that projects an arbitrary flow field onto the space of divergence-free flows $C_{DIV}$, and is widely used in fluid simulation. Mathematically, this step corresponds to an operator splitting method [Gregson et al. 2014].
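A minimal sketch of this projection on a periodic grid, using an FFT-based Poisson solve with the density folded into $P$, is given below. Periodic boundaries are a simplifying assumption; the walls of a real tank would require proper boundary conditions.

```python
import numpy as np

def pressure_project(u, v, w):
    """Project a velocity field onto divergence-free flows (Equations 10-11).

    A sketch on a periodic grid with the density folded into P.
    """
    kx, ky, kz = np.meshgrid(*(2 * np.pi * np.fft.fftfreq(n)
                               for n in u.shape), indexing='ij')
    U, V, W = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid 0/0 for the mean mode
    div = 1j * (kx * U + ky * V + kz * W)      # divergence in Fourier space
    P = -div / k2                              # Poisson solve: -k^2 P = div
    P[0, 0, 0] = 0.0
    # subtract the pressure gradient (i k P in Fourier space)
    return tuple(np.real(np.fft.ifftn(F - 1j * k * P))
                 for F, k in ((U, kx), (V, ky), (W, kz)))
```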
(43) With reference to temporal coherence, the incompressible Navier-Stokes equations describe the time evolution of the fluid velocity vector field:
(44)
$$\frac{\partial u}{\partial t} + (u \cdot \nabla)\,u = -\frac{\nabla P}{\rho} + \nu\,\nabla^2 u + f,$$
where $P$ is the pressure, $\nu$ is the kinematic viscosity, and $f$ collects external forces. Neglecting the viscosity and force terms, self-advection followed by the pressure projection gives
(45)
$$u_{t+1} \approx \Pi_{C_{DIV}}\big(u_t(x - u_t\,\Delta t)\big),$$
which refers to an approximate evolution of the fluid velocity over time. On the basis of this equation, we can advect the fluid velocity at the current time step by itself and then project it onto the space of divergence-free flows to generate an estimate of the subsequent velocity field, and vice versa. This time evolution equation is introduced into the optimization problem discussed below as a soft constraint.
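A sketch of this advect-then-project prediction follows, using a first-order semi-Lagrangian backtrace (one common discretization choice; the patent does not prescribe one) and the pressure_project routine sketched above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def predict_next_velocity(u, v, w, dt=1.0):
    """Advect the velocity field by itself, then project (a sketch)."""
    grid = np.meshgrid(*(np.arange(n, dtype=float) for n in u.shape),
                       indexing='ij')
    # backtrace the sample positions along the current velocity
    coords = [g - dt * c for g, c in zip(grid, (u, v, w))]
    advected = (map_coordinates(f, coords, order=1, mode='nearest')
                for f in (u, v, w))
    return pressure_project(*advected)  # divergence-free projection
```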
(46) We aim to reconstruct the fluid flow velocity vector fields based on a physically constrained optical flow model. The extended optical flow model is formulated as:
(47)
$$\min_{u_t}\ \lVert M \odot \big(\hat{p}_{t+1} - p_t\big)\rVert_2^2\ +\ \lambda_3\,\lVert \nabla u_t\rVert_2^2$$
$$+\ \lambda_4\,\lVert M \odot \big(u_t - \Pi_{C_{DIV}}(\tilde{u}_{t-1})\big)\rVert_2^2$$
$$+\ \lambda_4\,\lVert M \odot \big(u_t - \Pi_{C_{DIV}}(\tilde{u}_{t+1})\big)\rVert_2^2$$
$$+\ \Theta_{C_{DIV}}(u_t).$$
Each line is explained hereafter: (1) The first line describes the conventional Horn-Schunck optical flow model, except that the brightness constancy constraint is replaced with the masked particle motion consistency discussed above. (2) The second and third lines describe the temporal coherence regularization explained above: the fluid velocity at the current time step is approximated either by forward warping the flow vector at the previous time step by itself, followed by a projection operation, or by backward warping the flow vector at the next time step by the current flow, followed again by a projection operation. The binary mask $M$ is employed to provide confidence-based weighting, being 0 for the flow vectors near the boundary and 1 for vectors in the central region. (3) The fourth line represents the indicator function of the divergence-free constraint set. Gregson et al. [2014] found that the projection operation is equivalent to the proximal operator of this indicator function, which allows the divergence-free constraint to be folded into the original optical flow model while the problem can still be efficiently solved by well-known optimization frameworks.
(48) We formulate this optimization problem in the ADMM framework in Algorithm 2, where the definitions of the functions $F_2$ and $H_2$ are given below.
(49)
$$F_2(u_t) = \lVert M \odot (\hat{p}_{t+1} - p_t)\rVert_2^2 + \lambda_3\,\lVert \nabla u_t\rVert_2^2 + \lambda_4 \sum_{s = t \pm 1} \lVert M \odot \big(u_t - \Pi_{C_{DIV}}(\tilde{u}_{s})\big)\rVert_2^2,$$
$$H_2(u_t) = \Theta_{C_{DIV}}(u_t).$$
(50) TABLE-US-00002
Algorithm 2 — ADMM Framework for Computing Fluid Velocity Vector Fields
1: procedure COMPUTEVELOCITY(F2, H2)
2:   for j from 1 to the ADMM iterations do
3:     // u-minimization step
4:     u^{j+1} ← prox_{ρ2 F2}(z^j − q^j)
5:     // z-minimization step
6:     z^{j+1} ← prox_{ρ2 H2}(u^{j+1} + q^j)
7:     // dual update
8:     q^{j+1} ← q^j + u^{j+1} − z^{j+1}
For Algorithm 2:
$$u_t^{j+1} = \mathrm{prox}_{\rho_2 F_2}(d^j),$$
which amounts to solving the linear system $A\,u_t = b$ with
$$A = \nabla p_t\,(\nabla \hat{p}_{t+1})^{\top} + \lambda_3\,\nabla^2 + 2\lambda_4\,M,$$
$$b = \nabla p_t\,\big(p_t - \hat{p}_{t+1}\big) + \lambda_4\,M\,\big(\Pi_{C_{DIV}}(\tilde{u}_{t-1}^k) + \Pi_{C_{DIV}}(\tilde{u}_{t+1}^k)\big),$$
where
$$\tilde{u}_{t+1}^k = u_{t+1}(u_t^k, t), \qquad \tilde{u}_{t-1}^k = u_{t-1}(u_t^k, t),$$
and
$$z^{j+1} = \mathrm{prox}_{\rho_2 H_2}(h^j) = \Pi_{C_{DIV}}(h^j).$$
(51) Applying the fixed-point scheme to tackle the nonlinear optimization problem, $u^k$ in the first term refers to the result of the $k$-th iteration. We use Conjugate Gradients, in combination with an incomplete Cholesky factorization as a preconditioner, to solve this linear system. The second term is a simple pressure projection step.
(52) In addition, a coarse-to-fine strategy is applied to deal with large displacements. The algorithm begins at the coarsest level, and an initial guess of the optical flow at the next finer level is obtained by scaling up the flow computed at the coarser level. It should be noted that in this case the above optimization problem becomes non-linear in $u_t$ on account of the warping term $p_{t+1}(u_t, t)$. To tackle this issue, the non-linear term is linearized using a first-order Taylor expansion, and $u_t$ is updated iteratively based on a fixed-point scheme. A more detailed description of this approach is given in [Meinhardt-Llopis et al. 2013].
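The coarse-to-fine loop can be sketched as follows. solve_flow_level is a hypothetical stand-in for one run of the ADMM solver of Algorithm 2 at a single pyramid level; the plain linear resampling is a simplification.

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine_flow(p_t, p_t1, n_levels=4, scale=0.5):
    """Coarse-to-fine handling of large displacements (a sketch)."""
    # build image pyramids (plain linear resampling shown for brevity)
    pyramid = [(p_t, p_t1)]
    for _ in range(n_levels - 1):
        pyramid.append(tuple(zoom(x, scale, order=1) for x in pyramid[-1]))
    flow = None
    for a, b in reversed(pyramid):            # start from the coarsest level
        if flow is not None:
            # upsample the coarser flow to the current grid and rescale
            # its magnitude to the finer resolution
            flow = [zoom(f, [s / fs for s, fs in zip(a.shape, f.shape)],
                         order=1) / scale for f in flow]
        flow = solve_flow_level(a, b, init=flow)  # hypothetical solver call
    return flow
```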
(53) For a sequence of fluid velocity vector fields, each field is solved independently, drawing on the subsequent flows within the current iteration and on the previous flows in the subsequent iterations.
EXAMPLE
(54) Experimental setup:
(55) Rainbow light generation: The experiments were performed using a high-power plasma light source 14, combined with a liquid light guide (HPLS245, Thorlabs), to generate white light (output spectrum: [390, 730] nm). A collimator 16 was added to obtain a parallel light beam. A parallel light beam is important to guarantee that two particles at the same depth are illuminated by the same colored light, i.e., light having the same wavelength.
(56) To split the white light into a rainbow beam, a continuously linearly varying bandpass filter (LF103245, Delta Optical Thin Film) was employed. Other alternatives, such as a prism or a blazed grating, were also considered for this purpose in view of their respective ability to generate a rainbow beam. Although a bandpass filter was eventually selected for purposes of the invention, it should be understood that these alternatives for splitting the white light also form part of the invention. After comparison, the linear filter appeared to be the best solution for its effectiveness and simplicity. The generated beam encompassed a spectral range from 480 nm to 680 nm, corresponding to a depth range of 18 mm in the z direction. Given the height of the beam and the length of the flow volume (tank), the two other dimensions of the measurement volume were 50.1 mm along the x axis and 25.6 mm along the y axis.
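With the numbers above, the depth encoding reduces to a linear map from wavelength to depth. A small helper makes the correspondence explicit; a perfectly linear filter response is assumed here for the sketch.

```python
def wavelength_to_depth(lam_nm, lam_min=480.0, lam_max=680.0, depth_mm=18.0):
    """Linear wavelength-to-depth map of the rainbow illumination.

    The 480-680 nm beam spans an 18 mm depth range (values from the text).
    """
    return (lam_nm - lam_min) / (lam_max - lam_min) * depth_mm

# e.g. a particle reflecting 580 nm light sits wavelength_to_depth(580.0)
# = 9.0 mm into the measurement volume
```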
(57) Acquisition device: To record the particle images, a digital camera 24 was used (RED SCARLET-X DSMC; sensor: MYSTERIUM-X, 30 mm × 15 mm; 4096 × 2160 pixels). A lens with a focal length of 50 mm was mounted on the camera 24.
(58) As can be seen in the drawings, with a conventional refractive lens each wavelength is brought into focus at a different distance from the lens, so the light planes of the continuum cannot all be in focus on the sensor simultaneously.
(59) To rectify this situation, a DOE (shown in the drawings) is combined with the refractive lens to form a hybrid diffractive-refractive element.
(60) When designing the DOE 26, we must ensure that all wavelengths are focused on the same sensor plane of the camera 24. Moreover, the aperture and the magnification of the hybrid lens should allow for an image of good quality.
(62) A DOE is characterized by its phase, which can be expressed as follows:
(63)
$$\varphi(r, \lambda) = -\frac{\pi r^2}{\lambda\, f_\lambda^{DOE}}\ (\mathrm{mod}\ 2\pi), \tag{19}$$
where $r$ is the radial distance to the center of the DOE, $\lambda$ is a given wavelength, and $f_\lambda^{DOE}$ is the focal length of the DOE associated with the wavelength $\lambda$. For a DOE, the focal length is spectrally dependent and obeys the following relationship:
$$\lambda \cdot f_\lambda^{DOE} = \mathrm{constant}. \tag{20}$$
Thereafter, the design wavelength
(64)
$$\lambda_0 = 563\ \mathrm{nm}$$
(see the table below) will be used to design the DOE. Thus, we only need to determine $f_{\lambda_0}^{DOE}$; the focal length at any other wavelength then follows from Equation 20.
(65) On one hand, the thickness of the measurement volume $\Delta z = L_{\lambda_1} - L_{\lambda_2}$ is enforced by the setup used, where $L_{\lambda_1}$ and $L_{\lambda_2}$ are the distances between the hybrid lens and the planes illuminated at the two extreme wavelengths $\lambda_1$ and $\lambda_2$. With reference to the drawings, the imaging condition for the plane illuminated at wavelength $\lambda$, with the sensor at a fixed distance $L_1$ from the hybrid lens, reads:
(66)
$$\frac{1}{L_\lambda} + \frac{1}{L_1} = \frac{1}{f_\lambda}. \tag{21}$$
Furthermore, the focal length $f_\lambda$ of the hybrid lens can be expressed as:
(67)
$$\frac{1}{f_\lambda} = \frac{1}{f_L} + \frac{1}{f_\lambda^{DOE}}. \tag{22}$$
By combining Equations 21 and 22 in the expression of $\Delta z$, we obtain:
(68)
$$\Delta z = \left(\frac{1}{f_{\lambda_1}} - \frac{1}{L_1}\right)^{\!-1} - \left(\frac{1}{f_{\lambda_2}} - \frac{1}{L_1}\right)^{\!-1}. \tag{23}$$
(69) Here $f_L$ is fixed by the choice of the refractive lens, while $\Delta z$, $\lambda_1$, and $\lambda_2$ are measured and depend on the illumination setup. Therefore, the focal length of the DOE, $f_{\lambda_0}^{DOE}$, is the only remaining unknown, and it is obtained by minimizing the mismatch in Equation 23.
(70) To ensure a good quality of the obtained image, we have to add some constraints to this minimization. These constraints involve the aperture
(71)
$$F\# = \frac{f_{\lambda_0}}{D}$$
and the magnification
(72)
$$m_\lambda = \frac{L_1}{L_\lambda}$$
of the hybrid lens, where $D$ is the diameter of the hybrid lens and $L_\lambda$ is the distance between the hybrid lens and the plane illuminated by light with a wavelength equal to $\lambda$. The constraint on the aperture improves the signal-to-noise ratio of the obtained image, while the constraint on the magnification warrants a good size match between the measurement volume and the acquired image. Once the optimal focal length $f_{\lambda_0}^{DOE}$ is determined, the phase profile of the DOE follows from Equations 19 and 20.
(73) The following table presents the different characteristics of the hybrid lens (DOE+lens):
(74) TABLE-US-00003

  Symbol       Description                             Value
  D_DOE        DOE diameter                            16 mm
  f_λ0^DOE     DOE focal length for λ0 = 563 nm        401.8 mm
  m            Magnification                           2.065
  F#           Aperture                                4.125
  L_1          Distance hybrid lens – sensor           66 mm
  L_2          Distance hybrid lens – volume           127.3 mm
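As a quick consistency check, Equation 20 together with the tabulated design values yields the DOE focal length anywhere in the beam's spectral range. This is a sketch using the table values, not part of the patent itself.

```python
def doe_focal_length(lam_nm, lam0_nm=563.0, f0_mm=401.8):
    """Focal length of the DOE at wavelength lam_nm via Equation 20,
    lambda * f_lambda = constant, anchored at the design wavelength
    lambda_0 = 563 nm with f = 401.8 mm (values from the table above)."""
    return lam0_nm * f0_mm / lam_nm

# across the 480-680 nm beam the DOE focal length varies from about
# 471.3 mm at the blue end down to about 332.7 mm at the red end
print(doe_focal_length(480.0), doe_focal_length(680.0))
```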
(75) The image of the particles acquired using the hybrid lens is presented in the drawings.
(76) Measured Flows: Two types of experiments were carried out using transparent, rectangular flow volumes (tanks) made of glass plates placed on a brass metal support.
(77) (I) Experiments with a ground truth were performed using a high viscosity transparent fluid (PSF-1,000,000 cSt pure silicone fluid), whose viscosity is one million times higher than that of water. White particles (white polyethylene microspheres with a diameter in the range [90, 106] µm) were introduced into this liquid. This involved heating the liquid while stirring in the particles 20, followed by a vacuum treatment to eliminate bubbles. After cooling of the liquid, the particles 20 are frozen in place. Experiments were then conducted by applying a known movement (translation or rotation) to the tank using micrometer stages; the particle motion is therefore known, since the particles are immobile with respect to the flow volume 22 (tank). (II) An experiment without ground truth was performed using the same white particles 20 after introducing them into a flow volume (tank) 22 containing tap water. A small amount of surfactant (Polysorbate 80) was added to the water in order to reduce its surface tension and thereby avoid agglomeration of the particles 20 in the flow volume (tank) 22. In this case, the particle motion was generated manually through stirring, pouring, and similar excitations.
Velocity Vector Field Reconstruction Results: In this section we first evaluate our proposed approaches on synthetic examples with ground truth comparisons. We then conducted two types of experiments: the first moved particles 20 with known motion vectors to verify the accuracy of our methods on real data, while the second worked on practical fluids.
Synthetic simulations: To quantitatively assess our reconstruction method, we tested our algorithm on simulated data. A volume with a size of 100 × 100 × 20 voxels (X × Y × Z) was simulated, and 1000 particles were randomly generated in the volume. The particles were advected by ground truth flow vectors over time, such that time-evolved particle distributions are obtained. Using the image formation model from Equation 1, we simulated a time sequence of five captured images.
(78) We compared our proposed velocity vector reconstruction algorithm, referred to as S-T div-H&S, with the general multi-scale Horn-Schunck algorithm H&S [Meinhardt-Llopis et al. 2013] and its extension div-H&S, which introduces the divergence-free constraint as a proximal operator [Gregson et al. 2014]. Note that the last two approaches compute the motion between one pair of frames independently, while our approach works on a sequence of frames simultaneously.
(79) The average end-point error and average angular error for these three approaches at different time steps are shown in the drawings.
(80) Experiments with a ground truth: To evaluate the effectiveness of our proposed methods on real captured data, we first conducted experiments with a flow volume (tank) containing seeded particles in a high viscosity liquid. The flow volume (tank) was placed on a multi-dimensional translation/rotation stage such that the reconstruction results of the algorithm could be compared with ground truth movements. Three independent tests were performed: 1. Translation (displacement) in the x direction (i.e., perpendicular to the camera's line of sight): five (5) frames were acquired, and between each two successive frames a translation of 0.2 mm in the x direction was applied. 2. Translation (displacement) in the z direction (i.e., along the camera's line of sight): five (5) frames were acquired, and between each two successive frames a translation of 0.5 mm in the z direction was applied; in this case the translation is larger, in order to observe the color change more easily. 3. An approximation of a rotation around the vertical (y) axis in a clockwise direction. With our setup of frozen particles in a flow volume, only an approximation of this rotational motion was possible, since it is not possible to tilt the flow volume (tank) relative to the camera's line of sight without distorting the flow volume by refraction. We therefore approximated rotational flow by rotating the rainbow illumination pattern relative to the flow volume (tank). In practice, the flow volume (tank) and the camera were mounted together on a rotation table with a fixed relative positioning, and the lighting setup was fixed. The rotations were performed from an angle of −8° to 8° (the reference being defined when the flow volume (tank) is aligned with the (x, z) directions), and between each two successive frames a rotation of 4° was applied.
(81) Before processing the captured images, we first passed them through a Gaussian filter and then downsampled them by a factor of eight (8); the resolution of the downsampled images is hence about 100 µm/pixel, approximately one particle per image pixel. We discretized the wavelength coordinate into 20 levels, corresponding to 900 µm per layer. The calibrated point spread functions for each level are shown in the drawings.
(82) The reconstructed velocity vector fields are visualized in the drawings.
(83) Furthermore, we can numerically analyze the reconstructed results with respect to the ground truth movements. In the experiments, the x-axis and z-axis translations moved respectively 200 µm and 500 µm per time step, which corresponds to 2 and 5 pixels in the captured images. In the rotation test, the total rotation was π/45 rad, the 2D plane of the test section had a physical size of 10 mm × 18 mm (x × y), and the distance from the center of the test section to the center of rotation was 10 mm; hence the practical magnitudes of the displacements are about 334 µm (3.3 pixel sizes) at the near end and 506 µm (5.1 pixel sizes) at the far end relative to the center of rotation. The computed magnitudes of the flow vectors are encoded by color in our presented results.
(84) The mean of the norm of the velocity in the x-axis translation experiment is 1.75 pixel sizes with a standard deviation of 0.15, while that in the experiment of translating towards the camera was 3.48 pixel sizes with a standard deviation of 0.79. The reconstructed flow vectors thus reveal higher accuracy for flow perpendicular to the optical axis than for flow in the longitudinal direction. This is reasonable since: (1) the depth resolution is highly limited compared to the lateral resolution, as the camera is much more sensitive to the spatial change of objects in the 2D image plane than to a change of wavelength, which results in coarser reconstructed flow vectors along the wavelength coordinate; and (2) the error may also come from a bias in the reconstructed particle distributions.
(85) Determination of the spatial positions of the particles along the z axis involves higher uncertainties. Moreover, distortion caused by the refractive effect of the high viscosity material arises when moving the flow volume (tank) along the z axis: as the thickness of the material between the camera and the illuminated particles changes, the PSFs are altered simultaneously. Fortunately, this issue does not exist when measuring practical fluid flow, where the particles move instead of the light beam. Despite the relatively low reconstruction accuracy for flow in the axial direction, not only simple translational structures but also vortical flows were reasonably reproduced, and the error along the wavelength axis stays within a certain tolerance, in general no more than half the length of the discretization intervals.
(86) Experiments without ground truth: Finally, we tested our Rainbow PIV system on four different flows of varying complexity (shown in the drawings).
(87) Comparing the motion of these chosen particles with the corresponding flow vectors in the reconstructed results reveals that overall agreement is achieved. In addition, the actual stirred flow structure is expected to be a vortex rotating in a clockwise direction; we observe that the key features of the vortex structure are well reconstructed by our methods. A path line visualization of the same velocity data is shown in the drawings.
(89) Finally, the most complex example is shown in the drawings.
(90) The present application discloses a novel Rainbow PIV system together with optimization strategies, which enables the user to recover the 3D fluid flow structures using a single color camera, greatly reducing the hardware setup requirements and easing calibration complexity compared to the other approaches handling 3D-3C measurements. Our approach is implemented by illuminating particles in the volume with rainbow light such that the depth information for the particles is color-coded into the captured images, and the 3D trajectory of particles can be tracked by analyzing the 2D spatial motion in the image plane and the color change in the wavelength domain. A specially designed DOE helps to focus all the wavelength planes on the sensor plane simultaneously, to achieve high lateral resolution and relatively large depth of focus at the same time.
(91) We then formulated an inverse problem to reconstruct the particle positions in 3D using a sequence of frames to alleviate the ambiguity issues of identifying particle positions from a single frame. With the recovered particle locations at different time steps, a further step is taken to reconstruct the fluid velocity vector fields. An optimization problem integrating the conventional Horn-Schunck algorithm with physical constraints is proposed to compute the flow vectors.
(92) We demonstrated our approach both on synthetic flows, induced by moving a frozen particle volume, and on a real stirred flow. Overall, our method can robustly reconstruct a significant part of the flow structures, and with good accuracy. The primary drawback of our system is the limited spatial resolution along the wavelength (depth) coordinate; due to noise and light scattering issues and the relatively low sensitivity of the camera to wavelength changes, the wavelength coordinate cannot at the current stage be discretized any further. In the future this situation could be improved by making use of the IR end of the spectrum instead of the blue end, where camera sensitivity is rather low. Other possible improvements include the use of cameras with additional color primaries, or primaries that are optimized for this task. Despite these issues, on account of the simple setup and the good accuracy, our system can be easily implemented and applied to investigate new types of fluid flows in the future.
(93) While this disclosure has been described using certain embodiments, it can be further modified while keeping within its spirit and scope. This application is therefore intended to cover any variations, uses, or adaptations of the disclosure using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practices in the art to which it pertains and which fall within the limits of the appended claims.
REFERENCES
(94)
1. Adrian, R. J., and Westerweel, J. 2011. Particle Image Velocimetry. Cambridge University Press.
2. Atcheson, B., Ihrke, I., Heidrich, W., Tevs, A., Bradley, D., Magnor, M., and Seidel, H.-P. 2008. Time-resolved 3D capture of non-stationary gas flows. ACM Trans. Graph. 27, 5, 132.
3. Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3, 1, 1-122.
4. Candes, E. J., Wakin, M. B., and Boyd, S. P. 2008. Enhancing sparsity by reweighted l1 minimization. J. Fourier Analysis and Applications 14, 5-6, 877-905.
5. Casey, T. A., Sakakibara, J., and Thoroddsen, S. T. 2013. Scanning tomographic particle image velocimetry applied to a turbulent jet. Phys. Fluids 25, 025102.
6. Elsinga, G. E., Scarano, F., Wieneke, B., and Van Oudheusden, B. W. 2006. Tomographic particle image velocimetry. Experiments in Fluids 41, 6, 933-947.
7. Fedkiw, R., Stam, J., and Jensen, H. W. 2001. Visual simulation of smoke. In Proc. ACM SIGGRAPH, 15-22.
8. Foster, N., and Metaxas, D. 1997. Modeling the motion of a hot, turbulent gas. In Proc. ACM SIGGRAPH, 181-188.
9. Gregson, J., Krimerman, M., Hullin, M. B., and Heidrich, W. 2012. Stochastic tomography and its applications in 3D imaging of mixing fluids. ACM Trans. Graph. 31, 4, 52.
10. Gregson, J., Ihrke, I., Thuerey, N., and Heidrich, W. 2014. From capture to simulation: connecting forward and inverse problems in fluids. ACM Trans. Graph. 33, 4, 139.
11. Gu, J., Nayar, S., Grinspun, E., Belhumeur, P., and Ramamoorthi, R. 2013. Compressive structured light for recovering inhomogeneous participating media. IEEE PAMI 35, 3, 555-567.
12. Hasinoff, S. W., and Kutulakos, K. N. 2007. Photo-consistent reconstruction of semitransparent scenes by density-sheet decomposition. IEEE PAMI 29, 5, 870-885.
13. Hawkins, T., Einarsson, P., and Debevec, P. 2005. Acquisition of time-varying participating media. ACM Trans. Graph. 24, 3, 812-815.
14. Heitz, D., Mémin, E., and Schnörr, C. 2010. Variational fluid flow measurements from image sequences: synopsis and perspectives. Experiments in Fluids 48, 3, 369-393.
15. Herlin, I., Béréziat, D., Mercier, N., and Zhuk, S. 2012. Divergence-free motion estimation. In Proc. ECCV, 15-27.
16. Hinsch, K. D. 2002. Holographic particle image velocimetry. Measurement Science and Technology 13, 7, R61.
17. Horn, B. K., and Schunck, B. G. 1981. Determining optical flow. Artificial Intelligence 17, 1-3, 185-203.
18. Ihrke, I., and Magnor, M. 2004. Image-based tomographic reconstruction of flames. In Proc. SCA, 367-375.
19. Levoy, M., Ng, R., Adams, A., Footer, M., and Horowitz, M. 2006. Light field microscopy. ACM Trans. Graph. 25, 3, 924-934.
20. Liu, T., and Shen, L. 2008. Fluid flow and optical flow. J. Fluid Mechanics 614, 253-291.
21. Liu, T., Merat, A., Makhmalbaf, M., Fajardo, C., and Merati, P. 2015. Comparison between optical flow and cross-correlation methods for extraction of velocity fields from particle images. Experiments in Fluids 56, 8, 1-23.
22. Lourenco, L., Krothapalli, A., and Smith, C. 1989. Particle image velocimetry. In Advances in Fluid Mechanics Measurements, Springer, 127-199.
23. Lynch, K., Fahringer, T., and Thurow, B. 2012. Three-dimensional particle image velocimetry using a plenoptic camera. American Institute of Aeronautics and Astronautics (AIAA).
24. McGregor, T., Spence, D., and Coutts, D. 2007. Laser-based volumetric colour-coded three-dimensional particle velocimetry. Optics and Lasers in Engineering 45, 8, 882-889.
25. Meinhardt-Llopis, E., Pérez, J. S., and Kondermann, D. 2013. Horn-Schunck optical flow with a multi-scale strategy. Image Processing On Line 2013, 151-172.
26. Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., and Hanrahan, P. 2005. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR 2, 11, 1-11.
27. Okamoto, K., Nishio, S., Saga, T., and Kobayashi, T. 2000. Standard images for particle-image velocimetry. Measurement Science and Technology 11, 6, 685.
28. Parikh, N., and Boyd, S. P. 2014. Proximal algorithms. Foundations and Trends in Optimization 1, 3, 127-239.
29. Prasad, A. K. 2000. Particle image velocimetry. Current Science (Bangalore) 79, 1, 51-60.
30. Ruhnau, P., Stahl, A., and Schnörr, C. 2007. Variational estimation of experimental fluid flows with physics-based spatio-temporal regularization. Measurement Science and Technology 18, 3, 755.
31. Stam, J. 1999. Stable fluids. In Proc. ACM SIGGRAPH, 121-128.
32. Stanislas, M., Okamoto, K., Kähler, C. J., Westerweel, J., and Scarano, F. 2008. Main results of the third international PIV challenge. Experiments in Fluids 45, 1, 27-71.
33. Watamura, T., Tasaka, Y., and Murai, Y. 2013. LCD-projector-based 3D color PTV. Experimental Thermal and Fluid Science 47, 68-80.
34. Willert, C., and Gharib, M. 1992. Three-dimensional particle imaging with a single camera. Experiments in Fluids 12, 6, 353-358.
35. Yuan, J., Schnörr, C., and Steidl, G. 2007. Simultaneous higher-order optical flow estimation and decomposition. SIAM Journal on Scientific Computing 29, 6, 2283-2304.