FLUID DENSITY GRADIENT DETECTION METHOD AND FLUID DENSITY GRADIENT DETECTION SYSTEM
20220392097 · 2022-12-08
CPC classification: G01N9/00 (Physics)
International classification: G01N9/00 (Physics)
Abstract
A fluid density gradient detection method includes capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area, and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width of the pattern on the captured image.
Claims
1. A fluid density gradient detection method comprising: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device, wherein the imaging condition is determined based on a relationship between a width of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width of the pattern on the captured image.
2. The fluid density gradient detection method according to claim 1, wherein the imaging condition is set such that the pattern period width on the captured image is in a range of 150% to 250% with respect to the width of the pixel.
3. The fluid density gradient detection method according to claim 1, wherein the imaging condition is further determined based on a period of moire generated on the captured image.
4. The fluid density gradient detection method according to claim 1, further comprising: performing a first adjustment of adjusting the imaging condition so that the pattern period width on the captured image is within a predetermined range with respect to the width of the pixel; and performing a second adjustment of further adjusting the imaging condition adjusted by the first adjustment based on the period of moire generated on the captured image captured under the imaging condition.
5. The fluid density gradient detection method according to claim 4, wherein in the second adjustment, the imaging condition is adjusted such that the period of the moire is longest.
6. The fluid density gradient detection method according to claim 1, wherein the imaging condition is changed by adjusting an optical condition of the imaging device including an angle of view.
7. The fluid density gradient detection method according to claim 1, wherein the imaging condition is changed by adjusting a pattern period width of the pattern.
8. The fluid density gradient detection method according to claim 7, wherein the pattern period width is changed in accordance with a magnitude of a gas density gradient in the observation target area.
9. The fluid density gradient detection method according to claim 1, further comprising: determining a direction in which the pattern periodically changes in accordance with the direction of the gas density gradient in the observation target area.
10. The fluid density gradient detection method according to claim 9, further comprising: dividing the background image into a plurality of sections; and determining, for each of the sections, a direction of the pattern in accordance with the direction of the gas density gradient in the observation target area.
11. The fluid density gradient detection method according to claim 4, wherein the first adjustment and the second adjustment are performed in a state where a focus of the imaging device is adjusted to the background image.
12. A fluid density gradient detection system comprising: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; and an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit, wherein the imaging condition is determined based on a relationship between a width of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width of the pattern on the captured image.
13. The fluid density gradient detection system according to claim 12, further comprising: a control unit configured to change an optical condition of the imaging unit, wherein the control unit adjusts the optical condition so that a period of moire of the captured image is longest.
14. The fluid density gradient detection system according to claim 12, further comprising: a background creation unit configured to change the background image, wherein the background creation unit adjusts the background image such that a period of moire of the captured image is longest.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0028] (Knowledge that is the Basis of Fluid Density Gradient Detection According to the Present Disclosure)
[0029] Moire (an interference fringe) is a stripe pattern generated by optical interference when periodic patterns are superimposed. For example, moire may be generated when a periodic pattern is projected onto the image sensor of a digital camera, which is itself an aggregate of periodically arranged pixels. Moire caused by interference with the image sensor is specific to digital cameras (in contrast with film cameras) and is usually regarded as a problem; to prevent its generation, imaging is often performed with the focus shifted or with a low-pass filter attached.
[0030] In the present disclosure, however, moire due to interference with the image sensor is actively generated, and the displacement enlargement effect of the moire is utilized to visualize minute density changes that are difficult to capture with the general background-oriented schlieren (BOS) method.
[0031] Moire has the characteristic that a small relative displacement between the superimposed patterns is greatly enlarged as a movement of the moire fringes.
[0032] In the following embodiments, an example will be described in which high-precision visualization of a density gradient by a general-purpose camera is implemented, without using special optical equipment, by exploiting moire due to interference with the image sensor.
[0033] Hereinafter, embodiments specifically disclosing a fluid density gradient detection method and a fluid density gradient detection system according to the present disclosure will be described in detail with reference to the drawings as appropriate. Unnecessarily detailed description may be omitted. For example, a detailed description of a well-known matter or a repeated description of substantially the same configuration may be omitted. This is to avoid unnecessary redundancy in the following description and to facilitate understanding of those skilled in the art. The accompanying drawings and the following description are provided for a thorough understanding of the present disclosure for those skilled in the art, and are not intended to limit the subject matter in the claims.
First Embodiment
[0035] In the present embodiment, the observation target 10 is assumed to be a gas such as air in which an airflow is generated and which has a density gradient, but may be a liquid, a solid, or the like through which light having a wavelength that can be captured by the camera 20 can be transmitted.
[0036] The imaging system 100 may further include a display medium 35 for installing or displaying the background 30. That is, the display medium 35 may be provided on a back side of the background 30 as viewed from the camera 20. The display medium 35 may be configured by a part of a facility such as, for example, a wall. In this case, the background 30 may be formed as a pattern of the wall or the like, or may be projected onto the wall or the like by a projector. Alternatively, the display medium 35 may be a portable medium such as paper or plastic. In this case, the background 30 is printed on the portable medium and attached to the wall, a screen, or the like in the facility. Alternatively, the display medium 35 may be configured by, for example, a display. In this case, the background 30 is displayed on the display.
[0037] The imaging system 100 may further include a computer (hereinafter referred to as a “camera control unit 26”) that transmits a control signal for controlling an operation of the camera 20. The control signal includes, for example, a signal for changing optical conditions such as focus and zoom of the camera 20, and a signal for controlling start or stop of imaging. If the camera 20 can pan or tilt, the camera control unit 26 may transmit a control signal for controlling the pan or tilt to the camera 20.
[0038] The camera control unit 26 may be integrally incorporated in the camera 20. All or a part of the control of the camera 20 may be executed by a user directly operating an input unit (a button, a lens, a touch panel, or the like) attached to the camera 20 instead of the camera control unit 26.
[0040] The processor 41 controls each element of the camera control unit 26 via the bus 42. The processor 41 is configured by using, for example, a general-purpose central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA). The processor 41 may execute a predetermined program stored in the memory 43 to generate a focus adjustment signal or a zoom adjustment signal based on a captured image captured by the camera 20, or to visualize the density gradient by processing, described later, that uses the captured image captured by the camera 20.
[0041] The memory 43 acquires various kinds of information from other elements, and temporarily or permanently holds the information. The memory 43 is a generic term for a so-called primary storage device and a secondary storage device. A plurality of memories 43 may be physically disposed. As the memory 43, for example, a dynamic random access memory (DRAM), a hard disk drive (HDD), or a solid state drive (SSD) is used.
[0042] The display 44 is configured by using, for example, a liquid crystal display (LCD) or an organic electroluminescence (EL) device, and may display a captured image sent from the camera 20 or a result of visualization processing of the density gradient.
[0043] The input unit 45 is configured by using an operation device for receiving an operation from the user, and may be, for example, a mouse or a keyboard. The input unit 45 may constitute a touch panel integrated with the display 44.
[0044] The communication unit 46 is configured by using a communication circuit for functioning as a communication interface with the camera 20, and, for example, outputs the focus adjustment signal or the zoom adjustment signal to the camera 20 or receives a captured image from the camera 20. In addition, the communication unit 46 may transfer the captured image and the result of the visualization processing of the density gradient to another external device via a network such as the Internet.
[0046] The optical unit 21 includes, for example, a plurality of lenses, includes the focus function unit 22 and the zoom function unit 23, and changes the optical condition of the camera 20. By adjusting the position or the like of one or more lenses constituting the optical unit 21, the focus function unit 22 for adjusting the focus position and the zoom function unit 23 for adjusting the magnitude of the angle of view by zooming are implemented. The focus function unit 22 and the zoom function unit 23 may be operated directly by the user, or by the focus adjustment signal and the zoom adjustment signal from the camera control unit 26.
[0047] The image sensor 25 is an aggregate of a plurality of pixels disposed two-dimensionally, and has R.sub.H pixels within the length W.sub.S of the imaging surface in the direction in which the density change is to be visualized. That is, in the image sensor 25 of the camera 20 according to the present embodiment, a total of R.sub.H pixels are arranged in the horizontal direction. The image sensor 25 outputs an image signal or image data obtained by imaging; the light incident on the imaging surface may be visible light or invisible light such as infrared light.
[0049] In the present embodiment, the stripe pattern 31 is a monochrome binary stripe pattern, but may be a color stripe pattern, a stripe pattern due to a gradation change, or a lattice pattern.
[0051] In the present embodiment, the captured image 32 is captured under an imaging condition in which the period I.sub.PH of the stripe pattern on the captured image is twice the pixel width S.sub.H (I.sub.PH=2S.sub.H).
[0052] Under the imaging condition described above, a displacement enlargement effect due to moire appears in the captured image 32 obtained by capturing the observation target 10 and the background image with the camera 20. Accordingly, a displacement of less than one pixel caused by the density gradient of the imaging target (in other words, the observation target 10) can be enlarged, and the detection sensitivity of the BOS method for the density gradient can be improved.
[0053] The reason why I.sub.PH=2S.sub.H is selected as the optical condition will be described below.
[0055] In the lower two rows of pixels, which are shifted by ΔS, the ratio of the bright portion (white) to the dark portion (black) of the stripe pattern within each pixel changes compared with the upper two rows, and thus the values of the pixels constituting the image captured by the image sensor 25 change. When this change in pixel values appears as moire, the movement of the background (that is, the refraction of light due to the density gradient of the gas) can be visualized.
[0056] For example, consider a state of I.sub.PH=3S.sub.H, in which the optical condition (for example, the period I.sub.PH) is larger than 2S.sub.H.
[0057] When I.sub.PH is brought closer to 2S.sub.H, to a state of I.sub.PH=2.5S.sub.H, the shift ΔS can be visualized.
[0058] Even when the optical condition (for example, the period I.sub.PH) is smaller than 2S.sub.H, the shift ΔS can be visualized as I.sub.PH approaches 2S.sub.H.
[0059] For example, in a state of I.sub.PH=1S.sub.H, each pixel contains exactly one period of the stripe pattern, so the pixel values do not change even when the pattern shifts.
[0060] When I.sub.PH is brought closer to 2S.sub.H, to a state of I.sub.PH=1.5S.sub.H, the shift ΔS can be visualized.
[0061] As described above, regarding the relationship between I.sub.PH and 2S.sub.H, in the range where I.sub.PH is larger than 1.5S.sub.H and smaller than 2.5S.sub.H, the density gradient of the gas can be visualized by moire, and particularly clearly under the condition of I.sub.PH=2S.sub.H.
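The dependence on I.sub.PH described above can be checked numerically. The following is a minimal Python sketch (not part of the source; `pixel_values` is an illustrative name) that box-averages a binary stripe pattern over a one-dimensional row of pixels and compares the pixel values before and after a sub-pixel background shift ΔS:

```python
import numpy as np

def pixel_values(i_ph, shift=0.0, n_pixels=200, oversample=1000):
    """Box-average a binary stripe pattern of period i_ph (in units of the
    pixel width S_H) over each pixel of a 1-D sensor row; `shift` is the
    background displacement in pixel widths."""
    x = (np.arange(n_pixels * oversample) + 0.5) / oversample
    stripe = ((x - shift) % i_ph) < (i_ph / 2)  # bright (1) / dark (0)
    return stripe.reshape(n_pixels, oversample).mean(axis=1)

# At I_PH = 2 S_H, a 0.05-pixel shift changes the value of EVERY pixel by
# 0.05, so the sub-pixel movement shows up across the whole image; at
# I_PH = 1 S_H each pixel averages one full stripe period, so nothing
# changes and the shift is invisible.
for i_ph in (2.0, 1.0):
    diff = np.abs(pixel_values(i_ph, 0.05) - pixel_values(i_ph, 0.0))
    print(i_ph, diff.mean())
```

The sub-pixel shift that would be lost at I.sub.PH=1S.sub.H is thus converted into a measurable pixel-value change at I.sub.PH=2S.sub.H.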
[0062] Next, an adjustment procedure of the angle of view α of the camera 20 will be described.
[0063] First, the camera control unit 26 sets a zoom to a telephoto end using the zoom function unit 23 of the camera 20, and images the stripe pattern 31 of the background 30 in an enlarged manner (ST100).
[0064] Next, the camera control unit 26 adjusts the focus on the background 30 using the focus function unit 22 (ST110). The focus adjustment may be performed by a general autofocus function.
[0065] Next, the camera control unit 26 performs rough adjustment (an example of a first adjustment) of the angle of view α by the zoom function unit 23 (ST120). An adjustment target value of the angle of view α is calculated by (Formula 1), using the distance L from the camera 20 to the background 30, the horizontal pixel number R.sub.H of the image sensor 25, and the period P.sub.H of the stripe pattern 31 of the background 30.
[0066] When a focal length f is used as the unit of zoom adjustment, the relationship between the angle of view α and the focal length f can be obtained from (Formula 2). W.sub.S is the length of the effective region of the imaging surface of the image sensor 25.
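(Formula 1) and (Formula 2) themselves are not reproduced in this text. Under the condition I.sub.PH=2S.sub.H stated above, they can be reconstructed from the surrounding definitions as follows; this is a sketch, not the authoritative formulas of the source:

```latex
% At distance L the horizontal field of view spans 2L\tan(\alpha/2), which
% contains 2L\tan(\alpha/2)/P_H stripe periods; mapping these onto R_H
% pixels at two pixels per period gives the rough-adjustment target:
\alpha = 2\arctan\!\left(\frac{R_H \, P_H}{4L}\right) \qquad \text{(Formula 1)}

% Standard pinhole relation between the angle of view \alpha, the focal
% length f, and the effective sensor width W_S:
\alpha = 2\arctan\!\left(\frac{W_S}{2f}\right) \qquad \text{(Formula 2)}
```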
[0067] In the captured image 32 captured by focusing on the background 30 under the optical condition of (Formula 1), the relationship between the period I.sub.PH of the stripe pattern 31 and the pixel width S.sub.H of the image is 1.5S.sub.H<I.sub.PH<2.5S.sub.H, and moire occurs due to interference between a pixel row of the image sensor 25 and the stripe pattern 31.
[0068] When the image sensor 25 is an image sensor that generates a grayscale image, since one pixel on the image sensor corresponds to one pixel of generated grayscale image data, the horizontal pixel number R.sub.H corresponds to the horizontal pixel number on the image sensor 25. On the other hand, when the image sensor 25 is configured by a Bayer array and generates a color image, since one set of arrays corresponds to one pixel in image data, the horizontal pixel number R.sub.H corresponds to the number of sets of arrays.
[0069] After the rough adjustment (an example of the first adjustment) of the angle of view α in step ST120, the camera control unit 26 performs fine adjustment (an example of the second adjustment) of the zoom position (focal length f) using moire (ST130).
[0070] Here, the fine adjustment of the zoom position of the camera will be described in detail.
[0071] In step ST131, the camera control unit 26 stores the focal length corresponding to the angle of view adjusted in step ST120 as the focal length f.sup.(0) for the repetition count k=0.
[0072] In step ST132, the camera control unit 26 measures a moire period P.sub.M.sup.(0) by a method described later.
[0073] In step ST133, the camera control unit 26 changes the zoom position so that the focal length f is shifted by Δf.sup.(k).
[0074] In step ST134, the camera control unit 26 measures the moire period P.sub.M.sup.(k) after the zoom position is changed.
[0075] In step ST135, the camera control unit 26 calculates the difference ΔP.sub.M.sup.(k) between P.sub.M.sup.(k) and the period P.sub.M.sup.(k−1) before the zoom change.
[0076] In step ST136, the camera control unit 26 compares the absolute value of ΔP.sub.M.sup.(k) with an end determination threshold P.sub.TH. The end determination threshold P.sub.TH is set, for example, to a value at which the change in ΔP.sub.M.sup.(k) is so small that it cannot be visually discerned.
[0077] When |ΔP.sub.M.sup.(k)| is equal to or larger than P.sub.TH, the processing proceeds to the next step. When |ΔP.sub.M.sup.(k)| is smaller than P.sub.TH, the processing is completed, the adjustment being regarded as finished.
[0078] The completion condition may include not only the comparison between |ΔP.sub.M.sup.(k)| and P.sub.TH but also a condition of ending when the loop counter k exceeds a predetermined value. Accordingly, the processing can be terminated even when the zoom adjustment does not converge to the end determination threshold P.sub.TH or less.
[0079] In step ST137, the camera control unit 26 determines whether ΔP.sub.M.sup.(k) is equal to or greater than 0. When ΔP.sub.M.sup.(k) is 0 or more, the processing proceeds to step ST138. When ΔP.sub.M.sup.(k) is smaller than 0, the processing proceeds to step ST139.
[0080] In step ST138, the next zoom adjustment width Δf.sup.(k+1) is set equal to Δf.sup.(k), and the processing returns to step ST133.
[0081] In step ST139, the camera control unit 26 sets the next zoom adjustment width Δf.sup.(k+1) to −XΔf.sup.(k), and the processing returns to step ST133. X is an update coefficient that satisfies 0<X<1.
[0082] Here, a method of measuring the period P.sub.M of the moire (interference fringe) will be described.
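The measurement method itself is not detailed in this text. One plausible approach (an assumption, not the source's method) is to demodulate the alternating component of a pixel row and read the dominant frequency from an FFT. The sketch below also reproduces the trend that the moire period grows as I.sub.PH approaches 2S.sub.H; all names are illustrative:

```python
import numpy as np

def sample_stripes(i_ph, n_pixels=420, oversample=1000):
    """Box-average a binary stripe of period i_ph (in pixel widths) over
    each pixel of a 1-D sensor row."""
    x = (np.arange(n_pixels * oversample) + 0.5) / oversample
    return ((x % i_ph) < (i_ph / 2)).reshape(n_pixels, oversample).mean(axis=1)

def moire_period(values):
    """Estimate P_M in pixels: remove the mean, demodulate the alternating
    (two-pixel) carrier, then locate the dominant FFT bin."""
    demod = (values - values.mean()) * (-1.0) ** np.arange(values.size)
    spectrum = np.abs(np.fft.rfft(demod))
    spectrum[0] = 0.0                     # ignore residual DC
    return values.size / np.argmax(spectrum)

# The closer I_PH is to 2 S_H, the longer the measured moire period:
for i_ph in (3.0, 2.5, 2.1):
    print(i_ph, moire_period(sample_stripes(i_ph)))  # 6.0, 10.0, 42.0
```

This monotone relationship is what lets steps ST133 to ST139 use the measured period as the objective to maximize.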
[0083] According to the above procedure, the optical unit 21 of the camera 20 can be adjusted such that the relationship between the stripe period I.sub.PH of the captured image 32 and the pixel width S.sub.H of the image is I.sub.PH≈2S.sub.H.
[0084] Next, a method of visualizing the density gradient of the observation target 10 using the captured image 32 captured under the above-described optical conditions will be described.
[0085] The density gradient of the airflow is visualized by image processing using a reference background image, captured by the camera in a state where no density gradient of the airflow is generated in the observation target area, and an observation background image, captured in a state where the density gradient of the airflow is generated in the observation target area.
[0086] The image processing is performed by the camera control unit 26, and for example, a method described in Patent Literature 2 may be used.
[0087] Alternatively, the observation background image may be acquired continuously and, instead of the difference between the reference background image and the observation background image, a frame difference between the latest background image and the background image captured a unit time (one frame) before may be used.
[0088] By using the frame difference, it is possible to suppress the influence of a shift in the camera position or a change in the optical condition between the time of capturing the reference image and the time of capturing the observation image.
Second Embodiment
[0089] Next, a second embodiment will be described. Members described in the first embodiment are denoted by the same reference numerals, and detailed description thereof is omitted. In the first embodiment, the stripe pattern 31 is fixed on the background 30. In the second embodiment, the imaging system projects the stripe pattern 31 onto the background 30 with a projector 50.
[0090] The projector 50 projects an image input from the camera control unit 26.
[0091] The camera control unit 26 includes a background creation unit 70 that is connected to the camera 20, analyzes the background image captured by the camera 20, and changes the period P.sub.H of the stripe pattern 31 projected by the projector 50 onto the display medium 35 such as a screen or a wall. The background creation unit 70 is implemented by the processor 41.
[0092] In the second embodiment, a single-focus camera can be used as the camera 20. The first embodiment uses a camera having an optical zoom function; in general, however, there are also single-focus cameras without the optical zoom function.
[0093] When a single-focus camera without the optical zoom function is used, the angle of view α cannot be adjusted as in the first embodiment, so the distance L must be changed to approach I.sub.PH=2S.sub.H. However, when the installation places of the camera 20 and the display medium 35 are restricted, it is difficult to change the distance L.
[0094] In the second embodiment, even if the angle of view α of the camera 20 is fixed, or the installation places of the camera 20 and the display medium 35 are restricted, the imaging condition of I.sub.PH≈2S.sub.H can be achieved by changing the period P.sub.H, that is, the interval of the stripes of the stripe pattern 31 of the background 30.
[0095] Hereinafter, a method of generating the background 30 in the background creation unit 70 will be described. In order to adjust the imaging condition by changing the background 30, first, the background creation unit 70 creates a stripe pattern of the background so as to satisfy the above-described (Formula 1) as an initial value P.sub.H.sup.(0) of the stripe pattern (an example of the first adjustment).
[0096] Here, the distance L between the camera 20 and the background in (Formula 1) may be actually measured and input, or the background creation unit 70 may input a figure of a known size to the projector 50, measure the size of the figure from the image captured by the camera 20, and estimate the distance L from the size.
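With the angle of view fixed, the rough-adjustment relation can be inverted to give the initial projected stripe period. The sketch below assumes the reconstructed form tan(α/2)=R.sub.H·P.sub.H/(4L) of (Formula 1), which is an inference from the surrounding text rather than the source's own formula, and the function name is illustrative:

```python
import math

def initial_stripe_period(alpha_rad, distance_l, r_h):
    """Choose the projected stripe period P_H so that one stripe period
    maps onto two sensor pixels (I_PH = 2 S_H) for a camera with fixed
    angle of view alpha_rad, background distance distance_l, and r_h
    horizontal pixels. Assumes tan(alpha/2) = r_h * P_H / (4 * L)."""
    return 4.0 * distance_l * math.tan(alpha_rad / 2.0) / r_h
```

The second-embodiment loop (steps ST201 to ST208) then fine-tunes this initial value P.sub.H.sup.(0) using the measured moire period.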
[0097] Next, fine adjustment (an example of the second adjustment) of the period P.sub.H of the stripe pattern is performed in the following steps.
[0098] In step ST201, the background creation unit 70 measures a moire period P.sub.M.sup.(0).
[0099] In step ST202, the background creation unit 70 changes the period P.sub.H of the stripe pattern 31 by an adjustment width ΔP.sub.H.sup.(k) of the stripe pattern.
[0100] In step ST203, the background creation unit 70 measures a moire period P.sub.M.sup.(k).
[0101] In step ST204, the background creation unit 70 calculates a difference ΔP.sub.M.sup.(k) between the P.sub.M.sup.(k) and P.sub.M.sup.(k−1) before the stripe pattern is changed.
[0102] In step ST205, the background creation unit 70 compares the absolute value of ΔP.sub.M.sup.(k) with the end determination threshold P.sub.TH. When |ΔP.sub.M.sup.(k)| is equal to or greater than P.sub.TH, the processing proceeds to the next step. When |ΔP.sub.M.sup.(k)| is smaller than the threshold, the processing is completed, the adjustment of the period P.sub.H of the stripe pattern 31 being regarded as finished. The end determination threshold P.sub.TH is set, for example, to a value at which the change in ΔP.sub.M.sup.(k) is so small that it cannot be visually discerned.
[0103] In step ST206, the background creation unit 70 determines whether ΔP.sub.M.sup.(k) is equal to or greater than 0. When ΔP.sub.M.sup.(k) is 0 or more, the processing proceeds to step ST207. When ΔP.sub.M.sup.(k) is smaller than 0, the processing proceeds to step ST208. In step ST207, the background creation unit 70 sets the adjustment width ΔP.sub.H.sup.(k+1) of the next stripe pattern equal to ΔP.sub.H.sup.(k), and the processing returns to step ST202.
[0104] In step ST208, the background creation unit 70 changes the adjustment width ΔP.sub.H.sup.(k+1) of the next stripe pattern to −XΔP.sub.H.sup.(k), and the processing returns to step ST202. X is an update coefficient that satisfies 0<X<1.
[0105] According to the above procedure, the stripe pattern 31 of the background 30 can be adjusted such that the relationship between the stripe period I.sub.PH of the captured image 32 and the pixel width S.sub.H of the image is I.sub.PH≈2S.sub.H.
[0106] The background creation unit 70 may rotate the stripe pattern 31 of the background 30 so that the stripe pattern 31 is distributed in the direction of the density gradient desired to be visualized.
[0108] Further, when the displacement due to the density change is large, the stripe pattern 31 of the background 30 may be changed, and the method may be switched to the BOS method in which the effect of enlarging the moire is not used. As the stripe pattern 31 in this case, for example, the stripe pattern described in Non-Patent Literature 1 may be used.
[0109] According to this method, when the density change of the atmosphere is large, measurement can be performed by the general BOS method, and the range of measurable density changes can be enlarged.
[0110] A fluid density gradient detection method according to an aspect of the present disclosure is a method of visualizing a density gradient of an observation target area using a background image in which a stripe pattern serves as the background of the observation target area and the background is captured over the observation target area, the background image being captured under an optical condition in which the period of the stripes of the stripe pattern in the background image falls within a range of 150% to 250% of the width of one pixel.
[0111] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0112] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. In the imaging condition, a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image is determined such that the pattern period width (I.sub.PH) is in a range of 150% to 250% with respect to the width (S.sub.H) of the pixel. Accordingly, it is possible to enlarge a change of less than one pixel of the captured image, and it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0113] A fluid density gradient detection method according to another aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern on the captured image periodically changes and a pattern period width (I.sub.PH) of the pattern on the captured image and a period of moire generated on the captured image. Accordingly, since the observation target area is captured under the appropriate imaging condition in which the relationship between the width of the pixel of the captured image and the pattern period width is determined by the period of the moire generated in the captured image, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0114] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition performs a first adjustment based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and further performs a second adjustment based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment. Accordingly, first, moire is generated in the captured image, the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the moire generated next, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
[0115] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is subjected to a first adjustment based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and is further subjected to a second adjustment based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment, so that the period of the moire becomes longest. Accordingly, moire is first generated in the captured image; next, the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
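The two-stage adjustment can be sketched numerically. As an illustrative model (an assumption, not a formula given in the disclosure), suppose the moire arises as a beat between the pattern's spatial frequency 1/I.sub.PH and half the sampling frequency 1/(2·S.sub.H); the moire period is then longest when I.sub.PH approaches twice the pixel width:

```python
def moire_period(pattern_period: float, pixel_width: float) -> float:
    """Illustrative beat model: moire period (same units as the inputs)
    between the pattern frequency 1/I_PH and half the sampling frequency
    1/(2*S_H). Returns infinity when I_PH == 2*S_H (no beat: longest moire)."""
    beat = abs(1.0 / pattern_period - 1.0 / (2.0 * pixel_width))
    return float('inf') if beat == 0.0 else 1.0 / beat

def second_adjustment(pixel_width: float, candidates):
    """Second adjustment: pick the candidate pattern period whose
    moire period on the captured image is longest."""
    return max(candidates, key=lambda p: moire_period(p, pixel_width))

# First adjustment: keep candidate periods within 150%-250% of the pixel width.
s_h = 1.0  # pixel width S_H (arbitrary units)
candidates = [p for p in (1.6, 1.8, 2.0, 2.2, 2.4) if 1.5 <= p / s_h <= 2.5]
# Second adjustment: the longest moire period selects I_PH = 2*S_H.
print(second_adjustment(s_h, candidates))  # 2.0
```

Under this toy model the second adjustment converges on the center of the 150%–250% range, which is consistent with the ratio condition stated for the first adjustment.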
[0116] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and the imaging condition is changed by adjusting an optical condition of the imaging device including an angle of view. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0117] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and the imaging condition is changed by adjusting the pattern period width (I.sub.PH) of the pattern. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient even in a camera without a zoom function.
[0118] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and the imaging condition is changed by adjusting the pattern period width (I.sub.PH) of the pattern. Further, the pattern period width (I.sub.PH) is changed in accordance with a magnitude of a gas density gradient in the observation target area. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship in accordance with the magnitude of the gas density gradient of the observation target area, it is possible to change the detection sensitivity in accordance with the magnitude of the fluid density gradient without using a special optical system.
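Adjusting the pattern period width rather than the camera optics can be sketched with a simple pinhole-camera geometry. This model, the function name, and the sample numbers are illustrative assumptions (they do not appear in the disclosure); the sketch computes the physical stripe period to place on the background so that the pattern period width I.sub.PH on the sensor equals a target number of pixel widths S.sub.H:

```python
def background_stripe_period(pixel_pitch_m: float,
                             focal_length_m: float,
                             distance_m: float,
                             target_period_px: float = 2.0) -> float:
    """Pinhole-camera sketch: return the physical stripe period [m] on the
    background so that the imaged pattern period equals `target_period_px`
    pixel widths. Image-plane size = object size * focal_length / distance."""
    return target_period_px * pixel_pitch_m * distance_m / focal_length_m

# Example: 3.45 um pixels, 50 mm lens, background 2 m from the camera.
period = background_stripe_period(3.45e-6, 0.050, 2.0)
print(f"{period * 1e3:.3f} mm")  # 0.276 mm
```

A coarser stripe period (a larger `target_period_px`) could then be chosen for a larger expected gas density gradient, lowering the sensitivity, and a finer one for a smaller gradient, raising it.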
[0119] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and a direction in which the pattern periodically changes is further determined in accordance with a direction of a gas density gradient in the observation target area. Accordingly, even when the direction of the density gradient of the measurement target changes, the detection sensitivity of the fluid density gradient can be improved by maintaining an appropriate relationship between the width of the pixel of the captured image and the pattern period width.
[0120] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image. Further, the background image is divided into a plurality of sections, and the direction of the pattern is determined for each section in accordance with a direction of a gas density gradient in the observation target area. Accordingly, even when there are a plurality of directions of the density change of the measurement target in an imaging target area, it is possible to maintain an appropriate relationship between the width of the pixel of the captured image and the pattern period width according to each direction, and it is possible to improve the detection sensitivity of the fluid density gradient.
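One plausible rule for setting the pattern direction per section, stated here as an assumption rather than as the disclosure's method, is to orient the stripes perpendicular to each section's expected density-gradient direction, so that the pattern varies periodically along the gradient:

```python
import math

def stripe_orientation_deg(grad_x: float, grad_y: float) -> float:
    """Illustrative rule: orient stripes perpendicular to the local
    density-gradient direction so the pattern changes periodically along
    the gradient. Returns the stripe angle in degrees from the x-axis,
    folded into [0, 180)."""
    gradient_angle = math.degrees(math.atan2(grad_y, grad_x))
    return (gradient_angle + 90.0) % 180.0

# Divide the background into sections and set each section's stripe
# direction from that section's expected gradient direction.
section_gradients = {"upper": (0.0, 1.0),   # vertical gradient (e.g. a plume)
                     "lower": (1.0, 0.0)}   # horizontal gradient (e.g. a draft)
for name, (gx, gy) in section_gradients.items():
    print(name, stripe_orientation_deg(gx, gy))
```

A vertical gradient thus yields horizontal stripes (angle near 0°) and a horizontal gradient yields vertical stripes (angle near 90°), keeping the pixel-width/pattern-period relationship aligned with each section's gradient.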
[0121] A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. Adjustment of the imaging condition is performed in a state where a focus of the imaging device is adjusted to the background image. First, a first adjustment is performed based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image, and further a second adjustment is performed based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment. Accordingly, since the observation target area is captured under the appropriate imaging condition in which the relationship between the width of the pixel of the captured image and the pattern period width is determined by the period of the moire generated in the captured image, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0122] A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; and an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
[0123] A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit; and a control unit configured to change an optical condition of the imaging unit. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image. The control unit adjusts the optical condition such that a period of moire of the captured image is longest. Accordingly, moire is first generated in the captured image; next, the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
[0124] A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit; and a background creation unit configured to change the background image. The imaging condition is determined based on a relationship between a width (S.sub.H) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (I.sub.PH) of the pattern on the captured image. The background creation unit adjusts the background image such that a period of moire of the captured image is longest. Accordingly, moire is first generated in the captured image; next, the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved even in a camera without a zoom function.
INDUSTRIAL APPLICABILITY
[0125] The present disclosure can improve the detection sensitivity of a method of visualizing a density gradient of an airflow using a general-purpose camera, and is useful for a fluid density gradient detection method and a fluid density gradient detection system that visualize minute airflows and temperature changes in an indoor space.