PHOTOGRAMMETRIC SOIL DENSITY SYSTEM AND METHOD
20170260711 · 2017-09-14
Inventors
- Ernest S. Berney, IV (Vicksburg, MS, US)
- Thad C. Pratt (Vicksburg, MS, US)
- Naveen B. Ganesh (Vicksburg, MS, US)
CPC Classification
E02D1/027
FIXED CONSTRUCTIONS
International Classification
G06F17/16
PHYSICS
Abstract
The present invention is an apparatus which executes a photogrammetry method for calculating soil density. After a user excavates soil, measures the mass of the excavated soil and takes multiple images of the excavation site in combination with a calibration object, a data processor uses the values obtained from the collected images to create a point cloud data object. The processor uses this point cloud data object to create a visual representation of the hole. The processor rotates and scales the visual representation. The processor also uses the point cloud data object in volumetric calculations to determine the volume of the hole. Together with the soil mass, the volume allows calculation of soil density.
Claims
1. An apparatus for analyzing soil density utilizing a user-selected ground plane, comprised of: a data processor configured with software to perform a photogrammetry method, said photogrammetry method comprising the steps of: receiving a soil mass value M of excavated soil; receiving a variant image set of a calibration object and excavation site; creating a point cloud from said variant image set and at least one camera data value; instantiating a point cloud data object with point cloud data values to display a visual representation of said excavation site and said calibration object on a graphic user interface (GUI); updating said point cloud data values using an autorotation method to orient said visual representation on said GUI; updating said point cloud data values using a scaling method to scale said visual representation on said GUI; displaying a visual representation of said point cloud data object on said GUI; receiving plane coordinate values for a user-selected ground plane; calculating an excavated volume V.sub.h using a cubic volumetric method, wherein said plane coordinate values are utilized to define at least one boundary of a volumetric cube utilized in said cubic volumetric method; calculating a soil density value D using said excavated volume V.sub.h and said soil mass value M; and displaying said soil density value D on said GUI.
2. The method of claim 1, wherein said photogrammetry method further comprises the step of receiving updated plane coordinate values and re-calculating said soil density value D.
3. The method of claim 2, wherein said photogrammetry method further comprises the step of calculating a comparative soil density by repeating said method using a different soil mass value M, a different variant image set and different plane coordinate values.
4. The method of claim 1 wherein said photogrammetry method further comprises the step of iteratively receiving updated plane coordinate values and iteratively re-calculating said soil density value D.
5. The method of claim 1, wherein said photogrammetry method further comprises the step of updating said plurality of pixel x-coordinate data values, said plurality of pixel y-coordinate data values and said plurality of pixel z-coordinate data values of said point cloud data object using said autorotation method.
6. The method of claim 1, wherein said photogrammetry method further comprises the step of receiving an input moisture value for a moisture content ω of said excavated soil.
7. The method of claim 6, wherein said photogrammetry method further comprises the step of adjusting said value for mass M of said excavated soil based on said value for moisture content ω of said excavated soil.
8. The method of claim 1, wherein said autorotation method comprises the steps of: extracting a largest pixel y-coordinate data value y.sub.max and a smallest pixel y-coordinate data value y.sub.min from said point cloud data object, along with a plurality of corresponding pixel z-coordinates, z.sub.ymax and z.sub.ymin, respectively; updating each of said plurality of pixel x-coordinate data values, said plurality of y-coordinate data values and said plurality of z-coordinate data values in said point cloud data object with an updated pixel x-coordinate data value x′.sub.n, updated pixel y-coordinate data value y′.sub.n and updated pixel z-coordinate data value z′.sub.n, respectively, using the equation:
9. The method of claim 8, wherein both or either of said x-axis angle of adjustment θ.sub.x and said y-axis angle of adjustment θ.sub.y are predetermined or entered manually.
10. The method of claim 8, wherein at least one of said x-axis angle of adjustment θ.sub.x and said y-axis angle of adjustment θ.sub.y is calculated using at least one of the equations:
11. The method of claim 1, wherein said scaling method comprises the steps of: instantiating a scaling data object; updating said scaling data object with data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S, wherein said scale value S is a known quantity; calculating a coordinate distance C between an inner point and an outer point using the equation:
C=√((x.sub.o−x.sub.i).sup.2+(y.sub.o−y.sub.i).sup.2); calculating a scaling factor F.sub.s using the equation:
F.sub.s=S/C; updating each of said plurality of pixel x-coordinate data values, each of said plurality of pixel y-coordinate data values and each of said plurality of pixel z-coordinate data values by multiplying each of said plurality of pixel x-coordinate data values, each of said plurality of pixel y-coordinate data values and each of said plurality of pixel z-coordinate data values in said point cloud data object by said scaling factor F.sub.s.
12. The method of claim 11, wherein said scale value S is a distance between two opposing sides of said calibration object.
13. The method of claim 11, wherein said scale value S is a distance between two defined points on an object.
14. The method of claim 11, wherein a user manually enters said data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S.
15. The method of claim 11, wherein at least one of said data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S is entered manually and at least one of said data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S is entered by clicking on a point on said visual representation of said point cloud data object on said GUI.
16. The method of claim 1, wherein said cubic volumetric method comprises the steps of: instantiating a perimeter data object; updating said perimeter data object with point cloud information extracted from said point cloud, wherein said point cloud information includes data values for a plurality of pixel identifiers, a plurality of pixel x-coordinates and a plurality of pixel y-coordinates of pixels that intersect said ground plane; extracting a largest perimeter x-coordinate data value x.sub.pmax, a smallest perimeter x-coordinate data value x.sub.pmin, a largest perimeter y-coordinate data value y.sub.pmax and a smallest perimeter y-coordinate data value y.sub.pmin from said perimeter data object; creating said volumetric cube having outer boundaries defined by said largest perimeter x-coordinate data value x.sub.pmax, said smallest perimeter x-coordinate data value x.sub.pmin, said largest perimeter y-coordinate data value y.sub.pmax, said smallest perimeter y-coordinate data value y.sub.pmin, z-axis intersection point z.sub.gp, and said smallest z-coordinate data value z.sub.min from said point cloud data object; dividing said volumetric cube symmetrically into a cube grid comprising a plurality of sub-cubes having identical volume, wherein each of said plurality of sub-cubes is located between said largest perimeter x-coordinate data value x.sub.pmax, said smallest perimeter x-coordinate data value x.sub.pmin, said largest perimeter y-coordinate data value y.sub.pmax, said smallest perimeter y-coordinate data value y.sub.pmin, z-axis intersection point z.sub.gp, and said smallest z-coordinate data value z.sub.min, wherein boundaries of each of said plurality of sub-cubes are defined by a largest perimeter x-coordinate data value x.sub.cmax, a smallest perimeter x-coordinate data value x.sub.cmin, a largest perimeter y-coordinate data value y.sub.cmax, a smallest perimeter y-coordinate data value y.sub.cmin, a largest perimeter z-coordinate data value 
z.sub.cmax and a smallest perimeter z-coordinate data value z.sub.cmin; instantiating a cube grid data object; updating said cube grid data object with cube grid information extracted from said cube grid, wherein said cube grid information includes data values for a plurality of sub-cube identifier, a plurality of sub-cube volume, a plurality of largest sub-cube perimeter x-coordinate data values x.sub.cmax, a plurality of smallest sub-cube perimeter x-coordinate data values x.sub.cmin, a plurality of largest sub-cube perimeter y-coordinate data values y.sub.cmax, a plurality of smallest sub-cube perimeter y-coordinate data values y.sub.cmin, a plurality of largest sub-cube perimeter z-coordinate data values z.sub.cmax and a plurality of smallest sub-cube perimeter z-coordinate data values z.sub.cmin; removing from said cube grid data object all data values for any sub-cubes located directly between said point cloud and said volumetric cube, as determined by said largest sub-cube perimeter x-coordinate data value x.sub.cmax, said smallest sub-cube perimeter x-coordinate data value x.sub.cmin, said largest sub-cube perimeter y-coordinate data value y.sub.cmax, said smallest sub-cube perimeter y-coordinate data value y.sub.cmin, said largest sub-cube perimeter z-coordinate data value z.sub.cmax and said smallest sub-cube perimeter z-coordinate data value z.sub.cmin; and calculating said excavated volume V.sub.h by summing all sub-cube volume data values remaining in said cube grid data object.
17. The method of claim 16, wherein said cubic volumetric method further comprises the step of updating said sub-cube volume data value of any sub-cubes that pass through pixels in said point cloud to exclude any volume located directly between said point cloud and said volumetric cube.
18. The method of claim 16, wherein said number of sub-cubes may be entered manually, selected from a menu or preprogrammed.
19. The apparatus of claim 1, further comprising a calibration object, wherein said calibration object has a low-reflective coating.
20. The apparatus of claim 1, further comprising a calibration object, wherein said calibration object is a flat, annular ring having known inner and outer diameters.
Description
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWING(S)
[0011]
[0012]
[0013]
[0014]
[0015]
TERMS OF ART
[0016] As used herein, the term “autorotation method” means a method for adjusting the relative angulation of a visual representation.
[0017] As used herein, the term “calibration object” means an object of known dimensions.
[0018] As used herein, the term “cubic volumetric method” means a method for determining the volume of an excavation site using at least one volumetric cube.
[0019] As used herein, the term “excavated volume” means a volume of soil removed from an excavation site.
[0020] As used herein, the term “excavation site” means a location from which soil is removed.
[0021] As used herein, the term “plane coordinate values” means the values of coordinates of a ground plane.
[0022] As used herein, the term “point cloud” means a data set containing three-dimensional information extracted from two or more images of the same object. This information may include, but is not limited to, a quasi-unique set of coordinate values along the X-, Y- and Z-axes and/or a quasi-unique set of color levels using red, green and blue levels, denoted as R, G and B, respectively.
[0023] As used herein, the term “scaling method” means a method for adjusting the relative location of points in a visual representation.
[0024] As used herein, the term “soil density” means the bulk density of soil.
[0025] As used herein, the term “soil mass value” means the mass of an excavated volume of soil.
[0026] As used herein, the term “variant image set” means more than one image of the same subject matter wherein each image is adjusted for a particular variable including but not limited to magnification, elevation and angle.
[0027] As used herein, the term “user-selected ground plane” means a horizontal plane selected by a user and extending parallel to a ground surface, intersecting a z-axis at a z-axis intersection point z.sub.gp.
[0028] As used herein, the term “visual representation” means data displayed visually.
[0029] As used herein, the term “volumetric cube” means a cube which contains the boundaries of a visual representation of an excavation site.
DETAILED DESCRIPTION OF THE INVENTION
[0030]
[0031] Calibration object 10 is any object having a known size. In the exemplary embodiment, calibration object 10 is a flat, annular ring having known inner and outer diameters. The respective dimensions of calibration object 10 vary based on the application, but are large enough to accommodate removal of soil to create an excavation site. In the exemplary embodiment, calibration object 10 is brightly colored to increase contrast and visibility against soil. In certain embodiments, calibration object 10 has a low-reflective coating.
[0032] Container 20 is a container having known mass, of sufficient volume to hold all soil removed to create the excavation site. In various embodiments, container 20 is a bowl, box or bag. Placing the soil in container 20 allows measurement of soil mass using scale 30.
[0033] Imaging apparatus 40 is a digital imaging apparatus capable of capturing multiple images of the excavation site which form a variant image set. In various embodiments, imaging apparatus 40 is a cell phone camera, still camera, video camera or scanner. Imaging apparatus 40 is connected to data processor 50 or a removable data storage unit, allowing transfer of the images. The connection may be wired or wireless. Optionally, imaging apparatus 40 may include a light source to illuminate the excavation site in low-light conditions.
[0034] Data processor 50 is configured with software allowing it to process the images received and determine soil density using a photogrammetry method 200. In the exemplary embodiment, data processor 50 is a laptop computer with a user interface 51.
[0035]
[0036] In step 202, method 200 places calibration object 10 on the upper surface of a soil.
[0037] In step 204, method 200 excavates soil to form an excavation site. In the exemplary embodiment, the excavation site is a hole having a convex, substantially conical shape.
[0038] In step 206, method 200 places all excavated soil within container 20. In the exemplary embodiment, steps 204 and 206 occur simultaneously.
[0039] In step 208, method 200 obtains a soil mass value M of the excavated soil using scale 30.
[0040] In step 210, method 200 creates a variant image set from digital images of the excavation site from multiple angles, magnifications and/or elevations. These digital images show both the excavation site and calibration object 10. In the exemplary embodiment, step 210 creates at least 16 images: eight images at a first magnification every 45 degrees and eight images at a second, increased magnification every 45 degrees.
[0041] In step 212, method 200 opens a graphic user interface (GUI).
[0042] In step 214, method 200 receives at least one camera data value. Camera data values are any metadata describing the camera configuration when the digital images were created. Camera data values may include, but are not limited to, camera make and model, lens aperture, focal length, camera shutter speed, exposure program, focal ratio, lens type, metering mode, flash configuration and ISO sensitivity. These may be entered by a user or automatically retrieved from the digital images.
[0043] In step 216, method 200 creates a point cloud from information extracted from each digital image, as well as the camera data values. The point cloud is a plurality of pixels extracted from each digital image. Each pixel has a quasi-unique set of coordinate values along the X-, Y- and Z-axes. Each pixel also has a quasi-unique set of color levels using red, green and blue levels, denoted as R, G and B, respectively.
[0044] In step 218, method 200 instantiates a point cloud data object.
[0045] In step 220, method 200 updates the point cloud data object with point cloud information extracted from the point cloud. This information includes data values for the pixel identifier, pixel x-coordinate, pixel y-coordinate, pixel z-coordinate, pixel R-level, pixel G-level and pixel B-level.
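The point cloud data object of steps 218 and 220 can be sketched as follows. This is an illustrative Python model only; the class and field names are assumptions of this sketch, not identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PointCloudPoint:
    """One pixel extracted from the variant image set (step 216)."""
    pixel_id: int
    x: float  # pixel x-coordinate
    y: float  # pixel y-coordinate
    z: float  # pixel z-coordinate
    r: int    # red color level
    g: int    # green color level
    b: int    # blue color level

@dataclass
class PointCloudDataObject:
    """Container instantiated in step 218 and populated in step 220."""
    points: list = field(default_factory=list)

    def update(self, point_cloud_info):
        """Append (id, x, y, z, R, G, B) records extracted from the point cloud."""
        for pid, x, y, z, r, g, b in point_cloud_info:
            self.points.append(PointCloudPoint(pid, x, y, z, r, g, b))
```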
[0046] In step 222, method 200 updates the pixel x-coordinate data values, pixel y-coordinate data values and pixel z-coordinate data values of the point cloud data object using autorotation method 300.
[0047] In step 224, method 200 displays a visual representation of the excavation site and calibration object on the GUI using the point cloud data object. Due to the use of autorotation method 300 in step 222, the visual representation of the surface of the point cloud will appear to be perpendicular to the screen.
[0048] In optional step 226, method 200 updates the pixel x-coordinate data values, pixel y-coordinate data values and pixel z-coordinate data values of the point cloud data object using input values for θ.sub.x and/or θ.sub.y for steps 306 and/or 312 of autorotation method 300. The input values for θ.sub.x and/or θ.sub.y may be predetermined or entered manually.
[0049] In step 228, method 200 updates the pixel x-coordinate data values, pixel y-coordinate data values and pixel z-coordinate data values of the point cloud data object using scaling method 400.
[0050] In step 230, method 200 displays an updated visual representation on the GUI.
[0051] In step 232, method 200 receives plane coordinate values for a user-selected ground plane and calculates an excavated volume V.sub.h using cubic volumetric method 500.
[0052] In step 234, method 200 receives an input soil mass value M for the mass of the excavated soil.
[0053] In optional step 236, method 200 receives an input value for moisture content ω of the excavated soil.
[0054] In optional step 238, method 200 adjusts the value for mass M of the excavated soil based on the value for moisture content ω of the excavated soil using the following equation:
M=ω*M.sub.i
where M.sub.i is the original mass.
[0055] In step 240, method 200 calculates a soil density value D using the excavated volume V.sub.h of the excavation site and soil mass value M.
[0056] In step 242, method 200 outputs the soil density value D.
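The density calculation of steps 240 and 242 reduces to a single division, D = M / V.sub.h. A minimal sketch follows; the function name and units are illustrative assumptions, not drawn from the patent.

```python
def soil_density(mass_m: float, volume_vh: float) -> float:
    """Soil density D = M / V_h, e.g. grams per cubic centimeter (step 240)."""
    if volume_vh <= 0:
        raise ValueError("excavated volume must be positive")
    return mass_m / volume_vh

# e.g. 3150 g of excavated soil from a 1500 cm^3 hole:
# soil_density(3150.0, 1500.0) -> 2.1 g/cm^3
```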
[0057] In certain embodiments, method 200 repeats steps 232-242 to obtain a new soil density value D. These steps may be iteratively repeated to provide multiple potential soil density values D. In certain embodiments, method 200 repeats steps 202-242 to obtain a comparative soil density value D for a different excavation site.
[0058]
[0059] In step 302, method 300 extracts the largest pixel y-coordinate data value y.sub.max and the smallest pixel y-coordinate data value y.sub.min from the point cloud data object, along with the corresponding pixel z-coordinates, z.sub.ymax and z.sub.ymin, respectively.
[0060] In step 304, method 300 calculates an x-axis angle of adjustment θ.sub.x using the following equation:
θ.sub.x=tan.sup.−1[(z.sub.ymax−z.sub.ymin)/(y.sub.max−y.sub.min)]
[0061] In step 306, method 300 updates each pixel x-coordinate, y-coordinate and z-coordinate data value in the point cloud data object with an updated pixel x-coordinate data value x′.sub.n, updated pixel y-coordinate data value y′.sub.n and updated pixel z-coordinate data value z′.sub.n, respectively, using the equation:
x′.sub.n=x.sub.n
y′.sub.n=y.sub.n cos θ.sub.x−z.sub.n sin θ.sub.x
z′.sub.n=y.sub.n sin θ.sub.x+z.sub.n cos θ.sub.x
[0062] where x.sub.n is the current pixel x-coordinate data value in the point cloud data object, y.sub.n is the current pixel y-coordinate data value in the point cloud data object, z.sub.n is the current pixel z-coordinate data value in the point cloud data object and n is the index of the pixel being updated.
[0063] In step 308, method 300 extracts the largest pixel x-coordinate data value x.sub.max and the smallest pixel x-coordinate data value x.sub.min from the point cloud data object, along with the corresponding pixel z-coordinates, z.sub.xmax and z.sub.xmin, respectively.
[0064] In step 310, method 300 calculates a y-axis angle of adjustment θ.sub.y using the following equation:
θ.sub.y=tan.sup.−1[(z.sub.xmax−z.sub.xmin)/(x.sub.max−x.sub.min)]
[0065] In step 312, method 300 updates each pixel x-coordinate, y-coordinate and z-coordinate data value in the point cloud data object with an updated pixel x-coordinate data value x′.sub.n, updated pixel y-coordinate data value y′.sub.n and updated pixel z-coordinate data value z′.sub.n, respectively, using the equation:
x′.sub.n=x.sub.n cos θ.sub.y+z.sub.n sin θ.sub.y
y′.sub.n=y.sub.n
z′.sub.n=−x.sub.n sin θ.sub.y+z.sub.n cos θ.sub.y
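Autorotation method 300 can be sketched in a few lines of Python. This sketch assumes a standard rotation about the x-axis, with the tilt angle estimated from the pixels at the extreme y-coordinates; the patent's exact equations appeared in its figures and may differ in sign convention.

```python
import math

def autorotate_x(points):
    """Level the point cloud about the x-axis (steps 302-306, approximated).

    `points` is a list of [x, y, z] coordinates. The tilt angle theta_x is
    estimated from the pixels having the largest and smallest y-coordinates
    (y_max, y_min) and their corresponding z-coordinates (z_ymax, z_ymin),
    then every point is rotated by -theta_x about the x-axis so the surface
    between those extremes becomes level.
    """
    y_max_pt = max(points, key=lambda p: p[1])
    y_min_pt = min(points, key=lambda p: p[1])
    theta_x = math.atan2(y_max_pt[2] - y_min_pt[2], y_max_pt[1] - y_min_pt[1])
    c, s = math.cos(-theta_x), math.sin(-theta_x)
    # Standard x-axis rotation: x is unchanged, y and z are mixed.
    return [[x, y * c - z * s, y * s + z * c] for x, y, z in points]
```

The same function applied to x-extremes (swapping the roles of x and y) gives the y-axis adjustment of steps 308-312.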
[0066]
[0067] In step 402, method 400 instantiates a scaling data object.
[0068] In step 404, method 400 updates the scaling data object with data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S. Scale value S is a known quantity. In the exemplary embodiment, scale value S is the distance between two opposing sides of calibration object 10. In other embodiments, scale value S may be a distance along a linear object, or a distance between two defined points on an object.
[0069] In one embodiment, a user manually enters the data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S. In another embodiment, at least one of the data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S is entered manually and at least one of the data values for outer point x-coordinate x.sub.o, outer point y-coordinate y.sub.o, inner point x-coordinate x.sub.i, inner point y-coordinate y.sub.i and a scale value S is entered by clicking on a point on the visual representation of the point cloud data object on the GUI.
[0070] In step 406, method 400 calculates a coordinate distance C between an inner point and an outer point using the equation:
C=√((x.sub.o−x.sub.i).sup.2+(y.sub.o−y.sub.i).sup.2)
[0071] In step 408, method 400 calculates a scaling factor F.sub.s using the equation:
F.sub.s=S/C
[0072] In step 410, method 400 updates data values for the pixel x-coordinate, pixel y-coordinate, pixel z-coordinate by multiplying each pixel x-coordinate data value, pixel y-coordinate data value and pixel z-coordinate data value in the point cloud data object by scaling factor F.sub.s.
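Scaling method 400 (steps 402-410) can be sketched as follows; a minimal Python version in which the function name and argument order are illustrative assumptions.

```python
import math

def scale_point_cloud(points, x_o, y_o, x_i, y_i, scale_s):
    """Scale the point cloud to real-world units (steps 402-410).

    C is the coordinate distance between the chosen outer and inner points
    (step 406); F_s = S / C converts model units to the known distance S,
    such as the calibration ring's diameter (step 408). Every pixel
    coordinate is then multiplied by F_s (step 410).
    """
    c = math.hypot(x_o - x_i, y_o - y_i)
    f_s = scale_s / c
    return [[x * f_s, y * f_s, z * f_s] for x, y, z in points]
```

For example, if the two clicked points are 5 model units apart but are known to span a 10 cm calibration distance, every coordinate is doubled.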
[0073]
[0074] In step 502, method 500 receives plane coordinate values for a user-selected ground plane. In the exemplary embodiment, a user utilizes a slider bar on a GUI to move a visual plane representation along the Z-axis through the visual representation of the excavation site and calibration object. The user moves the slider bar until the ground plane intersects the perimeter of the excavation site.
[0075] In step 504, method 500 instantiates a perimeter data object.
[0076] In step 506, method 500 updates the perimeter data object with information extracted from the point cloud data object. This information includes data values for the pixel identifier, pixel x-coordinate and pixel y-coordinate of pixels that intersect the user-selected ground plane.
[0077] In step 508, method 500 extracts the largest perimeter x-coordinate data value x.sub.pmax, the smallest perimeter x-coordinate data value x.sub.pmin, the largest perimeter y-coordinate data value y.sub.pmax and the smallest perimeter y-coordinate data value y.sub.pmin from the perimeter data object.
[0078] In step 510, method 500 creates a volumetric cube having outer boundaries defined by the largest perimeter x-coordinate data value x.sub.pmax, the smallest perimeter x-coordinate data value x.sub.pmin, the largest perimeter y-coordinate data value y.sub.pmax, the smallest perimeter y-coordinate data value y.sub.pmin, z-axis intersection point z.sub.gp, and the smallest z-coordinate data value z.sub.min from the point cloud data object.
[0079] In step 512, method 500 divides the volumetric cube symmetrically into a cube grid comprising a plurality of sub-cubes having identical volume. The number of sub-cubes may be entered manually, selected from a menu or preprogrammed. Each sub-cube is located between the largest perimeter x-coordinate data value x.sub.pmax, the smallest perimeter x-coordinate data value x.sub.pmin, the largest perimeter y-coordinate data value y.sub.pmax, the smallest perimeter y-coordinate data value y.sub.pmin, z-axis intersection point z.sub.gp, and the smallest z-coordinate data value z.sub.min. The boundaries of each sub-cube are defined by a largest perimeter x-coordinate data value x.sub.cmax, a smallest perimeter x-coordinate data value x.sub.cmin, a largest perimeter y-coordinate data value y.sub.cmax, a smallest perimeter y-coordinate data value y.sub.cmin, a largest perimeter z-coordinate data value z.sub.cmax and a smallest perimeter z-coordinate data value z.sub.cmin.
[0080] In step 514, method 500 instantiates a cube grid data object.
[0081] In step 516, method 500 updates the cube grid data object with cube grid information extracted from the cube grid. This information includes data values for the sub-cube identifier, the sub-cube volume, the largest sub-cube perimeter x-coordinate data value x.sub.cmax, the smallest sub-cube perimeter x-coordinate data value x.sub.cmin, the largest sub-cube perimeter y-coordinate data value y.sub.cmax, the smallest sub-cube perimeter y-coordinate data value y.sub.cmin, the largest sub-cube perimeter z-coordinate data value z.sub.cmax and the smallest sub-cube perimeter z-coordinate data value z.sub.cmin.
[0082] In step 518, method 500 discards all data values for any sub-cubes located directly between the point cloud and the volumetric cube, as determined by the largest sub-cube perimeter x-coordinate data value x.sub.cmax, the smallest sub-cube perimeter x-coordinate data value x.sub.cmin, the largest sub-cube perimeter y-coordinate data value y.sub.cmax, the smallest sub-cube perimeter y-coordinate data value y.sub.cmin, the largest sub-cube perimeter z-coordinate data value z.sub.cmax and the smallest sub-cube perimeter z-coordinate data value z.sub.cmin. Method 500 removes all data values for discarded sub-cubes from the cube grid data object.
[0083] In optional step 520, method 500 updates the sub-cube volume data value of any sub-cubes that pass through pixels in the point cloud to exclude any volume located directly between the point cloud and the volumetric cube.
[0084] In step 522, method 500 sums the remaining sub-cube volume data values to calculate the excavated volume V.sub.h.
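The effect of cubic volumetric method 500 can be approximated in Python. This sketch deliberately simplifies the patent's sub-cube bookkeeping into a 2-D column grid: each x-y cell keeps only its deepest point, which is equivalent to discarding all sub-cubes lying between the point cloud surface and the bottom of the volumetric cube. It is an assumption-laden approximation, not the claimed method.

```python
def excavated_volume(points, z_gp, n_cells=50):
    """Approximate the hole volume below ground plane z_gp (steps 502-522).

    The bounding box of all points below the ground plane is divided into
    an n_cells x n_cells grid of columns; each occupied column contributes
    (z_gp - z_floor) * cell_area, where z_floor is the deepest point seen
    in that column. Empty columns contribute nothing.
    """
    below = [p for p in points if p[2] < z_gp]
    if not below:
        return 0.0
    xs = [p[0] for p in below]
    ys = [p[1] for p in below]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    dx = (x_max - x_min) / n_cells or 1.0  # guard against a degenerate axis
    dy = (y_max - y_min) / n_cells or 1.0
    floor = {}  # (i, j) -> deepest z observed in that column
    for x, y, z in below:
        i = min(int((x - x_min) / dx), n_cells - 1)
        j = min(int((y - y_min) / dy), n_cells - 1)
        floor[(i, j)] = min(floor.get((i, j), z), z)
    return sum((z_gp - z) * dx * dy for z in floor.values())
```

A denser point cloud and a finer grid both tighten the approximation, mirroring claim 18's manually selectable number of sub-cubes.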
[0085] It will be understood that many additional changes in the details, materials, procedures and arrangement of parts, which have been herein described and illustrated to explain the nature of the invention, may be made by those skilled in the art within the principle and scope of the invention as expressed in the appended claims.
[0086] It should be further understood that the drawings are not necessarily to scale; instead, emphasis has been placed upon illustrating the principles of the invention. Moreover, the term “substantially” as used herein may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related.