Patent classifications
G01C11/10
Generating measurements of physical structures and environments through automated analysis of sensor data
Introduced here are computer programs and associated computer-implemented techniques for generating measurements of physical structures and environments in an automated manner through analysis of data that is generated by one or more sensors included in a computing device. This can be accomplished by combining insights derived through analysis of different types of data that are generated, computed, or otherwise obtained by a computing device. For instance, a computer program may enable or facilitate measurement of arbitrary dimensions, angles, and square footage of a physical structure based on (i) images generated by an image sensor included in the corresponding computing device and (ii) measurements generated by an inertial sensor included in the same device.
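One way image data and a distance estimate (e.g., derived from inertial data) can combine into a real-world dimension is the pinhole-camera relation: size = pixel extent × distance / focal length in pixels. The sketch below is a minimal illustration of that relation only; the function name and the numbers are hypothetical, not taken from the disclosure:

```python
def object_size(pixel_extent, distance_m, focal_length_px):
    """Estimate an object's real-world extent from its extent in pixels,
    using the pinhole-camera relation: size = pixels * distance / focal."""
    return pixel_extent * distance_m / focal_length_px

# A wall spanning 1200 px, viewed from 3 m with a 1500 px focal length:
width_m = object_size(1200, 3.0, 1500)  # 2.4 m
```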
Geo-positioning
The invention is a method of geo-positioning geographic data for visualization of a geographic area, particularly a mine site, and a device for performing the method. The method includes the steps of: importing two or more data sources having geographic data of the geographic area; selecting a first control in a first data source of the two or more data sources and the same first control in a second data source of the two or more data sources; selecting a second control in the first data source and the same second control in the second data source; and applying an algorithm in a processor to process the first control in the first data source, the first control in the second data source, the second control in the first data source and the second control in the second data source by overlaying, rotating and scaling the data sources until at least the first control in the first data source matches the first control in the second data source and the second control in the first data source matches the second control in the second data source.
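With exactly two matched controls, the overlay-rotate-scale step described above amounts to solving for a 2D similarity transform. The pure-Python sketch below (a hypothetical helper, not the patented algorithm itself) recovers the scale, rotation, and translation that map the two controls of one data source onto their counterparts in the other:

```python
import math

def two_point_alignment(a1, a2, b1, b2):
    """Similarity transform (scale s, rotation theta, translation)
    that maps control points a1 -> b1 and a2 -> b2 exactly."""
    ax, ay = a2[0] - a1[0], a2[1] - a1[1]
    bx, by = b2[0] - b1[0], b2[1] - b1[1]
    s = math.hypot(bx, by) / math.hypot(ax, ay)        # scale factor
    theta = math.atan2(by, bx) - math.atan2(ay, ax)    # rotation angle
    c, si = math.cos(theta), math.sin(theta)
    tx = b1[0] - s * (c * a1[0] - si * a1[1])          # translation fixing a1 -> b1
    ty = b1[1] - s * (si * a1[0] + c * a1[1])

    def warp(p):
        """Apply the recovered transform to any point of the second source."""
        x, y = p
        return (s * (c * x - si * y) + tx, s * (si * x + c * y) + ty)

    return s, theta, warp

# Align the second data source onto the first using the two shared controls:
s, theta, warp = two_point_alignment((0, 0), (1, 0), (2, 2), (2, 4))
# warp maps (0, 0) -> (2, 2) and (1, 0) -> (2, 4)
```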
Portable field imaging of plant stomata
Examples of the disclosure describe systems and methods for identifying, quantifying, and/or characterizing plant stomata. In an example method, a first set of two or more images of a plant leaf representing two or more focal distances is captured via an optical sensor. A reference focal distance is determined based on the first set of images. A second set of two or more images of the plant leaf is captured via the optical sensor, including at least one image captured at a focal distance less than the reference focal distance, and at least one image captured at a focal distance greater than the reference focal distance. A composite image is generated based on the second set of images. The composite image is provided to a trainable feature detector in order to determine a number, density, and/or distribution of stomata in the composite image.
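Choosing the reference focal distance from the first bracket can be framed as picking the capture with the highest focus score. The toy sketch below uses a simple gradient-energy focus measure on nested-list "images"; the function names and data are illustrative assumptions, not the disclosed implementation:

```python
def sharpness(image):
    """Gradient-energy focus measure: sum of squared horizontal pixel
    differences. Higher values indicate a sharper (better-focused) image."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def reference_focal_distance(stack):
    """Pick the focal distance of the sharpest image in the first bracket.
    `stack` is a list of (focal_distance, image) pairs."""
    return max(stack, key=lambda fd_img: sharpness(fd_img[1]))[0]

# Toy one-row "images": the middle capture has the strongest edges.
stack = [(0.10, [[5, 5, 5, 5]]),
         (0.12, [[0, 9, 0, 9]]),
         (0.14, [[4, 5, 4, 5]])]
ref = reference_focal_distance(stack)  # 0.12
```

The second bracket would then be captured at focal distances on both sides of `ref` and fused into the composite fed to the feature detector.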
Systems and Methods for Rapid Alignment of Digital Imagery Datasets to Models of Structures
Systems and methods for aligning digital image datasets to a computer model of a structure. The system receives a plurality of reference images from an input image dataset and identifies common ground control points (GCPs) in the reference images. The system then calculates virtual three-dimensional (3D) coordinates of the measured GCPs. Next, the system calculates and projects two-dimensional (2D) image coordinates of the virtual 3D coordinates into all of the images. Finally, using the projected 2D image coordinates, the system performs spatial resection of all of the images in order to rapidly align all of the images.
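Projecting the virtual 3D coordinates of a GCP into an image follows the standard pinhole model: rotate and translate the point into the camera frame, divide by depth, and apply the focal length and principal point. A minimal sketch under those standard assumptions (not the system's actual code):

```python
def project(point3d, pose, focal_px, cx, cy):
    """Project a world-frame 3D point to 2D pixel coordinates (pinhole model).
    `pose` is (R, t): a 3x3 rotation (row lists) and a translation, world -> camera."""
    R, t = pose
    # Transform into the camera frame: X = R @ point + t
    X = [sum(R[r][k] * point3d[k] for k in range(3)) + t[r] for r in range(3)]
    # Perspective division plus intrinsics
    return (focal_px * X[0] / X[2] + cx, focal_px * X[1] / X[2] + cy)

# Identity pose; a point 2 m ahead and 0.5 m to the right of the optical axis:
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project((0.5, 0.0, 2.0), (I, [0.0, 0.0, 0.0]), 1000.0, 640.0, 480.0)
# (u, v) = (890.0, 480.0)
```

Repeating this projection for every image yields the 2D observations that the final spatial resection step consumes.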
POSITION AND ATTITUDE DETERMINATION METHOD AND SYSTEM USING EDGE IMAGES
A method of determining at least one of position and attitude in relation to an object is provided. The method includes capturing at least two images of the object with at least one camera. Each image is captured at a different position in relation to the object. The images are converted to edge images. The edge images of the object are converted into three-dimensional edge images of the object using the positions at which the at least two images were captured. Overlapping edge pixels in the at least two three-dimensional edge images are located to identify overlap points. A three-dimensional edge candidate point image of the identified overlap points is built in an evidence grid. The three-dimensional candidate edge image in the evidence grid is compared with a model of the object to determine at least one of a then-current position and attitude in relation to the object.
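The evidence-grid step can be pictured as voxelizing the back-projected edge points from each view and keeping only cells supported by more than one view. The sketch below (illustrative cell size, point data, and helper names) shows that accumulation in pure Python:

```python
import math
from collections import Counter

def evidence_grid(points, cell=0.1):
    """Accumulate 3D edge points into a voxel evidence grid keyed by cell index."""
    grid = Counter()
    for x, y, z in points:
        grid[(math.floor(x / cell), math.floor(y / cell), math.floor(z / cell))] += 1
    return grid

def candidate_cells(grid, min_hits=2):
    """Cells hit by edge points from more than one view become 3D edge
    candidate points for comparison against the object model."""
    return {c for c, hits in grid.items() if hits >= min_hits}

view_a = [(0.51, 0.22, 1.03), (0.90, 0.10, 1.40)]   # edges from image 1
view_b = [(0.53, 0.24, 1.01)]                        # edges from image 2
grid = evidence_grid(view_a + view_b)
cands = candidate_cells(grid)   # the overlapping edge falls in cell (5, 2, 10)
```

The surviving candidate cells would then be matched against the object model to recover position and attitude.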
Method for implementing high-precision orientation and evaluating orientation precision of large-scale dynamic photogrammetry system
The present invention provides a method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system, including steps: a) selecting a scale bar, arranging code points at two ends of the scale bar, and performing length measurement on the scale bar; b) evenly dividing a measurement space into multiple regions, sequentially placing the scale bar in each region, and photographing the scale bar by using left and right cameras; d) constraining self-calibration bundle adjustment by using multiple length constraints, with adjustment parameters including the principal point, principal distance, radial distortion, decentering distortion, in-plane distortion, exterior orientation parameters and spatial point coordinates; and e) performing traceable evaluation of orientation precision of the photogrammetry system. The present invention can effectively reduce the relative error in length measurement.
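The traceable evaluation in step e) ultimately reduces to comparing reconstructed scale-bar lengths against their calibrated values. A one-line sketch of the relative length error used in such an evaluation (the numbers are illustrative, not measured results):

```python
def relative_length_error(measured_mm, calibrated_mm):
    """Relative error of a photogrammetric length measurement against
    the traceably calibrated scale-bar length."""
    return abs(measured_mm - calibrated_mm) / calibrated_mm

# Scale bar calibrated at 1000.000 mm, reconstructed at 999.950 mm:
err = relative_length_error(999.950, 1000.000)  # ~5e-5, i.e. 0.005 %
```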