G01C11/02

Agricultural pattern analysis system

A pattern recognition system including an image gathering unit that gathers at least one digital representation of a field, an image analysis unit that pre-processes the at least one digital representation of a field, an annotation unit that provides a visualization of at least one channel for each of the at least one digital representation of the field, where the image analysis unit generates a plurality of image samples from each of the at least one digital representation of the field, and the image analysis unit splits each of the image samples into a plurality of categories.
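The abstract does not fix the tile size or the category scheme. As a rough illustration only, the image analysis unit's two roles (generating image samples and splitting them into categories) could be sketched like this, with the classifier left as a caller-supplied function since the patent does not specify one:

```python
def generate_samples(image, tile=64):
    """Split a field image (a list of pixel rows) into non-overlapping
    tile x tile image samples. Tile size is an illustrative assumption."""
    h, w = len(image), len(image[0])
    samples = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            samples.append([row[x:x + tile] for row in image[y:y + tile]])
    return samples

def split_by_category(samples, classify):
    """Group image samples under the labels produced by a caller-supplied
    classifier (the abstract does not name the categories)."""
    categories = {}
    for sample in samples:
        categories.setdefault(classify(sample), []).append(sample)
    return categories
```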

Visual-inertial tracking using rolling shutter cameras

Visual-inertial tracking of an eyewear device using one or more rolling shutter cameras. The eyewear device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the eyewear device. An initial pose is obtained for a rolling shutter camera, and an image of an environment is captured. The image includes feature points, each captured at a particular capture time. A number of poses for the rolling shutter camera is computed based on the initial pose and the sensed movement of the device; the number of computed poses is responsive to the sensed movement. A computed pose is selected for each feature point in the image by matching the feature point's capture time to the computed pose's timestamp. The position of the eyewear device within the environment is then determined using the feature points and their selected computed poses.
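A minimal sketch of the pose-selection step, under assumptions the abstract does not state: rows of a rolling-shutter image are read out at a fixed line interval, so each feature point inherits the capture time of its row, and "matching" is taken to mean nearest timestamp. All names and the line-readout figure are illustrative:

```python
from bisect import bisect_left

def row_capture_time(frame_start_s, row, line_readout_s):
    """With a rolling shutter, each successive row is read out one line
    interval later, so a feature's capture time follows from its row."""
    return frame_start_s + row * line_readout_s

def select_pose(computed_poses, capture_time):
    """From (timestamp, pose) pairs sorted by timestamp, pick the computed
    pose whose timestamp is nearest the feature point's capture time."""
    times = [t for t, _ in computed_poses]
    i = bisect_left(times, capture_time)
    if i == 0:
        return computed_poses[0][1]
    if i == len(times):
        return computed_poses[-1][1]
    (t0, p0), (t1, p1) = computed_poses[i - 1], computed_poses[i]
    return p0 if capture_time - t0 <= t1 - capture_time else p1
```

Faster device motion would warrant more computed poses per frame (finer time resolution), which is one way the pose count can be "responsive to the sensed movement."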

Structural characteristic extraction using drone-generated 3D image data

A structural analysis computing device may generate a proposed insurance claim and/or a proposed insurance quote for an object pictured in a three-dimensional (3D) image. The structural analysis computing device may be coupled to a drone configured to capture exterior images of the object. The structural analysis computing device may include a memory, a user interface, an object sensor configured to capture the 3D image, and a processor in communication with the memory and the object sensor. The processor may access the 3D image including the object, and analyze the 3D image to identify features of the object, such as by inputting the 3D image into a trained machine learning or pattern recognition program. The processor may generate a proposed claim form for a damaged object and/or a proposed quote for an uninsured object, and display the form to a user for review and/or approval.

VISUAL POSITIONING DEVICE AND THREE-DIMENSIONAL SURVEYING AND MAPPING SYSTEM AND METHOD BASED ON SAME
20180005457 · 2018-01-04

Disclosed are a visual positioning device (101) and a three-dimensional surveying and mapping system (100) including at least one visual positioning device (101). The visual positioning device (101) includes an infrared light source (101b), an infrared camera (101a), a signal transceiver module (101d) and a visible light camera (101c). The three-dimensional surveying and mapping system (100) further includes a plurality of position identification points (102), a plurality of active signal points (103) and an image processing server (104). The image processing server (104) is configured to cache the infrared images and real scene images shot by the infrared camera (101a) and the visible light camera (101c), along with their associated positioning information, and to store a three-dimensional model obtained through reconstruction. The system offers a simple structure, no need for a power supply, convenience of use, and high precision, among other advantages.

BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE
20180014002 · 2018-01-11

A method of balancing colors of three-dimensional (3D) points measured by a scanner from a first location and a second location. The scanner measures 3D coordinates and colors of first object points from a first location and second object points from a second location. The scene is divided into local neighborhoods, each containing at least a first object point and a second object point. An adapted second color is determined for each second object point based at least in part on the colors of first object points in the local neighborhood.
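The abstract leaves the adaptation rule open beyond its depending on the colors of first object points in the neighborhood. One simple model consistent with that description, shown purely as an assumption, shifts a second-scan color by the neighborhood's mean per-channel difference between the two scans:

```python
def adapted_color(second_color, first_colors, second_colors):
    """Adapt one second-scan RGB color by the mean per-channel difference
    between first-scan and second-scan colors in the local neighborhood.
    A simple offset model; the patent does not prescribe the exact rule."""
    offset = [
        sum(c[k] for c in first_colors) / len(first_colors)
        - sum(c[k] for c in second_colors) / len(second_colors)
        for k in range(3)
    ]
    return tuple(second_color[k] + offset[k] for k in range(3))
```

Applying this per neighborhood keeps the correction local, so lighting differences between the two scan locations are compensated where they were actually observed.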

LIGHT-EMITTING APPARATUS, CALIBRATION COEFFICIENT CALCULATION METHOD, AND METHOD FOR CALIBRATING CAPTURED IMAGE OF EXAMINATION TARGET ITEM
20180010767 · 2018-01-11

Provided are a light-emitting apparatus that keeps manufacturing cost low and emits light with high uniformity using a simple configuration, a calibration coefficient calculation method using the light-emitting apparatus, and a method for calibrating a captured image of an inspection target object. The apparatus includes a plurality of light-emitting diodes arranged at equal intervals on the circumference of a virtual circle, and a milky-white emission window provided on a top surface portion separated from the light-emitting diodes; the window has an outer edge smaller than the circumference on which the diodes are arranged and allows their light to pass through. The diameter of the virtual circle on which the light-emitting diodes are arranged, and the separation distance between the light-emitting diodes and the emission window, are set to predetermined distances.

Camera triggering and multi-camera photogrammetry

A photogrammetry system includes a memory, a processor, and a geo-positioning device. The geo-positioning device outputs telemetry regarding a vehicle on which one or more cameras are mounted. The processor can receive first telemetry from the geo-positioning device characterizing the vehicle telemetry at a first time, camera specification(s) regarding the cameras, photogrammetric requirement(s) for captured images, and a last camera trigger time. The processor can determine a next trigger time for the cameras based upon the received telemetry, camera specification(s), photogrammetric requirement(s), and last trigger time. The processor can transmit a trigger signal to the camera(s) and the geo-positioning device to cause the camera(s) to acquire images of a target and the geo-positioning device to store second vehicle telemetry data characterizing the vehicle telemetry at a second time that is after the first time and during acquisition of the images. The processor can receive the acquired images from the cameras.
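The abstract does not give the trigger-time formula. A common photogrammetric constraint is forward overlap between consecutive frames; under that assumption, and with a simple pinhole footprint model (all parameter names and figures here are illustrative, not taken from the patent), the next trigger time could be derived as:

```python
def next_trigger_time(last_trigger_s, ground_speed_mps, altitude_m,
                      focal_len_mm, sensor_along_track_mm, forward_overlap=0.7):
    """Time to fire the camera so consecutive images keep the required
    forward overlap. Footprint comes from a pinhole model: ground coverage
    scales with altitude / focal length. Illustrative sketch only."""
    footprint_m = altitude_m * sensor_along_track_mm / focal_len_mm
    advance_m = footprint_m * (1.0 - forward_overlap)   # allowed travel between frames
    return last_trigger_s + advance_m / ground_speed_mps
```

This mirrors the abstract's inputs: vehicle telemetry (speed, altitude), camera specifications (focal length, sensor size), a photogrammetric requirement (overlap), and the last trigger time.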

MATURITY DETERMINATION DEVICE AND MATURITY DETERMINATION METHOD

A maturity determination device includes an image capturing device to capture an image including a plurality of first and second pixels, and a signal processing circuit configured to find an area size ratio of an intensity distribution of light of a first wavelength band, relative to a predetermined reference value, based on pixel values obtained from the plurality of first and second pixels, and to generate maturity determination information in accordance with the area size ratio. Each first pixel includes a first light transmission filter, and each second pixel includes a second light transmission filter. The intensity of light of the first wavelength band reflected by fruits and vegetables varies with the maturity level, while the intensity of light of the second wavelength band reflected by fruits and vegetables is substantially the same regardless of the maturity level.
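One plausible reading of the "area size ratio", offered here as an assumption rather than the patent's exact computation: since the second band is maturity-invariant, it can normalize the first band, and the ratio is the fraction of pixels whose normalized first-band intensity exceeds the reference value:

```python
def maturity_ratio(band1, band2, reference):
    """Fraction of pixels whose first-band intensity, normalized by the
    maturity-invariant second band, exceeds the reference value.
    band1 and band2 are paired per-pixel intensity lists (sketch only)."""
    above = sum(1 for p1, p2 in zip(band1, band2) if p1 / p2 > reference)
    return above / len(band1)
```

The maturity determination information would then follow from comparing this ratio against calibrated thresholds for the produce in question.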