Patent classifications
G01C11/06
TRACKER OF A SURVEYING APPARATUS FOR TRACKING A TARGET
The present invention relates to a tracker and a surveying apparatus comprising the tracker, which improve the reliability of tracking a target. The tracker comprises a first imaging region having a plurality of pixels for taking a first image of a scene including the target; a second imaging region having a plurality of pixels for taking a second image of the scene including the target; a control unit configured to receive a timing signal indicating the time durations during which an illumination unit illuminating the target in the scene is switched on and off, to control the first imaging region to take the first image when the timing signal indicates that the illumination unit is switched on, and to control the second imaging region to take the second image when the illumination unit is switched off; and a read-out unit configured to read out the first image from the first imaging region and the second image from the second imaging region and to obtain a difference image.
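The on/off difference imaging described in this abstract can be sketched as follows. The array size, pixel values, and target location are illustrative assumptions, not taken from the patent; the point is that ambient background appears in both frames and cancels, leaving only the illuminated target:

```python
import numpy as np

def difference_image(lit_frame, dark_frame):
    """Subtract the illumination-off frame from the illumination-on frame.

    Background light is present in both frames and cancels out; the target,
    visible only while the illumination unit is switched on, remains.
    """
    diff = lit_frame.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy 4x4 scene: uniform background of 50; the target pixel at (1, 2)
# adds 200 counts while the illumination is on.
dark = np.full((4, 4), 50, dtype=np.uint8)
lit = dark.copy()
lit[1, 2] = 250

diff = difference_image(lit, dark)
target = np.unravel_index(np.argmax(diff), diff.shape)  # brightest pixel
```

In the difference image only the target pixel is non-zero, so a simple arg-max suffices to locate it for tracking.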
IMAGING RANGE ESTIMATION DEVICE, IMAGING RANGE ESTIMATION METHOD, AND PROGRAM
An imaging range estimation device includes an image data processor configured to acquire image data imaged by a camera device and generate image data with an object name label added, a reference data generator configured to set, by using geographic information, a region within a predetermined distance that is imageable from an estimated position at which the camera device is installed and generate reference data with an object name label added, and an imaging range estimator configured to calculate a concordance rate by comparing a feature indicated by a region of an object name label of the image data with a feature indicated by a region of an object name label of the reference data, and estimate the imaging range of the camera device to be a region of the reference data that corresponds to the image data.
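The concordance-rate comparison described above can be illustrated with a minimal sketch. The per-cell label encoding, region names, and label vocabulary are hypothetical assumptions for illustration; the abstract specifies only that labeled regions of the image data are compared against labeled regions of the reference data:

```python
def concordance_rate(image_labels, reference_labels):
    """Fraction of cells whose object-name label in the image data matches
    the label at the same cell in a candidate reference region."""
    assert len(image_labels) == len(reference_labels)
    matches = sum(a == b for a, b in zip(image_labels, reference_labels))
    return matches / len(image_labels)

# Object-name labels for the camera image, flattened into a cell grid.
image = ["building", "road", "river", "building"]

# Candidate reference regions within imageable distance of the estimated
# camera position, derived from geographic information.
candidates = {
    "region_A": ["building", "road", "river", "building"],
    "region_B": ["road", "road", "river", "building"],
}

# Estimate the imaging range as the reference region with the best match.
best = max(candidates, key=lambda name: concordance_rate(image, candidates[name]))
```

The estimator then reports the geographic extent of `best` as the camera's imaging range.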
System and method to simultaneously track multiple organisms at high resolution
A microscopy system includes multiple cameras working together to capture image data of a sample containing a group of organisms distributed over a wide area, under the influence of an excitation instrument. A first processor is coupled to each camera to process the image data captured by that camera. Outputs from the multiple first processors are aggregated and streamed serially to a second processor for tracking the organisms. Because the multiple cameras capture images of the sample with 50% or more overlap, 3D tracking of the organisms through photogrammetry is possible.
Structural characteristic extraction using drone-generated 3D image data
A structural analysis computing device may generate a proposed insurance claim and/or a proposed insurance quote for an object pictured in a three-dimensional (3D) image. The structural analysis computing device may be coupled to a drone configured to capture exterior images of the object. The structural analysis computing device may include a memory, a user interface, an object sensor configured to capture the 3D image, and a processor in communication with the memory and the object sensor. The processor may access the 3D image including the object and analyze the 3D image to identify features of the object, such as by inputting the 3D image into a trained machine learning or pattern recognition program. The processor may generate a proposed claim form for a damaged object and/or a proposed quote for an uninsured object, and display the form to a user for review and/or approval.
VISUAL POSITIONING DEVICE AND THREE-DIMENSIONAL SURVEYING AND MAPPING SYSTEM AND METHOD BASED ON SAME
Disclosed are a visual positioning device (101) and a three-dimensional surveying and mapping system (100) including at least one visual positioning device (101). The visual positioning device (101) includes an infrared light source (101b), an infrared camera (101a), a signal transceiver module (101d) and a visible light camera (101c). The three-dimensional surveying and mapping system (100) further includes a plurality of position identification points (102), a plurality of active signal points (103) and an image processing server (104). The image processing server (104) is configured to cache the infrared images and real-scene images shot by the infrared camera (101a) and the visible light camera (101c), together with their associated positioning information, and to store a three-dimensional model obtained through reconstruction. The present invention has the advantages of a simple structure, no need for a power supply, convenience in use, and high precision.
Multi-Baseline Camera Array System Architectures for Depth Augmentation in VR/AR Applications
Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array, in order to increase the accuracy of the depth map. Some embodiments also provide a near-IR illumination source for use in computing depth maps.
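The rationale for the wider far-field baseline follows from the standard pinhole stereo relation Z = f·B/d and its error propagation. The sketch below uses illustrative focal length and baseline values, not figures from the patent; it shows why, at a fixed depth, doubling the baseline halves the depth uncertainty caused by a one-pixel disparity error:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d, with f in pixels, B in meters,
    and d the disparity in pixels."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disparity_error_px=1.0):
    """Depth uncertainty for a disparity error of delta_d pixels:
    dZ ~= Z**2 * delta_d / (f * B). The error grows quadratically with
    depth, so far-field objects need the larger baseline B to keep the
    depth map accurate."""
    return depth_m ** 2 * disparity_error_px / (focal_px * baseline_m)

# Illustrative numbers: f = 1000 px; near-field baseline 0.15 m,
# far-field baseline 0.30 m, object at 10 m.
err_narrow = depth_error(1000.0, 0.15, 10.0)
err_wide = depth_error(1000.0, 0.30, 10.0)
```

Here `err_wide` is exactly half of `err_narrow`, which is the accuracy gain the far-field sub-array's wider camera spacing buys.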