Patent classifications
G06T7/596
Geospatial modeling system providing 3D geospatial model update based upon predictively registered image and related methods
A geospatial modeling system may include a memory and a processor cooperating therewith to generate a three-dimensional (3D) geospatial model including geospatial voxels based upon a plurality of geospatial images, obtain a newly collected geospatial image, and determine a reference geospatial image from the 3D geospatial model using Artificial Intelligence (AI) and based upon the newly collected geospatial image. The processor may further align the newly collected geospatial image and the reference geospatial image to generate a predictively registered image, and update the 3D geospatial model based upon the predictively registered image.
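The abstract describes a three-step pipeline: AI-based selection of a reference image, alignment (predictive registration), and a voxel-model update. Below is a minimal sketch of that flow, assuming cosine similarity over precomputed feature embeddings as the "AI" selection, phase correlation as the alignment, and a per-slice blend as the update; none of these specific choices are stated in the patent.

```python
import numpy as np

def select_reference(new_feat, model_feats):
    """Pick the stored image whose (assumed) AI feature embedding is
    most similar to the newly collected image's embedding."""
    sims = model_feats @ new_feat / (
        np.linalg.norm(model_feats, axis=1) * np.linalg.norm(new_feat) + 1e-9)
    return int(np.argmax(sims))

def register(new_img, ref_img):
    """Toy predictive registration: estimate a whole-pixel translation
    by phase correlation, then shift the new image onto the reference."""
    f = np.fft.fft2(new_img) * np.conj(np.fft.fft2(ref_img))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(new_img, (-dy, -dx), axis=(0, 1))

def update_model(voxels, registered_img, z_slice, alpha=0.5):
    """Blend the registered image into one z-slice of the voxel grid."""
    voxels[z_slice] = (1 - alpha) * voxels[z_slice] + alpha * registered_img
    return voxels
```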
Dynamic-baseline imaging array with real-time spatial data capture and fusion
Spatial image data captured at plural camera modules is fused into rectangular prism coordinates to support rapid processing and efficient network communication. The rectangular prism spatial imaging data is remapped to a truncated pyramid at render time to align with a spatial volume encompassed by a superset of imaging devices. A presentation of a reconstructed field of view is provided with near-field and far-field image capture from the plural imaging devices.
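The key remapping, from fused rectangular-prism coordinates to a truncated pyramid (view frustum) at render time, can be sketched as a depth-dependent lateral scaling. The near/far planes and half-extents below are assumed illustrative parameters, not values from the patent.

```python
def prism_to_frustum(u, v, w, near=1.0, far=10.0, half_w=1.0, half_h=0.75):
    """Remap normalized rectangular-prism coordinates (u, v, w in [0, 1])
    into a truncated pyramid: the lateral extent of each depth slice
    grows linearly between the near and far planes."""
    z = near + w * (far - near)   # depth of the prism slice
    scale = z / near              # frustum widens with depth
    x = (2.0 * u - 1.0) * half_w * scale
    y = (2.0 * v - 1.0) * half_h * scale
    return x, y, z
```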
Virtual photogrammetry
Multiple snapshots of a scene are captured within an executing application (e.g., a video game). When each snapshot is captured, associated color values per pixel and a distance or depth value z per pixel are stored. The depth information from the snapshots is accessed, and a point cloud representing the depth information is constructed. A mesh structure is constructed from the point cloud. The light field(s) on the surface(s) of the mesh structure are calculated. A surface light field is represented as a texture. A renderer uses the surface light field with geometry information to reproduce the scene captured in the snapshots. The reproduced scene can be manipulated and viewed from different perspectives.
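The virtual-photogrammetry pipeline starts by turning each snapshot's per-pixel depth into 3D points. A minimal back-projection step is sketched below, assuming a pinhole camera with intrinsics fx, fy, cx, cy (the abstract does not specify the camera model); meshing and surface-light-field fitting would follow on the merged cloud.

```python
import numpy as np

def depth_to_points(depth, color, fx, fy, cx, cy):
    """Back-project one snapshot's z-buffer into a colored point cloud
    using a pinhole model; fx, fy, cx, cy are assumed intrinsics."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    cols = color.reshape(-1, 3)
    keep = pts[:, 2] > 0          # drop pixels with no geometry behind them
    return pts[keep], cols[keep]

# Point clouds from several snapshots, transformed into a common frame,
# would then feed meshing (e.g. Poisson reconstruction) and the fitting
# of a surface light field stored as a texture.
```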
SYSTEMS AND METHODS FOR GENERATING HIGH-RESOLUTION VIDEO OR ANIMATED SURFACE MESHES FROM LOW-RESOLUTION IMAGES
A system for generating high-resolution video from low-resolution images is configured to access a first video stream and a second video stream capturing an environment. The first video stream is captured by a first video capture device. The second video stream is captured by a second video capture device. Image frames of the first video stream are temporally synchronized with corresponding image frames of the second video stream. The system is also configured to generate a composite video stream with a higher resolution than the first or second video streams. Each composite image frame of the composite video stream is generated using a respective image frame of the first video stream and a temporally synchronized corresponding image frame of the second video stream as input.
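A toy stand-in for the per-frame fusion step is sketched below: it upsamples two temporally synchronized low-resolution frames and averages them. A real implementation would register the frames and apply a learned multi-frame super-resolution model; the simple average only shows the input/output shape of the composite step.

```python
import numpy as np

def composite_frame(frame_a, frame_b, scale=2):
    """Fuse two temporally synchronized low-resolution frames into one
    higher-resolution frame. Nearest-neighbor upsampling plus averaging
    is a placeholder for the actual fusion model."""
    up_a = frame_a.repeat(scale, axis=0).repeat(scale, axis=1)
    up_b = frame_b.repeat(scale, axis=0).repeat(scale, axis=1)
    return (up_a.astype(np.float32) + up_b.astype(np.float32)) / 2.0

def composite_stream(stream_a, stream_b):
    """Pair synchronized frames from the two streams and fuse each pair."""
    return [composite_frame(a, b) for a, b in zip(stream_a, stream_b)]
```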
REAL-TIME OMNIDIRECTIONAL STEREO MATCHING METHOD USING MULTI-VIEW FISHEYE LENSES AND SYSTEM THEREOF
Provided is a real-time omnidirectional stereo matching method in a camera system including a first pair of fisheye cameras, in which first and second fisheye cameras shoot in opposite directions, and a second pair of fisheye cameras, in which third and fourth fisheye cameras shoot in opposite directions, the first pair and the second pair being arranged vertically with respect to each other. The method includes receiving fisheye images of a subject captured through the first to fourth fisheye cameras; selecting, for each pixel of a preset reference fisheye image among the fisheye images, one fisheye camera from among the other fisheye cameras using a sweep volume over preset distance candidates; generating a distance map for all pixels using the reference fisheye image and the fisheye image of the selected camera; and performing real-time stereo matching on the fisheye images using the distance map.
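The per-pixel camera selection over a sweep volume can be sketched as an argmin over precomputed matching costs. The sketch below assumes each camera's costs against the reference image have already been evaluated for D distance candidates (the sweep volume); the cost computation itself is omitted.

```python
import numpy as np

def camera_selection_and_distance(cost_volumes):
    """cost_volumes: dict camera_id -> (D, H, W) array of matching costs
    between the reference fisheye image and that camera over D preset
    distance candidates. Returns the per-pixel camera choice and the
    per-pixel best distance-candidate index."""
    cams = sorted(cost_volumes)
    best_costs = np.stack([cost_volumes[c].min(axis=0) for c in cams])
    best_dists = np.stack([cost_volumes[c].argmin(axis=0) for c in cams])
    pick = best_costs.argmin(axis=0)               # (H, W) index into cams
    h_idx, w_idx = np.indices(pick.shape)
    distance_map = best_dists[pick, h_idx, w_idx]  # (H, W) candidate index
    return pick, distance_map
```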
Depth map generation device
A depth map generation device includes a plurality of image capture pairs, a depth map generation module, and a processor. The depth map generation module is coupled to the plurality of image capture pairs and generates a plurality of depth maps, one for each image capture pair, according to the image pairs those pairs capture. The processor is coupled to the depth map generation module and selectively outputs either a single depth map of the plurality of depth maps or a blended depth map composed of some or all of the plurality of depth maps.
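How the processor blends parts of several maps is not spelled out in the abstract; one plausible per-pixel composition, assuming a confidence map accompanies each depth map, is sketched below.

```python
import numpy as np

def blend_depth_maps(depth_maps, confidences):
    """Compose an output depth map by taking, per pixel, the value from
    the most confident of the per-pair depth maps. The confidence maps
    are an assumption, not a detail from the patent."""
    stack = np.stack(depth_maps)      # (N, H, W)
    conf = np.stack(confidences)      # (N, H, W)
    pick = conf.argmax(axis=0)        # (H, W) winning map per pixel
    h, w = np.indices(pick.shape)
    return stack[pick, h, w]
```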
Method and system for measuring an object by means of stereoscopy
The invention relates to a method and a system for measuring an object (2) by means of stereoscopy, in which a pattern (3) is projected onto the object surface by a projector (9) and the projected pattern (3), designated as a scene, is captured by at least two cameras (4.1, 4.2, 4.3, 4.4). Correspondences of the scene are found in the images captured by the cameras (4.1, 4.2, 4.3, 4.4) by a computing unit (5) using image processing, and the object (2) is measured by means of the correspondences found. According to the invention, the cameras (4.1, 4.2, 4.3, 4.4) are intrinsically and extrinsically calibrated, and a combined two-dimensional and temporal coding is generated during the pattern projection by (a) projecting a (completely) two-dimensionally coded pattern (3) and capturing the scene with the cameras (4.1, 4.2, 4.3, 4.4), and (b) projecting, several times in succession, a temporally coded pattern whose two-dimensional coding differs each time, and capturing several scenes in succession with the cameras (4.1, 4.2, 4.3, 4.4), the capture of each scene being triggered simultaneously across the cameras.
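The temporal part of the coding gives every pixel a code vector across the successively captured scenes, which turns correspondence search into a nearest-code lookup. The sketch below assumes rectified images (correspondences on the same row) and cosine similarity between code vectors; both are illustrative choices, not details from the patent.

```python
import numpy as np

def temporal_signatures(frames):
    """Stack the N successively captured pattern images into a per-pixel
    temporal code vector of length N."""
    return np.stack(frames, axis=-1)                  # (H, W, N)

def match_row(sig_a, sig_b, row):
    """For each pixel in one row of camera A, find the pixel in the same
    row of camera B with the most similar temporal code (cosine
    similarity); assumes rectified images."""
    a = sig_a[row].astype(np.float32)                 # (W, N)
    b = sig_b[row].astype(np.float32)                 # (W, N)
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-9
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-9
    return (a @ b.T).argmax(axis=1)                   # best match per pixel
```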
SELECTIVELY PAIRED IMAGING ELEMENTS FOR STEREO IMAGES
This disclosure describes a configuration of an aerial vehicle, such as an unmanned aerial vehicle (“UAV”), that includes a plurality of cameras that may be selectively combined to form a stereo pair for use in obtaining stereo images that provide depth information corresponding to objects represented in those images. Depending on the distance between an object and the aerial vehicle, different cameras may be selected for the stereo pair based on the baseline between those cameras and a distance between the object and the aerial vehicle. For example, cameras with a small baseline (close together) may be selected to generate stereo images and depth information for an object that is close to the aerial vehicle. In comparison, cameras with a large baseline may be selected to generate stereo images and depth information for an object that is farther away from the aerial vehicle.
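Baseline selection can be reduced to matching the distance-to-baseline ratio against a target. The sketch below assumes cameras laid out along a single axis and treats the target ratio as an arbitrary tuning constant; neither the camera layout nor any specific ratio comes from the patent.

```python
def select_stereo_pair(cameras, object_distance, target_ratio=30.0):
    """Choose the camera pair whose baseline best suits the object
    distance: distant objects need wide baselines for measurable
    disparity, while near objects need narrow ones to stay in both
    views. cameras: list of (camera_id, position_m) along one axis."""
    best, best_err = None, float("inf")
    for i, (id_a, pos_a) in enumerate(cameras):
        for id_b, pos_b in cameras[i + 1:]:
            baseline = abs(pos_b - pos_a)
            err = abs(object_distance / baseline - target_ratio)
            if err < best_err:
                best, best_err = (id_a, id_b), err
    return best
```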
Still-image extracting method and image processing device for implementing the same
A still-image extracting method is disclosed. Frames of an object are extracted as still images from a moving image stream captured continuously over time by a camera that moves relative to the object. First frames are extracted from the moving image stream, and their image capture times are obtained. The image capture positions of the camera at those times are identified based on the first frames. From the identified image capture positions and the obtained image capture times, the capture times of frames taken at positions spaced at equal intervals are estimated. Second frames at the estimated image capture times are then extracted from the moving image stream as frames captured at image capture positions spaced at equal intervals.
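The estimation step amounts to interpolating capture times against distance traveled along the camera path. A minimal version is sketched below, assuming 1D positions (a straight-line pass with monotonic motion) recovered from the first frames; a real camera path would use the arc length of the full 3D trajectory.

```python
import numpy as np

def times_at_equal_spacing(times, positions, n_samples):
    """Interpolate the capture times at which the camera passed positions
    spaced at equal intervals, given the (time, position) samples
    recovered from the first frames."""
    s = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(positions)))])
    targets = np.linspace(0.0, s[-1], n_samples)  # equally spaced positions
    return np.interp(targets, s, times)           # estimated capture times
```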
METHOD AND SYSTEM FOR GENERATING DEPTH INFORMATION
A method and a system including at least three image capturing devices for generating depth information are proposed. Multiple depth maps associated with a specific scene are obtained, where each of the depth maps corresponds to a different group of the image capturing devices and a different estimated region of the specific scene. For each pixel corresponding to the specific scene, whether the pixel is within a joint overlapping region of its estimated region is determined. If no, the depth information of the pixel is set according to its depth value in the depth map corresponding to a non-joint overlapping region of its estimated region. If yes, the depth information of the pixel is set according to its depth values in the depth maps corresponding to the joint overlapping region within its estimated region. An integrated depth map is generated by using the depth information of all the pixels.
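A compact way to express the region logic: pixels covered by a single group's estimated region keep that group's depth, while pixels in a joint overlapping region combine the depths of all covering groups. The sketch below assumes a simple average for the joint region, which the abstract does not specify.

```python
import numpy as np

def fuse_depth(depth_maps, valid_masks):
    """depth_maps: list of (H, W) maps from N device groups; valid_masks:
    matching (H, W) booleans marking each group's estimated region.
    Pixels seen by several groups (joint overlapping region) average
    those depths; pixels seen by one group keep that group's value."""
    depth = np.stack(depth_maps).astype(np.float32)   # (N, H, W)
    mask = np.stack(valid_masks)                      # (N, H, W)
    count = mask.sum(axis=0)                          # covering groups per pixel
    summed = np.where(mask, depth, 0.0).sum(axis=0)
    out = np.zeros_like(summed)
    seen = count > 0
    out[seen] = summed[seen] / count[seen]
    return out
```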