G06T7/596

GEOSPATIAL MODELING SYSTEM PROVIDING 3D GEOSPATIAL MODEL UPDATE BASED UPON ITERATIVE PREDICTIVE IMAGE REGISTRATION AND RELATED METHODS

A geospatial modeling system may include a memory and a processor cooperating therewith to: (a) generate a three-dimensional (3D) geospatial model including geospatial voxels based upon a plurality of geospatial images; (b) select an isolated geospatial image from among the plurality of geospatial images; (c) determine a reference geospatial image from the 3D geospatial model using Artificial Intelligence (AI) and based upon the isolated geospatial image; (d) align the isolated geospatial image and the reference geospatial image to generate a predictively registered image; (e) update the 3D geospatial model based upon the predictively registered image; and (f) iteratively repeat (b)-(e) for successive isolated geospatial images.
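The iterative loop (b)-(f) can be sketched in Python. Everything below is illustrative: the retrieval, alignment, and fusion steps are toy stand-ins (nearest-geolocation lookup, centroid-shift alignment, grid averaging) for the AI-based reference determination, predictive registration, and voxel update the abstract describes; all function and field names are hypothetical.

```python
import numpy as np

def retrieve_reference(model, image_meta):
    """Stand-in for the AI retrieval step (c): pick the model view whose
    geolocation is closest to the isolated image's metadata."""
    views = model["views"]
    dists = [np.hypot(v["lon"] - image_meta["lon"], v["lat"] - image_meta["lat"])
             for v in views]
    return views[int(np.argmin(dists))]

def register(isolated, reference):
    """Toy alignment for step (d): shift the isolated image by the
    brightness-weighted centroid offset between the two images."""
    def centroid(img):
        ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
        w = img.sum() or 1.0
        return np.array([(ys * img).sum() / w, (xs * img).sum() / w])
    shift = centroid(reference) - centroid(isolated)
    return np.roll(isolated, shift.round().astype(int), axis=(0, 1))

def update_model(model, registered):
    """Step (e): fuse the predictively registered image into the model,
    here by averaging it into a height grid standing in for voxels."""
    model["grid"] = 0.5 * model["grid"] + 0.5 * registered
    return model

def iterate(model, isolated_images):
    """Step (f): repeat selection, retrieval, registration, and update
    for each successive isolated geospatial image."""
    for meta, img in isolated_images:
        ref = retrieve_reference(model, meta)
        registered = register(img, ref["grid"])
        model = update_model(model, registered)
    return model
```

Each pass refines the model with one more registered image, so later isolated images are matched against an already-improved reference.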

GEOSPATIAL MODELING SYSTEM PROVIDING 3D GEOSPATIAL MODEL UPDATE BASED UPON PREDICTIVELY REGISTERED IMAGE AND RELATED METHODS

A geospatial modeling system may include a memory and a processor cooperating therewith to generate a three-dimensional (3D) geospatial model including geospatial voxels based upon a plurality of geospatial images, obtain a newly collected geospatial image, and determine a reference geospatial image from the 3D geospatial model using Artificial Intelligence (AI) and based upon the newly collected geospatial image. The processor may further align the newly collected geospatial image and the reference geospatial image to generate a predictively registered image, and update the 3D geospatial model based upon the predictively registered image.

Visual-inertial odometry with an event camera
11295456 · 2022-04-05

The invention relates to a method for generating a motion-corrected image for visual-inertial odometry using an event camera rigidly connected to an inertial measurement unit (IMU), wherein the event camera comprises pixels arranged in an image plane that are configured to output events in the presence of brightness changes in a scene at the time they occur, and wherein each event comprises the time at which it is recorded and the position of the respective pixel that detected the brightness change. The method comprises the steps of: acquiring at least one set of events (S), wherein the at least one set (S) comprises a plurality of subsequent events (e); acquiring IMU data (D) for the duration of the at least one set (S); and generating a motion-corrected image from the at least one set (S) of events (e), wherein the motion-corrected image is obtained by assigning the position (x_j) of each event (e_j), recorded at its corresponding event time (t_j) at an estimated event camera pose (T_tj), to an adjusted event position (x′_j), wherein the adjusted event position (x′_j) is obtained by determining the position of the event (e_j) for an estimated reference camera pose (T_tfk) at a reference time (t_fk), and wherein the estimated camera pose (T_tj) at the event time (t_j) and the reference camera pose (T_tfk) at the reference time (t_fk) are estimated by means of the IMU data (D).
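The event-warping step can be sketched as follows. This is a deliberately reduced model: camera poses are pure 2D pixel translations supplied by a hypothetical `pose_at(t)` function standing in for the IMU-based pose estimates, and events are simple `(t, x, y)` tuples; the real method operates on full 6-DoF poses.

```python
import numpy as np

def motion_corrected_image(events, pose_at, ref_time, shape):
    """Accumulate events into an image after warping each event position
    x_j (recorded at event time t_j) to where it would appear at the
    reference time t_fk, using the pose estimates pose_at(t)."""
    img = np.zeros(shape)
    ref_pose = pose_at(ref_time)
    for t, x, y in events:
        dx, dy = pose_at(t) - ref_pose       # camera motion since t_fk
        xa = int(round(x + dx))              # adjusted event position x'_j
        ya = int(round(y + dy))
        if 0 <= ya < shape[0] and 0 <= xa < shape[1]:
            img[ya, xa] += 1                 # accumulate corrected event
    return img
```

Under constant camera motion, events fired by a static scene point at different times all land on the same corrected pixel, removing the motion blur that plain accumulation would produce.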

OBJECT POSE ESTIMATION IN VISUAL DATA

The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
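A minimal sketch of deriving a pose from a 3D skeleton, assuming fiducials already lifted to 3D. The centroid-plus-longest-edge heuristic and all names are illustrative choices, not the method claimed here.

```python
import numpy as np

def object_pose_from_fiducials(fiducials_3d, skeleton_edges):
    """Given each fiducial's 3D location and the skeleton connectivity,
    estimate a coarse pose: the centroid as position and the direction
    of the longest skeleton edge as heading (illustrative heuristic)."""
    pts = {k: np.asarray(v, float) for k, v in fiducials_3d.items()}
    position = np.mean(list(pts.values()), axis=0)
    # pick the longest edge of the 3D skeleton as the dominant axis
    longest = max(skeleton_edges,
                  key=lambda e: np.linalg.norm(pts[e[1]] - pts[e[0]]))
    axis = pts[longest[1]] - pts[longest[0]]
    heading = axis / np.linalg.norm(axis)
    return position, heading
```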

DEPTH EXTRACTION
20220101549 · 2022-03-31

A computer-implemented method of training a depth uncertainty estimator comprises receiving, at a training computer system, a set of training examples, each training example comprising (i) a stereo image pair and (ii) an estimated disparity map computed from at least one image of the stereo image pair by a depth estimator. The training computer system executes a training process to learn one or more uncertainty estimation parameters of a perturbation function, the uncertainty estimation parameters for estimating uncertainty in disparity maps computed by the depth estimator. The training process is performed by sampling a likelihood function based on the training examples and the perturbation function, thereby obtaining a set of sampled values for learning the one or more uncertainty estimation parameters. The likelihood function measures similarity between one image of each training example and a reconstructed image computed by transforming the other image of that training example based on a possible true disparity map derived from the estimated disparity map of that training example and the perturbation function.
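The sampled likelihood can be sketched in a heavily simplified 1-D form: the perturbation function is reduced to additive Gaussian noise with a single scale parameter `sigma` (the uncertainty parameter being learned), and the reconstruction is a nearest-neighbour horizontal shift. All names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def warp(image, disparity):
    """Reconstruct one view by sampling the other view shifted per-pixel
    by the disparity (nearest-neighbour along rows, toy stand-in)."""
    h, w = image.shape
    xs = np.arange(w)
    out = np.zeros_like(image)
    for r in range(h):
        src = np.clip(xs - disparity[r].astype(int), 0, w - 1)
        out[r] = image[r, src]
    return out

def sampled_log_likelihood(left, right, est_disp, sigma, n_samples=8):
    """Monte-Carlo estimate of the photometric log-likelihood: draw
    possible true disparity maps by perturbing the estimate with noise
    of scale sigma, warp the other image, and score against `left`."""
    total = 0.0
    for _ in range(n_samples):
        true_disp = est_disp + sigma * rng.standard_normal(est_disp.shape)
        recon = warp(right, true_disp)
        total += -np.mean((recon - left) ** 2)   # Gaussian photometric term
    return total / n_samples
```

A training loop would adjust `sigma` so that this sampled likelihood is maximized over the training examples, yielding a calibrated uncertainty for the depth estimator's disparity maps.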

System of vehicle inspection and method thereof
11301981 · 2022-04-12

There are provided a method of vehicle inspection and a system thereof, the method comprising: obtaining a plurality of sets of images capturing a plurality of segments of surface of a vehicle at a plurality of time points; generating, for each time point, a 3D patch using a set of images capturing a corresponding segment at the time point, giving rise to a plurality of 3D patches; estimating 3D transformation of the plurality of 3D patches based on a relative movement between the imaging devices and the vehicle; and registering the plurality of 3D patches using the estimated 3D transformation thereby giving rise to a composite 3D point cloud of the vehicle. The composite 3D point cloud is usable for reconstructing a 3D mesh and/or 3D model of the vehicle where light reflection, comprised in at least some of the plurality of sets of images, is eliminated therefrom.
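The registration step can be sketched as applying each patch's estimated rigid transform and merging the results. The transforms here are given directly; in the method they would be estimated from the relative movement between the imaging devices and the vehicle. Function names are illustrative.

```python
import numpy as np

def rigid_transform(points, R, t):
    """Apply a rigid 3D transformation (rotation R, translation t)
    to an (N, 3) array of points."""
    return points @ R.T + t

def register_patches(patches, transforms):
    """Bring every per-time-point 3D patch into a common vehicle frame
    using its estimated (R, t), then merge into one composite cloud."""
    aligned = [rigid_transform(p, R, t) for p, (R, t) in zip(patches, transforms)]
    return np.vstack(aligned)
```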

Drone with wide frontal field of view
11307583 · 2022-04-19

A drone includes a frame and a plurality of motors attached to the frame. Each motor of the plurality of motors is connected to a respective propeller located below the frame. A tail motor is attached to the frame. The tail motor is connected to a tail propeller located above the frame. Cameras are attached to the frame and located above the frame. The cameras have fields of view extending over the plurality of propellers.

METHOD FOR 3D RECONSTRUCTION OF AN OBJECT
20220068018 · 2022-03-03

The invention relates to a method for 3D reconstruction of an object comprising the following steps: generating a plurality of images of an object by at least one camera; extracting features of the object from the plurality of images; generating a cloud of three dimensional points arranged in a three dimensional model representing the object; identifying the images each of which comprises at least one of a subset of said features; determining a first set of three dimensional points corresponding to the subset of said features and a second set of three dimensional points; determining a mathematical equation which corresponds to a predefined three dimensional geometric structure as a building block of the object by means of the first set and the second set of the three dimensional points; and rendering a three dimensional model of the object by means of at least the predefined three dimensional geometric structure.
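The step of determining a mathematical equation for a predefined geometric structure can be illustrated with a plane, one simple example of such a building block: given a set of 3D points, the plane's equation n·x = d is recovered by least squares via SVD.

```python
import numpy as np

def fit_plane(points):
    """Fit the plane n.x = d to an (N, 3) set of 3D points: the normal n
    is the direction of least variance of the centered points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]            # singular vector of the smallest singular value
    d = normal @ centroid      # plane offset along the normal
    return normal, d
```

More complex predefined structures (cylinders, boxes) would be fitted analogously from their corresponding point sets, and the resulting equations used to render the final model.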

MULTI-VIEW DEPTH ESTIMATION LEVERAGING OFFLINE STRUCTURE-FROM-MOTION

A method for estimating depth of a scene includes selecting an image of the scene from a sequence of images of the scene captured via an in-vehicle sensor of a first agent. The method also includes identifying previously captured images of the scene. The method further includes selecting a set of images from the previously captured images based on each image of the set of images satisfying depth criteria. The method still further includes estimating the depth of the scene based on the selected image and the selected set of images.
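The image-selection step can be sketched with a baseline window as a stand-in depth criterion: frames too close to the target add no parallax, frames too far lose overlap. The thresholds and names are illustrative, not the claimed test.

```python
import numpy as np

def select_support_images(target_pose, candidates, min_baseline=0.5, max_baseline=5.0):
    """From previously captured frames, keep those whose camera centre
    gives a useful stereo baseline to the target frame. `candidates` is
    a list of (frame_id, camera_position) pairs."""
    chosen = []
    for frame_id, pose in candidates:
        baseline = np.linalg.norm(np.asarray(pose) - np.asarray(target_pose))
        if min_baseline <= baseline <= max_baseline:
            chosen.append(frame_id)
    return chosen
```

The selected frames, together with the target image, would then feed the multi-view depth estimate.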

SYSTEMS AND METHODS FOR OBJECT REPLACEMENT
20210334742 · 2021-10-28

Some embodiments provide systems and methods to enable object replacement. A central computing system can receive data associated with quantities of like physical objects from remote systems. The central computing system can adjust a first quantity of the like physical objects stored in a first one of the remote systems based on a second quantity of the like physical objects stored in at least another one of the remote systems. The central computing system can determine that the like physical objects are absent from the facility. An autonomous robot device can detect a vacant space at the designated location at which the like physical objects are supposed to be disposed. Using its image capturing device, the autonomous robot device can capture an image of the vacant space. The central computing system can then determine a set of like replacement physical objects to be disposed in the vacant space.