Patent classifications
G06T7/33
Graphical element rooftop reconstruction in digital map
A client device receives a first map tile, a second map tile, and map terrain data from a mapping system, the first and second map tiles together including a map feature having a geometric base with a height value, the geometric base represented by a set of vertices split across the first and second map tiles. The client device identifies edges of the geometric base that intersect a tile border between the first and second map tiles. The client device determines a set of sample points based on the identified edges and determines a particular sample elevation value corresponding to a sample point in the set. The client device renders the map feature based on the particular sample elevation value and displays the rendering of the map feature.
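The border-sampling step described above can be sketched in a few lines. This is a hypothetical simplification, not the patented implementation: it assumes 2-D vertices, a vertical tile border at `x = border_x`, and made-up function names; the sample points are where base edges cross the border, which is where elevation would be sampled so the split feature renders at a consistent height.

```python
def border_sample_points(vertices, border_x):
    """Return the points where the polygon's edges cross the vertical
    tile border x = border_x (hypothetical sketch of the sampling step)."""
    samples = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        # An edge crosses the border when its endpoints lie on opposite sides.
        if (x0 - border_x) * (x1 - border_x) < 0:
            t = (border_x - x0) / (x1 - x0)      # interpolation factor along the edge
            samples.append((border_x, y0 + t * (y1 - y0)))
    return samples

# A rectangular geometric base split by the border at x = 5:
base = [(0, 0), (10, 0), (10, 4), (0, 4)]
samples = border_sample_points(base, 5)
```

A renderer could then query terrain elevation at each sample point and, for example, take the maximum so both halves of the rooftop extrude from the same base elevation.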
Multi-imaging mode image alignment
Methods and systems for aligning images of a specimen generated with different modes of an imaging subsystem are provided. One method includes separately aligning first and second images generated with first and second modes, respectively, to a design for the specimen. For a location of interest in the first image, the method includes generating a first difference image for the location of interest and the first mode and generating a second difference image for the location of interest and the second mode. The method also includes aligning the first and second difference images to each other and determining information for the location of interest from results of the aligning.
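The alignment-by-offset idea underlying the method can be illustrated with a minimal 1-D sketch. All names here are hypothetical, the brute-force SSD search stands in for whatever alignment the imaging subsystem actually uses, and signals replace images for brevity:

```python
def best_shift(a, b, max_shift=3):
    """Integer shift s minimizing mean squared error between a[i] and b[i+s]
    over the overlap (a stand-in for aligning an image to a reference)."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, n = 0.0, 0
        for i in range(len(a)):
            j = i + s
            if 0 <= j < len(b):
                err += (a[i] - b[j]) ** 2
                n += 1
        if n == 0:
            continue
        err /= n
        if err < best_err:
            best, best_err = s, err
    return best

def diff_image(image, reference):
    """Pixel-wise difference against a reference (here, 1-D 'images')."""
    return [p - r for p, r in zip(image, reference)]
```

In the claimed flow, each mode's image would first be aligned to the design, a difference image formed per mode at the location of interest, and the two difference images then aligned to each other with the same kind of search.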
High-definition city mapping
A vehicle generates a city-scale map. The vehicle includes one or more Lidar sensors configured to obtain point clouds at different positions, orientations, and times, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the vehicle to perform: registering, in pairs, a subset of the point clouds based on respective surface normals of each of the point clouds; determining loop closures based on the registered subset of point clouds; determining a position and an orientation of each of the subset of the point clouds based on constraints associated with the determined loop closures; and generating a map based on the determined position and the orientation of each of the subset of the point clouds.
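The role surface normals play in pairwise registration can be sketched with the two primitives below. This is a hedged illustration, not the patented pipeline: a normal estimated from three neighboring points, and the point-to-plane residual that normal-based registration (e.g. point-to-plane ICP) minimizes.

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D points (cross product
    of two edge vectors) — a minimal surface-normal estimate."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    return [c / norm for c in n]

def point_to_plane_residual(src, dst, normal):
    """Signed distance of a source point from the tangent plane at its
    matched destination point; pairwise registration drives this toward 0."""
    return sum(n * (s - d) for n, s, d in zip(normal, src, dst))
```

Loop-closure constraints would then tie these pairwise alignments together when the vehicle revisits a location, before the final poses are solved and the map assembled.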
Systems and methods for deep learning-based image reconstruction
Methods and systems for deep learning-based image reconstruction are disclosed herein. An example method includes receiving a set of imaging projection data, identifying a voxel to reconstruct, receiving a trained regression model, and reconstructing the voxel. The voxel is reconstructed by: projecting the voxel on each imaging projection in the set according to an acquisition geometry, extracting adjacent pixels around each projected voxel, feeding the regression model with the extracted adjacent pixel data to produce a reconstructed value of the voxel, and repeating the reconstruction for each voxel to be reconstructed to produce a reconstructed image.
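The per-voxel loop can be sketched as follows. Everything here is a hypothetical simplification: a toy pinhole camera stands in for the acquisition geometry, a plain mean stands in for the trained regression model, and all names are invented.

```python
def project(voxel, camera):
    """Toy pinhole projection of a 3-D voxel (camera = (fx, fy, cx, cy),
    looking down +z) — a stand-in for the acquisition geometry."""
    x, y, z = voxel
    fx, fy, cx, cy = camera
    return (int(round(fx * x / z + cx)), int(round(fy * y / z + cy)))

def patch(image, u, v, r=1):
    """Extract the (2r+1) x (2r+1) neighborhood around pixel (u, v),
    clamped at the image border."""
    h, w = len(image), len(image[0])
    return [image[min(max(v + dv, 0), h - 1)][min(max(u + du, 0), w - 1)]
            for dv in range(-r, r + 1) for du in range(-r, r + 1)]

def reconstruct_voxel(voxel, projections, cameras, model):
    """Gather adjacent pixels from every projection and feed them to the
    model to produce the voxel's reconstructed value."""
    feats = []
    for img, cam in zip(projections, cameras):
        u, v = project(voxel, cam)
        feats.extend(patch(img, u, v))
    return model(feats)
```

Repeating `reconstruct_voxel` over the full voxel grid would yield the reconstructed image; in the claimed method the `model` is a trained regression network rather than the averaging stub used here.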
Photogrammetric alignment for immersive content production
A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.
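The key structural idea, constraining the point cloud and the taking-camera frame as children of a single survey origin so that one transformation aligns both, can be sketched with 2-D poses. This is a hypothetical reduction of the survey graph hierarchy; names and the (x, y, theta) pose convention are assumptions.

```python
import math

def compose(parent, child):
    """Express a child node's local pose (x, y, theta) in the frame of
    its parent node — one edge of the survey graph hierarchy."""
    px, py, pt = parent
    cx, cy, ct = child
    return (px + cx * math.cos(pt) - cy * math.sin(pt),
            py + cx * math.sin(pt) + cy * math.cos(pt),
            pt + ct)

# Point cloud and taking camera are both children of the survey origin,
# so a transformation applied to the origin moves them together.
origin = (0.0, 0.0, math.pi / 2)   # alignment transform applied at the origin
camera_local = (1.0, 0.0, 0.0)
cloud_local = (2.0, 0.0, 0.0)
camera_world = compose(origin, camera_local)
cloud_world = compose(origin, cloud_local)
```

Aligning the surveyed point cloud to the virtual object therefore only requires solving for the one origin transform; the camera's relationship to the scanned geometry is preserved by the hierarchy.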
Automatic correction method for onboard camera and onboard camera device
There is provided an automatic correction method for an onboard camera and an onboard camera device. The automatic correction method includes the following steps: obtaining a lane image with the onboard camera and a current extrinsic parameter matrix, and identifying two lane lines in the lane image; converting the lane image into a top-view lane image, and obtaining two projected lane lines in the top-view lane image for the two lane lines; calculating a plurality of correction parameter matrices corresponding to the current extrinsic parameter matrix according to the two projected lane lines; and correcting the current extrinsic parameter matrix according to the plurality of correction parameter matrices. The method can be applied while the vehicle is stationary or travelling to automatically correct the extrinsic parameter matrix of the onboard camera.
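One way the projected lane lines can drive a correction parameter is sketched below, under loud assumptions: in a correct top view both lane lines run straight ahead, so their average angular deviation from that heading yields a single yaw correction. This illustrates the geometric principle only, not the patented matrices.

```python
import math

def slope_angle(p0, p1):
    """Direction angle of the line through two top-view points."""
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

def yaw_correction(left_line, right_line, target=math.pi / 2):
    """Average deviation of the two projected lane lines from the
    straight-ahead direction (hypothetical single-parameter correction)."""
    a = slope_angle(*left_line)
    b = slope_angle(*right_line)
    return target - (a + b) / 2.0
```

A real system would estimate several such parameters (the claimed plurality of correction parameter matrices) and fold them back into the extrinsic parameter matrix.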
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
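The descriptor-mapping and relative-transform steps can be sketched minimally. This is a hedged stand-in, not the patented system: brute-force nearest-neighbour matching replaces whatever descriptor matcher is used, and a mean 2-D translation stands in for the full relative transform between task and teaching images.

```python
def match(task_desc, teach_desc):
    """Nearest-neighbour matching of descriptor vectors (brute force):
    maps each task-image descriptor to its closest teaching-image one."""
    pairs = []
    for i, d in enumerate(task_desc):
        j = min(range(len(teach_desc)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(d, teach_desc[k])))
        pairs.append((i, j))
    return pairs

def relative_translation(task_pts, teach_pts, pairs):
    """Mean displacement between matched keypoints — a minimal stand-in
    for the relative transform between task and teaching images."""
    dx = sum(teach_pts[j][0] - task_pts[i][0] for i, j in pairs) / len(pairs)
    dy = sum(teach_pts[j][1] - task_pts[i][1] for i, j in pairs) / len(pairs)
    return (dx, dy)
```

The resulting transform would then reparameterize the taught behaviors so the robot repeats the task from its current, slightly different, viewpoint.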