Patent classifications
G06T7/55
Identification of 3D printed objects
In example implementations, a method is provided. The method includes printing a three-dimensional (3D) object that includes a secondary structure. The secondary structure is removed. A representation of a surface of the 3D object where the secondary structure was removed is captured. The 3D object is authenticated based on the representation of the surface.
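The abstract does not specify how the captured surface representation is matched; a minimal sketch, assuming the break-off surface is captured as a height map and compared via a quantize-and-hash fingerprint (the quantization scheme and hash comparison are illustrative assumptions, not the claimed method):

```python
import hashlib
import numpy as np

# Hypothetical sketch: the surface left where the secondary (support)
# structure was removed carries a quasi-random texture. Quantizing a captured
# height map of that region into a fingerprint lets it be compared against a
# fingerprint registered at print time. Quantization levels and SHA-256
# comparison are illustrative choices, not the patented algorithm.

def surface_fingerprint(height_map, levels=16):
    """Quantize a height map to absorb sensor noise, then hash it."""
    h = np.asarray(height_map, dtype=float)
    span = h.max() - h.min()
    h = (h - h.min()) / (span + 1e-12)              # normalize to [0, 1]
    q = np.floor(h * (levels - 1)).astype(np.uint8)  # coarse quantization
    return hashlib.sha256(q.tobytes()).hexdigest()

def authenticate(captured_height_map, registered_fp):
    """Object is authentic if the captured fingerprint matches the registered one."""
    return surface_fingerprint(captured_height_map) == registered_fp

# Toy 2x2 "surface" registered at print time
surface = np.array([[0.0, 0.2], [0.7, 1.0]])
fp = surface_fingerprint(surface)
```

A real pipeline would need a noise-tolerant descriptor rather than an exact hash; the exact-match version above just makes the register-then-verify flow concrete.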
SYSTEMS AND METHODS FOR MAPPING AN ENVIRONMENT
A method for mapping an environment by an electronic device is described. The method includes obtaining a set of sensor measurements. The method also includes determining a set of voxel occupancy probability distributions respectively corresponding to a set of voxels based on the set of sensor measurements. Each of the voxel occupancy probability distributions represents a probability of occupancy of a voxel over a range of occupation densities. The range includes partial occupation densities.
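The distinguishing idea above is that each voxel holds a full probability distribution over occupation densities, including partial densities, rather than a single occupied/free probability. A minimal sketch, assuming a discrete density grid and a simple Bayesian update from hit/miss measurements (bin count, likelihood model, and update rule are illustrative assumptions, not the disclosed algorithm):

```python
import numpy as np

# Sketch of a per-voxel occupancy distribution: a discrete probability mass
# over occupation densities 0.0, 0.1, ..., 1.0 (so partial densities are
# representable), updated from sensor measurements with Bayes' rule.

N_BINS = 11  # densities 0.0, 0.1, ..., 1.0

def densities():
    return np.linspace(0.0, 1.0, N_BINS)

def make_voxel():
    """Uniform prior over occupation densities."""
    return np.full(N_BINS, 1.0 / N_BINS)

def update(voxel, hit):
    """Bayesian update from one measurement.

    hit=True  -> a ray returned from this voxel (more likely when dense)
    hit=False -> a ray passed through it (more likely when sparse)
    """
    d = densities()
    likelihood = d if hit else (1.0 - d)
    posterior = voxel * likelihood
    return posterior / posterior.sum()

def expected_density(voxel):
    """Point estimate: mean of the density distribution."""
    return float(np.dot(voxel, densities()))

voxel = make_voxel()
for _ in range(5):               # five consecutive "hit" measurements
    voxel = update(voxel, hit=True)
```

After repeated hits the mass concentrates toward high densities, while mixed hit/miss evidence would concentrate it around an intermediate, partial density, which is exactly what a binary occupancy grid cannot express.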
Photometric-based 3D object modeling
Aspects of the present disclosure involve a system and a method for performing operations comprising: accessing a source image depicting a target structure; accessing one or more target images depicting at least a portion of the target structure; computing correspondence between a first set of pixels in the source image of a first portion of the target structure and a second set of pixels in the one or more target images of the first portion of the target structure, the correspondence being computed as a function of camera parameters that vary between the source image and the one or more target images; and generating a three-dimensional (3D) model of the target structure from the correspondence between the first set of pixels and the second set of pixels, via a joint optimization of the target structure and the camera parameters.
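The core geometric step here is that the target-image pixel corresponding to a source pixel is a function of the camera parameters and a depth hypothesis for the structure. A minimal pinhole-camera sketch of that correspondence function (intrinsics K, relative pose R, t, and the depth value are illustrative assumptions; the joint optimization itself, which would adjust depth and camera parameters to minimize photometric error, is not shown):

```python
import numpy as np

# Sketch of pixel correspondence as a function of camera parameters:
# back-project a source pixel at a hypothesized depth, transform the 3D
# point into the target camera frame, and reproject it.

def correspond(u, v, depth, K, R, t):
    """Return the target-image pixel corresponding to source pixel (u, v)."""
    K_inv = np.linalg.inv(K)
    p_src = depth * (K_inv @ np.array([u, v, 1.0]))  # back-project to 3D
    p_tgt = R @ p_src + t                            # move into target frame
    uvw = K @ p_tgt                                  # project into target image
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Toy calibration: 500 px focal length, principal point (320, 240),
# identity rotation, 0.1 m baseline along x.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
u2, v2 = correspond(320.0, 240.0, 2.0, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
```

In a photometric formulation, the optimizer compares image intensities at (u, v) and (u2, v2) and updates both the depth and the camera parameters so that corresponding pixels agree, which is what "joint optimization of target structure and camera parameters" refers to.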
Food orientor
A method of automatically orienting symmetric and asymmetric food items, such as apples for example, is provided. Individual items of food are manipulated by a programmable manipulator within the view of one or more depth imaging cameras. Digital three-dimensional characterizations of the surface of the food items are generated by the depth imaging camera or cameras and are used by a computer connected to the camera or cameras to locate the stem and blossom of each food item. Asymmetric food items, such as apples with dropped shoulders, as well as symmetric food items, can be properly oriented and processed automatically.
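One plausible first step in locating the stem and blossom from a depth-camera point cloud is estimating the stem-blossom axis of the fruit; a minimal sketch using the first principal component of the points (a real system would also detect the concavities at each end, and the PCA axis is a simplifying assumption, not the patented technique):

```python
import numpy as np

# Sketch: estimate the stem-blossom axis of a roughly fruit-shaped item as
# the first principal component of its (N, 3) point cloud. The two extremes
# of the cloud along this axis are stem/blossom candidates.

def stem_blossom_axis(points):
    """Return (centroid, unit axis) via PCA on an (N, 3) point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # First right singular vector = direction of greatest variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centroid, vt[0]

# Synthetic "apple": a point cloud elongated along the z axis.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3)) * np.array([1.0, 1.0, 3.0])
centroid, axis = stem_blossom_axis(cloud)
```

Once the axis is known, the manipulator only needs a rotation aligning that axis with the desired processing orientation, with the concavity detection disambiguating which end is the stem.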
GENERATING AND VALIDATING A VIRTUAL 3D REPRESENTATION OF A REAL-WORLD STRUCTURE
A computer system maintains structure data indicating geometrical constraints for each structure category of a plurality of structure categories. The computer system generates a virtual 3D representation of a structure based on a set of images depicting the structure. For each image in the set of images, one or more landmarks are identified. Based on the landmarks, a candidate structure category is selected. The virtual 3D representation is generated based on the geometrical constraints of the candidate structure category and the landmarks identified in the set of images.
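The abstract leaves the category-selection rule unspecified; a minimal sketch, assuming each category's geometrical constraints include an expected landmark set and the candidate is whichever category's set best overlaps the detected landmarks (the categories, landmark names, and Jaccard scoring below are all made up for illustration):

```python
# Hypothetical structure data: each category carries an expected landmark
# set. The candidate category is the one whose landmarks best match the
# detections across the image set (Jaccard overlap). None of these names or
# the scoring rule come from the disclosure.

CATEGORY_LANDMARKS = {
    "gable_roof_house":  {"ridge", "eave", "gable_apex", "corner"},
    "hip_roof_house":    {"ridge", "eave", "hip_end", "corner"},
    "flat_roof_building": {"parapet", "corner"},
}

def select_category(detected_landmarks):
    """Pick the category whose expected landmarks best overlap the detections."""
    def score(expected):
        inter = len(expected & detected_landmarks)
        union = len(expected | detected_landmarks)
        return inter / union
    return max(CATEGORY_LANDMARKS, key=lambda c: score(CATEGORY_LANDMARKS[c]))

category = select_category({"ridge", "eave", "gable_apex"})
```

The selected category then supplies the geometrical constraints (e.g., which planes must meet at the ridge) that regularize the 3D reconstruction from the landmark positions.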
MODEL GENERATION METHOD AND APPARATUS BASED ON MULTI-VIEW PANORAMIC IMAGE
The disclosure provides a model generation method based on a multi-view panoramic image, including: calculating an image rectification rotation matrix of source images and a reference image; extracting a reference image feature of the reference image and source image features of the source images; performing a fusion operation on the rectified cost volumes of the source images corresponding to the reference image to obtain a final cost volume; calculating an estimated phase difference at a set resolution; obtaining a final phase difference of the reference image; and generating a depth map of the reference image, and constructing a corresponding stereo vision model.