Patent classifications
G06T7/55
System for generating a three-dimensional scene of a physical environment
A system configured to assist a user in scanning a physical environment in order to generate a three-dimensional scan or model. In some cases, the system may include an interface to assist the user in capturing data usable to determine a scale or depth of the physical environment and to perform a scan in a manner that minimizes gaps.
Generating and validating a virtual 3D representation of a real-world structure
A computer system maintains structure data indicating geometrical constraints for each structure category of a plurality of structure categories. The computer system generates a virtual 3D representation of a structure based on a set of images depicting the structure. For each image in the set of images, one or more landmarks are identified. Based on the landmarks, a candidate structure category is selected. The virtual 3D representation is generated based on the geometrical constraints of the candidate structure category and the landmarks identified in the set of images.
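The category-selection step described above can be sketched as a simple best-overlap search: pick the structure category whose associated landmark set shares the most members with the landmarks detected in the images. This is a minimal illustration only; the category names, landmark labels, and the overlap-count heuristic are my assumptions, not details from the patent.

```python
# Hypothetical structure categories and their characteristic landmark types.
# All names here are illustrative, not taken from the patent.
STRUCTURE_CATEGORIES = {
    "gable_roof_house": {"ridge_line", "eave_corner", "gable_apex"},
    "flat_roof_house":  {"roof_corner", "parapet_edge"},
    "hip_roof_house":   {"ridge_line", "hip_corner", "eave_corner"},
}

def select_candidate_category(detected_landmarks: set) -> str:
    """Return the category whose landmark set best overlaps the detections.

    A real system would likely score geometric consistency as well; counting
    shared landmark types is the simplest stand-in for that selection step.
    """
    return max(
        STRUCTURE_CATEGORIES,
        key=lambda cat: len(STRUCTURE_CATEGORIES[cat] & detected_landmarks),
    )
```

Once a candidate category is chosen, its geometrical constraints (e.g. symmetric roof planes for a gable roof) would then constrain the 3D reconstruction built from the identified landmarks.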
SYSTEMS AND METHODS FOR GENERATING DEPTH MAPS USING A CAMERA ARRAYS INCORPORATING MONOCHROME AND COLOR CAMERAS
A camera array, an imaging device, and/or a method for capturing images that employs a plurality of imagers fabricated on a substrate is provided. Each imager includes a plurality of pixels. The plurality of imagers includes a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image enhanced relative to the images captured by the individual imagers. Each imager may be associated with an optical element fabricated using wafer level optics (WLO) technology.
REDUCING COMPUTATIONAL COMPLEXITY IN THREE-DIMENSIONAL MODELING BASED ON TWO-DIMENSIONAL IMAGES
A method for three-dimensional (3D) modeling using two-dimensional (2D) image data includes obtaining a first image of an object oriented in a first direction and a second image of the object oriented in a second direction, determining a plurality of feature points of the object in the first image, and determining a plurality of matching feature points of the object in the second image that correspond to the plurality of feature points of the object in the first image. The method further includes calculating similarity values between the plurality of feature points and the corresponding plurality of matching feature points, calculating depth values of the plurality of feature points, calculating weighted depth values based on the similarity values and depth values, and performing 3D modeling of the object based on the weighted depth values.
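The weighted-depth step described in this abstract can be sketched as follows, assuming feature descriptors and per-point depth estimates have already been computed. The patent does not specify the similarity metric or weighting scheme; cosine similarity and max-normalization below are illustrative choices only.

```python
import numpy as np

def weighted_depths(desc_a: np.ndarray, desc_b: np.ndarray,
                    depths: np.ndarray) -> np.ndarray:
    """Weight each feature point's depth by descriptor similarity.

    desc_a, desc_b: (N, D) descriptors for corresponding feature points in
                    the first and second images
    depths:         (N,) depth estimates for the points in the first image

    Returns the (N,) weighted depth values: points whose matches are more
    similar (and thus more reliable) keep more of their estimated depth.
    """
    # Cosine similarity between each pair of matched descriptors
    num = np.sum(desc_a * desc_b, axis=1)
    den = np.linalg.norm(desc_a, axis=1) * np.linalg.norm(desc_b, axis=1)
    sim = num / np.clip(den, 1e-12, None)
    # Normalize so the best match carries full weight (illustrative choice)
    weights = sim / sim.max()
    return depths * weights
```

Downstream, the 3D model would be fit to these weighted depths, so unreliable matches contribute less to the reconstruction.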
Image processing
Apparatus comprises a camera configured to capture images of a user in a scene; a depth detector configured to capture depth representations of the scene, the depth detector comprising an emitter configured to emit a non-visible signal; a mirror arranged to reflect at least some of the non-visible signal emitted by the emitter to one or more features within the scene that would otherwise be occluded by the user and to reflect light from the one or more features to the camera; a pose detector configured to detect a position and orientation of the mirror relative to at least one of the camera and depth detector; and a scene generator configured to generate a three-dimensional representation of the scene in dependence on the images captured by the camera and the depth representations captured by the depth detector and the pose of the mirror detected by the pose detector.
Identification of 3D printed objects
In example implementations, a method is provided. The method includes printing a three-dimensional (3D) object that includes a secondary structure. The secondary structure is removed. A representation of a surface of the 3D object where the secondary structure was removed is captured. The 3D object is authenticated based on the representation of the surface.
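The authentication flow above treats the surface left behind where the secondary (support) structure was removed as a physical fingerprint of the printed object. A minimal sketch, assuming exact-match comparison of a captured surface representation; a real system would need tolerant matching of noisy scans, and the registration scheme here is purely hypothetical.

```python
import hashlib

# Hypothetical registry mapping object IDs to fingerprints captured at
# print time; in practice this might be a database or a signed record.
REGISTERED: dict = {}

def fingerprint(surface_scan: bytes) -> str:
    """Reduce a captured surface representation to a compact fingerprint."""
    return hashlib.sha256(surface_scan).hexdigest()

def register(object_id: str, surface_scan: bytes) -> None:
    """Record the fingerprint of the break-off surface after printing."""
    REGISTERED[object_id] = fingerprint(surface_scan)

def authenticate(object_id: str, surface_scan: bytes) -> bool:
    """Check a later scan of the same surface against the registered one.

    Exact hash equality stands in for the tolerant surface matching a real
    implementation would require.
    """
    return REGISTERED.get(object_id) == fingerprint(surface_scan)
```

Because the break-off surface texture is effectively random per print, a counterfeit object would fail this comparison even if its overall geometry matched.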