Patent classifications
G06T17/20
DISPLACEMENT MAPS
Examples of methods for determining displacement maps are described herein. In some examples, a method includes determining a displacement map for a three-dimensional (3D) object model based on a compensated point cloud. In some examples, the method includes assembling the displacement map on the 3D object model for 3D manufacturing.
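As a rough illustration only (the abstract does not specify how the displacement is computed), one simple way to derive a per-vertex displacement from a compensated point cloud is to find each model vertex's nearest cloud point and project the offset onto the vertex normal. The function and array layout below are assumptions, not the patent's method:

```python
import numpy as np

def displacement_map(model_vertices, model_normals, compensated_cloud):
    """For each model vertex, find the nearest point in the compensated
    point cloud and project the offset onto the (unit) vertex normal,
    giving one scalar displacement per vertex."""
    # Brute-force nearest neighbour: offsets from every vertex to every cloud point.
    diffs = compensated_cloud[None, :, :] - model_vertices[:, None, :]
    d2 = np.einsum("vpk,vpk->vp", diffs, diffs)        # squared distances
    nearest = diffs[np.arange(len(model_vertices)), np.argmin(d2, axis=1)]
    # Signed displacement along each vertex normal.
    return np.einsum("vk,vk->v", nearest, model_normals)

# Toy example: four coplanar vertices with +z normals; the "compensated"
# cloud sits 0.1 units above the model surface.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
norms = np.tile(np.array([0.0, 0, 1]), (4, 1))
cloud = verts + np.array([0.0, 0, 0.1])
disp = displacement_map(verts, norms, cloud)  # 0.1 at every vertex
```

A production pipeline would use a spatial index (e.g. a k-d tree) instead of the brute-force distance matrix, and would bake the per-vertex scalars into a UV-mapped texture.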
METHODS AND SYSTEMS FOR OBTAINING A SCALE REFERENCE AND MEASUREMENTS OF 3D OBJECTS FROM 2D PHOTOS
Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application being executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens. These image capture screens assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space. The 3D model is used to generate one or more measurements of the object.
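The core idea of a scale factor can be sketched in a few lines. Assuming (this is an illustration, not the patent's disclosed algorithm) that the AR framework reports the metric distance between two capture positions while the photogrammetric reconstruction yields the same baseline in arbitrary units, the ratio converts any model-space length to metres:

```python
def scale_factor(model_distance, metric_distance):
    """Ratio relating the unitless reconstruction to metric units, assuming
    the same camera baseline is known both in model units (from the
    reconstruction) and in metres (from AR device tracking)."""
    return metric_distance / model_distance

def measure(model_length, model_distance, metric_distance):
    # Any length measured on the unscaled 3D model converts to metres.
    return model_length * scale_factor(model_distance, metric_distance)

# e.g. the reconstruction places the two capture positions 2.0 units apart,
# while AR tracking reports they were really 0.5 m apart (scale 0.25 m/unit),
# so a 4.0-unit edge on the model measures 1.0 m.
m = measure(4.0, 2.0, 0.5)
```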
High-Precision Map Construction Method, Apparatus and Electronic Device
A high-precision map construction method, apparatus, and electronic device are provided. The method can include: displaying a first color image corresponding to a first track point; obtaining, according to a first color sub-image and a depth image corresponding to the first track point, point cloud data corresponding to the first color sub-image, wherein the first color sub-image is a sub-image corresponding to an element to be added in the first color image, and the element to be added is an element to be displayed in the high-precision map; extracting a bounding box corresponding to the point cloud data; and generating a newly-added three-dimensional element in the high-precision map according to the bounding box.
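The "sub-image plus depth image to point cloud, then bounding box" step can be illustrated with a standard pinhole back-projection. The intrinsics (fx, fy, cx, cy) and the use of a boolean mask to stand in for the colour sub-image are assumptions for the sketch, not details from the abstract:

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into camera-frame 3D points using a
    pinhole camera model. The mask selects the pixels of the element to be
    added (the role played by the colour sub-image)."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def bounding_box(points):
    # Axis-aligned bounding box of the element's point cloud.
    return points.min(axis=0), points.max(axis=0)

# Toy 4x4 depth image at constant 2.0 m; the element occupies the 2x2 centre.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True
pts = backproject(depth, mask, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
lo, hi = bounding_box(pts)
```

The resulting box (min and max corners) is what a 3D element, such as a traffic sign or lane marker, could be fitted into before being added to the map.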
VOLUMETRIC VIDEO FROM AN IMAGE SOURCE
A method for generating one or more 3D models of at least one living object from at least one 2D image comprising the at least one living object. The one or more 3D models can be modified and enhanced. The resulting one or more 3D models can be transformed into at least one 2D display image; the point of view of the output 2D image(s) can be different from that of the input 2D image(s).
METHOD AND APPARATUS FOR PROCESSING NON-SEQUENTIAL POINT CLOUD MEDIA, DEVICE, AND STORAGE MEDIUM
This application provides a method and apparatus for processing non-sequential point cloud media, a device, and a storage medium. The method includes: processing non-sequential point cloud data of a static object using a Geometry-based Point Cloud Compression (GPCC) coding scheme to obtain a GPCC bitstream; encapsulating the GPCC bitstream to generate an item of at least one GPCC region; encapsulating the item of the at least one GPCC region to generate at least one piece of non-sequential point cloud media of the static object; transmitting media presentation description (MPD) signaling of the at least one piece of non-sequential point cloud media; receiving a first request message transmitted by a video playback device; and transmitting first non-sequential point cloud media. The item of the GPCC region represents a GPCC component of a three-dimensional (3D) spatial region corresponding to the GPCC region, and the non-sequential point cloud media includes an identifier of the static object, so that a user can purposefully request non-sequential point cloud media of the same static object a plurality of times, thereby improving the user experience.
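To make the region/item structure concrete, the sketch below models a GPCC region item carrying a bitstream for one 3D spatial region, and a selection step a server might run when a playback device requests a viewed region. All class and field names are illustrative, not the ISOBMFF box or MPD element names:

```python
from dataclasses import dataclass, field

@dataclass
class GPCCRegionItem:
    """Item of one GPCC region: the encoded GPCC component covering a
    3D spatial region of the static object (illustrative structure)."""
    region_id: int
    bbox: tuple        # (xmin, ymin, zmin, xmax, ymax, zmax)
    bitstream: bytes

@dataclass
class NonSequentialPointCloudMedia:
    object_id: str     # identifier of the static object, so repeated
                       # requests can target the same object
    items: list = field(default_factory=list)

def select_items(media, query_bbox):
    """Pick the region items whose 3D spatial regions overlap the region
    a playback device requested."""
    def overlaps(a, b):
        return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))
    return [it for it in media.items if overlaps(it.bbox, query_bbox)]

media = NonSequentialPointCloudMedia("statue-01", items=[
    GPCCRegionItem(0, (0, 0, 0, 1, 1, 1), b"\x00"),
    GPCCRegionItem(1, (2, 2, 2, 3, 3, 3), b"\x01"),
])
hits = select_items(media, (0.5, 0.5, 0.5, 1.5, 1.5, 1.5))  # only region 0
```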
AVATAR ANIMATION IN VIRTUAL CONFERENCING
According to a general aspect, a method can include receiving a photo of a virtual conference participant, and a depth map based on the photo, and generating a plurality of synthesized images based on the photo. The plurality of synthesized images can have respective simulated gaze directions of the virtual conference participant. The method can also include receiving, during a virtual conference, an indication of a current gaze direction of the virtual conference participant. The method can further include animating, in a display of the virtual conference, an avatar corresponding with the virtual conference participant. The avatar can be based on the photo. Animating the avatar can be based on the photo, the depth map, and at least one synthesized image of the plurality of synthesized images, the at least one synthesized image corresponding to the current gaze direction.
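The step of picking the synthesized image that corresponds to the current gaze direction can be sketched as a nearest-neighbour lookup over the pre-synthesized gaze directions. The (yaw, pitch) parameterization and the dictionary structure are assumptions for illustration; the abstract does not specify how gaze directions are represented:

```python
import math

def nearest_gaze_image(synth, current_yaw, current_pitch):
    """Pick the pre-synthesized frame whose simulated gaze direction is
    angularly closest to the participant's current gaze. `synth` maps
    (yaw, pitch) in degrees to an image handle (structure assumed)."""
    def dist(g):
        return math.hypot(g[0] - current_yaw, g[1] - current_pitch)
    return min(synth, key=dist)

# Three synthesized frames with simulated gaze directions.
synth = {(-20, 0): "look_left.png", (0, 0): "look_center.png",
         (20, 0): "look_right.png"}
best = nearest_gaze_image(synth, 15.0, 2.0)  # closest to (20, 0)
```

In practice the selected frame would be blended with the photo and depth map to animate the avatar, rather than swapped in discretely.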