G06T3/0031

Face modeling method and apparatus, electronic device and computer-readable medium

A face modeling method and apparatus, an electronic device and a computer-readable medium. The method comprises: acquiring multiple depth images, the multiple depth images being obtained by photographing a target face at different irradiation angles; performing alignment processing on the multiple depth images to obtain a target point cloud image; and using the target point cloud image to construct a three-dimensional model of the target face. The present disclosure alleviates the technical problems of poor robustness and low precision in existing three-dimensional model construction methods.
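The alignment step above is, in essence, a rigid registration of per-view point clouds. As a minimal sketch (not the disclosed method), the Kabsch algorithm recovers the rotation and translation between two clouds with known correspondences; `kabsch_align` is an illustrative name:

```python
import numpy as np

def kabsch_align(src, dst):
    """Rigid transform (R, t) mapping src onto dst (Kabsch algorithm).

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns a 3x3 rotation R and translation t with dst ~= src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - c_src, dst - c_dst
    U, _, Vt = np.linalg.svd(P.T @ Q)          # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice the per-view clouds would be registered pairwise (e.g. with ICP, which iterates this closed-form step over estimated correspondences) and merged into the target point cloud.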

Method and device for encoding/decoding the geometry of a point cloud

The present embodiments relate to a method and device. The method comprises obtaining at least one first point from at least one point of a point cloud by projecting said point of the point cloud onto a projection plane and obtaining at least one other point of the point cloud determined according to said at least one first point; determining and encoding at least one interpolation coding mode for said at least one first point based on at least one reconstructed point obtained from said at least one first point and at least one interpolation point defined by said at least one interpolation coding mode to approximate said at least one other point of the point cloud; and signaling said at least one interpolation coding mode as values of image data.
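As an illustrative sketch of the projection step only (the interpolation coding modes are not reproduced here), the following assumes integer-grid points and an orthographic projection onto the z = 0 plane, keeping the nearest point per pixel as the "first point":

```python
import numpy as np

def project_to_depth_image(points, size):
    """Orthographic projection of integer-grid points onto the z = 0 plane.

    points: (N, 3) integer array with x, y in [0, size). For each pixel the
    nearest point (smallest z) is kept; occluded points behind it are the
    'other points' a codec would approximate by interpolation.
    Returns a (size, size) depth map with -1 marking empty pixels.
    """
    depth = np.full((size, size), -1, dtype=np.int64)
    for x, y, z in points:
        if depth[y, x] == -1 or z < depth[y, x]:
            depth[y, x] = z
    return depth

def reconstruct_from_depth(depth):
    """Back-project the depth map into the recorded 'first' points."""
    ys, xs = np.nonzero(depth >= 0)
    return np.stack([xs, ys, depth[ys, xs]], axis=1)
```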

TWO-DIMENSIONAL (2D) FEATURE DATABASE GENERATION
20230131418 · 2023-04-27

One embodiment provides a method comprising acquiring 3D content comprising a 3D object in 3D space. The 3D object has object information indicative of a location of the 3D object in the 3D space. The method further comprises projecting the 3D object to a 2D object in 2D space based on the object information. The 2D object has one or more 2D vertices indicative of a location of the 2D object in the 2D space. The method further comprises determining one or more latent variables in the 2D space based on the object information and the one or more 2D vertices, and generating a 2D feature database including the one or more latent variables.
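The 3D-to-2D projection step can be sketched with a simple pinhole camera model; the focal length `f` and principal point `c` are assumed stand-ins for the object information, not the embodiment's parameters:

```python
import numpy as np

def project_vertices(vertices, f=1.0, c=(0.0, 0.0)):
    """Pinhole projection of 3D vertices (camera frame, Z > 0) to 2D.

    vertices: (N, 3) points; returns (N, 2) pixel coordinates
    u = f * X / Z + cx, v = f * Y / Z + cy.
    """
    v = np.asarray(vertices, dtype=float)
    return np.stack([f * v[:, 0] / v[:, 2] + c[0],
                     f * v[:, 1] / v[:, 2] + c[1]], axis=1)
```

The resulting 2D vertices are what the described method would then combine with the object information to derive latent variables for the feature database.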

2D AND 3D FLOOR PLAN GENERATION

A floorplan modelling method and system. The floorplan modelling method includes receiving 2D images of each corner of an interior space from a camera, generating a corresponding camera position and camera orientation in a 3D coordinate system in the interior space for each 2D image, generating a depth map for each 2D image to estimate depth for each pixel, generating a corresponding edge map for each 2D image, and generating a 3D point cloud for each 2D image using the corresponding depth map and parameters of the camera. The floorplan modelling method includes transforming the 3D point clouds with the corresponding edge map into a 2D space in the 3D coordinate system of the camera, regularizing the 3D point clouds into 2D boundary lines, and generating a 2D plan of the interior space from the boundary lines.
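Generating a 3D point cloud from a depth map and the camera parameters is standard pinhole back-projection; a minimal numpy sketch, with intrinsics `fx, fy, cx, cy` as assumed inputs:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud (camera frame).

    depth: (H, W) array of per-pixel depth estimates (0 = no estimate).
    Returns an (M, 3) array of points x = (u - cx) * z / fx,
    y = (v - cy) * z / fy, z = depth, over valid pixels.
    """
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

Each per-image cloud would then be transformed into the shared 3D coordinate system using the generated camera position and orientation.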

Method for Maintaining 3D Orientation of Route Segments and Components in Route Harness Flattening
20230153940 · 2023-05-18

A 3D modeled CAD object is flattened to a two-dimensional (2D) representation while a user-selected wiring component remains represented in 3D. The user-selected 3D component has a connector and a route segment with at least one stored sketch segment. A 3D tangent and a 2D tangent are calculated at a junction point of the route segment. A translation and rotation transformation is calculated to align the 2D and 3D tangents at the junction point. A transformation matrix calculated from the translation and rotation transformation is used to display a flattened unconnected route segment aligned with the user-selected 3D component.
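The translation-and-rotation transformation can be sketched in 2D: rotate about the junction point by the angle between the two tangents. The function below is illustrative, assuming the 3D tangent has already been projected into the flattening plane:

```python
import math

def align_tangents(junction, tangent_2d, tangent_3d_flat):
    """3x3 homogeneous transform rotating about `junction` so that
    tangent_2d lines up with tangent_3d_flat.

    All inputs are 2D (x, y) pairs; the returned matrix is
    T(junction) . R(angle) . T(-junction).
    """
    a = math.atan2(tangent_3d_flat[1], tangent_3d_flat[0]) \
        - math.atan2(tangent_2d[1], tangent_2d[0])
    c, s = math.cos(a), math.sin(a)
    jx, jy = junction
    return [[c, -s, jx - c * jx + s * jy],
            [s,  c, jy - s * jx - c * jy],
            [0.0, 0.0, 1.0]]
```

Applying this matrix leaves the junction point fixed while swinging the flattened segment into alignment with the 3D component's tangent.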

METHOD AND SYSTEM THAT EFFICIENTLY PREPARES TEXT IMAGES FOR OPTICAL-CHARACTER RECOGNITION
20170372460 · 2017-12-28

The current document is directed to methods and systems that straighten curvature in the text lines of text-containing digital images, including text-containing digital images generated from the two pages of an open book. Initial processing of a text-containing image identifies the outline of a text-containing page. Next, contours are generated to represent each text line. The midpoints and inclination angles of the links or vectors that comprise the contour lines are determined. A model is constructed for the perspective-induced curvature within the text image. In one implementation, the model, essentially an inclination-angle map, allows for assigning local displacements to pixels within the page image which are then used to straighten the text lines in the text image. In another implementation, the model is essentially a pixel-displacement map which is used to straighten the text lines in the text image.
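The second implementation's pixel-displacement map can be sketched as a per-pixel vertical resampling; `dy` here is an assumed integer displacement map, not the model the document constructs:

```python
import numpy as np

def apply_displacement_map(image, dy):
    """Straighten text lines by shifting each pixel vertically by dy.

    image: (H, W) grayscale page; dy: (H, W) integer per-pixel vertical
    displacement. Output pixel (v, u) is read from (v + dy[v, u], u);
    out-of-range sources fall back to white background (255).
    """
    H, W = image.shape
    vs, us = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src = vs + dy
    out = np.full_like(image, 255)
    ok = (src >= 0) & (src < H)
    out[vs[ok], us[ok]] = image[src[ok], us[ok]]
    return out
```

In the first implementation the displacements would first be integrated from the inclination-angle map before being applied in the same way.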

Systems and Methods to Perform 3D Localization of Target Objects in Point Cloud Data Using A Corresponding 2D Image

The present invention relates to systems and methods for performing 3D localization of target objects in point cloud data using a corresponding 2D image. According to an illustrative embodiment of the present disclosure, a target environment is imaged with a camera to generate a 2D panorama and with a scanner to generate a 3D point cloud. The 2D panorama is mapped to the point cloud with a one-to-one grid map. The target objects are detected and localized in 2D before being mapped back to the 3D point cloud.
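With a one-to-one grid map, the 2D-to-3D lookup reduces to pure array indexing; a sketch, assuming the scanner stores one 3D point (or NaN for no return) per panorama pixel:

```python
import numpy as np

def box_to_3d(point_grid, box):
    """Look up the 3D points behind a 2D detection box.

    point_grid: (H, W, 3) array pairing each panorama pixel with its
    scanned 3D point (NaN rows where the scanner had no return).
    box: (u0, v0, u1, v1) pixel bounds of the 2D detection.
    Returns the (M, 3) valid points inside the box.
    """
    u0, v0, u1, v1 = box
    pts = point_grid[v0:v1, u0:u1].reshape(-1, 3)
    return pts[~np.isnan(pts).any(axis=1)]
```

A 3D location for the detected object could then be taken as, e.g., the centroid or bounding volume of the returned points.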

Methods and systems for computer-based prediction of fit and function of garments on soft bodies

A method generates a three-dimensional representation of a garment and comprises: obtaining a three-dimensional human body model comprising an outer surface representative of an outermost surface of a human body; obtaining a three-dimensional representation of a garment; and simulating a three-dimensional physical interaction of the three-dimensional body model with the three-dimensional representation of the garment. Simulating the three-dimensional physical interaction comprises: deforming both the three-dimensional body model and the three-dimensional representation of the garment; and displaying the deformed three-dimensional human body model and the deformed three-dimensional representation of the garment.
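One small piece of such a simulation, body-garment contact handling, can be sketched against a spherical body proxy; this is an illustrative stand-in, not the disclosed solver:

```python
import math

def push_outside_sphere(vertices, center, radius):
    """Resolve garment-body interpenetration against a spherical proxy.

    Any garment vertex strictly inside the sphere (standing in for the
    body's outer surface) is projected radially back onto the surface.
    """
    out = []
    cx, cy, cz = center
    for x, y, z in vertices:
        d = math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        if 0 < d < radius:
            s = radius / d  # radial push-out factor
            x, y, z = cx + (x - cx) * s, cy + (y - cy) * s, cz + (z - cz) * s
        out.append((x, y, z))
    return out
```

A full cloth solver would alternate such contact projection with internal cloth dynamics (stretch, shear, bend) on both deformable meshes.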

Control method for image projection system, and image projection system
11676241 · 2023-06-13

A projector includes a correction information generation unit, an image information correction unit, and an image projection unit. The correction information generation unit sets a first coordinate in a two-dimensional projection formed by flattening out a three-dimensional projection surface onto a plane. The correction information generation unit arranges a first quadrilateral having a first aspect ratio within the two-dimensional projection, based on the first coordinate as a reference position, in such a way that the first quadrilateral comes into contact with an outline of the two-dimensional projection. The correction information generation unit determines whether or not the first quadrilateral is in contact with the outline of the two-dimensional projection at two or more points. When the first quadrilateral is determined as being in contact with the outline of the two-dimensional projection at two or more points, the image information correction unit corrects image information based on the first quadrilateral, and thus generates corrected image information. The image projection unit projects an image based on the corrected image information onto the projection surface.
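Arranging the first quadrilateral until it touches the outline can be sketched as a scale search on a rasterized projection mask; the axis-aligned rectangle and top-left anchoring are simplifying assumptions, not the patented procedure:

```python
import numpy as np

def largest_inscribed_rect(mask, corner, aspect):
    """Largest axis-aligned rectangle of aspect ratio w/h = `aspect`,
    anchored at `corner` (top-left, (row, col)) and fully inside the
    True region of `mask`. Binary-searches the height.
    """
    def fits(h):
        w = int(round(h * aspect))
        r0, c0 = corner
        if r0 + h > mask.shape[0] or c0 + w > mask.shape[1]:
            return False
        return bool(mask[r0:r0 + h, c0:c0 + w].all())

    lo, hi = 0, mask.shape[0]
    while lo < hi:                      # largest h such that fits(h)
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo, int(round(lo * aspect))  # (height, width)
```

At the maximal scale the rectangle necessarily contacts the outline; counting the contact points, as the abstract describes, would then decide whether this placement is accepted.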

Shared virtual reality

An immersive three-dimensional (3-D) virtual reality sharing system is disclosed. The system comprises a content controller configured to determine the physical locations of a reference point and boundary in a physical space and map them to a corresponding point and boundary in a virtual world. The physical location and orientation of a user device relative to the reference point and boundary are used to determine a corresponding location and orientation in the 3-D virtual world. A representation of a portion of the 3-D virtual world corresponding to the determined location and orientation is rendered at the user device. As the user device is moved in the physical world, a corresponding updated location in the 3-D virtual world is determined, and the rendered representation is updated. Thus, the user device acts as a window into the 3-D virtual world.
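The physical-to-virtual pose mapping reduces to an anchored similarity transform; a sketch in 2D (ground plane), with `scale` and `yaw` as assumed calibration parameters rather than the system's actual mapping:

```python
import math

def physical_to_virtual(pos, heading, ref_physical, ref_virtual, scale, yaw):
    """Map a tracked device pose in the physical room to the virtual world.

    pos, ref_physical, ref_virtual: (x, y) pairs; heading and yaw in
    radians. The device's offset from the physical reference point is
    rotated by yaw, scaled, and added to the virtual anchor.
    """
    dx, dy = pos[0] - ref_physical[0], pos[1] - ref_physical[1]
    c, s = math.cos(yaw), math.sin(yaw)
    vx = ref_virtual[0] + scale * (c * dx - s * dy)
    vy = ref_virtual[1] + scale * (s * dx + c * dy)
    return (vx, vy), heading + yaw
```

Re-evaluating this mapping as the device moves is what keeps the rendered "window" into the virtual world consistent with the user's physical motion.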