Patent classifications
G06T3/06
GENERATING 2D IMAGE OF 3D SCENE
A computer-implemented method for machine-learning a function that generates a 2D image of a 3D scene. The function includes a scene encoder and a generative image model. The scene encoder takes as input a layout of the 3D scene and a viewpoint and outputs a scene encoding tensor. The generative image model takes as input the scene encoding tensor outputted by the scene encoder and outputs the generated 2D image. The machine-learning method includes obtaining a dataset comprising 2D images and corresponding layouts and viewpoints of 3D scenes. The machine-learning method includes training the function based on the obtained dataset. Such a machine-learning method forms an improved solution for generating a 2D image of a 3D scene.
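As an illustrative sketch only (the abstract does not specify any architecture), the composition of a scene encoder and a generative image model can be mocked with toy linear layers; every name (`scene_encoder`, `generative_image_model`, `W_enc`, `W_dec`), shape, and the random weights below are assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_encoder(layout, viewpoint, W_enc):
    """Encode a scene layout plus a viewpoint into a scene encoding tensor."""
    x = np.concatenate([layout.ravel(), viewpoint])  # flatten and join inputs
    return np.tanh(W_enc @ x)                        # toy single-layer encoder

def generative_image_model(encoding, W_dec, h, w):
    """Decode the scene encoding tensor into an (h, w) grayscale image."""
    return (W_dec @ encoding).reshape(h, w)

# Hypothetical shapes: a layout of 5 objects x 6 parameters, a 3-D viewpoint.
layout = rng.normal(size=(5, 6))
viewpoint = np.array([0.0, 1.5, -2.0])
W_enc = rng.normal(size=(16, 5 * 6 + 3)) * 0.1
W_dec = rng.normal(size=(8 * 8, 16)) * 0.1

image = generative_image_model(scene_encoder(layout, viewpoint, W_enc), W_dec, 8, 8)
print(image.shape)  # (8, 8)
```

Training would then fit `W_enc` and `W_dec` jointly against the dataset of images with their corresponding layouts and viewpoints.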
Generating a customized three-dimensional mesh from a scanned object
The present disclosure is directed toward systems and methods that facilitate scanning an object (e.g., a three-dimensional object) having custom mesh lines thereon and generating a three-dimensional mesh of the object. For example, a three-dimensional modeling system receives a scan of the object including depth information and a two-dimensional texture map of the object. The three-dimensional modeling system further generates an edge map for the two-dimensional texture map and modifies the edge map to generate a two-dimensional mesh including edges, vertices, and faces that correspond to the custom mesh lines on the object. Based on the two-dimensional mesh and the depth information from the scan, the three-dimensional modeling system generates a three-dimensional model of the object.
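A minimal sketch of the edge-map step, assuming a simple gradient-magnitude threshold stands in for whatever edge detector the system actually uses; the function names and the toy texture are illustrative only:

```python
import numpy as np

def edge_map(texture, threshold=0.4):
    """Mark pixels where the texture gradient is strong (candidate mesh lines)."""
    gy, gx = np.gradient(texture.astype(float))
    return np.hypot(gx, gy) > threshold

def mesh_vertices(edges):
    """Collect edge-pixel coordinates as candidate 2D mesh vertices."""
    return np.argwhere(edges)

# Toy texture map: a dark square outline on a light background stands in for
# custom mesh lines drawn on the scanned object.
texture = np.ones((8, 8))
texture[2:6, 2:6] = 0.0
edges = edge_map(texture)
verts = mesh_vertices(edges)
print(len(verts) > 0)  # True
```

In the described system, these 2D vertices would then be lifted into 3D using the depth information from the scan.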
DESIGNATED REGION PROJECTION PRINTING
A system determines an object-design for a three-dimensional model of an object. The object-design may exhibit a design continuity. The system breaks the object-design into spatial patterns corresponding to the discrete surfaces making up the outward surface of the object. The system then generates flattened patterns by projecting the spatial patterns into a two-dimensional plane. The system prints the flattened patterns onto designated regions of material sheets in an orientation that preserves the design continuity of the object-design. The regions may be extracted from the sheets and then joined at their edges to form a cover for the object that exhibits the continuity of the object-design.
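The flattening step can be sketched as a plane projection that keeps in-plane orientation. This is an assumption-laden toy (planar surfaces, orthographic projection, hypothetical axis vectors), not the patented method itself:

```python
import numpy as np

def flatten_pattern(points_3d, origin, u_axis, v_axis):
    """Project 3-D pattern points onto the plane spanned by u_axis and v_axis,
    keeping in-plane orientation so design continuity is preserved."""
    u = u_axis / np.linalg.norm(u_axis)
    v = v_axis / np.linalg.norm(v_axis)
    rel = points_3d - origin
    return np.stack([rel @ u, rel @ v], axis=1)

# Hypothetical discrete surface of a box: a pattern on the plane z = 1.
pattern = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 2.0, 1.0]])
flat = flatten_pattern(pattern,
                       origin=np.array([0.0, 0.0, 1.0]),
                       u_axis=np.array([1.0, 0.0, 0.0]),
                       v_axis=np.array([0.0, 1.0, 0.0]))
print(flat)  # [[0. 0.] [1. 0.] [1. 2.]]
```

Using the same `u_axis` orientation for adjacent surfaces is what keeps the printed regions aligned when they are later joined at their edges.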
Method and apparatus for generating projection-based frame with 360-degree image content represented by triangular projection faces assembled in triangle-based projection layout
A projection-based frame is generated according to an omnidirectional video frame and a triangle-based projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the triangle-based projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via a triangle-based projection of the viewing sphere. One side of a first triangular projection face has contact with one side of a second triangular projection face, and one side of a third triangular projection face has contact with another side of the second triangular projection face. One image content continuity boundary exists between one side of the first triangular projection face and one side of the second triangular projection face, and another image content continuity boundary exists between one side of the third triangular projection face and another side of the second triangular projection face.
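One common triangle-based projection of a viewing sphere is the octahedron map, sketched here as an illustration of the idea (the patent does not commit to an octahedron; the face indexing and L1 normalization below are assumptions):

```python
import numpy as np

def octahedron_face(direction):
    """Return the index (0..7) of the triangular octahedron face that a
    viewing-sphere direction projects onto, from the signs of x, y, z."""
    x, y, z = direction
    return (x < 0) * 4 + (y < 0) * 2 + (z < 0) * 1

def octahedron_project(direction):
    """Project a direction onto its octahedron face by L1 normalization."""
    d = np.asarray(direction, dtype=float)
    return d / np.abs(d).sum()

print(octahedron_face((1.0, 2.0, 3.0)))          # 0
print(octahedron_project((1.0, 1.0, 2.0)).sum()) # 1.0
```

Faces that share a sphere edge can then be packed side by side in the 2D layout so that the shared edge becomes an image content continuity boundary.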
Panoramic image compression method and apparatus
A panoramic image compression method and device are disclosed. The method comprises: obtaining a first spherical model formed by a first panoramic image to be compressed; generating a second spherical model in the first spherical model according to a main view image of a user; establishing a first mapping relationship between plane 2D rectangular coordinates in a second panoramic image and plane 2D rectangular coordinates in the first panoramic image; and sampling, from the first panoramic image, pixels corresponding to plane 2D rectangular coordinates in the second panoramic image according to the first mapping relationship to constitute the second panoramic image containing the pixels, so as to realize the compression of the first panoramic image.
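The sampling step can be sketched as resampling the first panorama through a coordinate mapping. For simplicity this sketch uses a uniform nearest-neighbor mapping; the patented mapping depends on the user's main view, which is not reproduced here:

```python
import numpy as np

def compress_panorama(pano, out_w, out_h):
    """Build a second, smaller panorama by sampling pixels of the first one
    through a plane-coordinate mapping (here: uniform nearest-neighbor)."""
    h, w = pano.shape[:2]
    # Mapping relationship: (u2, v2) in the second panorama -> (u1, v1).
    u1 = np.arange(out_w) * w // out_w
    v1 = np.arange(out_h) * h // out_h
    return pano[np.ix_(v1, u1)]

pano = np.arange(16 * 32).reshape(16, 32)   # toy first panoramic image
small = compress_panorama(pano, out_w=8, out_h=4)
print(small.shape)  # (4, 8)
```

A viewpoint-dependent mapping would simply allocate more of the second panorama's coordinates to the main-view region before sampling.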
Image display apparatus, mobile device, and methods of operating the same
A mobile device is provided. The mobile device may include a communication interface; a display; a memory configured to store one or more instructions; and at least one processor configured to execute the one or more instructions stored in the memory to: control the communication interface to communicate with an image display apparatus; control a viewpoint of a 360-degree image based on an input; and control the communication interface to transmit, to the image display apparatus, at least one of an image corresponding to the viewpoint of the 360-degree image or viewpoint control information corresponding to the viewpoint of the 360-degree image.
Fringe projection for determining topography of a body
A fringe projection method for determining the topography of a body (12) comprising the steps: projecting a series of sets of patterns (Ti) onto a surface (20) of the body (12), wherein each set has at least two patterns (Ti) and wherein each pattern (Ti) has S fringes; for each pattern (Ti), recording an image (24.i) of the surface (20) having the projected pattern, so that a sequence of recordings is formed; and calculating the topography from the images (24.i), wherein such patterns are projected in which each fringe has an intensity distribution perpendicular to the fringe longitudinal direction (L) and each intensity distribution can be expressed by a function (Q) which has a spatial phase position. According to the invention, the phase position changes as a function of a code (g(s)) of the ordinal number (s) of the fringe.
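A coded fringe pattern of this kind can be sketched as a cosine profile whose phase offset depends on a per-fringe code; the cosine form, the binary 0/π code, and all parameter values below are assumptions chosen for illustration:

```python
import numpy as np

def fringe_pattern(width, n_fringes, phase_code, shift):
    """Intensity profile of one projected pattern: a cosine per fringe whose
    phase offset depends on the fringe's code plus the set's phase shift."""
    x = np.arange(width)
    period = width / n_fringes
    s = (x // period).astype(int)                  # ordinal number of the fringe
    phase = 2 * np.pi * x / period + phase_code[s] + shift
    return 0.5 + 0.5 * np.cos(phase)

# Hypothetical binary code over S = 4 fringes: phase jumps of 0 or pi.
code = np.array([0.0, np.pi, 0.0, np.pi])
pattern = fringe_pattern(width=64, n_fringes=4, phase_code=code, shift=0.0)
print(pattern.shape)  # (64,)
```

Shifting `shift` between the patterns of a set gives the phase-shifted recordings, while the code `g(s)` makes each fringe's absolute ordinal recoverable from the decoded phase.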
Interface-based modeling and design of three dimensional spaces using two dimensional representations
Interface-based modeling and design of three dimensional spaces using two dimensional representations are provided herein. An example method includes converting a three dimensional space into a two dimensional space using a map projection schema, where the two dimensional space is bounded by the ergonomic limits of a human and is provided as an ergonomic user interface; receiving an anchor position within the ergonomic user interface that defines a placement of an asset relative to the three dimensional space when the two dimensional space is re-converted back to a three dimensional space; and re-converting the two dimensional space back into the three dimensional space for display, along with the asset, within an optical display system.
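One way to sketch such a map projection schema is an equirectangular-style forward/inverse pair clipped to ergonomic limits; the limit values and function names here are assumptions, not figures from the disclosure:

```python
import math

# Hypothetical ergonomic limits (radians): comfortable head yaw/pitch range.
YAW_LIMIT = math.radians(60)
PITCH_LIMIT = math.radians(40)

def to_2d(direction):
    """Project a 3-D view direction into the bounded 2-D interface space;
    returns None outside the ergonomic limits."""
    x, y, z = direction
    yaw = math.atan2(x, -z)  # 0 when looking straight down -z
    pitch = math.asin(y / math.sqrt(x * x + y * y + z * z))
    if abs(yaw) > YAW_LIMIT or abs(pitch) > PITCH_LIMIT:
        return None
    return (yaw / YAW_LIMIT, pitch / PITCH_LIMIT)  # normalized UI coordinates

def to_3d(u, v):
    """Re-convert a 2-D anchor position back to a 3-D view direction."""
    yaw, pitch = u * YAW_LIMIT, v * PITCH_LIMIT
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

print(to_2d((0.0, 0.0, -1.0)))  # (0.0, 0.0)
```

An anchor placed at normalized coordinates `(u, v)` in the flat interface then lands at `to_3d(u, v)` when the space is re-converted for the optical display.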
POSITIONAL INFORMATION DISPLAY DEVICE, POSITIONAL INFORMATION DISPLAY METHOD, POSITIONAL INFORMATION DISPLAY PROGRAM, AND RADIOGRAPHY APPARATUS
A first positional information derivation unit derives first positional information indicating at least one first position related to the insertion of an insertion structure into a target structure in a subject from a preoperative image acquired before a medical procedure for the subject. A second positional information derivation unit derives second positional information indicating at least one second position on the insertion structure from an intraoperative image acquired during the medical procedure for the subject. A display control unit displays, on a display unit, a positional information screen including at least one of a distance between the first position and the second position or an angle related to the first position and the second position on the basis of the first positional information and the second positional information in a coordinate system common to a coordinate system of the preoperative image and a coordinate system of the intraoperative image.
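Once the first and second positions are expressed in the common coordinate system, the displayed distance and angle reduce to standard geometry. A minimal sketch, with hypothetical example positions (the apex point and coordinates are assumptions for illustration):

```python
import math

def distance(p, q):
    """Euclidean distance between the first and second positions."""
    return math.dist(p, q)

def angle_deg(p, q, apex):
    """Angle at a common reference point (apex) between rays to p and to q."""
    v1 = [a - b for a, b in zip(p, apex)]
    v2 = [a - b for a, b in zip(q, apex)]
    dot = sum(a * b for a, b in zip(v1, v2))
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical positions already registered to the common coordinate system.
planned_entry = (0.0, 0.0, 0.0)  # first position (from the preoperative image)
tip = (3.0, 4.0, 0.0)            # second position (from the intraoperative image)
print(distance(planned_entry, tip))  # 5.0
print(angle_deg((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)))  # 90.0
```

The registration of the preoperative and intraoperative images into that common coordinate system is the prerequisite step the abstract describes; the arithmetic above only consumes its result.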