G06T2219/021

Systems and methods for image processing

An image processing method is provided, including: obtaining image data of a cavity wall of an organ; unfolding the cavity wall; and generating an image of the unfolded cavity wall. The unfolding of the cavity wall may include: obtaining a mask and a centerline of the organ; obtaining a connected region of the mask; dividing the connected region into at least one equidistant block; determining an orientation of the equidistant block in a three-dimensional coordinate system including a first direction, a second direction, and a third direction; determining an initial normal vector and an initial tangent vector of a center point of the centerline; assigning a projection of the initial normal vector to a normal vector of a light direction of the center point; and assigning the third direction, or a reverse direction of the third direction, to a tangent vector of the light direction of the center point.
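The tangent/normal bookkeeping described in this abstract can be sketched numerically. The following is an illustrative reconstruction, not the patented method: `light_directions` and `third_dir` are hypothetical names, the third direction is assumed to be the z-axis, and the initial normal is taken as any vector projected perpendicular to the centerline tangent.

```python
import numpy as np

def light_directions(centerline, third_dir=np.array([0.0, 0.0, 1.0])):
    """For each center point, derive a tangent and a projected normal."""
    # Initial tangent: finite difference along the centerline.
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    light_normals, light_tangents = [], []
    for t in tangents:
        # Initial normal: start from a reference vector not parallel to t.
        ref = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(ref, t)) > 0.9:
            ref = np.array([0.0, 1.0, 0.0])
        n = ref - np.dot(ref, t) * t        # project ref onto the plane perpendicular to t
        n /= np.linalg.norm(n)
        light_normals.append(n)             # "projection of the initial normal vector"
        # Tangent of the light direction: the third direction or its reverse,
        # chosen here so it roughly agrees with the centerline tangent.
        sign = 1.0 if np.dot(t, third_dir) >= 0 else -1.0
        light_tangents.append(sign * third_dir)
    return np.array(light_normals), np.array(light_tangents)
```

For a straight centerline along z, every projected normal comes out perpendicular to the centerline and every light tangent equals the third direction.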

Method and system for the 3D design and calibration of 2D substrates
10748327 · 2020-08-18 ·

A system and method including scanning an object with a three-dimensional (3D) scanning module of a computing system; providing a three-dimensional (3D) image model from said scanned object, or from a user input, with a 3D CAD module of said computing system executing computer code, stored in a non-transitory computer readable medium, configured to perform said three-dimensional (3D) image model step; rescaling, with a rescaling module of said computing system, said three-dimensional (3D) image model; calibrating, with a calibration module of said computing system, said three-dimensional (3D) image model; retopologizing said three-dimensional (3D) image model with said calibration module; unwrapping said three-dimensional (3D) image model with a 3D to 2D translation module; and converting, with said 3D to 2D translation module, said unwrapped three-dimensional (3D) image model into a two-dimensional (2D) graphic or embroidery file format.

AUGMENTED EXPRESSION SYSTEM
20200242826 · 2020-07-30 ·

Embodiments described herein relate to an augmented expression system to generate and cause display of a specially configured interface to present an augmented reality perspective. The augmented expression system receives image and video data of a user and tracks facial landmarks of the user, based on the image and video data, in real time to generate and present a 3-dimensional (3D) bitmoji of the user.

Augmented expression system
10719968 · 2020-07-21 ·

Embodiments described herein relate to an augmented expression system to generate and cause display of a specially configured interface to present an augmented reality perspective. The augmented expression system receives image and video data of a user and tracks facial landmarks of the user, based on the image and video data, in real time to generate and present a 3-dimensional (3D) bitmoji of the user.

IMAGE PROCESSING DEVICE, METHOD, AND PROGRAM
20200151949 · 2020-05-14 ·

An image processing device is described herein including an information input unit that receives as an input three-dimensional structure information indicating a three-dimensional structure of a heart; and an image generation unit that develops an inner wall of the atria and ventricles of the heart indicated by the three-dimensional structure information into a two-dimensional image based on an equal-area projection, and generates a developed image that is interrupted by dividing the two-dimensional image into a front wall, a rear wall, a left wall, and a right wall.
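An equal-area development of a closed surface can be illustrated with the Lambert cylindrical projection; this is only one possible equal-area mapping and is not stated to be the patent's choice. The four-wall split by longitude quadrant is likewise illustrative, and all names are hypothetical.

```python
import numpy as np

def lambert_equal_area(points):
    """Map unit-sphere points (x, y, z) to a 2D equal-area chart.

    Lambert cylindrical projection: u = longitude, v = sin(latitude).
    Equal-area: surface patches keep their relative areas in 2D.
    """
    x, y, z = points.T
    u = np.arctan2(y, x)   # longitude in [-pi, pi]
    v = z                  # equals sin(latitude) for unit-sphere points
    return np.stack([u, v], axis=1)

def wall_label(u):
    """Assign a developed point to one of four walls by longitude quadrant."""
    if -np.pi / 4 <= u < np.pi / 4:
        return "front"
    if np.pi / 4 <= u < 3 * np.pi / 4:
        return "left"
    if -3 * np.pi / 4 <= u < -np.pi / 4:
        return "right"
    return "rear"
```

Cutting the chart at the three interior quadrant boundaries produces the "interrupted" developed image: four strips, one per wall, each area-faithful.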

CUT-SURFACE DISPLAY OF TUBULAR STRUCTURES
20200151874 · 2020-05-14 ·

A method for visualizing a tubular object from a set of volumetric data may include the steps of: determining a viewing direction for the tubular object; selecting a constraint subset of the tubular object within the volumetric data; defining a cut-surface through the volumetric data and including the constraint subset of the tubular object within the volumetric data; and rendering an image based upon the determined viewing direction and the volumetric data of the tubular object along the intersection of the volumetric data and the defined cut-surface. Additionally or alternatively, the method may identify a plurality of bifurcations in the tubular object; assign a weighting factor to each identified bifurcation; determine a bifurcation normal vector associated with each bifurcation; determine a weighted average of the bifurcation normal vectors; and render an image of the volumetric data from a perspective parallel to the weighted average of the bifurcation normal vectors.
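The second variant's viewing-direction computation reduces to a weighted vector mean of the bifurcation normals. A minimal sketch, with hypothetical names and no claim to match the patent's exact weighting scheme:

```python
import numpy as np

def viewing_direction(bifurcation_normals, weights):
    """Weighted average of per-bifurcation normal vectors, renormalized.

    bifurcation_normals: (K, 3) array, one normal per identified bifurcation.
    weights: K weighting factors (e.g. by bifurcation size or depth).
    """
    w = np.asarray(weights, dtype=float)
    n = np.asarray(bifurcation_normals, dtype=float)
    avg = (w[:, None] * n).sum(axis=0) / w.sum()
    return avg / np.linalg.norm(avg)   # render from a perspective parallel to this
```

With two equally weighted orthogonal normals, the result bisects them, which matches the intuition of viewing the vessel tree "face on" to its bifurcation planes on average.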

Augmented reality (AR) display of pipe inspection data

Described is a method of providing an augmented reality (AR) scene of pipe inspection data, including: obtaining, using a processor, pipe inspection data derived from a pipe inspection robot that traverses through the interior of an underground pipe, the pipe inspection data including one or more sets of condition assessment data relating to an interior of the underground pipe; obtaining, using a processor, real-time visual image data of an above-ground surface; combining, using a processor, the pipe inspection data with the real-time visual image data in an AR scene; and displaying, using a display device, the AR scene. Other examples are described and claimed.
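The combining step can be illustrated as a simple overlay: condition-assessment markers, already projected from the underground pipe onto the above-ground camera view, are blended into the frame. The projection itself is assumed given as pixel positions; `overlay_inspection` and the marker layout are hypothetical, not the patented pipeline.

```python
import numpy as np

def overlay_inspection(frame, markers, color=(255, 0, 0)):
    """Blend pipe-inspection markers into a camera frame (H, W, 3 uint8).

    markers: list of (row, col) pixel positions where a below-ground
    defect projects onto the above-ground view.
    """
    ar = frame.copy()                  # leave the live frame untouched
    for r, c in markers:
        # Draw a small filled square at each projected defect location.
        ar[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] = color
    return ar
```

A real AR scene would redo this per frame against the tracked camera pose; the sketch only shows the compositing of inspection data into visual image data.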

Method for generating and using a two-dimensional drawing having three-dimensional orientation information

A two-dimensional drawing of a three-dimensional wire harness model is generated by selecting a starting node from a plurality of nodes of the three-dimensional wire harness model, where the starting node is directly connected to a first bundle and a second bundle of the plurality of bundles, and wherein further each of the first and second bundles is representable by a corresponding first or second vector. A reference plane is defined based on an orientation of the starting node, the first vector, and the second vector, such that a first adjacent node may then be mapped onto the reference plane by geometric translation. Thereafter, a plurality of mapping operations are sequentially carried out until each of the plurality of nodes and the plurality of bundles has been mapped, by geometric translation, to the reference plane, and wherein corresponding translation matrices are stored in association with corresponding ones of the plurality of mapped nodes and/or the plurality of mapped bundles. The two-dimensional drawing of the three-dimensional wire harness model may then be generated such that the two-dimensional drawing includes three-dimensional orientation data corresponding to the plurality of bundles.
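The store-the-translation-matrices idea can be illustrated with a minimal sketch: a chain of 3D nodes is laid out in 2D at true segment lengths, while each bundle's original 3D offset is kept as a homogeneous translation matrix. The name `flatten_harness` and the layout rule (all bundles along +x) are simplifications, not the patent's reference-plane mapping.

```python
import numpy as np

def flatten_harness(nodes3d):
    """Flatten a chain of 3D nodes into 2D while retaining 3D orientation.

    Returns the 2D node positions and, per bundle, a 4x4 homogeneous
    translation matrix holding the original 3D offset.
    """
    pts2d = [np.array([0.0, 0.0])]
    matrices = []
    for a, b in zip(nodes3d[:-1], nodes3d[1:]):
        seg = b - a
        # 2D mapping: lay each bundle out along +x at its true 3D length,
        # so lengths in the drawing match the physical harness.
        pts2d.append(pts2d[-1] + np.array([np.linalg.norm(seg), 0.0]))
        # Store the 3D offset so the drawing keeps full orientation data.
        m = np.eye(4)
        m[:3, 3] = seg
        matrices.append(m)
    return np.array(pts2d), matrices
```

Because each stored matrix records the bundle's true 3D displacement, the 3D chain can be rebuilt from the 2D drawing by composing the matrices in order.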

METHOD FOR ESTABLISHING A DEFORMABLE 3D MODEL OF AN ELEMENT, AND ASSOCIATED SYSTEM
20200118332 · 2020-04-16 ·

A method is provided for generating a three-dimensional morphable model of an element from an initial database of examples of such elements, the database providing data that allow, for each element of the initial database, determination of a three-dimensional meshed surface based on points and on a triangular network connecting the points.

METHOD FOR TEXTURING A 3D MODEL

Method for texturing a 3D model of at least one scene (5), comprising:
a) meshing, with surface elements (50; 55), a point cloud (45) representing the scene, so as to generate the 3D model, each surface element representing an area of the scene;
b) unfolding the 3D model to obtain a 2D model formed of a plane mesh (60a; 60b) of polygons (65), each surface element corresponding to a single polygon, and vice versa;
c) for at least one, preferably all, of the surface elements:
iv) identifying, from an image bank (40a; 40b), the images representing the area of the scene that were acquired by a camera whose image plane (72a-b) has, in the corresponding acquisition position, a normal direction forming an angle of less than 10°, preferably less than 5°, better still less than 3°, with a direction (70) normal to the face of the surface element,
v) selecting an image (40a-b) from the identified images, and
vi) associating a texture property with the corresponding polygon (65), from a piece of information of a pixel (80; 85) of the selected image that is superimposed on the surface element (55), so as to produce a textured 2D model; and
d) producing the textured 3D model by matching the 3D model with the textured 2D model.
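The camera-selection test in step c) iv) — image-plane normal within 10° of the surface-element normal — can be sketched as follows. `best_view` and the camera dictionary are hypothetical names; the patent does not specify how ties among qualifying images are broken, so this sketch simply keeps the smallest angle.

```python
import numpy as np

def best_view(face_normal, cameras, max_angle_deg=10.0):
    """Pick the camera whose image-plane normal deviates least from the
    surface element's normal, rejecting views beyond max_angle_deg."""
    fn = face_normal / np.linalg.norm(face_normal)
    best, best_angle = None, max_angle_deg
    for cam_id, cam_normal in cameras.items():
        cn = cam_normal / np.linalg.norm(cam_normal)
        # Angle between the two normals, clipped for numerical safety.
        angle = np.degrees(np.arccos(np.clip(np.dot(fn, cn), -1.0, 1.0)))
        if angle <= best_angle:       # step iv)'s threshold, then v)'s selection
            best, best_angle = cam_id, angle
    return best
```

Step vi) would then copy pixel information from the returned camera's image onto the polygon that corresponds to the surface element.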