Patent classifications
G06T2219/2021
MESH CORRECTION DEPENDING ON MESH NORMAL DIRECTION
The invention relates to a system and computer-implemented method for enabling correction of a segmentation of an anatomical structure in 3D image data. The segmentation may be provided by a mesh which is applied to the 3D image data to segment the anatomical structure. The correction may for example involve a user directly or indirectly selecting a mesh part, such as a mesh point, that needs to be corrected. The behaviour of the correction, e.g., in terms of direction, radius/neighbourhood or strength, may then be dependent on the mesh normal direction, and in some embodiments, on a difference between the mesh normal direction and the orientation of the viewing plane.
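The normal-dependent behaviour described above can be illustrated with a small sketch (hypothetical, not taken from the patent): a selected mesh point is displaced along its normal, and the displacement of each point is scaled by a distance falloff around the selection and by the alignment between that point's normal and the viewing direction, so the correction acts mainly on surface parts facing the viewer.

```python
import numpy as np

def correct_mesh(points, normals, selected, view_dir, strength=1.0, radius=2.0):
    """Displace mesh points along their normals near a selected point.

    The displacement is scaled by a Gaussian falloff with distance from the
    selected point and by the dot product between each point's normal and
    the viewing direction (back-facing normals are ignored).  This is an
    illustrative sketch only, not the patented method.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    d = np.linalg.norm(points - points[selected], axis=1)
    falloff = np.exp(-(d / radius) ** 2)
    alignment = np.clip(normals @ view_dir, 0.0, 1.0)
    return points + (strength * falloff * alignment)[:, None] * normals

# Tiny example: three points on a plane, normals along +z, viewed along +z.
pts = np.array([[0.0, 0, 0], [1.0, 0, 0], [5.0, 0, 0]])
nrm = np.tile([0.0, 0, 1], (3, 1))
out = correct_mesh(pts, nrm, selected=0, view_dir=np.array([0.0, 0, 1]))
# The selected point moves the most; the distant point barely moves.
```

Making the strength depend on the normal/view alignment is one plausible reading of "dependent on the mesh normal direction"; the radius and falloff shape are arbitrary choices for the sketch.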
METHOD, APPARATUS, AND SYSTEM FOR PROVIDING A DIGITAL ELEVATION MODEL FROM POSE POINTS
An approach is provided for digital elevation modeling from pose points. The approach, for example, involves retrieving a digital elevation model (DEM) representing a geographic area. The approach also involves retrieving pose point data associated with the geographic area. The pose point data, for instance, are collected using at least one sensor of at least one probe device traveling in the geographic area. The approach further involves editing the DEM based on the pose point data and providing the edited DEM as an output.
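A minimal sketch of the DEM-editing step, under assumed data layouts (a gridded elevation array and (x, y, z) pose points; the blending rule is illustrative, not the patent's method): each pose point is mapped to the grid cell containing it, and that cell's elevation is blended toward the observed height.

```python
import numpy as np

def edit_dem(dem, pose_points, cell_size=1.0, blend=0.5):
    """Blend DEM cell elevations toward probe pose-point heights.

    dem: 2D array of elevations; pose_points: (N, 3) array of (x, y, z)
    samples collected by probe devices travelling in the area.  Each point
    is mapped to the grid cell containing it, and the cell elevation is
    moved toward the point's height by the factor `blend`.
    (Illustrative sketch only.)
    """
    dem = dem.copy()
    for x, y, z in pose_points:
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]:
            dem[i, j] += blend * (z - dem[i, j])
    return dem

dem0 = np.zeros((4, 4))
edited = edit_dem(dem0, np.array([[0.5, 0.5, 10.0]]))
# The cell containing the pose point moves halfway toward the height 10.0.
```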
GINGIVA STRIP PROCESSING USING ASYNCHRONOUS PROCESSING
Methods and apparatuses are described for asynchronously identifying and modeling a gingiva strip from a three-dimensional (3D) dental model of a patient's dentition. These methods may reduce the time required to generate accurate 3D dental models and may therefore streamline the process of generating dental treatment plans.
System and Method for Authoring Freehand Interactive Augmented Reality Applications
An augmented reality (AR) application authoring system is disclosed. The AR application authoring system enables the real-time creation of interactive AR applications driven by freehand inputs. It enables intuitive authoring of customized freehand gesture inputs through embodied demonstration while using the surrounding environment as a contextual reference. A visual programming interface is provided with which users can define freehand interactions by matching the freehand gestures with reactions of virtual AR assets. Thus, users can create personalized freehand interactions through simple trigger-action programming logic. Further, with the support of a real-time hand gesture detection algorithm, users can seamlessly test and iterate on the authored AR experience.
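The trigger-action programming logic can be sketched as a small registry that binds demonstrated gestures (triggers) to reactions of virtual assets (actions). The class and method names below are illustrative, not the system's actual API.

```python
from typing import Callable, Dict, List

class TriggerActionAuthoring:
    """Toy trigger-action registry in the spirit of the described
    visual programming interface (hypothetical names throughout)."""

    def __init__(self) -> None:
        self._rules: Dict[str, List[Callable[[], str]]] = {}

    def when(self, gesture: str, action: Callable[[], str]) -> None:
        """Bind a demonstrated freehand gesture to an AR asset reaction."""
        self._rules.setdefault(gesture, []).append(action)

    def on_gesture(self, gesture: str) -> List[str]:
        """Fire every action registered for a detected gesture."""
        return [action() for action in self._rules.get(gesture, [])]

app = TriggerActionAuthoring()
app.when("pinch", lambda: "virtual lamp: toggle")
app.when("swipe_left", lambda: "virtual door: open")
events = app.on_gesture("pinch")
```

In the real system the gesture strings would be replaced by classifications from the hand-gesture detection algorithm, but the trigger-action mapping itself is this simple.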
MEASUREMENT SYSTEM AND STORAGE MEDIUM STORING MEASUREMENT PROGRAM
A measurement system includes a processor. Based on information measured by a camera that takes an image of a measurement target and an auxiliary object arranged on the measurement target, the processor acquires first point cloud data representing a three-dimensional geometry of the measurement target including the auxiliary object. Based on the first point cloud data and second point cloud data that is known and that represents a three-dimensional geometry of the measurement target, the processor eliminates point cloud data of the auxiliary object from the first point cloud data. The processor compares the first point cloud data, from which the point cloud data of the auxiliary object has been eliminated, with the second point cloud data. The processor displays information relating to a result of comparison on a display device.
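The elimination step can be sketched as a nearest-neighbour test against the known geometry (a brute-force version for clarity; the threshold and distance metric are assumptions, not the patent's specification): measured points that lie far from every point of the known model are classified as belonging to the auxiliary object and removed.

```python
import numpy as np

def eliminate_auxiliary(measured, known, threshold=0.1):
    """Remove points that do not match the known target geometry.

    measured: (N, 3) first point cloud (target plus auxiliary object);
    known: (M, 3) second, known point cloud of the target alone.
    A point is kept only if its nearest neighbour in the known cloud is
    within `threshold`.  (Illustrative sketch only; a real system would
    use a spatial index rather than an (N, M) distance matrix.)
    """
    dists = np.linalg.norm(measured[:, None, :] - known[None, :, :], axis=2)
    keep = dists.min(axis=1) <= threshold
    return measured[keep]

known = np.array([[0.0, 0, 0], [1.0, 0, 0]])
measured = np.array([[0.0, 0, 0.01],   # target point with small noise
                     [1.0, 0, 0.0],    # exact target point
                     [5.0, 5, 5.0]])   # auxiliary object, far from the model
cleaned = eliminate_auxiliary(measured, known)
```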
Smart-home device placement and installation using augmented-reality visualizations
A method for guiding installation of smart-home devices may include capturing, by a camera of a mobile computing device, a view of an installation location for a smart-home device; determining, by the mobile computing device, an instruction for installing the smart-home device at the installation location; and displaying, by a display of the mobile computing device, the view of the installation location together with the instruction for installing the smart-home device.
MEDICAL IMAGE EDITING
The present invention relates to medical image editing. In order to facilitate the medical image editing process, a medical image editing device (50) is provided that comprises a processor unit (52), an output unit (54), and an interface unit (56). The processor unit (52) is configured to provide a 3D surface model of an anatomical structure of an object of interest. The 3D surface model comprises a plurality of surface sub-portions. The surface sub-portions each comprise a number of vertices, and each vertex is assigned a ranking value. The processor unit (52) is further configured to identify at least one of the vertices adjacent to the determined point of interest as an intended vertex. The identification is based on a function of a detected proximity distance to the point of interest and the assigned ranking value. The output unit (54) is configured to provide a visual presentation of the 3D surface model. The interface unit (56) is configured to determine a point of interest in the visual presentation of the 3D surface model by interaction of a user. The interface unit (56) is further configured to modify the 3D surface model by displacing the intended vertex by manual user interaction. In an example, the output unit (54) is a display configured to display the 3D surface model directly to the user (58).
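The vertex-identification step can be sketched as scoring each candidate vertex by a function of its distance to the user's point of interest and its ranking value. The particular combination below (distance discounted by ranking) is an illustrative choice, not the patent's formula.

```python
import numpy as np

def intended_vertex(vertices, rankings, point_of_interest, alpha=1.0):
    """Select the vertex to edit from proximity and ranking.

    Each vertex is scored by its distance to the point of interest divided
    by (1 + alpha * ranking), so a highly ranked vertex can win even when
    another vertex is slightly closer.  Returns the index of the vertex
    with the lowest score.  (Illustrative sketch only.)
    """
    d = np.linalg.norm(vertices - point_of_interest, axis=1)
    score = d / (1.0 + alpha * rankings)
    return int(np.argmin(score))

verts = np.array([[0.0, 0, 0], [0.2, 0, 0], [1.0, 0, 0]])
ranks = np.array([0.0, 4.0, 0.0])        # middle vertex ranked as important
poi = np.array([0.05, 0.0, 0.0])         # user clicks closest to vertex 0
idx = intended_vertex(verts, ranks, poi)
# The highly ranked vertex (index 1) wins despite vertex 0 being closer.
```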
METHOD OF HIDING AN OBJECT IN AN IMAGE OR VIDEO AND ASSOCIATED AUGMENTED REALITY PROCESS
A method for generating a final image from an initial image that includes an object suitable to be worn by an individual. The presence of the object in the initial image is detected. A first layer is superposed on the initial image; the first layer includes a mask at least partially covering the object in the initial image. The appearance of at least one part of the mask is then modified, enabling the suppression of all or part of an object in an image or a video. Also described are an augmented reality process intended for use by an individual wearing a vision device on the face, and a device for trying on a virtual object.
Generative latent textured proxies for object category modeling
Systems and methods are described for generating a plurality of three-dimensional (3D) proxy geometries of an object; generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object; providing the plurality of neural textures to a neural renderer; receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object; and generating a composite image based on a pose of the object, the color image, and the alpha mask.
Object modeling using light projection
A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.