Patent classifications
G06T2200/08
3D BUILDING GENERATION USING TOPOLOGY
Embodiments provide systems and methods for three-dimensional building generation using machine learning and topological models. The method uses topology models that are converted into vertices and edges. A BGAN (building generative adversarial network) is used to create fake vertices/edges. The BGAN then generates random samples from seen samples of different building structures based on the relationships of vertices and edges. The embeddings are then fed into a machine-trained network to create a digital structure from the image.
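The topology-to-graph conversion this abstract starts from can be sketched as follows. The face-loop input format and the random edge sampler standing in for the BGAN generator are illustrative assumptions; the abstract does not specify either.

```python
import numpy as np

def topology_to_graph(faces):
    """Convert a topology model (a list of faces, each a loop of vertex
    indices) into a vertex list and an undirected edge list, as the
    abstract describes."""
    verts, edges = set(), set()
    for face in faces:
        for i, v in enumerate(face):
            verts.add(v)
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return sorted(verts), sorted(edges)

def sample_fake_edges(rng, n_vertices, n_edges):
    """Stand-in for the BGAN generator: draw random vertex pairs as
    'fake' edges to be scored against real ones by a discriminator
    (not shown)."""
    pairs = rng.integers(0, n_vertices, size=(n_edges, 2))
    return [(min(a, b), max(a, b)) for a, b in pairs if a != b]

# A unit quad footprint: one face with four vertices.
verts, edges = topology_to_graph([[0, 1, 2, 3]])
```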
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject, at least in part by weighting video pixel colors from the different vantage points of the video cameras that capture video streams of the subject, the weighting being performed according to the geometric relationship of each video-camera vantage point to a user-selected viewpoint.
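The vantage-point weighting can be sketched as a cosine-similarity blend. The clamped-cosine rule is an assumption: the abstract only says the weights follow each camera's geometric relationship to the selected viewpoint.

```python
import numpy as np

def blend_pixel_colors(colors, cam_dirs, view_dir):
    """Blend per-camera pixel colors, weighting each camera by how
    closely its vantage direction aligns with the user-selected
    viewpoint (cosine similarity, clamped to non-negative, then
    normalized). Assumes at least one camera faces the viewpoint."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cam_dirs = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    w = np.clip(cam_dirs @ view_dir, 0.0, None)
    w = w / w.sum()
    return w @ colors  # convex combination of the camera colors
```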
System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector, and computing at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each of the planar patches of the 3D model in accordance with the at least one minimal color vector computed for that patch; and outputting the 3D model with the BRDF for each patch.
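The per-patch minimal-color step can be sketched directly. Reading "minimal color vector" as the observation with the smallest norm is one interpretation; a per-channel minimum would be an equally plausible alternative.

```python
import numpy as np

def minimal_color_vector(region_colors):
    """Pick the color vector with the smallest norm among all image
    regions mapped to a patch. The darkest observation of a patch is
    the one least contaminated by specular highlights, so it serves as
    an estimate of the diffuse (Lambertian) BRDF component."""
    colors = np.asarray(region_colors, dtype=float)
    return colors[np.argmin(np.linalg.norm(colors, axis=1))]
```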
Patient-specific instrumentation for implant revision surgery
A system for creating at least one model of a bone and implanted implant comprises a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining at least one image of at least part of a bone and of an implanted implant on the bone, the at least one image being patient specific, obtaining a virtual model of the implanted implant using an identity of the implanted implant, overlaying the virtual model of the implanted implant on the at least one image to determine a relative orientation of the implanted implant relative to the bone in the at least one image, and generating and outputting a current bone and implant model using the at least one image, the virtual model of the implanted implant and the overlaying.
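The overlay step, determining the implant's pose relative to the bone image, can be sketched with a correspondence-based rigid registration (the Kabsch/SVD solver). The abstract does not name an algorithm, and the point correspondences between the virtual implant model and the image are assumed given.

```python
import numpy as np

def rigid_align(model_pts, image_pts):
    """Find the rotation R and translation t that best overlay the
    implant's virtual model points onto corresponding points detected
    in the patient image (least-squares rigid fit via SVD)."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```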
Virtual 3D communications with actual to virtual cameras optical axes compensation
A method for conducting a three-dimensional (3D) video conference between multiple participants may include determining, for each participant, updated 3D participant representation information within the virtual 3D video conference environment that represents the participant; wherein the determining comprises compensating for a difference between the actual optical axis of the camera that acquires images of the participant and the desired optical axis of a virtual camera; and generating, for at least one participant, an updated representation of the virtual 3D video conference environment that represents the updated 3D participant representation information for at least some of the multiple participants.
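The optical-axis compensation can be sketched as the single rotation carrying the camera's actual axis onto the desired virtual-camera axis, via Rodrigues' formula. Modeling the compensation as one global rotation of the participant's 3D points is a simplifying assumption; the abstract leaves the method open.

```python
import numpy as np

def compensate_optical_axis(points, actual_axis, desired_axis):
    """Rotate a participant's 3D points by the rotation that maps the
    actual optical axis onto the desired one (Rodrigues' formula).
    Assumes the two axes are not exactly opposite."""
    a = np.asarray(actual_axis, float)
    a = a / np.linalg.norm(a)
    d = np.asarray(desired_axis, float)
    d = d / np.linalg.norm(d)
    v = np.cross(a, d)
    c = a @ d
    if np.allclose(v, 0):  # axes already aligned: nothing to compensate
        return np.asarray(points, float).copy()
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    R = np.eye(3) + K + K @ K / (1.0 + c)
    return np.asarray(points, float) @ R.T
```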
METHOD, APPARATUS, SYSTEM, AND STORAGE MEDIUM FOR 3D RECONSTRUCTION
A method, device, computer system and computer readable storage medium for 3D reconstruction are provided. The method comprises: performing a 3D reconstruction of an original 2D image of a target object to generate an original 3D object corresponding to the original 2D image; selecting a complementary view of the target object from candidate views based on a reconstruction quality of the original 3D object at the candidate views; obtaining a complementary 2D image of the target object based on the complementary view; performing a 3D reconstruction of the complementary 2D image to generate a complementary 3D object corresponding to the complementary 2D image; and fusing the original 3D object and the complementary 3D object to obtain a 3D reconstruction result of the target object.
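The view-selection and fusion steps can be sketched as follows. The per-view quality dict is a hypothetical interface standing in for the abstract's reconstruction-quality assessment, and voxel-occupancy union is a simple stand-in for its unspecified fusion.

```python
import numpy as np

def pick_complementary_view(quality_by_view):
    """Choose the candidate view where the original reconstruction's
    quality score is lowest, on the premise that imaging from that view
    recovers the most missing geometry."""
    return min(quality_by_view, key=quality_by_view.get)

def fuse(original, complementary):
    """Fuse the original and complementary reconstructions by
    voxel-occupancy union."""
    return np.logical_or(original, complementary)
```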
MODEL-BASED IMAGE SEGMENTATION
Presented are concepts for initializing a model for model-based segmentation of an image, which use specific landmarks (e.g. detected using other techniques) to initialize the segmentation mesh. Using such an approach, embodiments need not be limited to predefined model transformations, but can initialize a segmentation mesh with an arbitrary shape. In this way, embodiments may provide an image segmentation algorithm that not only delivers a robust surface-based segmentation result but also does so for strongly varying target structures (in terms of shape).
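A landmark-driven initialization can be sketched as a least-squares transform fitted to landmark correspondences and applied to every mesh vertex. The affine family is an illustrative choice consistent with, but not mandated by, the abstract; richer non-rigid warps would follow the same pattern.

```python
import numpy as np

def affine_from_landmarks(model_landmarks, detected_landmarks):
    """Fit, in the least-squares sense, the affine map taking the mean
    model's landmarks to the landmarks detected in the image."""
    X = np.hstack([model_landmarks, np.ones((len(model_landmarks), 1))])
    A, *_ = np.linalg.lstsq(X, detected_landmarks, rcond=None)
    return A  # shape (d+1, d); apply as [v, 1] @ A

def init_mesh(mesh_vertices, A):
    """Initialize the segmentation mesh by mapping every mean-mesh
    vertex through the fitted transform."""
    V = np.hstack([mesh_vertices, np.ones((len(mesh_vertices), 1))])
    return V @ A
```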
LARGE-SCALE GENERATION OF PHOTOREALISTIC 3D MODELS
A system and methods are provided for large-scale generation of photorealistic 3D models, including training texture map and 3D mesh encoder and decoder neural networks, and training a sampler neural network to convert random seeds into input vectors for the texture map and 3D mesh decoder networks. Training the sampler neural network may include feeding random seeds to the sampler neural network, generating training 3D models from the texture map and 3D mesh decoders, rendering 2D images from the training 3D models, and back-propagating the output of a realism classifier and of a uniqueness function of the 2D images to the sampler neural network; and providing the trained sampler neural network with additional random seed inputs to generate multiple respective input vectors for the texture map and 3D mesh decoders, and responsively generating by the texture map and 3D mesh decoders multiple new 3D models.
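The uniqueness function that gets back-propagated alongside the realism classifier is not specified in the abstract; one plausible instantiation penalizes rendered images for being too similar to one another.

```python
import numpy as np

def uniqueness_loss(images):
    """Negative mean pairwise L2 distance between rendered 2D images:
    minimizing this loss pushes the sampler network toward seeds whose
    generated models render to mutually distinct images. This exact
    form is an assumption, not the patent's definition."""
    flat = np.asarray(images, float).reshape(len(images), -1)
    d = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    n = len(images)
    return -d.sum() / (n * (n - 1))  # mean over ordered pairs, diagonal excluded
```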
SYSTEMS AND METHODS FOR TRAINING POSE ESTIMATORS IN COMPUTER VISION
A data capture stage includes a frame at least partially surrounding a target object, a rotation device within the frame and configured to selectively rotate the target object, a plurality of cameras coupled to the frame and configured to capture images of the target object from different angles, a sensor coupled to the frame and configured to sense mapping data corresponding to the target object, and an augmentation data generator configured to control a rotation of the rotation device, to control operations of the plurality of cameras and the sensor, and to generate training data based on the images and the mapping data.
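The augmentation data generator's control loop can be sketched as below. The device interfaces (`rotate_to`, `capture`, `read`) are hypothetical stand-ins for the rotation device, cameras, and mapping sensor the abstract describes; any objects exposing those methods would work.

```python
def capture_training_data(rotation, cameras, sensor, n_steps):
    """Drive the capture rig: step the rotation device through a full
    turn, trigger every camera and the mapping sensor at each angle,
    and pair the captures into training records."""
    records = []
    for step in range(n_steps):
        angle = 360.0 * step / n_steps
        rotation.rotate_to(angle)
        images = [cam.capture() for cam in cameras]
        mapping = sensor.read()
        records.append({"angle": angle, "images": images, "mapping": mapping})
    return records
```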
Ultrasonic diagnostic apparatus, medical image processing apparatus, and non-transitory computer-readable medium storing a computer program
The ultrasonic diagnostic apparatus according to the present embodiment includes processing circuitry. The processing circuitry is configured to: acquire multiple position data associated with respective multiple two-dimensional ultrasonic image data related to multiple cross sections; smooth the acquired multiple position data; and arrange the multiple two-dimensional image data in accordance with the smoothed multiple position data to generate volume data.
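The smooth-then-arrange pipeline can be sketched as follows. The moving-average filter and the reduction of each slice position to a scalar out-of-plane coordinate are simplifying assumptions; the abstract requires smoothing but leaves the filter unspecified.

```python
import numpy as np

def smooth_positions(positions, window=3):
    """Moving-average smoothing of per-slice probe positions,
    edge-padded so the output has the same length as the input."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(positions.shape[1])], axis=1)

def arrange_volume(slices, smoothed_coord):
    """Order the 2D frames by their smoothed (scalar) positions and
    stack them into volume data."""
    order = np.argsort(smoothed_coord)
    return np.stack([slices[i] for i in order])
```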