Patent classifications
G06T7/344
METHOD AND SYSTEM FOR AUTOMATIC CHARACTERIZATION OF A THREE-DIMENSIONAL (3D) POINT CLOUD
Methods and systems for characterization of a 3D point cloud are disclosed. The method comprises: accessing a 3D point cloud, the 3D point cloud being a set of data points representative of an object; determining, based on the 3D point cloud, a 3D reconstructed object; determining, based on the 3D reconstructed object, a digital framework of the 3D point cloud, the digital framework being a ramified 3D tree structure representative of a base structure of the object; morphing a 3D reference model of the object onto the 3D reconstructed object, the morphing being based on the digital framework; and determining, based on the morphed 3D reference model and the 3D reconstructed object, characteristics of the object.
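As a rough illustration of how a "base structure" might be distilled from a point cloud (this is not the patented method; the binning scheme, function name, and bin count are invented for the sketch), one can bin points along one axis and chain the bin centroids into a crude centerline:

```python
def centerline(points, n_bins=4):
    """Toy 'digital framework': bin 3D points along z and chain bin centroids."""
    zs = [p[2] for p in points]
    lo, hi = min(zs), max(zs)
    bins = [[] for _ in range(n_bins)]
    for p in points:
        i = min(int((p[2] - lo) / (hi - lo + 1e-9) * n_bins), n_bins - 1)
        bins[i].append(p)
    # centroid of each non-empty bin, ordered bottom to top
    return [tuple(sum(c) / len(b) for c in zip(*b)) for b in bins if b]
```

The abstract's ramified tree structure generalizes this idea to branching objects, where the framework has multiple chained centerlines rather than a single one.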
METHOD OF IN-PROCESS DETECTION AND MAPPING OF DEFECTS IN A COMPOSITE LAYUP
A method of detecting defects in a composite layup includes capturing, using an infrared camera, reference images of a reference layup being laid up by a reference layup head. The method also includes manually reviewing the reference images for defects and generating reference defect masks indicating defects in the reference images. The method further includes training a neural network using the reference images and reference defect masks, creating a machine learning model that, given a production image as input, outputs a production defect mask indicating the location and type of each defect. The method also includes capturing, using an infrared camera, production images of a production layup being laid up by a production layup head, and applying the model to the production images to automatically generate production defect masks indicating each defect in the production images.
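The supervised pipeline above (labeled reference masks train a model that then predicts masks on production images) can be sketched with a deliberately trivial stand-in for the neural network: a single learned intensity threshold. The function names and flattened one-channel "images" are invented for the sketch.

```python
def fit_threshold(images, masks):
    """Pick the pixel-intensity threshold best separating defect (1) from good (0)."""
    pix = [(v, m) for img, msk in zip(images, masks) for v, m in zip(img, msk)]
    best_t, best_acc = 0.0, -1.0
    for t in sorted({v for v, _ in pix}):
        acc = sum((v >= t) == bool(m) for v, m in pix) / len(pix)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict_mask(image, t):
    """Apply the trained 'model' to a production image to produce a defect mask."""
    return [1 if v >= t else 0 for v in image]
```

A real implementation would replace the threshold with a segmentation network, but the train-on-reference / apply-to-production structure is the same.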
System and method for predictive fusion
An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
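The core bookkeeping described here, capturing a device-to-anatomy relationship once and then maintaining it as the tracked device moves, can be sketched with translation-only poses (the function names and 3-vector pose format are invented; real systems track full 6-DoF poses with rotation):

```python
def capture_relationship(device_pose, anatomy_pose):
    """Offset from device to anatomy at the moment the alignment is acquired."""
    return tuple(a - d for a, d in zip(anatomy_pose, device_pose))

def track_anatomy(device_pose, rel):
    """Re-derive the anatomy pose from a new tracked device pose and the fixed offset."""
    return tuple(d + r for d, r in zip(device_pose, rel))
```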
Surveying data processing device, surveying data processing method, and surveying data processing program
A surveying data processing device includes a point cloud data acquiring unit, a three-dimensional model acquiring unit, a first correspondence relationship determining unit, an extended three-dimensional data generating unit, and a second correspondence relationship determining unit. The point cloud data acquiring unit acquires first point cloud data obtained by laser scanning at a first viewpoint and second point cloud data obtained by laser scanning at a second viewpoint. The three-dimensional model acquiring unit acquires data of a three-dimensional model. The first correspondence relationship determining unit obtains a correspondence relationship between the first point cloud data and the three-dimensional model. The extended three-dimensional data generating unit generates, on the basis of that correspondence relationship, extended three-dimensional data in which the first point cloud data is extended. The second correspondence relationship determining unit determines a correspondence relationship between the extended three-dimensional data and the second point cloud data.
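A minimal sketch of one building block of such a device, establishing a correspondence relationship between two point sets, is nearest-neighbor matching (invented function name; real survey registration also rejects outliers and iterates):

```python
def correspondences(cloud_a, cloud_b):
    """For each point in cloud_a, return the index of its nearest point in cloud_b."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(cloud_b)), key=lambda j: d2(p, cloud_b[j]))
            for p in cloud_a]
```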
Method And Apparatus for Image Registration
An image registration apparatus includes at least one processor configured to: project, to a first model, a first image generated based on an image obtained from a first camera, to generate a first intermediate image; map the first intermediate image to a first output model to generate a first output image; project, to a second model, a second image generated based on an image obtained from a second camera, to generate a second intermediate image; map the second intermediate image to a second output model to generate a second output image; and determine a match rate between the first output image and the second output image, transforming at least one of the first model and the second model based on the determined match rate and a preset reference match rate.
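The final step, comparing a match rate against a preset reference and transforming a model until the rate is met, can be sketched in one dimension, with cyclic shifts standing in for model transforms (names and the shift search are invented for the sketch):

```python
def match_rate(a, b):
    """Fraction of positions where the two 'output images' agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def align(a, b, ref_rate, shifts=range(-2, 3)):
    """Try cyclic shifts of b; accept the first one meeting the reference match rate."""
    for s in shifts:
        shifted = b[-s:] + b[:-s] if s else b
        if match_rate(a, shifted) >= ref_rate:
            return s, shifted
    return None, b
```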
Determining Spatial Relationship Between Upper and Lower Teeth
A computer-implemented method includes receiving a 3D model of upper teeth (U1) of a patient (P) and a 3D model of lower teeth (L1) of the patient (P), and receiving a plurality of 2D images, each image representative of at least a portion of the upper teeth (U1) and lower teeth (L1) of the patient (P). The method also includes determining, based on the 2D images, a spatial relationship between the upper teeth (U1) and lower teeth (L1) of the patient (P).
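One crude way to estimate a spatial relationship from several 2D images, not the patented method, is to average the pixel offset between paired upper/lower landmarks across all images (the pairwise landmark input format and function name are hypothetical):

```python
def mean_offset(images):
    """Average 2D offset from upper-tooth landmarks to lower-tooth landmarks.

    `images` is a list per photo of ((ux, uy), (lx, ly)) landmark pairs.
    """
    dx = dy = n = 0
    for pairs in images:
        for (ux, uy), (lx, ly) in pairs:
            dx += lx - ux
            dy += ly - uy
            n += 1
    return dx / n, dy / n
```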
SYSTEMS, METHODS AND PROGRAMS FOR GENERATING DAMAGE PRINT IN A VEHICLE
The disclosure relates to systems, methods, and computer-readable media for network-based identification, generation, and management of a unique damage (finger)print of one or more vehicles by geodetic mapping of stable key points onto a ground-truth 3D model of the vehicle and its parts, which are identified from the raw images using supervised and unsupervised machine learning. Specifically, the disclosure relates to systems and methods for generating a unique damage print of a vehicle from captured images of the damaged vehicle, photogrammetrically localized to a specific vehicle part, and to the computer programs enabling the method; the damage print is configured to be used, for example, in fraud detection in insurance claims.
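The localization step, assigning each detected damage point to a specific vehicle part on the ground-truth model, can be sketched as nearest-centroid assignment (the part centroids, function name, and dict-of-lists "damage print" format are invented for the sketch; the patent's geodetic mapping is far richer):

```python
def damage_print(damage_points, part_centroids):
    """Assign each damage point to the nearest part centroid (toy localization)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    out = {}
    for pt in damage_points:
        part = min(part_centroids, key=lambda k: d2(pt, part_centroids[k]))
        out.setdefault(part, []).append(pt)
    return out
```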
METHODS FOR MULTI-MODAL BIOIMAGING DATA INTEGRATION AND VISUALIZATION
A multi-modal visualization system (MMVS) is provided, which may be used to analyze and visualize bioimaging data, objects, and pointers, such as neuroimaging data, surgical tools, and pointing rods. MMVS can integrate multiple bioimaging modalities to visualize a plurality of bioimaging datasets simultaneously, such as anatomical bioimaging data and functional bioimaging data.
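Visualizing two co-registered modalities simultaneously often comes down to per-voxel compositing; a minimal sketch, assuming the datasets are already aligned and flattened to 1D (the function name and alpha-blend choice are illustrative, not MMVS's actual rendering):

```python
def blend(anatomical, functional, alpha=0.5):
    """Per-voxel alpha blend of two co-registered datasets."""
    return [(1 - alpha) * a + alpha * f for a, f in zip(anatomical, functional)]
```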
Sensor alignment
Described herein are systems, methods, and non-transitory computer readable media for performing an alignment between a first vehicle sensor and a second vehicle sensor. Two-dimensional (2D) data indicative of a scene within an environment being traversed by a vehicle is captured by the first vehicle sensor, such as a camera or a collection of multiple cameras within a sensor assembly. A three-dimensional (3D) representation of the scene is constructed using the 2D data. 3D point cloud data also indicative of the scene is captured by the second vehicle sensor, which may be a LiDAR. A 3D point cloud representation of the scene is constructed based on the 3D point cloud data. A rigid transformation is determined between the 3D representation of the scene and the 3D point cloud representation of the scene, and the alignment between the sensors is performed based at least in part on the determined rigid transformation.
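Estimating a rigid transformation between two corresponding point sets is a classical least-squares problem; a 2D sketch of it (rotation angle plus translation from centered cross/dot terms; the 3D case uses an SVD-based solution such as Kabsch, and the function name here is invented):

```python
import math

def rigid_2d(src, dst):
    """Least-squares rotation theta and translation (tx, ty) mapping src onto dst."""
    n = len(src)
    cx_s = sum(x for x, _ in src) / n
    cy_s = sum(y for _, y in src) / n
    cx_d = sum(x for x, _ in dst) / n
    cy_d = sum(y for _, y in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        num += xs * yd - ys * xd   # cross term (sin component)
        den += xs * xd + ys * yd   # dot term (cos component)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```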
SYSTEMS AND METHODS FOR MEDICAL IMAGE REGISTRATION
There is provided a method for registration of intravital anatomical imaging modality image data and nuclear medicine image data of a patient's heart comprising: obtaining anatomical image data including a heart of a patient outputted by an anatomical intravital imaging modality; obtaining at least one nuclear medicine image data outputted by a nuclear medicine imaging modality, the nuclear medicine image data including the heart of the patient; identifying a segmentation of a network of vessels of the heart in the anatomical image data; identifying a contour of at least part of the heart in the nuclear medicine image data, the contour including at least one muscle wall border of the heart; correlating between the segmentation and the contour; registering the correlated segmentation and the correlated contour to form a registered image of the anatomical image data and the nuclear medicine image data; and providing the registered image for display.
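The registration step, bringing the correlated vessel segmentation and heart contour into agreement, can be sketched as an overlap search: score candidate alignments with the Dice coefficient and keep the best one (1D binary masks and cyclic shifts stand in for the full 3D transform search; names are invented for the sketch):

```python
def dice(a, b):
    """Dice overlap of two binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / ((sum(a) + sum(b)) or 1)

def register(seg, contour, shifts=range(-3, 4)):
    """Pick the cyclic shift of the contour maximizing Dice overlap with the segmentation."""
    return max(shifts,
               key=lambda s: dice(seg, contour[-s:] + contour[:-s] if s else contour))
```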