Patent classifications
G06T7/33
Virtual teach and repeat mobile manipulation system
A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
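The "relative transform" this abstract refers to can be illustrated with a standard Kabsch/Procrustes fit between matched keypoints. This is a minimal sketch under the assumption that descriptor matching has already produced point correspondences; the function name and example values are illustrative, not from the patent:

```python
import numpy as np

def relative_transform(task_pts, teach_pts):
    """Estimate the rigid transform (R, t) mapping matched task-image
    points onto teaching-image points (Kabsch/Procrustes)."""
    task_pts = np.asarray(task_pts, dtype=float)
    teach_pts = np.asarray(teach_pts, dtype=float)
    mu_a, mu_b = task_pts.mean(axis=0), teach_pts.mean(axis=0)
    A, B = task_pts - mu_a, teach_pts - mu_b
    U, _, Vt = np.linalg.svd(A.T @ B)          # SVD of cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t

# Points rotated 90 degrees and shifted recover the same transform.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 3.]])
R, t = relative_transform(pts, pts @ R_true.T + np.array([5., -2.]))
```

Once such a transform is in hand, the behavior parameters can be re-expressed in the task frame rather than the teaching frame.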
System and method for predictive fusion
An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
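The "relationship is maintained" step amounts to re-composing a fixed device-to-feature transform with the tracked device pose. A toy sketch with 4x4 homogeneous transforms; the frame names and numeric values are hypothetical, not from the patent:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# The relationship captured at the desired pose: the anatomical feature's
# pose relative to the imaging device (illustrative value).
T_device_feature = make_T(np.eye(3), [0.0, 0.0, 0.1])

def feature_in_world(T_world_device):
    """Re-compose the fixed device->feature relationship with the tracked
    device pose, keeping alignment synchronized as the device moves."""
    return T_world_device @ T_device_feature

# Device translated 1 m along x: the feature pose follows.
T_world_device = make_T(np.eye(3), [1.0, 0.0, 0.0])
p = feature_in_world(T_world_device)[:3, 3]
```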
Surveying data processing device, surveying data processing method, and surveying data processing program
A surveying data processing device includes a point cloud data acquiring unit, a three-dimensional model acquiring unit, a first correspondence relationship determining unit, an extended three-dimensional data generating unit, and a second correspondence relationship determining unit. The point cloud data acquiring unit acquires first point cloud data obtained by laser scanning, at a first viewpoint, and acquires second point cloud data obtained by laser scanning, at a second viewpoint. The three-dimensional model acquiring unit acquires data of a three-dimensional model. The first correspondence relationship determining unit obtains a correspondence relationship between the first point cloud data and the three-dimensional model. The extended three-dimensional data generating unit generates extended three-dimensional data in which the first point cloud data is extended, on the basis of the correspondence relationship. The second correspondence relationship determining unit determines a correspondence relationship between the extended three-dimensional data and the second point cloud data.
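Determining a "correspondence relationship" between two point clouds is commonly done by nearest-neighbor matching with a distance gate. A minimal brute-force sketch (real scan data would use a k-d tree; the function name and threshold are assumptions, not from the patent):

```python
import numpy as np

def nearest_correspondences(source, target, max_dist=1.0):
    """For each source point, index of its nearest target point;
    pairs farther apart than max_dist are rejected."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    idx = d.argmin(axis=1)                     # nearest target per source
    ok = d[np.arange(len(source)), idx] <= max_dist
    return idx, ok

# Two tiny "scans" of the same corner from nearby viewpoints.
scan1 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
scan2 = scan1 + 0.05                           # small offset between views
idx, ok = nearest_correspondences(scan1, scan2)
```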
Method And Apparatus for Image Registration
An image registration apparatus includes at least one processor configured to project, to a first model, a first image generated based on an image obtained from a first camera to generate a first intermediate image; to map the first intermediate image to a first output model to generate a first output image; to project, to a second model, a second image generated based on an image obtained from a second camera to generate a second intermediate image; to map the second intermediate image to a second output model to generate a second output image; and to determine a match rate between the first output image and the second output image and transform at least one of the first model and the second model based on the determined match rate and a preset reference match rate.
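One simple way to realize a "match rate" between the two output images is the fraction of pixel pairs agreeing within a tolerance. This is a stand-in sketch, not the patent's actual measure; the function, tolerance, and sample values are assumptions:

```python
import numpy as np

def match_rate(out_a, out_b, tol=5):
    """Fraction of pixel pairs whose intensities agree within `tol`
    levels -- a minimal stand-in for a match-rate measure."""
    diff = np.abs(out_a.astype(int) - out_b.astype(int))
    return float((diff <= tol).mean())

a = np.array([[10, 20], [30, 40]])
b = np.array([[12, 25], [30, 90]])
rate = match_rate(a, b)        # 3 of 4 pixels within tolerance -> 0.75
# If rate falls below the preset reference rate, the projection model
# would be transformed and the outputs regenerated before comparing again.
```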
SYSTEM AND METHOD FOR MULTI-MODAL MICROSCOPY
A system and method for processing multi-modal microscopy imaging data on small-scale computer architecture which avoids restrictive manufacturer data formats and APIs. The system and method leverage a web-based application made available to microscopy instrument control hardware by which direct visual output of the control hardware is captured and transmitted to an edge computing device for processing by one or more inference models in parallel to construct a composite hyperimage.
SYSTEM AND METHOD FOR HYBRID IMAGING
The present disclosure provides systems and methods for hybrid imaging. The systems and methods may obtain a first magnetic resonance (MR) image of a target object. The first MR image may be acquired by a magnetic resonance imaging (MRI) device using a first imaging sequence. The systems and methods may also obtain a second MR image of the target object. The second MR image may be acquired by the MRI device using a second imaging sequence. The second MR image may correspond to a target respiratory phase of the target object. The systems and methods may also obtain a target emission computed tomography (ECT) image of the target object. The target ECT image may correspond to the target respiratory phase. The systems and methods may further fuse, based on the second MR image, the first MR image and the target ECT image.
Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping
An electronic device and method for 3D modeling based on 2D warping is disclosed. The electronic device acquires a color image of a face of a user, depth information corresponding to the color image, and a point cloud of the face. A 3D mean-shape model of a reference 3D face is acquired and rigidly aligned with the point cloud. A 2D projection of the aligned 3D mean-shape model is generated. The 2D projection includes a set of landmark points associated with the aligned 3D mean-shape model. The 2D projection is warped such that the set of landmark points in the 2D projection is aligned with a corresponding set of feature points in the color image. A 3D correspondence between the aligned 3D mean-shape model and the point cloud is determined for a non-rigid alignment of the aligned 3D mean-shape model, based on the warped 2D projection and the depth information.
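Warping a 2D projection so that landmark points meet feature points can be done, in the simplest case, with a least-squares 2D affine fit between the matched landmarks. A minimal sketch under that assumption (the patent does not specify the warp model; function name and values are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine (A, b) with dst ~= src @ A.T + b,
    fitted from matched landmark/feature points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3x2 parameter matrix
    A, b = M[:2].T, M[2]
    return A, b

# Recover a known affine from four non-degenerate landmark pairs.
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
b_true = np.array([3.0, 4.0])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 3.]])
A, b = fit_affine(src, src @ A_true.T + b_true)
```

In practice a non-rigid warp (e.g. thin-plate spline) would follow this coarse fit.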
SYSTEMS AND METHODS FOR PROGRESSIVE REGISTRATION
A system receives a first set of points corresponding to an anatomical feature. Each point in the first set of points represents a position in a first frame. The system receives a second set of points corresponding to the anatomical feature. Each point in the second set of points represents a position in a second frame. The system identifies a first subset of the first set of points and determines a first transformation to align the first subset of the first set of points with the second set of points. The first set of points is transformed based on the first transformation. The system identifies a second subset of the first set of points and determines a second transformation to align the first and second subsets of the first set of points with the second set of points. The first set of points is transformed based on the second transformation.
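The progressive two-stage scheme can be illustrated with a translation-only toy: align a first subset, transform the whole set, then refine using the enlarged subset. Correspondences and subset choices are assumed known here, which the patent does not specify; the values are illustrative:

```python
import numpy as np

def centroid_align(src, dst):
    """Translation that moves the src centroid onto the dst centroid."""
    return np.asarray(dst, float).mean(axis=0) - np.asarray(src, float).mean(axis=0)

src = np.array([[0., 0.], [1., 0.], [0., 1.], [4., 4.]])
dst = src + np.array([2., 3.])   # second frame: same feature, shifted

# Stage 1: first transformation from the first subset; apply to the full set.
t1 = centroid_align(src[:2], dst[:2])
src = src + t1

# Stage 2: second transformation from the first and second subsets combined.
t2 = centroid_align(src[:3], dst[:3])
src = src + t2
```

A full implementation would replace `centroid_align` with a rigid or deformable fit and recompute correspondences at each stage.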
SYSTEMS, METHODS AND PROGRAMS FOR GENERATING DAMAGE PRINT IN A VEHICLE
The disclosure relates to systems, methods, and computer-readable media for network-based identification, generation, and management of a unique damage (finger) print of a vehicle, produced by geodetic mapping of stable key points onto a ground-truth 3D model of the vehicle and its parts, which are identified from raw images using supervised and unsupervised machine learning. Specifically, the disclosure relates to systems and methods for generating a unique damage print of a vehicle from captured images of the damaged vehicle, photogrammetrically localized to a specific vehicle part, and to the computer programs enabling the method; the damage print can be used, for example, in fraud detection for insurance claims.