Patent classifications
G06T7/35
SYSTEMS AND METHODS FOR RADIOLOGIC AND PHOTOGRAPHIC IMAGING OF PATIENTS
A method for identifying a misidentified study can utilize a set of photographs captured at substantially the same time as a corresponding set of medical images. The method can include determining similarities between the photographs through machine learning models and determining that a misidentified study exists when the similarity between the photographs fails to satisfy a threshold similarity.
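As an illustrative sketch only (not the patented method), the core check — flagging a study when photograph similarity falls below a threshold — could look like the following, assuming the machine learning model has already reduced each photograph to an embedding vector; the function names and the 0.8 threshold are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_misidentified(photo_embeddings, threshold=0.8):
    """Flag the study as misidentified when any pair of photograph
    embeddings fails to satisfy the similarity threshold."""
    for i in range(len(photo_embeddings)):
        for j in range(i + 1, len(photo_embeddings)):
            if cosine_similarity(photo_embeddings[i], photo_embeddings[j]) < threshold:
                return True
    return False
```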
Systems and methods for intraoperative spinal level verification
Systems and methods are provided in which intraoperatively acquired surface data is employed to verify that an intraoperatively selected spinal level corresponds to a spinal level pre-selected based on volumetric image data. Segmented surface data corresponding to the pre-selected spinal level may be obtained from the volumetric image data, such that the segmented surface data corresponds to a spinal segment that is expected to be exposed and identified intraoperatively during the surgical procedure. The segmented surface data from the pre-selected spinal level, together with segmented surface data from an adjacent spinal level, is registered to the intraoperative surface data, and quality measures associated with the registration are obtained. These measures permit an assessment of whether the pre-selected spinal surface (in the volumetric frame of reference) is likely to correspond to the intraoperatively selected spinal level.
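A minimal sketch of the verification idea, using an RMS nearest-neighbour residual as a stand-in for the patent's registration quality measures (the function names, the residual choice, and the decision margin are all assumptions, not the claimed method):

```python
import numpy as np

def rms_nn_distance(surface, segment):
    """RMS nearest-neighbour distance from intraoperative surface
    points (N,3) to a segmented surface (M,3): a registration-residual proxy."""
    d = np.linalg.norm(surface[:, None, :] - segment[None, :, :], axis=2)
    return float(np.sqrt((d.min(axis=1) ** 2).mean()))

def verify_level(surface, preselected, adjacent, margin=0.5):
    """Return True when the pre-selected segment fits the exposed
    surface clearly better than the adjacent segment does."""
    r_pre = rms_nn_distance(surface, preselected)
    r_adj = rms_nn_distance(surface, adjacent)
    return r_pre + margin < r_adj
```

In practice the residuals would come from a full rigid registration (e.g. ICP) rather than a raw point-to-point comparison.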
METHOD FOR CALIBRATING THE ALIGNMENT OF CAMERAS
A method for calibrating a multiplicity of cameras on a vehicle in a common coordinate system. The method includes: a) acquiring camera images using each camera of the multiplicity of cameras, b) ascertaining overlap regions of camera images acquired in step a), c) setting up a common optimization function, which describes the alignments of each camera of the multiplicity of cameras as a target variable of the optimization, based on the overlap regions, d) solving the common optimization function set up in step c) to ascertain the alignments of each camera.
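To illustrate step c) and d) in miniature (a hedged sketch, not the patented formulation): if each overlap region yields a relative-alignment measurement between two cameras, the per-camera alignments can be recovered as the least-squares solution of one joint system. Here alignment is reduced to a single yaw angle per camera and one camera is fixed to remove the gauge freedom; all names are hypothetical:

```python
import numpy as np

def calibrate_yaws(n_cams, overlap_meas, ref=0):
    """Jointly solve for camera yaw angles from pairwise relative-yaw
    measurements derived from overlap regions.

    overlap_meas: list of (i, j, yaw_i_minus_yaw_j) tuples.
    Camera `ref` is anchored at 0 so the solution is unique."""
    rows, rhs = [], []
    for i, j, meas in overlap_meas:
        row = np.zeros(n_cams)
        row[i], row[j] = 1.0, -1.0   # encodes yaw_i - yaw_j = meas
        rows.append(row)
        rhs.append(meas)
    anchor = np.zeros(n_cams)        # fix the reference camera
    anchor[ref] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

A full implementation would optimize 3-D rotations (and possibly translations) over reprojection errors in the overlap regions rather than scalar yaws.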
IMAGE SYNTHESIS BASED ON PREDICTIVE ANALYTICS
An embodiment includes analyzing a first digital image using a first trained neural network that classifies the first image as having a first context based on a plurality of characteristics of the first image. The embodiment generates a data structure that associates first and second elements depicted in the first image based on an affinity detected between the first and second elements in the first context. The embodiment executes a querying process that searches for information requested by a user, and detects that a search result from the querying process conveys a second context that is different from the first context. The embodiment identifies a third element in the data structure having an affinity with the first element in the second context. The embodiment renders a machine-generated image that is generated using a second trained neural network and depicts the first element and the third element in the second context.
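The data structure at the heart of this embodiment — associations between elements keyed by context — could be sketched as follows (an illustrative toy, with hypothetical class and method names; the patent leaves the concrete structure unspecified):

```python
class AffinityStore:
    """Records mutual affinities between depicted elements, keyed by
    (element, context), and answers context-specific affinity queries."""

    def __init__(self):
        self._affinities = {}

    def associate(self, a, b, context):
        """Record a mutual affinity between elements a and b in a context."""
        self._affinities.setdefault((a, context), set()).add(b)
        self._affinities.setdefault((b, context), set()).add(a)

    def related(self, element, context):
        """Elements having an affinity with `element` in `context`."""
        return self._affinities.get((element, context), set())
```

For example, if "beach" is associated with "umbrella" in a summer context and with "firepit" in a winter context, a winter-context query for "beach" yields "firepit" as the third element to render.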
Method and system for automatically processing point cloud based on reinforcement learning
A method and system for automatically processing a point cloud based on reinforcement learning are provided. The method according to an embodiment of the present disclosure includes scanning to collect a point cloud and an image through a lidar and a camera; calibrating, by a controller, to match the locations of the image and the point cloud through reinforcement learning that maximizes a reward comprising geometric and luminous-intensity consistency between the image and the point cloud; and meshing, by the controller, the point cloud into a 3D image through reinforcement learning that minimizes a penalty comprising the difference between the shape of the image and the shape of the point cloud.
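A hedged sketch of what the calibration reward might reward (not the disclosed formulation): geometric consistency as the fraction of lidar points projecting inside the image, and intensity consistency as agreement between lidar reflectance and pixel brightness. The pinhole projection, weights, and function names are assumptions:

```python
import numpy as np

def project(points, K):
    """Pinhole projection of 3-D lidar points (N,3) to pixel coords (N,2)."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def calibration_reward(points, intensities, image, K, w_geo=1.0, w_int=1.0):
    """Reward combining geometric consistency (projections landing inside
    the image) and luminous-intensity consistency (lidar reflectance
    matching pixel brightness); the RL agent would maximise this."""
    h, w = image.shape
    uv = project(points, K)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    geo = inside.mean()
    inten = 1.0 - np.abs(image[v, u] - intensities).mean()
    return w_geo * geo + w_int * inten
```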
Homography generation for image registration in inlier-poor domains
A method for efficient image registration between two images in the presence of inlier-poor domains includes receiving a set of candidate correspondences between the two images. An approximate homography between the two images is generated based upon a first correspondence in the correspondences. The set of candidate correspondences is filtered to identify inlier correspondences based upon the approximate homography. A candidate homography is computed based upon the inlier correspondences. The candidate homography can be selected as a final homography between the two images based upon a support of the candidate homography against the set of candidate correspondences. An image registration is performed between the two images based upon the candidate homography being selected as the final homography.
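The pipeline described above can be sketched end to end (an illustrative toy, not the patented algorithm): a translation-only approximate homography from the first correspondence, inlier filtering by transfer error, a DLT fit on the inliers, and a support count. The inlier tolerance and function names are assumptions:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography mapping src (N,2) to dst (N,2)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, src, dst):
    """Euclidean transfer error of each correspondence under H."""
    p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    p = p[:, :2] / p[:, 2:3]
    return np.linalg.norm(p - dst, axis=1)

def register(correspondences, tol=3.0):
    """Approximate homography from one correspondence, then inlier
    filtering, a DLT fit, and the candidate's support count."""
    src = np.array([c[0] for c in correspondences], float)
    dst = np.array([c[1] for c in correspondences], float)
    t = dst[0] - src[0]                       # translation-only approximation
    H_approx = np.array([[1, 0, t[0]], [0, 1, t[1]], [0, 0, 1.0]])
    inliers = transfer_error(H_approx, src, dst) < tol
    H = dlt_homography(src[inliers], dst[inliers])
    support = int((transfer_error(H, src, dst) < tol).sum())
    return H, support
```

The DLT step needs at least four inlier correspondences in general position; a real registration pipeline would iterate over seed correspondences and keep the candidate with the best support.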
REAL-TIME DETECTION METHOD OF BLOCK MOTION BASED ON FEATURE POINT RECOGNITION
Disclosed is a real-time detection method of block motion based on feature point recognition, including the following steps: S1, calibrating a camera; S2, shooting digital images of a block on the surface of a breakwater with the camera, and sending the digital images to a digital signal processing system based on a field-programmable gate array; S3, carrying out feature point detection of the block in the images with the digital signal processing system; S4, carrying out a coordinate conversion after the feature point detection; S5, comparing the position changes of the block's feature points before and after a test, taking the difference between the coordinates in the two images to obtain the change in the feature points and thereby the displacement of the block; and S6, displaying the displacement calculation results on a data processor.
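Step S5 reduces to a coordinate difference per feature point; a minimal sketch (hypothetical function name, coordinates assumed already converted in step S4):

```python
def feature_displacements(before, after):
    """Per-feature displacement vectors and magnitudes between the
    calibrated coordinates measured before and after the test."""
    out = []
    for (x0, y0), (x1, y1) in zip(before, after):
        dx, dy = x1 - x0, y1 - y0
        out.append((dx, dy, (dx * dx + dy * dy) ** 0.5))
    return out
```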