Patent classifications
G06T2207/10136
Semi-automated heart valve morphometry and computational stress analysis from 3D images
A method is provided for measuring or estimating stress distributions on heart valve leaflets by obtaining three-dimensional images of the heart valve leaflets, segmenting the leaflets in the three-dimensional images while capturing their locally varying thicknesses in the three-dimensional image data to generate an image-derived patient-specific model of the leaflets, and applying that model to a finite element analysis (FEA) algorithm to estimate stresses on the leaflets. The images of the heart valve leaflets may be obtained using real-time 3D transesophageal echocardiography (rt-3DTEE). Volumetric images of the mitral valve at mid-systole may be analyzed by user-initialized segmentation and 3D deformable modeling with continuous medial representation to obtain a compact representation of shape. The regional leaflet stress distributions may be predicted in normal and diseased (regurgitant) mitral valves using the techniques of the invention.
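As an illustrative sketch only (not the patented FEA method), the dependence of leaflet stress on locally varying thickness can be shown with the simple Laplace-law approximation sigma = p*r/(2*t) applied per surface element; the function name and units here are assumptions for the example.

```python
def leaflet_stress(pressure_mmHg, radii_mm, thicknesses_mm):
    # Laplace-law approximation sigma = p*r/(2*t) per leaflet element.
    # A thinner element under the same pressure and curvature carries
    # higher stress, which is why local thickness must be captured.
    MMHG_TO_KPA = 0.133322
    p = pressure_mmHg * MMHG_TO_KPA  # transvalvular pressure in kPa
    return [p * r / (2.0 * t) for r, t in zip(radii_mm, thicknesses_mm)]

# Two elements with equal curvature but different local thickness:
stresses = leaflet_stress(120.0, [12.0, 12.0], [0.8, 0.4])
```

Halving the local thickness doubles the estimated membrane stress; a full FEA solver replaces this closed-form shortcut in the actual pipeline.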
Ultrasonic cardiac assessment of hearts with medial axis curvature and transverse eccentricity
An ultrasonic imaging system produces more diagnostically useful cardiac images of the left ventricle by plotting the longitudinal medial axis of the chamber between the apex and the mitral valve plane as a curved line evenly spaced between the opposite walls of the myocardium. Transverse image planes are positioned orthogonal to the curved medial axis, with control points positioned in the short-axis view on lines evenly spaced around and emanating from the medial axis. If the short-axis view shows an oval-shaped chamber, the transverse image is stretched to give the heart a more rounded appearance, resulting in better positioning of the editing control points.
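A curved medial axis "evenly spaced between the opposite walls" can be sketched as the midpoints of paired points on the two myocardial walls; the pairing and 2-D coordinates here are simplifying assumptions for illustration.

```python
def medial_axis(wall_a, wall_b):
    # Midpoint of each pair of opposite-wall points gives a curved
    # axis equidistant from both walls (toy 2-D version).
    return [((xa + xb) / 2.0, (ya + yb) / 2.0)
            for (xa, ya), (xb, yb) in zip(wall_a, wall_b)]

# Walls that diverge toward the base produce a curved, not straight, axis:
axis = medial_axis([(0.0, 0.0), (0.0, 1.0)], [(2.0, 0.0), (4.0, 1.0)])
```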
Image processing system and method
A system for image processing (IPS), in particular for lung imaging. The system (IPS) comprises an interface (IN) for receiving at least a part of a 3D image volume (VL) acquired by an imaging apparatus (IA1) of a lung (LG) of a subject (PAT) by exposing the subject (PAT) to a first interrogating signal. A layer definer (LD) of the system (IPS) is configured to define, in the 3D image volume, a layer object (LO) that includes a representation of a surface (S) of the lung (LG). A renderer (REN) of the system (IPS) is configured to render at least a part of the layer object (LO) in 3D at a rendering view (V.sub.p) for visualization on a display device (DD).
MACHINE LEARNING MODEL FOR MEASURING PERFORATIONS IN A TUBULAR
A method and instruction memory for processing acoustic images of a downhole casing to determine perforations of the tubular. The images may be acquired by an acoustic logging tool deployed into a cased well. A machine-learning model is trained to recognize which regions of the acoustic images are perforations, in order to calculate geometric properties of the perforations and the overall casing. Renderings of the imaged casing may be overlaid with contours and properties of perforations to improve perforating, fracturing, and production operations.
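Once a classifier has marked perforation pixels, geometric properties follow from the mask. A minimal sketch, assuming a binary 2-D mask from a hypothetical ML stage (the property names are illustrative, not the patent's):

```python
import math

def perforation_properties(mask):
    # mask: 2-D list of 0/1 marking perforation pixels in an
    # unwrapped acoustic image of the casing (assumed ML output).
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    area = len(pts)
    cy = sum(r for r, _ in pts) / area  # centroid row
    cx = sum(c for _, c in pts) / area  # centroid column
    # Diameter of a circle with the same pixel area:
    eq_diam = 2.0 * math.sqrt(area / math.pi)
    return {"area_px": area, "centroid": (cy, cx), "eq_diameter_px": eq_diam}

props = perforation_properties([[0, 0, 0],
                                [0, 1, 1],
                                [0, 1, 1]])
```

Pixel measurements would be scaled by the tool's azimuthal and axial resolution to obtain physical sizes.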
Determining at least one final two-dimensional image for visualizing an object of interest in a three dimensional ultrasound volume
The present invention relates to a device (2) and a method (100) for determining at least one final two-dimensional image or slice for visualizing an object of interest in a three-dimensional ultrasound volume. The method (100) comprises the steps:
a) providing (101) a three-dimensional image of a body region of a patient body, wherein an applicator configured for fixating at least one radiation source is inserted into the body region;
b) providing (102) an initial direction, in particular by randomly determining the initial direction within the three-dimensional image;
c) repeating (103) the following sequence of steps s1) to s4):
s1) determining (104), via a processing unit, a set-direction within the three-dimensional image, based on the initial direction for the first sequence or on a probability map determined during a previous sequence;
s2) extracting (105), via the processing unit, an image-set of two-dimensional images from the three-dimensional image, such that the two-dimensional images of the image-set are arranged coaxially and sequentially along the set-direction;
s3) applying (106), via the processing unit, a pre-trained applicator classification method to each two-dimensional image of the image-set, resulting in a probability score for each image indicating the probability that the applicator is depicted, in particular fully depicted, in that image in a cross-sectional view; and
s4) determining (107), via the processing unit, a probability map representing the probability scores of the two-dimensional images of the image-set with respect to the set-direction;
and the further step:
d) determining (108), via the processing unit and after finishing the last sequence, the two-dimensional image associated with the highest probability score, in particular from the image-set determined during the last sequence, as the final two-dimensional image.
The invention provides an efficient way to ensure that the ultrasound volume contains the required clinical information, by providing the necessary scan planes showing the object of interest, e.g. the applicator (6), in a three-dimensional ultrasound volume.
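The s1)–s4) loop above can be sketched as a search over candidate set-directions, scoring each slice with a stand-in for the pre-trained classifier; the function names and the scalar "direction" parameter are simplifying assumptions.

```python
def find_final_slice(score_fn, directions, n_slices):
    # score_fn(direction, i) -> probability that the applicator is
    # fully depicted in slice i (stand-in for the classifier, step s3).
    best = None
    for d in directions:                                    # s1) set-direction
        probs = [score_fn(d, i) for i in range(n_slices)]   # s2)+s3) slices scored
        peak = max(range(n_slices), key=probs.__getitem__)  # s4) probability-map peak
        if best is None or probs[peak] > best[0]:
            best = (probs[peak], d, peak)
    return best  # d) slice with the highest probability score

# Toy classifier: score peaks when the slice index matches the direction.
best = find_final_slice(lambda d, i: 1.0 - abs(i - d) / 10.0, [2, 5], 8)
```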
Apparatus and method for determining motion of an ultrasound probe
A method of determining a three-dimensional motion of a movable ultrasound probe (10) is described. The method is carried out during acquisition of an ultrasound image of a volume portion (2) by the ultrasound probe. The method comprises receiving a stream of ultrasound image data (20) from the ultrasound probe (10) while the ultrasound probe is moved along the volume portion (2); inputting at least a sub-set of the ultrasound image data (20, 40) representing a plurality of ultrasound image frames (22) into a machine-learning module (50), wherein the machine learning module (50) has been trained to determine the relative three-dimensional motion between ultrasound image frames (22); and determining, by the machine-learning module (50), a three-dimensional motion indicator (60) indicating the relative three-dimensional motion between the ultrasound image frames.
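As a hedged stand-in for the machine-learning module (50), inter-frame motion along one axis can be estimated classically by finding the lag that best correlates two intensity profiles; a trained network generalizes this to full three-dimensional motion between frames.

```python
def estimate_shift(frame_a, frame_b, max_lag=5):
    # Classical correlation stand-in for the learned motion estimator:
    # return the integer lag aligning two 1-D intensity profiles.
    def score(lag):
        return sum(frame_a[i] * frame_b[i + lag]
                   for i in range(len(frame_a))
                   if 0 <= i + lag < len(frame_b))
    return max(range(-max_lag, max_lag + 1), key=score)

# frame_b is frame_a shifted right by two samples:
shift = estimate_shift([0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0])
```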
Three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging
For three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging, the three-dimensional segmentation is output by a machine-learnt multi-task generator. The generator is trained from 3D information, such as a sparse ICE volume assembled from the 2D ICE images, and is trained to output both the 3D segmentation and a complete volume. The 3D segmentation may be projected to 2D and used, together with an ICE image, as input to another network trained to output a 2D segmentation for that ICE image. Display of the 3D segmentation and/or 2D segmentation may guide ablation of tissue in the patient.
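Projecting a 3D segmentation down to 2D can be sketched as a max-projection of a binary volume along one axis (a silhouette); the actual method projects onto the ICE imaging plane, so this axis-aligned version is an illustrative assumption.

```python
def project_to_2d(seg):
    # seg: binary 3-D volume as nested lists [depth][row][col].
    # Max over depth keeps a voxel's label if any slice contains it.
    depth, rows, cols = len(seg), len(seg[0]), len(seg[0][0])
    return [[max(seg[d][r][c] for d in range(depth)) for c in range(cols)]
            for r in range(rows)]

proj = project_to_2d([[[0, 1], [0, 0]],
                      [[0, 0], [1, 0]]])
```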
REAL-TIME ANATOMICALLY BASED DEFORMATION MAPPING AND CORRECTION
A method includes generating a real-time ultrasound image of anatomy of interest. At least a sub-portion of the anatomy of interest is deformed from an initial location to a different location by pressure applied by an external force. The method further includes obtaining a 2-D slice, corresponding to the same plane as the real-time ultrasound image, from 3-D reference image data in which the corresponding sub-portion is at the initial location. The method further includes determining displacement fields for the sub-portion from the sub-portion, the corresponding sub-portion, and other anatomy that is not deformed in the real-time ultrasound image and the 3-D reference image data. The method further includes deforming the 3-D reference image data using the displacement fields, which creates deformed 3-D reference image data based on the different location.
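Applying a displacement field to reference data can be sketched in one dimension as a nearest-neighbour warp; real 3-D deformation interpolates a dense vector field, so the integer 1-D version below is a simplifying assumption.

```python
def apply_displacement(ref, disp):
    # Warp a 1-D reference profile: each output sample i reads from
    # ref[i + disp[i]], clamped to the valid range (toy stand-in for
    # deforming 3-D reference image data with a displacement field).
    n = len(ref)
    return [ref[min(max(i + disp[i], 0), n - 1)] for i in range(n)]

warped = apply_displacement([10, 20, 30, 40], [0, 1, -1, 0])
```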
Real-time 3-D ultrasound reconstruction of knee and its implications for patient specific implants and 3-D joint injections
Methods and apparatus for treating a patient. The method includes acquiring a plurality of radio-frequency (RF) signals with an ultrasound transducer, each RF signal representing one or more return echoes from a scan line of a pulse-mode echo ultrasound scan. A position of the ultrasound transducer corresponding to each of the acquired RF signals is determined, and a plurality of contour lines is generated from the plurality of RF signals. The method estimates a 3-D shape and position of an anatomical feature, such as a joint of a patient, based on the generated contour lines and the corresponding ultrasound transducer positions. An apparatus or computer includes a processor and a memory with instructions that, when executed by the processor, perform the aforementioned method.
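One simple way to turn RF scan lines into contour points is to take, per line, the first echo sample whose amplitude exceeds a threshold (a strong bone interface) and combine its range with the tracked transducer position; the threshold detector and coordinate layout here are illustrative assumptions, not the patented reconstruction.

```python
def contour_points(rf_lines, x_positions, threshold, dr):
    # rf_lines: per-scan-line RF sample amplitudes; x_positions: lateral
    # transducer position for each line; dr: range per sample.
    pts = []
    for line, x in zip(rf_lines, x_positions):
        for i, s in enumerate(line):
            if abs(s) >= threshold:           # first strong return echo
                pts.append((x, i * dr))       # (lateral pos, echo range)
                break
    return pts

pts = contour_points([[0, 0, 5, 1], [0, 3, 0, 0]], [0, 1],
                     threshold=2, dr=0.5)
```

Repeating this over many transducer positions yields the contour lines from which the 3-D joint surface is estimated.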
ARTICULATED STRUCTURED LIGHT BASED-LAPAROSCOPE
In a method of using a structured-light-based system, real-time 2D images of a portion of a field of view are captured using an endoscope. A portion of an object in the field of view is illuminated with a structured light pattern, and light reflected from the field of view is detected. From the reflected light, a 3D image of the field of view is constructed, and 3D locations of points on a surface of the object are determined. The real-time 3D spatial position of the endoscope and/or a surgical tool is also determined. If the distance between the surface and the endoscope and/or surgical tool, as determined using the 3D spatial position, falls below a predetermined distance, an alert is generated to notify a user.
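The proximity check at the end can be sketched as a nearest-point distance test against the reconstructed surface; a brute-force minimum over surface points stands in for whatever spatial index a real system would use.

```python
import math

def proximity_alert(tool_pos, surface_pts, min_dist):
    # Alert when the tracked tool (or endoscope) tip comes closer than
    # min_dist to any reconstructed 3-D surface point (illustrative).
    d = min(math.dist(tool_pos, p) for p in surface_pts)
    return d < min_dist, d

alert, d = proximity_alert((0.0, 0.0, 0.0),
                           [(1.0, 0.0, 0.0), (3.0, 4.0, 0.0)],
                           min_dist=2.0)
```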