Patent classifications
A61B8/4245
Imaging view steering using model-based segmentation
An imaging steering apparatus includes sensors and an imaging processor configured for: acquiring, via multiple ones of the sensors and from a current position (322) and current orientation (324), an image of an object of interest; based on a model, segmenting the acquired image; and determining, based on a result of the segmenting, a target position (318) and target orientation (320), with the target position and/or target orientation differing correspondingly from the current position and/or current orientation. An electronic steering parameter effective toward improving the current field of view may be computed, and a user may be provided instructional feedback (144) for navigating an imaging probe toward the improved view. A robot can be configured for, automatically and without need for user intervention, imparting force (142) to the probe to move it responsive to the determination.
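The segmentation-to-steering step can be sketched as follows. This is an illustrative assumption, not the patent's method: the steering offset is taken as the in-plane displacement that would center the segmented object in the image.

```python
import numpy as np

def steering_update(seg_mask, image_shape):
    """Given a binary segmentation mask of the object of interest,
    compute an in-plane offset (pixels) that would move the object's
    centroid to the image center (a hypothetical steering criterion)."""
    ys, xs = np.nonzero(seg_mask)
    centroid = np.array([ys.mean(), xs.mean()])
    center = (np.array(image_shape) - 1) / 2.0
    return center - centroid  # pose delta in image coordinates
```

In a real system this image-space offset would be mapped through the probe geometry into a position/orientation correction; that mapping is omitted here.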
SYSTEMS AND METHODS FOR GUIDING AN ULTRASOUND PROBE
An ultrasound device (10) includes a probe (12) including a tube (14) sized for insertion into a patient and an ultrasound transducer (18) disposed at a distal end (16) of the tube. A camera (20) is mounted at the distal end of the tube in a fixed spatial relationship to the ultrasound transducer. At least one electronic processor (28) is programmed to: control the ultrasound transducer and the camera to acquire ultrasound images (19) and camera images (21) respectively while the ultrasound transducer is disposed in vivo inside the patient; and construct a keyframe (36) representative of an in vivo position of the ultrasound transducer including at least ultrasound image features (38) extracted from at least one of the ultrasound images acquired at the in vivo position of the ultrasound transducer and camera image features (40) extracted from one of the camera images acquired at the in vivo position of the ultrasound transducer.
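A keyframe combining ultrasound and camera features might look like the following minimal sketch. The field names, feature representation (plain vectors), and the nearest-keyframe distance are all assumptions for illustration, not the patent's implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Keyframe:
    """Illustrative keyframe: feature vectors from the co-located
    ultrasound and camera images at one in vivo probe position."""
    us_features: np.ndarray
    cam_features: np.ndarray

def closest_keyframe(keyframes, us_feat, cam_feat):
    """Return the index of the stored keyframe whose combined feature
    distance to the current observation is smallest."""
    def dist(kf):
        return (np.linalg.norm(kf.us_features - us_feat)
                + np.linalg.norm(kf.cam_features - cam_feat))
    return min(range(len(keyframes)), key=lambda i: dist(keyframes[i]))
```

Matching a live observation against stored keyframes is one way such a representation could support relocalizing the transducer at a previously visited position.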
ESTIMATING STRAIN ON TISSUE USING 4D ULTRASOUND CATHETER
A medical system includes an ultrasound probe configured for insertion into an organ of a body, and a processor. The probe includes a two-dimensional (2D) ultrasound transducer array, and a sensor configured to output signals indicative of a position, direction and orientation of the 2D ultrasound transducer array inside the organ. The processor is configured to (a) using the signals output by the sensor, register multiple ultrasound images of a tissue region, acquired over a given time duration by the 2D ultrasound transducer array, with one another, (b) estimate, based on the ultrasound images acquired over the given time duration, three-dimensional displacements as a function of time for one or more locations in the tissue region, (c) estimate respective mechanical strains of the one or more locations in the tissue region, based on the three-dimensional displacements, and (d) present a time-dependent rendering of the mechanical strains to a user.
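The displacement-to-strain step (c) can be sketched for a single tissue segment. This is a minimal engineering-strain calculation between two tracked locations, assuming the registered images yield per-frame 3-D displacement vectors; the patent's full strain-field estimation is not shown.

```python
import numpy as np

def strain_between(p_ref, q_ref, p_disp, q_disp):
    """Engineering strain of the tissue segment between two tracked
    locations p and q: change in separation over reference separation.
    p_disp and q_disp are (frames x 3) displacement arrays over time."""
    l0 = np.linalg.norm(q_ref - p_ref)
    lengths = np.linalg.norm((q_ref + q_disp) - (p_ref + p_disp), axis=1)
    return (lengths - l0) / l0  # one strain value per time sample
```

The returned time series is exactly the kind of quantity a time-dependent strain rendering would color-code per location.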
Apparatus and method for determining motion of an ultrasound probe
A method of determining a three-dimensional motion of a movable ultrasound probe (10) is described. The method is carried out during acquisition of an ultrasound image of a volume portion (2) by the ultrasound probe. The method comprises receiving a stream of ultrasound image data (20) from the ultrasound probe (10) while the ultrasound probe is moved along the volume portion (2); inputting at least a sub-set of the ultrasound image data (20, 40) representing a plurality of ultrasound image frames (22) into a machine-learning module (50), wherein the machine-learning module (50) has been trained to determine the relative three-dimensional motion between ultrasound image frames (22); and determining, by the machine-learning module (50), a three-dimensional motion indicator (60) indicating the relative three-dimensional motion between the ultrasound image frames.
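As a classical stand-in for the trained machine-learning module, the in-plane component of inter-frame motion can be estimated by phase correlation. This sketch recovers only integer 2-D translation, not the full 3-D motion indicator the patent describes.

```python
import numpy as np

def inplane_shift(frame_a, frame_b):
    """Estimate the integer in-plane translation of frame_b relative
    to frame_a by phase correlation: the normalized cross-power
    spectrum's inverse FFT peaks at the relative shift."""
    cross = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts beyond half the frame size into negative values
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A learned module would replace this with a network regressing all six motion parameters, including out-of-plane elevation, which phase correlation cannot observe.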
Three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging
For three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging, the three-dimensional segmentation is output by a machine-learnt multi-task generator. The machine-learnt multi-task generator is trained from 3D information, such as a sparse ICE volume assembled from the 2D ICE images. The machine-learnt multi-task generator is trained to output both the 3D segmentation and a complete volume. The 3D segmentation may be used to project to 2D as an input with an ICE image to another network trained to output a 2D segmentation for the ICE image. Display of the 3D segmentation and/or 2D segmentation may guide ablation of tissue in the patient.
Ultrasound probe assembly and method using the same
The present disclosure provides an ultrasonic probe assembly and a method using the same. The ultrasonic probe assembly includes: a handle and a probe body separable from the handle; wherein the handle is configured to control movement of the probe body in a body of an examinee; the probe body includes an ultrasonic component for emitting ultrasonic waves to the body of the examinee and receiving reflected ultrasonic waves to generate examination information, and a driving component for moving the ultrasonic component so as to change the direction of the ultrasonic waves emitted by the ultrasonic component.
POINT-OF-CARE ULTRASOUND (POCUS) SCAN ASSISTANCE AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS
Ultrasound image devices, systems, and methods are provided. An ultrasound imaging system comprising a processor circuit in communication with an ultrasound probe comprising a transducer array, wherein the processor circuit is configured to receive, from the ultrasound probe, a first image of a patient's anatomy; detect, from the first image, a first anatomical landmark at a first location along a scanning trajectory of the patient's anatomy; determine, based on the first anatomical landmark, a steering configuration for steering the ultrasound probe towards a second anatomical landmark at a second location along the scanning trajectory; and output, to a display in communication with the processor circuit, an instruction based on the steering configuration to steer the ultrasound probe towards the second anatomical landmark at the second location.
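The landmark-to-steering step can be sketched as follows, assuming (purely for illustration) that both landmark locations are known in a common probe coordinate frame and that the instruction reduces to a direction and distance.

```python
import numpy as np

def steering_instruction(current_lm, next_lm):
    """Given the detected landmark's location and the next landmark's
    expected location along the scanning trajectory, return a unit
    steering direction and the distance to travel."""
    delta = np.asarray(next_lm, float) - np.asarray(current_lm, float)
    dist = float(np.linalg.norm(delta))
    return delta / dist, dist
```

The displayed instruction would then render this direction/distance pair as user-readable guidance (e.g., an arrow overlay).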
ULTRASONIC DIAGNOSIS SYSTEM
An ultrasonic diagnosis system includes a reaction force detection sensor that detects a reaction force acting on an ultrasonic probe when the ultrasonic probe is pressed against a body surface of a subject. During the ultrasonic diagnosis, the system estimates the push-in amount of the ultrasonic probe with respect to the subject using the reaction force detection sensor, and outputs a display or a warning of the estimated push-in amount of the ultrasonic probe to the output device.
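The force-to-depth estimate and the display/warning logic can be sketched as below. The linear (Hookean) tissue model and the warning threshold are illustrative assumptions; real tissue response is nonlinear and the patent does not specify either.

```python
def push_in_depth(reaction_force_n, tissue_stiffness_n_per_mm):
    """Estimate probe push-in depth (mm) from the sensed reaction
    force (N) using a linear tissue-stiffness model -- an
    illustrative assumption only."""
    return reaction_force_n / tissue_stiffness_n_per_mm

def check_push_in(depth_mm, warn_mm=10.0):
    """Return a warning string when the estimated push-in exceeds a
    hypothetical safety threshold, else a plain display string."""
    if depth_mm > warn_mm:
        return f"WARNING: push-in {depth_mm:.1f} mm exceeds {warn_mm:.1f} mm"
    return f"push-in {depth_mm:.1f} mm"
```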
Real-time 3-D ultrasound reconstruction of knee and its implications for patient specific implants and 3-D joint injections
Methods and apparatus for treating a patient. The method includes acquiring a plurality of radio frequency (RF) signals with an ultrasound transducer, each RF signal representing one or more return echoes from a scan line of a pulse-mode echo ultrasound scan. A position of the ultrasound transducer corresponding to each of the acquired RF signals is determined, and a plurality of contour lines is generated from the plurality of RF signals. The method estimates a 3-D shape and position of an anatomical feature, such as a joint of the patient, based on the generated contour lines and corresponding ultrasound transducer positions. An apparatus, or computer, includes a processor and a memory with instructions that, when executed by the processor, perform the aforementioned method.
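The core geometric step, lifting per-frame contour points into a common 3-D frame using the tracked transducer positions, can be sketched as follows. Representing each transducer pose as a 4x4 homogeneous transform is an assumption; the patent's subsequent surface fitting is not shown.

```python
import numpy as np

def contours_to_pointcloud(contours_2d, probe_poses):
    """Lift per-frame 2-D contour points (image-plane coordinates, mm)
    into one 3-D point cloud using each frame's tracked transducer
    pose (4x4 homogeneous transform)."""
    cloud = []
    for pts, T in zip(contours_2d, probe_poses):
        # embed image-plane points at z = 0, then make homogeneous
        homog = np.c_[pts, np.zeros(len(pts)), np.ones(len(pts))]
        cloud.append((T @ homog.T).T[:, :3])
    return np.vstack(cloud)
```

Fitting a joint surface (e.g., for a patient-specific implant) would then operate on this merged point cloud.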
Fetal imaging system and method
An ultrasound fetal imaging system uses an acceleration sensor (16) for generating an acceleration signal relating to movement of the ultrasound transducer (10). A user is guided in how or where to move the ultrasound transducer based on the results of image processing of the ultrasound images. The user can thus be guided to move the transducer in a certain direction so as to achieve a complete scan of a fetus in the shortest possible time. This limits exposure of the expectant mother to the ultrasound energy. The fetal image obtained may be used to determine a fetal weight, for example using regression analysis based on some of the parameters derived from the obtained image.
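The regression-based weight estimate can be sketched as a log-linear model over image-derived biometric parameters, the common form of published fetal-weight formulas. The coefficients below are placeholders for illustration only, not a validated clinical formula.

```python
import numpy as np

def estimated_fetal_weight(params, coeffs, intercept):
    """Log-linear regression of estimated fetal weight (grams) on
    biometric parameters derived from the image (e.g., head and
    abdominal circumference, femur length). Coefficients are
    illustrative placeholders."""
    log10_w = intercept + float(np.dot(coeffs, params))
    return 10.0 ** log10_w
```

In practice the coefficients and parameter set would come from a published, validated regression, fitted on measured birth weights.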