Patent classifications
A61B8/5215
System and method for ultrasound analysis
An exemplary system, method and computer-accessible medium for detecting an anomaly(ies) in an anatomical structure(s) of a patient(s) can be provided, which can include, for example, receiving imaging information related to the anatomical structure(s) of the patient(s), classifying a feature(s) of the anatomical structure(s) based on the imaging information using a neural network(s), and detecting the anomaly(ies) based on data generated using the classification procedure. The imaging information can include at least three images of the anatomical structure(s).
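The classify-then-detect flow described above can be sketched as follows. The classifier `classify`, the "abnormal"/"normal" labels, and the majority-vote detection rule are illustrative assumptions; the abstract does not specify how classifications are combined into a detection:

```python
from collections import Counter

def detect_anomaly(images, classify):
    """Classify a feature in each image, then flag an anomaly by majority vote.

    `classify` stands in for the neural-network classifier described in the
    abstract; it maps one image to a label such as "normal" or "abnormal".
    At least three images of the anatomical structure are expected.
    """
    if len(images) < 3:
        raise ValueError("at least three images are required")
    labels = [classify(img) for img in images]
    label, count = Counter(labels).most_common(1)[0]
    return label == "abnormal" and count > len(labels) // 2

# Hypothetical stand-in classifier for illustration: brightness threshold.
fake_classify = lambda img: "abnormal" if sum(img) / len(img) > 0.5 else "normal"
print(detect_anomaly([[0.9, 0.8], [0.7, 0.9], [0.1, 0.2]], fake_classify))  # True
```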
Handheld three-dimensional ultrasound imaging system and method
Disclosed in the application is a handheld three-dimensional ultrasound imaging system and method, comprising a handheld ultrasound probe used for scanning and obtaining an ultrasound image, and a display, control and processing terminal connected to the handheld ultrasound probe by a wired or wireless connection. The handheld ultrasound imaging system and method of the application further comprise a handheld three-dimensional spatial positioning system that is connected to the handheld ultrasound probe, moves with the movement of the handheld ultrasound probe, is connected to the display, control and processing terminal by a wired or wireless connection, and is used for independently determining the three-dimensional position of the handheld ultrasound probe. By means of the handheld three-dimensional ultrasound imaging system and method of the present application, the large spatial positioning system in an existing three-dimensional ultrasound imaging system is replaced by a portable spatial positioning system that can be used at any time, so that handheld three-dimensional ultrasound imaging can be widely applied.
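The core computation that probe tracking enables is placing each 2D image-plane point into 3D space using the probe pose. A minimal sketch, assuming the positioning system reports a position vector and a rotation matrix, and assuming the image plane spans the probe's local lateral and depth axes (conventions not taken from the application):

```python
import numpy as np

def pixel_to_world(u_mm, v_mm, probe_pos, probe_rot):
    """Map one 2D image-plane point into 3D world coordinates.

    `probe_pos` (3-vector, mm) and `probe_rot` (3x3 rotation matrix) are the
    pose reported by the handheld spatial positioning system. The image plane
    is assumed to span the probe's local x (lateral) and z (depth) axes.
    """
    local = np.array([u_mm, 0.0, v_mm])  # point in the probe's own frame
    return probe_pos + probe_rot @ local

pose = np.array([10.0, 20.0, 0.0])
rot = np.eye(3)  # probe axes aligned with world axes
print(pixel_to_world(5.0, 30.0, pose, rot))  # [15. 20. 30.]
```

Accumulating such points over many tracked frames is what yields the 3D volume.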
Methods and apparatus for collection of ultrasound data
Aspects of the technology described herein relate to instructing an operator to move an ultrasound device along a predetermined path relative to an anatomical area in order to collect first ultrasound data and second ultrasound data, the first ultrasound data capable of being transformed into an ultrasound image of a target anatomical view, and the second ultrasound data not capable of being transformed into the ultrasound image of the target anatomical view.
System and method for determining a subject's muscle fuel level, muscle fuel rating, and muscle energy status
Provided is a non-invasive system and method for determining a fuel value for a target muscle and potentially at least one indicator muscle. The method includes receiving an ultrasound scan of a target muscle; evaluating at least a portion of the ultrasound scan to determine a fuel value within the target muscle; recording the determined fuel value for the muscle as an element of a data set for the muscle; evaluating the fuel data set to determine a value range; and, in response to the range being at least above a pre-determined threshold, establishing a target score for the muscle based on an upper portion of the value range. The method may be repeated to identify ranges for a plurality of muscles, the muscle with the greatest range being identified as an indicator muscle. Based on these findings, the muscle's estimated fuel level, fuel rating and energy status may be determined. An associated system is also disclosed.
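The range-threshold-score procedure above can be sketched directly. The threshold value and the "upper portion" rule (here, 90% of the range) are illustrative assumptions, not values from the disclosure:

```python
def evaluate_muscles(scans):
    """Establish target scores per muscle and pick the indicator muscle.

    `scans` maps muscle name -> recorded fuel values (one per ultrasound
    scan). A target score is set only when the value range exceeds a
    pre-determined threshold; the muscle with the greatest range is the
    indicator muscle. Threshold and upper-portion factor are illustrative.
    """
    MIN_RANGE = 5.0                      # assumed pre-determined threshold
    results, best = {}, (None, -1.0)
    for muscle, values in scans.items():
        rng = max(values) - min(values)
        if rng >= MIN_RANGE:
            # target score based on the upper portion of the value range
            results[muscle] = round(min(values) + 0.9 * rng, 2)
        if rng > best[1]:
            best = (muscle, rng)         # greatest range -> indicator muscle
    return results, best[0]

scores, indicator = evaluate_muscles({
    "quadriceps": [40.0, 55.0, 60.0],    # range 20
    "calf": [48.0, 50.0],                # range 2, below threshold
})
print(scores, indicator)  # {'quadriceps': 58.0} quadriceps
```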
STEERABLE MULTI-PLANE ULTRASOUND IMAGING SYSTEM
A steerable multi-plane ultrasound imaging system (MPUIS) for steering a plurality of intersecting image planes (PL.sub.1 . . . n) of a beamforming ultrasound imaging probe (BUIP) based on ultrasound signals transmitted between the beamforming ultrasound imaging probe (BUIP) and an ultrasound transducer (S) disposed within a field of view (FOV) of the probe (BUIP). An ultrasound tracking system (UTS) causes the beamforming ultrasound imaging probe (BUIP) to adjust an orientation of the first image plane (PL.sub.1) such that the first image plane passes through a position (POS) of the ultrasound transducer (S) by maximizing a magnitude of ultrasound signals transmitted between the beamforming ultrasound imaging probe (BUIP) and the ultrasound transducer (S). An orientation of a second image plane (PL.sub.2) is adjusted such that an intersection (AZ) between the first image plane and the second image plane passes through the position of the ultrasound transducer (S).
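The magnitude-maximization step can be illustrated as a sweep over candidate plane orientations. The `signal_magnitude` function and the discrete angle sweep are stand-ins for the tracking system's actual search strategy, which the abstract does not detail:

```python
def align_plane(angles, signal_magnitude):
    """Pick the plane orientation maximizing the tracked-transducer signal.

    `signal_magnitude(angle)` stands for the measured strength of ultrasound
    signals exchanged between the probe and the in-body transducer at a given
    image-plane tilt; keeping the maximizing angle is one simple way to make
    the plane pass through the transducer position.
    """
    return max(angles, key=signal_magnitude)

# Illustrative model: magnitude peaks when the plane hits the transducer at 12 deg.
mag = lambda a: 1.0 / (1.0 + (a - 12.0) ** 2)
best = align_plane(range(-30, 31), mag)
print(best)  # 12
```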
Flow imaging processing method and ultrasound imaging device
Embodiments of the present disclosure provide a flow imaging processing method, which may include determining flow imaging parameters, where the flow imaging parameters include a sound speed for calculation, a center frequency of the transmitting pulse for exciting a probe and an imaging depth; obtaining a velocity measurement range; and determining a first target number of different transmit angles according to the sound speed for calculation, the center frequency of the transmitting pulse, the imaging depth and the velocity measurement range. The embodiments of the present disclosure also provide an ultrasound imaging device.
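How these four parameters can bound the number of transmit angles is illustrated by a standard plane-wave Doppler argument (assumed here; the disclosure's exact rule is not given): one firing takes 2·depth/c seconds, so the maximum PRF is c/(2·depth); interleaving N angles divides the per-angle PRF by N, and the Nyquist velocity is c·PRF/(4·f0), which yields an upper bound on N:

```python
import math

def num_transmit_angles(c, f0, depth, v_max):
    """Estimate how many transmit angles fit within Doppler constraints.

    c: sound speed for calculation (m/s); f0: transmit-pulse center
    frequency (Hz); depth: imaging depth (m); v_max: velocity measurement
    range (m/s). The bound follows from PRF_max = c/(2*depth) and
    v_nyquist = c*(PRF_max/N)/(4*f0), solved for N.
    """
    prf_max = c / (2.0 * depth)
    n = c * prf_max / (4.0 * f0 * v_max)
    return max(1, math.floor(n))

# c = 1540 m/s, 5 MHz pulse, 4 cm depth, 0.3 m/s velocity range
print(num_transmit_angles(1540.0, 5e6, 0.04, 0.3))  # 4
```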
MEDICAL IMAGE DIAGNOSIS APPARATUS, MEDICAL INFORMATION PROCESSING APPARATUS, AND MEDICAL IMAGE PROCESSING METHOD
A medical image diagnosis apparatus according to an embodiment includes a processing circuit. The processing circuit is configured to obtain three-dimensional data related to a target site. The processing circuit is configured to generate a three-dimensional model of the target site by using the obtained three-dimensional data. The processing circuit is configured to calculate positions of one or more recommended cross-sections to be set for the target site, on the basis of information about the size of the target site obtained by using the three-dimensional model. The processing circuit is configured to cause a display device to display the positions of the one or more recommended cross-sections.
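One plausible size-based placement rule, sketched for illustration (the abstract does not specify the actual calculation), is to space the recommended cross-sections evenly along the target site's measured length:

```python
def recommended_sections(length_mm, n=3):
    """Place n recommended cross-section positions along the target site.

    `length_mm` is the site's length obtained from the 3D model; evenly
    spaced fractions of that length are an illustrative rule, not the
    embodiment's actual calculation.
    """
    return [round(length_mm * (i + 1) / (n + 1), 2) for i in range(n)]

print(recommended_sections(80.0))  # [20.0, 40.0, 60.0]
```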
High volume rate 3D ultrasonic diagnostic imaging
A 3D ultrasonic diagnostic imaging system produces 3D display images at a 3D frame rate of display which is equal to the acquisition rate of a 3D image dataset. The volumetric region being imaged is sparsely sub-sampled by separated scanning beams. Spatial locations between the beams are filled in with interpolated values or interleaved with acquired data values from other 3D scanning intervals depending upon the existence of motion in the image field. A plurality of different beam scanning patterns are used, different ones of which have different spatial locations where beams are located and beams are omitted. In a preferred embodiment the determination of motion and the consequent decision to use interpolated or interleaved data for display is determined on a pixel-by-pixel basis.
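The per-pixel choice between interpolation and interleaving can be sketched on a 2D slice. The motion threshold and the neighbour-mean interpolator are illustrative assumptions; the patent leaves the detector and interpolator unspecified:

```python
import numpy as np

def fill_sparse(sparse_now, dense_prev, acquired_mask, motion_thresh=10.0):
    """Fill beam-omitted pixels of a sparsely sub-sampled frame.

    Where local motion is detected (acquired pixels changed versus the
    previous frame), missing pixels are interpolated from newly acquired
    neighbours; where the field is static, data from the previous frame is
    interleaved instead, decided pixel by pixel.
    """
    diff = np.zeros_like(dense_prev)
    diff[acquired_mask] = np.abs(sparse_now[acquired_mask] - dense_prev[acquired_mask])
    motion = diff > motion_thresh
    out = sparse_now.copy()
    for y, x in zip(*np.where(~acquired_mask)):
        ys, xs = slice(max(0, y - 1), y + 2), slice(max(0, x - 1), x + 2)
        vals = sparse_now[ys, xs]
        vals = vals[~np.isnan(vals)]
        if motion[ys, xs].any() and vals.size:
            out[y, x] = vals.mean()        # motion: interpolate new beam data
        else:
            out[y, x] = dense_prev[y, x]   # static: interleave previous data
    return out
```

A static scene therefore reuses old samples at full fidelity, while moving regions stay temporally consistent at the cost of spatial interpolation.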
AUTOMATED IMAGE ANALYSIS FOR DIAGNOSING A MEDICAL CONDITION
Aspects of the technology described herein relate to techniques for guiding an operator to use an ultrasound device. Thereby, operators with little or no experience operating ultrasound devices may capture medically relevant ultrasound images and/or interpret the contents of the obtained ultrasound images. For example, some of the techniques disclosed herein may be used to identify a particular anatomical view of a subject to image with an ultrasound device, guide an operator of the ultrasound device to capture an ultrasound image of the subject that contains the particular anatomical view, and/or analyze the captured ultrasound image to identify medical information about the subject.
Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging
For three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging, the three-dimensional segmentation is output by a machine-learnt multi-task generator. Rather than the brute-force approach of training the generator from 2D ICE images to output a 2D segmentation, the generator is trained from 3D information, such as a sparse ICE volume assembled from the 2D ICE images. Where sufficient ground truth data is not available, computed tomography or magnetic resonance data may be used as the ground truth for the sample sparse ICE volumes. The generator is trained to output both the 3D segmentation and a complete volume (i.e., more voxels represented than in the sparse ICE volume). The 3D segmentation may be further used to project to 2D as an input with an ICE image to another network trained to output a 2D segmentation for the ICE image. Display of the 3D segmentation and/or 2D segmentation may guide ablation of tissue in the patient.
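The projection step that turns the 3D segmentation into a 2D input for the second network can be sketched minimally. Axis-aligned slicing is an illustrative stand-in for the true oblique ICE plane geometry:

```python
import numpy as np

def project_segmentation(seg3d, plane_axis, plane_index):
    """Reduce the 3D segmentation to the 2D slice coinciding with an ICE image.

    The resulting 2D mask can then be fed, together with the ICE image,
    into the network trained to output a 2D segmentation. Real ICE planes
    are oblique; the axis-aligned slice here is only illustrative.
    """
    return np.take(seg3d, plane_index, axis=plane_axis)

vol = np.zeros((4, 4, 4), dtype=int)
vol[1:3, 1:3, 1:3] = 1   # toy 3D segmentation of a small structure
mask2d = project_segmentation(vol, 2, 2)
print(mask2d.shape, mask2d.sum())  # (4, 4) 4
```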