Patent classifications
A61B6/5247
SYSTEMS AND METHODS FOR USING REGISTERED FLUOROSCOPIC IMAGES IN IMAGE-GUIDED SURGERY
A method performed by a computing system comprises receiving a fluoroscopic image of a patient anatomy while a portion of a medical instrument is positioned within the patient anatomy. The fluoroscopic image has a fluoroscopic frame of reference. The portion has a sensed position in an anatomic model frame of reference. The method further comprises identifying the portion in the fluoroscopic image and identifying an extracted position of the portion in the fluoroscopic frame of reference using the identified portion in the fluoroscopic image. The method further comprises registering the fluoroscopic frame of reference to the anatomic model frame of reference based on the sensed position of the portion and the extracted position of the portion.
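Registering two frames of reference from corresponding point pairs, as the abstract describes with sensed and extracted instrument positions, is conventionally done with a least-squares rigid alignment. The sketch below uses the Kabsch algorithm on hypothetical corresponding points; it illustrates the general technique, not the patent's specific method, and all point values are made up.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points via the Kabsch algorithm. src, dst: (N, 3) corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical extracted instrument positions in the fluoroscopic frame.
fluoro = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# Corresponding sensed positions in the anatomic-model frame:
# the same points under a known rotation and translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
model = fluoro @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_register(fluoro, model)
print(np.allclose(fluoro @ R.T + t, model))  # True
```

With three or more non-collinear correspondences, this recovers the rigid mapping between the two frames exactly in the noise-free case.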
SYSTEM AND METHOD FOR AUTOMATED TRANSFORM BY MANIFOLD APPROXIMATION
A system may transform sensor data from a sensor domain to an image domain using data-driven manifold learning techniques which may, for example, be implemented using neural networks. The sensor data may be generated by an image sensor, which may be part of an imaging system. Fully connected layers of a neural network in the system may be applied to the sensor data, applying an activation function to it. The activation function may be a hyperbolic tangent activation function. Convolutional layers may then be applied that convolve the output of the fully connected layers for high-level feature extraction. An output layer may be applied to the output of the convolutional layers to deconvolve the output and produce image data in the image domain.
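The described architecture (fully connected layers with tanh activations, followed by convolutional layers and an output layer) can be sketched as a plain NumPy forward pass. This is a toy illustration of the layer ordering only; the sizes, random weights, and kernels are hypothetical, and a real system would learn these parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """'Same'-size 2-D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def forward(sensor, W1, W2, k_conv, k_out, n):
    """Sensor domain -> image domain: fully connected layers with tanh
    activations, a convolutional feature-extraction layer, then an
    output layer producing image-domain data."""
    h1 = np.tanh(W1 @ sensor.ravel())       # FC layer 1 + tanh
    h2 = np.tanh(W2 @ h1).reshape(n, n)     # FC layer 2 + tanh, back to 2-D
    feat = conv2d(h2, k_conv)               # convolutional feature extraction
    return conv2d(feat, k_out)              # output layer -> image data

n = 8
sensor = rng.normal(size=(n, n))            # stand-in for raw sensor-domain data
W1 = rng.normal(scale=0.1, size=(n * n, n * n))
W2 = rng.normal(scale=0.1, size=(n * n, n * n))
k_conv = rng.normal(scale=0.1, size=(3, 3))
k_out = rng.normal(scale=0.1, size=(3, 3))
image = forward(sensor, W1, W2, k_conv, k_out, n)
print(image.shape)  # (8, 8)
```

The fully connected stage is what lets the network learn an arbitrary (manifold-approximating) mapping from sensor space before spatially local convolutions refine the result.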
Systems and methods for performing intraoperative image registration
Systems and methods are provided for performing intraoperative fusion of two or more volumetric image datasets via surface-based image registration. The volumetric image datasets are separately registered with intraoperatively acquired surface data, thereby fusing the two volumetric image datasets into a common frame of reference while avoiding the need for complex and time-consuming preoperative volumetric-to-volumetric image registration and fusion. The resulting fused image data may be processed to generate one or more images for use during surgical navigation.
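The fusion step described above, where each volume is registered to the intraoperative surface data rather than directly to the other volume, amounts to composing the two surface registrations. A minimal sketch with 4x4 homogeneous transforms, assuming the per-volume surface registrations are already known (the matrices below are invented for illustration):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply(T, pts):
    """Apply a 4x4 homogeneous transform to (N, 3) points."""
    pts_h = np.c_[pts, np.ones(len(pts))]
    return (pts_h @ T.T)[:, :3]

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical surface-based registrations: each volume's frame -> the
# intraoperative surface frame (obtained by surface matching, not shown).
T_ct_to_surf = make_transform(Rz(0.3), [10.0, 0.0, -5.0])
T_mr_to_surf = make_transform(Rz(-0.1), [0.0, 4.0, 2.0])

# Fusing: map MR-frame points into the CT frame through the common surface
# frame, with no direct volumetric-to-volumetric registration needed.
T_mr_to_ct = np.linalg.inv(T_ct_to_surf) @ T_mr_to_surf

pts_mr = np.array([[1.0, 2.0, 3.0]])
via_surface = apply(np.linalg.inv(T_ct_to_surf), apply(T_mr_to_surf, pts_mr))
direct = apply(T_mr_to_ct, pts_mr)
print(np.allclose(via_surface, direct))  # True
```

Either volume's frame (or the surface frame itself) can serve as the common frame of reference for navigation images.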
MEDICAL DEVICES AND METHODS THEREOF
The present disclosure provides medical devices and methods thereof. The medical device may include a housing, a positron emission tomography (PET) detector module, and a radio frequency (RF) coil. The housing may form a scanning tunnel for accommodating a subject. The PET detector module may be arranged along a circumference of the scanning tunnel. The RF coil may be arranged along the circumference of the scanning tunnel. The RF coil may include a first RF coil and a second RF coil. The first RF coil and the second RF coil may be disposed coaxially around an axial direction of the scanning tunnel. A projection of the second RF coil along a radial direction of the scanning tunnel may cover at least a portion of a gap of the first RF coil.
Probe and system and method for detecting radiation and magnetic activity from body tissue
A hand-held probe for measuring radiation or magnetic activity includes a handle having a longitudinal axis and a shaft portion adapted to be inserted into, or held above, a radiation- or magnetism-emitting source implanted within a patient's body or tissue of interest. The shaft portion includes a radiation or magnetic activity sensor configured to detect and measure radiation emitted from the radiation-emitting source or magnetic activity from a magnetic source; the radiation-emitting source may be an implanted seed or a radioisotope injected near a tumor site in the patient's body. The probe further includes a signal processing device for further processing the measured radiation or magnetic activity, and a communication medium to exchange data between the hand-held probe and an external data processor unit.
Tumor position determination
A computer-implemented tumor position determining model is trained, based on a plurality of sets of image data, to determine a subsequent position of a tumor in a subject based on a subsequent 2D or 3D representation of a surface of the subject, an initial image of the tumor in the subject and an initial 2D or 3D representation of a surface of the subject. Each set of image data comprises an initial training image of a tumor in a subject, an initial training 2D or 3D representation of a surface of the subject, a subsequent training image of the tumor in the subject and a subsequent training 2D or 3D representation of a surface of the subject. The subsequent training image and the subsequent training 2D or 3D representation are taken at a later point in time than the initial training image and the initial training 2D or 3D representation, and the plurality of sets of image data are from a plurality of different subjects.
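The supervised setup described, learning a mapping from an initial tumor position and a surface change to the subsequent tumor position, can be illustrated with a deliberately simple stand-in model. The sketch below fits a linear regression on synthetic multi-subject data in which the tumor follows half of the surface displacement; every number and the linear-model choice are assumptions for illustration, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training sets, one per subject: initial tumor position,
# initial and subsequent surface representations (summarised here as a
# single surface landmark each), and subsequent tumor position.
n_subjects = 200
tumor0 = rng.uniform(-1, 1, size=(n_subjects, 3))
surf0 = rng.uniform(-1, 1, size=(n_subjects, 3))
surf1 = surf0 + rng.normal(scale=0.2, size=(n_subjects, 3))
# Synthetic ground truth: tumor moves by half the surface displacement.
tumor1 = tumor0 + 0.5 * (surf1 - surf0)

# Features: initial tumor position, surface displacement, bias term.
X = np.c_[tumor0, surf1 - surf0, np.ones(n_subjects)]
W, *_ = np.linalg.lstsq(X, tumor1, rcond=None)  # stand-in "trained model"

def predict(t0, s0, s1):
    """Predict the subsequent tumor position for a new subject."""
    x = np.r_[t0, s1 - s0, 1.0]
    return x @ W

pred = predict(np.array([0.1, 0.2, 0.3]),
               np.array([0.0, 0.0, 0.0]),
               np.array([0.4, 0.0, 0.0]))
print(np.allclose(pred, [0.3, 0.2, 0.3], atol=1e-6))  # True
```

The point of training across many subjects, as the abstract states, is that the learned surface-to-tumor correlation generalises to a new subject from whom only surface data is subsequently acquired.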
Intraoperative Ultrasound Probe System and Related Methods
An intraoperative ultrasound imaging system and method capable of using ultrasound imaging to safely place a surgical access instrument (e.g., guide wire, dilator, cannula) through a tissue (e.g., muscle, fat, brain, liver, lung) without damaging nearby neurovascular structures is described herein. The intraoperative ultrasound system includes an ultrasound probe assembly configured for emitting and receiving ultrasound waves and a computer system including a processor and a display unit. Once the probe is in position, ultrasound imaging is performed: the computer receives RF data from the probe and causes a B-mode image of the visible anatomical structures (e.g., muscle, bone) to be displayed on the display unit.
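Forming a B-mode image from RF data, as in the last step above, is conventionally done by envelope detection followed by log compression. A minimal NumPy sketch on synthetic RF lines, using an FFT-based analytic signal for the envelope; the sampling rate, pulse, and dynamic range are all hypothetical:

```python
import numpy as np

def envelope(rf):
    """Envelope of RF lines via the analytic signal (FFT-based Hilbert)."""
    n = rf.shape[-1]
    F = np.fft.fft(rf, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(F * h, axis=-1))

def bmode(rf, dynamic_range_db=60.0):
    """Log-compressed B-mode image from a 2-D array of RF scan lines."""
    env = envelope(rf)
    env = env / env.max()                   # normalise to peak
    img = 20.0 * np.log10(env + 1e-12)      # dB scale
    return np.clip(img, -dynamic_range_db, 0.0)

# Synthetic RF data: 16 identical scan lines of a Gaussian-modulated pulse.
t = np.arange(512) / 20e6                   # 20 MHz sampling (hypothetical)
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 8e-6) ** 2) / (1e-6) ** 2)
rf = np.tile(pulse, (16, 1))
img = bmode(rf)
print(img.shape)  # (16, 512)
```

Log compression is what maps the large dynamic range of echo amplitudes into a displayable grayscale range.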
System and methods for navigating interventional instrumentation
An image guided surgical system includes a marker attachable to and removable from an elongated surgical tool having a shaft; at least one camera; and an image processing system, in communication with the camera, configured to obtain an image of the surgical tool. The image processing system is configured to operate in a calibration mode to generate a template and display the template on a display device and to receive a user input, after the image of the surgical tool is aligned to the template, to adjust a length of the template to substantially match a length of the surgical tool. A storage device in communication with the image processing system is included to store calibration information that associates a position of the marker with a position of the tip of the shaft of the surgical tool based on the adjusted length of the template.
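The stored calibration effectively lets the system infer the unseen tool tip from the tracked marker pose and the template-derived length. A toy sketch of that final inference, assuming (hypothetically) that the shaft runs along the marker's local -z axis; the poses and lengths are invented:

```python
import numpy as np

def tip_position(marker_pos, marker_R, tool_length):
    """Tool-tip position from the tracked marker pose and the length
    calibrated via the template adjustment. Assumes the shaft points
    along the marker's local -z axis (an illustrative convention)."""
    shaft_dir = marker_R @ np.array([0.0, 0.0, -1.0])
    return marker_pos + tool_length * shaft_dir

marker_pos = np.array([100.0, 50.0, 200.0])   # mm, camera frame (hypothetical)
marker_R = np.eye(3)                          # marker aligned with camera axes
calibrated_length = 180.0                     # mm, from template matching
tip = tip_position(marker_pos, marker_R, calibrated_length)
print(tip)  # tip at [100, 50, 20]
```

Because the marker is removable, storing this association per tool is what allows re-attachment without repeating the full calibration.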
IMAGING-BASED SIZING OPTIMIZATION OF ENDOTRACHEAL TUBE FOR MECHANICAL VENTILATION
An intubation assistance device includes an electronic controller configured to: identify, from one or more images of a patient, information about the patient including at least a diameter of a trachea and a length of an intubation pathway; determine a recommended endotracheal tube (ETT) size, including an ETT diameter and an ETT depth of insertion, from the determined diameter of the trachea and the determined length of the intubation pathway; and display the recommended ETT size on a display device.
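A selection rule of the kind described, choosing a tube size and insertion depth from two image-derived measurements, can be sketched as a simple lookup. The size table, safety margin, and tip offset below are invented for illustration and are not clinical guidance or the patent's actual rule:

```python
# Hypothetical ETT sizing rule: pick the largest standard tube whose outer
# diameter leaves a safety margin inside the measured trachea, and derive
# insertion depth from the measured pathway length.

STANDARD_ETT = [  # (inner diameter mm, outer diameter mm) - illustrative
    (6.0, 8.2), (6.5, 8.8), (7.0, 9.6), (7.5, 10.2), (8.0, 10.9),
]

def recommend_ett(trachea_diameter_mm, pathway_length_mm,
                  margin=0.9, tip_offset_mm=30.0):
    """Return (ETT inner diameter, depth of insertion) from image measurements."""
    fitting = [s for s in STANDARD_ETT if s[1] <= margin * trachea_diameter_mm]
    if not fitting:
        raise ValueError("no standard tube fits the measured trachea")
    inner, _ = max(fitting)                     # largest tube that fits
    depth = pathway_length_mm - tip_offset_mm   # keep tip short of the carina
    return inner, depth

print(recommend_ett(12.0, 240.0))  # (7.5, 210.0)
```

The display step of the abstract would then simply present this pair to the clinician.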
MULTI-SCAN IMAGE PROCESSING
A framework for multi-scan image processing. A single real anatomic image of a region of interest is first acquired. One or more emission images of the region of interest are also acquired. One or more synthetic anatomic images may be generated based on the one or more emission images. One or more deformable registrations of the real anatomic image to the one or more synthetic anatomic images are performed to generate one or more registered anatomic images. Attenuation correction may then be performed on the one or more emission images using the one or more registered anatomic images to generate one or more attenuation corrected emission images.
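The final step of the pipeline, attenuation-correcting emission data using a registered anatomic image, can be illustrated in one dimension. The sketch below assumes the earlier steps (synthetic-image generation and deformable registration) are already done and uses a toy parallel-beam geometry with invented coefficients:

```python
import numpy as np

def attenuation_correct(emission, mu_map, dx=1.0):
    """Divide emission counts by the attenuation survival factor
    exp(-integral of mu along the path), approximated here by a
    cumulative sum along one axis (toy parallel-beam geometry)."""
    path_integral = np.cumsum(mu_map, axis=-1) * dx
    survival = np.exp(-path_integral)
    return emission / survival

# Registered attenuation coefficients (1/unit length), hypothetical values.
mu = np.full(8, 0.05)
true_activity = np.ones(8)
# Simulated measurement: deeper activity is attenuated more.
measured = true_activity * np.exp(-np.cumsum(mu))
corrected = attenuation_correct(measured, mu)
print(np.allclose(corrected, true_activity))  # True
```

This is why the deformable registration matters: the attenuation map must align with the emission data for the division to restore the true activity rather than introduce artifacts.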