Systems and methods for performing intraoperative image registration

Systems and methods are provided for performing intraoperative fusion of two or more volumetric image datasets via surface-based image registration. The volumetric image datasets are separately registered with intraoperatively acquired surface data, thereby fusing the two volumetric image datasets into a common frame of reference while avoiding the need for complex and time-consuming preoperative volumetric-to-volumetric image registration and fusion. The resulting fused image data may be processed to generate one or more images for use during surgical navigation.
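
As a rough illustration of the fusion step described above, assuming each volumetric dataset has already been registered to the intraoperative surface scan (for example by an ICP-style surface fit) and that those registrations are available as 4x4 homogeneous matrices, the volume-to-volume relationship falls out by composition. The names below are hypothetical, not the patent's interface:

```python
# Minimal sketch (not the patented implementation): fusing two volumetric
# datasets by composing their independent surface registrations.
# T_a_to_surf, T_b_to_surf: 4x4 homogeneous transforms mapping each volume's
# coordinates into the intraoperative surface (common) frame.
import numpy as np

def compose_fusion_transform(T_a_to_surf: np.ndarray,
                             T_b_to_surf: np.ndarray) -> np.ndarray:
    """Transform mapping volume A coordinates into volume B coordinates,
    obtained without any direct A<->B volumetric registration."""
    return np.linalg.inv(T_b_to_surf) @ T_a_to_surf

def to_common_frame(points_xyz: np.ndarray, T_vol_to_surf: np.ndarray) -> np.ndarray:
    """Map Nx3 points from one volume's frame into the shared (surface) frame."""
    homog = np.c_[points_xyz, np.ones(len(points_xyz))]   # Nx4 homogeneous points
    return (T_vol_to_surf @ homog.T).T[:, :3]
```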

Systems and methods for laparoscopic planning and navigation
11547481 · 2023-01-10 · ·

A method for performing a surgical procedure includes generating, by a computing device, an anatomical map of a patient from a plurality of images; positioning a trocar obturator adjacent to the patient; calculating, by the computing device, a projected path of the trocar obturator; overlaying, by the computing device, the projected path of the trocar obturator with the anatomical map of the patient; and displaying the projected path of the trocar obturator and the anatomical map of the patient on a display device to define an augmented image.
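
A minimal sketch of the projected-path overlay, assuming the trocar obturator's tip position and axis are available from tracking and that a tracker-to-image transform exists from a prior registration step; function and variable names are illustrative only:

```python
# Illustrative sketch only: projecting a tracked trocar obturator's axis as a
# straight path and rasterising it into an overlay on the anatomical label map.
# Assumes image axes ordered (x, y, z) with isotropic voxels of size voxel_mm.
import numpy as np

def projected_path(tip_mm: np.ndarray, direction: np.ndarray,
                   length_mm: float = 150.0, step_mm: float = 1.0) -> np.ndarray:
    """Sample points along the obturator axis starting at the tip (tracker frame)."""
    d = direction / np.linalg.norm(direction)
    steps = np.arange(0.0, length_mm, step_mm)
    return tip_mm[None, :] + steps[:, None] * d[None, :]      # Nx3

def overlay_path(volume_labels: np.ndarray, path_mm: np.ndarray,
                 T_tracker_to_image: np.ndarray, voxel_mm: float = 1.0,
                 path_label: int = 99) -> np.ndarray:
    """Mark path voxels in a copy of the anatomical label volume."""
    out = volume_labels.copy()
    homog = np.c_[path_mm, np.ones(len(path_mm))]
    ijk = np.rint((T_tracker_to_image @ homog.T).T[:, :3] / voxel_mm).astype(int)
    inside = np.all((ijk >= 0) & (ijk < np.array(out.shape)), axis=1)
    out[tuple(ijk[inside].T)] = path_label                    # distinct overlay label
    return out
```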

POSITIONING SYSTEM REGISTRATION USING MECHANICAL LINKAGES
20230210604 · 2023-07-06

A positioning system includes a group of positioning devices including a first device comprising a first positioning source associated with a first positioning modality, the first positioning source being configured to view a first field, a second device comprising a second positioning source associated with a second positioning modality that is of a different type than the first positioning modality, the second positioning source being configured to view a second field, a third device comprising one or more first markers detectable within the first field using the first positioning modality, and a fourth device comprising one or more second markers detectable within the second field using the second positioning modality. A linking structure physically links two of the group of positioning devices to one another in a fixed, rigid relative position and orientation.
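
A short sketch of why the fixed, rigid linking structure enables registration: the link contributes a known, constant transform, so a pose measured by one positioning source can be chained into the other source's frame. All matrix names are assumptions:

```python
# Rough sketch under assumed conventions: the rigid linking structure provides a
# fixed transform between the two positioning sources, so poses measured with the
# second modality can be expressed in the first modality's frame by chaining.
import numpy as np

def chain(T_source1_to_link: np.ndarray,
          T_link_to_source2: np.ndarray,
          T_source2_to_marker: np.ndarray) -> np.ndarray:
    """Pose of a marker seen by source 2, expressed in source 1's frame
    (all inputs are 4x4 homogeneous transforms)."""
    return T_source1_to_link @ T_link_to_source2 @ T_source2_to_marker
```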

Assisting apparatus for assisting a user during an interventional procedure
11690676 · 2023-07-04

The invention relates to an assisting apparatus (2) for assisting a user in moving an insertion element (11), such as a catheter, to a target element within, for instance, a person (8). A target element representation, representing the target element within the object at its three-dimensional position and orientation and with its size, is generated based on a provided target element image. Moreover, a three-dimensional position of the insertion element is tracked while the insertion element is moved to the target element, and the target element representation and the tracked position of the insertion element are displayed. The three-dimensional position and orientation of the target element relative to the actual position of the insertion element can therefore be shown to the user while the insertion element is moved to the target element, which allows the user to move the insertion element to the target element more accurately and more quickly.
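
A minimal sketch, assuming the target element's pose and the tracked tip position are available in a common world frame, of the quantity such a display typically presents: the insertion element's tip expressed relative to the target element. Names are illustrative, not the apparatus's actual interface:

```python
# Illustrative only: express the tracked insertion-element tip in the target
# element's own coordinate frame, so its offset and approach direction can be shown.
import numpy as np

def tip_in_target_frame(tip_world: np.ndarray,
                        R_target: np.ndarray, p_target: np.ndarray) -> np.ndarray:
    """R_target, p_target: target element orientation (3x3) and position (3,) in world."""
    return R_target.T @ (tip_world - p_target)
```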

Alignment precision

Alignment precision technology, in which a system accesses image data of a bone to which a reference marker array is fixed. The system generates a three-dimensional representation of the bone and the reference markers, defines a coordinate system for the three-dimensional representation, and determines locations of the reference markers relative to the coordinate system. The system accesses intra-operative image data that includes the bone and a mobile marker array that is attached to an instrument used in a surgical procedure. The system co-registers the intra-operative image data with the three-dimensional representation by matching the reference markers included in the intra-operative image data to the locations of the reference markers. The system determines locations of the mobile markers in the co-registered image and determines a three-dimensional spatial position and orientation of the instrument relative to the bone.
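
One standard way to realize the described co-registration (not necessarily the patented method) is a rigid least-squares fit between the reference-marker locations in the three-dimensional representation and in the intra-operative image, for example via the Kabsch/SVD algorithm sketched below; the instrument's pose relative to the bone then follows by chaining the fitted transform with the mobile-marker pose:

```python
# Sketch of a rigid point-set fit (Kabsch/SVD) between corresponding marker
# positions; names and conventions here are assumptions, not the patent's API.
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Find R (3x3) and t (3,) minimising ||R @ src_i + t - dst_i|| over
    corresponding Nx3 point sets (e.g. model markers -> intra-operative markers)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```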

Image processing system and method

A system for image processing (IPS), in particular for lung imaging. The system (IPS) comprises an interface (IN) for receiving at least a part of a 3D image volume (VL) acquired by an imaging apparatus (IA1) of a lung (LG) of a subject (PAT) by exposing the subject (PAT) to a first interrogating signal. A layer definer (LD) of the system (IPS) is configured to define, in the 3D image volume, a layer object (LO) that includes a representation of a surface (S) of the lung (LG). A renderer (REN) of the system (IPS) is configured to render at least a part of the layer object (LO) in 3D at a rendering view (V.sub.p) for visualization on a display device (DD).
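
A hypothetical sketch of the "layer object" notion, assuming a binary lung mask has been segmented from the 3D volume: keep only the voxels within a few voxels of the lung surface. SciPy's morphology operators are used purely for illustration and are not stated in the abstract:

```python
# Illustrative only: build a shell straddling the lung surface from a binary mask.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def surface_layer(lung_mask: np.ndarray, thickness_vox: int = 3) -> np.ndarray:
    """Boolean mask of a layer around the lung surface.
    lung_mask: boolean 3D array (True inside the lung)."""
    outer = binary_dilation(lung_mask, iterations=thickness_vox)
    inner = binary_erosion(lung_mask, iterations=thickness_vox)
    return outer & ~inner
```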

Navigation system for and method of tracking the position of a work target

Navigation system and method for tracking movement of a patient during surgery. Image data is acquired by imaging the patient with a base layer of a skin-based patient tracking apparatus secured to the patient's skin. The skin-based patient tracking apparatus includes a plurality of optical surgical tracking elements. A computer processor arrangement is adapted to implement a navigation routine. The patient position is registered to the image data. The movement of the patient is tracked based on movement of the plurality of optical surgical tracking elements. The movement of the patient's skin is tracked by determining positions of the optical surgical tracking elements both before and after a deformation of the skin-based patient tracking apparatus. Movement of the patient's skin results in corresponding movement of the surgical tracking elements to provide a dynamic reference frame for use in continuously tracking movement of a patient's skin during surgery.
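
A loose sketch, not the patent's algorithm, of how the before/after positions of the skin-mounted tracking elements can be used: a rigid fit (as in the earlier Kabsch sketch) captures bulk patient motion, and the per-element residuals indicate local deformation of the skin-based dynamic reference frame. Names are assumptions:

```python
# Illustrative only: residual per-element motion after removing a fitted bulk
# rigid transform (R, t) estimated from the same element positions.
import numpy as np

def skin_motion(before: np.ndarray, after: np.ndarray,
                R: np.ndarray, t: np.ndarray):
    """before, after: Nx3 tracked element positions; returns per-element residual
    displacement (mm) and its maximum, as a simple deformation indicator."""
    predicted = before @ R.T + t
    residual = np.linalg.norm(after - predicted, axis=1)
    return residual, residual.max()
```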

HYBRID MULTI-CAMERA TRACKING FOR COMPUTER-GUIDED SURGICAL NAVIGATION
20220409287 · 2022-12-29

The invention relates to a camera system for surgical navigation systems comprising a plurality of cameras mounted in a room. At least three cameras are mounted in the room and are operated in at least two different modes: in the first mode, at least a subset of the cameras is operated to determine the positions of markers, and in the second mode, at least a subset of the cameras is operated to determine the positions of surfaces in the room.

METHOD AND SYSTEM FOR REPRODUCING AN INSERTION POINT FOR A MEDICAL INSTRUMENT
20220409290 · 2022-12-29

The invention relates to a method for displaying an insertion point for a medical instrument. The method comprises the following steps: providing at least one marker on a surface of an object, such marker having the property that it can be recorded both tomographically, in particular fluoroscopically, and optically; generating tomographic image data that can be used to reconstruct a fluoroscopic image of the at least one marker, located on the surface of the object, together with the object; determining the insertion point for the medical instrument on the surface of the object relative to the at least one marker in the coordinate system of the tomographic image data; generating visual image data that can be used to reconstruct a visual image of the at least one marker, located on the surface of the object, together with the object; transforming the coordinates of the insertion point from the coordinate system of the tomographic image data into the coordinate system of the visual image data using the position of the insertion point relative to the at least one marker; and displaying the insertion point for the medical instrument in real time in a view of the object.
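
A simplified sketch of the coordinate hand-over described above, assuming at least three markers are localized in both the tomographic and the visual image data: a rigid fit over the shared markers carries the insertion point into the visual frame. All names are illustrative, and the SVD fit mirrors the earlier Kabsch sketch:

```python
# Illustrative only: transfer the insertion point from the tomographic frame to
# the visual frame via the markers visible in both datasets.
import numpy as np

def transfer_insertion_point(p_ct: np.ndarray,
                             markers_ct: np.ndarray,
                             markers_cam: np.ndarray) -> np.ndarray:
    """p_ct: insertion point (3,) in tomographic coordinates.
    markers_ct, markers_cam: corresponding Nx3 marker positions (N >= 3)."""
    c_ct, c_cam = markers_ct.mean(axis=0), markers_cam.mean(axis=0)
    H = (markers_ct - c_ct).T @ (markers_cam - c_cam)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # reflection guard
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R @ (p_ct - c_ct) + c_cam             # insertion point in visual coordinates
```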

SYSTEMS AND METHODS FOR GUIDING AN ULTRASOUND PROBE
20220409292 · 2022-12-29

An ultrasound device (10) includes a probe (12) including a tube (14) sized for insertion into a patient and an ultrasound transducer (18) disposed at a distal end (16) of the tube. A camera (20) is mounted at the distal end of the tube in a fixed spatial relationship to the ultrasound transducer. At least one electronic processor (28) is programmed to: control the ultrasound transducer and the camera to acquire ultrasound images (19) and camera images (21) respectively while the ultrasound transducer is disposed in vivo inside the patient; and construct a keyframe (36) representative of an in vivo position of the ultrasound transducer including at least ultrasound image features (38) extracted from at least one of the ultrasound images acquired at the in vivo position of the ultrasound transducer and camera image features (40) extracted from one of the camera images acquired at the in vivo position of the ultrasound transducer.
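
A hypothetical sketch of the keyframe idea: bundle features extracted from an ultrasound image and the co-mounted camera image into one record describing the probe's in vivo position. OpenCV's ORB detector is used purely as an example feature extractor and is not stated in the abstract; images are assumed to be 8-bit grayscale arrays:

```python
# Illustrative only: a keyframe combining ultrasound and camera image features.
from dataclasses import dataclass

import cv2
import numpy as np

@dataclass
class Keyframe:
    us_descriptors: np.ndarray    # features from the ultrasound image
    cam_descriptors: np.ndarray   # features from the camera image

def build_keyframe(us_image: np.ndarray, cam_image: np.ndarray) -> Keyframe:
    """us_image, cam_image: 8-bit grayscale images acquired at the same probe pose."""
    orb = cv2.ORB_create()
    _, us_desc = orb.detectAndCompute(us_image, None)
    _, cam_desc = orb.detectAndCompute(cam_image, None)
    return Keyframe(us_descriptors=us_desc, cam_descriptors=cam_desc)
```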