A61B2090/372

Surgical display
11696813 · 2023-07-11

Disclosed herein are visualization systems, methods, devices and database configurations related to the real-time depiction, in 2D and 3D on monitor panels as well as via 3D holographic visualization, of the internal workings of a patient surgery, such as the patient intervention site posture as well as the positioning, in some cases real-time positioning, of an object foreign to the patient.

Iliac pin and adapter
20230009793 · 2023-01-12

Apparatus for mounting in a bone of a patient, consisting of a rigid elongated member having an axis of symmetry and a distal section, a proximal section, and an intermediate section connecting the distal and proximal sections. The apparatus has n helical blades, formed in the distal section, distributed symmetrically about the axis, each of the blades having a helix angle greater than zero and less than 45°. A cross-section of the distal section, taken orthogonally to the axis of symmetry, includes n mirror planes containing the axis of symmetry, wherein n is a whole number greater than one, and wherein the blades are configured to penetrate into the bone and engage stably therein. Adapters coupling the apparatus to different types of markers are also described.

REAL TIME IMAGE GUIDED PORTABLE ROBOTIC INTERVENTION SYSTEM

An image-guided robotic intervention system (“IGRIS”) may be used to perform medical procedures on patients. IGRIS provides a real-time view of patient anatomy, as well as an intended target or targets for the procedures; software that allows a user to plan an approach or trajectory path using either the image or the robotic device; software that allows a user to convert a series of 2D images into a 3D volume; and software that localizes the 3D volume with respect to real-time images during the procedure. IGRIS may include sensors to estimate the pose of the imaging device relative to the patient to improve the performance of that software with respect to runtime, robustness, and accuracy.
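As an illustration of the volume-building step only, here is a minimal Python sketch assuming the 2D images are parallel slices with known pixel spacing and slice spacing; the function name `slices_to_volume` and the spacing values are illustrative, not from the patent, and the localization and pose-sensing parts are not covered.

```python
# Hypothetical sketch: stack a series of parallel 2D slices into a 3D volume.
# Assumes uniform slice spacing and identical slice geometry; the patent's
# actual reconstruction and localization pipeline is not specified here.
import numpy as np

def slices_to_volume(slices, pixel_spacing_mm, slice_spacing_mm):
    """Stack 2D images (list of HxW arrays) into a volume plus voxel-spacing metadata."""
    volume = np.stack(slices, axis=0)                        # shape: (num_slices, H, W)
    spacing = (slice_spacing_mm, pixel_spacing_mm, pixel_spacing_mm)
    return volume, spacing

# Example usage with synthetic slices.
slices = [np.random.rand(256, 256) for _ in range(40)]
volume, spacing = slices_to_volume(slices, pixel_spacing_mm=0.5, slice_spacing_mm=1.0)
print(volume.shape, spacing)                                 # (40, 256, 256) (1.0, 0.5, 0.5)
```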

EASY TO MANUFACTURE AUTOCLAVABLE LED FOR OPTICAL TRACKING

An optical tracking system is provided. The optical tracking system comprises an autoclavable fiducial marker assembly including an opaque housing, a light source, a window panel configured to refract light rays from the light source therethrough, and a metallized coating forming a hermetic seal at an interface of the window panel and the opaque housing. The fiducial marker assembly is configured to shield a peripheral edge of the window panel from the light rays. The system further comprises a tracking device comprising at least two optical sensors configured to detect a position of a light ray emitted by the light source. The system further comprises a processor configured to receive the positions of the light rays from the optical sensors, shift the position of each light ray based on a calculated refraction deviation, and triangulate the location of the light source based on the shifted position of each light ray.
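A minimal sketch of the triangulation step described above, assuming each sensor yields a ray origin and direction, the origin is shifted by a precomputed refraction deviation, and the LED location is taken as the midpoint of closest approach of the two corrected rays; names and numbers are illustrative, not from the patent.

```python
# Hypothetical sketch: correct each detected ray position by a refraction
# deviation, then triangulate the light source from two corrected rays.
import numpy as np

def corrected_origin(detected_pos, refraction_deviation):
    """Shift a detected ray position by the calculated refraction deviation."""
    return detected_pos + refraction_deviation

def triangulate(origin1, dir1, origin2, dir2):
    """Midpoint of the common perpendicular between two rays (closest-approach point)."""
    d1, d2 = dir1 / np.linalg.norm(dir1), dir2 / np.linalg.norm(dir2)
    w0 = origin1 - origin2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                      # ~0 only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((origin1 + t * d1) + (origin2 + s * d2))

# Example: two sensors observing an LED near (0, 0, 1), in metres.
p1 = corrected_origin(np.array([-0.2, 0.0, 0.0]), np.array([0.001, 0.0, 0.0]))
p2 = corrected_origin(np.array([ 0.2, 0.0, 0.0]), np.array([-0.001, 0.0, 0.0]))
led = triangulate(p1, np.array([0.2, 0.0, 1.0]), p2, np.array([-0.2, 0.0, 1.0]))
print(led)                                     # approx. [0, 0, 0.995]
```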

SURGICAL SKILL TRAINING SYSTEM AND MACHINE LEARNING-BASED SURGICAL GUIDE SYSTEM USING THREE-DIMENSIONAL IMAGING
20230210598 · 2023-07-06

A surgical skill training system includes: a data collecting unit configured to collect actual surgical skill data on a patient of an operating surgeon; an image providing server configured to generate a 3-dimensional (3D) surgical image for surgical skill training, based on the actual surgical skill data; and a user device configured to display the 3D surgical image, wherein the image providing server includes: a patient image generating unit configured to generate a patient image, based on patient information of the patient; a surgical stage classifying unit configured to classify the actual surgical skill data into actual surgical skill data for each surgical stage performed by the operating surgeon; and a 3D image generating unit configured to generate the 3D surgical image by using the patient image, and feature information detected from the actual surgical skill data.
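The abstract does not specify the model behind the surgical stage classifying unit; as a hedged stand-in, the sketch below assigns per-frame feature vectors extracted from the recorded skill data (for example, instrument kinematics) to the nearest stage centroid learned from labeled recordings. All names and data are synthetic.

```python
# Hypothetical stand-in for the surgical-stage classifying unit:
# a nearest-centroid classifier over per-frame kinematic features.
import numpy as np

def fit_stage_centroids(features, stage_labels):
    """features: (N, D) array; stage_labels: length-N sequence of stage ids."""
    labels = np.asarray(stage_labels)
    return {s: features[labels == s].mean(axis=0) for s in sorted(set(stage_labels))}

def classify_stage(frame_features, centroids):
    """Return the stage whose centroid is closest to the frame's feature vector."""
    return min(centroids, key=lambda s: np.linalg.norm(frame_features - centroids[s]))

# Example with synthetic 4-D kinematic features for three stages.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, size=(50, 4)) for i in range(3)])
y = [i for i in range(3) for _ in range(50)]
centroids = fit_stage_centroids(X, y)
print(classify_stage(rng.normal(loc=2, size=4), centroids))   # most likely stage 2
```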

Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display

A system includes a robotic arm, an autostereoscopic display, a user image capture device, an image processor, and a controller. The robotic arm is coupled to a patient image capture device. The autostereoscopic display is configured to display an image of a surgical site obtained from the patient image capture device. The image processor is configured to identify a location of at least part of a user in an image obtained from the user image capture device. The controller is configured to, in a first mode, adjust a three-dimensional aspect of the image displayed on the autostereoscopic display based on the identified location, and, in a second mode, move the robotic arm or instrument based on a relationship between the identified location and the surgical site image.
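A rough sketch of the first (display) mode only, under the assumption that the identified viewer location is reduced to a normalized horizontal offset and mapped to a parallax shift between the eye views; the constants and helper names are invented for illustration and are not the patent's method.

```python
# Hypothetical sketch: map the identified viewer position to a parallax
# adjustment between the left- and right-eye views of the display.
import numpy as np

def parallax_offset_px(head_x_norm, max_offset_px=20.0):
    """head_x_norm: viewer's horizontal position in [-1, 1] relative to screen center."""
    return float(np.clip(head_x_norm, -1.0, 1.0)) * max_offset_px

def shift_view(image, offset_px):
    """Shift an HxW view horizontally by a whole number of pixels (wrap-around, for illustration)."""
    return np.roll(image, int(round(offset_px)), axis=1)

left, right = np.zeros((480, 640)), np.ones((480, 640))
offset = parallax_offset_px(head_x_norm=0.3)              # viewer slightly right of center
left_adj, right_adj = shift_view(left, -offset), shift_view(right, offset)
print(offset)                                             # 6.0 px of added parallax
```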

Methods and systems for touchless control of surgical environment

A method facilitates touchless control of medical equipment devices in an operating room (OR). The method involves: providing a three-dimensional control menu, which comprises a plurality of menu items selectable by the practitioner by one or more gestures made in a volumetric spatial region corresponding to the menu item; displaying an interaction display unit (IDU) image corresponding to the three-dimensional control menu to provide indicia of any selected menu items; estimating a line of sight of the practitioner; and, when the estimated line of sight is directed within a first spatial range around a first medical equipment device, determining that the practitioner is looking at the first medical equipment device. The method then involves providing a first device-specific three-dimensional control menu and displaying a first device-specific IDU image.
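A small sketch of the line-of-sight test, assuming the estimated gaze is available as an origin and direction and each device has a known position; the 10-degree threshold and all names are illustrative, not taken from the patent.

```python
# Hypothetical sketch: decide whether the practitioner is looking at a device
# by checking the angle between the gaze direction and the direction to the device.
import numpy as np

def is_looking_at(gaze_origin, gaze_dir, device_pos, max_angle_deg=10.0):
    """True if the device lies within max_angle_deg of the estimated line of sight."""
    to_device = device_pos - gaze_origin
    cos_angle = (gaze_dir @ to_device) / (np.linalg.norm(gaze_dir) * np.linalg.norm(to_device))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= max_angle_deg

devices = {"insufflator": np.array([2.0, 0.1, 0.0]), "or_light": np.array([0.0, 2.0, 1.5])}
gaze_origin, gaze_dir = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.05, 0.0])
looked_at = [name for name, pos in devices.items() if is_looking_at(gaze_origin, gaze_dir, pos)]
print(looked_at)   # ['insufflator'] -> show that device's control menu and IDU image
```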

METHOD AND SYSTEM FOR REPRODUCING AN INSERTION POINT FOR A MEDICAL INSTRUMENT
20220409290 · 2022-12-29

The invention relates to a method for displaying an insertion point for a medical instrument. The method comprises the following steps: providing at least one marker on a surface of an object, the marker having the property that it can be recorded both tomographically, in particular fluoroscopically, and optically; generating tomographic image data that can be used to reconstruct a fluoroscopic image of the at least one marker, located on the surface of the object, together with the object; determining the insertion point for the medical instrument on the surface of the object relative to the at least one marker in the coordinate system of the tomographic image data; generating visual image data that can be used to reconstruct a visual image of the at least one marker, located on the surface of the object, together with the object; transforming the coordinates of the insertion point from the coordinate system of the tomographic image data into the coordinate system of the visual image data using the position of the insertion point relative to the at least one marker; and displaying the insertion point for the medical instrument in real time in a view of the object.
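One plausible way to realize the transformation step is a rigid marker-based registration between the two coordinate systems; the sketch below assumes at least three non-collinear markers located in both the tomographic and the visual frames and uses a standard Kabsch/SVD fit. Names and numerical values are illustrative, not the patent's.

```python
# Hypothetical sketch: estimate the rigid transform between the CT (tomographic)
# and camera (visual) coordinate systems from shared markers, then map the
# planned insertion point into the visual frame for display.
import numpy as np

def rigid_transform(src_pts, dst_pts):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t (Kabsch)."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # correct an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

# Marker positions in CT coordinates and the same markers detected optically (mm).
markers_ct  = np.array([[0., 0., 0.], [50., 0., 0.], [0., 40., 0.], [0., 0., 30.]])
true_R      = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
markers_cam = markers_ct @ true_R.T + np.array([5., 10., -2.])

R, t = rigid_transform(markers_ct, markers_cam)
insertion_ct = np.array([12.0, 8.0, 4.0])    # planned insertion point in CT coordinates
print(R @ insertion_ct + t)                  # insertion point in camera coordinates
```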

SYSTEMS AND METHODS FOR TELESTRATION WITH SPATIAL MEMORY
20220409324 · 2022-12-29

An exemplary system is configured to detect user input directing a telestration element to be drawn within an image depicting a surface within a scene; render, based on depth data representative of a depth map for the scene and within a three-dimensional (3D) image depicting the surface within the scene, the telestration element; record a 3D position within the scene at which the telestration element is rendered within the 3D image; detect a telestration termination event that removes the telestration element from being rendered within the 3D image; and indicate, subsequent to the telestration termination event, an option to again render the telestration element at the 3D position.
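A minimal sketch of the recording step, assuming a pinhole camera model so a telestration pixel can be back-projected through the depth map to a 3D position that is stored for later re-rendering; the intrinsics and names are illustrative assumptions.

```python
# Hypothetical sketch: back-project a telestration pixel into the 3D scene using
# the depth map, and keep the 3D position as the "spatial memory" of the annotation.
import numpy as np

def backproject(u, v, depth_map, fx, fy, cx, cy):
    """Return the 3D camera-frame point for pixel (u, v) given its depth value."""
    z = depth_map[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

depth_map = np.full((480, 640), 0.12)        # 12 cm to the tissue surface everywhere
telestration_3d = backproject(320, 200, depth_map, fx=600.0, fy=600.0, cx=320.0, cy=240.0)

saved_annotations = []                        # positions kept after a termination event
saved_annotations.append(telestration_3d)     # enables the "render again here" option
print(saved_annotations[0])
```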

REPRESENTATION APPARATUS FOR DISPLAYING A GRAPHICAL REPRESENTATION OF AN AUGMENTED REALITY
20220414994 · 2022-12-29

A representation apparatus for displaying a graphical representation of an augmented reality includes a capture unit, a first display unit, and a processing unit. The first display unit is at least partially transparent. The capture unit is configured to capture a relative positioning of the first display unit relative to a representation area of a second display unit. The processing unit is configured to determine an observation geometry between the first display unit and the representation area of the second display unit based on the relative positioning, receive a dataset, generate the augmented reality based on the dataset, and provide the graphical representation of the augmented reality via virtual mapping of the augmented reality onto the representation area along the observation geometry. The first display unit displays the graphical representation of the augmented reality at least partially overlaying the representation area of the second display unit.
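Interpreted as a ray-plane intersection, the virtual mapping along the observation geometry might look like the following sketch, in which a point of the augmented-reality dataset is projected onto the plane of the second display's representation area as seen from the captured viewing position; the geometry and names are assumptions for illustration only.

```python
# Hypothetical sketch: map an AR point onto the representation-area plane along
# the line from the viewing position through that point (ray-plane intersection).
import numpy as np

def project_onto_plane(eye, point, plane_origin, plane_normal):
    """Intersect the ray eye -> point with the plane (plane_origin, plane_normal)."""
    direction = point - eye
    s = (plane_normal @ (plane_origin - eye)) / (plane_normal @ direction)
    return eye + s * direction

eye          = np.array([0.0, 0.0, 0.0])      # viewing position from the capture unit
ar_point     = np.array([0.1, 0.05, 1.5])     # a point of the AR overlay in 3D (metres)
plane_origin = np.array([0.0, 0.0, 1.0])      # representation area of the second display
plane_normal = np.array([0.0, 0.0, 1.0])
print(project_onto_plane(eye, ar_point, plane_origin, plane_normal))   # mapped position on the plane
```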