
SURGICAL SKILL TRAINING SYSTEM AND MACHINE LEARNING-BASED SURGICAL GUIDE SYSTEM USING THREE-DIMENSIONAL IMAGING
20230210598 · 2023-07-06

A surgical skill training system includes: a data collecting unit configured to collect actual surgical skill data on a patient of an operating surgeon; an image providing server configured to generate a 3-dimensional (3D) surgical image for surgical skill training, based on the actual surgical skill data; and a user device configured to display the 3D surgical image, wherein the image providing server includes: a patient image generating unit configured to generate a patient image, based on patient information of the patient; a surgical stage classifying unit configured to classify the actual surgical skill data into actual surgical skill data for each surgical stage performed by the operating surgeon; and a 3D image generating unit configured to generate the 3D surgical image by using the patient image, and feature information detected from the actual surgical skill data.
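The abstract does not specify how the surgical stage classifying unit works; one minimal way to sketch the idea is to assign each frame's feature vector to the nearest stage prototype and smooth the labels so stages change only at sustained transitions. All names here are illustrative, not from the patent.

```python
import numpy as np

def classify_stages(features, centroids, window=3):
    """Hypothetical sketch: label each frame with the nearest stage centroid,
    then apply sliding-window majority smoothing.

    features: (T, D) per-frame feature vectors; centroids: (K, D) stage prototypes.
    """
    # Distance from every frame to every stage prototype
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # Majority vote over a sliding window suppresses spurious stage flips
    smoothed = labels.copy()
    for t in range(len(labels)):
        lo, hi = max(0, t - window), min(len(labels), t + window + 1)
        vals, counts = np.unique(labels[lo:hi], return_counts=True)
        smoothed[t] = vals[np.argmax(counts)]
    return smoothed
```

A real system would learn the prototypes (or a sequence model) from the collected surgical skill data rather than assume fixed centroids.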

User wearable fluorescence enabled visualization system

A user-wearable fluorescence-based visualization system comprising a multi-light lamp assembly that provides for the selected output of light using multiple light-emitting sources, wherein the outputted light may be tailored to generate a response wavelength, through the process of fluorescence, by the interaction of the emitted light and a tissue illuminated by the emitted light, and a viewing system that allows a practitioner to view the fluorescent light generated by the tissue and distinguish between healthy and diseased tissues.

Methods and systems for touchless control of surgical environment

A method facilitates touchless control of medical equipment devices in an operating room (OR). The method involves: providing a three-dimensional control menu, which comprises a plurality of menu items selectable by the practitioner by one or more gestures made in a volumetric spatial region corresponding to the menu item; displaying an interaction display unit (IDU) image corresponding to the three-dimensional control menu to provide indicia of any selected menu items; estimating a line of sight of a practitioner; and, when the estimated line of sight is directed within a first spatial range around a first medical equipment device, determining that the practitioner is looking at the first medical equipment device. The method then involves providing a first device-specific three-dimensional control menu and displaying a first device-specific IDU image.
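The "line of sight directed within a spatial range around a device" test can be sketched as an angular threshold: the practitioner is deemed to be looking at a device when the angle between the gaze direction and the eye-to-device direction is small. The function name and the 10° default are illustrative assumptions, not values from the patent.

```python
import math

def is_looking_at(eye_pos, gaze_dir, device_pos, max_angle_deg=10.0):
    """Hypothetical sketch: True if the gaze ray points at the device
    within an angular tolerance (all vectors are 3-tuples)."""
    to_dev = [d - e for d, e in zip(device_pos, eye_pos)]
    dot = sum(g * t for g, t in zip(gaze_dir, to_dev))
    norm_g = math.sqrt(sum(g * g for g in gaze_dir))
    norm_t = math.sqrt(sum(t * t for t in to_dev))
    if norm_g == 0 or norm_t == 0:
        return False
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_a = max(-1.0, min(1.0, dot / (norm_g * norm_t)))
    return math.degrees(math.acos(cos_a)) <= max_angle_deg
```

A deployed system would also debounce this test over time so that a brief glance does not switch the device-specific menu.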

Extended reality AR/VR system
11691073 · 2023-07-04

A system includes a mobile device having one or more cameras to take images; a sensor detecting reflected light from one or more lasers and a diffuser to detect object range or dimension; code for motion tracking, environmental understanding by detecting planes in an environment, and estimating light and dimensions of the surroundings based on the one or more lasers; and code to estimate a three-dimensional (3D) volume of an object from multiple perspectives and from projected laser beams to measure size or scale and determine locations of points on the object's surface in a plane or a slice using time-of-flight, wherein positions and cross-sections for different slices are correlated to construct a 3D model of the object, including object position and shape. The device receives a user request to select content from one or more augmented, virtual, or extended reality contents and renders a reality view of the environment.
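The slice-based reconstruction step can be sketched as follows: each laser return gives a range from its round-trip time (r = c·t/2), the ranges in one scan plane form a cross-section, and stacking cross-sections at known slice heights yields a crude point cloud. This is a minimal sketch under those assumptions, not the patented pipeline.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Range from a time-of-flight round trip: r = c * t / 2."""
    return C * round_trip_s / 2.0

def slice_points(angles_rad, round_trips_s, z):
    """Convert one planar scan (angle, round-trip time) into 3D points at height z."""
    pts = []
    for a, t in zip(angles_rad, round_trips_s):
        r = tof_range(t)
        pts.append((r * math.cos(a), r * math.sin(a), z))
    return pts

def build_model(slices):
    """slices: iterable of (angles, round_trips, z) tuples -> combined point cloud."""
    cloud = []
    for angles, times, z in slices:
        cloud.extend(slice_points(angles, times, z))
    return cloud
```

Correlating slices into a surface mesh (as the abstract describes) would require an additional reconstruction step, e.g. connecting adjacent cross-sections.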

Displaying a virtual model of a planned instrument attachment to ensure correct selection of physical instrument attachment

A virtual model of a planned instrument attachment can be provided to ensure correct selection of a physical instrument attachment. An XR headset controller can generate a shape and a pose of the virtual model of the planned instrument attachment based on predetermined information associated with the planned instrument attachment and based on a pose of an instrument relative to the XR headset. An XR headset can display the virtual model on a see-through display screen of the XR headset that is configured to allow at least a portion of a real-world scene to pass therethrough.
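Deriving the virtual model's pose from "a pose of an instrument relative to the XR headset" plus "predetermined information" can be sketched as composing homogeneous transforms: headset-to-instrument composed with a known instrument-to-attachment offset. The function names are illustrative.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def attachment_pose_in_headset(instrument_in_headset, attachment_in_instrument):
    """Compose the tracked instrument pose with the predetermined
    attachment-to-instrument offset to place the virtual model."""
    return instrument_in_headset @ attachment_in_instrument
```

With the attachment pose in headset coordinates, the renderer can draw the virtual model overlaid on the see-through display at the expected mounting location.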

Intraoperative alignment assessment system and method

Some embodiments provide systems, assemblies, and methods of analyzing patient anatomy, including providing an analysis of a patient's spine. The systems, assemblies, and/or methods can include obtaining initial patient data and acquiring spinal alignment contour information. Further, the systems, assemblies, and/or methods can assess localized anatomical features of the patient and obtain anatomical region data. The systems, assemblies, and/or methods can analyze the localized anatomy and therapeutic device location and contouring. Further, the systems, assemblies, and/or methods can output localized anatomical analyses and therapeutic device contouring data and/or imagery on a display.
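The abstract does not name a specific alignment metric, but a common spinal alignment measure that such a system might compute from contour information is the Cobb angle between two vertebral endplate tangent lines. This sketch is illustrative, not from the patent.

```python
import math

def cobb_angle_deg(slope_upper, slope_lower):
    """Illustrative Cobb-style angle: the absolute angle between two endplate
    tangent lines, given their slopes in the coronal plane."""
    a_upper = math.atan(slope_upper)
    a_lower = math.atan(slope_lower)
    return abs(math.degrees(a_upper - a_lower))
```

In practice the tangent slopes would be fitted from the acquired spinal alignment contour rather than supplied directly.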

METHOD AND SYSTEM FOR REPRODUCING AN INSERTION POINT FOR A MEDICAL INSTRUMENT
20220409290 · 2022-12-29

The invention relates to a method for displaying an insertion point for a medical instrument. The method comprises the following steps: Providing at least one marker on a surface of an object, with such marker exhibiting the property that it can be recorded both tomographically, in particular fluoroscopically, and also optically; Generating tomographic image data that can be used to reconstruct a fluoroscopic image of the at least one marker, located on the surface of the object, together with the object; Determining the insertion point for the medical instrument on the surface of the object relative to the at least one marker in the coordinate system of the tomographic image data; Generating visual image data that can be used to reconstruct a visual image of the at least one marker, located on the surface of the object, together with the object; Transforming the coordinates of the insertion point in the coordinate system of the tomographic image data into the coordinate system of the visual image data using the position of the insertion point relative to the at least one marker; and Displaying the insertion point for the medical instrument in real time in a view of the object.
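The transformation step relies on the same markers being located in both coordinate systems. With at least three non-collinear marker correspondences, a standard way to realize this (a sketch, not necessarily the patented method) is to estimate the rigid transform via the Kabsch algorithm and apply it to the insertion point.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src markers onto dst
    markers (Kabsch algorithm). src, dst: (N, 3) corresponding positions."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def map_point(p, R, t):
    """Map a point (e.g. the insertion point) from the tomographic
    coordinate system into the visual coordinate system."""
    return R @ p + t
```

Here `src` would be the marker positions in the tomographic image data and `dst` the same markers in the visual image data; `map_point` then transfers the insertion point for real-time display.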

REPRESENTATION APPARATUS FOR DISPLAYING A GRAPHICAL REPRESENTATION OF AN AUGMENTED REALITY
20220414994 · 2022-12-29

A representation apparatus for displaying a graphical representation of an augmented reality includes a capture unit, a first display unit, and a processing unit. The first display unit is at least partially transparent. The capture unit is configured to capture a relative positioning of the first display unit relative to a representation area of a second display unit. The processing unit is configured to determine an observation geometry between the first display unit and the representation area of the second display unit based on the relative positioning, receive a dataset, generate the augmented reality based on the dataset, and provide the graphical representation of the augmented reality via virtual mapping of the augmented reality onto the representation area along the observation geometry. The first display unit displays the graphical representation of the augmented reality in at least partial overlaying with the representation area of the second display unit.

PRESENTATION DEVICE FOR DISPLAYING A GRAPHICAL PRESENTATION OF AN AUGMENTED REALITY
20220409283 · 2022-12-29

A presentation device for displaying a graphical presentation of an augmented reality is disclosed. The presentation device includes a recording unit, a first display unit, and a processing unit. The recording unit is configured to capture a relative positioning of the first display unit in respect of a presentation area and capture a second set of graphical information. The processing unit is configured to generate an augmented reality based on a received dataset, supply a graphical presentation of the augmented reality by virtual mapping to the presentation area, and adjust the augmented reality and/or the graphical presentation thereof as a function of the second set of graphical information. The first display unit is at least partially transparent and is configured to display the graphical presentation of the augmented reality.

PATIENT-SPECIFIC ADJUSTMENT OF SPINAL IMPLANTS, AND ASSOCIATED SYSTEMS AND METHODS

A computer system receives readings from sensors embedded in a spinal implant implanted in a patient during surgery. The sensor readings are indicative of a load applied by a spine of the patient on the spinal implant. The load causes physical discomfort to the patient. A feature vector is extracted from the implant sensor readings using a machine learning module. The feature vector is indicative of the physical discomfort caused by the load. Electrical signals are generated using the machine learning module based on the feature vector. The machine learning module is trained based on patient data sets to generate the electrical signals to balance the load, such that the physical discomfort is reduced. The electrical signals are transmitted to one or more actuators embedded in the spinal implant to cause the one or more actuators to configure the spinal implant, such that the load is balanced.
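The abstract's machine learning module is not specified; a minimal sketch of the two data-flow steps it describes (a feature vector summarizing implant load readings, and actuator signals that counteract the measured imbalance) might look like the following. The summary features, the gain, and the mean-centering rule are all illustrative assumptions.

```python
import numpy as np

def extract_features(readings):
    """Illustrative feature vector from per-sensor implant loads:
    mean load, spread, and peak-to-peak range."""
    r = np.asarray(readings, dtype=float)
    return np.array([r.mean(), r.std(), r.max() - r.min()])

def balancing_signals(readings, gain=0.5):
    """Illustrative actuator signals: push high-load regions down and
    low-load regions up, proportionally to deviation from the mean."""
    r = np.asarray(readings, dtype=float)
    return -gain * (r - r.mean())
```

In the patented system this mapping from readings to signals is learned from patient data sets; the mean-centering rule here simply illustrates the load-balancing objective, since the signals sum to zero around the mean load.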