Virtual reality training, simulation, and collaboration in a robotic surgical system

A virtual reality system providing a virtual robotic surgical environment, and methods for using the virtual reality system, are described herein. Within the virtual reality system, various user modes enable different kinds of interactions between a user and the virtual robotic surgical environment. For example, one variation of a method for facilitating navigation of a virtual robotic surgical environment includes displaying a first-person perspective view of the virtual robotic surgical environment from a first vantage point, displaying a first window view of the virtual robotic surgical environment from a second vantage point, and displaying a second window view of the virtual robotic surgical environment from a third vantage point. Additionally, in response to a user input associating the first and second window views, a trajectory between the second and third vantage points can be generated, sequentially linking the first and second window views.
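The abstract does not specify how the linking trajectory is computed; a minimal sketch, assuming simple linear interpolation between two 3D vantage points (all names are illustrative):

```python
# Hypothetical sketch: generate a camera trajectory that sequentially
# links two vantage points by linear interpolation. The patent does not
# specify an interpolation scheme; a real system might use splines.

def link_vantage_points(start, end, steps=10):
    """Return a list of waypoints from `start` to `end` (3D tuples)."""
    waypoints = []
    for i in range(steps + 1):
        t = i / steps  # interpolation parameter in [0, 1]
        waypoints.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    return waypoints
```

The viewer camera would then be stepped through the returned waypoints to produce the transition between window views.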

Light and shadow guided needle positioning system and method

The embodiments of the present invention provide a system and method for light and shadow guided needle positioning. DICOM images of a patient are captured for identifying a point of insertion of a needle on the patient's body and a target point inside the patient's body. Needle coordinates are computed based on the captured DICOM images to position the mechanical arms. Light is projected at a particular angle on the needle to form shadows of the needle. Laser light beams are projected to form a crosshair at the point of insertion. Images or videos of the point of insertion, the shadow of the needle, and the crosshair are captured and displayed on a monitoring unit. A virtual circle is projected on the displayed image and is aligned with the point of insertion, the shadow of the needle, and the crosshair in order to insert the needle precisely.
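The needle-coordinate computation is not detailed in the abstract; a minimal sketch, assuming the entry and target points are already expressed in the same patient coordinate frame (e.g., millimeters in DICOM patient space), is straight vector geometry:

```python
import math

# Illustrative only: derive an insertion direction and depth from an
# entry point and a target point identified on the DICOM images.
def needle_pose(entry, target):
    """Return (unit direction, insertion depth) from entry to target,
    both given as (x, y, z) tuples in patient coordinates (mm)."""
    vec = [t - e for e, t in zip(entry, target)]
    depth = math.sqrt(sum(c * c for c in vec))  # Euclidean distance
    direction = tuple(c / depth for c in vec)   # unit vector along needle
    return direction, depth
```

A positioning system would map this direction and depth into joint commands for the mechanical arms; that kinematic step is device-specific and omitted here.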

Method of using lung airway carina locations to improve ENB registration

Disclosed are systems, devices, and methods for registering a luminal network to a 3D model of the luminal network. An example method comprises generating a 3D model of a luminal network, identifying a target within the 3D model, determining locations of a plurality of carinas in the luminal network proximate the target, displaying guidance for navigating a location sensor within the luminal network, tracking the location of the location sensor, comparing the tracked locations of the location sensor and the portions of the 3D model representative of open space, displaying guidance for navigating the location sensor a predetermined distance into each lumen originating at the plurality of carinas proximate the target, tracking the location of the location sensor while the location sensor is navigated into each lumen, and updating the registration of the 3D model with the luminal network based on the tracked locations of the location sensor.
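The final registration-update step can be illustrated with a deliberately crude sketch: translate the model so the centroid of the expected lumen positions near the carinas matches the centroid of the tracked sensor locations. A real ENB system would solve a full rigid or deformable fit; this and all names are assumptions for illustration:

```python
# Simplified registration refinement: centroid alignment only.
def update_registration(tracked_points, model_points):
    """Return the (x, y, z) shift that moves the centroid of the model's
    expected lumen positions onto the centroid of the tracked sensor
    locations. Both inputs are lists of (x, y, z) tuples."""
    def centroid(pts):
        return tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))

    ct = centroid(tracked_points)
    cm = centroid(model_points)
    return tuple(t - m for t, m in zip(ct, cm))  # apply to every model point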

ALGORITHM-BASED METHODS FOR PREDICTING AND/OR DETECTING A CLINICAL CONDITION RELATED TO INSERTION OF A MEDICAL INSTRUMENT TOWARD AN INTERNAL TARGET
20230044620 · 2023-02-09 ·

Provided are computer-implemented methods and systems for generating and/or utilizing data analysis algorithm(s) for predicting and/or detecting a clinical condition related to insertion of a medical instrument toward a target in a body of a patient based, inter alia, on data related to an automated medical device and/or to operation thereof.

Resource segmentation to improve delivery performance

A flexible approach is described for segmenting a resource (e.g., a media resource, such as a media segment, or another resource, such as one normally fetched or pushed using general file transfer protocols like HTTP) into a plurality of fragments. By employing such an approach, the delay until the resource can be utilized at the client side is reduced. Certain embodiments apply the flexible segmentation approach to ISOBMFF media segments for video streaming, such as would be used with Live DASH streaming.
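At its simplest, fragmentation means splitting the resource's bytes into independently deliverable pieces so the client can start consuming the first fragment before the rest has arrived. A minimal sketch (fixed-size fragments; the patent's approach is more flexible and ISOBMFF-aware):

```python
# Illustrative byte-level fragmentation; real ISOBMFF fragmentation
# splits on box/chunk boundaries (e.g., moof/mdat pairs), not raw bytes.
def fragment_resource(data: bytes, fragment_size: int):
    """Split a resource into fragments of at most `fragment_size` bytes."""
    return [data[i:i + fragment_size] for i in range(0, len(data), fragment_size)]
```

Concatenating the fragments in order reconstructs the original resource, which is the invariant any segmentation scheme must preserve.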

METHOD FOR AUTOMATICALLY PLANNING A TRAJECTORY FOR A MEDICAL INTERVENTION
20230008386 · 2023-01-12 ·

The invention relates to a method for automatically planning a trajectory to be followed during a medical intervention by a medical instrument targeting an anatomy of interest of a patient, said automatic planning method comprising the steps of: acquiring at least one medical image of the anatomy of interest; determining a target point on the previously acquired image; generating a set of trajectory planning parameters from the medical image of the anatomy of interest and the previously determined target point, the set of planning parameters comprising coordinates of an entry point on the medical image. The set of parameters is generated using a machine learning method of neural network type. The invention also relates to a guiding device implementing the set of planning parameters obtained.
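The patent generates the planning parameters with a neural network; as a classical stand-in for intuition only, entry-point selection can be sketched as scoring candidate entry points by trajectory length to the target, subject to a depth constraint (all names and criteria here are assumptions, not the patent's method):

```python
import math

# Illustrative stand-in for a learned planner: choose the candidate
# entry point with the shortest feasible path to the target point.
def plan_trajectory(target, candidate_entries, max_depth):
    """Return the entry point (x, y, z) with the shortest path to
    `target` not exceeding `max_depth`, or None if none is feasible."""
    def depth(entry):
        return math.dist(entry, target)  # straight-line insertion depth

    feasible = [e for e in candidate_entries if depth(e) <= max_depth]
    return min(feasible, key=depth) if feasible else None
```

A learned planner would additionally account for organs at risk, needle angle limits, and image appearance, which is what motivates the neural-network formulation.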

METHODS FOR MULTI-MODAL BIOIMAGING DATA INTEGRATION AND VISUALIZATION

A multi-modal visualization system (MMVS) is provided, which may be used to analyze and visualize bioimaging data, objects, and pointers, such as neuroimaging data, surgical tools, and pointing rods. MMVS can integrate multiple bioimaging modalities to visualize a plurality of bioimaging datasets simultaneously, such as anatomical bioimaging data and functional bioimaging data.

Surgical instrument with real time navigation assistance

Navigation assistance systems and methods for use with a surgical instrument to assist in navigating the instrument during an operation. The system may include sensors that may observe the patient to generate positioning data regarding the relative position of the surgical instrument and the patient. The system may retrieve imaging data regarding the patient and correlate the imaging data to the positioning data. In turn, the position of the surgical instrument relative to the imaging data may be provided and used to generate navigation data (e.g., position, orientation, trajectory, or the like) regarding the surgical instrument.
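Correlating positioning data to imaging data typically comes down to applying a registration transform that maps tracker coordinates into image coordinates. A minimal sketch, assuming the registration is represented as a 4x4 homogeneous matrix (a common convention, though the patent does not prescribe one):

```python
# Map an instrument tip position from tracker space into image space
# using a 4x4 homogeneous registration matrix T (row-major nested lists).
def to_image_coords(T, point):
    """Return the (x, y, z) image-space position of `point`."""
    x, y, z = point
    h = [x, y, z, 1.0]  # homogeneous coordinates
    return tuple(sum(T[r][c] * h[c] for c in range(4)) for r in range(3))
```

With the instrument position expressed in image coordinates, overlaying position, orientation, and trajectory on the imaging data is straightforward.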

ADHESIVE FIDUCIAL MARKERS FOR MEDICAL AUGMENTED REALITY
20230037393 · 2023-02-09 ·

Various embodiments of a physical instrument are described herein. The physical instrument comprises a reference array platform having a top surface and a bottom surface. A reference array including one or more different fiducial markers is disposed on the top surface of the reference array platform. The reference array platform may have a bent physical configuration or a tilted physical configuration. An adhesive layer is disposed on the bottom surface of the reference array platform.

ACTIVATION OF ENERGY DEVICES

Various systems and methods for controlling the activation of energy surgical instruments are disclosed. An advanced energy surgical instrument, such as an electrosurgical instrument or an ultrasonic surgical instrument, can include one or more sensor assemblies for detecting the state or position of the end effector, arm, or other components of the surgical instrument. A control circuit can be configured to control the activation of the surgical instrument according to the state or position of the components of the surgical instrument.
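The control-circuit behavior can be pictured as simple gating logic over the sensed component states. A minimal sketch, with entirely hypothetical state names (the patent does not enumerate specific conditions):

```python
# Illustrative activation gating: allow energy delivery only when the
# sensed end-effector and arm states indicate a safe configuration.
def may_activate(jaw_closed: bool, tissue_detected: bool, arm_state: str) -> bool:
    """Return True if the control circuit should permit activation."""
    return jaw_closed and tissue_detected and arm_state == "deployed"
```

In hardware, the same logic would typically be implemented in the control circuit itself, with the sensor assemblies supplying the inputs.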