Patent classification: A61B34/20
Tracking Apparatus For Tracking A Patient Limb
A tracking apparatus for tracking a bone of a patient limb is provided. The tracking apparatus includes a body configured to couple to the patient limb. The body includes first and second arms, each including an exterior surface, an opposing interior surface, and opposing sides connecting the exterior and interior surfaces. The tracking apparatus also includes a wing portion extending from one of the sides of the first or second arm, the wing portion sharing the interior surface of the first or second arm. The tracking apparatus also includes one or more ultrasonic sensors coupled to the interior surface of the body and the interior surface of the wing portion, the one or more ultrasonic sensors being configured to transmit ultrasonic waves to and receive ultrasonic waves from the bone. The tracking apparatus also includes one or more trackable elements coupled to the body and the wing portion.
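As a rough illustration of the ranging principle such an apparatus relies on, the sketch below converts an echo's round-trip time into a bone-surface depth. The soft-tissue speed of sound and the helper names are illustrative assumptions; the patent does not disclose any particular computation.

```python
# Minimal sketch: estimating bone-surface depth below one ultrasonic sensor
# from an echo's round-trip time. The speed of sound is a typical soft-tissue
# value, assumed here for illustration only.

SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # typical value for soft tissue

def echo_depth_m(round_trip_time_s: float) -> float:
    """Depth of the reflecting bone surface below the sensor.

    The wave travels to the bone and back, so the one-way distance
    is half the round-trip path.
    """
    return SPEED_OF_SOUND_TISSUE_M_S * round_trip_time_s / 2.0

if __name__ == "__main__":
    # Example: a 39 microsecond round trip corresponds to ~3 cm of tissue.
    print(f"{echo_depth_m(39e-6) * 100:.1f} cm")
```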
METHOD OF LOCATING A MOBILE PERCEPTION DEVICE INTENDED TO BE WORN OR CARRIED BY A USER IN A SURGICAL SCENE
A method of locating at least one mobile perception device of a navigation platform is provided, the mobile perception device being intended to be worn or carried by a user in a surgical scene and the navigation platform including at least one perception sensor. The method comprises: acquiring, by the at least one perception sensor, a plurality of successive images of the scene, including the portion of the patient's body intended to be subjected to the surgical operation; and processing the plurality of successive images to evaluate a relative position of the mobile perception device and that portion of the body, wherein the relative position takes into account a movement of the mobile perception device.
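The bookkeeping this abstract describes, keeping the device-to-anatomy pose current as the device itself moves, can be sketched as a transform composition. The conventions and function names below are assumptions; the image-processing step that estimates per-frame device motion is left as an input.

```python
# Minimal sketch, assuming 4x4 homogeneous transforms: update the pose of a
# worn/carried perception device relative to the patient anatomy by folding in
# an estimate of the device's own motion since the previous image.

import numpy as np

def update_pose(device_to_anatomy: np.ndarray,
                device_motion: np.ndarray) -> np.ndarray:
    """Compose the last device-to-anatomy transform with per-frame motion.

    device_to_anatomy: 4x4 transform from device coordinates to anatomy
                       coordinates at the previous image.
    device_motion:     4x4 transform mapping coordinates in the previous
                       device frame to coordinates in the current device
                       frame (i.e., how the device moved between images).
    Returns the 4x4 device-to-anatomy transform at the current image.
    """
    # A point fixed in the anatomy keeps its anatomy coordinates; its device
    # coordinates change by the device motion, so the updated pose is the old
    # pose composed with the inverse of that motion.
    return device_to_anatomy @ np.linalg.inv(device_motion)
```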
METHOD OF FITTING A KNEE PROSTHESIS WITH ASSISTANCE OF AN AUGMENTED REALITY SYSTEM
A method of fitting a knee prosthesis in a knee of a patient includes displaying, by a visual display device of a mobile augmented reality navigation system worn or carried by a user performing or assisting in at least one navigation-assisted stage of the method, at least one visual element superposed on a view of at least a portion of a surgical scene of the fitting of the knee prosthesis. The visual element can be a 3D model of at least one portion of the knee of the patient, a 3D model of another component of the surgical scene, information relative to the at least one portion of the knee of the patient, and/or information relative to the other component of the surgical scene.
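Superposing a 3D model on the user's view implies projecting model geometry into the display's image plane. Below is a minimal pinhole-projection sketch; the intrinsics, the example point, and the function name are illustrative assumptions, not anything disclosed in the abstract.

```python
# Minimal sketch: project one vertex of a 3D model (already expressed in the
# display camera's coordinates) to pixel coordinates with a pinhole model.

import numpy as np

def project(vertex_cam: np.ndarray, fx: float, fy: float,
            cx: float, cy: float) -> tuple[float, float]:
    """Project a 3D point in camera coordinates to pixel coordinates."""
    x, y, z = vertex_cam
    return fx * x / z + cx, fy * y / z + cy

if __name__ == "__main__":
    # A point 0.5 m in front of the display, slightly left of and below center.
    print(project(np.array([-0.02, 0.03, 0.5]), 800, 800, 640, 360))
```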
CARDIOGRAM COLLECTION AND SOURCE LOCATION IDENTIFICATION
Systems are provided for generating data representing electromagnetic states of a heart for medical, scientific, research, and/or engineering purposes. The systems generate the data based on source configurations such as dimensions of, and scar or fibrosis or pro-arrhythmic substrate location within, a heart and a computational model of the electromagnetic output of the heart. The systems may dynamically generate the source configurations to provide representative source configurations that may be found in a population. For each source configuration of the electromagnetic source, the systems run a simulation of the functioning of the heart to generate modeled electromagnetic output (e.g., an electromagnetic mesh for each simulation step with a voltage at each point of the electromagnetic mesh) for that source configuration. The systems may generate a cardiogram for each source configuration from the modeled electromagnetic output of that source configuration for use in predicting the source location of an arrhythmia.
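The data flow this abstract describes, generating source configurations, simulating each one, and reducing the modeled electromagnetic output to a cardiogram, can be sketched as follows. The electrophysiology here is a random-number placeholder and the lead-vector reduction is an assumed simplification; only the pipeline shape follows the abstract.

```python
# Minimal sketch of the described pipeline: for each source configuration,
# run a simulation that yields one voltage per mesh point per step, then
# reduce each step to a cardiogram sample via a fixed lead vector.

import numpy as np

rng = np.random.default_rng(0)

def simulate_voltages(config: dict, n_points: int, n_steps: int) -> np.ndarray:
    """Stand-in for the computational heart model: voltages on the
    electromagnetic mesh at every simulation step."""
    return rng.normal(size=(n_steps, n_points))

def cardiogram(voltages: np.ndarray, lead: np.ndarray) -> np.ndarray:
    """Project mesh voltages onto a lead vector to obtain one trace."""
    return voltages @ lead

# Representative source configurations, e.g. varying a scar location.
configs = [{"scar_location": i} for i in range(3)]
lead = rng.normal(size=100)
traces = [cardiogram(simulate_voltages(c, 100, 500), lead) for c in configs]
print(len(traces), traces[0].shape)  # 3 traces, 500 samples each
```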
Drill guide assembly and method
Disclosed is a system to engage one or more tools. In the system, a drive shaft and collet may be assembled to selectively engage and disengage a plurality of tools. A guide may be used with the system to select various features of a procedure, such as depth and position.
Systems and methods for registration of location sensors
Provided are systems and methods for registration of location sensors. In one aspect, a system includes an instrument and a processor configured to provide a first set of commands to drive the instrument along a first branch of a luminal network, the first branch being outside a path to a target within a model. The processor is also configured to track a set of one or more registration parameters during the driving of the instrument along the first branch and to determine that the set of registration parameters satisfies a registration criterion. The processor is further configured to determine a registration between a location sensor coordinate system and a model coordinate system based on location data received from a set of location sensors during the driving of the instrument along the first branch and a second branch.
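The final step, aligning the location-sensor coordinate system with the model coordinate system from data gathered along the two branches, can be illustrated with the standard SVD-based (Kabsch) rigid fit. The abstract does not name an algorithm; this is one conventional choice, sketched under the assumption that corresponding point pairs are available.

```python
# Minimal sketch, assuming paired points: a rigid transform mapping
# location-sensor coordinates onto model coordinates via the Kabsch method.

import numpy as np

def rigid_registration(sensor_pts: np.ndarray, model_pts: np.ndarray):
    """Return rotation R and translation t with model ~= R @ sensor + t.

    Both inputs are (N, 3) arrays of corresponding points.
    """
    mu_s, mu_m = sensor_pts.mean(0), model_pts.mean(0)
    H = (sensor_pts - mu_s).T @ (model_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_s
    return R, t
```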
Machine-learning-based visual-haptic system for robotic surgical platforms
Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. The process can begin by receiving a set of training videos. The process then processes each training video to extract one or more video segments that depict a target tool-tissue interaction, wherein the target tool-tissue interaction involves a force exerted by one or more surgical tools on a tissue. Next, for each extracted video segment, the process annotates each video image in the segment with one of a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
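The training pipeline described above, segment the videos, annotate frames with predefined force levels, then fit a model, can be sketched as below. The segmenter, the labels, and the "model" are placeholders (the patent's actual machine-learning model is unspecified); only the pipeline structure follows the abstract.

```python
# Minimal sketch of the described pipeline: extract interaction segments from
# training videos, label every frame with a force level, fit a placeholder
# model. All functions are stand-ins for the system's real components.

import numpy as np

FORCE_LEVELS = ("none", "light", "moderate", "heavy")  # predefined classes

def extract_segments(video: np.ndarray) -> list[np.ndarray]:
    """Stand-in for detecting tool-tissue interaction segments."""
    return [video]  # pretend the whole clip is one interaction

def annotate(segment: np.ndarray) -> list[tuple[np.ndarray, int]]:
    """Pair every frame with a force-level label (dummy label here)."""
    return [(frame, 1) for frame in segment]

def train(examples: list[tuple[np.ndarray, int]]) -> np.ndarray:
    """Placeholder 'model': per-class mean frame intensity."""
    sums = np.zeros(len(FORCE_LEVELS))
    counts = np.zeros(len(FORCE_LEVELS))
    for frame, label in examples:
        sums[label] += frame.mean()
        counts[label] += 1
    return sums / np.maximum(counts, 1)

videos = [np.random.rand(8, 64, 64) for _ in range(2)]  # toy training set
examples = [ex for v in videos
            for seg in extract_segments(v)
            for ex in annotate(seg)]
print(train(examples))
```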