Patent classifications
A61B2090/364
Surgical guidance intersection display
A system and method for providing image guidance for placement of one or more medical devices at a target location. The system can determine one or more intersections between a medical device and an image region based at least in part on first emplacement data and second emplacement data. Using the determined intersections, the system can cause one or more displays to display perspective views of image guidance cues, including an intersection indicator in a virtual 3D space.
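The core geometric step in this abstract, finding where a tracked device's axis crosses an image region, can be sketched as a line–plane intersection. The function name, the use of two shaft points as "emplacement data", and the plane parameterization are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def device_plane_intersection(tip, tail, plane_point, plane_normal):
    """Intersect the device axis (the line through tip and tail) with an
    image plane, returning the 3D intersection point or None when the
    axis is parallel to the plane.

    tip, tail:    two 3D points on the device shaft (from emplacement data).
    plane_point:  any point on the image region.
    plane_normal: the image region's normal vector.
    """
    d = tail - tip                          # device axis direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:                   # axis parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - tip) / denom
    return tip + t * d                      # point to render as the indicator
```

The returned point would then be rendered as the intersection indicator in the virtual 3D scene.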
Medical manipulator system and image display method therefor
A medical manipulator system includes: an endoscope; a first manipulator equipped with a first treatment tool at a distal end thereof; a second manipulator equipped with a second treatment tool at a distal end thereof; a display for a user to view; and a controller configured to generate an image to be displayed on the display. The controller is configured to: acquire a first image taken by the endoscope, the first image containing the first treatment tool; and, in response to determining that the second treatment tool does not appear in the first image: calculate a relative distance and a relative direction between the first treatment tool and the second treatment tool; generate a second image showing the relative distance and the relative direction between the first treatment tool and the second treatment tool; and send the first image and the second image to the display.
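The relative distance and direction between the two treatment tools reduces to simple vector arithmetic once both tool tips are expressed in a common (e.g. endoscope camera) frame. A minimal sketch, with hypothetical names and the common-frame assumption stated up front:

```python
import numpy as np

def relative_distance_direction(p_first, p_second):
    """Distance and unit direction from the first tool tip toward the
    second, with both tip positions given in the same reference frame
    (assumed here to be the endoscope camera frame)."""
    delta = np.asarray(p_second, float) - np.asarray(p_first, float)
    dist = float(np.linalg.norm(delta))
    # Guard against coincident tips, where no direction is defined.
    direction = delta / dist if dist > 0 else np.zeros(3)
    return dist, direction
```

The distance and direction would then be drawn into the second image, e.g. as an arrow pointing toward the off-screen tool.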
System and methods for navigating interventional instrumentation
An image guided surgical system includes a marker attachable to and removable from an elongated surgical tool having a shaft, at least one camera, and an image processing system in communication with the camera and configured to obtain an image of the surgical tool. The image processing system is configured to operate in a calibration mode to generate a template, display the template on a display device, and receive a user input, after the image of the surgical tool is aligned to the template, to adjust a length of the template to substantially match a length of the surgical tool. A storage device in communication with the image processing system stores calibration information that associates a position of the marker with a position of the tip of the shaft of the surgical tool based on the adjusted length of the template.
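Once calibration has associated the marker with the tool tip via the adjusted template length, tracking the tip reduces to extrapolating along the shaft axis from the marker. A sketch under the simplifying assumption that the marker sits on the shaft axis and only the calibrated length matters:

```python
import numpy as np

def tip_from_marker(marker_pos, shaft_axis, calibrated_length):
    """Estimate the tool-tip position from the tracked marker.

    marker_pos:        tracked 3D position of the removable marker.
    shaft_axis:        vector along the shaft from the marker toward the
                       tip (normalized here, so any scale is accepted).
    calibrated_length: marker-to-tip distance stored from the template
                       adjustment in calibration mode.
    """
    axis = np.asarray(shaft_axis, float)
    axis = axis / np.linalg.norm(axis)      # make the direction unit length
    return np.asarray(marker_pos, float) + calibrated_length * axis
```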
Master/slave registration and control for teleoperation
A teleoperated system comprises a display, a master input device, and a control system. The control system is configured to determine an orientation of an end effector reference frame relative to a field of view reference frame, determine an orientation of a master input device reference frame relative to a display reference frame, establish an alignment relationship between the master input device reference frame and the display reference frame, and command, based on the alignment relationship, a change in a pose of the end effector in response to a change in a pose of the master input device. The alignment relationship is independent of a position relationship between the master input device reference frame and the display reference frame. In one aspect, the teleoperated system is a telemedical system such as a telesurgical system.
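Because the alignment relationship is purely an orientation relationship, it can be modeled as a single rotation matrix that conjugates master-input rotation increments into the end-effector frame. This is a hedged sketch of that idea, not the patent's control law; the function name and the conjugation formulation are assumptions:

```python
import numpy as np

def command_end_effector_delta(align, master_delta):
    """Map a master-input rotation increment into the end-effector frame.

    align:        3x3 alignment rotation between the master input device
                  frame and the end-effector frame, derived from the
                  display and field-of-view orientations. Positions are
                  deliberately not involved, matching the abstract's
                  position-independence.
    master_delta: 3x3 incremental rotation applied at the master device.
    """
    # Conjugating by the alignment rotates the increment's axis so a
    # motion the operator sees on the display produces the matching
    # motion of the end effector in the camera's field of view.
    return align @ master_delta @ align.T
```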
IMAGE BASED MOTION CONTROL CORRECTION
The present invention relates to a method of adjusting control commands for moving a medical camera connected to a motorized support structure, wherein the adjustment is based on images provided by the camera. Based on a comparison of at least two images provided by the camera, an actual motion of the camera is determined and compared with the intended motion defined by a control command forwarded to the motorized support structure. If a deviation between the intended motion and the actual motion is detected, a correction is applied to the control command such that the actual motion of the camera coincides with the intended motion.
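The correction loop described above can be sketched as simple deviation feedback. The patent does not specify the correction law, so the proportional-feedback form and the gain parameter here are illustrative assumptions:

```python
import numpy as np

def corrected_command(intended_motion, actual_motion, command, gain=1.0):
    """Adjust a camera-motion command so the actual motion tracks the
    intended one.

    intended_motion: motion the control command was meant to produce
                     (e.g. an image-plane shift).
    actual_motion:   motion estimated by comparing two camera images.
    command:         raw command sent to the motorized support structure.
    gain:            fraction of the deviation fed back per cycle
                     (an assumed tuning parameter).
    """
    deviation = (np.asarray(intended_motion, float)
                 - np.asarray(actual_motion, float))
    # Feed the deviation back so subsequent actual motion converges
    # toward the intended motion.
    return np.asarray(command, float) + gain * deviation
```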
MEDICAL OBSERVATION SYSTEM, CONTROL DEVICE, AND CONTROL METHOD
A medical observation system includes: a plurality of types of sensor units that measure information regarding an internal environment; an acquisition unit (131) that acquires individual sensor values of the plurality of types of sensor units; a comparison unit (132) that compares the individual sensor values of the plurality of types of sensor units acquired by the acquisition unit (131); and a determination unit (134) that determines a sensor unit to be used for observing the internal environment among the plurality of types of sensor units based on a comparison result obtained by the comparison unit (132).
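The acquisition/comparison/determination pipeline amounts to scoring each sensor's reading and picking the best one. A minimal sketch, assuming a caller-supplied scoring function stands in for the comparison unit's (unspecified) criterion:

```python
def select_sensor(readings, score):
    """Determine which sensor unit to use for observing the internal
    environment.

    readings: {sensor_name: acquired sensor value}, as gathered by the
              acquisition unit.
    score:    maps a sensor value to a comparable quality figure
              (higher is better); stands in for the comparison unit's
              criterion, which the abstract leaves unspecified.
    """
    # The determination unit picks the sensor whose compared score wins.
    return max(readings, key=lambda name: score(readings[name]))
```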
ROBOTIC SURGICAL NAVIGATION USING A PROPRIOCEPTIVE DIGITAL SURGICAL STEREOSCOPIC CAMERA SYSTEM
A robotic surgical navigation system is disclosed. An example system includes a stereoscopic camera and a robotic arm having an end-effector connected to the camera. The system also includes a navigation computer that determines a first transformation between the stereoscopic camera and a target surgical site, a second transformation between the end-effector and a robotic base of the robotic arm, and a third transformation between the robotic base and the target surgical site. The navigation computer calculates a fourth transformation using the first, second, and third transformations. The fourth transformation represents a transformation between the end-effector and the stereoscopic camera. The navigation computer uses the transformations to determine coordinates for a view vector of the stereoscopic camera that are in a coordinate system of the robotic arm, thereby enabling movement of the robotic arm based on commands provided by an operator in relation to the view vector of the camera.
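The fourth transformation can be composed by chaining the three measured ones as 4x4 homogeneous matrices. The frame-naming convention below (T_b_a maps points from frame a into frame b) and the exact directions of the measured transforms are assumptions; the patent only states which frame pairs are related:

```python
import numpy as np

def end_effector_to_camera(T_site_cam, T_base_ee, T_site_base):
    """Compose the end-effector -> camera transformation from the three
    measured transformations, using the convention that T_b_a maps
    points expressed in frame a into frame b.

    T_site_cam:  camera relative to the target surgical site (first).
    T_base_ee:   end-effector relative to the robotic base (second).
    T_site_base: robotic base relative to the target site (third).
    """
    # Chain: camera <- site <- base <- end-effector
    T_cam_site = np.linalg.inv(T_site_cam)
    return T_cam_site @ T_site_base @ T_base_ee
```

With this transform in hand, the camera's view vector can be expressed in the robotic arm's coordinate system, which is what lets operator commands given "in view" drive the arm.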
IMAGING-BASED SIZING OPTIMIZATION OF ENDOTRACHEAL TUBE FOR MECHANICAL VENTILATION
An intubation assistance device includes an electronic controller configured to: identify, from one or more images of a patient, information about the patient including at least a diameter of a trachea and a length of an intubation pathway; determine a recommended ETT size including an ETT diameter and an ETT depth of insertion from the determined diameter of the trachea and the determined length of the intubation pathway; and display the recommended ETT size on a display device.
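The sizing step maps the two image-derived measurements to a diameter and depth recommendation. The patent abstract does not give the mapping, so the ratio and safety-margin constants below are loudly illustrative placeholders, not clinical values:

```python
def recommend_ett(trachea_diameter_mm, pathway_length_mm,
                  diameter_ratio=0.7, tip_margin_mm=30.0):
    """Recommend an ETT size from image-derived measurements.

    trachea_diameter_mm: tracheal diameter measured from patient images.
    pathway_length_mm:   length of the intubation pathway from the images.
    diameter_ratio:      ASSUMED fraction of the tracheal diameter used
                         for the tube diameter (placeholder, not clinical).
    tip_margin_mm:       ASSUMED margin keeping the tube tip short of the
                         carina (placeholder, not clinical).
    """
    ett_diameter = diameter_ratio * trachea_diameter_mm
    depth_of_insertion = pathway_length_mm - tip_margin_mm
    return ett_diameter, depth_of_insertion
```

The returned pair would then be shown on the display device as the recommended ETT size.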
SYSTEMS AND METHODS UTILIZING MACHINE-LEARNING FOR IN VIVO NAVIGATION
A method of providing in vivo navigation of a medical device includes: receiving input medical imaging data of a patient's anatomy; receiving input non-optical in vivo image data from a sensor on a distal end of the device in the anatomy; using a trained model to locate the distal end in the input imaging data, wherein: the model is trained, based on (i) training medical imaging data and training non-optical in vivo image data of one or more individuals' anatomy and (ii) registration data associating the training image data with locations in the training imaging data as ground truth, to learn associations between the training image data and the training imaging data; determining an output location of the medical device using the learned associations and the input data; modifying the input imaging data to depict the determined location; and causing a display to output the modified input imaging data.
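At inference time, the method above reduces to: run the trained model to get a location, then depict that location in the imaging data. A sketch with the trained model abstracted as a callable (the marker-value convention and function names are assumptions):

```python
import numpy as np

def navigate_step(imaging, frame, locate, marker_value=255):
    """One in vivo navigation step: locate the device and depict it.

    imaging: input medical imaging data (e.g. a 3D volume array).
    frame:   input non-optical in vivo image data from the distal sensor.
    locate:  stand-in for the trained model, mapping (imaging, frame) to
             a voxel index via its learned associations.
    """
    idx = locate(imaging, frame)
    shown = imaging.copy()
    shown[idx] = marker_value   # modify the imaging data to depict the tip
    return idx, shown           # shown is what the display would output
```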
Robotic navigation of robotic surgical systems
In certain embodiments, the systems, apparatus, and methods disclosed herein relate to robotic surgical systems with built-in navigation capability for patient position tracking and surgical instrument guidance during a surgical procedure, without the need for a separate navigation system. Robot-based navigation of surgical instruments during surgical procedures allows for easy registration and operative volume identification and tracking. The systems, apparatus, and methods herein allow re-registration, model updates, and operative volume revisions to be performed intra-operatively with minimal disruption to the surgical workflow. In certain embodiments, navigational assistance can be provided to a surgeon by displaying a surgical instrument's position relative to a patient's anatomy. Additionally, by revising pre-operatively defined data such as operative volumes, patient-robot orientation relationships, and anatomical models of the patient, a higher degree of precision and a lower risk of complications and serious medical error can be achieved.