A61B90/361

DYNAMIC SCALING FOR A ROBOTIC SURGICAL SYSTEM

A robotic surgical system applies a scaling factor between user input from a user input device and corresponding movements of the robotic manipulator. Scaling factors may be applied or adjusted based on detected conditions such as the type of instrument being manipulated, the detected distance between multiple instruments being manipulated, or user biometric parameters.
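
The scaling behavior described above can be sketched as a simple input-to-output mapping. All function names, condition rules, and numeric values below are hypothetical illustrations, not taken from the disclosure:

```python
def select_scale(instrument_type: str, tip_separation_mm: float) -> float:
    """Pick a motion-scaling factor from detected conditions (illustrative rules)."""
    scale = 0.5 if instrument_type == "microsurgical" else 1.0
    if tip_separation_mm < 5.0:  # instruments close together: slow the motion further
        scale *= 0.5
    return scale

def scale_motion(user_delta_mm, scale):
    """Apply the scaling factor to a user-input displacement vector."""
    return [scale * d for d in user_delta_mm]

s = select_scale("microsurgical", tip_separation_mm=3.0)   # 0.25
print(scale_motion([2.0, 0.0, -1.0], s))                   # → [0.5, 0.0, -0.25]
```

The key point is that the manipulator displacement is the user displacement multiplied by a factor that can change at runtime as conditions are detected.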

MULTIPLE-INPUT INSTRUMENT POSITION DETERMINATION

A robotic system includes an instrument including an elongate shaft, a robotic manipulator configured to manipulate the elongate shaft of the instrument, and control circuitry communicatively coupled to the robotic manipulator. The control circuitry is configured to determine a first estimated position of at least a portion of the elongate shaft based at least in part on robotic command data, determine a second estimated position of the at least a portion of the elongate shaft based at least in part on position sensor data, compare the first estimated position and the second estimated position, and generate a third estimated position based at least in part on the comparison of the first estimated position to the second estimated position.
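
One simple way to realize "generate a third estimate from a comparison of two estimates" is a gated weighted blend. The gating threshold, weights, and function names below are hypothetical, chosen only to illustrate the idea:

```python
import math

def fuse_positions(cmd_pos, sensor_pos, w_sensor=0.7, gate_mm=10.0):
    """Blend a command-data position estimate with a sensor-data estimate.

    If the two estimates disagree by more than `gate_mm`, fall back to the
    command-data estimate (an illustrative outlier policy); otherwise return
    a weighted average favoring the sensor estimate.
    """
    if math.dist(cmd_pos, sensor_pos) > gate_mm:
        return list(cmd_pos)
    return [(1 - w_sensor) * c + w_sensor * s for c, s in zip(cmd_pos, sensor_pos)]

print(fuse_positions([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))    # → [0.7, 0.7, 0.7]
print(fuse_positions([0.0, 0.0, 0.0], [100.0, 0.0, 0.0]))  # → [0.0, 0.0, 0.0]
```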

Apparatuses, Methods and Computer Programs for Controlling a Microscope System
20230046644 · 2023-02-16

Examples relate to apparatuses, methods and computer programs for controlling a microscope system, and to a corresponding microscope system. An apparatus for controlling a microscope system comprises an interface for communicating with a camera module. The camera module is suitable for providing camera image data of a head of a user of the microscope system. The apparatus comprises a processing module configured to obtain the camera image data from the camera module via the interface. The processing module is configured to process the camera image data to determine information on an angular orientation of the head of the user relative to a display of the microscope system. The processing module is configured to provide a control signal for a robotic adjustment system of the microscope system based on the information on the angular orientation of the head of the user.
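
The last step of that pipeline, turning the measured head angle into a control signal for the robotic adjustment system, could be sketched as proportional control with a deadband. The deadband, gain, and function name are hypothetical values for illustration only:

```python
def adjustment_command(head_angle_deg: float, deadband_deg: float = 2.0,
                       gain: float = 0.5) -> float:
    """Map the head-vs-display angular offset to a robotic adjustment command.

    Small offsets inside the deadband produce no motion, so normal head
    movement does not cause jitter; larger offsets produce a command
    proportional to the offset.
    """
    if abs(head_angle_deg) <= deadband_deg:
        return 0.0
    return gain * head_angle_deg

print(adjustment_command(10.0))  # → 5.0
print(adjustment_command(1.0))   # → 0.0
```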

ENDOSCOPIC VESSEL HARVESTING WITH THERMAL MANAGEMENT AND AUGMENTED REALITY DISPLAY

A vessel harvesting system removes a target vessel from a patient for use as a bypass. An elongated harvesting instrument is inserted into a body along a path of a target vessel that includes at least one side branch. The harvesting instrument includes a cutter for applying thermal energy to sever and cauterize the side branch. An endoscopic camera captures visible-light images from a distal tip of the instrument within a dissected tunnel around the target vessel. A thermal camera captures thermograms coinciding with the visible-light images to characterize a temperature present at respective surfaces in the tunnel. An image processor (e.g., an electronic controller) renders a video stream including the visible-light images and an overlay depicting the temperatures present on at least some of the respective surfaces when applying the thermal energy. A display presenting the video stream and overlay to a user can be an augmented-reality display.
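
The overlay step, highlighting surfaces whose temperature exceeds a level of interest, could be sketched as a per-pixel threshold on the thermogram. The threshold value and function name are hypothetical:

```python
def temperature_overlay(thermogram, warn_c: float = 60.0):
    """Flag thermogram pixels at or above a threshold temperature so they
    can be highlighted in the rendered video overlay.

    `thermogram` is a 2D list of per-pixel surface temperatures in °C;
    the result is a same-shaped boolean mask.
    """
    return [[t >= warn_c for t in row] for row in thermogram]

print(temperature_overlay([[55.0, 65.0], [70.0, 40.0]]))
# → [[False, True], [True, False]]
```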

METHOD OF FITTING A KNEE PROSTHESIS WITH ASSISTANCE OF AN AUGMENTED REALITY SYSTEM

The method of fitting a knee prosthesis in a knee of a patient includes displaying, by a visual display device of a mobile augmented reality navigation system worn or carried by a user performing or assisting at least one navigation-assisted stage of the method, at least one visual element superposed on a view of at least a portion of a surgical scene of the fitting of the knee prosthesis. The visual element can be a 3D model of at least one portion of the knee of the patient, a 3D model of another component of the surgical scene, information relative to the at least one portion of the knee of the patient, and/or information relative to the other component of the surgical scene.

Systems and methods for registration of location sensors

Provided are systems and methods for registration of location sensors. In one aspect, a system includes an instrument and a processor configured to provide a first set of commands to drive the instrument along a first branch of the luminal network, the first branch being outside a path to a target within a model. The processor is also configured to track a set of one or more registration parameters during the driving of the instrument along the first branch and determine that the set of registration parameters satisfy a registration criterion. The processor is further configured to determine a registration between a location sensor coordinate system and a model coordinate system based on location data received from a set of location sensors during the driving of the instrument along the first branch and a second branch.
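
A minimal sketch of "determine a registration between a location sensor coordinate system and a model coordinate system" is given below for the translation-only case (centroid alignment of corresponding points). A full rigid registration would also recover rotation, e.g. via an SVD-based (Kabsch) solve; the function name and data layout here are assumptions:

```python
def register_translation(sensor_pts, model_pts):
    """Estimate the 3D translation aligning sensor-frame points to
    corresponding model-frame points as the difference of centroids.
    Both inputs are equal-length lists of [x, y, z] points.
    """
    n = len(sensor_pts)
    cs = [sum(p[i] for p in sensor_pts) / n for i in range(3)]  # sensor centroid
    cm = [sum(p[i] for p in model_pts) / n for i in range(3)]   # model centroid
    return [m - s for s, m in zip(cs, cm)]

# Points collected while driving along two branches, and their model-frame
# correspondences (hypothetical data):
t = register_translation([[0, 0, 0], [1, 0, 0]], [[2, 3, 4], [3, 3, 4]])
print(t)  # → [2.0, 3.0, 4.0]
```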

Staple instrument comprising a firing path display

A surgical stapling system for stapling the tissue of a patient is disclosed. The stapling system comprises a housing, a shaft extending from the housing, and an end effector extending from the shaft. The end effector comprises a plurality of staples removably stored therein and, also, an anvil configured to deform the staples. The stapling system further comprises a firing mechanism configured to deploy the staples along a staple firing path longer than 60 mm, a camera configured to capture an image of the patient tissue, a display, and a controller configured to generate an image of the staple firing path, wherein the images are displayed on the display.

Machine-learning-based visual-haptic system for robotic surgical platforms

Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
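
The annotation step, pairing every frame of an extracted video segment with its predefined force level, can be sketched as building (frame, label) training pairs. The data layout and names below are hypothetical stand-ins for whatever the real pipeline uses:

```python
def build_training_pairs(segments, force_levels):
    """Pair each frame of each tool-tissue interaction segment with its
    annotated force level.

    `segments` is a list of (frames, level_index) tuples, where `frames`
    is the list of video images in one extracted segment and `level_index`
    selects one of the predefined `force_levels`.
    """
    pairs = []
    for frames, level_idx in segments:
        label = force_levels[level_idx]
        pairs.extend((frame, label) for frame in frames)
    return pairs

segments = [(["frame1", "frame2"], 0), (["frame3"], 1)]
print(build_training_pairs(segments, ["low_force", "high_force"]))
# → [('frame1', 'low_force'), ('frame2', 'low_force'), ('frame3', 'high_force')]
```

The resulting pairs are what a supervised model would then be trained on.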

Method of hub communication, processing, display, and cloud analytics

A method of displaying an operational parameter of a surgical system is disclosed. The method includes receiving, by a cloud computing system of the surgical system, first usage data, from a first subset of surgical hubs of the surgical system; receiving, by the cloud computing system, second usage data, from a second subset of surgical hubs of the surgical system; analyzing, by the cloud computing system, the first and the second usage data to correlate the first and the second usage data with surgical outcome data; determining, by the cloud computing system, based on the correlation, a recommended medical resource usage configuration; and displaying, on respective displays on the first and the second subset of surgical hubs, indications of the recommended medical resource usage configuration.
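
The correlate-then-recommend step could be sketched as pooling (configuration, outcome) records from both hub subsets and selecting the configuration with the best mean outcome. This is an illustrative stand-in for the correlation analysis, with hypothetical names and data:

```python
def recommend_configuration(usage_records):
    """Return the medical-resource-usage configuration with the highest
    mean surgical-outcome score.

    `usage_records` is a list of (configuration, outcome_score) pairs
    pooled from the first and second subsets of surgical hubs.
    """
    totals = {}
    for config, score in usage_records:
        running = totals.setdefault(config, [0.0, 0])
        running[0] += score
        running[1] += 1
    return max(totals, key=lambda c: totals[c][0] / totals[c][1])

records = [("config_A", 0.9), ("config_A", 0.7), ("config_B", 0.95)]
print(recommend_configuration(records))  # → config_B
```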

Hand controller for robotic surgery system
11576736 · 2023-02-14

A robotic control system has a wand that emits multiple narrow beams of light onto a light sensor array or, when used with a camera, onto a surface, thereby defining the wand's changing position and attitude. A computer uses this information to direct the relative motion of robotic tools or remote processes, in the manner of mouse-controlled processes but in three dimensions, and the system includes motion compensation means and means for reducing latency.