Patent classifications
A61B34/32
SELF-ALIGNED DOCKING FOR A ROBOTIC SURGICAL PLATFORM
A robotic system can incorporate one or more sensors along a robotic arm in order to permit self- or auto-alignment of the robotic arm with a cannula during a docking procedure. The sensor can detect and measure a force or moment resulting from contact between an instrument driver of the robotic arm and the cannula. In response thereto, the robotic system can translate and/or rotate components of the robotic arm in order to align the instrument driver with the cannula, thereby facilitating latching of the cannula to the instrument driver.
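The correction described above amounts to an admittance-style control loop: residual contact forces and moments at the cannula interface drive small translations and rotations until the loads fall below a latching tolerance. A minimal sketch of one such step (the gains, tolerances, and units are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def alignment_step(force, moment, k_f=0.001, k_m=0.01):
    """One admittance-style correction step: translate the instrument
    driver along the measured contact force and rotate about the
    measured moment axis to relieve the misalignment."""
    translation = k_f * np.asarray(force, dtype=float)   # metres
    rotation = k_m * np.asarray(moment, dtype=float)     # axis-angle, radians
    return translation, rotation

def is_aligned(force, moment, f_tol=0.5, m_tol=0.05):
    """Docking is considered aligned once residual contact loads fall
    below the tolerances (assumed units: newtons, newton-metres)."""
    return np.linalg.norm(force) < f_tol and np.linalg.norm(moment) < m_tol
```

In practice the gains would be tuned to the arm's compliance, and the loop would repeat until `is_aligned` indicates that latching can proceed.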
ULTRASONIC ROBOTIC SURGICAL NAVIGATION
Surgical robot systems, anatomical structure tracker apparatuses, and ultrasound (US) transducer apparatuses are disclosed. A surgical robot system includes a robot, a US transducer, and at least one processor. The robot includes a robot base, a robot arm coupled to the robot base, and an end-effector coupled to the robot arm. The end-effector is configured to guide movement of a surgical instrument. The US transducer is coupled to the end-effector and operative to output US imaging data of anatomical structure located proximate to the end-effector. The at least one processor is operative to obtain an image volume for the patient and to track the pose of the end-effector relative to anatomical structure captured in the image volume based on the US imaging data.
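Tracking the end-effector pose against anatomy visible both in the US imaging data and in the preoperative image volume reduces, under a rigid-body assumption, to a point-set registration. The sketch below uses the standard Kabsch/Umeyama least-squares solution as a stand-in for whatever registration the patent actually employs; the point correspondences are assumed to be given:

```python
import numpy as np

def rigid_fit(P, Q):
    """Kabsch/Umeyama rigid registration: find rotation R and
    translation t minimising ||R @ p_i + t - q_i|| over corresponding
    3-D points given as rows of P and Q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given landmarks extracted from the US frames (P) and the same landmarks in the image volume (Q), the recovered (R, t) localizes the end-effector relative to the captured anatomy.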
SYSTEM AND METHODS FOR POSITIONING A MANIPULATOR ARM BY CLUTCHING WITHIN A NULL-PERPENDICULAR SPACE CONCURRENT WITH NULL-SPACE MOVEMENT
Devices, systems, and methods are provided for positioning an end effector or remote center of a manipulator arm by floating a first set of joints within a null-perpendicular joint velocity sub-space, and for providing a desired state or movement of a proximal portion of the manipulator arm, concurrent with end effector positioning, by driving a second set of joints within a null-space orthogonal to the null-perpendicular space. Methods include floating a first set of joints within a null-perpendicular space to allow manual positioning of one or both of a remote center or end effector position within a work space, and driving a second set of joints according to an auxiliary movement calculated within a null-space according to a desired state or movement of the manipulator arm during the floating of the joints. Various configurations for devices and systems utilizing such methods are provided herein.
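The two sub-spaces referred to above can be made concrete with the manipulator Jacobian J: joint velocities in its null space produce no end-effector motion, while the null-perpendicular component carries all of it. A minimal numerical sketch using the standard pseudoinverse-based projector (not the patent's specific formulation):

```python
import numpy as np

def decompose_joint_velocity(J, qdot):
    """Split a joint-velocity vector into its null-space component
    (no end-effector motion) and its null-perpendicular component,
    using the projector N = I - pinv(J) @ J."""
    J = np.asarray(J, float)
    N = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J  # null-space projector
    qdot_null = N @ qdot             # reconfigures the arm only
    qdot_perp = qdot - qdot_null     # carries all end-effector motion
    return qdot_null, qdot_perp
```

Floating the first set of joints corresponds to admitting manual motion in the `qdot_perp` component, while the driven auxiliary movement is confined to `qdot_null`, so the two never interfere.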
REAL TIME IMAGE GUIDED PORTABLE ROBOTIC INTERVENTION SYSTEM
An image-guided robotic intervention system (“IGRIS”) may be used to perform medical procedures on patients. IGRIS provides a real-time view of patient anatomy as well as an intended target or targets for the procedure, software that allows a user to plan an approach or trajectory path using either the image or the robotic device, software that allows a user to convert a series of 2D images into a 3D volume, and localization of the 3D volume with respect to real-time images during the procedure. IGRIS may include sensors to estimate the pose of the imaging device relative to the patient, improving the performance of that software with respect to runtime, robustness, and accuracy.
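The 2D-to-3D conversion step can be illustrated, in its simplest form, as stacking co-registered slices at a known spacing; real reconstruction from freehand or tracked 2D images would resample into a common grid using the estimated probe poses, which this sketch omits:

```python
import numpy as np

def stack_slices(slices, spacing_mm):
    """Assemble ordered, co-registered 2-D image slices into a 3-D
    volume array; `spacing_mm` is the assumed inter-slice distance."""
    volume = np.stack(slices, axis=0)                  # (n, rows, cols)
    z_coords = np.arange(volume.shape[0]) * spacing_mm # slice positions
    return volume, z_coords
```

The resulting volume, paired with the pose estimates from the sensors, is what gets localized against the real-time images during the procedure.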
Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display
A system includes a robotic arm, an autostereoscopic display, a user image capture device, an image processor, and a controller. The robotic arm is coupled to a patient image capture device. The autostereoscopic display is configured to display an image of a surgical site obtained from the patient image capture device. The image processor is configured to identify a location of at least part of a user in an image obtained from the user image capture device. The controller is configured to, in a first mode, adjust a three-dimensional aspect of the image displayed on the autostereoscopic display based on the identified location, and, in a second mode, move the robotic arm or instrument based on a relationship between the identified location and the surgical site image.
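The first mode's viewer-adaptive adjustment rests on standard stereoscopic geometry: the on-screen disparity that places a virtual point at a given depth depends on the viewer's interocular separation and distance from the screen, both of which change as the tracked viewer moves. A toy calculation (the default eye separation and the simple pinhole model are illustrative assumptions, not the patented method):

```python
def screen_disparity(point_depth_mm, screen_distance_mm, eye_separation_mm=63.0):
    """On-screen horizontal disparity for a virtual point at
    `point_depth_mm` from the viewer, shown on a screen at
    `screen_distance_mm` (standard stereoscopic geometry; positive
    disparity places the point behind the screen plane)."""
    return eye_separation_mm * (1.0 - screen_distance_mm / point_depth_mm)
```

A point at the screen plane gets zero disparity; re-solving this geometry from the identified user location is what keeps the 3D effect stable as the viewer moves.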
METHODS AND SYSTEMS FOR USING VOICE INPUT TO CONTROL A SURGICAL ROBOT
Methods, apparatuses, and systems for using speech input to control a surgical robot are disclosed. A surgical robot is disclosed that can be controlled by a surgeon using speech input in a conversational manner. The surgical robot receives either general commands or specific instructions, assesses whether the instructions can be completed within the capabilities of the available hardware and resources, and seeks approval from the surgeon prior to executing the instructions. Alternatively, the disclosed embodiments allow the surgeon to perform an action that cannot be safely completed by the surgical robot.
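The assess-then-confirm flow described above can be sketched as a capability check followed by an approval callback; the command names and capability table below are hypothetical placeholders, not drawn from the patent:

```python
# Hypothetical capability table: which parsed commands the available
# hardware and resources can actually carry out.
CAPABILITIES = {"move_arm": True, "change_view": True, "suture": False}

def handle_command(command, confirm):
    """Check a parsed voice command against available capabilities and
    seek surgeon approval before executing; `confirm` is a callback
    returning True/False (e.g. a verbal yes/no response)."""
    if not CAPABILITIES.get(command, False):
        return "rejected: outside available capabilities"
    if not confirm(command):
        return "cancelled by surgeon"
    return f"executing {command}"
```

Commands the robot cannot safely complete are rejected up front, leaving the surgeon to perform that action manually, which mirrors the fallback the abstract describes.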
Techniques for patient-specific morphing of virtual boundaries
Systems, methods, software and techniques are disclosed for morphing a generic virtual boundary into a patient-specific virtual boundary for an anatomical model. The generic virtual boundary comprises one or more morphable faces. An intersection of the generic virtual boundary and the anatomical model is computed to define a cross-sectional contour of the anatomical model. One or more faces of the generic virtual boundary are morphed to conform to the cross-sectional contour of the anatomical model to produce the patient-specific virtual boundary. In some cases, the morphed faces are spaced apart from the cross-sectional contour by an offset distance that accounts for a geometric feature of a surgical tool.
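The tool-geometry offset mentioned at the end can be illustrated for a convex cross-sectional contour by pushing each vertex away from the centroid by the offset distance; a production implementation would offset along per-face normals instead. A minimal sketch (the convexity assumption and centroid-based direction are simplifications):

```python
import numpy as np

def offset_convex_contour(points, offset):
    """Push each vertex of a convex cross-sectional contour outward
    from its centroid by `offset`, e.g. to leave clearance for a
    surgical tool's radius."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)                               # contour centroid
    d = pts - c
    norms = np.linalg.norm(d, axis=1, keepdims=True)   # vertex distances
    return c + d * (1.0 + offset / norms)
```

With `offset` set to the tool radius, the morphed faces end up spaced apart from the anatomical cross-section by exactly the clearance the abstract describes.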
Methods for performing medical procedures using a surgical robot
Embodiments are directed to a medical robot system including a robot coupled to an end-effectuator element, with the robot configured to control movement and positioning of the end-effectuator in relation to a patient. One embodiment is a method for removing bone with a robot system, comprising: taking a two-dimensional slice through a computed tomography scan volume of target anatomy; placing a perimeter on a pathway to the target anatomy; and controlling a drill assembly with the robot system to remove bone along the pathway within the intersection of the perimeter and the two-dimensional slice.
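The claimed intersection of perimeter and slice can be sketched as a Boolean mask operation on the CT data; the Hounsfield-unit threshold standing in for bone segmentation is an illustrative assumption:

```python
import numpy as np

def drill_targets(ct_volume, slice_index, perimeter_mask):
    """Intersect a 2-D CT slice with a planner-drawn perimeter mask to
    yield the pixel coordinates where the robot-guided drill may remove
    bone (simplified sketch; thresholding stands in for segmentation)."""
    slc = ct_volume[slice_index]                  # 2-D slice through the volume
    bone = slc > 300                              # assumed HU threshold for bone
    allowed = bone & perimeter_mask.astype(bool)  # restrict to the perimeter
    return np.argwhere(allowed)
```

Only voxels that are both bone and inside the user-placed perimeter survive the intersection, which is the constraint the robot enforces on the drill path.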