Patent classification: A61B2017/00207
Endoscope manipulator and method for controlling the same
An endoscope manipulator for performing robot-assisted endoscope manipulation comprises: a movable robot base with a hollow trunk and a vertical lifting joint; a passive joint set, with one end mounted to an upper end of the vertical lifting joint, for manually setting an initial pose of the endoscope; an active joint set, mounted to the other end of the passive joint set, for intra-operative pose control of the endoscope; and a compliant endoscope holder, mountable to an end-effector of the active joint set, which passively transitions to a compliant state when an external force applied to the endoscopic lens held by the holder exceeds a threshold.
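The threshold-triggered compliance described above can be illustrated with a minimal sketch. This is a hypothetical model only, not the patented implementation; the class name, the newton-valued threshold, and the `update` interface are all assumptions for illustration.

```python
FORCE_THRESHOLD_N = 5.0  # assumed threshold in newtons (illustrative value)

class CompliantHolder:
    """Toy model of a holder that is rigid until the external force
    measured at the endoscopic lens exceeds a threshold, after which
    it passively remains in a compliant state."""

    def __init__(self, threshold=FORCE_THRESHOLD_N):
        self.threshold = threshold
        self.compliant = False

    def update(self, measured_force):
        """Latch into the compliant state once the force exceeds the threshold."""
        if measured_force > self.threshold:
            self.compliant = True
        return self.compliant
```

Latching (rather than toggling) reflects the passive nature of the change: once the threshold is crossed, the holder yields rather than fighting the applied force.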
Computer-assisted arthroplasty system
A computer-implemented method for creating activity-optimized cutting guides for surgical procedures includes receiving one or more pre-operative images depicting one or more anatomical joints of a patient, and creating a three-dimensional anatomical model of the one or more anatomical joints based on the one or more pre-operative images. One or more patient-specific anatomical measurements are determined based on the three-dimensional anatomical model. A statistical model of joint performance is applied to the patient-specific anatomical measurements to identify one or more cut angles for performing a surgical procedure. A patient-specific cutting guide is created that comprises one or more apertures positioned based on the one or more cut angles.
Redundant communication channels and processing of imaging feeds
A computing system may use redundant communication pathways for communicating surgical imaging feed(s). The computing system may obtain multiple surgical video streams via multiple pathways. The multiple surgical video streams may include copies of the same video. The surgical video streams may be obtained, for example, from the same intra-body imaging feed, such as an intra-body visual light feed. For example, a first video stream may be obtained via a communication pathway, and a second video stream may be obtained via another communication pathway. The computing system may display or send a surgical video stream for display. The computing system may determine whether the video stream being displayed has encountered any issues. Upon detecting an issue with the video stream being displayed, the computing system may display or send another obtained surgical video stream for display.
Method For Stylus And Hand Gesture Based Image Guided Surgery
A system is disclosed that allows for determining a position of an instrument in an object space. The position may include a three-dimensional location and at least one degree of freedom of orientation, or any appropriate number of degrees of freedom. The tracked position may be based on imaging the instrument, including imaging an external contour of the instrument. The movement of the instrument can be determined based on a three-dimensional determination of a movement of the instrument contour.
MINIMALLY INVASIVE HISTOTRIPSY SYSTEMS AND METHODS
A histotripsy therapy system configured for the treatment of tissue is provided, which may include any number of features. Provided herein are systems and methods that provide efficacious non-invasive and minimally invasive therapeutic, diagnostic and research procedures. In particular, provided herein are optimized systems and methods that provide targeted, efficacious histotripsy in a variety of different regions and under a variety of different conditions without causing undesired tissue damage to intervening/non-target tissues or structures.
SYSTEM AND METHOD FOR AUGMENTED REALITY DATA INTERACTION FOR ULTRASOUND IMAGING
A mixed reality (MR) visualization system includes an MR device comprising a holographic display configured to display a holographic image to an operator, a hand-held ultrasound imaging device configured to obtain a real-time ultrasound image of a subject's anatomy, and a computing device communicatively coupled to the MR device and the hand-held ultrasound imaging device. The computing device includes a non-volatile memory and a processor. The computing device is configured to receive the real-time ultrasound image, determine a real-time 3D position and orientation of the hand-held ultrasound imaging device, generate a modified real-time ultrasound image by modifying the real-time ultrasound image to correspond to the real-time 3D position and orientation of the hand-held ultrasound imaging device, and transmit the modified real-time ultrasound image to the MR device for display as the holographic image positioned at a predetermined location relative to the hand-held ultrasound imaging device.
Registering Intra-Operative Images Transformed from Pre-Operative Images of Different Imaging-Modality for Computer Assisted Navigation During Surgery
A computer platform is provided for computer assisted navigation during surgery. The computer platform includes at least one processor that is operative to transform pre-operative images of a patient obtained from a first imaging modality to an estimate of the pre-operative images of the patient in a second imaging modality that is different than the first imaging modality. The at least one processor is further operative to register the estimate of the pre-operative images of the patient in the second imaging modality to intra-operative navigable images or data of the patient.
SURGICAL SYSTEM FOR REVISION ORTHOPEDIC SURGICAL PROCEDURES
A surgical planning system for use in surgical procedures to repair an anatomy of interest includes a preplanning system to generate a virtual surgical plan and a mixed reality system that includes a visualization device wearable by a user to view the virtual surgical plan projected in a real environment. The virtual surgical plan includes a 3D virtual model of the anatomy of interest. When wearing the visualization device, the user can align the 3D virtual model with the real anatomy of interest, thereby achieving a registration between details of the virtual surgical plan and the real anatomy of interest. The registration enables a surgeon to implement the virtual surgical plan on the real anatomy of interest without the use of tracking markers.
OPTICAL TRACKING DEVICE WITH BUILT-IN STRUCTURED LIGHT MODULE
A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.
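Depth recovery from a structured-light module and an image sensor at a known separation typically rests on triangulation. As an illustrative sketch only (not the patent's algorithm), with baseline b between projector and sensor and focal length f in pixels, a projected NIR feature observed at pixel disparity d maps to depth z = f·b/d:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate depth (meters) for one structured-light feature.

    f_px:         focal length of the image sensor, in pixels
    baseline_m:   known projector-to-sensor separation, in meters
    disparity_px: observed pixel shift of the projected pattern feature
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

Repeating this per pattern feature yields the depth map from which the 3D anatomical surface can be reconstructed; the function name and parameterization here are assumptions for illustration.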
Extended Intelligence Ecosystem for Soft Tissue Luminal Applications
Disclosed herein are techniques for implementing an intelligent assistance (“IA”) or extended intelligence (“EI”) ecosystem for soft tissue luminal applications. In various embodiments, a computing system analyzes first layer input data (indicating movement, position, and/or relative distance for a person(s) and object(s) in a room) and second layer input data. The second layer input data includes sensor and/or imaging data of a patient. Based on the analysis, the computing system generates one or more recommendations for guiding a medical professional in navigating a surgical device(s) with respect to one or more soft tissue luminal portions of the patient. The recommendation(s) include at least one mapped guide toward, in, and/or around the one or more soft tissue luminal portions. The mapped guide can include data corresponding to at least three dimensions, e.g., a 3D image/video. The computing system can present the recommendation(s) as image-based output, using a user experience device.