Patent classifications
A61B2090/368
Systems and methods for augmented reality based surgical navigation
The present disclosure involves object recognition as a method of registration, using a stereoscopic camera on Augmented Reality (AR) glasses or an endoscope as the image capture technology. Exemplary objects include surgical tools, anatomical components or features, such as bone or cartilage, etc. By detecting just a portion of the object in the image data of the surgical scene, the present disclosure may register and track a portion of the patient's anatomy, such as the pelvis, the knee, etc. The present disclosure also optionally displays information on the AR glasses themselves, such as the entire pelvis, the femur, the tibia, etc. The present disclosure may include combinations of the foregoing features, and may eliminate the need for electromagnetic, inertial, or infrared stereoscopic tracking as the tracking technology.
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of a patient's anatomy from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
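The registration step above can be illustrated with a standard rigid-body fit: given marker positions in the virtual-model frame and the same markers as observed by the sensors, a rotation and translation aligning the two sets can be estimated with the Kabsch/Umeyama method. This is a minimal sketch of one common approach, not the patented method itself; the function name and point layout are illustrative assumptions.

```python
import numpy as np

def register_rigid(model_pts, observed_pts):
    """Estimate rotation R and translation t mapping corresponding
    marker positions in the model frame onto their observed positions
    (Kabsch/Umeyama rigid alignment)."""
    mc = model_pts.mean(axis=0)          # centroid of model markers
    oc = observed_pts.mean(axis=0)       # centroid of observed markers
    H = (model_pts - mc).T @ (observed_pts - oc)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t
```

With three or more non-collinear markers this yields a unique rigid transform, which is why marker-based systems typically place at least three fiducials around the anatomy.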
ROTATIONAL ACTUATORS FOR SURGICAL ROBOTICS SYSTEMS
A system for use in surgery includes a central body, a visualization system operably connected to the central body, a video rendering system, a head-mounted display for displaying images from the video rendering system, a sensor system, and a robotic device operably connected to the central body. The visualization system includes at least one camera and a pan system and/or a tilt system. The sensor system tracks the position and/or orientation in space of the head-mounted display relative to a reference point. The pan system and/or the tilt system are configured to adjust the field of view of the camera in response to information from the sensor system about changes in at least one of position and orientation in space of the head-mounted display relative to the reference point.
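The pan/tilt behavior described above amounts to steering the camera toward the tracked orientation of the head-mounted display. A minimal sketch of such a controller is shown below, assuming a simple proportional law with actuator limits; the gain, limit, and function name are illustrative assumptions, not details from the disclosure.

```python
def pan_tilt_command(hmd_yaw, hmd_pitch, cam_yaw, cam_pitch,
                     gain=0.8, limit=30.0):
    """Proportional pan/tilt command (degrees) steering the camera's
    field of view toward the head-mounted display's orientation,
    clamped to assumed actuator limits."""
    def clamp(x):
        return max(-limit, min(limit, x))
    pan = clamp(gain * (hmd_yaw - cam_yaw))    # reduce yaw error
    tilt = clamp(gain * (hmd_pitch - cam_pitch))  # reduce pitch error
    return pan, tilt
```

A real system would close this loop against the sensor system's pose estimates at the camera frame rate.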
Methods for Autoregistration of Arthroscopic Video Images to Preoperative Models and Devices Thereof
Surgical methods and devices that facilitate registration of arthroscopic video to preoperative models are disclosed. With this technology, a machine learning model is applied to diagnostic video data captured via an arthroscope to identify an anatomical structure. An anatomical structure in a three-dimensional (3D) anatomical model is registered to the anatomical structure represented in the diagnostic video data. The 3D anatomical model is generated from preoperative image data. The anatomical structure is then tracked intraoperatively based on the registration and without requiring fixation of fiducial markers to the patient anatomy. A simulated projected view of the registered anatomical structure is generated from the 3D anatomical model based on a determined orientation of the arthroscope during capture of intraoperative video data. The simulated projected view is scaled and oriented based on one or more landmark features of the anatomical structure extracted from the intraoperative video data.
Ultra-wideband positioning for wireless ultrasound tracking and communication
A method of designing an orthopedic implant comprising: (a) iteratively evaluating possible shapes of a dynamic orthopedic implant using actual anatomical shape considerations and kinematic shape considerations; and, (b) selecting a dynamic orthopedic implant shape from one of the possible shapes, where the dynamic orthopedic implant shape selected satisfies predetermined kinematic and anatomical constraints.
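The iterate-and-select loop in steps (a) and (b) can be sketched as a search over candidate shapes scored against anatomical and kinematic constraints. The scoring functions, thresholds, and names below are illustrative assumptions; the claim does not specify how the constraints are evaluated.

```python
def select_implant_shape(candidates, anatomical_fit, kinematic_fit,
                         anat_min=0.9, kin_min=0.9):
    """Iterate over candidate implant shapes and return the first whose
    anatomical and kinematic fitness scores both satisfy predetermined
    constraints (scoring functions and thresholds are assumptions)."""
    for shape in candidates:
        if anatomical_fit(shape) >= anat_min and kinematic_fit(shape) >= kin_min:
            return shape
    return None  # no candidate satisfied both constraints
```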
Enhanced ophthalmic surgical experience using a virtual reality head-mounted display
An ophthalmic surgical system comprises: a camera optically coupled to a surgical microscope; a virtual reality (VR) headset worn by a surgeon; and a VR data processing unit configured to communicate with the surgical microscope, the VR headset, and an ophthalmic surgical apparatus, wherein the VR data processing unit is configured to: project a real time video screen of video received from the camera into the VR headset; project a patient information screen into the VR headset to provide the patient information directly to the surgeon during ophthalmic surgery; project a surgical apparatus information screen into the VR headset; project a surgical apparatus input control screen into the VR headset to provide the surgeon with direct control over the surgical apparatus; and control which ones of the screens are visible in the VR headset based on inputs indicating head movements of the surgeon as detected by the VR headset.
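Head-movement-driven screen selection can be reduced to mapping the headset's reported orientation onto angular zones, one per screen. The zone boundaries and screen names below are illustrative assumptions; the abstract does not specify the mapping.

```python
def visible_screen(head_yaw_deg):
    """Choose which projected VR screen is visible based on the
    surgeon's head yaw in degrees (zones and names are assumptions:
    look left for patient info, right for apparatus info, center
    for the live surgical video)."""
    if head_yaw_deg < -15.0:
        return "patient_info"
    if head_yaw_deg > 15.0:
        return "apparatus_info"
    return "live_video"
```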
SYSTEMS AND METHODS FOR INTELLIGENTLY SEEDING REGISTRATION
A method of registering sets of anatomical data for use during a medical procedure is provided herein. The method may include accessing a first set of model points of a patient anatomy of interest and intra-operatively acquiring a second set of model points by visualizing a portion of the anatomical surface in the patient with a vision probe. The method may further include extracting system information, including kinematic information from a robotic arm of a medical system and/or setup information, and generating an initial seed transformation based on the extracted system information. Thereafter, the method may include applying the initial seed transformation to the first set of model points and generating a first registration between the first set of model points and the second set of model points to permit model and actual information to be viewed and used together by an operator.
VIRTUAL OBJECT DISPLAY DEVICE, METHOD, PROGRAM, AND SYSTEM
A camera 14 acquires a background image B0, and a virtual object acquisition unit 22 acquires a virtual object S0. A display information acquisition unit 23 acquires display information indicating a position, at which the virtual object S0 is displayed, from the background image B0, and a display control unit 24 displays the virtual object S0 on a display 15 based on the display information. A change information acquisition unit 25 acquires change information for changing the display state of the virtual object S0 according to the relative relationship between a reference marker image 36 and each of the other marker images 37, among a plurality of marker images 36 and 37 for changing the display state of the virtual object S0 that are included in the background image B0. A display state change unit 26 changes the display state of the virtual object according to the change information.
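One concrete form of "changing the display state according to the relative relationship between marker images" is scaling the virtual object by the on-image separation between the reference marker and another marker. The linear mapping, names, and baseline separation below are illustrative assumptions.

```python
import numpy as np

def display_scale(reference_marker_px, other_marker_px,
                  base_scale=1.0, base_separation=200.0):
    """Scale factor for the rendered virtual object derived from the
    pixel separation between the reference marker image and another
    marker image (mapping and constants are assumptions)."""
    sep = np.linalg.norm(np.asarray(other_marker_px, dtype=float) -
                         np.asarray(reference_marker_px, dtype=float))
    return base_scale * sep / base_separation
```

Other display-state changes (orientation, transparency) could be driven the same way from the markers' relative angle or ordering.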
Secondary instrument control in a computer-assisted teleoperated system
Systems and methods for a teleoperational system and control thereof are provided. An exemplary system includes a first manipulator configured to support an instrument moveable within an instrument workspace, the instrument having an instrument frame of reference, and includes an operator input device configured to receive movement commands from an operator. The system further includes a control system to implement the movement commands by comparing an orientation of the instrument with an orientation of a field of view of the instrument workspace to produce an orientation comparison. When the comparison does not meet certain criteria, the control system causes instrument motion in a first direction relative to the instrument frame in response to a movement command. When the comparison meets the criteria, the control system causes instrument motion in a second direction relative to the instrument frame in response to the movement command. The second direction differs from the first direction.
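A simple instance of the orientation comparison above is measuring the angle between the instrument axis and the viewing axis, and reversing the commanded direction once that angle crosses a threshold, so that on-screen motion continues to match operator intent when the instrument points back toward the camera. The threshold and names are illustrative assumptions.

```python
import numpy as np

def map_motion(command_dir, instrument_axis, view_axis, thresh_deg=90.0):
    """Return the commanded direction unchanged while the instrument
    and view axes are within thresh_deg of each other, and reversed
    otherwise (a sketch of criterion-based direction remapping)."""
    cosang = np.dot(instrument_axis, view_axis) / (
        np.linalg.norm(instrument_axis) * np.linalg.norm(view_axis))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return command_dir if angle < thresh_deg else -command_dir
```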
REALITY-AUGMENTED MORPHOLOGICAL PROCEDURE
Data representative of a physical feature of a morphologic subject is received in connection with a procedure to be carried out with respect to the morphologic subject. A view of the morphologic subject overlaid by a virtual image of the physical feature is rendered for a practitioner of the procedure, including generating the virtual image of the physical feature based on the representative data, and rendering the virtual image of the physical feature within the view in accordance with one or more reference points on the morphologic subject such that the virtual image enables in-situ visualization of the physical feature with respect to the morphologic subject.