Patent classifications
G06T2207/30052
Digital image analysis for robotic installation of surgical implants
Computer-implemented digital image analysis methods, apparatuses, and systems for robotic installation of surgical implants are disclosed. A disclosed apparatus plans a route within an anatomy of a patient from an incision site to a surgical implant site for robotic installation of a surgical implant. The apparatus uses digital imaging data to identify less-invasive installation paths and determine the dimensions of the surgical implant components being used. The apparatus segments the surgical implant into surgical implant subcomponents and modifies the surgical implant subcomponents, such that they can be inserted using the identified less-invasive installation paths.
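As a rough illustration of the route-planning step only (not the disclosed implementation), planning a less-invasive path from an incision site to an implant site can be modeled as a shortest-path search over an occupancy grid derived from imaging data. The grid encoding, function name, and coordinate convention below are all assumptions for the sketch.

```python
from collections import deque

def plan_route(grid, incision, implant_site):
    """Breadth-first search for a shortest route on a 2-D occupancy grid
    (0 = traversable tissue corridor, 1 = blocked). Returns the list of
    cells from the incision site to the implant site, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([incision])
    parent = {incision: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == implant_site:       # reconstruct the route backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None
```

A real planner would work on a 3-D voxel volume and weight paths by tissue invasiveness rather than treating all free cells equally.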
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of an anatomy of a patient from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
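The registration step described above, aligning a virtual model with observed marker positions, is commonly solved with a least-squares rigid fit such as the Kabsch/Procrustes algorithm. The sketch below assumes paired 3-D marker coordinates in both frames; the abstract does not specify the registration method.

```python
import numpy as np

def register_markers(model_pts, observed_pts):
    """Least-squares rigid registration (Kabsch): find rotation R and
    translation t mapping model marker positions onto observed ones."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(observed_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    D = np.diag([1.0] * (len(cp) - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With R and t in hand, every vertex of the virtual model can be mapped into the observed anatomy's frame for mixed-reality overlay.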
SYSTEMS AND METHODS FOR MESH AND TACK DIAGNOSIS
A system for surgical mesh analysis includes a display screen, a processor, and a memory. The memory has instructions stored thereon which, when executed by the processor, cause the system to: access an image of a surgical site (602). The image includes the surgical mesh. The surgical mesh includes a grid of cells. The instructions, when executed by the processor, further cause the system to: detect a first desired location on the surgical mesh in the image (604); detect a second desired location on the surgical mesh in the image (606); determine a distance between the first desired location and the second desired location along the surgical mesh in the image (608); and display, on the display screen, the determined distance (610).
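A minimal sketch of the distance step (608), assuming the path between the two desired locations has been traced as a polyline of pixel coordinates over the mesh, and that a millimetres-per-pixel scale is known (for example from the known pitch of the mesh's grid cells). Both assumptions go beyond what the abstract states.

```python
import math

def mesh_distance(path_px, mm_per_px):
    """Sum the Euclidean lengths of the polyline segments traced along
    the mesh between the two detected locations, converted to mm."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path_px, path_px[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total * mm_per_px
```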
METHODS FOR ARTHROSCOPIC SURGERY VIDEO SEGMENTATION AND DEVICES THEREFOR
Methods, non-transitory computer readable media, and arthroscopic video segmentation apparatuses and systems that facilitate improved, automatic segmentation analysis of videos of arthroscopic procedures are disclosed. With this technology, a video feed of an arthroscopic surgery can be automatically segmented using machine learning models and one or more tags related to the segments can be associated with the video feed. The generated videos can be output in real time to provide segmented information related to the surgical procedure or can be saved with the one or more segments tagged for playback for training or informational purposes.
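One plausible shape for the tagging output, assuming a frame-level classifier has already produced a phase label per frame (the model itself is out of scope here): group runs of identical labels into tagged segments with timestamps. The dictionary keys and label names are illustrative.

```python
def tag_segments(frame_labels, fps):
    """Group consecutive identical per-frame phase labels (e.g. from a
    machine learning frame classifier) into tagged video segments."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append({
                "tag": frame_labels[start],
                "start_s": start / fps,
                "end_s": i / fps,
            })
            start = i
    return segments
```

The resulting segment list could be emitted in real time or stored alongside the video for tagged playback.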
SYSTEMS AND METHODS FOR IMMEDIATE IMAGE QUALITY FEEDBACK
An apparatus (1) for providing image quality feedback during a medical imaging examination includes at least one electronic processor (20) programmed to: receive a live video feed (17) of a display (6) of an imaging device controller (4) of an imaging device (2) performing the medical imaging examination; extract a preview image (12) from the live video feed; perform an image analysis (38) on the extracted preview image to determine whether the extracted preview image satisfies an alert criterion; and output an alert (30) when the extracted preview image satisfies the alert criterion as determined by the image analysis.
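A toy version of the alert-criterion check (38) on an extracted preview frame. The specific criteria and thresholds below (exposure and contrast, via mean and standard deviation of pixel intensities) are illustrative assumptions; the disclosure does not specify them.

```python
import numpy as np

def check_preview(frame, min_mean=30.0, min_std=10.0):
    """Flag preview frames that look too dark or too flat to be
    diagnostic. Returns a list of issues; non-empty -> raise an alert."""
    f = np.asarray(frame, float)
    issues = []
    if f.mean() < min_mean:
        issues.append("underexposed")
    if f.std() < min_std:
        issues.append("low contrast")
    return issues
```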
Estimating bone mineral density from plain radiograph by assessing bone texture with deep learning
The present disclosure provides a computer-implemented method, a device, and a computer program product for radiographic bone mineral density (BMD) estimation. The method includes receiving a plain radiograph, detecting landmarks for a bone structure included in the plain radiograph, extracting a region of interest (ROI) from the plain radiograph based on the detected landmarks, and estimating the BMD for the ROI extracted from the plain radiograph by using a deep neural network.
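The landmark-to-ROI step can be sketched as a landmark bounding box expanded by a relative margin and clipped to the image. This is an assumption about how the ROI is derived; the margin value and (x, y) landmark convention are illustrative.

```python
import numpy as np

def extract_roi(image, landmarks, margin=0.1):
    """Crop an ROI as the bounding box of detected (x, y) landmarks,
    expanded by a relative margin and clipped to the image bounds."""
    pts = np.asarray(landmarks, float)
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    h, w = image.shape[:2]
    x0 = max(int(x0 - mx), 0); x1 = min(int(np.ceil(x1 + mx)), w)
    y0 = max(int(y0 - my), 0); y1 = min(int(np.ceil(y1 + my)), h)
    return image[y0:y1, x0:x1]
```

The cropped ROI would then be resized and normalized before being fed to the deep neural network that regresses BMD from bone texture.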
METHOD AND DEVICE FOR DOCUMENTING THE USE OF AT LEAST ONE IMPLANT WHICH IS USED IN A SURGERY AND/OR THE LOCALIZATION THEREOF
A method and device for documenting use of at least one implant used in a surgery and/or for the localization thereof. The implant can be provided for a surgery and used in the surgery. The method includes: a) providing a surgical set having a plurality of implants; b) capturing a first sequence of images of the plurality of implants of the surgical set using a device; c) analyzing the sequence of images of the plurality of implants in order to identify each individual implant; d) optionally outputting a signal when one and/or each implant has been identified; e) capturing a second sequence of images of the plurality of implants of the surgical set using the device after a surgery in order to ascertain missing implants; f) classifying a missing implant as used in surgery.
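The core of steps (c), (e), and (f), once each implant has been identified in both image sequences, reduces to a set difference: implants present before the surgery but absent afterwards are classified as used. The ID strings are placeholders.

```python
def classify_missing(before_ids, after_ids):
    """Compare implant identities recognised in the pre- and post-surgery
    image sequences; any implant seen before but not after is classified
    as used in the surgery."""
    return sorted(set(before_ids) - set(after_ids))
```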
Techniques for patient-specific morphing of virtual boundaries
Systems, methods, software and techniques are disclosed for morphing a generic virtual boundary into a patient-specific virtual boundary for an anatomical model. The generic virtual boundary comprises one or more morphable faces. An intersection of the generic virtual boundary and the anatomical model is computed to define a cross-sectional contour of the anatomical model. One or more faces of the generic virtual boundary are morphed to conform to the cross-sectional contour of the anatomical model to produce the patient-specific virtual boundary. In some cases, the morphed faces are spaced apart from the cross-sectional contour by an offset distance that accounts for a geometric feature of a surgical tool.
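A crude 2-D stand-in for the offset step: push each point of a convex cross-sectional contour away from its centroid by a fixed clearance distance (e.g. a tool-radius allowance). The real technique offsets morphed boundary faces along surface normals in 3-D; this sketch only illustrates the idea of a geometric offset.

```python
import math

def offset_contour(contour, offset):
    """Offset a convex closed 2-D contour radially from its centroid by
    a fixed distance, a simplified analogue of spacing morphed faces
    apart from the cross-sectional contour."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    out = []
    for x, y in contour:
        d = math.hypot(x - cx, y - cy) or 1.0  # avoid dividing by zero
        out.append((x + offset * (x - cx) / d, y + offset * (y - cy) / d))
    return out
```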
STENT VISUALIZATION ENHANCEMENT USING CASCADED SPATIAL TRANSFORMATION NETWORK
An apparatus for stent visualization includes a hardware processor configured to input one or more stent images from a sequence of X-ray images, together with corresponding balloon marker location data, to a cascaded spatial transform network. The network separates the background from the one or more stent images, generating a transformed stent image with a clear background and a separate non-stent background image. The stent layer and the non-stent layer are generated by a neural network without online optimization. A mapping function f maps the inputs (the sequence of images and the marker coordinates) into the two single-image outputs.
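For context, a classical baseline that the learned network improves upon is marker-based registration followed by temporal averaging: translate each frame so its balloon-marker midpoint coincides with a reference frame's, then average, which reinforces the stent and blurs the moving background. The sketch below uses integer-pixel shifts only and is not the disclosed network.

```python
import numpy as np

def align_and_average(frames, markers, ref=0):
    """Classical stent-enhancement baseline: shift each frame so the
    midpoint of its two balloon markers ((x0, y0), (x1, y1)) lands on
    the reference frame's midpoint, then average the aligned frames."""
    mids = [((x0 + x1) / 2, (y0 + y1) / 2) for (x0, y0), (x1, y1) in markers]
    rx, ry = mids[ref]
    acc = np.zeros_like(np.asarray(frames[0], float))
    for f, (mx, my) in zip(frames, mids):
        dy, dx = int(round(ry - my)), int(round(rx - mx))
        acc += np.roll(np.asarray(f, float), (dy, dx), axis=(0, 1))
    return acc / len(frames)
```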
Selection of intraocular lens based on a plurality of machine learning models
A method and system for selecting an intraocular lens use a controller having a processor and tangible, non-transitory memory. A plurality of machine learning models is selectively executable by the controller. The controller is configured to receive at least one pre-operative image of the eye and extract, via a first input machine learning model, a first set of data. The controller is configured to receive multiple biometric parameters of the eye and extract, via a second input machine learning model, a second set of data. The first set of data and the second set of data are combined to produce a mixed set of data. The controller is configured to generate, via an output machine learning model, at least one output factor based on the mixed set of data. An intraocular lens is selected based in part on the at least one output factor.
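The mixing and selection logic might be wired up as below, with the trained output model stood in by a plain function and the output factor interpreted as a predicted lens power. All names and the nearest-power selection rule are assumptions for illustration.

```python
def select_lens(image_feats, biometric_feats, output_model, lens_bank):
    """Combine the two extracted data sets into one mixed feature
    vector, score it with the output model, and pick the available
    lens whose labelled power is closest to the predicted factor."""
    mixed = list(image_feats) + list(biometric_feats)
    predicted_power = output_model(mixed)
    return min(lens_bank, key=lambda lens: abs(lens["power"] - predicted_power))
```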