Patent classifications
G06T2207/30012
SYSTEMS AND METHODS FOR ASSISTING AND AUGMENTING SURGICAL PROCEDURES
Systems and methods for providing assistance to a surgeon during an implant surgery are disclosed. A method includes defining areas of interest in diagnostic data of a patient and defining a screw bone type based on the surgeon's input. After the areas of interest are defined, salient points are determined for them. Next, an XZ angle, an XY angle, and an entry-point position for a screw are determined based on the salient points of the areas of interest. A maximum screw diameter and a screw length are then determined based on the salient points. Finally, a matching screw is identified and suggested to the surgeon for use during the implant surgery.
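The abstract does not disclose the exact geometry, but the sequence of steps can be sketched as follows. All names (`plan_screw`, `entry`, `target`, `canal_width`) are hypothetical, and the angle definitions and the 70% diameter safety rule are illustrative assumptions, not the patented method.

```python
import numpy as np

def plan_screw(salient_points):
    """Illustrative sketch: derive screw parameters from salient points.

    salient_points: dict with hypothetical keys 'entry' and 'target'
    (each an (x, y, z) coordinate in image space) and 'canal_width',
    the narrowest bone corridor width along the trajectory.
    """
    entry = np.asarray(salient_points["entry"], dtype=float)
    target = np.asarray(salient_points["target"], dtype=float)
    v = target - entry                             # trajectory vector

    # Projection angles of the trajectory (illustrative definitions):
    xz_angle = np.degrees(np.arctan2(v[2], v[0]))  # angle in the XZ plane
    xy_angle = np.degrees(np.arctan2(v[1], v[0]))  # angle in the XY plane

    length = np.linalg.norm(v)                     # entry-to-target distance
    # Assumed safety rule: diameter capped at 70% of the narrowest corridor.
    max_diameter = 0.7 * salient_points["canal_width"]

    return {"entry_point": entry.tolist(), "xz_angle": xz_angle,
            "xy_angle": xy_angle, "length": length,
            "max_diameter": max_diameter}
```

A screw matching the computed length and diameter would then be looked up in an implant catalogue, which is the "identified and suggested" step of the abstract.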
Graphical user interface for use in a surgical navigation system with a robot arm
A surgical navigation system includes: a tracker (125) for real-time tracking of the position and orientation of a robot arm (191); a source of patient anatomical data (163) and a robot arm virtual image (166); a surgical navigation image generator (131) that generates a surgical navigation image (142A) including the patient anatomy (163) and the robot arm virtual image (166) in accordance with the current position and/or orientation data provided by the tracker (125); and a 3D display system (140) that shows the surgical navigation image (142A).
Intervertebral disc modeling
A method is disclosed for spinal anatomy segmentation. In one example, the method includes combining a fully convolutional network with a residual neural network and training the combined network end to end. The method also includes receiving at least one medical image of a spinal anatomy, applying the combined network to the at least one medical image, and segmenting at least one vertebral body from the at least one medical image of the spinal anatomy.
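The key architectural idea — a residual (identity-skip) connection wrapped around a convolutional layer — can be illustrated without a deep-learning framework. This is a single-channel toy sketch of one residual unit, not the patented network; a real segmentation model would stack many such units in a framework such as PyTorch or TensorFlow.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded single-channel 2D convolution, for illustration only."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(x, kernel):
    """Residual unit: output = ReLU(conv(x) + x).
    The identity skip connection (the '+ x') is what 'residual' refers to;
    it lets gradients flow directly through deep stacks during end-to-end training."""
    return np.maximum(conv2d_same(x, kernel) + x, 0.0)
```

Because the block's output size equals its input size, such units compose naturally with a fully convolutional network, whose per-pixel outputs yield the segmentation mask.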
Three-dimensional ultrasound image display method
Disclosed is a three-dimensional ultrasound image display method comprising the following steps: S1: obtaining a series of original two-dimensional images having spatial position and angle information by means of automatic or manual scanning; S2: performing image reconstruction on the basis of the original two-dimensional images to obtain three-dimensional volumetric images; S3: obtaining, from the three-dimensional volumetric images, one or more section images intersecting the original two-dimensional images, and obtaining one or more reconstructed two-dimensional images by means of image processing; S4: displaying the one or more original two-dimensional images and the one or more section images together in a three-dimensional space; and S5: selecting and displaying feature points, feature lines, and feature surfaces in the three-dimensional space on the basis of the original two-dimensional images. The method provides an efficient, high-precision three-dimensional image display technique that can be widely applied to ultrasound and other three-dimensional imaging modes.
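Steps S2 and S3 can be sketched in a simplified form. The sketch assumes the scanned slices are parallel and axis-aligned (a real system resamples slices with arbitrary position and angle into the volume); all function names are illustrative.

```python
import numpy as np

def reconstruct_volume(slices, z_positions, shape):
    """Step S2 sketch: insert parallel 2D slices into a 3D volume at their
    known z positions. A real reconstruction interpolates between arbitrarily
    oriented slices instead of assuming parallel planes."""
    vol = np.zeros(shape)
    for img, z in zip(slices, z_positions):
        vol[:, :, z] = img
    return vol

def orthogonal_section(vol, x):
    """Step S3 sketch: extract a reconstructed section that intersects the
    original slices -- here, simply the plane at a fixed x index."""
    return vol[x, :, :]
```

Displaying the original slices and such intersecting sections in one 3D scene (steps S4-S5) is then a rendering task for the display front end.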
SYSTEMS AND METHODS FOR IMAGE CROPPING AND ANATOMICAL STRUCTURE SEGMENTATION IN MEDICAL IMAGING
One or more medical images of a patient are processed by a first neural network model to determine a region of interest (ROI) or a cut-off plane. Information from the first neural network model is used to crop the medical images, and the cropped images serve as input to a second neural network model, which determines contours of anatomical structures in the medical images of the patient. Both the first and second neural network models are deep neural network models. By using cropped images in both the training and inference phases of the second model, contours are produced with sharp edges or flat surfaces.
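The two-stage structure of the pipeline can be sketched with stand-ins for the two networks. The threshold-based stubs below merely mark where the trained deep models would sit; the function names and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

def stage1_roi(image):
    """Stand-in for the first network: bounding box around above-threshold
    pixels. The real system uses a trained deep model to predict the ROI
    or cut-off plane."""
    ys, xs = np.nonzero(image > 0.5)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def stage2_contour(cropped):
    """Stand-in for the second network: a binary mask from which contours
    would be extracted downstream."""
    return (cropped > 0.5).astype(np.uint8)

def crop_then_segment(image):
    """The abstract's pipeline: ROI detection, cropping, then segmentation
    restricted to the cropped region."""
    y0, y1, x0, x1 = stage1_roi(image)
    mask = stage2_contour(image[y0:y1, x0:x1])
    return (y0, y1, x0, x1), mask
```

Cropping before segmentation keeps the second model's field of view tight around the anatomy, which is why the abstract attributes the sharp edges and flat surfaces to training and inference on cropped inputs.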
Systems and methods for automated detection and segmentation of vertebral centrum(s) in 3D images
Presented herein are systems and methods that allow for vertebral centrums of individual vertebrae to be identified and segmented within a 3D image of a subject (e.g., a CT or microCT image). In certain embodiments, the approaches described herein identify, within a graphical representation of an individual vertebra in a 3D image of a subject, multiple discrete and differentiable regions, one of which corresponds to a vertebral centrum of the individual vertebra. The region corresponding to the vertebral centrum may be automatically or manually (e.g., via a user interaction) classified as such. Identifying vertebral centrums in this manner facilitates streamlined quantitative analysis of 3D images for osteological research, notably, providing a basis for rapid and consistent evaluation of vertebral centrum morphometric attributes.
PROCESSING METHOD, MODEL TRAINING METHOD, DEVICE, AND STORAGE MEDIUM FOR SPINAL IMAGES
The present application discloses a method, device, and system for processing a medical image. The method includes obtaining a source spinal image; identifying one or more vertebral bodies and one or more intervertebral discs comprised in the source spinal image; determining the vertebral body recognition results corresponding to the one or more vertebral bodies and the intervertebral disc recognition results corresponding to the one or more intervertebral discs; and determining target recognition results corresponding to the source spinal image based at least in part on one or more of the vertebral body recognition results and the intervertebral disc recognition results.
DETERMINING HOW WELL AN OBJECT INSERTED INTO A PATIENT'S BODY IS POSITIONED
A method is disclosed for determining the positioning quality of an external apparatus inserted into a patient's body. In the method, image data of the patient's body is acquired; at least one first anatomical landmark is detected in at least one first subregion, in which the external apparatus is expected to be positioned; at least a first subsection of the external apparatus is sought in the at least one first subregion; at least one second anatomical landmark is detected in at least one second subregion; at least one second subsection of the external apparatus is then detected based upon the already localized first subsection; and the quality of the positioning of the at least one second subsection is determined by measuring a suitable dimension between the localized at least one second subsection and the at least one second landmark. A training method, a positioning quality determination facility, and a training facility are also disclosed.
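The final quality measure — "a suitable dimension between the localized subsection and the landmark" — reduces to a distance comparison, which can be sketched as follows. The function name, the expected-distance parameter, and the pass/fail tolerance rule are illustrative assumptions, not the patented metric.

```python
import numpy as np

def positioning_quality(subsection_xyz, landmark_xyz, expected_mm, tol_mm):
    """Sketch of the quality measure: Euclidean distance between the
    localized second subsection (e.g. a device tip) and the second
    anatomical landmark, compared against the expected placement distance.

    Returns the measured distance and whether it lies within tolerance."""
    d = float(np.linalg.norm(np.asarray(subsection_xyz, float) -
                             np.asarray(landmark_xyz, float)))
    return d, abs(d - expected_mm) <= tol_mm
```

In the described method this check runs only after the apparatus has been coarsely localized via the first landmark, which keeps the search for the second subsection anchored to a plausible region.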
Methods and systems for registering preoperative image data to intraoperative image data of a scene, such as a surgical scene
Mediated-reality imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array. The processing device can be configured to synthesize a three-dimensional (3D) image corresponding to a virtual perspective of the scene based on the intraoperative image data from the cameras. The imaging system is further configured to receive and/or store preoperative image data, such as medical scan data corresponding to a portion of a patient in the scene. The processing device can register the preoperative image data to the intraoperative image data, and overlay the registered preoperative image data over the corresponding portion of the 3D image of the scene to present a mediated-reality view.
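One standard way to register preoperative scan data to intraoperative data, when corresponding 3D points are available in both, is rigid point-set registration via the Kabsch/Procrustes algorithm. The sketch below shows that technique; the patent may use a different registration method, and the names are illustrative.

```python
import numpy as np

def rigid_register(pre_pts, intra_pts):
    """Kabsch/Procrustes: least-squares rotation R and translation t such
    that intra ~= R @ pre + t for corresponding point pairs."""
    P = np.asarray(pre_pts, float)
    Q = np.asarray(intra_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With R and t known, every preoperative voxel or mesh vertex can be mapped into the intraoperative frame and overlaid on the synthesized 3D view, as the abstract describes.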
Method and system for registering an operating space
A system for registering an operating space includes a first positioning mark, a local camera, a second positioning mark, a global camera, and a computer system. The first positioning mark is set on a patient. The local camera captures a first image covering the first positioning mark. The second positioning mark is disposed on the local camera. The global camera captures a second image covering the second positioning mark. The focal length of the global camera is shorter than the focal length of the local camera. The computer system is communicatively connected to the local camera and the global camera to provide a navigation interface based on the first image and the second image.
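The geometric idea behind the two-camera arrangement is a chain of poses: the global camera localizes the mark on the local camera, and the local camera localizes the mark on the patient, so composing the two transforms places the patient in the global frame. A minimal sketch with homogeneous 4x4 transforms (variable names are illustrative):

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def patient_in_global(T_global_localcam, T_localcam_patient):
    """Chain the two observations:
        T_global_patient = T_global_localcam @ T_localcam_patient
    where T_global_localcam comes from the global camera seeing the second
    mark, and T_localcam_patient from the local camera seeing the first mark."""
    return T_global_localcam @ T_localcam_patient
```

The shorter focal length (wider field of view) of the global camera lets it keep the movable local camera in sight, while the long-focal-length local camera resolves the patient mark precisely.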