Patent classifications
G06T2210/41
Soft tissue balancing in articular surgery
Systems and methods may be used to perform robot-aided surgery. A system may include a display device and a computing device including a memory device with instructions. The instructions can cause the system to access surgical data, calculate medial and lateral gap data, calculate a recommended component set, and generate a graphical user interface. Accessing surgical data can include accessing soft tissue data indicative of at least tension in soft tissues surrounding a surgical location. The graphical user interface can include an interactive trapezoidal graphic overlaid onto a graphical representation of a distal femur and a proximal tibia. The interactive trapezoidal graphic can include a graphical representation of a medial total gap, a lateral total gap, and a recommended spacer size. The interactive trapezoidal graphic can update in response to adjustments in implant parameters to assist in surgical planning.
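The gap-and-spacer recommendation described above can be sketched in miniature. This is an illustrative stand-in, not the patented method: the function name, the millimetre units, the available spacer sizes, and the "fit the tighter compartment" rule are all assumptions.

```python
# Hypothetical sketch: given measured medial and lateral gaps (mm), pick the
# largest trial spacer that still fits the tighter compartment. The size set
# and the fitting rule are illustrative assumptions.

def recommend_spacer(medial_gap_mm, lateral_gap_mm, sizes=(8, 9, 10, 11, 12)):
    """Largest spacer no thicker than the tighter compartment's gap."""
    tighter = min(medial_gap_mm, lateral_gap_mm)
    fitting = [s for s in sizes if s <= tighter]
    return max(fitting) if fitting else min(sizes)

print(recommend_spacer(10.5, 9.2))  # 9
```

A real planner would update this recommendation interactively as the surgeon adjusts implant parameters, which is what the trapezoidal graphic visualizes.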
SYSTEMS AND METHODS FOR USING VIRTUAL REALITY, AUGMENTED REALITY, AND/OR SYNTHETIC 3-DIMENSIONAL INFORMATION FOR THE MEASUREMENT OF HUMAN OCULAR PERFORMANCE
A system or method for measuring human ocular performance can be implemented using an eye sensor, a head orientation sensor, an electronic circuit, and a display that presents one of virtual reality information, augmented reality information, or synthetic computer-generated 3-dimensional information. The device is configured for measuring saccades, smooth pursuit tracking, nystagmus, vergence, eyelid closure, or focused position of the eyes. The eye sensor comprises a video camera that senses vertical and horizontal movement of at least one eye. The head orientation sensor senses pitch and yaw in the range of frequencies between 0.01 Hertz and 15 Hertz. The system uses a Fourier transform to generate a vertical gain signal and a horizontal gain signal.
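The Fourier-based gain computation can be illustrated concretely: at a given stimulus frequency, the gain is the ratio of the eye-movement spectral magnitude to the head-movement spectral magnitude. The sampling rate, the 2 Hz stimulus, and the signal names below are assumptions for the example, not values from the patent.

```python
import numpy as np

def gain_at_frequency(eye_signal, head_signal, sample_rate_hz, stim_hz):
    """Eye/head gain at stim_hz via the discrete Fourier transform."""
    freqs = np.fft.rfftfreq(len(eye_signal), d=1.0 / sample_rate_hz)
    k = np.argmin(np.abs(freqs - stim_hz))        # nearest FFT bin
    eye_mag = np.abs(np.fft.rfft(eye_signal))[k]
    head_mag = np.abs(np.fft.rfft(head_signal))[k]
    return eye_mag / head_mag

fs, f0 = 100.0, 2.0                               # 100 Hz sampling, 2 Hz stimulus
t = np.arange(0, 10, 1 / fs)
head = np.sin(2 * np.pi * f0 * t)                 # head yaw (arbitrary units)
eye = 0.8 * np.sin(2 * np.pi * f0 * t)            # eye response at 80% gain
print(round(gain_at_frequency(eye, head, fs, f0), 2))  # 0.8
```

Applying the same computation to the vertical and horizontal channels yields the vertical and horizontal gain signals the abstract mentions.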
Skin 3D model for medical procedure
The present disclosure provides a method for a medical procedure that uses augmented reality to superimpose a patient's medical images (e.g., CT or MRI) over a real-time camera view of the patient. Prior to the medical procedure, the patient's medical images are processed to generate a 3D model that represents a skin contour of the patient's body. The 3D model is further processed to generate a skin marker that comprises only selected portions of the 3D model. At the time of the medical procedure, 3D images of the patient's body are captured using a camera and then registered with the skin marker. The patient's medical images can then be superimposed over the real-time camera view that is presented to the person performing the medical procedure.
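The registration step above aligns two 3D point sets. One common way to solve the rigid part of such an alignment, given corresponding point pairs, is the Kabsch algorithm; the sketch below shows that technique as an illustration, without claiming it is the method the patent uses.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rotation R and translation t such that
    R @ source_i + t ~= target_i (Kabsch algorithm on corresponding pairs)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Given camera-captured skin points and the corresponding skin-marker points, `R` and `t` map the marker into the camera frame so the medical images can be overlaid.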
METHOD AND APPARATUS FOR GENERATION OR EDITING OF LAYER DELINEATIONS
Methods are disclosed for the generation and editing of layer delineations within three-dimensional tomography scans. Cross sections of a subject are generated and presented to an operator, who has the ability to edit layer delineations within the cross section, or determine parameters used to generate new cross sections. By guiding an operator through a set of displayed cross sections, the methods can allow for a more rapid, efficient, and error-free segmentation of the subject. The cross sections can be nonplanar in shape or planar and non-axis-aligned. The cross sections can be restricted to exclude one or more user-defined regions of the subject, or to include only one or more user-defined regions of the subject. The cross sections can be localized to a point-of-interest. Iterative implementations of the methods can be used to arrive at a segmentation deemed satisfactory by the user.
METHOD FOR GENERATING A 3D PRINTABLE MODEL OF A PATIENT SPECIFIC ANATOMY
A computer implemented method for generating a 3D printable model of a patient specific anatomic feature from 2D medical images is provided. A 3D image is automatically generated from a set of 2D medical images. A machine learning based image segmentation technique is used to segment the generated 3D image. A 3D printable model of the patient specific anatomic feature is created from the segmented 3D image.
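A toy version of that pipeline can be sketched end to end. The thresholding below is only a stand-in for the machine-learning segmentation step, and the surface-voxel extraction stands in for the meshing step (e.g., marching cubes) that would produce the printable model; everything here is illustrative.

```python
import numpy as np

def stack_slices(slices):
    """Stack 2D slices into a 3D volume indexed (z, y, x)."""
    return np.stack(slices, axis=0)

def segment(volume, threshold):
    """Binary anatomy mask; a simple stand-in for the ML segmentation."""
    return volume > threshold

def surface_voxels(mask):
    """Voxels of the mask touching the background: the shell a meshing step
    would convert into a printable surface."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = np.ones_like(mask)
    for axis in range(3):
        for shift in (-1, 1):
            interior &= np.roll(padded, shift, axis)[1:-1, 1:-1, 1:-1]
    return mask & ~interior
```

For a solid 3x3x3 cube only the single centre voxel is interior, so 26 of its 27 voxels lie on the surface.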
SYSTEMS, METHODS, APPARATUSES, AND COMPUTER-READABLE MEDIA FOR IMAGE MANAGEMENT IN IMAGE-GUIDED MEDICAL PROCEDURES
Presented herein are methods, systems, devices, and computer-readable media for image management in image-guided medical procedures. Some embodiments herein allow a physician to use multiple instruments for a surgery and simultaneously provide image-guidance data for those instruments. Various embodiments disclosed herein provide information to physicians about procedures they are performing, the devices (such as ablation needles, ultrasound transducers or probes, scalpels, cauterizers, etc.) they are using during the procedure, the relative emplacements or poses of these devices, prediction information for those devices, and other information. Some embodiments provide useful information about 3D data sets and allow the operator to control the presentation of regions of interest. Additionally, some embodiments provide for quick calibration of surgical instruments or attachments for surgical instruments.
AUGMENTING A MEDICAL IMAGE WITH AN INTELLIGENT RULER
Disclosed is a computer-implemented method of overlaying a representation of a medical instrument over a two-dimensional medical image. The method finds at least one feature point along a detection line that is defined relative to the medical instrument in the medical image, calculates a geometrical quantity based on this feature point, and adds the geometrical quantity to the two-dimensional medical image.
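A minimal sketch of that idea: march along the detection line, take the first pixel whose intensity crosses a threshold as the feature point, and report a distance as the geometrical quantity. The stepping scheme, the threshold test, and the millimetre scaling are assumptions for illustration.

```python
import numpy as np

def feature_point_on_line(image, start, direction, threshold, max_steps=200):
    """First point along the detection line whose intensity exceeds threshold.
    start is (row, col); direction is a per-step offset in pixels."""
    pos = np.asarray(start, float)
    step = np.asarray(direction, float)
    for _ in range(max_steps):
        r, c = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= r < image.shape[0] and 0 <= c < image.shape[1]):
            break
        if image[r, c] > threshold:
            return (r, c)
        pos += step
    return None

def distance_mm(point_a, point_b, mm_per_pixel):
    """Geometrical quantity added to the image: distance between two points."""
    return float(np.hypot(point_a[0] - point_b[0],
                          point_a[1] - point_b[1])) * mm_per_pixel
```

The returned quantity would then be rendered next to the instrument overlay, which is what makes the ruler "intelligent" rather than a fixed graticule.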
SYSTEMS AND METHODS FOR ASSISTING IN PUNCTURE
The present disclosure relates to systems and methods. The method may include obtaining at least one image of an object. The method may include determining a focal point in each of the at least one image. The method may include determining at least one puncture parameter of a puncture operation to be performed on the object based on information associated with the focal point. The method may further include displaying the focal point and a puncture representation of the at least one puncture parameter. The puncture representation may at least indicate a puncture point.
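Two puncture parameters the abstract alludes to, insertion depth and insertion angle, follow from straight-line geometry between a skin entry point and the focal point. The coordinate convention (millimetres, z pointing into the patient) and the angle reference are assumptions for this sketch.

```python
import math

def puncture_parameters(entry, target):
    """Depth (mm) and insertion angle from vertical (degrees) for a straight
    path from the skin entry point to the focal point. Coordinates are
    (x, y, z) in mm with z pointing into the patient; 0 degrees is straight in."""
    dx, dy, dz = (t - e for e, t in zip(entry, target))
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = math.degrees(math.acos(dz / depth))
    return depth, angle
```

For an entry at the origin and a focal point 30 mm lateral and 40 mm deep, the needle path is 50 mm long at roughly 37 degrees from vertical, a 3-4-5 triangle.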
HANDS-FREE MEDICATION TRACKING
The disclosed systems and methods provide hands-free medication tracking. A method includes providing an augmented reality device attachable to a face of a user. The method also includes determining, using one or more sensors of the augmented reality device, a user action to be carried out with respect to a medication. The method also includes presenting, via a display interface of the augmented reality device, a visual indicator to assist with the user action. The method also includes confirming, via the one or more sensors of the augmented reality device, a completion of the user action. The method also includes sending, via a communication interface of the augmented reality device, an update message to a server indicating the completion of the user action, wherein the update message causes the server to update a medication inventory in a database.
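The update message sent to the server could look something like the JSON payload below. Every field name, the action vocabulary, and the timestamp format are assumptions for illustration; the patent does not specify a wire format.

```python
import json
from datetime import datetime, timezone

def build_update_message(user_id, medication_id, action, quantity):
    """Illustrative update message the headset would send so the server can
    adjust the medication inventory. Field names are hypothetical."""
    return json.dumps({
        "user_id": user_id,
        "medication_id": medication_id,
        "action": action,              # e.g. "dispense", "administer"
        "quantity": quantity,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    })
```

On receipt, the server would decrement (or otherwise adjust) the inventory row for `medication_id` in its database.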
3D BIOLOGICAL CELL CONSTITUENT CONCENTRATION
A three-dimensional (3D) biological cell constituent concentration reconstruction method may include capturing two-dimensional images of a biological cell at different angles, virtually partitioning the biological cell into a 3D stack of voxels, assigning cell constituent concentration estimations to respective voxels based upon a plurality of the two-dimensional images, and forming a 3D cell constituent concentration model of the biological cell based upon the voxels and respective cell constituent concentration estimations.
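One simple way to assign a per-voxel estimate from multiple angled views is naive backprojection: project each voxel centre into every 2D image and average the sampled pixel values. The parallel-projection geometry, rotation about the z axis, and nearest-pixel sampling below are simplifying assumptions, not the patented reconstruction.

```python
import numpy as np

def backproject_concentration(images, angles_rad, grid_shape):
    """Voxel-wise concentration estimate by averaging, over all views, the
    pixel each voxel centre projects onto (parallel projection, rotation
    about the z axis, nearest-pixel sampling)."""
    nz, ny, nx = grid_shape
    zc, yc, xc = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx),
                             indexing="ij")
    x = xc - (nx - 1) / 2.0            # centre the grid on the rotation axis
    y = yc - (ny - 1) / 2.0
    estimate = np.zeros(grid_shape, dtype=float)
    for image, theta in zip(images, angles_rad):
        h, w = image.shape
        u = x * np.cos(theta) + y * np.sin(theta) + (w - 1) / 2.0
        ui = np.clip(np.rint(u).astype(int), 0, w - 1)   # detector column
        vi = np.clip(zc, 0, h - 1)                       # rows map to z slices
        estimate += image[vi, ui]
    return estimate / len(angles_rad)
```

With two uniform views of values 3 and 5, every voxel's estimate is their mean, 4; real inputs would of course vary per pixel and per angle.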