G06T2219/028

INTERACTIVE AUGMENTED REALITY SYSTEM FOR LAPAROSCOPIC AND VIDEO ASSISTED SURGERIES
20230147826 · 2023-05-11 ·

This disclosure describes an interactive augmented reality system for improving a surgeon's view and context awareness during laparoscopic and video-assisted surgeries. Instead of relying purely on computer vision algorithms for image registration between pre-operative (or intra-operative) images/models and later intra-operative scope images, the system can implement an interactive mechanism in which surgeons provide supervised information during an initial calibration phase of the augmented reality function, thus achieving high accuracy in image registration. Beyond the initialization phase before the operation starts, interaction between the surgeon and the system can also happen during the surgery. Specifically, patient tissue may move or deform during surgery, caused, for example, by cutting. The augmented reality system can re-calibrate during surgery when image registration accuracy deteriorates by seeking additional supervised labeling from the surgeon. In this way, the system improves the surgeon's view during surgery by sporadically using the surgeon's guidance to achieve high image registration accuracy.
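The supervised calibration described above can be sketched as landmark-based rigid registration: the surgeon labels corresponding points in the pre-operative model and the scope image, and a standard Kabsch fit recovers the alignment. This is an illustrative sketch, not the patented method; the function names and landmark format are assumptions:

```python
import numpy as np

def rigid_registration(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src + t - dst||
    from paired 3D landmarks (Kabsch algorithm)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def registration_error(src, dst, R, t):
    """RMS landmark distance after applying the estimated transform;
    a rising value would trigger the re-calibration described above."""
    mapped = (R @ np.asarray(src, float).T).T + t
    return float(np.sqrt(np.mean(np.sum((mapped - np.asarray(dst, float)) ** 2, axis=1))))
```

A residual-error check like `registration_error` is one plausible way to detect when tissue deformation has degraded the alignment enough to ask the surgeon for fresh labels.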

Determination and visualization of damage to an anatomical joint

A system for determining and visualizing damage to an anatomical joint of a patient. The system is to: obtain a three-dimensional image representation of an anatomical joint, based on a medical image stack; determine damage to an anatomical structure in the anatomical joint by analyzing the medical image stack; mark the damage to the anatomical structure in the obtained three-dimensional image representation; obtain a 3D model based on the three-dimensional image representation; and create a graphical user interface (GUI). The GUI may comprise: functionality to visualize and enable manipulation of the 3D model; functionality to enable removal of the visualization of the anatomical structure from the 3D model; functionality to visualize and enable browsing of the medical image stack; and functionality to visualize the position of the medical image that is currently shown.

INTERACTIVE PLACEMENT OF ANATOMICAL ATLAS STRUCTURES IN PATIENT IMAGES
20170365103 · 2017-12-21 ·

This disclosure describes systems, devices, and techniques for adjusting an anatomical atlas to patient anatomy. In one example, a system may include processing circuitry configured to: generate, for display at a user interface, a representation of an anatomical region of a patient; generate, for display at the user interface, a representation of one or more atlas-defined anatomical structures at a first position over the representation of the anatomical region; receive a user annotation that defines an adjustment to at least one atlas-defined anatomical structure relative to the representation of the anatomical region; and adjust, based on the user annotation, the representation of the one or more atlas-defined anatomical structures from the first position to a second position over the representation of the anatomical region of the patient.
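The annotation-driven adjustment can be illustrated with a minimal sketch; the types, names, and translation-only adjustment below are assumptions, since the patent does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtlasStructure:
    """An atlas-defined anatomical structure overlaid on a patient image
    (hypothetical type; position given in image coordinates)."""
    name: str
    position: tuple  # (x, y, z)

def apply_user_adjustment(structures, target_name, delta):
    """Shift the named structure by the user-annotated offset `delta`,
    returning a new list with the structure at its second position."""
    adjusted = []
    for s in structures:
        if s.name == target_name:
            new_pos = tuple(p + d for p, d in zip(s.position, delta))
            adjusted.append(AtlasStructure(s.name, new_pos))
        else:
            adjusted.append(s)
    return adjusted
```

A real system would likely support richer adjustments (rotation, scaling, non-rigid warps); translation is used here only to keep the sketch short.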

RENDERING DEPTH-BASED THREE-DIMENSIONAL MODEL WITH INTEGRATED IMAGE FRAMES

A system aligns a 3D model of an environment with image frames of the environment and generates a visualization interface that displays a portion of the 3D model and a corresponding image frame. The system receives LIDAR data collected in the environment and generates a 3D model based on the LIDAR data. For each image frame, the system aligns the image frame with the 3D model. After aligning the image frames with the 3D model, when the system presents a portion of the 3D model in an interface, it also presents an image frame that corresponds to the portion of the 3D model.
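One way the frame-selection step could work, sketched under an assumed data layout (a camera center and viewing direction per aligned frame; not the patented method):

```python
import numpy as np

def best_frame_for_view(view_point, frames):
    """Pick the image frame whose camera best covers the viewed portion of
    the 3D model: the closest camera center among frames whose viewing
    direction points toward the point.  `frames` is a list of
    (frame_id, camera_center, view_dir) tuples."""
    best_id, best_dist = None, float("inf")
    p = np.asarray(view_point, float)
    for frame_id, center, view_dir in frames:
        c = np.asarray(center, float)
        d = np.asarray(view_dir, float)
        to_point = p - c
        dist = np.linalg.norm(to_point)
        if dist == 0 or np.dot(to_point / dist, d) <= 0:
            continue  # this frame faces away from the viewed portion
        if dist < best_dist:
            best_id, best_dist = frame_id, dist
    return best_id
```

A production system would also check the camera frustum and occlusion against the LIDAR model; the distance-plus-facing test here is the simplest version of the idea.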

System and method for rendering three-dimensional scenes by a computer graphics processor using orthogonal projection
09842425 · 2017-12-12 ·

A method is provided for rendering a three-dimensional scene on an electronic processor-based system such as a computer, cellular phone, games console, or other device. The method renders the scene by activating pixels of an electronic display device using a perspective projection for some portions of the scene and a far less computationally expensive orthogonal projection for other portions that meet a predetermined condition. The resulting scene, displayed by the pixels of the display device, appears overall to have been realistically rendered using a perspective transformation. However, since portions of the scene were rendered using an orthogonal projection, the method is computationally cheaper than rendering with perspective projection alone.
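The hybrid-projection idea can be sketched as a per-point choice, with the orthogonal scale matched to the perspective scale at a depth threshold so the two regimes join smoothly. The depth-threshold condition is an illustrative assumption, not necessarily the patent's predetermined condition:

```python
def project_point(x, y, z, focal, z_threshold):
    """Project a camera-space point (x, y, z) to the image plane.
    Near geometry gets a true perspective divide; geometry beyond
    z_threshold gets a cheaper orthogonal projection (no per-point
    divide), scaled to match perspective exactly at the threshold."""
    if z < z_threshold:
        # perspective projection: scale by focal / z
        return (focal * x / z, focal * y / z)
    # orthogonal projection: constant scale, matched at the threshold
    s = focal / z_threshold
    return (s * x, s * y)
```

Because the scale is matched at `z_threshold`, a point crossing the threshold does not jump on screen, which is why the scene can still read as perspective-rendered overall.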

Imaging system and methods displaying a fused multidimensional reconstructed image

A system, method, and apparatus for displaying a fused reconstructed image with a multidimensional image are disclosed. An example imaging system receives a selection corresponding to a portion of a displayed multidimensional visualization of a surgical site. At the selected portion, the imaging system displays the corresponding portion of a three-dimensional image or model such that the displayed portion is fused with the displayed multidimensional visualization.
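The fusion of a selected portion can be illustrated as a simple alpha blend of a rendered 3D patch into the selected region of the 2D visualization. This is a minimal sketch; the names and the blend rule are assumptions:

```python
import numpy as np

def fuse_region(base, overlay, top, left, alpha=0.5):
    """Blend a rendered 3D patch (`overlay`) into the selected region of a
    2D visualization (`base`).  Arrays hold intensities in [0, 1] and may
    be (H, W) or (H, W, C); pixels outside the region are untouched."""
    out = base.astype(float).copy()
    h, w = overlay.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1.0 - alpha) * region + alpha * overlay
    return out
```

In practice the 3D patch would first be rendered from the same viewpoint as the multidimensional visualization so the fused pixels are spatially registered; the blend itself is the trivial part.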

Enhancing three-dimensional models using multi-view refinement

Systems and techniques are provided for modeling three-dimensional (3D) meshes using multi-view image data. An example method can include determining, based on a first image of a target, first 3D mesh parameters for the target corresponding to a first coordinate frame; determining, based on a second image of the target, second 3D mesh parameters for the target corresponding to a second coordinate frame; determining third 3D mesh parameters for the target in a third coordinate frame, the third 3D mesh parameters being based on the first and second 3D mesh parameters and relative rotation and translation parameters of the image sensors that captured the first and second images; determining a loss associated with the third 3D mesh parameters, the loss being based on the first and second 3D mesh parameters and the relative rotation and translation parameters; and determining refined 3D mesh parameters based on the loss and the third 3D mesh parameters.
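A hedged sketch of such a multi-view consistency loss, assuming mesh parameters are vertex arrays and the relative pose between the two image sensors is known. The three terms below are an illustrative choice; the patent's exact loss is not reproduced here:

```python
import numpy as np

def consistency_loss(verts1, verts2, verts3, R12, t12):
    """Multi-view consistency loss over per-view vertex estimates.
    verts1 (camera-1 frame) mapped into camera-2's frame via the relative
    rotation R12 and translation t12 should agree with verts2, and the
    fused estimate verts3 (expressed in camera-2's frame here) should
    agree with both.  All verts are (N, 3) arrays."""
    mapped1 = (R12 @ np.asarray(verts1, float).T).T + t12
    loss = np.mean((mapped1 - verts2) ** 2)   # cross-view agreement
    loss += np.mean((verts3 - verts2) ** 2)   # fused vs. view 2
    loss += np.mean((verts3 - mapped1) ** 2)  # fused vs. mapped view 1
    return float(loss)
```

Minimizing a loss of this shape (typically by gradient descent over the mesh parameters) is what drives the refinement step named in the abstract.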

USER INTERFACE FOR AND METHOD OF PRESENTING RAW IMAGE DATA WITH COMPOSITE IMAGE
20230186533 · 2023-06-15 ·

Methods and systems for presenting raw image data with a composite image are provided. For example, a method comprises displaying a composite image on a user interface; identifying an area of interest on the composite image; and displaying at least one source image on the user interface simultaneously with the composite image, the at least one source image containing at least one pixel in the area of interest. As another example, a method comprises projecting a composite image onto a representation of a three-dimensional (3D) surface displayed on a user interface to produce a 3D composite image and changing an orientation of the 3D composite image in response to a user input prior to identifying an area of interest on the 3D composite image. An image display system comprising a user interface is configured to simultaneously display the composite image and a plurality of source images.
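Mapping an area-of-interest pixel back to the source images that contain it can be sketched as a bounds test against each source image's placement in the composite. The data layout is an illustrative assumption:

```python
def sources_covering(aoi_pixel, sources):
    """Return the ids of source images whose placement in the composite
    contains the area-of-interest pixel.  Each source is a tuple
    (source_id, left, top, width, height) in composite coordinates."""
    x, y = aoi_pixel
    return [sid for sid, left, top, w, h in sources
            if left <= x < left + w and top <= y < top + h]
```

The matching source images would then be displayed alongside the composite, as the abstract describes; a stitched composite would additionally need the per-source warp inverted before the bounds test.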

Image-based rendering of real spaces

Under an embodiment of the invention, an image-capturing and processing system creates 3D image-based renderings (IBR) for real estate. The system includes a user interface for visually presenting an image-based rendering of a real property to a user, and a processor to: obtain two or more photorealistic viewpoints from ground-truth image-data capture locations; combine and process two or more instances of ground-truth image data to create a plurality of synthesized viewpoints; and visually present a viewpoint in a virtual model of the real property on the user interface, the virtual model including both photorealistic and synthesized viewpoints.

Medical image processing apparatus and medical image processing method which are for medical navigation device
11676706 · 2023-06-13 ·

The present invention relates to a medical image processing apparatus and a medical image processing method for a medical navigation device, and more particularly, to an apparatus and method for processing an image provided when the medical navigation device is used. To this end, the present invention provides a medical image processing apparatus including: a position tracking unit configured to obtain position information of the medical navigation device within an object; a memory configured to store medical image data generated based on a medical image of the object; and a processor configured to set a region of interest (ROI) based on the position information of the medical navigation device with reference to the medical image data, and to generate partial medical image data corresponding to the ROI. A medical image processing method using the same is also provided.
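The ROI step can be sketched as cropping a subvolume of the stored medical image data around the tracked device position. This is an illustrative sketch with assumed names, using a cubic ROI clamped to the volume bounds:

```python
import numpy as np

def extract_roi(volume, center, half_size):
    """Crop a cubic ROI of side up to (2 * half_size + 1) voxels around the
    tracked position `center` (voxel indices), clamped to the volume
    bounds; returns the partial medical image data for that ROI."""
    slices = []
    for c, n in zip(center, volume.shape):
        lo = max(0, c - half_size)
        hi = min(n, c + half_size + 1)
        slices.append(slice(lo, hi))
    return volume[tuple(slices)]
```

Generating only this partial data is what lets the navigation display stay responsive: the processor re-crops as the position tracking unit reports new coordinates, instead of re-rendering the full volume.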