G06V10/25

AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
20230050636 · 2023-02-16

A method for performing a procedure on a patient includes acquiring a three-dimensional image of a location of interest on the patient and a two-dimensional image of the location of interest. A computer system can relate the three-dimensional image with the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram based on the holographic image dataset registered with the patient. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. The practitioner can view the hologram with the augmented reality system and perform the procedure on the patient. During the procedure, the practitioner can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image.
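The cross-referencing step above can be sketched as projecting a registered 3D point into the plane of the 2D image. The following is a minimal illustration, not the patent's implementation; the 3x4 projection matrix `P` and the function name `project_point` are hypothetical stand-ins for a calibrated C-arm geometry.

```python
# Hypothetical sketch: map a point in registered 3D (CT) space to its
# corresponding 2D (fluoroscopic) image point via a 3x4 projection matrix.
# P and project_point are illustrative names, not from the patent.

def project_point(P, xyz):
    """Project a 3D point to 2D image coordinates (homogeneous divide)."""
    x, y, z = xyz
    u = P[0][0] * x + P[0][1] * y + P[0][2] * z + P[0][3]
    v = P[1][0] * x + P[1][1] * y + P[1][2] * z + P[1][3]
    w = P[2][0] * x + P[2][1] * y + P[2][2] * z + P[2][3]
    return (u / w, v / w)

# Toy calibration: identity rotation, unit focal length.
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

print(project_point(P, (2.0, 4.0, 2.0)))  # (1.0, 2.0)
```

In a real system `P` would come from C-arm calibration and the 3D-to-2D registration computed by the computer system; the point-pair visualization then amounts to drawing the 3D point on the CT projection and its image under `P` on the fluoroscopic projection.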

SYSTEM FOR PROCESSING RADIOGRAPHIC IMAGES AND OUTPUTTING THE RESULT TO A USER

The invention relates to the field of computer engineering for image processing and provides increased accuracy in finding and classifying a similar object. The technical result is achieved by: downloading files of a radiographic image, which comprise metadata including information about the object or subject of the image and information about the image itself; encrypting the downloaded files if they comprise personal data about a person; decrypting the encrypted, downloaded files; and processing the radiographic image, wherein the processing comprises: finding and capturing a relevant region of the radiographic image, a relevant region being a region containing a found object; removing noise from the captured relevant region; compressing or decompressing a previously processed radiographic image; and finding a similar object in two previously processed images and processing said object.
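The pipeline stages listed above can be sketched as follows. All function names are hypothetical, and the XOR stream cipher and 1-D median filter are simplified stand-ins; a real system would use proper cryptography (e.g. AES) and 2-D denoising.

```python
# Illustrative sketch of the described stages (names are hypothetical).
import statistics

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric stand-in for encryption of files holding personal data;
    # the same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def crop_relevant_region(image, top, left, h, w):
    # "Relevant region" = the region containing a found object.
    return [row[left:left + w] for row in image[top:top + h]]

def median_denoise(row, k=3):
    # Simple sliding-window median filter (1-D for brevity).
    half = k // 2
    return [statistics.median(row[max(0, i - half):i + half + 1])
            for i in range(len(row))]

def similarity(region_a, region_b):
    # Lower score = more similar (sum of squared differences),
    # a stand-in for finding a similar object in two processed images.
    return sum((a - b) ** 2
               for ra, rb in zip(region_a, region_b)
               for a, b in zip(ra, rb))

key = b"secret"
payload = b"patient metadata"
assert xor_cipher(xor_cipher(payload, key), key) == payload  # roundtrip
```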

IMAGE PROCESSING SYSTEM, ENDOSCOPE SYSTEM, AND IMAGE PROCESSING METHOD
20230050945 · 2023-02-16

An image processing system includes a processor. Based on association information relating a biological image captured under a first imaging condition to a biological image captured under a second imaging condition, the processor outputs a prediction image corresponding to how an object captured in an input image would appear under the second imaging condition. The association information is indicative of a trained model obtained through machine learning of a relationship between a first training image captured under the first imaging condition and a second training image captured under the second imaging condition. The processor is capable of outputting a plurality of different kinds of prediction images based on a plurality of trained models and the input image, and selects, based on a given condition, the prediction image to be output from among the plurality of prediction images.
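The select-among-multiple-models behavior can be sketched as below. The two stand-in "trained models" and the condition names (`narrow_band`, `stain`) are invented for illustration; in the patent these would be learned mappings between imaging conditions.

```python
# Hypothetical sketch: several trained models, each mapping an image
# captured under the first condition to a prediction image under a second
# condition; a given condition selects which prediction to output.
from typing import Callable, Dict, List

Image = List[float]

def brighten(img: Image) -> Image:          # stand-in "trained model" 1
    return [min(1.0, p * 1.5) for p in img]

def enhance_contrast(img: Image) -> Image:  # stand-in "trained model" 2
    mean = sum(img) / len(img)
    return [mean + (p - mean) * 2.0 for p in img]

MODELS: Dict[str, Callable[[Image], Image]] = {
    "narrow_band": brighten,
    "stain": enhance_contrast,
}

def predict(img: Image, condition: str) -> Image:
    # Select the prediction image to output based on the given condition.
    return MODELS[condition](img)
```

The design point is that all models share one input/output type, so the selection logic stays independent of how each model was trained.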

EXPLAINING A MODEL OUTPUT OF A TRAINED MODEL

The invention relates to a computer-implemented method (500) of generating explainability information for explaining a model output of a trained model. The method uses one or more aspect recognition models, each configured to indicate a presence of a respective characteristic in the input instance. A saliency method is applied to obtain a masked source representation of the input instance at a source layer of the trained model (e.g., the input layer or an internal layer), comprising those elements at the source layer relevant to the model output. The masked source representation is mapped to a target layer (e.g., an input or internal layer) of an aspect recognition model, and the aspect recognition model is then applied to obtain a model output indicating a presence of the given characteristic relevant to the model output of the trained model. The characteristics indicated by the aspect recognition models are output as explainability information.
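A toy version of this flow, with linear stand-ins for both the trained model and the aspect recognition models (all names and the |w·x| relevance score are hypothetical, and the source layer is taken to be the input layer):

```python
# Toy sketch: saliency-mask the input, then ask aspect models whether
# their characteristic is present in the masked representation.

def saliency_mask(weights, x, keep_top=2):
    # Keep only the elements of x most relevant to the model output
    # (|w_i * x_i| as a simple relevance score); zero out the rest.
    scores = [abs(w * v) for w, v in zip(weights, x)]
    top = sorted(range(len(x)), key=lambda i: scores[i], reverse=True)[:keep_top]
    return [v if i in top else 0.0 for i, v in enumerate(x)]

def aspect_present(aspect_weights, masked_x, threshold=0.5):
    # Apply an aspect recognition model to the masked representation.
    return sum(w * v for w, v in zip(aspect_weights, masked_x)) > threshold

model_w = [0.9, 0.1, 0.8]            # trained model (source layer = input)
aspects = {"texture": [1.0, 0.0, 0.0], "shape": [0.0, 0.0, 1.0]}
x = [1.0, 1.0, 1.0]

masked = saliency_mask(model_w, x)   # keeps elements 0 and 2
explanation = [name for name, w in aspects.items()
               if aspect_present(w, masked)]
print(explanation)  # ['texture', 'shape']
```

The masking-then-recognition order is the key idea: only the parts of the input that drove the trained model's output are allowed to trigger an aspect, so each reported characteristic is tied to that output.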

METHOD AND SYSTEM FOR AUTOMATIC CHARACTERIZATION OF A THREE-DIMENSIONAL (3D) POINT CLOUD

Methods of and systems for characterization of a 3D point cloud are disclosed. The method comprises: accessing a 3D point cloud, the 3D point cloud being a set of data points representative of an object; determining, based on the 3D point cloud, a 3D reconstructed object; determining, based on the 3D reconstructed object, a digital framework of the 3D point cloud, the digital framework being a ramified 3D tree structure representative of a base structure of the object; morphing a 3D reference model of the object onto the 3D reconstructed object, the morphing being based on the digital framework; and determining, based on the morphed 3D reference model and the 3D reconstructed object, characteristics of the object.
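Under strong simplifications, the morphing and characterization steps can be sketched as aligning a reference model to the reconstructed object by centroid and overall scale, then reading characteristics off the result. All names are hypothetical, and this centroid/scale alignment replaces the framework-driven morphing of the actual method.

```python
# Minimal sketch (hypothetical names): translate + uniformly scale a
# reference model onto a target point set, then measure a characteristic.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def extent(points):
    # Axis-aligned bounding-box size: a stand-in "characteristic".
    return tuple(max(p[i] for p in points) - min(p[i] for p in points)
                 for i in range(3))

def morph(reference, target):
    # Align reference to target by matching centroids and overall scale.
    c_ref, c_tgt = centroid(reference), centroid(target)
    s = max(extent(target)) / max(extent(reference))
    return [tuple(c_tgt[i] + s * (p[i] - c_ref[i]) for i in range(3))
            for p in reference]

reference = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
target = [(10, 10, 10), (12, 10, 10), (10, 12, 10), (10, 10, 12)]
morphed = morph(reference, target)
print(extent(morphed))  # (2.0, 2.0, 2.0)
```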

SYSTEMS AND METHODS FOR CONTEXTUAL IMAGE ANALYSIS
20230050833 · 2023-02-16 ·

In one implementation, a computer-implemented system is provided for real-time video processing. The system includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to perform operations. The at least one processor is configured to receive real-time video generated by a medical image system, the real-time video including a plurality of image frames, and obtain context information indicating an interaction of a user with the medical image system. The at least one processor is also configured to perform an object detection to detect at least one object in the plurality of image frames and perform a classification to generate classification information for the at least one object in the plurality of image frames. Further, the at least one processor is configured to perform a video manipulation to modify the received real-time video based on at least one of the object detection and the classification. Moreover, the at least one processor is configured to invoke at least one of the object detection, the classification, and the video manipulation based on the context information.
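The context-gated invocation can be sketched per frame as below. The context strings, frame representation, and stand-in detector/classifier are all invented for illustration; in the described system the context would come from the user's interaction with the medical image system.

```python
# Hypothetical sketch: context information gates which operations
# (detection, classification, manipulation) run on each frame.
from typing import Any, Dict, List

def detect(frame: Dict[str, Any]) -> List[str]:
    return ["polyp"] if frame.get("suspicious") else []   # stand-in detector

def classify(objects: List[str]) -> Dict[str, str]:
    return {obj: "adenoma" for obj in objects}            # stand-in classifier

def overlay(frame: Dict[str, Any], labels: Dict[str, str]) -> Dict[str, Any]:
    frame = dict(frame)                                   # video manipulation
    frame["overlay"] = labels
    return frame

def process_frame(frame: Dict[str, Any], context: str) -> Dict[str, Any]:
    # Invoke detection/classification/manipulation based on context,
    # e.g. examining tissue vs. withdrawing the scope.
    if context != "examining":
        return frame                  # skip analysis entirely
    objects = detect(frame)
    labels = classify(objects) if objects else {}
    return overlay(frame, labels) if labels else frame
```

Gating on context before invoking the heavier stages is what keeps the pipeline real-time: frames outside the relevant interaction are passed through untouched.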