G06V2201/03

COMPUTER-IMPLEMENTED METHOD FOR PROVIDING AN OUTLINE OF A LESION IN DIGITAL BREAST TOMOSYNTHESIS

One or more example embodiments of the present invention relate to a computer-implemented method for providing an outline of a lesion in digital breast tomosynthesis. The method includes receiving input data, wherein the input data comprises a reconstructed tomosynthesis volume dataset based on projection recordings and a virtual target marker within a lesion in the tomosynthesis volume dataset; applying a trained function to at least a part of the tomosynthesis volume dataset to establish an outline enclosing the lesion, the part of the tomosynthesis volume dataset corresponding to a region surrounding the virtual target marker in the tomosynthesis volume dataset; and providing output data, wherein the output data is an outline of a two-dimensional area or a three-dimensional volume surrounding the target marker.
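The claimed pipeline can be sketched as follows: crop a sub-volume around the virtual target marker, then apply the trained function to that region to obtain an outline mask. In this hedged sketch a simple intensity threshold stands in for the trained function, and names such as `crop_around_marker` are illustrative, not from the patent.

```python
# Sketch: extract the region surrounding the virtual target marker and
# derive a mask of the enclosed lesion. A threshold replaces the
# trained function; all names are illustrative assumptions.

def crop_around_marker(volume, marker, half_size):
    """Extract a cubic sub-volume centred on the marker (z, y, x)."""
    z, y, x = marker
    return [
        [row[x - half_size:x + half_size + 1]
         for row in plane[y - half_size:y + half_size + 1]]
        for plane in volume[z - half_size:z + half_size + 1]
    ]

def outline_mask(subvolume, threshold):
    """Stand-in for the trained function: voxels above the threshold
    are treated as lesion voxels enclosed by the outline."""
    return [
        [[1 if v > threshold else 0 for v in row] for row in plane]
        for plane in subvolume
    ]

# Toy 5x5x5 volume with a bright lesion voxel at the centre.
volume = [[[10] * 5 for _ in range(5)] for _ in range(5)]
volume[2][2][2] = 200
sub = crop_around_marker(volume, (2, 2, 2), 1)   # 3x3x3 region
mask = outline_mask(sub, 100)
```

The key structural point is that the trained function sees only the region around the marker, not the full reconstructed volume.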

MONITORING OF DENTITION

A method for acquiring at least one two-dimensional image of a part of the arches of a patient includes steps carried out by the patient or another person who is not a dental health professional, for example placing a dental separator in the mouth of the patient in order to separate the lips of the patient and improve the visibility of the teeth during the acquisition of said at least one two-dimensional image, and acquiring, in a mouth-closed position and with a personal image acquisition apparatus, said at least one two-dimensional image.

SYSTEM FOR PROCESSING RADIOGRAPHIC IMAGES AND OUTPUTTING THE RESULT TO A USER

The invention relates to the field of computer engineering for processing images and provides increased accuracy in finding and classifying a similar object. The technical result is achieved by: downloading files of a radiographic image which comprise metadata, including information about the object or subject of the image and information about the image itself; encrypting the downloaded files if they comprise personal data about a person; decrypting the encrypted, downloaded files; and processing the radiographic image, wherein, as a result of the processing, the following occurs: finding and capturing a relevant region of the radiographic image; removing noise from the captured relevant region, wherein a relevant region of the radiographic image means a region with a found object; compressing or decompressing a previously processed radiographic image; and finding a similar object in two previously processed images and processing said object.
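The conditional encrypt-then-process step above can be sketched briefly. Here a toy XOR cipher stands in for real cryptography (it must not be used in practice), and the `ingest` function and the personal-data flag are illustrative assumptions, not the patented implementation.

```python
# Sketch: files flagged as carrying personal data are encrypted at rest
# and decrypted before the radiographic image is processed. A toy XOR
# cipher stands in for real encryption; all names are illustrative.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def ingest(file_bytes: bytes, has_personal_data: bool, key: bytes) -> bytes:
    """Encrypt only when the file's metadata flags personal data."""
    return xor_cipher(file_bytes, key) if has_personal_data else file_bytes

key = b"secret"
raw = b"radiograph pixel payload"
stored = ingest(raw, True, key)      # encrypted at rest
restored = xor_cipher(stored, key)   # decrypted before processing
```

The design point is that the decision to encrypt is driven by the file's own metadata, so images without personal data skip the cryptographic round trip.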

METHODS AND APPARATUSES FOR TRAINING MAGNETIC RESONANCE IMAGING MODEL

Methods and apparatuses for training a magnetic resonance imaging model, electronic devices and computer readable storage media are provided. A method may include: acquiring a magnetic resonance image data set; constructing a ring deep neural network to be trained; inputting an under-sampled magnetic resonance image and a full-sampled magnetic resonance image respectively to two neural networks included in the ring deep neural network, to generate respective simulated magnetic resonance images; inputting a first simulated full-sampled magnetic resonance image and the full-sampled magnetic resonance image to a pre-constructed first simulated magnetic resonance image class discrimination model, to obtain a first discrimination result indicating whether or not the first simulated full-sampled magnetic resonance image is of a simulated magnetic resonance image class; and adjusting a network parameter of the ring deep neural network based on a preset loss function, to obtain a trained magnetic resonance imaging model.
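The "ring" of two networks described above is a cycle structure: one mapping takes under-sampled images toward full-sampled ones, the other the reverse, and the loss penalises failure to return to the starting point. A minimal sketch, with scalars standing in for images and linear functions for the two neural networks (all names are illustrative assumptions):

```python
# Sketch of the ring structure: G maps under-sampled -> simulated
# full-sampled, F maps full-sampled -> simulated under-sampled, and a
# cycle-consistency term requires F(G(x)) to return to x.

def G(x, w):
    """Stand-in generator for the under-sampled -> full-sampled branch."""
    return w * x

def F(y, v):
    """Stand-in generator for the reverse branch of the ring."""
    return v * y

def cycle_loss(x, w, v):
    """Cycle-consistency term of the preset loss function."""
    return (F(G(x, w), v) - x) ** 2

# When the two mappings are mutual inverses the ring closes exactly;
# otherwise a positive residual remains for training to reduce.
closed = cycle_loss(3.0, 2.0, 0.5)
open_ = cycle_loss(3.0, 2.0, 1.0)
```

The discriminator described in the abstract would add an adversarial term on top of this cycle term; only the cycle part is sketched here.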

EXPLAINING A MODEL OUTPUT OF A TRAINED MODEL

The invention relates to a computer-implemented method (500) of generating explainability information for explaining a model output of a trained model. The method uses one or more aspect recognition models configured to indicate a presence of respective characteristics in the input instance. A saliency method is applied to obtain a masked source representation of the input instance at a source layer of the trained model (e.g., the input layer or an internal layer), comprising those elements at the source layer relevant to the model output. The masked source representation is mapped to a target layer (e.g., input or internal layer) of an aspect recognition model, and the aspect recognition model is then applied to obtain a model output indicating a presence of the given characteristic relevant to the model output of the trained model. As explainability information, the characteristics indicated by the aspect recognition models are output.
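The two-stage flow above (saliency masking, then aspect recognition on the masked representation) can be sketched minimally. The linear "aspect model" and the threshold-based mask are illustrative stand-ins, not the patented implementation.

```python
# Sketch: keep only the source-layer elements the saliency method
# marks as relevant, then let an aspect recognition model report
# whether a given characteristic is present in what remains.

def masked_source(representation, saliency, keep_threshold):
    """Zero out elements whose saliency falls below the threshold."""
    return [r if s >= keep_threshold else 0.0
            for r, s in zip(representation, saliency)]

def aspect_model(masked, aspect_weights):
    """Toy linear aspect recogniser: positive score => aspect present."""
    score = sum(m * w for m, w in zip(masked, aspect_weights))
    return score > 0

rep = [0.9, -0.4, 0.7]
sal = [0.8, 0.1, 0.9]          # second element judged irrelevant
masked = masked_source(rep, sal, 0.5)
present = aspect_model(masked, [1.0, 1.0, 1.0])
```

Because the aspect model sees only the masked representation, any characteristic it reports is, by construction, tied to the elements that drove the trained model's output.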

Method of Diagnosis
20230047141 · 2023-02-16

The invention relates to methods for determining the stage of a disease, particularly an ocular neurodegenerative disease such as Alzheimer's, Parkinson's, Huntington's and glaucoma, comprising the steps of identifying the status of microglial cells in the retina and relating that status to disease stage. Methods for identifying cells in the eye are also provided, as are labelled markers and the use thereof.

SYSTEMS AND METHODS FOR CONTEXTUAL IMAGE ANALYSIS
20230050833 · 2023-02-16

In one implementation, a computer-implemented system is provided for real-time video processing. The system includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to perform operations. The at least one processor is configured to receive real-time video generated by a medical image system, the real-time video including a plurality of image frames, and obtain context information indicating an interaction of a user with the medical image system. The at least one processor is also configured to perform an object detection to detect at least one object in the plurality of image frames and perform a classification to generate classification information for at least one object in the plurality of image frames. Further, the at least one processor is configured to perform a video manipulation to modify the received real-time video based on at least one of the object detection and the classification. Moreover, the processor is configured to invoke at least one of the object detection, the classification, and the video manipulation based on the context information.
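The context-driven invocation described above amounts to a dispatch: the user's interaction with the imaging system decides which operations run on a frame. A minimal sketch, in which the context keys and operation names are illustrative assumptions:

```python
# Sketch: map user-interaction context to the set of operations
# (object detection, classification, video manipulation) to invoke
# on the incoming real-time video frames.

def operations_for(context):
    """Decide which operations to invoke from the context information."""
    ops = []
    if context.get("scope_inserted"):
        ops.append("object_detection")
    if context.get("object_frozen"):
        ops.append("classification")
    if ops:
        ops.append("video_manipulation")   # overlay results on the feed
    return ops

detect_only = operations_for({"scope_inserted": True})
idle = operations_for({})
```

Gating the heavier operations on context keeps the pipeline within real-time budget when, for example, no instrument is in view.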

DIGITAL TISSUE SEGMENTATION AND MAPPING WITH CONCURRENT SUBTYPING
20230050168 · 2023-02-16

Accurate tissue segmentation is performed without a priori knowledge of tissue type or other extrinsic information not found within the subject image, and may be combined with classification analysis so that diseased tissue is not only delineated within an image but also characterized in terms of disease type. In various embodiments, a source image is decomposed into smaller overlapping subimages such as square or rectangular tiles. A predictor such as a convolutional neural network produces tile-level classifications that are aggregated to produce a tissue segmentation and, in some embodiments, to classify the source image or a subregion thereof.
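The decompose-classify-aggregate workflow above can be sketched with overlapping square tiles. Here a trivial brightness rule stands in for the convolutional neural network, and all names are illustrative assumptions:

```python
# Sketch: decompose a source image into overlapping square tiles,
# classify each tile, and collect the tile-level labels from which a
# segmentation (and an image-level class) would be aggregated.

def tiles(image, size, stride):
    """Yield (row, col, tile) for square tiles; stride < size overlaps."""
    h, w = len(image), len(image[0])
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, [row[c:c + size] for row in image[r:r + size]]

def classify(tile):
    """Stand-in predictor: label a tile 1 ('diseased') if bright."""
    flat = [v for row in tile for v in row]
    return 1 if sum(flat) / len(flat) > 0.5 else 0

image = [
    [0.0, 0.0, 0.9, 0.9],
    [0.0, 0.0, 0.9, 0.9],
    [0.0, 0.0, 0.9, 0.9],
    [0.0, 0.0, 0.9, 0.9],
]
labels = {(r, c): classify(t) for r, c, t in tiles(image, 2, 2)}
```

With overlapping tiles (stride smaller than the tile size), each pixel receives several tile-level votes, and the aggregation step would resolve them, e.g. by majority, into the final segmentation.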

SELF-SUPERVISED LEARNING FRAMEWORK TO GENERATE CONTEXT SPECIFIC PRETRAINED MODELS

Systems and methods for self-supervised representation learning as a means to generate context-specific pretrained models include selecting data from a set of available data sets; selecting a pretext task from domain-specific pretext tasks; selecting a target-problem-specific network architecture based on a user selection from available choices or any customized model per user preference; and generating a pretrained model for the selected network architecture using the selected data obtained from the set of available data sets and the selected pretext task obtained from the domain-specific pretext tasks.
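The three selections above (data set, pretext task, architecture) combine into a single pretraining job. A hedged sketch of that plumbing, in which the registries, names, and returned spec are illustrative assumptions:

```python
# Sketch: validate the user's three selections against the available
# registries and combine them into a pretraining job specification.

def generate_pretrained_model(data_sets, pretext_tasks, architecture,
                              data_name, task_name):
    """Combine the selected data, pretext task, and architecture."""
    if data_name not in data_sets or task_name not in pretext_tasks:
        raise ValueError("unknown data set or pretext task")
    return {
        "architecture": architecture,
        "data": data_sets[data_name],
        "pretext_task": task_name,
    }

spec = generate_pretrained_model(
    {"chest_xray": ["img1", "img2"]},   # available data sets
    {"rotation_prediction": None},       # domain-specific pretext tasks
    "resnet50",                          # user-selected architecture
    "chest_xray",
    "rotation_prediction",
)
```

The actual self-supervised training loop would consume this spec; only the selection step the abstract enumerates is sketched here.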

SYSTEMS AND METHODS FOR PROVIDING DISPLAYED FEEDBACK WHEN USING A REAR-FACING CAMERA

A system includes a processor and a non-transitory computer-readable medium containing instructions that, when executed by the processor, cause the processor to perform operations comprising displaying a prompt to a user of a mobile device, on a display of the mobile device, to capture an image representing at least a portion of a mouth of the user using a rear-facing camera of the mobile device, where the rear-facing camera is on the side of the mobile device opposite the display. The operations further comprise controlling the rear-facing camera to enable the rear-facing camera to capture the image, receiving the image, and outputting user feedback based on the image, where the user feedback is output on the display that is on the opposite side of the mobile device from the rear-facing camera.