ARTIFICIAL INTELLIGENCE SYSTEM INCLUDING THREE-DIMENSIONAL LABELING USING FRAME OF REFERENCE PROJECTIONS
20260080595 · 2026-03-19
Inventors
- Philippe Raffy (Edina, MN, US)
- Jean-Francois Pambrun (La Prairie, CA)
- David Dubois (Mirabel, CA)
- Ashish Kumar (Danville, CA, US)
CPC classification
International classification
G06F21/62
PHYSICS
Abstract
A method includes receiving an image and classifying the image using a machine learning engine. The machine learning engine is trained using a training image that is labeled with a label associated with a three-dimensional volume responsive to image metrics for the training image satisfying respective thresholds. The image metrics include a first image metric based on the training image and a projection of the three-dimensional volume, and a second image metric based on pixel intensity values associated with the training image.
Claims
1. A computer-implemented method comprising: receiving, by one or more processors, an image; and classifying, by the one or more processors, the image using a machine learning engine, wherein: the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.
2. The computer-implemented method of claim 1, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.
3. The computer-implemented method of claim 1, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.
4. The computer-implemented method of claim 1, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.
5. The computer-implemented method of claim 1, wherein the three-dimensional volume is defined based on an intersection of a first two-dimensional bounding box in a frame of reference and a second two-dimensional bounding box in the frame of reference.
6. The computer-implemented method of claim 1, wherein the first image metric includes a ratio determined based on the projection.
7. The computer-implemented method of claim 1, further comprising: performing the training of the machine learning engine using the training image.
8. The computer-implemented method of claim 1, further comprising: performing the labeling of the training image, at least in part by determining that the plurality of image metrics for the training image satisfies the plurality of respective thresholds.
9. A system comprising: one or more processors; and at least one memory storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving an image; and classifying the image using a machine learning engine, wherein the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.
10. The system of claim 9, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.
11. The system of claim 9, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.
12. The system of claim 9, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.
13. The system of claim 9, wherein the three-dimensional volume is defined based on an intersection of a first two-dimensional bounding box in a frame of reference and a second two-dimensional bounding box in the frame of reference.
14. The system of claim 9, wherein the first image metric includes a ratio determined based on the projection.
15. The system of claim 9, wherein the operations further comprise: performing the training of the machine learning engine using the training image.
16. The system of claim 9, wherein the operations further comprise: performing the labeling of the training image, at least in part by determining that the plurality of image metrics for the training image satisfies the plurality of respective thresholds.
17. One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving an image; and classifying the image using a machine learning engine, wherein the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.
18. The one or more non-transitory computer-readable media of claim 17, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.
19. The one or more non-transitory computer-readable media of claim 17, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.
20. The one or more non-transitory computer-readable media of claim 17, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Other features of embodiments will be more readily understood from the following detailed description of specific embodiments thereof when read in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION
[0017] In the following detailed description, numerous specific details are set forth to provide a thorough understanding of embodiments of the present inventive concept. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present inventive concept.
[0018] It is intended that all embodiments disclosed herein can be implemented separately or combined in any way and/or combination. Aspects described with respect to one embodiment may be incorporated in different embodiments although not specifically described relative thereto. That is, all embodiments and/or features of any embodiments can be combined in any way and/or combination.
[0019] Embodiments of the inventive concept are described herein in the context of a prediction engine that includes a machine learning engine and an artificial intelligence (AI) engine. It will be understood that embodiments of the inventive concept are not limited to a machine learning implementation of the prediction engine and other types of AI systems may be used including, but not limited to, a multi-layer neural network, a deep learning system, a natural language processing system, and/or a computer vision system. Moreover, it will be understood that the multi-layer neural network is a multi-layer artificial neural network comprising artificial neurons or nodes and does not include a biological neural network comprising real biological neurons.
[0020] Some embodiments of the inventive concept stem from a realization that when labeling images in a dataset to train an AI system, many of the images may be related to the same item. For example, in medical imaging, many of the images of a magnetic resonance imaging (MRI) or computed tomography (CT) scan represent slices of the same three-dimensional volume, such as a body part. Rather than label each image individually, some embodiments of the inventive concept may provide a labeling platform in which a three-dimensional volume can be defined in the same frame of reference as a plurality of two-dimensional images. In the context of a medical application, the images may be two-dimensional images of a patient's body part. The three-dimensional volume may encompass images of the body part from multiple perspectives and may be assigned a label, such as the name of the body part. The three-dimensional volume may then be projected onto respective ones of the plurality of two-dimensional images. An image metric may be determined for the two-dimensional images. For example, the amount of surface area of an image that falls inside the three-dimensional volume and the amount of surface area of the image that falls outside of the three-dimensional volume may be determined. When the amount of surface area of the image that falls inside the three-dimensional volume relative to a total surface area of the image exceeds a defined threshold, then the image may be considered part of the same three-dimensional object, e.g., body part image, that is encompassed by the three-dimensional volume and, therefore, labeled with the label assigned to the three-dimensional volume.
For example, when the three-dimensional volume encompasses a patient's hand, then all the two-dimensional images showing slices of the patient's hand from different cross-sectional perspectives can be automatically labeled with the same label as the three-dimensional volume, thereby avoiding the manual labeling process for numerous images. Image surface area is one image metric that can be used to determine whether to assign a label to a two-dimensional image. Other image metrics that may be used may include, but are not limited to, a standard deviation of image pixel values and/or a histogram of image pixel values.
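The labeling test described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function name, the thresholds, and the use of a flat pixel list with a parallel inside/outside mask are all hypothetical choices made for the sketch.

```python
from statistics import pstdev

def projection_label_metrics(pixels, inside_mask, area_thresh=0.5, std_thresh=2.0):
    """Decide whether a 2-D slice inherits the label of the 3-D volume.

    pixels: flat list of pixel intensity values for the training image.
    inside_mask: parallel list of booleans, True where the pixel falls
        inside the projection of the labeled three-dimensional volume.
    Returns (inside_ratio, intensity_std, qualifies).
    """
    # First image metric: fraction of the image's surface area that lies
    # inside the projection of the three-dimensional volume.
    inside_ratio = sum(inside_mask) / len(inside_mask)
    # Second image metric: spread of the image's pixel intensity values
    # (population standard deviation).
    intensity_std = pstdev(pixels)
    # The image is labeled with the volume's label only when both
    # metrics satisfy their respective thresholds.
    qualifies = inside_ratio >= area_thresh and intensity_std >= std_thresh
    return inside_ratio, intensity_std, qualifies
```

A histogram of pixel values could be substituted for, or combined with, the standard deviation as the second metric, with a threshold defined on the histogram instead.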
[0021] Referring to
[0022] An AI system may provide an AI labeling platform through use of a labeling interface server 130, which is communicatively coupled to an AI system server 140. Both the labeling interface server 130 and the AI system server 140 are coupled to a database 160, which contains the records to be labeled. The labeling interface server 130 may include a labeling interface module 135 that is configured to securely present or provide records from the database to the labeling entities 110a, 110b, and 110c for labeling. In some embodiments of the inventive concept, the labeling interface module 135 may provide a secure Web application that is configured to implement any security protocols associated with restricting access to the records in the database. For example, the handling of certain types of data may be controlled by a regulatory constraint of a governmental administrative authority. One such example is protected health information (PHI), which is protected under the Health Insurance Portability and Accountability Act (HIPAA). Thus, the labeling interface module 135 may ensure that only those labeling entities 110a, 110b, and 110c that possess the proper security qualifications (e.g., security qualifications that comply with any governmental regulatory constraint or private security policy) are allowed to view and label the data contained in the records stored in the database 160. In addition to authenticating the labeling entities 110a, 110b, and 110c, the labeling interface module 135 may further protect the database 160 with an electronic security access wall to ensure that the records in the database 160 are not exposed to any entity that is not authorized to access or view the information contained therein.
[0023] In some embodiments the records in the database 160 may be images, such as, for example, images resulting from medical imaging applications. It will be understood, however, that embodiments of the inventive concept may be applied to other types of imaging applications including, but not limited to, manufacturing, construction, agriculture, security, or other applications where images may be labeled as three-dimensional objects or subjects. In medical imaging, for example, many of the images of a magnetic resonance imaging (MRI) or computed tomography (CT) scan may represent slices of the same three-dimensional volume, such as a body part. The labeling interface module 135 may present a plurality of two-dimensional images, which are in the same frame of reference, to one or more of the labeling entities 110a, 110b, and 110c. A labeling entity 110a, 110b, or 110c may define a three-dimensional volume by selecting two of the two-dimensional images and creating two-dimensional bounding boxes on the two-dimensional images, respectively. The two-dimensional bounding boxes may be in respective planes that intersect one another and can be used to define a three-dimensional volume based on their respective dimensions. The three-dimensional volume may then be assigned a label, which can be used to automatically label other images in the database 160 without the manual intervention or assistance of the labeling entities 110a, 110b, and 110c.
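One way the two intersecting bounding boxes might be combined into a volume is sketched below for the axis-aligned case, where one box is drawn on an axial slice (fixing the x and y extents) and the other on a sagittal slice (fixing the z extent and refining y). The function name, tuple layout, and the choice of axial/sagittal planes are illustrative assumptions, not the patent's specified procedure.

```python
def volume_from_boxes(axial_box, sagittal_box):
    """Combine two 2-D bounding boxes drawn in intersecting planes of the
    same frame of reference into one axis-aligned 3-D volume.

    axial_box: (x_min, x_max, y_min, y_max) drawn on an axial slice.
    sagittal_box: (y_min, y_max, z_min, z_max) drawn on a sagittal slice.
    Both boxes share the frame of reference's y axis.
    Returns (x_min, x_max, y_min, y_max, z_min, z_max).
    """
    ax_xmin, ax_xmax, ax_ymin, ax_ymax = axial_box
    sg_ymin, sg_ymax, sg_zmin, sg_zmax = sagittal_box
    # The shared y extent is the intersection of the two boxes' y ranges.
    y_min, y_max = max(ax_ymin, sg_ymin), min(ax_ymax, sg_ymax)
    if y_min >= y_max:
        raise ValueError("bounding boxes do not intersect in y")
    # x comes from the axial box, z from the sagittal box.
    return (ax_xmin, ax_xmax, y_min, y_max, sg_zmin, sg_zmax)
```

The resulting volume can then carry the assigned label and be projected onto the remaining slices for automatic labeling.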
[0024] The images that are manually labeled by the labeling entities 110a, 110b, and 110c or automatically labeled by the labeling interface module 135 and/or the AI system server 140 may be used to train the AI system module 145 running on the AI system server 140. It will be understood that the division of functionality described herein between the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 is an example. Various functionality and capabilities can be moved between the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 in accordance with different embodiments of the inventive concept. Moreover, in some embodiments, the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 may be merged as a single logical and/or physical entity.
[0025] A network 150 couples the labeling entities 110a, 110b, and 110c to the labeling interface server 130/labeling interface module 135. The network 150 may be a global network, such as the Internet or other publicly accessible network. Various elements of the network 150 may be interconnected by a wide area network, a local area network, an Intranet, and/or other private network, which may not be accessible by the general public. Thus, the communication network 150 may represent a combination of public and private networks or a virtual private network (VPN). The network 150 may be a wireless network, a wireline network, or may be a combination of both wireless and wireline networks.
[0026] The AI system with the three-dimensional labeling capability using frame of reference projections, provided as a service through the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135, may in some embodiments be embodied as a cloud service. In some embodiments, the AI system and labeling service may be implemented as a Representational State Transfer Web Service (RESTful Web service).
[0027] Although
[0029] The machine learning engine 220 may aggregate labels for one or more objects or subjects in an image to obtain a consensus label for the object or subject. The image including the labeled object or subject may then be used as a training record that can be used to train the decision making used in the AI engine 230. The machine learning engine 220 may use modeling techniques to evaluate the effects of various input data (e.g., labeled objects or subjects contained in the images) on the generated outputs. These effects may then be used to tune and refine the quantitative relationship between the labeled images in the training records from the database 160 and the generated outputs. The tuned and refined quantitative relationship between the labeled images in the training records generated by the machine learning engine 220 is output for use in the AI engine 230. The machine learning engine 220 may be referred to as a machine learning algorithm. The AI engine 230 may, in effect, be generated by the machine learning engine 220 in the form of the quantitative relationship determined between the labeled images in the training records and the generated outputs (e.g., predictions, answers to questions, classification of images, etc.). The AI engine 230 may be referred to as an AI model.
[0030] The AI engine 230 may be used to process new images 260 from the database 160 or other source locations to classify the subject or objects contained therein based on the quantitative relationships generated during the training process described above. The classification module 270 may be configured to communicate the classification of an image to a user or other destination.
[0032] Referring to
[0033] Returning to
[0034] Referring now to
[0036] Referring now to
[0039] Although
[0040] Computer program code for carrying out operations of data processing systems discussed above with respect to
[0041] Moreover, the functionality of the labeling interface server 130 of
[0042] The data processing apparatus described herein with respect to
[0043] Some embodiments of the inventive concept may provide an AI system in which image data may be labeled more efficiently by reducing the amount of manual labeling involved for images that may be associated with the same subject or object. A three-dimensional volume may be defined that encompasses images of the subject or object from multiple perspectives, and the three-dimensional volume may be assigned a label. Many of the two-dimensional images to be labeled, however, may be cross-sectional slices and/or different perspective views of the subject or object encompassed in the three-dimensional volume. The three-dimensional volume can be projected onto the various images to be labeled and, based on the amount of surface area of the image that falls inside the three-dimensional volume relative to the total surface area of the image, the image may be automatically labeled with the same label assigned to the three-dimensional volume without the need for manual intervention. The threshold for how much of an image's surface area needs to fall within the three-dimensional volume for the image to qualify for automatic labeling using the three-dimensional volume can be adjusted based on accuracy/error rates, the types of subjects or objects being labeled, or other factors.
[0044] Further Definitions and Embodiments:
[0045] In the above description of various embodiments of the present inventive concept, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this inventive concept belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0046] The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present inventive concept. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0047] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the inventive concept. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Like reference numbers signify like elements throughout the description of the figures.
[0048] In the above description of various embodiments of the present inventive concept, aspects of the present inventive concept may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present inventive concept may be implemented in entirely hardware, in entirely software (including firmware, resident software, micro-code, etc.), or in a combined software and hardware implementation that may all generally be referred to herein as a "circuit," "module," "component," or "system." Furthermore, aspects of the present inventive concept may take the form of a computer program product comprising one or more computer readable media having computer readable program code embodied thereon.
[0049] Any combination of one or more computer readable media may be used. The computer readable media may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an appropriate optical fiber with a repeater, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0050] The description of the present inventive concept has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the inventive concept in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the inventive concept. The aspects of the inventive concept herein were chosen and described to best explain the principles of the inventive concept and the practical application, and to enable others of ordinary skill in the art to understand the inventive concept with various modifications as are suited to the particular use contemplated.