ARTIFICIAL INTELLIGENCE SYSTEM INCLUDING THREE-DIMENSIONAL LABELING USING FRAME OF REFERENCE PROJECTIONS

20260080595 · 2026-03-19

    Abstract

    A method includes receiving an image and classifying the image using a machine learning engine. The machine learning engine is trained using a training image that is labeled with a label associated with a three-dimensional volume responsive to image metrics for the training image satisfying respective thresholds. The image metrics include a first image metric based on the training image and a projection of the three-dimensional volume, and a second image metric based on pixel intensity values associated with the training image.

    Claims

    1. A computer-implemented method comprising: receiving, by one or more processors, an image; and classifying, by the one or more processors, the image using a machine learning engine, wherein: the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.

    2. The computer-implemented method of claim 1, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.

    3. The computer-implemented method of claim 1, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.

    4. The computer-implemented method of claim 1, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.

    5. The computer-implemented method of claim 1, wherein the three-dimensional volume is defined based on an intersection of a first two-dimensional bounding box in a frame of reference and a second two-dimensional bounding box in the frame of reference.

    6. The computer-implemented method of claim 1, wherein the first image metric includes a ratio determined based on the projection.

    7. The computer-implemented method of claim 1, further comprising: performing the training of the machine learning engine using the training image.

    8. The computer-implemented method of claim 1, further comprising: performing the labeling of the training image, at least in part by determining that the plurality of image metrics for the training image satisfies the plurality of respective thresholds.

    9. A system comprising: one or more processors; and at least one memory storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving an image; and classifying the image using a machine learning engine, wherein the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.

    10. The system of claim 9, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.

    11. The system of claim 9, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.

    12. The system of claim 9, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.

    13. The system of claim 9, wherein the three-dimensional volume is defined based on an intersection of a first two-dimensional bounding box in a frame of reference and a second two-dimensional bounding box in the frame of reference.

    14. The system of claim 9, wherein the first image metric includes a ratio determined based on the projection.

    15. The system of claim 9, wherein the operations further comprise: performing the training of the machine learning engine using the training image.

    16. The system of claim 9, wherein the operations further comprise: performing the labeling of the training image, at least in part by determining that the plurality of image metrics for the training image satisfies the plurality of respective thresholds.

    17. One or more non-transitory computer-readable media storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving an image; and classifying the image using a machine learning engine, wherein the machine learning engine is trained using a training image, the training image being labeled with a label associated with a three-dimensional volume responsive to a plurality of image metrics for the training image satisfying a plurality of respective thresholds, and the plurality of image metrics including (i) a first image metric based on the training image and a projection of the three-dimensional volume and (ii) a second image metric based on pixel intensity values associated with the training image.

    18. The one or more non-transitory computer-readable media of claim 17, wherein the first image metric is based on a projection of the three-dimensional volume onto the training image.

    19. The one or more non-transitory computer-readable media of claim 17, wherein the second image metric includes a standard deviation of the pixel intensity values associated with the training image.

    20. The one or more non-transitory computer-readable media of claim 17, wherein the second image metric includes a histogram of the pixel intensity values associated with the training image.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0007] Other features of embodiments will be more readily understood from the following detailed description of specific embodiments thereof when read in conjunction with the accompanying drawings, in which:

    [0008] FIG. 1 is a block diagram that illustrates a communication network including an Artificial Intelligence (AI) system with a three-dimensional labeling capability using frame of reference projections in accordance with some embodiments of the inventive concept;

    [0009] FIG. 2 is a block diagram of the AI system of FIG. 1 in accordance with some embodiments of the inventive concept;

    [0010] FIGS. 3 and 4 are flowcharts that illustrate operations of three-dimensional labeling using frame of reference projections in accordance with some embodiments of the inventive concept;

    [0011] FIG. 5 is a diagram of an interface for drawing bounding boxes to define a three-dimensional volume for use in labeling in accordance with some embodiments of the inventive concept;

    [0012] FIG. 6 is a flowchart that illustrates further operations of three-dimensional labeling using frame of reference projections in accordance with some embodiments of the inventive concept;

    [0013] FIG. 7 is a diagram that illustrates projections of a three-dimensional volume onto two-dimensional images in accordance with some embodiments of the inventive concept;

    [0014] FIG. 8 is a data processing system that may be used to implement one or more servers in the AI system of FIG. 1 in accordance with some embodiments of the inventive concept;

    [0015] FIG. 9 illustrates a memory that may be used to facilitate three-dimensional labeling using frame of reference projections in accordance with some embodiments of the inventive concept; and

    [0016] FIG. 10 illustrates a memory that may be used to facilitate three-dimensional labeling using frame of reference projections in accordance with some embodiments of the inventive concept.

    DETAILED DESCRIPTION

    [0017] In the following detailed description, numerous specific details are set forth to provide a thorough understanding of embodiments of the present inventive concept. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present inventive concept.

    [0018] It is intended that all embodiments disclosed herein can be implemented separately or combined in any way and/or combination. Aspects described with respect to one embodiment may be incorporated in different embodiments although not specifically described relative thereto. That is, all embodiments and/or features of any embodiments can be combined in any way and/or combination.

    [0019] Embodiments of the inventive concept are described herein in the context of a prediction engine that includes a machine learning engine and an artificial intelligence (AI) engine. It will be understood that embodiments of the inventive concept are not limited to a machine learning implementation of the prediction engine and that other types of AI systems may be used including, but not limited to, a multi-layer neural network, a deep learning system, a natural language processing system, and/or a computer vision system. Moreover, it will be understood that the multi-layer neural network is a multi-layer artificial neural network comprising artificial neurons or nodes and does not include a biological neural network comprising real biological neurons.

    [0020] Some embodiments of the inventive concept stem from a realization that when labeling images in a dataset to train an AI system, many of the images may be related to the same item. For example, in medical imaging, many of the images of a magnetic resonance imaging (MRI) or computed tomography (CT) scan represent slices of the same three-dimensional volume, such as a body part. Rather than label each image individually, some embodiments of the inventive concept may provide a labeling platform in which a three-dimensional volume can be defined in the same frame of reference as a plurality of two-dimensional images. In the context of a medical application, the images may be two-dimensional images of a patient's body part. The three-dimensional volume may encompass images of the body part from multiple perspectives and may be assigned a label, such as the name of the body part. The three-dimensional volume may then be projected onto the respective ones of the plurality of two-dimensional images. An image metric may be determined for the two-dimensional images. For example, the amount of surface area of an image that falls inside the three-dimensional volume and the amount of surface area of the image that falls outside of the three-dimensional volume may be determined. When the amount of surface area of the image that falls inside the three-dimensional volume relative to a total surface area of the image exceeds a defined threshold, the image may be considered part of the same three-dimensional object, e.g., body part image, that is encompassed by the three-dimensional volume and, therefore, labeled with the label assigned to the three-dimensional volume. For example, when the three-dimensional volume encompasses a patient's hand, then all the two-dimensional images showing slices of the patient's hand from different cross-sectional perspectives can be automatically labeled with the same label as the three-dimensional volume, thereby avoiding the manual labeling process for numerous images. Image surface area is one image metric that can be used in determining whether to assign a label to a two-dimensional image. Other image metrics that may be used include, but are not limited to, a standard deviation of image pixel values and/or a histogram of image pixel values.
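    The surface-area metric described above may be sketched as follows. This is a minimal illustration, assuming the projection of the three-dimensional volume has already been rasterized into a boolean mask over the image; the function names and the 0.5 threshold are illustrative, not taken from the disclosure:

```python
import numpy as np

def inside_area_fraction(mask: np.ndarray) -> float:
    # Fraction of the image's pixels covered by the projection of the
    # three-dimensional volume; `mask` is True where the projection lands.
    return float(mask.sum()) / mask.size

def should_label(mask: np.ndarray, threshold: float = 0.5) -> bool:
    # Assign the volume's label when the inside-area ratio exceeds the
    # defined threshold.
    return inside_area_fraction(mask) > threshold

# A 10x10 slice whose left six columns fall inside the projected volume.
mask = np.zeros((10, 10), dtype=bool)
mask[:, :6] = True
print(inside_area_fraction(mask))  # 0.6
print(should_label(mask))          # True
```

    Because 60% of the slice's area falls inside the projection, the slice would receive the volume's label under a 0.5 threshold.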

    [0021] Referring to FIG. 1, a communication network 100 including an AI system with a three-dimensional labeling capability using frame of reference projections, in accordance with some embodiments of the inventive concept, comprises labeling entities 110a, 110b, and 110c that may use devices, such as computers, laptops, tablets, mobile communication devices (e.g., smart phones), and the like, to label records for use in training an AI system. The labeling entities 110a, 110b, and 110c may each represent a single person or multiple persons. For example, a labeling entity may be representative of a committee that works together in labeling records.

    [0022] An AI system may provide an AI labeling platform through use of a labeling interface server 130, which is communicatively coupled to an AI system server 140. Both the labeling interface server 130 and the AI system server 140 are coupled to a database 160, which contains the records to be labeled. The labeling interface server 130 may include a labeling interface module 135 that is configured to securely present or provide records from the database to the labeling entities 110a, 110b, and 110c for labeling. In some embodiments of the inventive concept, the labeling interface module 135 may provide a secure Web application that is configured to implement any security protocols associated with restricting access to the records in the database. For example, the handling of certain types of data may be controlled by a regulatory constraint of a governmental administrative authority. One such example is protected health information (PHI), which is protected under the Health Insurance Portability and Accountability Act (HIPAA). Thus, the labeling interface module 135 may ensure that only those labeling entities 110a, 110b, and 110c that possess the proper security qualifications (e.g., security qualifications that comply with any governmental regulatory constraint or private security policy) are allowed to view and label the data contained in the records stored in the database 160. In addition to authenticating the labeling entities 110a, 110b, and 110c, the labeling interface module 135 may further protect the database 160 with an electronic security access wall to ensure that the records in the database 160 are not exposed to any entity that is not authorized to access or view the information contained therein.

    [0023] In some embodiments the records in the database 160 may be images, such as, for example, images resulting from medical imaging applications. It will be understood, however, that embodiments of the inventive concept may be applied to other types of imaging applications including, but not limited to, manufacturing, construction, agriculture, security, or other applications where images may be labeled as three-dimensional objects or subjects. In medical imaging, for example, many of the images of a magnetic resonance imaging (MRI) or computed tomography (CT) scan may represent slices of the same three-dimensional volume, such as a body part. The labeling interface module 135 may present a plurality of two-dimensional images, which are in the same frame of reference, to one or more of the labeling entities 110a, 110b, and 110c. A labeling entity 110a, 110b, and 110c may define a three-dimensional volume by selecting two of the two-dimensional images and creating two-dimensional bounding boxes on the two-dimensional images, respectively. The two-dimensional bounding boxes may be in respective planes that intersect one another and can be used to define a three-dimensional volume based on their respective dimensions. The three-dimensional volume may then be assigned a label, which can be used to automatically label other images in the database 160 without the manual intervention or assistance of the labeling entities 110a, 110b, and 110c.
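    The construction of a three-dimensional volume from two bounding boxes in intersecting planes may be sketched as follows. For illustration only, the boxes are assumed to be axis-aligned and drawn in the x-y and x-z planes of the shared frame of reference; the function name and tuple layout are hypothetical:

```python
def volume_from_boxes(box_xy, box_xz):
    # box_xy = (x0, x1, y0, y1): bounding box drawn on an image lying in
    # the x-y plane; box_xz = (x0, x1, z0, z1): bounding box drawn on an
    # intersecting image lying in the x-z plane of the same frame of
    # reference. The shared x-extent is the intersection of the two
    # boxes' x-ranges.
    x0 = max(box_xy[0], box_xz[0])
    x1 = min(box_xy[1], box_xz[1])
    if x0 >= x1:
        raise ValueError("bounding boxes do not overlap in x")
    # Returns an axis-aligned 3D box (x0, x1, y0, y1, z0, z1).
    return (x0, x1, box_xy[2], box_xy[3], box_xz[2], box_xz[3])

print(volume_from_boxes((0, 10, 2, 8), (1, 12, 3, 9)))
# (1, 10, 2, 8, 3, 9)
```

    The y-extent comes from the first box, the z-extent from the second, and the x-extent from their overlap, yielding a volume defined by the boxes' respective dimensions.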

    [0024] The images that are manually labeled by the labeling entities 110a, 110b, and 110c or automatically labeled by the labeling interface server 130 and/or the AI system server 140 may be used to train the AI system module 145 running on the AI system server 140. It will be understood that the division of functionality described herein between the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 is an example. Various functionality and capabilities can be moved between the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 in accordance with different embodiments of the inventive concept. Moreover, in some embodiments, the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135 may be merged as a single logical and/or physical entity.

    [0025] A network 150 couples the labeling entities 110a, 110b, and 110c to the labeling interface server 130/labeling interface module 135. The network 150 may be a global network, such as the Internet or other publicly accessible network. Various elements of the network 150 may be interconnected by a wide area network, a local area network, an Intranet, and/or other private network, which may not be accessible by the general public. Thus, the communication network 150 may represent a combination of public and private networks or a virtual private network (VPN). The network 150 may be a wireless network, a wireline network, or may be a combination of both wireless and wireline networks.

    [0026] The AI system with the three-dimensional labeling capability using frame of reference projections service provided through the AI system server 140/AI system module 145 and the labeling interface server 130/labeling interface module 135, in some embodiments, may be embodied as a cloud service. In some embodiments, the AI system and labeling service may be implemented as a Representational State Transfer Web Service (RESTful Web service).

    [0027] Although FIG. 1 illustrates an example communication network including an AI system with a three-dimensional labeling capability using frame of reference projections, it will be understood that embodiments of the inventive subject matter are not limited to such configurations, but are intended to encompass any configuration capable of carrying out the operations described herein.

    [0028] FIG. 2 is a block diagram of the AI system of FIG. 1 in accordance with some embodiments of the inventive concept. As shown in FIG. 2, the AI system module 145 may comprise a machine learning engine 220 and an AI engine 230. The machine learning engine 220 may process records, i.e., labeled images 210 from the database 160 that include labels manually provided by the labeling entities 110a, 110b, and 110c along with labels automatically generated using the labeling interface server 130 and/or the AI system server 140. The labeled images 210 may include a three-dimensional volume that may be generated from a pair of two-dimensional bounding boxes drawn by the labeling entities 110a, 110b, and 110c. The labeling entities 110a, 110b, and 110c may assign a label to the three-dimensional volume, which can be used to automatically label other images in the database 160. Various types of image metrics may be used in determining whether to automatically label an image from the database 160. Such image metrics may include, but are not limited to, image surface area, a standard deviation of image pixel values, and/or a histogram of image pixel values. In some embodiments, the label projection module 250 may be configured to project the three-dimensional volume onto respective ones of the two-dimensional images contained in the database 160. Based on the projection, the amount of surface area of an image from the database 160 that falls inside the three-dimensional volume and the amount of surface area of the image that falls outside of the three-dimensional volume may be determined. The label assignment module 240 may automatically assign the label that was assigned to the three-dimensional volume to respective ones of the images from the database 160 having an amount of surface area that falls inside the three-dimensional volume relative to a total surface area of the image that exceeds a defined threshold. Thus, when a relatively high percentage of the surface area of an image falls within a three-dimensional volume of a labeled object or subject, for example, it can be assumed that the image corresponds to the same object or subject encompassed by the three-dimensional volume and can, therefore, be automatically assigned the same label.
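    The projection and label-assignment behavior attributed above to the label projection module 250 and the label assignment module 240 may be sketched as follows. This assumes, for illustration, an axis-aligned volume and a stack of z-slices in the shared frame of reference; the function and parameter names are illustrative, not from the disclosure:

```python
def label_slices(volume, slice_zs, img_w, img_h, label, threshold=0.5):
    # volume = (x0, x1, y0, y1, z0, z1) in the shared frame of reference;
    # slice i is an img_w-by-img_h image at height slice_zs[i]. Returns a
    # dict mapping slice index to the assigned label.
    x0, x1, y0, y1, z0, z1 = volume
    labels = {}
    total = img_w * img_h
    for i, z in enumerate(slice_zs):
        if not (z0 <= z <= z1):
            continue  # the volume does not project onto this slice
        # Clip the projected rectangle to the image bounds and compare
        # its area against the total image area.
        w = max(0.0, min(x1, img_w) - max(x0, 0.0))
        h = max(0.0, min(y1, img_h) - max(y0, 0.0))
        if w * h / total > threshold:
            labels[i] = label
    return labels

print(label_slices((0, 80, 0, 80, 2, 6), [0, 3, 5, 9], 100, 100, "hand"))
# {1: 'hand', 2: 'hand'}
```

    Only the slices at z = 3 and z = 5 lie within the volume's z-extent, and on each of those the projection covers 64% of the image, exceeding the 0.5 threshold.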

    [0029] The machine learning engine 220 may aggregate labels for one or more objects or subjects in an image to obtain a consensus label for the object or subject. The image including the labeled object or subject may then be used as a training record that can be used to train the decision making used in the AI engine 230. The machine learning engine 220 may use modeling techniques to evaluate the effects of various input data (e.g., labeled objects or subjects contained in the images) on the generated outputs. These effects may then be used to tune and refine the quantitative relationship between the labeled images in the training records from the database 160 and the generated outputs. The tuned and refined quantitative relationship between the labeled images in the training records generated by the machine learning engine 220 is output for use in the AI engine 230. The machine learning engine 220 may be referred to as a machine learning algorithm. The AI engine 230 may, in effect, be generated by the machine learning engine 220 in the form of the quantitative relationship determined between the labeled images in the training records and the generated outputs (e.g., predictions, answers to questions, classification of images, etc.). The AI engine 230 may be referred to as an AI model.
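    The label aggregation described above may be illustrated with a simple majority vote, one possible aggregation strategy; the disclosure does not specify the aggregation method, and the function name is hypothetical:

```python
from collections import Counter

def consensus_label(labels):
    # Aggregate labels from multiple labeling entities into a consensus
    # label by majority vote. Ties resolve to whichever most-common label
    # Counter returns first.
    return Counter(labels).most_common(1)[0][0]

print(consensus_label(["hand", "hand", "wrist"]))  # hand
```

    More elaborate schemes (e.g., weighting labelers by historical accuracy) could be substituted without changing the surrounding training flow.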

    [0030] The AI engine 230 may be used to process new images 260 from the database 160 or other source locations to classify the subject or objects contained therein based on the quantitative relationships generated during the training process described above. The classification module 270 may be configured to communicate the classification of an image to a user or other destination.

    [0031] FIGS. 3 and 4 are flowcharts that illustrate operations of three-dimensional labeling using frame of reference projections in accordance with some embodiments of the inventive concept. Referring now to FIG. 3, operations begin at block 300 where the labeling interface module 135 receives a plurality of images from the database 160 for labeling by one or more of the labeling entities 110a, 110b, and 110c. A labeling entity 110a, 110b, and 110c may define a three-dimensional volume in the same frame of reference as the plurality of images, which may be received at block 305. In the context of a medical application, the images may be two-dimensional images of a patient's body part and the three-dimensional volume may encompass the body part.

    [0032] Referring to FIGS. 4 and 5, operations for defining the three-dimensional volume, according to some embodiments of the inventive concept, begin at block 400 where, as shown in FIG. 5, a labeling entity 110a, 110b, and 110c may define a three-dimensional volume by creating a first two-dimensional bounding box 505 on the two-dimensional image 515. At block 405, a labeling entity 110a, 110b, and 110c creates a second two-dimensional bounding box 510 on the two-dimensional image 520. Image 515 is a cross-sectional view from the perspective of a top of the patient's hand while image 520 is a cross-sectional view from the perspective of the patient's fingers and thumb pointing at the camera. The two-dimensional bounding boxes 505 and 510 are in respective planes that intersect one another and can be used to define a three-dimensional volume based on their respective dimensions at block 410.

    [0033] Returning to FIG. 3, a labeling entity 110a, 110b, and 110c may assign the three-dimensional volume a label, which may be received at block 310. The three-dimensional volume may be projected onto the respective ones of the plurality of two-dimensional images from the database 160 at block 315. This projection is illustrated, for example, in FIG. 7, which highlights, with thicker lines, the boundaries of a defined three-dimensional box that are projected onto a sequence of eight two-dimensional images showing various perspective views of a patient's hand. A determination is made at block 320 of the amount of surface area of the two-dimensional image that falls inside the three-dimensional volume relative to a total amount of surface area of the two-dimensional image. A determination is then made at block 325 whether to assign the label corresponding to the three-dimensional volume to respective ones of the images from the database 160 based on the amount of surface area contained within the volume relative to the total surface area of the image.

    [0034] Referring now to FIG. 6, in some embodiments, the label assigned to the three-dimensional volume may be assigned to one of the images from the database 160 when the amount of surface area contained within the three-dimensional volume relative to a total surface area of the image exceeds a surface area percentage threshold at block 600. For example, when the three-dimensional volume encompasses images of a patient's hand, then all the two-dimensional images showing slices of the patient's hand from different cross-sectional perspectives using the same frame of reference can be automatically labeled with the same label as the three-dimensional volume, thereby avoiding the manual labeling process for numerous images. The surface area percentage threshold can be adjusted based on accuracy/error rates, the types of subjects or objects being labeled, or other factors.

    [0035] FIGS. 3 and 6 illustrate example embodiments of the inventive concept in which image surface area is used as an image metric in assigning a label to a two-dimensional image. Other image metrics may be used in place of, or in addition to, the image surface area metric. These image metrics may include, but are not limited to, a standard deviation of image pixel values and/or a histogram of image pixel values. A number and/or intensity of pixel values of an image that fall within a three-dimensional volume may be used to determine whether to assign a label to the image. The number and/or intensity of the pixel values may be compared to a threshold to determine whether the label should be assigned. A histogram of pixel values for an image that shows the distribution of pixels falling inside and outside of a three-dimensional volume may also be used to determine whether to assign a label to the image.
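    The pixel-intensity metrics mentioned above may be sketched as follows. This is a minimal illustration, assuming the region of the image falling inside the projected volume is given as a boolean mask; the bin count and value range are illustrative choices, not from the disclosure:

```python
import numpy as np

def pixel_metrics(image, inside_mask):
    # Secondary image metrics based on pixel intensity values: the
    # standard deviation of the pixels falling inside the projected
    # volume, and a histogram of those pixels (here, 8 equal-width bins
    # over an assumed 8-bit intensity range).
    inside = image[inside_mask]
    std = float(inside.std()) if inside.size else 0.0
    hist, _ = np.histogram(inside, bins=8, range=(0, 256))
    return std, hist

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 2:8] = True
std, hist = pixel_metrics(image, mask)
print(hist.sum())  # 36, one count per pixel inside the mask
```

    Either metric could then be compared against its own threshold, alongside the surface-area ratio, when deciding whether to assign the volume's label.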

    [0036] Referring now to FIG. 8, a data processing system 800 that may be used to implement the labeling interface server 130 and/or the AI system server 140 of FIG. 1, in accordance with some embodiments of the inventive concept, comprises input device(s) 802, such as a keyboard or keypad, a display 804, and a memory 806 that communicate with a processor 808. The data processing system 800 may further include a storage system 810, a speaker 812, and input/output (I/O) data port(s) 814 that also communicate with the processor 808. The processor 808 may be, for example, a commercially available or custom microprocessor. The storage system 810 may include removable and/or fixed media, such as floppy disks, ZIP drives, hard disks, or the like, as well as virtual storage, such as a RAMDISK. The I/O data port(s) 814 may be used to transfer information between the data processing system 800 and another computer system or a network (e.g., the Internet). These components may be conventional components, such as those used in many conventional computing devices, and their functionality, with respect to conventional operations, is generally known to those skilled in the art. The memory 806 may be configured with computer readable program code 816 to facilitate three-dimensional labeling using frame of reference projections according to some embodiments of the inventive concept.

    [0037] FIG. 9 illustrates a memory 905 that may be used in embodiments of data processing systems, such as the labeling interface server 130 of FIG. 1 and the data processing system 800 of FIG. 8, respectively, to facilitate three-dimensional labeling using frame of reference projections according to some embodiments of the inventive concept. The memory 905 is representative of the one or more memory devices containing the software and data used for facilitating operations of the labeling interface server 130 and the labeling interface module 135 as described herein. The memory 905 may include, but is not limited to, the following types of devices: cache, ROM, PROM, EPROM, EEPROM, flash, SRAM, and DRAM. As shown in FIG. 9, the memory 905 may contain four or more categories of software and/or data: an operating system 910, a user interface module 915, a labeling manager module 920, and a communication module 940. In particular, the operating system 910 may manage the data processing system's software and/or hardware resources and may coordinate execution of programs by the processor. The user interface module 915 may be configured to perform one or more of the operations described above with respect to the labeling interface server 130, the labeling interface module 135, the flowcharts of FIGS. 3, 4, and 6, and the diagrams of FIGS. 5 and 7. The labeling manager module 920 may be configured to perform one or more of the operations described above with respect to the labeling interface server 130, the labeling interface module 135, the flowcharts of FIGS. 3, 4, and 6, and the diagrams of FIGS. 5 and 7. The communication module 940 may be configured to support communication between, for example, the labeling interface server 130, the AI system server 140, the labeling entities 110a, 110b, and 110c, and/or the database 160.

    [0038] FIG. 10 illustrates a memory 1005 that may be used in embodiments of data processing systems, such as the AI system server 140 of FIG. 1 and the data processing system 800 of FIG. 8, respectively, to facilitate three-dimensional labeling using frame of reference projections according to some embodiments of the inventive concept. The memory 1005 is representative of the one or more memory devices containing the software and data used for facilitating operations of the AI system server 140 and the AI system module 145 as described herein. The memory 1005 may include, but is not limited to, the following types of devices: cache, ROM, PROM, EPROM, EEPROM, flash, SRAM, and DRAM. As shown in FIG. 10, the memory 1005 may contain five or more categories of software and/or data: an operating system 1010, a volume and label projection module 1015, a training module 1025, which includes a machine learning engine module 1030 and an AI engine module 1035, a classification module 1040, and a communication module 1045. In particular, the operating system 1010 may manage the data processing system's software and/or hardware resources and may coordinate execution of programs by the processor. The volume and label projection module 1015 may be configured to perform one or more of the operations described above with respect to the AI server 140, the machine learning engine 220, the label projection module 250, the label assignment module 240, the flowcharts of FIGS. 3, 4, and 6, and the diagrams of FIGS. 5 and 7. The machine learning engine module 1030 may be configured to perform one or more of the operations described above with respect to the AI server 140, the machine learning engine 220, the flowcharts of FIGS. 3, 4, and 6, and the diagrams of FIGS. 5 and 7. The AI engine module 1035 may be configured to perform one or more of the operations described above with respect to the AI server 140, the AI engine 230, the flowcharts of FIGS.
3, 4, and 6, and the diagrams of FIGS. 5 and 7. The classification module 1040 may be configured to perform one or more of the operations described above with respect to the AI server 140, the machine learning engine 220, AI engine 230, the classification module 270, the flowcharts of FIGS. 3, 4, and 6, and the diagrams of FIGS. 5 and 7. The communication module 1045 may be configured to support communication between, for example, the AI system server 140 and the labeling interface server 130 and/or the database 160.

    [0039] Although FIGS. 9 and 10 illustrate hardware/software architectures that may be used in data processing systems, such as the labeling interface server 130 of FIG. 1, the AI system server 140 of FIG. 1, and the data processing system 800 of FIG. 8, in accordance with some embodiments of the inventive concept, it will be understood that embodiments of the present invention are not limited to such a configuration but are intended to encompass any configuration capable of carrying out the operations described herein.

    [0040] Computer program code for carrying out operations of data processing systems discussed above with respect to FIGS. 1 - 10 may be written in a high-level programming language, such as Python, Java, C, and/or C++, for development convenience. In addition, computer program code for carrying out operations of the present invention may also be written in other programming languages, such as, but not limited to, interpreted languages. Some modules or routines may be written in assembly language or even micro-code to enhance performance and/or memory usage. It will be further appreciated that the functionality of any or all of the program modules may also be implemented using discrete hardware components, one or more application specific integrated circuits (ASICs), or a programmed digital signal processor or microcontroller.

    [0041] Moreover, the functionality of the labeling interface server 130 of FIG. 1, the AI system server 140 of FIG. 1, and the data processing system 800 of FIG. 8 may each be implemented as a single processor system, a multi-processor system, a multi-core processor system, or even a network of stand-alone computer systems, in accordance with various embodiments of the inventive concept. Each of these processor/computer systems may be referred to as a "processor" or "data processing system."

    [0042] The data processing apparatus described herein with respect to FIGS. 1 - 10 may be used to facilitate three-dimensional labeling using frame of reference projections according to some embodiments of the inventive concept described herein. These apparatus may be embodied as one or more enterprise, application, personal, pervasive and/or embedded computer systems and/or apparatus that are operable to receive, transmit, process and store data using any suitable combination of software, firmware and/or hardware and that may be standalone or interconnected by any public and/or private, real and/or virtual, wired and/or wireless network including all or a portion of the global communication network known as the Internet, and may include various types of tangible, non-transitory computer readable media. In particular, the memory 905 and the memory 1005, when coupled to a processor, include computer readable program code that, when executed by the processor, causes the processor to perform operations including one or more of the operations described herein with respect to FIGS. 1 - 7.

    [0043] Some embodiments of the inventive concept may provide an AI system in which image data may be labeled more efficiently by reducing the amount of manual labeling involved for images that may be associated with the same subject or object. A three-dimensional volume may be defined that encompasses images of the subject or object from multiple perspectives, and the three-dimensional volume may be assigned a label. Many of the two-dimensional images to be labeled may be cross-sectional slices and/or different perspective views of the subject or object encompassed in the three-dimensional volume. The three-dimensional volume can be projected onto the various images to be labeled and, based on the amount of surface area of the image that falls inside the three-dimensional volume relative to the total surface area of the image, the image may be automatically labeled with the same label assigned to the three-dimensional volume without the need for manual intervention. The threshold for how much of an image's surface area needs to fall within the three-dimensional volume for the image to qualify for automatic labeling using the three-dimensional volume can be adjusted based on accuracy/error rates, the types of subjects or objects being labeled, or other factors.
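    The automatic-labeling check described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the function name `auto_label`, the default threshold values, the use of a precomputed boolean mask for the projected volume, and the use of a pixel-intensity standard deviation as the second image metric are all choices made for the example.

```python
import numpy as np

def auto_label(image, volume_mask_2d, label, area_threshold=0.8, std_threshold=5.0):
    """Return `label` if the image qualifies for automatic labeling, else None.

    volume_mask_2d: boolean mask marking which pixels of `image` fall
    inside the projection of the labeled three-dimensional volume.
    """
    # First image metric: fraction of the image's surface area that
    # falls inside the projected three-dimensional volume.
    overlap_fraction = volume_mask_2d.mean()

    # Second image metric: standard deviation of the pixel intensity
    # values (a very low value may indicate an empty or uninformative slice).
    intensity_std = image.std()

    # Both metrics must satisfy their respective thresholds for the
    # image to inherit the label assigned to the three-dimensional volume.
    if overlap_fraction >= area_threshold and intensity_std >= std_threshold:
        return label
    return None  # falls back to manual labeling

# Usage: a 100x100 slice lying entirely inside the projected volume,
# with varied pixel intensities.
rng = np.random.default_rng(0)
slice_img = rng.integers(0, 256, size=(100, 100)).astype(float)
mask = np.ones((100, 100), dtype=bool)
print(auto_label(slice_img, mask, "label-A"))  # prints "label-A"
```

    Raising `area_threshold` trades labeling coverage for accuracy, which mirrors the adjustable threshold described in the paragraph above.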

    [0044] Further Definitions and Embodiments:

    [0045] In the above description of various embodiments of the present inventive concept, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this inventive concept belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

    [0046] The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present inventive concept. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

    [0047] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the inventive concept. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Like reference numbers signify like elements throughout the description of the figures.

    [0048] In the above description of various embodiments of the present inventive concept, aspects of the present inventive concept may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present inventive concept may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a "circuit," "module," "component," or "system." Furthermore, aspects of the present inventive concept may take the form of a computer program product comprising one or more computer readable media having computer readable program code embodied thereon.

    [0049] Any combination of one or more computer readable media may be used. The computer readable media may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an appropriate optical fiber with a repeater, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

    [0050] The description of the present inventive concept has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the inventive concept in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the inventive concept. The aspects of the inventive concept herein were chosen and described to best explain the principles of the inventive concept and the practical application, and to enable others of ordinary skill in the art to understand the inventive concept with various modifications as are suited to the particular use contemplated.