Method for managing annotation job, apparatus and system supporting the same
11062800 · 2021-07-13
Assignee
Inventors
CPC classification
G06F18/217
PHYSICS
G16H50/20
PHYSICS
G06V10/7788
PHYSICS
G16H10/40
PHYSICS
G16H40/20
PHYSICS
G06V10/7753
PHYSICS
G06V10/22
PHYSICS
International classification
G06Q10/06
PHYSICS
G16H40/20
PHYSICS
Abstract
A computing device obtains information about a medical slide image, and determines a dataset type of the medical slide image and a panel of the medical slide image. The computing device assigns, to an annotator account, an annotation job defined by at least the medical slide image, the determined dataset type, an annotation task, and a patch that is a partial area of the medical slide image. The annotation task includes the determined panel, and the panel is designated as one of a plurality of panels including a cell panel, a tissue panel, and a structure panel. The dataset type indicates a use of the medical slide image and is designated as one of a plurality of uses including a training use of a machine learning model and a validation use of the machine learning model.
Claims
1. An annotation job management method performed by at least one computing device, comprising: obtaining a medical slide image that is an annotation target; determining at least one of a dataset type and a panel type of the obtained medical slide image; selecting at least one annotation job target patch from among a plurality of candidate patches which are included in the medical slide image, based on at least one of the determined dataset type and panel type; assigning the at least one selected annotation job target patch to at least one annotator account; obtaining an annotation result for the at least one annotation job target patch from the assigned annotator account; and training or validating a machine learning model using a dataset generated based on the annotation result, wherein the panel type indicates at least one of a cell panel, a tissue panel, and a structure panel.
2. The annotation job management method of claim 1, wherein selecting the at least one annotation job target patch comprises: selecting the plurality of candidate patches sampled from the medical slide image; calculating at least one of a confidence score and an entropy value of each of the plurality of candidate patches; and selecting the at least one annotation job target patch based on the at least one of the confidence score and the entropy value calculated for each of the plurality of candidate patches.
3. The annotation job management method of claim 2, wherein selecting the plurality of candidate patches comprises: dividing at least a part of the medical slide image based on information associated with the medical slide image; and selecting the plurality of candidate patches from the divided at least part.
4. The annotation job management method of claim 1, wherein selecting the at least one annotation job target patch comprises: selecting the plurality of candidate patches sampled from the medical slide image; calculating a misprediction probability of the machine learning model for each of the plurality of candidate patches; and selecting the at least one annotation job target patch from among the plurality of candidate patches based on the calculated misprediction probability.
5. The annotation job management method of claim 1, wherein assigning the at least one annotation job target patch to the at least one annotator account comprises assigning the at least one annotation job target patch to the at least one annotator account based on at least one of the determined dataset type and panel and an annotation performance history of an annotator.
6. The annotation job management method of claim 1, further comprising: comparing the obtained annotation result with a result of the machine learning model for the at least one annotation job target patch; and determining whether to reassign the at least one annotation job target patch based on a compared result.
7. The annotation job management method of claim 1, wherein the at least one annotator account includes a plurality of annotator accounts, wherein obtaining the annotation result includes obtaining annotation results for the at least one annotation job target patch from the plurality of annotator accounts assigned for the at least one annotation job target patch, and wherein the annotation job management method further comprises: comparing the annotation results of the plurality of annotator accounts; and determining whether to reassign the at least one annotation job target patch based on a compared result.
8. The annotation job management method of claim 1, wherein the dataset type indicates a use of the medical slide image, and the use of the medical slide image includes at least one of a training use of the machine learning model and a validation use of the machine learning model.
9. The annotation job management method of claim 1, wherein determining the at least one comprises determining at least one of the dataset type and panel type of the medical slide image based on an output value outputted by inputting the medical slide image to the machine learning model.
10. An annotation job management apparatus comprising: a memory including one or more instructions; and a processor that, by executing the one or more instructions: obtains a medical slide image that is an annotation target; determines at least one of a dataset type and a panel type of the obtained medical slide image; selects at least one annotation job target patch from among a plurality of candidate patches which are included in the medical slide image, based on at least one of the determined dataset type and panel type; assigns the at least one selected annotation job target patch to at least one annotator account; obtains an annotation result for the at least one annotation job target patch from the assigned annotator account; and trains or validates a machine learning model using a training dataset generated based on the annotation result, wherein the panel type indicates at least one of a cell panel, a tissue panel, and a structure panel.
11. The annotation job management apparatus of claim 10, wherein the processor: selects the plurality of candidate patches sampled from the medical slide image; calculates at least one of a confidence score and an entropy value of each of the plurality of candidate patches; and selects the at least one annotation job target patch based on the at least one of the confidence score and the entropy value calculated for each of the plurality of candidate patches.
12. The annotation job management apparatus of claim 11, wherein the processor: divides at least a part of the medical slide image based on information associated with the medical slide image; and selects the plurality of candidate patches from the divided at least part.
13. The annotation job management apparatus of claim 10, wherein the processor: selects the plurality of candidate patches sampled from the medical slide image; calculates a misprediction probability of the machine learning model for each of the plurality of candidate patches; and selects the at least one annotation job target patch from among the plurality of candidate patches based on the calculated misprediction probability.
14. The annotation job management apparatus of claim 10, wherein the processor assigns the at least one annotation job target patch to the at least one annotator account based on at least one of the determined dataset type and panel and an annotation performance history of an annotator.
15. The annotation job management apparatus of claim 10, wherein the processor: compares the obtained annotation result with a result of the machine learning model for the at least one annotation job target patch; and determines whether to reassign the at least one annotation job target patch based on a compared result.
16. The annotation job management apparatus of claim 10, wherein the at least one annotator account includes a plurality of annotator accounts, and wherein the processor: obtains annotation results for the at least one annotation job target patch from the plurality of annotator accounts assigned for the at least one annotation job target patch; compares the annotation results of the plurality of annotator accounts; and determines whether to reassign the at least one annotation job target patch based on a compared result.
17. The annotation job management apparatus of claim 10, wherein the dataset type indicates a use of the medical slide image, and the use of the medical slide image includes at least one of a training use of the machine learning model and a validation use of the machine learning model.
18. The annotation job management apparatus of claim 10, wherein the processor determines at least one of the dataset type and panel type of the medical slide image based on an output value outputted by inputting the medical slide image to the machine learning model.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) FIG.1 is an example diagram for explaining a relationship between supervised learning and an annotation job.
DETAILED DESCRIPTION OF THE EMBODIMENTS
(15) Hereinafter, preferred embodiments of the present disclosure will be described with reference to the attached drawings. Advantages and features of the present disclosure and methods of accomplishing the same may be understood more readily by reference to the following detailed description of preferred embodiments and the accompanying drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the disclosure to the person of ordinary skill in the art, and the present disclosure will only be defined by the appended claims. Like reference numerals designate like elements throughout the specification.
(16) Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by the person of ordinary skill in the art to which this disclosure belongs. Further, it will be understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The terms used herein are for the purpose of describing particular embodiments only and are not intended to be limiting. As used herein, the singular forms a, an and the are intended to include the plural forms as well, unless the context clearly indicates otherwise.
(17) It will be further understood that, although the terms first, second, A, B, (a), (b), and the like may be used herein to describe various elements, components, steps and/or operations, these terms are only used to distinguish one element, component, step or operation from another element, component, step, or operation. Thus, a first element, component, step or operation discussed below could be termed a second element, component, step or operation without departing from the teachings of the present inventive concept. It will be further understood that when an element is referred to as being connected to or coupled with another element, it can be directly connected or coupled with the other element or intervening elements may be present.
(18) It will be further understood that the terms comprise or comprising, include or including, and have or having specify the presence of stated elements, steps, operations, and/or devices, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or devices.
(19) Before description of this specification, some terms used herein will be clarified.
(20) As used herein, the term label information refers to correct-answer information for a data sample, obtained through an annotation job. The term label may be used interchangeably with terms such as annotation and tag.
(21) As used herein, the term annotation refers to a job for tagging label information on a data sample or tagged information (i.e., annotation) itself. The term annotation may be used interchangeably with terms such as tagging and labeling.
(22) As used herein, the term misprediction probability refers to a probability or possibility that a prediction result includes an error (that is, a probability that the prediction is incorrect) when a specific model performs prediction for a given data sample.
(23) As used herein, the term panel refers to a type of patch to be extracted from a medical slide image or a type of the medical slide image itself. The panel may be classified into a cell panel, a tissue panel, and a structure panel, but the technical scope of the present disclosure is not limited thereto. For reference, examples of patches corresponding to each type of panels are shown in
(24) As used herein, the term instruction refers to a series of instructions grouped by function and executed by a processor, or to a component of a computer program.
(25) Hereinafter, some embodiments of the present disclosure are described in detail with reference to the accompanying drawings.
(27) Referring to
(28) The elements of the system shown in
(29) In the annotation job management system, the storage server 10 is a server that stores and manages various data associated with an annotation job. In order to manage data effectively, the storage server 10 may store and manage the various data using a database.
(30) The various data may include a medical slide image file, metadata of the medical slide image (e.g. an image format, a name of related disease, a related tissue, related patient information, and the like), data about the annotation job, data about an annotator, and a result of the annotation job. However, the technical concept of the present disclosure is not limited thereto.
(31) In some embodiments, the storage server 10 may function as a web server that presents a job management web page. In this case, through the job management web page, an administrator may perform job management such as assigning an annotation job, and the annotator may recognize and perform the assigned job.
(32) In some embodiments, a data model (e.g. DB schema) for the annotation job management may be designed as shown in
(33) A slide entity 45 is an entity related to a medical slide image. The slide entity 45 may have various information associated with the medical slide image as attributes. Since a plurality of annotation jobs may be generated from one medical slide image, a relationship between the slide entity 45 and the job entity 44 is 1:n.
(34) A dataset entity 49 is an entity that represents a use of the annotated data. For example, the use may be classified into a training use (that is, utilized as a training dataset), a validation use (that is, utilized as a validation dataset), a test use (that is, utilized as a test dataset), or an OPT (Observer Performance Test) use (that is, utilized in the OPT test), but the technical concept of the present disclosure is not limited thereto.
(35) An annotator entity 47 is an entity that represents an annotator. The annotator entity 47 may have attributes such as a current job status of the annotator, a history of previously performed jobs, evaluation results of those jobs, personal information of the annotator (e.g., education, major, etc.), and the like. Since one annotator can perform multiple jobs, a relationship between the annotator entity 47 and the job entity 44 is 1:n.
(36) A patch entity 46 is an entity associated with a patch derived from the medical slide image. Since the patch may include a plurality of annotations, a relationship between the patch entity 46 and an annotation entity 48 is 1:n. Furthermore, since one annotation job may be performed on a plurality of patches, a relationship between the patch entity 46 and the job entity 44 is n:1.
(37) An annotation task entity 43 represents an annotation task that is a detailed type of the annotation job. For example, the annotation task is defined and categorized as a task of tagging whether a specific cell is a mitotic cell, a task of tagging the number of mitotic cells, a task of tagging a type of a lesion, a task of tagging a location of the lesion, a task of tagging a name of disease, or the like. The detailed types of the annotation job can vary according to panels (that is, an annotation tagged on the cell panel may be different from an annotation tagged on the tissue panel). Since different tasks can be performed on the same panel, the task entity 43 may have a panel entity 41 and a task class entity 42 as attributes. Here, the task class entity 42 represents an annotation target (e.g., the mitotic cell, the location of the lesion) defined from a perspective of the panel or an annotation type defined from a perspective of the panel. Since multiple annotation jobs can be generated from one annotation task (that is, there can be multiple jobs performing the same task), a relationship between the annotation task entity 43 and the annotation job entity 44 is 1:n. From a programming point of view, the annotation task entity 43 may correspond to a class or a program, and the annotation job entity 44 may correspond to an instance of the class or to a process generated through execution of the program.
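The class/instance analogy and the 1:n relationships described above might be sketched with Python dataclasses as follows. All class names, field names, and values here are illustrative assumptions, not the patent's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnnotationTask:
    """Detailed type of annotation job; combines a panel and a task class."""
    panel: str        # e.g. "cell", "tissue", "structure"
    task_class: str   # e.g. "mitotic_cell", "lesion_location"

@dataclass
class AnnotationJob:
    """One job generated from a task (task : job = 1 : n)."""
    task: AnnotationTask
    slide_id: str      # slide : job = 1 : n
    annotator_id: str  # annotator : job = 1 : n
    # One job may cover several patches (patch : job = n : 1).
    patch_ids: List[str] = field(default_factory=list)

# One task can spawn several jobs, mirroring the class/instance analogy.
task = AnnotationTask(panel="cell", task_class="mitotic_cell")
jobs = [AnnotationJob(task=task, slide_id="slide-1", annotator_id=a)
        for a in ("ann-1", "ann-2")]
```

In this sketch the `AnnotationTask` plays the role of the class, and each `AnnotationJob` is an instance generated from it, as the paragraph above describes.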
(38) In some embodiments, the storage server 10 may build a database based on the data model described above, and may systematically manage various data associated with the annotation job. As a result, the data management cost can be reduced and the overall work process of annotation can be facilitated.
(39) The data model for the annotation job management has been described above. Next, each element of the annotation job management system is continuously described with reference to
(40) In the annotation job management system, the annotation job management apparatus 100 is a computing device that performs various management functions such as assigning annotation jobs to the annotator terminals 20-1 to 20-n. Here, the computing device may be a tablet, a desktop, a laptop, or the like but is not limited thereto, and may include any kind of device having computing functions. An example of the computing device will be described with reference to
(41) The job management apparatus 100 may be a device used by an administrator. For example, through the job management apparatus 100, the administrator may access the job management web page, log in with an administrator account, and then perform overall management on the annotation jobs. For example, the manager may perform management such as assigning an annotation job to a specific annotator or requesting a review by transmitting the annotation result to the reviewer's account. The overall management process as described above may be automatically performed by the job management apparatus 100, which will be described with reference to
(42) In the annotation job management system, the annotator terminal 20 is a terminal on which the annotation task is performed by the annotator. An annotation tool may be installed in the terminal 20. Various functions for annotation may be provided through the job management web page. In this case, the annotator may access the job management web page through the terminal 20 and then perform the annotation job on the web. An example of the annotation tool is shown in
(43) In the annotation job management system, the reviewer terminal 30 is a terminal of the reviewer to perform a review on the annotation result. The reviewer may review the annotation result using the reviewer terminal 30, and provide a review result to the management apparatus 100.
(44) In some embodiments, at least some elements of the annotation job management system may communicate over a network. Here, the network may be implemented by using any type of wired/wireless network such as a local area network (LAN), a wide area network (WAN), a mobile radio communication network, a wireless broadband Internet (Wibro), or the like.
(45) The annotation job management system according to some embodiments of the present disclosure has been described above with reference to
(46) Each step of the annotation job management method may be performed by a computing device. In other words, each step of the annotation job management method may be implemented as one or more instructions to be executed by a processor of the computing device. For convenience of understanding, the description is continued on the assumption that the annotation job management method is performed in an environment shown in
(48) As shown in
(49) In some embodiments, the information about the new medical slide image may be obtained through a worker agent in real time. More specifically, the worker agent may detect that the medical slide image file is added on the storage that is designated by the worker agent (that is, a storage server 10 or a storage of a medical institution providing the medical slide image). In addition, the worker agent may insert the information about the new medical slide image into a database of the job management apparatus 100 or the storage server 10. Then, the information about the new medical slide image may be obtained from the database.
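The worker agent's detection step could be sketched as a simple polling scan over the designated storage. The file extensions and function name below are assumptions for illustration; a real agent might instead use filesystem events or a database trigger:

```python
import os

def detect_new_slides(storage_dir, known_files):
    """Return slide image files added to the storage since the last scan.

    known_files is a set that the caller keeps across scans; it is
    updated in place so that each file is reported only once.
    """
    current = {f for f in os.listdir(storage_dir)
               if f.endswith((".svs", ".tiff"))}  # assumed slide formats
    new_files = current - known_files
    known_files |= current
    return sorted(new_files)
```

Each detected file would then be inserted into the database of the job management apparatus 100 or the storage server 10, from which the information about the new medical slide image is obtained.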
(50) In step S200, the management apparatus 100 generates an annotation job for the medical slide image. Here, the annotation job may be defined based on information such as the medical slide image, a dataset type, an annotation task, and a patch which is a partial area of the medical slide image (that is, an annotation target area) (refer to
(51) In step S300, the management apparatus 100 selects an annotator to perform the generated annotation job.
(52) In some embodiments, as shown in
(53) In one embodiment, whether the job performance history includes the job associated with the generated annotation job may be determined based on whether a combination of a dataset type and a panel of an annotation task in each job is similar to a combination of those in the generated annotation job. In another embodiment, it may be determined based on whether a combination of the panel and a task class of the annotation task in each job is similar to a combination of those in the generated annotation job. In yet another embodiment, it may be determined based on whether the above-described two combinations are similar to those in the generated annotation job.
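The combination-matching idea above can be sketched as follows. The dictionary keys and the fallback behavior are illustrative assumptions; the patent also describes matching on the (panel, task class) combination or on both combinations:

```python
def has_relevant_history(job_history, new_job):
    """True if the history contains a job whose (dataset type, panel)
    combination matches that of the newly generated job."""
    target = (new_job["dataset_type"], new_job["panel"])
    return any((j["dataset_type"], j["panel"]) == target for j in job_history)

def select_annotators(annotators, new_job):
    """Prefer annotators with a relevant job history; if none exists,
    fall back to the full annotator pool."""
    relevant = [a for a in annotators
                if has_relevant_history(a["history"], new_job)]
    return relevant or annotators
```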
(54) In some embodiments, when the new medical slide image is significant data (e.g. a slide image associated with rare disease, a high quality slide image, and the like.), a plurality of annotators may be selected. In addition, the number of the annotators may be increased in proportion to the significance. In this case, verification of the annotation results may be performed by comparing the respective annotation results of the plurality of annotators with each other. According to the present embodiment, through more strict verification on the significant data, the accuracy of annotation results can be improved.
(55) In step S400, the management apparatus 100 assigns the annotation job to the terminal 20 of the selected annotator. For example, the management apparatus 100 may assign the annotation job to an account of the selected annotator.
(56) In step S500, an annotation is performed on the annotator terminal 20. The annotator may perform the annotation using an annotator tool provided in the terminal 20 or an annotation service provided through web (e.g., a job management web page), but the technical scope of the present disclosure is not limited thereto.
(57) Some examples of the annotator tools are shown in
(58) It should be understood that the annotation tool 60 shown in
(59) In step S600, the annotator terminal 20 provides a result of the annotation job. The result of the annotation job may be label information tagged to the corresponding patch.
(60) In step S700, the management apparatus 100 performs validation (evaluation) on the job result. The validation result may be recorded as an evaluation result of the annotator. A method for validation may vary according to embodiments.
(61) In some embodiments, the validation may be performed based on an output of a machine learning model. Specifically, when a first annotation result data is acquired from the annotator assigned to the job, the first annotation result data may be compared with a result obtained by inputting a patch of the annotation job to the machine learning model. As a result of the comparison, if it is determined that a difference between the two results exceeds a reference value, the first annotation result data may be suspended or disapproved.
(62) In one embodiment, the reference value may be a predetermined fixed value or a variable value that depends on a condition. For example, the reference value may decrease as the accuracy of the machine learning model increases.
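The validation step described above can be sketched as follows. The linear scaling of the reference value by model accuracy is an assumed example, not the patent's formula:

```python
def validate_annotation(annotation, model_prediction, model_accuracy,
                        base_reference=0.5):
    """Approve an annotation if it is close enough to the model's output.

    The reference value shrinks as the model becomes more accurate, so
    a highly accurate model tolerates less disagreement. Returns True
    to approve; False means the annotation may be suspended or
    disapproved.
    """
    reference = base_reference * (1.0 - model_accuracy)
    difference = abs(annotation - model_prediction)
    return difference <= reference
```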
(63) In step S800, the management apparatus 100 determines whether the annotation job should be performed again. For example, when the validation result indicates that the annotation job is not performed successfully in step S700, the management apparatus 100 may determine that the annotation job needs to be performed again.
(64) In step S900, in response to the determination, the management apparatus 100 selects another annotator and reassigns the annotation job to the other annotator. In this case, the other annotator may be selected in a similar way as described in step S300. Alternatively, the other annotator may be a reviewer or a machine learning model having the best performance.
(65) Although not shown in
(66) An annotation job management method according to some embodiments of the present disclosure has been described above with reference to
(67) Furthermore, the accuracy of the annotation results can be ensured by comparing and validating the results of the annotation job with those of the machine learning model or those of other annotators. Accordingly, the performance of the machine learning model that learns the annotation results can also be improved.
(68) Hereinafter, a detailed process of the annotation job generation step S200 is described with reference to
(70) As shown in
(71) In some embodiments, the dataset type may be determined by the administrator.
(72) In some embodiments, the dataset type may be determined based on a confidence score of a machine learning model for medical slide images. Here, the machine learning model refers to a model (that is, a learning target model) that performs a specific task (that is, lesion classification, lesion location recognition, or the like) based on the medical slide images. Details of the present embodiment are shown in
(73) In some embodiments, the dataset type may be determined based on an entropy value of the machine learning model for the medical slide image. The entropy value is an indicator of uncertainty and may have a larger value as the confidence scores are distributed more evenly over the classes. In this embodiment, in response to a determination that the entropy value is greater than or equal to the reference value, the dataset type may be determined as the training use. Otherwise, it may be determined as the validation use.
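The entropy-based routing above can be sketched as follows. The threshold value and function names are assumptions; the patent does not fix a specific reference value:

```python
import math

def entropy(confidence_scores):
    """Shannon entropy of a per-class confidence distribution.

    Larger when the scores are spread evenly over the classes,
    i.e., when the model is more uncertain.
    """
    return -sum(p * math.log(p) for p in confidence_scores if p > 0)

def decide_dataset_type(confidence_scores, reference):
    """Route uncertain slides to the training use and confident
    slides to the validation use, per the embodiment above."""
    if entropy(confidence_scores) >= reference:
        return "training"
    return "validation"
```

Intuitively, slides the model is unsure about are the most informative training examples, while confidently predicted slides serve well for validation.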
(74) Referring back to
(75) In some embodiments, the panel type may be determined by the administrator.
(76) In some embodiments, the panel type may be determined based on an output value of the machine learning model. Referring to
(77) In some embodiments, the medical slide image may have a plurality of panel types. In this case, patches corresponding to each panel may be extracted from the medical slide image.
(78) Referring back to
(79) In some embodiments, the annotation task may be determined by the administrator.
(80) In some embodiments, the annotation task may be automatically determined based on a combination of the determined dataset type and panel type. For example, when the annotation task corresponding to the combination of the dataset type and the panel type is predefined, the corresponding annotation task may be automatically determined based on the combination.
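The predefined-combination lookup above could be sketched as a simple table. The table entries and task names are hypothetical examples, not combinations defined by the patent:

```python
# Hypothetical mapping from (dataset type, panel type) to an annotation task.
TASK_TABLE = {
    ("training", "cell"): "tag_mitotic_cells",
    ("training", "tissue"): "tag_lesion_location",
    ("validation", "cell"): "tag_cell_count",
}

def determine_task(dataset_type, panel_type):
    """Return the predefined annotation task for the combination,
    or None so that an administrator can decide manually."""
    return TASK_TABLE.get((dataset_type, panel_type))
```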
(81) In step S270, the management apparatus 100 automatically extracts a patch on which the annotation is actually to be performed, from the medical slide image. In some embodiments, an area designated by the administrator may be extracted as the patch. A specific method of automatically extracting the patch may vary depending on embodiments, and various embodiments related to the patch extraction will be described with reference to
(82) In some embodiments, although not shown in
(83) The method of generating the annotation job according to some embodiments of the present disclosure has been described above with reference to
(85) As shown in
(86) In some embodiments, in a case that cell regions constituting a specific tissue are sampled as candidate patches (that is, patches of a cell panel type), a tissue area 83 may be extracted from a medical slide image 81 through image analysis, and a plurality of candidate patches 85 may be sampled within the extracted area 83, as shown in
(87) In some embodiments, the candidate patches may be generated by uniformly dividing an entire area of a medical slide image and then sampling each of the divided areas. That is, the sampling may be performed in an equally dividing manner. In this case, the size of each candidate patch may be a predetermined fixed value, or a variable value determined based on a size, resolution, panel type, or the like of the medical slide image.
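The equal-division sampling above can be sketched as a grid tiling. Dropping partial edge tiles is an assumed simplification; an implementation could also pad or keep them:

```python
def grid_patches(width, height, patch_size):
    """Divide a slide of (width, height) into equal, non-overlapping
    square patches of side patch_size, dropping partial edge tiles.

    Returns a list of (x, y, w, h) patch rectangles.
    """
    return [(x, y, patch_size, patch_size)
            for y in range(0, height - patch_size + 1, patch_size)
            for x in range(0, width - patch_size + 1, patch_size)]
```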
(88) In some embodiments, the candidate patches may be generated by randomly dividing the entire area of the medical slide image and then sampling each of the divided areas.
(89) In some embodiments, the candidate patches may be configured such that the number of objects exceeds a reference value. For example, object recognition may be performed on the entire area of the medical slide image, and, as a result, an area having a larger number of objects, which are calculated as a result of the object recognition, than the reference value may be sampled as a candidate patch. In such a case, the sizes of the candidate patches may be different.
(90) In some embodiments, the candidate patches that are divided according to a policy determined based on metadata of the medical slide image may be sampled. Here, the metadata may be a disease name, tissue or demographic information of a patient associated with the medical slide image, a location of a medical institution, a quality (e.g. resolution) or format type of the medical slide image, or the like. For example, when the medical slide image is an image about a tissue of a tumor patient, candidate patches may be sampled at cell level to be used as training data of a machine learning model for mitotic cell detection. In another example, in a case that lesion location in the tissue is critical when the prognosis of disease associated with the medical slide image is diagnosed, candidate patches may be sampled at tissue level.
(91) In some embodiments, when sampling candidate patches of the structure panel type in the medical slide image, outlines are extracted from the medical slide image through image analysis, and sampling may be performed so that the outlines that are connected to each other among the extracted outlines form one candidate patch.
(92) As described above, a specific method of sampling a plurality of candidate patches in step S271 may vary depending on embodiments. The description is continued referring back to
(93) In step S273, an annotation target patch is selected based on an output value of the machine learning model. The output value may be, for example, a confidence score (or a confidence score of each class), and a specific method of selecting a patch based on the confidence score may vary depending on embodiments.
(94) In some embodiments, the annotation target patch may be selected based on an entropy value calculated from the per-class confidence scores. Details of the present embodiment are shown in
(95) As shown in
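The entropy-based selection of paragraph (94) can be sketched as follows; the helper names are hypothetical, and patches whose per-class confidence distributions have the highest entropy (i.e., the model is least certain) are chosen as annotation targets:

```python
import math

def entropy(confidences):
    """Shannon entropy of a per-class confidence distribution; higher
    entropy means the model is less certain about the patch."""
    return -sum(p * math.log(p) for p in confidences if p > 0)

def select_by_entropy(patch_scores, top_k):
    """Rank (patch_id, per-class scores) pairs by entropy, descending,
    and return the top_k most uncertain patches as annotation targets."""
    ranked = sorted(patch_scores, key=lambda s: entropy(s[1]), reverse=True)
    return [patch_id for patch_id, _ in ranked[:top_k]]

scores = [
    ("p0", [0.9, 0.05, 0.05]),   # confident prediction, low entropy
    ("p1", [0.34, 0.33, 0.33]),  # near-uniform scores, high entropy
    ("p2", [0.6, 0.3, 0.1]),
]
targets = select_by_entropy(scores, 1)
```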
(96) In some embodiments, the annotation target patch may be selected based on the confidence score itself. For example, from among the plurality of candidate patches, a candidate patch having a confidence score less than a reference value may be selected as the annotation target patch.
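The threshold criterion of paragraph (96) is a one-line filter; a minimal sketch, assuming a mapping from patch identifiers to confidence scores:

```python
def select_low_confidence(patch_scores, reference_value):
    """Select patches whose confidence score falls below the reference
    value as annotation target patches."""
    return [pid for pid, score in patch_scores.items() if score < reference_value]

scores = {"p0": 0.95, "p1": 0.42, "p2": 0.7}
targets = select_low_confidence(scores, 0.6)  # only "p1" is below 0.6
```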
(97)
(98) As shown in
(99) The misprediction probability of the machine learning model may be calculated based on a misprediction probability calculation model (hereinafter referred to as the calculation model) that is constructed through machine learning. For convenience of understanding, a method of constructing the calculation model is described with reference to
(100) As shown in
(101) Some examples of tagging label information to the data for evaluation are shown in
(102) After learning the images 101 and 102 and the label information thereof, the calculation model outputs a high confidence score when an image similar to the image correctly predicted by the machine learning model is input and, otherwise, outputs a low confidence score. Therefore, the calculation model can calculate a misprediction probability of the machine learning model for the input image.
(103) Meanwhile,
(104) Further, according to some embodiments of the present disclosure, a first value (e.g., 0) may be tagged when the prediction error of an image for evaluation is greater than or equal to a reference value, and otherwise a second value (e.g., 1) may be tagged.
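The binary tagging rule of paragraph (104) can be sketched directly; the function name is hypothetical, and the first and second values default to 0 and 1 as in the example given above:

```python
def tag_label(prediction_error, reference_value, first_value=0, second_value=1):
    """Tag an evaluation image with the first value (e.g., 0) when its
    prediction error is at or above the reference value, and with the
    second value (e.g., 1) otherwise."""
    return first_value if prediction_error >= reference_value else second_value

# Errors 0.8 and 0.5 meet the 0.5 threshold and are tagged 0; 0.1 is tagged 1.
labels = [tag_label(e, 0.5) for e in (0.8, 0.1, 0.5)]
```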
(105) Referring back to
(106) When the calculation model is constructed according to the above-described process, in step S275, the management apparatus 100 may calculate a misprediction probability for each of the plurality of candidate patches. For example, as shown in
(107) However, as shown in
(108) When the misprediction probability of each of the candidate patches is calculated, the management apparatus 100 may select, as the annotation target, a candidate patch having a calculated misprediction probability equal to or greater than a reference value from among the plurality of candidate patches. A high misprediction probability means that the prediction results of the machine learning model are likely to be incorrect, so the corresponding patches are important data for improving the performance of the machine learning model. Thus, when patches are selected based on the misprediction probability, high-quality training datasets may be generated because the patches effective for learning are selected as the annotation targets.
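Putting paragraphs (106) and (108) together, the selection step can be sketched as below; the calculation model is represented as any callable mapping a patch to a probability, and the toy dictionary stands in for a learned model:

```python
def select_by_misprediction(patches, calc_model, reference_value):
    """Run the misprediction-probability calculation model over the
    candidate patches and keep those whose probability is at or above
    the reference value as annotation targets."""
    return [p for p in patches if calc_model(p) >= reference_value]

# Toy stand-in for the learned calculation model: patch id -> probability
probs = {"p0": 0.2, "p1": 0.85, "p2": 0.6}
targets = select_by_misprediction(list(probs), probs.get, 0.6)
```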
(109) The automatic patch extraction method according to various embodiments of the present disclosure has been described above with reference to
(110) Hereinafter, an example computing device 200 that may implement an apparatus (e.g. management apparatus 100)/system according to various embodiments of the present disclosure will be described with reference to
(111)
(112) As shown in
(113) The processor 210 controls the overall operation of each component of the computing device 200. The processor 210 may be configured to include at least one of a central processing unit (CPU), a micro processor unit (MPU), a micro controller unit (MCU), a graphics processing unit (GPU), or any other form of processor well known in the technical field of the present disclosure. The processor 210 may perform calculation of at least one application or program for executing methods or operations according to embodiments of the present disclosure.
(114) The memory 230 stores various data, commands, and/or information. To execute methods or operations according to various embodiments of the present disclosure, the memory 230 may load one or more programs 291 from the storage 290. The memory 230 may be implemented as a volatile memory such as a random access memory (RAM), but the technical scope of the present disclosure is not limited thereto.
(115) The bus 250 provides a communication function between elements of the computing device 200. The bus 250 may be implemented as various forms of buses, such as an address bus, a data bus, a control bus, and the like.
(116) The communication interface 270 supports wired or wireless Internet communication of the computing device 200. Further, the communication interface 270 may support various communication methods as well as Internet communication. To this end, the communication interface 270 may include a communication module well known in the technical field of the present disclosure.
(117) The storage 290 may non-temporarily store the one or more programs 291. The storage 290 may include a non-volatile memory, such as a read only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory, a hard disk, a removable disk, or any form of computer-readable recording medium well known in the art to which the present disclosure pertains.
(118) The computer program 291 may include one or more instructions which, when executed by the processor 210, cause the processor 210 to perform methods or operations according to various embodiments of the present disclosure. In other words, the processor 210 may execute methods or operations according to various embodiments of the present disclosure by performing the one or more instructions.
(119) For example, the computer program 291 may include one or more instructions to perform operations of obtaining information about a new medical slide image, determining a dataset type and a panel of the medical slide image, and assigning, to an account of an annotator, an annotation job defined by the medical slide image, the determined dataset type, an annotation task, and a patch that is a partial area of the medical slide image. In this case, the management apparatus 100 according to some embodiments of the present disclosure may be implemented through the computing device 200.
(120) An example computing device that may implement apparatuses according to various embodiments of the present disclosure has been described above with reference to
(121) The concepts of the disclosure described above with reference to
(122) Although all the elements configuring the embodiments of the present disclosure have been described as being combined into one or operating in combination, the technical concept of the present disclosure is not necessarily limited to these embodiments. That is, within the scope of the present disclosure, all of the elements may be selectively combined with one or more of the others and operated.
(123) Although operations are shown in a specific order in the drawings, it should not be understood that the operations must be performed in that specific or sequential order, or that all of the illustrated operations must be performed, to obtain desired results. In certain situations, multitasking and parallel processing may be advantageous. Likewise, the separation of various configurations in the above-described embodiments should not be understood as necessarily required, and it should be understood that the described program components and systems may generally be integrated together into a single software product or be packaged into multiple software products.
(124) While the present disclosure has been particularly illustrated and described with reference to embodiments thereof, it will be understood by a person of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims. The embodiments should be considered in a descriptive sense only and not for purposes of limitation.