Learning device, method, and program for discriminator, and discriminator
11328522 · 2022-05-10
Assignee
Inventors
Cpc classification
G02B21/365
PHYSICS
C12M1/34
CHEMISTRY; METALLURGY
G06F18/214
PHYSICS
G01N23/041
PHYSICS
G06F18/217
PHYSICS
G01N21/17
PHYSICS
G01N33/4833
PHYSICS
G02B27/0068
PHYSICS
G06V20/69
PHYSICS
International classification
G01N23/041
PHYSICS
G06V20/69
PHYSICS
B01L3/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
In a learning device, method, and program for a discriminator, and a discriminator, the aim is to enable accurate learning of a discriminator that discriminates a state of an object to be observed, such as a cell. An image acquisition unit acquires a first image including an influence of a meniscus and a second image with the influence of the meniscus eliminated for the same object to be observed. Next, a training data generation unit generates training data for learning a discriminator based on the second image. Then, a learning unit learns the discriminator based on the first image and the training data.
Claims
1. A learning device for a discriminator, which discriminates a state of an object to be observed based on a captured image including an influence of a meniscus acquired by imaging a container, in which a liquid with a meniscus formed on a surface and the object to be observed are contained, the learning device configured to: acquire a first image including the influence of the meniscus and a second image with the influence of the meniscus eliminated for the same object to be observed; generate training data for learning the discriminator based on the second image; and learn the discriminator based on the first image and the training data, wherein in the training data, a first region where the object to be observed is in a differentiated state is labeled by a first label, a second region where the object to be observed is in the middle of differentiation is labeled by a second label, and a third region where there is no object to be observed is labeled by a third label, and the first label, the second label, and the third label are each different from each other.
2. The learning device according to claim 1, wherein the second image is acquired by imaging the object to be observed with the liquid eliminated.
3. The learning device according to claim 1, wherein the second image is acquired by imaging the object to be observed in the container filled with the liquid and sealed with a transparent plate.
4. The learning device according to claim 1, wherein the second image is acquired by imaging the object to be observed with an imaging device comprising an optical element configured to eliminate the influence of the meniscus.
5. The learning device according to claim 1, wherein the learning device is further configured to generate an image obtained by applying a label according to the state of the object to be observed to the second image or the second image as the training data.
6. The learning device according to claim 1, wherein the discriminator has a feature quantity of a pixel to be discriminated in the captured image as input, and outputs a discrimination result of the state of the object to be observed for the pixel to be discriminated.
7. The learning device according to claim 1, wherein the captured image is acquired by imaging the container with a phase contrast microscope.
8. The learning device according to claim 1, wherein the learning device is further configured to collate a discrimination result output from the discriminator for a pixel to be discriminated in the first image with a pixel in the training data corresponding to the pixel to be discriminated to learn the discriminator.
9. A discriminator learned by the learning device for a discriminator according to claim 1.
10. A learning method for a discriminator, which discriminates a state of an object to be observed based on a captured image including an influence of a meniscus acquired by imaging a container, in which a liquid with a meniscus formed on a surface and the object to be observed are contained, the learning method comprising: acquiring a first image including the influence of the meniscus and a second image with the influence of the meniscus eliminated for the same object to be observed; generating training data for learning the discriminator based on the second image; and learning the discriminator based on the first image and the training data, wherein in the training data, a first region where the object to be observed is in a differentiated state is labeled by a first label, a second region where the object to be observed is in the middle of differentiation is labeled by a second label, and a third region where there is no object to be observed is labeled by a third label, and the first label, the second label, and the third label are each different from each other.
11. A non-transitory computer readable recording medium storing a learning program for a discriminator that causes a computer to execute: a step of discriminating a state of an object to be observed based on a captured image including an influence of a meniscus acquired by imaging a container, in which a liquid with a meniscus formed on a surface and the object to be observed are contained; a step of acquiring a first image including the influence of the meniscus and a second image with the influence of the meniscus eliminated for the same object to be observed; a step of generating training data for learning the discriminator based on the second image; and a step of learning the discriminator based on the first image and the training data, wherein in the training data, a first region where the object to be observed is in a differentiated state is labeled by a first label, a second region where the object to be observed is in the middle of differentiation is labeled by a second label, and a third region where there is no object to be observed is labeled by a third label, and the first label, the second label, and the third label are each different from each other.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE PREFERRED EMBODIMENTS
(12) Hereinafter, an embodiment of the invention will be described.
(13) In the embodiment, the microscope device 1 is a phase contrast microscope, and captures, as a captured image, a phase contrast image of, for example, a cultivated cell as an object to be observed. Specifically, as shown in
(14) A cultivation container 60 in which an object S to be observed, such as a cell, and a culture solution C are contained is placed on the stage 61. A rectangular opening is formed at the center of the stage 61. The cultivation container 60 is installed on a member that forms the opening, and light passing through the object S to be observed in the cultivation container 60 and light diffracted by the object S to be observed pass through the opening.
(15) As the cultivation container 60, for example, while a well plate having a plurality of wells (corresponding to a container of the invention) is used, the invention is not limited thereto, and a schale, a dish, or the like may be used. As the object S to be observed that is contained in the cultivation container 60, multipotential stem cells, such as an iPS cell and an ES cell, cells of nerve, skin, myocardium, and liver differentiated and induced from a stem cell, cells of skin, retina, myocardium, blood corpuscles, nerves, and organs, and the like may be used.
(16) A bottom surface of the cultivation container 60 placed on the stage 61 is an installation surface P of the object S to be observed, and the object S to be observed is disposed on the installation surface P. The cultivation container 60 is filled with the culture solution C, and a meniscus having a concave shape is formed on a liquid surface of the culture solution C. In the embodiment, while a cell cultivated in the culture solution is used as the object S to be observed, the object S to be observed is not limited to the cell in the culture solution, and a cell fixed in a liquid, such as water, formalin, ethanol, or methanol, may be used as the object S to be observed. Even in this case, a meniscus is formed on the liquid surface of the liquid in the container.
(17) The illumination light irradiation unit 10 irradiates the object S to be observed contained in the cultivation container 60 on the stage 61 with illumination light for so-called phase contrast measurement. In the embodiment, the illumination light irradiation unit 10 irradiates ring-shaped illumination light as illumination light for phase contrast measurement.
(18) Specifically, the illumination light irradiation unit 10 of the embodiment comprises a white light source 11 that emits white light for phase contrast measurement, a slit plate 12 that has a ring-shaped slit, on which white light emitted from the white light source 11 is incident, and that emits ring-shaped illumination light, and a condenser lens 13 on which ring-shaped illumination light emitted from the slit plate 12 is incident, and that irradiates the object S to be observed with the received ring-shaped illumination light.
(19) The slit plate 12 comprises a light shielding plate that shields white light emitted from the white light source 11 and that is provided with the ring-shaped slit transmitting white light; as white light passes through the slit, ring-shaped illumination light is formed. The condenser lens 13 converges ring-shaped illumination light emitted from the slit plate 12 toward the object S to be observed.
(20) In the cultivation container 60 placed on the stage 61, a cultivated cell group (cell colony) is disposed as the object S to be observed. As the cultivated cells, multipotential stem cells, such as an iPS cell and an ES cell, cells of nerve, skin, myocardium, and liver differentiated and induced from a stem cell, cells of skin, retina, myocardium, blood corpuscles, nerves, and organs, and the like may be used. As the cultivation container 60, a schale, a well plate in which a plurality of wells are arranged, or the like can be used. In a case where the well plate is used, each well corresponds to a container of the invention. In the embodiment, the well plate in which a plurality of wells are arranged is used as the cultivation container 60.
(21) The imaging optical system 30 forms an image of the object S to be observed in the cultivation container 60 on the imaging unit 40. The imaging optical system 30 comprises an objective lens 31, a phase plate 32, and an imaging lens 33.
(22) In the phase plate 32, a phase ring is formed in a transparent plate that is transparent with respect to a wavelength of ring-shaped illumination light. The size of the slit of the slit plate 12 described above has a conjugate relationship with the phase ring.
(23) In the phase ring, a phase membrane that shifts the phase of incident light by ¼ wavelength, and a dimmer filter that dims incident light are formed in a ring shape. Direct light incident on the phase plate 32 passes through the phase ring, and thus, the phase thereof is shifted by ¼ wavelength and the brightness thereof is weakened. On the other hand, most of the diffracted light diffracted by the object S to be observed passes through the transparent plate portion of the phase plate 32, and thus, the phase and brightness thereof are not changed.
(24) The imaging lens 33 is a member on which direct light and diffracted light passing through the phase plate 32 are incident, and that forms images of direct light and diffracted light on the imaging unit 40.
(25) The imaging unit 40 comprises an imaging element that receives an image of the object S to be observed formed by the imaging lens 33, images the object S to be observed, and outputs a phase contrast image as an observation image. As the imaging element, a charge-coupled device (CCD) image sensor, a complementary metal-oxide semiconductor (CMOS) image sensor, or the like can be used.
(26) Here, the stage 61 is driven by a stage drive unit (not shown) and moves in an X direction and a Y direction perpendicular to each other within a horizontal plane. With the movement of the stage 61, each observation region in each well of the well plate is scanned, and a captured image of each observation region is acquired by the imaging unit 40. The captured image of each observation region is output to the microscope control device 2.
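The region-by-region scanning described above can be sketched, for example, as follows. The serpentine ordering, grid dimensions, and step sizes are illustrative assumptions and not values specified in the embodiment; the stage drive unit itself is not modeled.

```python
# Illustrative sketch of scanning observation regions in one well by
# moving the stage in the X and Y directions. The serpentine (row by
# row, alternating direction) order and the step sizes are assumptions.

def scan_positions(n_cols, n_rows, step_x, step_y):
    """Yield (x, y) stage positions covering the observation regions,
    row by row, reversing direction on alternate rows."""
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else reversed(range(n_cols))
        for col in cols:
            yield (col * step_x, row * step_y)
```

Each yielded position would correspond to one observation region whose captured image is acquired by the imaging unit 40 and output to the microscope control device 2.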
(28) In the embodiment, although the captured image of each observation region in the well is acquired with the movement of the stage 61, the invention is not limited thereto, and the imaging optical system 30 may be moved with respect to the stage 61 to acquire the captured image of each observation region. Alternatively, both of the stage 61 and the imaging optical system 30 may be moved.
(29) The microscope control device 2 is constituted of a computer comprising a central processing unit (CPU), a semiconductor memory, a hard disk, and the like. Then, a program that includes a learning program for a discriminator of the invention and controls the system is installed in the hard disk. As the program is executed by the CPU, the CPU functions as the respective units of the microscope control device 2. The microscope control device 2 controls the entire image evaluation system. As shown in
(30) The controller 21 controls the drive of the illumination light irradiation unit 10, the stage drive unit (not shown) that drives the stage 61, the imaging optical system 30, and the imaging unit 40 to acquire the captured image of the object S to be observed.
(31) The image evaluation device 22 evaluates a state of the object S to be observed included in the captured image.
(32) In the embodiment, the image evaluation device 22 acquires a captured image of each observation region and evaluates a state of an object S to be observed included in the captured image. In the embodiment, the object S to be observed is a cell. For this reason, evaluating the state of the object S to be observed refers to, for example, evaluating whether the cell included in the captured image is an undifferentiated cell or a differentiated cell, evaluating whether a differentiated cell is in a differentiated state or in the middle of differentiation, evaluating the ratio of an undifferentiated cell and a differentiated cell included in the captured image, evaluating the degree of growth of the cell or a cell colony, or evaluating a reduction rate of a cancer cell by a carcinostatic agent. It should be noted that the evaluation of the state of the cell is not limited thereto, and other evaluations may be applied. In the embodiment, it is assumed that, in a case where the cell is a differentiated cell, the image evaluation device 22 evaluates whether the cell is in the differentiated state or in the middle of differentiation.
(33) The image acquisition unit 50 acquires the captured image of the object S to be observed captured by the imaging unit 40. In the embodiment, since the cultivation container 60 is the well plate in which a plurality of wells are arranged, the captured image of each observation region in each well is acquired.
(34) The discriminator 51 outputs a discrimination result for the captured image. In the embodiment, the image evaluation device 22 evaluates whether the cell is in the differentiated state or in the middle of differentiation. For this reason, the discriminator 51 outputs a discrimination result regarding whether the cell included in the captured image is in the differentiated state or in the middle of differentiation. In order to perform such discrimination, the discriminator 51 has a feature quantity of a pixel to be discriminated in the captured image as input, and is machine-learned so as to output a discrimination result of a state of the pixel to be discriminated. In the embodiment, the discriminator 51 uses, as the input feature quantity, pixel values in a region determined in advance centering on the pixel to be discriminated in the captured image, and outputs three discrimination results: a cell in the differentiated state, a cell in the middle of differentiation, and a cell neither in the differentiated state nor in the middle of differentiation.
(35) To this end, the discriminator 51 outputs scores representing the cell in the differentiated state and the cell in the middle of differentiation for the input feature quantity and compares the two output scores with corresponding threshold values determined in advance. Then, in a case where the score representing the cell in the differentiated state exceeds the threshold value representing the cell in the differentiated state, and the score representing the cell in the middle of differentiation does not exceed the threshold value representing the cell in the middle of differentiation, a discrimination result that the pixel to be discriminated is the cell in the differentiated state is output. On the other hand, in a case where the score representing the cell in the middle of differentiation exceeds the threshold value representing the cell in the middle of differentiation, and the score representing the cell in the differentiated state does not exceed the threshold value representing the cell in the differentiated state, a discrimination result that the pixel to be discriminated is the cell in the middle of differentiation is output. In a case where neither of the two scores exceeds its corresponding threshold value, and in a case where both of the two scores exceed their corresponding threshold values, a discrimination result that the pixel to be discriminated is neither in the differentiated state nor in the middle of differentiation is output.
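The two-score decision rule described above can be sketched as follows. The function name, score range, and default threshold values are illustrative assumptions; the embodiment only requires that each score be compared with its own predetermined threshold value.

```python
# Illustrative sketch of the two-score decision rule. Thresholds of
# 0.5 are assumptions, not values from the embodiment.

def discriminate(score_differentiated, score_mid,
                 thr_differentiated=0.5, thr_mid=0.5):
    """Map the discriminator's two scores to one of three results."""
    over_diff = score_differentiated > thr_differentiated
    over_mid = score_mid > thr_mid
    if over_diff and not over_mid:
        return "differentiated"
    if over_mid and not over_diff:
        return "mid-differentiation"
    # Neither score exceeds its threshold, or both scores do.
    return "neither"
```

For example, scores of (0.9, 0.1) would yield the differentiated state, while (0.9, 0.9) would yield the "neither" result because both thresholds are exceeded.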
(36) Here, as a method of machine learning, a known method can be used. For example, a support vector machine (SVM), a deep neural network (DNN), a convolutional neural network (CNN), or the like can be used.
(37) Here, the object S to be observed and the culture solution C are contained in the cultivation container 60, and the meniscus is formed on the liquid surface of the culture solution C.
(38) In the embodiment, in a case where the learning device 52 learns the discriminator 51, a first image including the influence of the meniscus and a second image with the influence of the meniscus eliminated for the same object S to be observed are used.
(39) For this reason, the image acquisition unit 50 acquires the first image including the influence of the meniscus and the second image with the influence of the meniscus eliminated. The first image and the second image are images for the same object S to be observed. Here, the first image may be acquired by imaging the cultivation container 60 as it is.
(40) On the other hand, the meniscus M is formed on the liquid surface of the culture solution C in the cultivation container 60.
(41) As shown in
(42) The illumination light irradiation unit 10 may be provided with an optical element that eliminates the influence of the meniscus.
(43) Specifically, the optical path correction lens 14 has a convex surface 14a on the object S to be observed side, and is a positive meniscus lens, the refractive power of which increases with distance from the optical axis. At least one of the convex surface 14a on the object S to be observed side or a concave surface 14b on the white light source 11 side of the optical path correction lens 14 may be formed of an aspheric surface. In this way, as the optical path correction lens 14 is provided, as shown in
(44) The training data generation unit 53 generates training data for learning the discriminator 51 based on the second image. To this end, the training data generation unit 53 displays the second image on the display device 4. Then, training data is generated by applying a label according to the state of the object S to be observed at each pixel position of the second image through an input from an operator with the input device 3.
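The per-pixel labeling described above, with the three labels required by claim 1, can be sketched as follows. The concrete label values 1, 2, and 3 and the slice-based annotation format are assumptions; the claims only require that the three labels differ from one another.

```python
import numpy as np

# Illustrative sketch: training data T0 as a per-pixel label map over
# the second image. Label values are assumptions.
LABEL_DIFFERENTIATED = 1  # first region: differentiated state
LABEL_MID = 2             # second region: middle of differentiation
LABEL_NO_OBJECT = 3       # third region: no object to be observed

def make_training_data(shape, annotations):
    """Build a label map of the given shape from operator annotations,
    where annotations is a list of ((row_slice, col_slice), label).
    Unannotated pixels default to the no-object label."""
    t0 = np.full(shape, LABEL_NO_OBJECT, dtype=np.uint8)
    for region, label in annotations:
        t0[region] = label
    return t0
```

In the embodiment the annotations would come from the operator's input with the input device 3 while the second image is displayed on the display device 4.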
(45) The learning unit 54 learns the discriminator 51 based on the first image and training data T0. Here, the first image includes the influence of the meniscus M, and the state of the cell as the object S to be observed is not clearly represented in the meniscus region. However, in a case where training data T0 and the first image are associated with each other, it is possible to discriminate the state of the object S to be observed at the individual pixel position of the first image.
(46) To this end, the learning unit 54 inputs the feature quantity of the pixel to be discriminated in the first image to the discriminator 51 and collates the discrimination result output from the discriminator 51 with the pixel in training data T0 corresponding to the pixel to be discriminated. In a case where the discrimination result is a correct answer, the learning unit 54 performs learning of the discriminator 51 so as to reinforce the correct discrimination result. In a case where the discrimination result is an incorrect answer, the learning unit 54 performs learning of the discriminator 51 so as to correct the discrimination result. In addition, the learning unit 54 acquires first images and second images for a plurality of objects S to be observed, and generates training data T0 to repeatedly perform learning of the discriminator 51. Then, the learning unit 54 determines whether or not the correct answer rate of the discriminator 51 exceeds a rate determined in advance, and in a case where the determination is affirmative, ends learning of the discriminator 51. The first images and the second images for a plurality of objects to be observed may be acquired in advance and stored on the hard disk (not shown) of the microscope control device 2. In this case, the image acquisition unit 50 acquires the first images and the second images from the hard disk.
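The collation step and the correct-answer-rate stopping condition of the learning unit 54 can be sketched as follows. All names, the patch-per-pixel representation, and the target rate are illustrative assumptions; the actual update rule depends on the machine learning method chosen (SVM, DNN, CNN, or the like).

```python
# Minimal sketch of collating discriminator outputs against training
# data T0, assuming one feature-quantity patch per pixel of the first
# image and one T0 label per patch. Names are assumptions.

def correct_answer_rate(discriminator, patches, t0_labels):
    """Collate the discrimination result for each patch with the
    corresponding T0 label and return the fraction of correct answers."""
    correct = sum(1 for patch, label in zip(patches, t0_labels)
                  if discriminator(patch) == label)
    return correct / len(t0_labels)

def learn(discriminator, update, patches, t0_labels,
          target_rate=0.95, max_rounds=100):
    """Repeat learning until the correct answer rate exceeds the rate
    determined in advance (or a round limit, added here as a safeguard)."""
    for _ in range(max_rounds):
        if correct_answer_rate(discriminator, patches, t0_labels) > target_rate:
            break  # determination is affirmative: end learning
        update(patches, t0_labels)  # correct the incorrect answers
    return discriminator
```

Here `update` stands in for whatever parameter adjustment the chosen machine learning method performs; it is not specified by the embodiment.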
(47) The discriminator 51 learned in this way outputs the discrimination result of the state of the pixel to be discriminated in a case where the feature quantity of the pixel to be discriminated in the captured image is input.
(48) Returning to
(49) The display device 4 is constituted of a display, such as a liquid crystal display, and displays the captured image captured by the imaging unit 40, an evaluation result of the captured image, and the like. The display device 4 may be constituted of a touch panel, and thus, may also be used as the input device 3.
(50) Next, processing that is executed in the embodiment will be described.
(51) In this way, in the embodiment, the first image including the influence of the meniscus and the second image with the influence of the meniscus eliminated for the same object to be observed are acquired, training data T0 is generated based on the second image, and the discriminator 51 is learned based on the first image and training data T0. For this reason, it is possible to determine whether or not the output of the discriminator 51 is a correct answer with excellent accuracy in a case where the feature quantity of the individual pixel of the first image including the influence of the meniscus is input, and to perform learning of the discriminator 51 with excellent accuracy. As the discriminator 51 learned in this way is used, even in the captured image including the influence of the meniscus, it is possible to discriminate the state of the object S to be observed included in the captured image with excellent accuracy.
(52) In the above-described embodiment, although the captured image formed by the imaging optical system 30 is captured by the imaging unit 40, the imaging unit 40 may not be provided, and an observation optical system or the like may be provided such that the user can directly observe the captured image of the object S to be observed formed by the imaging optical system 30.
(53) In the above-described embodiment, although the invention is applied to the phase contrast microscope, the invention is not limited to the phase contrast microscope, and can be applied to other microscopes, such as a differential interference microscope and a bright-field microscope.
(54) In the above-described embodiment, although the image obtained by applying the label according to the state of the object S to be observed to the second image is used as training data, the second image itself may be used as training data.
(55) According to the above-described embodiment, since the influence of the meniscus M is eliminated in the second image, the state of the object S to be observed in the container is clearly represented in the second image. Furthermore, since training data T0 is generated based on the second image, the state of the object S to be observed is clearly represented even in training data T0. On the other hand, since the first image includes the influence of the meniscus M, the state of the object S to be observed is not clearly represented; however, in a case where the first image and training data T0 are associated with each other, it is possible to clearly discriminate the state of the object S to be observed at the individual pixel position of the first image. Accordingly, as the discriminator 51 is learned based on the first image and training data T0, it is possible to determine whether or not the discrimination result of the discriminator 51 is a correct answer with excellent accuracy in a case where the feature quantity of the pixel position to be discriminated of the first image is used as input. Therefore, it is possible to perform learning of the discriminator 51 with excellent accuracy. Furthermore, as the discriminator 51 learned in this way is used, it is possible to discriminate the state of the object S to be observed included in the captured image with excellent accuracy.
EXPLANATION OF REFERENCES
(56) 1: microscope device 2: microscope control device 3: input device 4: display device 10: illumination light irradiation unit 11: white light source 12: slit plate 13: condenser lens 14: optical path correction lens 14a: convex surface 14b: concave surface 21: controller 22: image evaluation device 30: imaging optical system 31: objective lens 32: phase plate 33: imaging lens 40: imaging unit 50: image acquisition unit 51: discriminator 52: learning device 53: training data generation unit 54: learning unit 60: cultivation container 61: stage 62: region 63: transparent plate 70: well plate 71: well 75: scanning start point 76: scanning end point 77: solid line indicating scanning locus A0, A1, A2, A3, A4: arrow C: culture solution M: meniscus P: installation surface R1: meniscus region R2: non-meniscus region S: object to be observed T0: training data