Analysis and Characterization of Epithelial Tissue Structure
20230084952 · 2023-03-16
Inventors
- Xavier Descombes (Sophia Antipolis Cedex, FR)
- Imane Lboukili (Val de Reuil, FR)
- Thierry Oddos (Clamart, FR)
- Georgios N. Stamatas (Val de Reuil, FR)
CPC classification
A61B5/0059
HUMAN NECESSITIES
A61B5/055
HUMAN NECESSITIES
A61B5/441
HUMAN NECESSITIES
G16H50/20
PHYSICS
G06T7/187
PHYSICS
G16H20/10
PHYSICS
International classification
Abstract
Methods for non-invasive or minimally invasive assessment of epithelial tissue structure are disclosed. Digital imaging and processing are used to identify cell locations. More specifically, disclosed are an automated algorithm that may be used to identify epithelial tissue structure, and/or to specify the coordinates/locations of cells in the epithelial tissue structure, through non-invasive or minimally invasive imaging, and the use of this information to extract values of epithelial-structure-related parameters.
Claims
1. A non-invasive or minimally invasive method for characterizing epithelial tissue structure, comprising: (a) comparing, by one or more computing device, an unknown image segment to a known epithelial tissue structure model image segment to determine the degree of match between the unknown segment and a model of an epithelial tissue structure; (b) comparing, by the one or more computing device, the determined degree of match to a predetermined value; and (c) characterizing, by the one or more computing device, the particular segment of the sample as an epithelial tissue structure based on the determined degree of match; wherein the degree of match is identified by determining a difference value to a value derived from the model of the epithelial tissue structure; wherein characterizing the particular segment of the sample as an epithelial tissue structure comprises characterizing the particular segment of the sample as an epithelial tissue structure if the determined difference value is equal to or below a predetermined threshold.
2. The method of claim 1, wherein an algorithm is employed to identify cell coordinates information and wherein the cell coordinates information is used to extract values of epithelial tissue structure parameters.
3. The method of claim 2, wherein the epithelial cell coordinates information are employed to extract values of cell geometry and cell topology parameters.
4. The method of claim 3, wherein the parameters are selected from cell area, perimeter, cell density, distribution of nearest neighbors, and distributions of distances between neighbors.
5. The method of claim 4, wherein the parameters are used to compare images for screening for appropriate treatment selected from use of ingredients and use of formulations.
6. The method of claim 3, wherein image segment and/or epithelial tissue structure model is a movie taken during maturation.
7. The method of claim 1, wherein the unknown image segment is acquired by non-invasive or minimally invasive imaging with cellular resolution.
8. The method of claim 7, wherein the non-invasive or minimally invasive imaging is selected from the group consisting of reflectance confocal microscopy, fluorescence confocal microscopy, fluorescence lifetime microscopy, multiphoton fluorescence microscopy, second harmonic generation microscopy, chemiluminescence imaging, photoacoustic microscopy, magnetic resonance imaging, optical coherence tomography, line-field optical coherence tomography and photo-thermal microscopy.
9. The method of claim 1, further comprising: characterizing the particular segment of the sample as not an epithelial tissue structure if the determined difference is above the predetermined threshold.
10. The method of claim 1, further comprising characterizing a training set of epithelial tissue structure to be used for screening for appropriate treatment selected from use of ingredients and use of formulations.
11. The method of claim 1, further comprising repeating the operations of the method to analyze a further segment of the image.
12. A method of screening a skin treatment regimen, ingredient and/or composition for benefit to skin, comprising: a) comparing, by one or more computing device, an unknown image segment to a known epithelial tissue structure model image segment to determine the degree of match between the unknown segment and a model of an epithelial tissue structure prior to application of the skin treatment regimen, ingredient and/or composition; b) applying the skin treatment regimen, ingredient and/or composition to the area of skin for a period of time; c) comparing, by one or more computing device, an unknown image segment to a known epithelial tissue structure model image segment to determine the degree of match between the unknown segment and a model of an epithelial tissue structure after the skin treatment regimen, ingredient and/or composition application; wherein the skin treatment regimen, ingredient and/or composition is of benefit to skin if within a confidence level of greater than 90% the level of the degree of match between the unknown segment and a model of an epithelial tissue structure changes vs. the no treatment control.
13. The method of claim 12, wherein the test skin treatment regimen, ingredient and/or composition is left in contact with skin for an application time between about 1 minute and about 4 weeks.
14. The method of claim 12, wherein the application time is at least about 7 days.
15. The method of claim 12, wherein the application time is at least about 21 days.
16. The method of claim 12, wherein the application time is between about 5 minutes and about 24 hours.
17. The method of claim 12, wherein the skin treatment regimen, ingredient and/or composition enhances one or more attributes selected from the group consisting of reduction of visual dryness, reduction of trans-epidermal water loss, and increase in skin hydration.
18. A method of providing skin care treatment, comprising: receiving user-specific information, wherein said user-specific information includes an unknown image segment of interest; comparing, by one or more computing device, said unknown image segment to a known epidermal tissue structure model image segment to determine the degree of match between the unknown segment and a model of an epidermal tissue structure; accessing a data structure containing information reflecting relationships between categories of user-specific information and skin care treatment information, the information reflecting relationships derived from known epidermal tissue structure model image segment for skin care treatment obtained using artificial intelligence; comparing, using an artificial intelligence engine, the received user-specific information with the accessed data; identifying, using the artificial intelligence engine, skin care treatment recommendation determined by the artificial intelligence engine to be related to the user-specific information; and providing the identified skin care treatment to the user.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings and the appended claims. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.
DETAILED DESCRIPTION OF DISCLOSED EMBODIMENTS
Definitions
[0038] “Architecture” or “cell architecture” as used herein means the structure and form of a living cell, specifically the structure of a tissue considered in terms of the form and spatial relations of its constituent cells.
[0039] “Arrangement” or “cell arrangement” as used herein means the manner in which particular cell types cling together often giving them distinct arrangements.
[0040] “Artificial intelligence” as used herein broadly describes any computationally intelligent systems that combine knowledge, techniques, and methodologies. An AI engine may be any system configured to apply knowledge and that can adapt itself and learn to do better in changing environments. Thus, the AI engine may employ any one or combination of the following computational techniques: neural network, constraint program, fuzzy logic, classification, conventional artificial intelligence, symbolic manipulation, fuzzy set theory, evolutionary computation, cybernetics, data mining, approximate reasoning, derivative-free optimization, decision trees, and/or soft computing. Employing any computationally intelligent techniques, the AI engine may learn to adapt to unknown and/or changing environment for better performance.
[0041] The term “capturing (an image)”, or any form thereof, refers to the use of an image capture device to acquire an image. “Capturing” may refer to the direct act of using the image capture device to acquire the image. It may also include indirect acts to promote acquisition. To this end, “capturing” may include the indirect acts of providing access to hardware, or to at least one of a client-based algorithm and a server-based algorithm for causing the image capture device to capture an image. This may be accomplished by providing a user with software to aid in the image capture process or providing the user with access to a network location at which the software resides. Also consistent with certain embodiments of the invention, capturing may include at least one of receiving an instruction from the subject to capture an image, indicating to the subject before the image is captured, and indicating to the subject when the image is captured.
[0042] “Cell organization” as used herein means the components that make up the cell and how they are arranged inside it.
[0043] “Epidermal tissue structure” as used herein includes keratinocytes at each distinct stage of differentiation; melanocytes; Langerhans cells; resident T lymphocytes and Merkel cells. “Epidermal tissue structure” includes cell organization, topology, geometry, arrangement, and architecture as these terms are defined herein. “Epidermal tissue structure” is a subset of “epithelial tissue structure.”
[0044] “Epidermis” as used herein means the outer layer of skin, and is divided into five strata, which include the stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum basale. The stratum corneum contains many layers of dead, anucleated keratinocytes that are essentially filled with keratin. The outermost layers of the stratum corneum are constantly shed, even in healthy skin. The stratum lucidum contains two to three layers of anucleated cells. The stratum granulosum contains two to four layers of cells that are held together by desmosomes that contain keratohyaline granules. The stratum spinosum contains eight to ten layers of modestly active dividing cells that are also held together by desmosomes. The stratum basale contains a single layer of columnar cells that actively divide by mitosis and provide the cells that are destined to migrate through the upper epidermal layers to the stratum corneum. The predominant cell type of the epidermis is the keratinocyte. These cells are formed in the basal layer and exist through the epidermal strata to the granular layer, at which they transform into the cells known as corneocytes or squames that form the stratum corneum. During this transformation process, the nucleus is digested, the cytoplasm disappears, the lipids are released into the intercellular space, keratin intermediate filaments aggregate to form microfibrils, and the cell membrane is replaced by a cell envelope made of cross-linked protein with lipids covalently attached to its surface. Keratins are the major structural proteins of the stratum corneum. Corneocytes regularly slough off (a process known as desquamation) to complete an overall process that takes about a month in healthy human skin. In stratum corneum that is desquamating at its normal rate, corneocytes persist in the stratum corneum for approximately 2 weeks before being shed into the environment.
[0045] “Epithelial tissue structure” as used herein includes any of the cells making up an epithelium at each distinct stage of differentiation. “Epithelial tissue structure” includes cell organization, topology, geometry, arrangement and architecture as these terms are defined herein.
[0046] “Epithelium” as used herein means a thin, continuous, protective layer of cells that line the outer surfaces of organs and blood vessels throughout the body, as well as the inner surfaces of cavities in many internal organs. It is classified based on the number of layers that make up the tissue: simple, stratified, and pseudostratified. It may also be classified histologically, according to the cell shape: squamous, columnar, and cuboidal. It is primarily involved in providing protection of the underlying structures, secretory functions, transcellular transport, and selective absorption. Examples of epithelium include the epidermis; the lining of the digestive tract; and the lining of the reproductive tract.
[0047] “Geometry” or “cell geometry” as used herein means the characteristic positioning of organelles within the cell body in order for a cell to be able to carry out its specified function.
[0048] The term “image capture device”, similar terms, and terms representing structures with similar functions may include one or more of a digital camera, webcam, film camera, analog camera, digital video camera, scanner, facsimile machine, copy machine, infrared imager, (laser scanning) confocal reflectance microscope, confocal fluorescence microscope, optical coherence tomography (OCT), multiphoton fluorescence (MPF) microscope, second harmonic generation (SHG) microscope, fluorescence lifetime imaging microscope (FLIM), photo acoustic microscope or any other mechanism for acquiring an image of a subject's skin. An ultrasonic device might provide skin thickness information, or it might create a map on an area of the external location. Thus, the term “image” as used herein may be broader than a picture. Combinations of image capture devices may be used. For example, an image captured on photographic paper using a film camera might then be scanned on a flat bed scanner to create another image. A confocal endoscope that is compatible with the biopsy channel of a conventional endoscope can be used to capture images of epithelial cells in a method according to the invention. See, e.g., Jiafu Wang, Min Yang, Li Yang, Yun Zhang, Jing Yuan, Qian Liu, Xiaohua Hou, Ling Fu, A Confocal Endoscope for Cellular Imaging, Engineering, Volume 1, Issue 3, 2015, Pages 351-360, ISSN 2095-8099, https://doi.org/10.15302/J-ENG-2015081. (https://www.sciencedirect.com/science/article/pii/S2095809916300121).
[0049] The term “image processing technique”, or similar terms, may include a software program, computer, application-specific integrated circuit, electronic device and/or a processor designed to identify in an image one or more characteristics, such as a skin condition. Techniques such as binarization, image partitioning, Fourier transforms, fast Fourier transforms (FFTs), and/or discrete cosine transforms may be performed on all or part of the image, resulting in coefficients. Based on the coefficients, conditions may be located, as known in the art. Artificial intelligence, such as fuzzy logic, neural networks, genetic programming and decision tree programming, may also be used to identify conditions. Alternatively, one or more digital filters may be passed over the image to locate specific conditions. These examples are provided for illustrative purposes with the understanding that any image processing technique may be used.
[0050] “Morphological Snakes” are a family of methods for image segmentation. Their behavior is similar to that of active contours (for example, Geodesic Active Contours or Active Contours without Edges). However, Morphological Snakes use morphological operators (such as dilation or erosion) over a binary array instead of solving partial differential equations over a floating-point array, which is the standard approach for active contours. This makes Morphological Snakes faster and numerically more stable than their traditional counterpart. (P. Márquez-Neila, L. Baumela and L. Alvarez, “A Morphological Approach to Curvature-Based Evolution of Curves and Surfaces,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, pp. 2-17, Jan. 2014, doi: 10.1109/TPAMI.2013.106).
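The morphological variant described above has a reference implementation in scikit-image. The following sketch, which assumes a synthetic binary disk as a stand-in "tissue" image, evolves a Morphological Geodesic Active Contour from a small seed region; the iteration count and balloon force are illustrative parameter choices, not values from the disclosure:

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Synthetic image: a bright circular "tissue" region on a dark background.
h = w = 128
yy, xx = np.mgrid[:h, :w]
image = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)

# Preprocess into the edge-stopping map the snake evolves on.
gimage = inverse_gaussian_gradient(image)

# Initial level set: a small square inside the object, inflated outward
# by a positive balloon force until it locks onto the border.
init = np.zeros((h, w), dtype=np.int8)
init[54:74, 54:74] = 1
snake = morphological_geodesic_active_contour(
    gimage, 200, init_level_set=init, smoothing=1, balloon=1)

print(snake.sum())  # number of pixels inside the recovered contour
```

Because the evolution uses binary morphological operators rather than PDE solvers, no time-step tuning is required, which is the stability advantage noted above.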
[0051] “Topology” or “cell topology” as used herein means the connectivity among cells in a tissue.
[0052] “Marker-controlled watershed segmentation” is a region-based technique that utilizes image morphology. (Jadwiga Rogowska, in Handbook of Medical Imaging, 2000). It requires selection of at least one marker (“seed” point) interior to each object of the image, including the background as a separate object. The markers are chosen by an operator or are provided by an automatic procedure that takes into account application-specific knowledge of the objects. Once the objects are marked, they can be grown using a morphological watershed transformation.
[0053] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
[0054] Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding embodiments; however, the order of description should not be construed to imply that these operations are order dependent.
[0055] The description may use perspective-based descriptions such as up/down, back/front, and top/bottom. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of disclosed embodiments.
[0056] The description may use the terms “embodiment” or “embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments, are synonymous, and are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).
[0057] With respect to the use of any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0058] In various embodiments, methods, apparatuses, and systems for non-invasive or minimally invasive assessment of epithelial tissue structure are provided. In exemplary embodiments, a computing device may be endowed with one or more components of the disclosed apparatuses and/or systems and may be employed to perform one or more methods as disclosed herein.
[0059] Visualization of cells in skin, such as for the detection of cancer, normally requires an invasive biopsy and review by a trained pathologist, who assesses the size, shape, and distribution of epidermal tissue and identifies patterns typical of healthy skin or of skin cancer. However, embodiments herein provide methods for non-invasive assessment of epidermal tissue structure. Digital images and processing of gray-level intensities are used to identify cells. As an example, in a confocal reflectance image, keratinocytes appear to have a bright border, which consists of high gray-level intensities, while the cytoplasm and intercellular spaces appear as a dark center, which consists of low gray-level intensities. More specifically, embodiments provide an automated algorithm that may be used to identify epidermal tissue structure, and/or to specify the coordinates/locations of cells in the epidermal tissue structure, through non-invasive confocal imaging or other types of imaging, such as OCT, MPF, SHG, FLIM, etc. as mentioned above. Methods herein may be repeated to analyze other areas of tissue.
[0060] In accordance with an embodiment, a method for characterizing epithelial tissue structure includes comparing, by one or more computing devices, an unknown image segment to a known epithelial tissue structure model image segment to determine the degree of match between the unknown segment and the known segment; comparing, by the one or more computing devices, the determined degree of match to a predetermined value; and characterizing, by the one or more computing devices, the particular segment of the sample as an epithelial tissue structure if the determined degree of match is sufficient, such as equal to or above the predetermined value. The degree of match may be determined by any suitable method/calculation to determine the degree of similarity. The degree of match may include symmetry measurements and mathematical operations such as subtraction, division and multiplication of the known segment and the unknown image segment.
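The disclosure does not fix a particular similarity measure. One common choice consistent with "any suitable method/calculation" is normalized cross-correlation, where 1.0 indicates a perfect match; the sketch below uses it, with a threshold of 0.8 that is purely illustrative, not a value taken from the disclosure:

```python
import numpy as np

def degree_of_match(unknown, model):
    """Normalized cross-correlation between an unknown image segment and a
    model segment of the same shape; 1.0 means a perfect match."""
    u = unknown - unknown.mean()
    m = model - model.mean()
    denom = np.sqrt((u ** 2).sum() * (m ** 2).sum())
    return float((u * m).sum() / denom) if denom else 0.0

def characterize(unknown, model, threshold=0.8):
    # Classify the segment as epithelial tissue structure when the degree
    # of match is at or above the predetermined value.
    return degree_of_match(unknown, model) >= threshold

rng = np.random.default_rng(0)
model = rng.random((32, 32))
noisy = model + 0.05 * rng.standard_normal((32, 32))
print(characterize(noisy, model))                 # near-duplicate segment
print(characterize(rng.random((32, 32)), model))  # unrelated segment
```

The same routine inverts naturally for the difference-value formulation of claim 1: a distance such as `1 - degree_of_match` can be compared against an upper threshold instead of a lower one.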
[0061] In an embodiment, the predetermined value may be a range or a single value.
[0062] For example, a method is provided in which the difference between a particular segment of an image and a known epithelial tissue structure model image segment is examined. A comparative analysis may also be used to determine that an image does not depict an epithelial tissue structure.
[0063] Thus, in an embodiment, there is provided a method for characterizing epithelial tissue structure, comprising comparing, by one or more computing device, an unknown image segment to a known epithelial tissue structure model image segment to determine a difference value between the particular segment of the diagnostic image and a value derived from a known epithelial tissue structure model image segment; comparing, by one or more computing device, the determined difference to a predetermined threshold; and characterizing, by one or more computing device, the particular segment of the sample as an epithelial tissue structure if the determined difference is equal to or below the predetermined threshold.
[0064] In embodiments, the unknown image segment may be n-dimensional, such as a 3-dimensional image of a cell volume within a tissue volume or a 4-dimensional image of an epidermal tissue structure maturing as it migrates and changes shape and volume from the basal layer to the stratum corneum. The unknown image segment may be (a) a line sampled through a living cell (1D), (b) on a plane (2D) bisecting a living cell center (1D function) where the 1D function is rotated in an en face plane about the cell center, (c) a volume (3D) where directional cell aspects are defined, or (d) a movie (4D) digitally sampled and analyzed over a life cycle under cellular maturation.
[0065] As used herein, the term “en face” refers to a 2-dimensional image of a plane parallel to the surface of the tissue.
[0066] In an embodiment, the unknown image segment may be acquired by non-invasive or minimally invasive imaging with cellular resolution. Examples of suitable non-invasive or minimally invasive imaging methods include confocal microscopy, optical coherence tomography, photo-thermal microscopy, etc.
[0067] Image masking is a process in graphics software used to hide some portions of an image and to reveal others. It is a non-destructive editing process: the mask can typically be adjusted and tweaked later if necessary. It is often an efficient and flexible way of manipulating an image.
[0068] As used herein with respect to image processing, a “mask” is a term for a model image segment (such as the epithelial tissue structure model) from which similar regions are to be searched in an unknown image segment.
[0069] This method may be used to non-invasively evaluate epidermal tissue structure populations as a quantitative morphometric diagnostic in skin condition detection and in evaluation of dermatological cosmetics. Methods herein may be used as a secondary test in conjunction with, or as a follow-on to, other detection methods.
EXAMPLE
[0070] Reflectance confocal microscopy (RCM) allows in vivo visualization of epithelial tissue structure at a cellular level. The study of RCM images provides information on the topological and geometrical properties of the epidermis. These properties change in each layer of the epidermis and with age.
[0071] Studying RCM requires manual identification of each cell to derive geometrical information, which is then used to compare populations presenting physiological or biological differences. This task is time-consuming and subject to human error, highlighting the need for an automated cell identification method.
Step 1
[0072] The first step is to differentiate between the tissue area and the dark background. This can be done by applying an appropriate band-pass filter to the Fourier-transformed image.
[0073] In mathematics, a Fourier transform (FT) decomposes a function of space or time into functions of spatial or temporal frequency. The term refers both to the frequency-domain representation and to the mathematical operation that associates the frequency-domain representation with a function of space or time. Almost any signal can be broken down into a combination of simple waves, and an image is a two-dimensional signal. The Fourier transform is thus an important image processing tool used to decompose an image into its sine and cosine components, with applications in image analysis, image filtering, image reconstruction and image compression. Wang et al. disclose the use of the FT to detect collagen fiber orientation in the dermis. Wang et al., “Age-related morphological changes of the dermal matrix in human skin documented in vivo by multiphoton microscopy,” J. Biomed. Opt. 23(3), 030501 (2018).
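As an illustrative sketch of the tissue/background separation (the specific filter band is not given in the disclosure), a Fourier-domain band-pass can be implemented with NumPy's FFT routines; the synthetic frame, the cut-off radii and the thresholding rule below are all assumptions:

```python
import numpy as np

def fourier_bandpass(image, low, high):
    """Keep spatial frequencies with radius between `low` and `high`
    (in cycles per image) and discard everything else."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # radial spatial frequency
    mask = (r >= low) & (r <= high)        # annular band-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Synthetic frame: textured "tissue" on the left half, dark background right.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[:, :64] = 0.5 + 0.5 * rng.random((128, 64))

filtered = fourier_bandpass(img, low=2, high=40)
# Textured tissue retains band-pass energy; the dark background does not.
tissue_mask = np.abs(filtered) > np.abs(filtered).mean()
print(tissue_mask[:, :64].mean(), tissue_mask[:, 64:].mean())
```

In practice the band would be tuned to the characteristic cell size of the layer being imaged, so that membrane-scale texture is kept while the DC offset and pixel noise are suppressed.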
[0074] Tissue areas can sometimes contain background enclosures, which can be identified using the Morphological Geodesic Active Contour (MGAC) or “Snake” method. The combination of the Fourier Transform method and the Snake method leads to a more accurate identification of borders between tissue and background.
Step 2
[0075] The second step is to identify individual cells in the segmented tissue area. To perform this task, the following methods were tested: A) Gabor filtering, which analyzes local textural aspects in the tissue and highlights cell membranes; B) Frangi filtering, which detects vessel-like structures, modified to detect blobs that in this case represent keratinocytes; and C) Sato filtering, which detects curvilinear structures in images. A combination of methods could be employed in accordance with the invention.
[0076] The Gabor filter is a linear filter used for texture analysis; it analyzes whether there is any specific frequency content in the image in specific directions in a localized region around the point or region of analysis. See, e.g., J. Kamarainen, “Gabor features in image analysis,” 2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA), 2012, pp. 13-14, doi: 10.1109/IPTA.2012.6469502.
[0077] Frangi et al., Multiscale Vessel Enhancement Filtering, Lecture Notes in Computer Science February 2000, discloses a method for vessel enhancement filtering which is based on local structure.
[0078] Sato et al., Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images, Medical Image Analysis (1998) volume 2, number 2, pp 143-168, discloses a method for the enhancement of curvilinear structures such as vessels and bronchi in three-dimensional (3-D) medical images.
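All three filters have reference implementations in scikit-image (`skimage.filters.gabor`, `frangi` and `sato`). The sketch below applies them to a random stand-in for a segmented RCM tissue area; the frequency, orientation and ridge-polarity settings are illustrative choices, not parameters from the disclosure:

```python
import numpy as np
from skimage.filters import frangi, gabor, sato

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a segmented RCM tissue area

# A) Gabor: local texture at a chosen frequency/orientation, which can
#    highlight the periodic pattern of cell membranes.
gabor_real, _ = gabor(image, frequency=0.2, theta=0)

# B) Frangi: ridge/vesselness filter; black_ridges=False responds to the
#    bright membrane network (a blob-tuned variant targets cell bodies).
frangi_resp = frangi(image, black_ridges=False)

# C) Sato: tubeness filter for curvilinear structures.
sato_resp = sato(image, black_ridges=False)

for name, resp in [("gabor", gabor_real),
                   ("frangi", frangi_resp),
                   ("sato", sato_resp)]:
    print(name, resp.shape)
```

Each filter returns a per-pixel response map of the same shape as the input, which is what the normalization and binarization of Step 3 operate on.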
Step 3
[0079] The outputs of the methods are then locally normalized and binarized, and small elements are removed using connected-component analysis. The output is then skeletonized, and the obtained skeletons (representing cell membranes) are cleaned by pruning spurious branches and loops using repeated dilation followed by re-skeletonization (preferably two passes).
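The cleaning sequence above can be sketched with scikit-image primitives; the synthetic membrane grid, the binarization threshold and the minimum component size are illustrative assumptions:

```python
import numpy as np
from skimage.morphology import (binary_dilation, remove_small_objects,
                                skeletonize)

rng = np.random.default_rng(0)
# Stand-in for a normalized filter response: a bright grid of "membranes".
resp = np.zeros((96, 96))
resp[::16, :] = 1.0
resp[:, ::16] = 1.0
resp += 0.1 * rng.random((96, 96))

binary = resp > 0.5                                  # binarize
binary = remove_small_objects(binary, min_size=20)   # drop small components
skeleton = skeletonize(binary)                       # thin membranes to 1 px

# Pruning: repeated dilation followed by re-skeletonization (two passes),
# which removes short spurious branches and small loops.
for _ in range(2):
    skeleton = skeletonize(binary_dilation(skeleton))

print(skeleton.sum())
```

The resulting skeleton is the membrane network on which the distance transform of the next paragraph is computed.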
[0080] A distance transform is then applied on the skeletons and local maxima are identified, representing cell centers, which are then used to apply a marker-controlled watershed algorithm. This step considers the original image as a topographic surface, flooded starting from the seeds. The frontiers between flooded areas define the cell borders.
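A minimal marker-controlled watershed along these lines can be written with SciPy's distance transform and scikit-image's `watershed`; for brevity the seeds below are placed by hand in place of detected distance-transform maxima, and the rectangular "tissue" region is a synthetic assumption:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# One connected "tissue" region assumed to contain two adjacent cells.
cells = np.zeros((40, 80), dtype=bool)
cells[5:35, 5:75] = True

# Distance transform inside the region; its local maxima would normally
# serve as the cell-center seeds.
distance = ndi.distance_transform_edt(cells)
markers = np.zeros(cells.shape, dtype=int)
markers[20, 25] = 1   # seed standing in for the left cell center
markers[20, 55] = 2   # seed standing in for the right cell center

# Flood the inverted distance map from the seeds; the frontier between
# the flooded basins is taken as the border between the two cells.
labels = watershed(-distance, markers, mask=cells)
print(len(np.unique(labels)) - 1)  # number of segmented cells
```

Negating the distance map turns cell centers into basin minima, so each seed floods outward and the watershed line falls along the membrane ridge between them.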
[0081] The results from the methods were compared against manual cell identification on sample images and on artificially created test images. The results show that automatic cell identification can be achieved. In this respect, the Gabor filtering method appears to be better at identifying keratinocytes of the granular layer, whereas Frangi filtering works better for the spinous layer. The Sato filter worked best at identifying spinous cells, and it is expected that the Gabor filter followed by the Sato filter will work best for granular cells.
[0082] This is a promising step towards automatic cell identification and characterization of tissue geometry.
[0083] Although the algorithms have been applied to Reflectance Confocal Microscopy (RCM), they could be applied to other imaging technologies including, e.g., optical coherence tomography (OCT), line-field optical coherence tomography, MPF, SHG, FLIM, photoacoustic microscopy, MRI, etc., and in general to imaging modalities with resolution capable of providing information at the cell level and on the organization of epithelial tissue.
[0084] Although the algorithms are used on skin epidermis, they could be used for any type of epithelial tissue, including, e.g., the gut, lung, and mouth.
[0085] In accordance with the invention, a method of cell identification includes the following steps:
[0086] segmenting images to remove non-cellular areas;
[0087] combining the Fourier-filtered image for external contours with the Morphological Snakes image for cellular areas;
[0088] identifying regions of interest;
[0089] wherein identifying includes locally normalizing outputs and converting resultant images to grayscale;
[0090] identifying individual cells within the tissue, wherein the parameter employed for identification depends upon the layer of interest;
[0091] wherein the Gabor filter is employed on the Fourier-filtered image based on white-ridge detection;
[0092] wherein the Frangi filter is employed on the negative of the Fourier-filtered image based on white-ridge detection;
[0093] wherein the modified Frangi filter is employed based on black-blob detection; and
[0094] wherein the Sato filter is employed on the original image with the background set to 0.
[0095] In accordance with the invention, a method of cell identification includes the following steps:
[0096] normalization;
[0097] set background to 0 level;
[0098] binarization;
[0099] connected component analysis to remove noise.
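The four steps above can be sketched as one post-processing routine; the binarization threshold of 0.5 and the minimum component size are illustrative assumptions, not values from the disclosure:

```python
import numpy as np
from scipy import ndimage as ndi

def postprocess(response, tissue_mask, min_size=10):
    """Normalize a filter response, zero the background, binarize, and
    drop small connected components as noise."""
    r = response.astype(float)
    span = r.max() - r.min()
    r = (r - r.min()) / (span if span else 1.0)   # normalize to [0, 1]
    r[~tissue_mask] = 0.0                         # set background to 0 level
    binary = r > 0.5                              # binarize
    lbl, n = ndi.label(binary)                    # connected components
    sizes = ndi.sum(binary, lbl, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_size) + 1
    return np.isin(lbl, keep_ids)                 # noise components removed

rng = np.random.default_rng(0)
resp = rng.random((64, 64))
mask = np.ones((64, 64), dtype=bool)
clean = postprocess(resp, mask)
print(clean.dtype, clean.shape)
```

The same routine serves as steps 3.1-3.3 of the center-detection list below, feeding a clean binary image into the distance transform.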
[0100] In accordance with the invention, a method of cell identification includes the following steps:
[0101] identify cells' centers (for each method in step 2, with or without step 3.4);
[0102] clean the results of the previous step (for each method);
[0103] local normalization;
[0104] binarization;
[0105] connected components analysis to remove noise (or signals coming from intracellular structures);
[0106] distance transform on clean binary image;
[0107] local maxima (considered as the cell center);
[0108] remove overlapping cell centers, centers that are too close to the tissue border, cell centers within the background, cell centers within the membrane.
[0109] In accordance with the invention, a method of cell identification includes the following steps:
[0110] rebuild the cells;
[0111] marker-controlled watershed from these centers/Voronoi;
[0112] evaluate accuracy with DAccuracy by comparing real vs. detected cells. The DAccuracy software computes accuracy measures of an N-dimensional detection or segmentation image when the ground truth is represented by a CSV file or an image. It works in three contexts: one-to-one (single ground truth, single segmentation); one-to-many (unique ground truth, several segmentations, typically obtained with several methods); and many-to-many (a set of ground-truth/segmentation pairs). DAccuracy is available at https://gitlab.inria.fr/edebreuv/daccuracy.
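The DAccuracy tool itself computes several measures; as a simplified stand-in (not the DAccuracy implementation), the fraction of ground-truth cell centers matched by a detection within a tolerance radius can be computed with a k-d tree. The tolerance of 3 pixels and the toy coordinates are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def detection_accuracy(truth, detected, tol=3.0):
    """Fraction of ground-truth cell centers that have a detected center
    within `tol` pixels; a simplified stand-in for the DAccuracy measures."""
    if len(truth) == 0:
        return 1.0
    tree = cKDTree(np.asarray(detected))
    dists, _ = tree.query(np.asarray(truth))  # nearest detection per truth
    return float(np.mean(dists <= tol))

truth = [(10, 10), (30, 30), (50, 12)]
detected = [(11, 9), (29, 31), (70, 70)]
print(detection_accuracy(truth, detected))  # 2 of 3 centers matched
```

A fuller evaluation would also penalize spurious detections (the unmatched third point above), which is the one-to-one context DAccuracy handles directly.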
[0113] In accordance with the invention, a method of cell identification includes the following steps:
[0114] extraction of characteristics and classification:
[0115] define descriptors characteristic of the different structures of the epidermis and compute them for all images and compare results per layer/participant/condition;
[0116] preprocessing to improve contrast between cells (Fourier filtering);
[0117] Gabor filter;
[0118] Frangi filter;
[0119] Sato filter;
[0120] evaluate accuracy (DAccuracy workflow).
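Descriptors such as cell area, perimeter, density, and nearest-neighbor distances (the geometry and topology parameters named in the claims) can be computed from a label image with `skimage.measure.regionprops` and a k-d tree; the toy label image of three square "cells" is an assumption for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import regionprops

# Toy label image: three square "cells" on a zero background.
img = np.zeros((60, 60), dtype=int)
img[5:15, 5:15] = 1
img[5:15, 30:40] = 2
img[35:50, 10:25] = 3

props = regionprops(img)
areas = [p.area for p in props]               # cell areas (pixels)
perimeters = [p.perimeter for p in props]     # cell perimeters
centers = np.array([p.centroid for p in props])

# Topology: distribution of nearest-neighbor distances between cell centers.
tree = cKDTree(centers)
nn_dist, _ = tree.query(centers, k=2)         # k=2: self + nearest neighbor
density = len(props) / img.size               # cells per pixel

print(areas, np.round(nn_dist[:, 1], 1), round(density, 5))
```

Comparing such per-layer descriptor distributions across participants or conditions is what allows the screening comparisons described in the claims.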
[0121] In accordance with the invention, a method of cell identification is employed to obtain a model of the evolution of the skin between early age and adulthood.
[0122] Although certain embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope. Those with skill in the art will readily appreciate that embodiments may be implemented in a very wide variety of ways. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments be limited only by the claims and the equivalents thereof.