Phonodermoscopy, a medical device system and method for skin diagnosis
11484247 · 2022-11-01
Assignee
Inventors
CPC classification
A61B5/0095
HUMAN NECESSITIES
A61B5/445
HUMAN NECESSITIES
G16H50/20
PHYSICS
A61B5/444
HUMAN NECESSITIES
A61B5/7264
HUMAN NECESSITIES
A61B5/6898
HUMAN NECESSITIES
International classification
Abstract
The present invention provides a new system and method for diagnosing skin cancer that enables more effective analysis of changes in skin tissue through the duality of acquiring visual data and transforming that visual data into an audio signal. Converting the complex patterns of visual information of a skin lesion, via computer-aided classification analysis, into diagnostic sounds results in a much higher resolution rate and increased precision of diagnosis.
Claims
1. A computer-implemented method of evaluating a surface skin lesion for determining if malignant or non-malignant by developing a digital data map representing the surface skin lesion for conversion to audio signals, the method comprising: providing a tissue image of the surface skin lesion; generating segmentation of the tissue image using a computer-aided classification system, wherein similar types of tissue or features of the surface skin lesion are grouped into the same segment resulting in a plurality of different segments representing different types of tissue or features; classifying each of the plurality of segments to provide a plurality of classified segments; applying a clustering process to the classified segments to provide a plurality of clusters, wherein the clustering process comprises use of a K-means algorithm and wherein each cluster has a centroid and a user of the K-means algorithm forces the classified segments into a maximum of 14 centroids; and applying a specific audio signal, using sonification techniques, for each of the plurality of clusters to provide an audio output, and optionally a visual image of the audio output, wherein a pitch is assigned to each centroid; wherein the audio output or visual image indicates whether the surface skin lesion is malignant or non-malignant.
2. The method of claim 1 wherein the tissue image is obtained using photography, dermoscopy, thermography, multiphoton fluorescence microscopy, multiphoton excitation microscopy, optical coherence tomography, or confocal scanning laser microscopy.
3. The method of claim 1, wherein features of the tissue image are extracted from the image and grouped into a plurality of segments for analysis using the computer-aided classification system.
4. The method of claim 3, wherein the classifying step characterizes the tissue in the tissue image using at least one of the features selected from the group of brightness, shape, color, size, and quality of tissue.
5. The method of claim 3, wherein the computer-aided classification system is a convolutional neural network selected from GoogLeNet, ENET or Inception.
6. The method of claim 1, wherein the audio signal is selected from the group consisting of different pitch, loudness, timbre, spatialization and temporal patterns of each visual feature to provide a human-interpretable audio output.
7. The method of claim 1, wherein the audio output is audified by headphones, a speaker, an iPhone, or any device that audifies the audio output at a frequency for auditory perception.
8. The method of claim 1, wherein the user of the K-means algorithm chooses a maximum of 8 centroids to 14 centroids.
9. The method according to claim 1, wherein the tissue image is captured by applying electromagnetic or mechanical energy to the surface skin lesion suspected of being malignant; capturing reflected and/or refracted electromagnetic or mechanical energy through a dermoscope or microscope or camera for a digital image; converting the reflected and/or refracted or acquired energy into the visual image.
10. The method of claim 1, further comprising converting the audio output to a visual representation.
11. The method of claim 1, further comprising reviewing the audio signal to provide guidance for excising the surface skin tissue suspected of being malignant, wherein the surface skin tissue suspected of being malignant is subsequently excised.
12. The method of claim 1, wherein the sonification techniques comprise a parameter mapping sonification method of centroids.
13. The method of claim 1, wherein the surface skin lesion is an atypical melanocytic hyperplasia, an atypical mole, a dysplastic mole, a cancerous skin disease, actinic keratosis, a basal cell carcinoma or a squamous cell carcinoma.
14. The method of claim 1, wherein the malignant lesions sound comparatively more loud, sharp, or urgent than benign lesions.
15. The method of claim 1, wherein the segmentation of the tissue image comprises pixel segmentation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE INVENTION
(20) The present invention provides a medical device system and method for diagnosing skin lesions, or skin cancers, and particularly skin tumors of melanocytic origin, i.e. malignant melanoma, or non-melanocytic skin tumors such as basal and squamous cell carcinoma. The present invention relates to a device which (i) acquires visual data from a skin lesion by an optical or mechanical method, or captures, by a sensor, refracted electromagnetic waves such as UVA, visible-spectrum or infrared wavelengths, or molecular vibrations; (ii) processes the visual data to provide a visual field image; (iii) applies a dermoscopic pattern analysis to the acquired visual image; and (iv) transforms the data into an audio signal, assigning different audio pitch, loudness, timbre, spatialization and temporal patterns of sound to each visual pattern, to be communicated through an algorithm to the practitioner; or directly applies to a visual image acquired by any available method or device a dermoscopic pattern analysis followed by audio signal transduction into specific diagnostic pitch, loudness, timbre, spatialization and temporal patterns.
(21) An audio signal is important for further identification of the suspected cancerous tissue. Notably, the calyx of Held, first described in 1893, is a giant glutamate-secreting relay synapse in the mammalian auditory brainstem. It is involved in the transduction of sound into neuronal activity and the relatively fast transmission of auditory input [20]. Upon stimulation, sound-wave transduction follows a mechanical process lasting about 1 ms, in contrast to the processing of visual stimuli, a photochemical operation lasting about 50 ms [22]. Because visual input is processed at least 50-fold more slowly, auditory input can be more quickly perceived and delivered to consciousness. Part of this delay for visual stimuli may be related to the longer and slower neuronal pathways delivering information to the cortex [23].
(22) Thus, the sensitivity of the acoustic system exceeds that of the visual system. When audio and visual inputs are presented close together to the perceiver, no brain compensation or adjustment of brain function is applied, yielding a higher resolution rate and identification of more stimuli for the acoustic signal than for the visual function.
(23) The transformation of data into acoustic parameters that represent the acquired information, i.e. sonification, has been used since ancient Greece and medieval China to convey elapsed time. Centuries later it was employed by Kepler, ultimately contributing to his third law of planetary motion [24]. Sonification in various adaptations has been used, or proposed for use, among other things, as a highly perceptive substitute for visual information: apparatus providing warnings to pilots, devices for monitoring the architectural integrity of large structures, guiding the manipulation of surgical instruments during brain surgery, anesthesiology, analyzing seismology data, data display for the visually impaired, monitoring the oscillation of subatomic particles in quantum physics, fingerprint identification, skin pore audification by area and color distribution, training and rehabilitation, seizure detection in infants, optical coherence tomography monitoring, and stroke rehabilitation [25, 26, 27, 28, 29].
(24) As previously stated, cancers of the skin are the most common forms of cancer. There are several modalities [30], discussed hereinbelow, to assist with generating visual data and/or images for further sonification of data.
(25) Photography is a technique that uses photographic devices to capture surface images of the skin in order to primarily identify suspicious and pigmented skin lesions. Polarized light photography relies on the fact that reflected light has two components, one regular reflectance to reflect the skin surface morphology, the other “back-scattered” from within the tissue. It is useful in the assessment of skin surface morphology when the proper polarizing filters and techniques are used.
(26) Dermoscopy, also known as epiluminescence microscopy, uses handheld devices to show subsurface structures of the skin, allowing optical light rays to penetrate beyond the skin surface while minimizing surface reflection. Nonpolarized contact dermoscopy uses a nonpolarized light source, such as a halogen lamp, is attached directly to the skin, and requires an oil or gel to prevent surface reflection. Additionally, dermoscopy can include polarized devices that do not need a liquid interface and are equipped with a cross-polarized lens that absorbs scattered light waves. Polarized contact dermoscopy can capture images of vascular and other structures. These devices are useful in visualizing melanin, blue nevi, and shiny white streaks. Still further, both approaches can be combined.
(27) Thermography involves measuring and mapping surface skin temperature either through direct contact (via application of liquid crystal plates to a part of the body) or at a distance (utilizing a highly sensitive medical infrared camera and a sophisticated computer interface). Thermography can be used in conjunction with thermostimulation, which applies thermal stress to the skin to be examined.
(28) Other methods of providing an image include the use of multiphoton fluorescence microscopy or multiphoton excitation microscopy that use more than one photon excitation to illuminate endogenous fluorophores in skin tissues, which emits a fluorescence signal to be captured by a detector. Additionally, optical coherence tomography (OCT) may be used and this device utilizes reflected light to produce cross-sectional subcutaneous images of tissue at a resolution equivalent to a low-power microscope. Confocal scanning laser microscopy (CSLM) works by first projecting a low-power laser beam through a lens on a specific point on the skin, and then detecting the light reflected from the focal point through a confocal pinhole filter. The reflected light is transformed into an electrical signal, which is recorded as an image by a computer.
(29) Photodynamic diagnosis includes the use of topical agents that stimulate the production of endogenous photosensitizers that produce a photodynamic effect when exposed to light of certain wavelengths and energy. For example, UV is absorbed by melanin. The theory behind this experimental technique is that illumination by ultraviolet light could reveal irregular pigment distribution, and therefore could be useful in defining the borders of melanoma.
(30) The features extracted from the image are then used to classify the image wherein the classification step is comprised of characterizing the tissue based on features such as shape, color, size, or quality of the tissue, to name a few, and the characterization of a tissue is compared to the characterization of a reference tissue and the tissue is classified based on the comparison.
(31) Embodiments of the present invention can employ computer-aided classification systems (sometimes termed "machine learning" or "deep learning"). There is a plethora of pattern recognition algorithms that can be employed to biometrically model and classify different tissue types. Those skilled in the art will recognize that many such classification systems could be used in the present invention, including but not limited to Linear Discriminant Analysis (LDA), Kernel Discriminant Analysis (KDA), Neighborhood Preserving Embedding (NPE), Orthogonal Linear Graph Embedding (OLGE), Unsupervised Discriminant Projection (UDP), Marginal Fisher Analysis (MFA), Locality Preserving Projection (LPP), Local Fisher Discriminant Analysis (LFDA), Convolutional Neural Network (CNN), Support Vector Machine (SVM) and Kernel Correlation Feature Analysis (KCFA).
(32) A preferred classification system is a CNN, which is used to automatically extract local features. Examples of CNN architectures include LeNet, AlexNet, OverFeat, VGG, ResNet, GoogLeNet and Inception (V2, V3, V4), ENET and Xception. A CNN consists of many layers; each layer plays a feature-extraction role and performs different operations such as convolutions, subsampling, pooling, full connection, etc. Like other neural networks, a CNN is trained by backpropagation; based on performance, online error backpropagation is generally used. The learning process is an iterative procedure in which the weights are updated by a small step in the direction opposite the steepest gradient. The present invention has found that the use of sonification, i.e. deriving audio data in order to convey information, built upon a processed image, employs variable tone input, melodic alarms and changes of sound patterns which meaningfully increase the spectrum of diagnosis. Specifically, transduction of the patterns of visual information from a pattern analysis of a skin lesion into diagnostic sounds results in a much higher resolution rate and precision of diagnosis.
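The convolution, activation and pooling layers described above can be illustrated in a few lines of NumPy. This is a toy sketch of a single convolutional layer, not the Inception or GoogLeNet networks named in the text; the edge-detecting kernel and the 16x16 stand-in image are hypothetical:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Subsample a feature map by taking the max over size-by-size blocks."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Edge-detecting kernel: the kind of low-level feature an early CNN layer learns.
edge_kernel = np.array([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]])

lesion = np.random.rand(16, 16)          # stand-in for a grayscale lesion image
features = max_pool(relu(conv2d(lesion, edge_kernel)))
print(features.shape)                    # (7, 7)
```

A real system would stack many such layers with learned kernels and train them by backpropagation, as the paragraph notes.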
(33) Parameter mapping sonification involves the association of information with auditory parameters for the purpose of data display. Since sound is inherently multidimensional, it is particularly well suited for displaying multivariate data. Data exploration is often thought of as the most 'scientific' of sonifications and usually makes use of a type of sonification called parameter-based sonification. For example, sonification approaches can be used to interpret and sonify the weighted activations of nodes in a machine learning system (computer-aided classification system), including "raw" weights sonification, concept mapping sonification and K-Means sonification.
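The core of parameter mapping can be sketched as two linear maps from data values onto auditory parameters. This is a minimal illustration, assuming normalized node activations in [0, 1] and an arbitrary 220-880 Hz pitch range, neither of which is specified in the text:

```python
import numpy as np

def map_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a data value in [lo, hi] onto a frequency range in Hz."""
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

def map_to_loudness(value, lo, hi):
    """Linearly map a data value in [lo, hi] onto an amplitude in [0, 1]."""
    return (value - lo) / (hi - lo)

# Hypothetical node activations from a classifier, one per feature.
activations = np.array([0.1, 0.45, 0.9])
pitches = [map_to_pitch(a, 0.0, 1.0) for a in activations]
amps = [map_to_loudness(a, 0.0, 1.0) for a in activations]
```

Any other auditory dimension mentioned in the text (timbre, spatialization, temporal pattern) can be driven by an analogous mapping.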
(34) K-Means is an unsupervised learning algorithm that classifies a given data set into a certain number of clusters. The data is preferably obtained from the classification systems discussed above, such as the CNN system. The main idea is to define k centroids, one for each cluster. Initially the algorithm preferably places the centroids as far away from each other as possible. Each point in the data set is then associated with the nearest centroid. When no point is pending, the first step is complete and an early grouping is done. The k centroids are then re-calculated as the centers of the clusters resulting from the previous step, and the process is repeated until the centroids no longer move; in the successive loops the k centroids change their location step by step. In the present invention, each image has different features depending on the type of skin cancer, and such features are used for classification. Texture is an important aspect of the image, including brightness, color, shape and size. Such features are extracted from the image data set and can be used in the classification. In the present invention, it has been found that the number of centroids relating to the features of the visual skin image can range from about 8 to about 14, and more preferably from about 9 to 12. Thus an image filled with different data points can be extracted and classified, with a subsequent connection to an audio sound generating various amplitudes, decays and frequencies. Importantly, the audio sound includes different pitches, loudness levels, durations, timbres, and other sound attributes to make malignant lesions sound comparatively more loud, sharp, or urgent than benign lesions. This difference in sound attributes allows an experienced listener to learn to differentiate the sound of different classes of lesions. Notably, using the sound attributes is diagnostically more powerful because audio output based on K-means assigns a severity to a data point, exploiting the brain's auditory processing, which is more sensitive than visual examination. Specifically, an audio signal based on the K-means data can denote a severely malignant sound, and the clinician may excise a wider margin and arrange a faster follow-up based on this message.
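The assign-then-recompute loop described above can be sketched directly. This is a minimal K-means, not the patent's production pipeline; the cap of 14 centroids follows the claims, while the naive initialization (first k points rather than maximally separated seeds) and the four toy 2-D feature vectors are assumptions for illustration:

```python
import numpy as np

def kmeans(points, k, iters=100):
    """Minimal K-means sketch; k is capped at 14 per the claimed method.
    Naive init: the first k points (a real system would spread seeds out)."""
    assert 1 <= k <= 14
    centroids = points[:k].copy()
    for _ in range(iters):
        # associate every point with its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # re-calculate each centroid as the center of its cluster
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):   # centroids no longer move: converged
            break
        centroids = new
    return centroids, labels

# Four hypothetical 2-D feature vectors forming two obvious groups.
pts = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
cents, labels = kmeans(pts, k=2)
print(labels)   # [0 0 1 1]
```

With real lesion features, k would be chosen in the 8-14 range described above and each cluster would then be tied to an audio sound.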
(35) The above extraction of data from the classifier may be standalone or combined with additional methodologies of sound extraction, exemplified by but not restricted to: raw-weights analysis, i.e. defining an activated point in a processed classifier image as either benign or malignant, assigning it a specific weight, and sorting the sum of all by magnitude in order to derive sequential sounds; and/or concept-mapping data analysis, i.e. determining any number of parameters, such as benignness, malignancy or color, as a linear or non-linear function of the distance and polarity of a given image from the benign/malignant/color decision boundary in the complex dimensional space represented by the classifier input, attributing a virtual weight to each, with or without multiplying each parameter by its confidence, average, root mean square, etc., and generating amplitudes, decays, and frequencies from the calculated sum of these activations.
(36) The new device may operate either as a stand-alone apparatus or as an added interface to an existing apparatus used for acquiring visual data. It is an object of the present invention to acquire visual data, analyze it by the pattern recognition rules of dermoscopy, and transform it by parameter mapping sonification within a simple medical device.
(37) For exemplifying purposes, the device is described herein as being specifically designed for melanoma diagnosis. Nevertheless, it will be immediately obvious to persons skilled in the art that additional applications are possible, in particular diagnosis of cancerous skin diseases such as dysplastic nevus, actinic keratosis, and basal and squamous cell carcinoma; the definition of skin properties such as skin microstructure and wrinkles, which are another object of the invention; or the diagnosis of skin disease, such as identifying the existence of onychomycosis.
(38) Embodiments of the present invention are directed to the application of pattern analysis recognition based on, but not limited to, dermoscopy principles, and to sonification of the results. In one embodiment a standalone device acquires visual data from a skin lesion by an optical or mechanical method.
(39) In another embodiment the present device acts as an interface to any existing apparatus which acquires a visual image or images of a skin structure, in order to perform dermoscopic pattern analysis. The digital data is then passed to a second processing unit which transforms the processed visual data into audio signals, i.e. a parameter mapping sonification, taking advantage of basic sound recognition based on pitch, loudness, timbre, spatialization and temporal patterns of sound.
(40) The device of the present invention differs from available methods and systems, which use only digital or non-digital visual data and attempt to converge information at the end of a visual diagnostic stage in order to facilitate the diagnosis. In sharp contrast, the phonodermoscopy device of the present invention uses a system and method of creating a detailed mapping of digital data and diverging input during and at the end of the visual dermoscopic pattern analysis stage. Data from a processed final feature map, to which dermoscopic feature inputs have been applied, is further converted by parameter mapping sonification, such as unsupervised-learning K-Means sonification, and converged into audio data at decisional points.
(41) The main components of the system are: 1. a unit capturing electromagnetic waves from a skin lesion; 2. a direct-input optical data visual screen; 3. a visual data processing unit producing a final feature map; 4. a dermoscopic pattern transduction process unit, local or on the cloud; 5. a visual-to-audio transducer unit; and 6. a headphone/speaker unit. The clinician moves the diagnostic apparatus from lesion to lesion, obtaining the audio signal, much like a stethoscope.
(42) In some embodiments (
(43) In yet other embodiments, the computer device analyzes skin photos and images derived passively from the skin lesion. In some embodiments, multidimensional images are obtained using multispectrometry, to be further processed for dermoscopy, classification through machine learning, and sonification.
(44) In one embodiment, the system stores the digital data and analyzes it by a dermoscopic pattern analysis, assigning various digital values to each pattern and area. Each nevus is given a full range of dermoscopic grading, by digital data and/or color output, taking advantage of the next step of visual-to-audio transduction. The computerized software uses a normal nevus pattern as a baseline, assigning it values to be used as a control and future background noise, e.g. referring in one embodiment to at least 6 colors and at least 19 basic patterns of dermoscopy. Visual nevus dermoscopic structures are converted into numerical values, assigning each skin structure a definite figure. Diverse structural areas in a nevus might be referred to by the same algorithm.
(45) A full digital data map representing each skin lesion is developed and further converted to audio signals. For example, in the conversion to audio, normal nevi are given a low-pitch, low-amplitude, basic-timbre notification, whereas abnormal skin structures such as melanoma, according to their degree of irregularity, generate a high-pitch, high-amplitude, high-timbre and irregular pattern. Dysplastic nevi are assigned intermediate degrees of pitch and amplitude. The melanoma risk score derived from the visual-to-audio conversion is divided into a highly sensitive arbitrary scale, and the practitioner is alerted by the change from the baseline audio noise. Additional special pattern analysis melanoma-identifiable risk factors are allocated a high amplitude. Specifically, and contrary to existing systems, the diagnosis is not related to the status of the skin lesion in the past; the diagnosis is absolute. Also contrary to existing systems, which quantify melanoma risk on a limited scale, e.g. a go/no-go indication for biopsy, the phonodermoscopy device of the present invention may quantify data on a much more sensitive scale. For example, one embodiment employs a range of 1-61 keys, like an organ. Another embodiment may use a short range of 1-25 keys, like a synthesizer keyboard, which together with amplitude variation indicates the degree of risk. Yet another embodiment may use a 3-octave sound scale, and the artisan will grasp the wide field of sound simulation, starting with the use of any musical instrument or combination thereof.
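As a sketch of the finer-grained scales just described, a risk score can be mapped linearly onto a 61-key organ-style scale. The linear mapping and the normalized [0, 1] score range are assumptions for illustration; the text leaves the scale arbitrary:

```python
def risk_to_key(score, n_keys=61):
    """Map a normalized melanoma risk score in [0, 1] to a key in 1..n_keys
    (1 = lowest risk / lowest pitch, n_keys = highest risk / highest pitch)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be normalized to [0, 1]")
    return 1 + round(score * (n_keys - 1))

print(risk_to_key(0.0), risk_to_key(0.5), risk_to_key(1.0))   # 1 31 61
```

The same function with `n_keys=25` gives the synthesizer-keyboard embodiment mentioned above.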
(46) The audio output of the system may be delivered by, but is not limited to, a stereo output. Headphones, multiple speakers, an iPhone, or any other means might be used for audifying the data. A usual frequency range is 15 Hz-25 kHz, preferably between 20 Hz and 20 kHz, for auditory perception.
(47) In some embodiments, in order to facilitate assessment, the final audio data might be further converted to visual data on screen, expressed as a wave with amplitude and time, as a bar graph showing signal intensity, or by an algorithm which renders the audio data in quiet colors, e.g. blue, or in vivid red. For example, a dysplastic nevus, a lower level of pathologic entropy, may be assigned pink and orange colors.
(48) Further, when a digital device such as a smartphone is used for capturing the image, the dermoscope image may, in order to stabilize the acquisition procedure, also be captured by voice recognition technology as an alternative to pressing the buttons for acquiring the image, including the start and play buttons. The results and output may also be delivered by voice recognition technology.
(49) Such audio data qualifies a primary care physician who uses the system to diagnose melanomas. All data may be further recorded by the computer as visual and audio data and enclosed in the patient file. Such data may be further processed and transmitted as a medical record.
(50) The phonodermoscopy system of the present invention is not limited to melanoma diagnosis, but extends to skin carcinoma, dysplastic nevus and actinic keratosis diagnosis, as well as to general skin wrinkle assessment, sun damage, extent of vascularization, skin pigmentation, nail fungus identification, etc., all to be represented by specific sound patterns. In one embodiment a piano sound may be assigned to melanocytic nevus lesions, while a trumpet sound is assigned to carcinoma. In other embodiments, the assignments are inverted, or the instruments are changed by the user. In another embodiment, skin-wrinkle audification may be presented as low pitch and low amplitude for minor wrinkles and as high amplitude and noise for numerous wrinkles. In another embodiment, onychomycosis induced by Candida is allocated a high amplitude and rate, as opposed to onychomycosis caused by Trichophyton, which is endowed with a low amplitude, thereby determining the etiological source.
(51) Further, the high sensitivity of the phonodermoscopy method and system of the present invention can be used as a dynamic tool for skin quality assessment, including wrinkles, blood vessels and pigmentation.
(55) TABLE 1
Kruskal-Wallis One Way Analysis of Variance on Ranks
All Pairwise Multiple Comparison Procedures (Dunn's Method)

Comparison                        Diff of Ranks    Q         P < 0.05
Classifier vs Kmeans Listener     279.858          11.040    Yes
Classifier vs Clinical 123        257.422          8.834     Yes
Clinical 123 vs Kmeans Listener   22.436           0.768     No
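The Kruskal-Wallis test behind Table 1 ranks all observations jointly and compares the groups' rank sums. The sketch below computes the H statistic (without tie correction) on made-up rating data; it does not reproduce the study's actual ratings or the Dunn's post-hoc comparisons:

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) - a minimal sketch."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    order = data.argsort(kind="stable")
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)
    for v in np.unique(data):            # tied values share the average rank
        ranks[data == v] = ranks[data == v].mean()
    n, start, term = len(data), 0, 0.0
    for g in groups:
        r = ranks[start:start + len(g)]  # ranks belonging to this group
        term += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * term - 3.0 * (n + 1)

# Identical groups give H = 0; well-separated groups give a large H.
print(kruskal_h([1, 2, 3], [1, 2, 3]))
print(kruskal_h([1, 2, 3], [10, 11, 12]))
```

For real analyses, a library implementation with tie correction (e.g. `scipy.stats.kruskal`) would normally be preferred.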
EXAMPLES
(57) In the following examples the images were first analyzed by a convolutional neural network (CNN) classification system to provide a pattern and image recognition output using deep learning of different layers of features. Specifically, the Inception V2 classification program was used; it is a computer-aided program that performs image classification by looking for low-level features, such as edges and curves, and then building them up through a series of convolutional layers. Photographic images of both malignant and benign tissue lesions were introduced into the classification system. The classifiers identified by the CNN system were then input into a clustering algorithm. In the tests, the inventor of the present invention used the K-Means clustering algorithm to segment the lesions.
(58) As discussed above, K-Means clustering separates n objects into k clusters wherein each object belongs to the cluster with the nearest mean. The objective of K-Means clustering is to minimize the total intra-cluster variance so that the objects are forced into a few clusters and thus have a small number of "centroids." That is, the images in the classifier database are forced into a few clusters, with each cluster having a centroid. The number of clusters (k) is arbitrary and is set by the person running the K-Means algorithm. In the present invention, the number of clusters (k=11) was chosen after using a machine learning process. The resulting clusters or centroids do not have any special meaning; this is just a mathematical way to force some grouping of the images based on their various features. These 11 centroids were then sorted according to their predictive power for classifying lesions as benign or malignant. The main point of this approach was to reduce the data coming from the classifier down to, in this case, 11 numbers.
(59) The sonification aspect involved assigning a pitch to these 11 centroids, wherein the centroids that more consistently predict benign lesions were mapped to lower pitches; and centroids that more consistently predict malignant lesions mapped to higher pitches. Since there are only 11 centroids, there were only 11 pitches required; thus each is mapped onto a proper musical note (i.e., a key on the keyboard), without the need to subdivide the keys up into micro-notes. The notes are separated by musical fourths, again centered around middle C. In addition to the pitch arrangement, the malignant centroids became more salient by applying a saw wave frequency modulator whose frequency increases with increasing malignant predictive power.
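The pitch assignment just described (11 notes a musical fourth, i.e. five semitones, apart, centered around middle C) can be computed directly. The 261.63 Hz value for middle C and the equal-temperament formula are standard; the assumption here is only that the centroids arrive pre-sorted from most-benign to most-malignant, per the sorting step above:

```python
import numpy as np

MIDDLE_C = 261.63   # Hz, equal temperament

def centroid_pitches(n=11, semitones_apart=5):
    """Frequencies for n centroid notes, a musical fourth (5 semitones)
    apart, centered around middle C; index 0 = most benign (lowest pitch),
    index n-1 = most malignant (highest pitch)."""
    offsets = (np.arange(n) - n // 2) * semitones_apart
    return MIDDLE_C * 2.0 ** (offsets / 12.0)

freqs = centroid_pitches()
print(round(freqs[5], 2))   # the middle centroid sits on middle C: 261.63
```

The saw-wave frequency modulation applied to the malignant centroids would then be layered on top of these base pitches.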
(60) Test images were processed by the machine learning system, which, in the above-discussed K-Means approach, produced 11 numbers representing the distance from each image to each of the 11 centroids. These 11 distance measures were then used to adjust the loudness and duration of each of the 11 notes making up the sonification system. Notably, these 11 notes can be played in a number of ways: simultaneously as an 11-note chord; sequentially in an arpeggio-like manner; or as two simultaneous arpeggios, "from the middle out." The pitches, loudness, durations, timbres, and other sound attributes can all be adjusted to make malignant lesions sound comparatively more loud, sharp, or urgent than benign lesions. The overall effect of this K-Means sonification approach is that the sonification still conveys information about the image and how it compares to clusters of known images already in the database. However, relatively little is done to weight how the sounds come out (though, as just described, some weighting can be employed), so the sonification allows the listener to have a clear sense of the components of the sound. This should, in turn, allow the experienced listener to learn to differentiate the sound of different classes of lesions. That is, the listener should be able to learn the sound of a seborrheic keratosis as distinct from the sound of some other type of lesion. This is more diagnostically powerful, in theory, than simply distinguishing "something that is malignant" from "something that is benign." The method and system of the present invention are more congruent with how experienced diagnosticians use visual information (i.e., visual inspection, or even just images of lesions), because they can make a more fine-grained assessment than just malignant/benign, assigning a category or type to the lesion as part of their overall clinical diagnosis.
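One way the 11-note chord rendering could look is sketched below. The inverse-distance loudness mapping, the sample rate and the sine-wave timbre are assumptions for illustration; the text specifies only that the 11 centroid distances control loudness and duration:

```python
import numpy as np

SR = 22050   # sample rate in Hz (an assumed value)

def sonify_chord(freqs, distances, dur=1.0):
    """Render the centroid notes as a single chord. A small distance means
    the image lies close to that centroid, so its note is made louder here
    via inverse-distance weighting (one plausible mapping)."""
    amps = 1.0 / (1.0 + np.asarray(distances, dtype=float))
    amps = amps / amps.max()
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    sig = sum(a * np.sin(2.0 * np.pi * f * t) for f, a in zip(freqs, amps))
    return sig / np.abs(sig).max()       # normalize to [-1, 1]

# 11 pitches a fourth apart around middle C, and made-up centroid distances.
freqs = 261.63 * 2.0 ** ((np.arange(11) - 5) * 5.0 / 12.0)
dists = np.linspace(0.2, 2.2, 11)
wave = sonify_chord(freqs, dists)
print(wave.shape)    # (22050,)
```

The arpeggio variants described above would instead offset each note's start time rather than summing them at t = 0.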
(61) Reviewing
(62) All examples compared the classifier output, as a double-blinded evaluation, with the K-Means Sonoscopy, which was annotated prior to the deep-learning-enhanced classifier results. Both Sonoscopy and Classifier estimations ranged on a scale from +3 (benign) to −3 (malignant) and were finally compared to the biopsy results, the ground truth. A clinical recommendation was also included by an expert dermatologist who was blinded to the classifier and biopsy results.
Example 1
(63) An example of melanoma detected by K-Means Sonoscopy and the Deep Learning Classifier. The higher sensitivity of the Sonoscopy score of −2, as compared to a Classifier degree of −1, is to be noted. The audio annotation indicates the higher amplitude of malignant activators on the right side, corresponding to a −2 degree.
Example 2
(64)
Example 3
(65)
Example 4
(66)
Example 5
(67)
Example 6
(68)
Example 7
(69)
Example 8
(70)
Example 9
(71)
Example 10
(72)
(73) The medical device system and associated method for diagnosing skin lesions by sonification of dermoscopic data have been described in the foregoing description with reference to specific embodiments. It is understood that various adaptations and modifications may be made to the referenced embodiments without departing from the scope of the invention.
Example 11
(74) The present invention includes the use of an optical attachment that turns an iPhone into a digital dermoscope, providing an instantaneous digital photograph of a lesion that could be skin cancer. The digital photo is immediately sent to a server having appropriate computer programs for classifying the digital elements of the lesion and converting such classifications into an audible indication of the diagnosis of the skin lesion. The adapted iPhone system shown in
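The upload step described above can be sketched as a simple HTTP POST of the captured JPEG. The server URL and endpoint below are illustrative placeholders, not part of the patent; the server-side classification and sonification are assumed to happen after submission.

```python
# Hypothetical sketch of the photo-upload step. The URL is a placeholder.
import urllib.request

def build_upload_request(jpeg_bytes,
                         server="https://dermoscopy.example/api/classify"):
    """Build the HTTP POST that would send the dermoscopic photo to the
    classification server; urllib.request.urlopen(req) would submit it."""
    return urllib.request.Request(
        server,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
```

On a real deployment the server's response would carry the classification result (e.g. cluster distances) and the corresponding audio output back to the handset.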
REFERENCES
(75) The contents of all references cited herein are incorporated by reference herein for all purposes.
(76) 1. Eggermont A M, Spatz A, Robert C. Cutaneous melanoma. Lancet. 2014 Mar. 1; 383(9919): 816-27.
(77) 2. Mayer J E, Swetter S M, Fu T, et al. Screening, early detection, education, and trends for melanoma: Current status (2007-2013) and future directions: Part I. Epidemiology, high-risk groups, clinical strategies, and diagnostic technology. J Am Acad Dermatol. 2014 October; 71(4): 599.e1-e12.
(78) 3. Siegel R, Naishadham D, Jemal A. Cancer statistics, 2012. CA Cancer J Clin. 2012 January-February; 62(1): 10-29.
(79) 4. Noor O, Nanda A, Rao B K. A dermoscopy survey to assess who is using it and why it is or is not being used. Int J Dermatol. 2009 September; 48(9): 951-2.
(80) 5. Tsao H, Olazagasti J M, Cordoro K M, et al. Early detection of melanoma: reviewing the ABCDEs. American Academy of Dermatology Ad Hoc Task Force for the ABCDEs of Melanoma, J Am Acad Dermatol. 2015 April; 72(4): 717-23.
(81) 6. Campos-do-Carmo G, Ramos-e-Silva M. Dermoscopy: basic concepts. Int J Dermatol. 2008 July; 47(7): 712-9.
(82) 7. Russo T, Piccolo V, Lallas A, et al. Dermoscopy of Malignant Skin Tumours: What's New? Dermatology. 2017 May 10.
(83) 8. Ferris L K, Harris R J. New diagnostic aids for melanoma. Dermatol Clin. 2012 July; 30(3): 535-45.
(84) 9. Annessi G, Bono R, Sampogna F, et al. Sensitivity, specificity, and diagnostic accuracy of three dermoscopic algorithmic methods in the diagnosis of doubtful melanocytic lesions: the importance of light brown structureless areas in differentiating atypical melanocytic nevi from thin melanomas. J Am Acad Dermatol. 2007 May; 56(5): 759-67.
(85) 10. Bramão I, Reis A, Petersson K M, Faisca L. The role of color information on object recognition: a review and meta-analysis. Acta Psychol. 2011 September; 138(1): 244-53.
(86) 11. Miller G A. The magical number seven plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956 March; 63(2): 81-97.
(87) 12. Skvara H, Teban L, Fiebiger M, et al. Limitations of Dermoscopy in the Recognition of Melanoma. Arch Dermatol. 2005; 141(2): 155-160.
(88) 13. Tschandl P, Hofmann L, Fink C, et al. Melanomas vs. nevi in high-risk patients under long-term monitoring with digital dermatoscopy: do melanomas and nevi already differ at baseline? J Eur Acad Dermatol Venereol. 2016 Nov. 29. doi: 10.1111/jdv.14065.
(89) 14. Quigley E A, Tokay B A, Jewell S T et al. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review. JAMA Dermatol. 2015 May 13.
(90) 15. Piccolo D, Ferrari A, Peris K, et al. Dermoscopic diagnosis by a trained clinician vs. a clinician with minimal dermoscopy training vs. computer-aided diagnosis of 341 pigmented skin lesions: a comparative study. Br J Dermatol. 2002 September; 147(3): 481-6.
(91) 16. Foner L N. Artificial synesthesia via sonification: A wearable augmented sensory system. Mobile Networks and Applications 4 (1999) 75-81.
(92) 17. Haniffa M A, Lloyd J J, Lawrence C M. The use of a spectrophotometric intracutaneous analysis device in the real-time diagnosis of melanoma in the setting of a melanoma screening clinic. Br J Dermatol. 2007 June; 156(6): 1350-2.
(93) 18. Monheit G, Cognetta A B, Ferris L et al. The performance of MelaFind: a prospective multicenter study. Arch Dermatol. 2011 February; 147(2): 188-94.
(94) 19. Marzuka A G, and Book S E, Basal Cell Carcinoma: Pathogenesis, Epidemiology, Clinical Features, Diagnosis, Histopathology, and Management. Yale J Biol Med. 2015 June; 88(2): 167-179.
(95) 20. Ratushny V, Gober M D, Hick R, et al. From keratinocyte to cancer: the pathogenesis and modeling of cutaneous squamous cell carcinoma. J Clin Invest. 2012 Feb. 1; 122(2): 464-472.
(96) 21. Borst J G, Soria van Hoeve J. The calyx of Held synapse: from model synapse to auditory relay. Annu Rev Physiol. 2012; 74: 199-224.
(97) 22. Alais D, Carlile S. Synchronizing to real events: subjective audiovisual alignment scales with perceived auditory depth and speed of sound. Proc Natl Acad Sci USA. 2005 Feb. 8; 102(6): 2244-7.
(98) 23. King A J. Multisensory Integration: Strategies for Synchronization. Current Biology, Volume 15, Issue 9, 10 May 2005, Pages R339-R341.
(99) 24. Gaizauskas, B R. The Harmony of the Spheres. Journal of the Royal Astronomical Society of Canada, Vol. 68, p.146.
(100) 25. Neuhoff J G, Kramer G, Wayand J. Pitch and loudness interact in auditory displays: can the data get lost in the map? J Exp Psychol Appl. 2002 March; 8(1): 17-25.
(101) 26. Han Y C, Han B. Pattern Sonification as a New Timbral Expression. Leonardo Music Journal, Volume 24, 2014, pp. 41-43.
(102) 27. Scholz D S, Wu L, Pirzer J, et al Sonification as a possible stroke rehabilitation strategy. Front Neurosci. 2014 Oct. 20; 8: 332.
(103) 28. Ahmad A, Adie S G, Wang M, Boppart S A. Sonification of optical coherence tomography data and images. Opt Express. 2010 May 10; 18(10): 9934-44.
(104) 29. Dubus G, Bresin R. A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities. PLoS One. 2013; 8(12): e82491.
(105) 30. AHRQ Publication No. 11-EHC085-EF, Noninvasive Diagnostic Techniques for the Detection of Skin Cancers, September 2011.