SYSTEMS, DEVICES, AND METHODS FOR RECOGNIZING DEFECTS IN MEDICAL GRAFT PROCESSING
20230326020 · 2023-10-12
Inventors
CPC classification
G06V20/70 (PHYSICS)
G06V10/145 (PHYSICS)
G06V10/26 (PHYSICS)
International classification
G06V10/145 (PHYSICS)
Abstract
Systems and methods for identifying material components on graft products include an image capture device for obtaining image data of a graft product and a processor for processing image data with an artificial neural network, the artificial neural network localizing and classifying materials of the graft product from the image data. The image capture device may include an optical filter and an ultraviolet light source for ultraviolet fluorescence imaging of the graft product. Using the captured image data, the artificial neural network may identify unwanted materials on the graft product for subsequent removal, such as, for example, fascia or flesh on a piscine skin.
Claims
1. A system for identifying material components on graft products, the system comprising: an image capture device configured to obtain image data of a graft product; and a processor configured to process the image data using an artificial neural network, the artificial neural network being configured to localize and classify materials of the graft product from the image data.
2. The system according to claim 1, wherein the image capture device comprises an ultraviolet light source, an optical filter, and an image sensor.
3. The system according to claim 2, wherein the ultraviolet light source is configured to emit light having a wavelength of 365 nm to 395 nm.
4. The system according to claim 2, wherein the optical filter comprises a long-pass filter configured with a cut-on wavelength of 435 nm.
5. The system according to claim 2, wherein the optical filter has a transmittance of 85% for wavelengths greater than 435 nm.
6. The system according to claim 1, wherein the processor is configured to divide the image data into a plurality of image tiles.
7. The system according to claim 6, wherein each of the plurality of image tiles has an identical size.
8. The system according to claim 1, wherein the graft product comprises piscine skin having unwanted materials thereon, including at least one of fascia and flesh.
9. The system according to claim 1, wherein the artificial neural network comprises a convolutional neural network.
10. The system according to claim 9, wherein the convolutional neural network comprises a stepped contracting path, each step of the stepped contracting path comprising: a first contracting convolutional layer; a second contracting convolutional layer; a first contracting rectifier layer following the first contracting convolutional layer; a second contracting rectifier layer following the second contracting convolutional layer; a storage operation that stores an output following the second contracting rectifier layer; and a pooling layer following the storage operation.
11. The system according to claim 10, wherein the convolutional neural network comprises a stepped expanding path, each step of the stepped expanding path comprising: a first expanding convolutional layer; a second expanding convolutional layer; a first expanding rectifier layer following the first expanding convolutional layer; a second expanding rectifier layer following the second expanding convolutional layer; an up-sampling layer following the second expanding rectifier layer; and a concatenation operation that stacks an output of the up-sampling layer with the stored output of the stepped contracting path.
12. The system according to claim 11, wherein the stepped contracting path and the stepped expanding path comprise a same number of steps.
13. The system according to claim 11, wherein the stepped contracting path and the stepped expanding path each comprise six steps.
14. The system according to claim 1, wherein an output step comprises: a first output convolutional layer; a second output convolutional layer; a first output rectifier layer following the first output convolutional layer; a second output rectifier layer following the second output convolutional layer; and a sigmoid layer following the second output rectifier layer.
15. The system according to claim 9, wherein the convolutional neural network is configured to output an image defining an area of each material feature of the graft product.
16. The system according to claim 2, wherein the optical filter is configured with a cut-on wavelength of 400 nm to 600 nm.
17. The system according to claim 6, wherein the processor is configured to resize the image data prior to dividing the image data into the plurality of image tiles.
18. A method for identifying material components on graft products, the method comprising the steps of: capturing with an image capture device image data of a graft product; and using a processor to process the image data using an artificial neural network to localize and classify materials of the graft product from the image data.
19. The method according to claim 18, wherein capturing the image data of the graft product further comprises: irradiating the graft product with an ultraviolet light source; filtering light emitted and reflected by the graft product with an optical filter of the image capture device; and capturing the filtered light using an image sensor of the image capture device.
20. A non-transitory hardware storage device having stored thereon computer executable instructions which, when executed by one or more processors of a computer, configure the computer to perform at least the following: capture with an image capture device image data of a graft product; and process the image data using an artificial neural network to localize and classify materials of the graft product from the image data.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] These and other features, aspects, and advantages of the present disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings.
DETAILED DESCRIPTION
Overview
[0030] A better understanding of different embodiments of the disclosure may be had from the following description read with the accompanying drawings in which like reference characters refer to like elements.
[0031] While the disclosure is susceptible to various modifications and alternative constructions, certain illustrative embodiments are shown in the drawings and described below. It should be understood, however, that there is no intention to limit the disclosure to the specific embodiments disclosed; on the contrary, the intention is to cover all modifications, alternative constructions, combinations, and equivalents falling within the spirit and scope of the disclosure.
[0032] It will be understood that unless a term is expressly defined in this application to possess a described meaning, there is no intent to limit the meaning of such term, either expressly or indirectly, beyond its plain or ordinary meaning.
[0033] For the purposes of this application, in a preferred embodiment the term "graft product" may include "piscine skin," "fish skin," "acellular fish skin," or similar terms, and includes Kerecis™ Omega3 Wound by Kerecis, Kerecis™ Omega3 acellular fish skin from the Atlantic cod (Gadus morhua), and any other fish skin grafts or similar products. These graft products are subjected to processing that retains biological structure and bioactive compounds, including Omega3 polyunsaturated fatty acids (PUFAs), but removes allergenic and other unwanted components, and particularly removes tissue that would cause an immunological response from the receiving patient. Fish skins used in the preparation of graft products can vary in thickness, as can the fascia, flesh, or other unwanted material thereon, such as scales. Fish skins may have a thickness of about 0.35 mm to 2.25 mm, while the thickness of unwanted materials thereon may range from about 0.15 mm to 1.75 mm. Preparation of a graft product from the fish skin may require removal of substantially all fascia, flesh, or other unwanted materials from the skin or the tissue to be used as the graft product. Although described as a preferred embodiment, the graft product is not limited to Atlantic cod (Gadus morhua), but may include any other harvested species of fish used for a skin graft or skin substitute product, such as tilapia, including Nile tilapia (Oreochromis niloticus), or other fish species. In other embodiments, the "graft product" may be prepared from non-fish sources including, but not limited to, harvested mammalian skin, including porcine skin graft products, or human allograft, autograft, or cadaveric skin graft products. The graft products may ultimately be prepared as acellularized or decellularized graft products, or may be cellular skin graft products (i.e., graft products with the skin cells remaining viable or intact).
Further, the graft product may include a non-skin tissue to be used as a skin graft product, for example, a placental graft, wherein an undesired, second tissue or portion is to be removed from the tissue to be used as the graft product. Further, the graft product may include biological, synthetic, or hybrid skin substitutes wherein the graft product is inspected for an undesired tissue, portion, or material to be removed before being used as a graft product. Lastly, the graft product may include autograft, allograft, or xenograft products, and is not limited to skin graft products to be used on human patients, but also includes autograft, allograft, or xenograft products used or prepared for use on other, non-human species, including horses, cattle, monkeys, rabbits, mice, rats, guinea pigs, or other species, mammal or non-mammal, to which a skin graft product is to be applied.
[0034] As described herein, a “convolutional neural network” refers to an artificial neural network for conducting an analysis, such as of an image, that is based on the shared-weight architecture of convolution kernels configured for pixel-by-pixel evaluation of an image. The convolutional neural network may adjust and improve kernel weights and biases through automated learning or training, for example using training datasets. In embodiments, the convolutional neural network may include a contracting path, or encoder, followed by a symmetric expanding path, or decoder, making the network an end-to-end fully convolutional neural network, containing convolutional layers and no dense layers.
Various Embodiments and Components for Use Therewith
[0035] In preparation of graft products generally, as described above, for example, from piscine skins, it has been found that certain materials, such as fascia and flesh, can lead to unwanted reactions or results, such as immunological reactions, or can otherwise detrimentally affect the aesthetics or quality of the product. For example, in the case of piscine skin, portions or layers of fascia and flesh on the piscine skin can be extremely difficult to reliably differentiate from the acellular matrix of the piscine skin due to the small thicknesses involved and the shared milky white color of these components. Because of the challenges of accurately classifying and localizing these distinct material components using existing imaging approaches, evaluations of the components on a graft product, such as during scraping of piscine skins or after preparation of the graft product, are conducted manually, are expensive, and are often inaccurate, as manual evaluation is poorly adapted to accurately distinguishing between the visually similar material components of piscine skins.
[0036] Further, manual evaluations of piscine skins are not capable of yielding actionable information regarding a type, quantity, and arrangement of a material of a piscine skin in an accurate, reproducible, and rapid manner.
[0037] In view of the foregoing, there is a need for a system and method for identifying material components on graft products that addresses the problems and shortcomings of existing approaches to identifying, assessing, and determining the location, quantity, and arrangement of material components on graft products, including the limitations of existing 2D image recognition approaches, as well as the costly, time-consuming manual identification of material components. The inventors have identified a need for a system of material identification that provides increased accuracy, speed, and consistency in distinguishing between material components of graft products in order to provide actionable and quantifiable insights to a technician or automated processing system.
[0038] Embodiments of the system and method for identifying material components on graft products according to the present disclosure advantageously overcome the deficiencies of existing approaches to differentiating, identifying, and localizing material components on a graft product, such as fascia and flesh on a piscine skin, which are limited in accuracy and require tremendous time and effort to carry out.
[0039] In an embodiment of the system and method, ultraviolet fluorescence imaging and an improved neural network architecture are synergistically combined to classify and localize the material components of a graft product, preferably a piscine skin. The ultraviolet fluorescence imaging may be provided using an ultraviolet light source configured to illuminate a graft product, as well as using an image capture device configured to capture, store, and process an RGB (i.e. red, green, blue) image of induced fluorescence from the material components in the visible spectrum. In various embodiments, the ultraviolet light source may be configured to irradiate the graft product with light having a wavelength of about 365 nm to 395 nm. In one aspect, an ultraviolet light source may have a fixed center band of 395 nm, for example a light emitting diode having a fixed center band of 395 nm. Preferably, the wavelength of the ultraviolet light source is configured to irradiate the graft product without substantially heating the graft product. The image capture device may be integrated with a sensor device, such as a camera or similar device, and may process the image data using an application of a computing device configured to receive, store, process, and/or transmit the captured image. The captured or transmitted images may then be assessed to classify and localize material components of the graft product.
[0040] A system and method for identifying material components of a graft product leverages an RGB image of induced fluorescence from piscine skin in combination with an artificial neural network to segment, map, and identify material components in a space, such as fascia and flesh on a piscine skin. The system and method include an image capture device configured to capture an image, for example an RGB image, and a processor, configured to process the image data using the artificial neural network, preferably a convolutional neural network, the artificial neural network being configured to localize and classify materials of the graft product from the image data. The captured image may be an RGB or truecolor image, or any other suitable type of color image capturing ultraviolet induced fluorescence of a piscine skin, as would be understood by one skilled in the art from the present disclosure. The system and method mitigate the need to manually conduct a visual evaluation of material components of a piscine skin.
[0041] The image capture device may integrate an optical filter to filter incoming light for forming the captured image. Alternatively, the optical filter may be separate from (not integrated with) the image capture device but arranged such that incoming light passes through the optical filter before entering the image capture device. In varying embodiments, the optical filter may comprise a long-pass filter that reflects short wavelengths while transmitting long wavelengths. In one aspect, a cut-on wavelength of the long-pass filter may be in the range of about 375 nm to 675 nm, 400 nm to 600 nm, 400 nm to 500 nm, 425 nm to 625 nm, or for some embodiments about 435 nm. The optical filter may have a transmittance in the selected wavelengths of about 85%. The optical filter may need to be present as a hardware component or a physical filter on the image capture device, as images captured without a physical optical filter are so saturated with light that distinguishing between the material components of the graft product becomes impracticable. Further, optical filtering or additional optical filtering may be provided through digital processing of image data obtained by the image capture device.
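The long-pass behavior described above may be sketched as an idealized transmittance model. This is only an illustration of the stated parameters (435 nm cut-on, about 85% transmittance); a real filter has a gradual transition near the cut-on wavelength, and the function below is not part of any claimed implementation.

```python
def longpass_transmittance(wavelength_nm: float, cut_on_nm: float = 435.0,
                           transmittance: float = 0.85) -> float:
    """Idealized long-pass response: block below the cut-on, transmit ~85% above."""
    return transmittance if wavelength_nm >= cut_on_nm else 0.0

print(longpass_transmittance(395.0))  # 0.0  -> the UV excitation light is blocked
print(longpass_transmittance(520.0))  # 0.85 -> visible induced fluorescence passes
```

Blocking the 365 nm to 395 nm excitation band while passing the visible fluorescence is what keeps the captured image from being saturated by the light source itself.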
[0042] The system and method embodiments may provide a pixel-by-pixel evaluation of the captured image to mark and classify a physical material at each position of the graft product. In varying examples, each pixel may be configured to correspond to an area of the graft product between about 200 and 800 μm² in size, between about 300 and 600 μm² in size, or for some embodiments about 400 μm² in size. Alternatively, a pixel resolution of at least 200 pixels/mm² may be used, more particularly at least 400 pixels/mm², at least 600 pixels/mm², at least 800 pixels/mm², at least 1000 pixels/mm², at least 1500 pixels/mm², or at least 2000 pixels/mm². Advantageously, the pixel-by-pixel evaluation of the captured image according to the disclosed system and method embodiments enables the detection of even very small amounts of unwanted material, ensuring higher quality and accuracy in the resulting graft products.
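The per-pixel area and the pixels-per-area alternatives above are two views of the same quantity. A short conversion (illustrative only; the helper name is not from the disclosure) shows that a pixel covering about 400 μm² corresponds to a density satisfying the "at least 2000 pixels/mm²" alternative:

```python
def pixels_per_mm2(pixel_area_um2: float) -> float:
    """Pixel density implied by a per-pixel area; 1 mm^2 = 1,000,000 um^2."""
    return 1_000_000 / pixel_area_um2

print(pixels_per_mm2(400))  # 2500.0 pixels/mm^2, above the 2000 pixels/mm^2 floor
print(pixels_per_mm2(800))  # 1250.0 pixels/mm^2, above the 1000 pixels/mm^2 floor
```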
[0043] The system and method may include a segmentation of the captured image into a plurality of image tiles for input to the artificial neural network. The image tiles may be of equal size, for example 512×512 pixels, and the captured image may be resized as required to enable the segmentation of the image tiles. Preferably, the plurality of image tiles comprises at least 20 individual tiles, at least 25 individual tiles, at least 30 individual tiles, at least 35 individual tiles, or for some embodiments 35 individual tiles. The image tiles may also be employed to train and retrain a learning artificial neural network, such as by allowing a convolutional neural network to make adjustments to kernels, kernel biases, or kernel weights in respective convolution layers based on feedback from training datasets.
[0044] In a first aspect, the system and method provide a first phase of a stepped contracting path or encoder for contextualizing each image tile, the stepped contracting path including a plurality of contracting steps. Each step of the stepped contracting path may include a first convolutional layer configured to filter each pixel of the captured image to form feature maps for input to additional layers, followed by a first rectifier layer which determines for each pixel on the feature maps from the first convolutional layer whether a feature is present. Each step may further include a second convolutional layer followed by a second rectifier layer, the first rectifier layer providing an input for the second convolutional layer. Each of the plurality of steps may also include storing a resulting contracted output or contracted feature map from the second rectifier layer, such as a resulting feature map of the material components on the graft product. A pooling layer may be provided in each step of the stepped contracting path for selecting prominent features of the stored contracted feature map as inputs for subsequent layers and/or steps.
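The structure of one contracting step (convolution, rectifier, convolution, rectifier, stored output, pooling) can be sketched in plain numpy. This is a single-channel, framework-free illustration of the layer ordering only; kernel values, tile sizes, and function names are hypothetical, and a practical network would use many filter channels with learned weights.

```python
import numpy as np

def relu(x):
    # Rectifier layer: threshold at zero.
    return np.maximum(x, 0.0)

def conv2d(img, kernel):
    # Single-channel 'valid' convolution with stride 1.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fm, size=2):
    # Pooling layer: keep the most prominent feature in each window.
    h, w = fm.shape[0] - fm.shape[0] % size, fm.shape[1] - fm.shape[1] % size
    return fm[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def contracting_step(tile, k1, k2):
    # conv -> rectifier -> conv -> rectifier; store the contracted feature
    # map for later concatenation, then pool it for the next step.
    x = relu(conv2d(tile, k1))
    stored = relu(conv2d(x, k2))
    return stored, max_pool(stored)

tile = np.random.rand(10, 10)        # stand-in for one image tile
k = np.ones((3, 3)) / 9.0            # placeholder weights; learned in practice
stored, pooled = contracting_step(tile, k, k)
print(stored.shape, pooled.shape)    # (6, 6) (3, 3)
```

The stored map is retained for the concatenation operation of the expanding path, while the pooled map feeds the next contracting step.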
[0045] According to varying embodiments, the described rectifier layers of the artificial neural network may comprise non-linear rectifier layers. The non-linear rectifier layers may include a threshold operation, for example, one that returns zero for input values less than zero but directly returns the input value for values greater than zero. Accordingly, the non-linear rectifier layers may comprise activation functions taking the calculated values from a preceding convolution layer and transforming the values to an output. Preferably, the rectifier layers may comprise a rectified linear unit (ReLU) type activation.
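The threshold operation described above can be illustrated with a few hypothetical activation values (the values are chosen only to show the behavior at, below, and above zero):

```python
import numpy as np

def relu(x):
    # Returns zero for negative inputs; returns the input unchanged otherwise.
    return np.maximum(x, 0.0)

print(relu(np.array([-2.0, -0.5, 0.0, 0.5, 3.0])))
# [0.  0.  0.  0.5 3. ]
```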
[0046] In a second aspect, the system and method provide a second phase of a symmetric stepped expanding path or decoder for precisely localizing features of the captured image, the stepped expanding path including a plurality of expanding steps. Each step of the symmetric stepped expanding path may include a first convolutional layer followed by a first rectifier layer and a second convolutional layer followed by a second rectifier layer, the first rectifier layer providing an input for the second convolutional layer. Each step may further include an up-sampling layer following the second rectifier layer, the up-sampling layer forming an up-sampled feature map. Each step of the symmetric stepped expanding path may include a concatenation or stacking operation of the up-sampled feature map with a stored contracted feature map. The up-sampled feature map and the stored contracted feature map selected for the concatenation operation may be selected based on having a common or same size.
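The up-sampling and concatenation operations that distinguish the expanding path can be sketched as follows. Nearest-neighbor expansion is used here purely as an illustration (the disclosure does not fix the up-sampling method), and the stand-in feature maps are hypothetical; the convolution and rectifier layers of each expanding step would mirror those of the contracting step.

```python
import numpy as np

def upsample(fm, factor=2):
    # Up-sampling layer: nearest-neighbor expansion of the feature map.
    return fm.repeat(factor, axis=0).repeat(factor, axis=1)

def concatenate(up_fm, stored_fm):
    # Stack the up-sampled map with a stored contracted map of the same size.
    assert up_fm.shape == stored_fm.shape, "maps are matched by size"
    return np.stack([up_fm, stored_fm], axis=0)

contracted = np.array([[1.0, 2.0],
                       [3.0, 4.0]])   # stand-in for a stored contracted map
deeper = np.array([[5.0]])            # stand-in for the deeper layer's output
stacked = concatenate(upsample(deeper), contracted)
print(stacked.shape)  # (2, 2, 2): two channels of a 2x2 feature map
```

Matching the up-sampled map with the stored contracted map of the same size is what restores the spatial detail lost during pooling.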
[0047] In a third aspect, the system and method provide a third phase of an output operation or step, comprising a first convolutional layer followed by a first rectifier layer and a second convolutional layer followed by a second rectifier layer. The output operation further includes a sigmoid layer following the second rectifier layer.
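The sigmoid layer of the output operation maps each pixel's final activation into (0, 1), which can then be thresholded into a per-pixel material decision. The logit values and the 0.5 threshold below are hypothetical illustrations, not parameters from the disclosure:

```python
import numpy as np

def sigmoid(x):
    # Sigmoid layer: squash each activation into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([[-4.0, 0.0],
                   [2.0, 6.0]])      # hypothetical output-step activations
probs = sigmoid(logits)
mask = probs > 0.5                   # per-pixel decision: material present or not
print(mask)
# [[False False]
#  [ True  True]]
```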
[0048] The image capture device and/or computing device may be configured to conduct the above-mentioned and other steps described herein locally. The computing device may comprise a storage, a processor, a power source, and an interface. Instructions on the storage may be executed by the processor so as to utilize one or more neural networks as described herein to capture an image, classify, and localize material components of a graft product. While in embodiments the above steps are performed locally, it will be appreciated that one or more of the steps described herein may be performed by cloud computing, with captured images transmitted to a remote server, with a processor located on the remote server configured to classify and localize material components of a graft product.
[0051] The conveyor 160, which may be a conveyor belt, is controlled by conveyor control device 165, which may include a conveyor drive system. Conveyor control device 165 can alter the speed and direction of the conveyor 160. Additionally, conveyor control device 165 is controlled by computing device 190. The captured image includes an induced fluorescence of the graft product 110. The image capture device 130 may comprise, or the system 150 may comprise, an optical filter 132 configured to filter incoming light below a predetermined wavelength from entering the image capture device 130. System 150 includes a computer or computing device 190, which will be described in further detail herein (for example, in the embodiment shown in
[0052] Computing device 190 receives image data from image capture device 130, for example, in the form of still image data or video image data. Computing device 190 may also control first ultraviolet light source 121 and second ultraviolet light source 123. Although hardwired connections between computing device 190 and the first and second ultraviolet light sources 121, 123, the image capture device 130, and the conveyor control device 165 may be included in various embodiments, data between these components may be transmitted via wireless communication, such as Bluetooth. Further, although
[0053] As further described herein, computing device 190 includes an input and output, one or more processors, and a memory storage. Further, in another embodiment, computing device 190 is physically coupled to image capture device 130 such that computing device 190 and image capture device 130 are provided in an integral unit.
[0054] In the embodiment of
[0055] Upon receiving image data from image capture device 130, computing device 190 stores the image data in a memory storage of the computing device. One or more processors of computing device 190 may perform a segmentation of the captured image into a plurality of image tiles for input to the artificial neural network. The image tiles may be of equal size, for example 512×512 pixels, and the captured image may be resized as required to enable the segmentation of the image tiles. Preferably, the plurality of image tiles comprises at least 20 individual tiles, at least 25 individual tiles, at least 30 individual tiles, at least 35 individual tiles, or for some embodiments 35 individual tiles. The image tiles may also be employed to train and retrain a learning artificial neural network, such as by allowing a convolutional neural network to make adjustments to kernels, kernel biases or kernel weights in respective convolution layers based on feedback from training datasets.
[0057] In one embodiment, image capture devices 131, 133 may each include a 12-megapixel sensor, or an equivalent imaging device. Image capture devices 131, 133 may be configured to obtain still images or video images, or both, of the graft product, such that the image data captured by each image capture device 131, 133 may be processed individually by the neural network or merged prior to processing. The use of two or more image capture devices 131, 133 may allow imaging of a graft product 110 at one time, without the need to move the graft product 110.
[0060] In a first aspect, the artificial neural network may comprise a convolutional neural network, the method comprising inputting each image tile to a stepped contracting path 308, or encoder, of the convolutional neural network for contextualizing each image tile in a first phase. As shown in
[0061] In a second aspect, the system and method comprise a second phase of the convolutional neural network, including inputting each image tile to a symmetric stepped expanding path 310 or decoder for precisely localizing features of the captured image, the stepped expanding path 500 including a plurality of expanding steps 510, 522, 524. Each step 510 of the symmetric stepped expanding path 500 may include a first expanding convolutional layer 512 followed by a first expanding rectifier layer 514 and a second expanding convolutional layer 516 followed by a second expanding rectifier layer 518, the first expanding rectifier layer 514 providing an input for the second expanding convolutional layer 516. Each step may further include an up-sampling layer 520 following the second expanding rectifier layer 518, the up-sampling layer 520 forming an up-sampled feature map. Each step of the symmetric stepped expanding path 500 may include a concatenation or stacking operation 540 of the up-sampled feature map with a stored contracted feature map from a step of the contracting path 400. The up-sampled feature map and the stored contracted feature map selected for the concatenation operation 540 may be selected based on having a common or same size. The step 510 of the expanding path 500 may be repeated multiple times with an expanded feature map from a previous step provided as an input to the first expanding convolutional layer 512 of the following step 522, 524. As shown in
[0062] As may be seen in a comparison of
[0063] In a third aspect, the system and method provide for a third phase of the neural network including inputting the concatenated or stacked output from the up-sampled feature map and the stored contracted feature map to an output step 312. As illustrated in
[0064] The classified image may comprise the captured image with an overlay distinguishing unwanted materials, such as fascia and flesh, from a piscine skin.
[0065] Turning back to the method 300 of
[0066] Embodiments of a method and system according to the current disclosure may further include a user interface configured to display the classified images 714, 716 to a user. The classified images may be labeled by the processor to output a labeled image including, for example, bounding boxes, tags, or alterations to color configured to emphasize the location of unwanted materials for removal. In a similar manner, the user interface may provide a comparison between a series of classified images separated by steps of removing the unwanted materials from the graft product, so as to permit a user to comprehend progress made over time. Varying embodiments of a user interface may include varying input and output devices for facilitating interaction with a user, including a display screen, touch screen, speakers, audible alarms, indicator lights, or the like, including conventional control devices such as a keyboard, control panel, computer mouse or similar devices.
[0068] The storage 848 may comprise instructions for operating a system for identifying material components on graft products stored thereon in a non-transitory form that, when executed by the processor 844, cause the processor 844 to carry out one or more of the steps described herein, in particular receiving image data and localizing and classifying materials of the graft product from the image data. The computing device 840 may comprise one or more AI modules 850 configured to apply the artificial neural network described above regarding the embodiments of
[0069] In embodiments, the computing device 840 may be configured to operate the image capture device to capture image data, such as RGB image data, and to process locally and in substantially real time the captured image data using the artificial neural network stored on the AI module 850 to output the classified and localized images, as described above.
[0070] As described above with respect to
[0071] In one embodiment, image data of a captured image may comprise an 8-megapixel image of 3280×2190 resolution. The captured image may be resized to 3584×2560 to facilitate the creation of equally sized image tiles therefrom, for example, by cropping the resized image into exactly 35 individual image tiles of 512×512 pixels in size. In this example, the individual image tiles comprise a three-dimensional input matrix having dimensions of 512×512×3, 512×512 comprising pixel locations in the image and three separate values in the third dimension being intensities of red, green, and blue color in the image data.
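The arithmetic of this example checks out: 3584/512 = 7 columns of tiles and 2560/512 = 5 rows, giving exactly 35 tiles. A minimal sketch of the cropping step (function name and the zero-filled stand-in image are illustrative only):

```python
import numpy as np

def tile_image(image, tile=512):
    """Crop an H x W x 3 image into non-overlapping tile x tile x 3 tiles."""
    h, w, _ = image.shape
    assert h % tile == 0 and w % tile == 0, "resize first so tiles fit exactly"
    return [image[i:i + tile, j:j + tile]
            for i in range(0, h, tile)
            for j in range(0, w, tile)]

resized = np.zeros((2560, 3584, 3), dtype=np.uint8)  # stand-in for the resized capture
tiles = tile_image(resized)
print(len(tiles))       # 35 = (2560 / 512) * (3584 / 512) = 5 * 7
print(tiles[0].shape)   # (512, 512, 3): the three-dimensional input matrix
```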
[0072] Turning to
[0073] An entry 932 in the convoluted result 930 for each position 940, 942, 944, 946 is calculated based on an operation between the color intensity values 912 of the input matrix 910 and the predetermined weight values 922 of the kernel 920. According to the illustrated example of
[0074] The entries 932 of the result 930 may cumulatively form feature maps, such as a contracted feature map or an expanded feature map in respective pathways, for input to additional layers of the artificial neural network. In an example according to the depicted embodiment of
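The multiply-and-sum operation that produces each entry of the convolved result can be shown on a tiny worked example. The input matrix, kernel weights, and stride below are hypothetical values chosen only to make the arithmetic easy to follow; they do not correspond to reference characters 910, 920, or 922 in any figure.

```python
import numpy as np

inp = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)   # hypothetical color intensity values
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)   # hypothetical predetermined weights

def convolve(inp, k, stride=1):
    # Slide the kernel over the input; each output entry is the sum of the
    # element-wise products between the kernel and the window it covers.
    kh, kw = k.shape
    oh = (inp.shape[0] - kh) // stride + 1
    ow = (inp.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = inp[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(window * k)
    return out

print(convolve(inp, kernel))
# entry at position (0, 0): 1*1 + 2*0 + 4*0 + 5*1 = 6
```

The entries of the result collectively form one feature map; with many kernels, each kernel contributes its own feature map channel.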
[0075] In methods and systems of the current disclosure, a first contracting step may include 64 filter channels in each convolution layer, the number of filter channels doubling at each contracting step. As such, where there are six contracting steps, the convolution layers of the final contracting step may include 4,096 filter channels. A first expanding step may then halve the number of filter channels in each convolution layer until the final expanding step has the same number of filter channels in each convolution layer as the first contracting step. An output step may include the same number of filter channels as a number of classes of material desired to be identified, these filter channels serving to gather the data into images provided as the output of the neural network. For example, distinguishing between unwanted material and skin of a graft product would require only one filter, to gather the data into a single image for output as a monochrome image, while distinguishing between flesh, fascia and skin would require three filters.
[0076] According to varying aspects of the instant disclosure, an artificial neural network may adjust and improve kernel weights through automated learning or training, for example using training datasets based on user-annotated images. In this manner, a configuration of the kernels 920 including the predetermined weight values 922, the pathway, and the stride in the convolution operations 900 may comprise a decision-making portion of the artificial neural network. In another aspect, the decision-making portion of the artificial neural network may include the configuration of the kernels and the rectified linear unit (ReLU) layer, with additional parameters as would be understood from consideration of the disclosed embodiments and features.
[0077]
[0078] In an example according to the depicted embodiment of
[0079]
[0080] The conveyor 1160, which may be a conveyor belt, is controlled by a conveyor control device 1165, which may include a conveyor drive system. The conveyor control device 1165 can alter the speed and direction of the conveyor 1160. Additionally, the conveyor control device 1165 may be controlled by the computing device 1190. The captured image includes an induced fluorescence of the graft product 1110. The image capture device 1130 may comprise, or the system 1150 may comprise, an optical filter 1132 configured to filter incoming light below a predetermined wavelength from entering the image capture device 1130. The system 1150 includes a computer or computing device 1190.
[0081] The computing device 1190 receives image data from the image capture device 1130, for example, in the form of still image data or video image data. The computing device 1190 may be configured to process the image data using an artificial neural network, the artificial neural network being configured to localize and classify materials of the graft product from the image data as described in other embodiments herein. Based on the localization and classification of materials of the graft product 1110 from the image data, the computing device 1190 may control a scraping or cutting device 1180 to remove unwanted materials from the graft product 1110. Embodiments of a scraping or cutting device 1180 may comprise a blade, reciprocating plane, cutter head, extrusion die, water jet, air jet, or other similar device for separating a thin layer of material from the graft product 1110. Preferably, the cutting device 1180 may remove material from only localized positions on the graft product 1110.
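The localization-to-actuation step can be sketched as follows: given a per-pixel classification mask from the network, the computing device derives the localized regions where the cutting device should operate. The mask values, mask size, and bounding-box extraction here are illustrative assumptions, not the disclosed control interface:

```python
import numpy as np

# Assumed per-pixel classification mask output by the network:
# 0 = skin (keep), 1 = unwanted material (fascia or flesh).
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:4, 5:8] = 1  # a patch of unwanted material

def cut_regions(mask):
    """Return the bounding box (row0, col0, row1, col1) of unwanted
    material, or None if the graft is clean -- a stand-in for the
    localized positions used to position or activate the cutter."""
    coords = np.argwhere(mask == 1)
    if coords.size == 0:
        return None
    (r0, c0), (r1, c1) = coords.min(axis=0), coords.max(axis=0)
    return (int(r0), int(c0), int(r1), int(c1))

region = cut_regions(mask)
```

A real system would map these pixel coordinates through a calibration to conveyor and cutter positions; that mapping is omitted here.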
[0082] As illustrated in
[0083] In some embodiments, the graft products 1110 may be secured to the conveyor 1160 by restraining devices, such as mechanical clamping arms, a suction device of or in the conveyor 1160, or a similar device, to facilitate operation of the cutting device 1180 on the graft product 1110. Accordingly, the positions of the graft product 1110 and the cutting device 1180 may be coordinated and adjusted by the computing device 1190, through control of the conveyor 1160 and any restraining devices, to precisely remove fascia and flesh from a graft product without damaging the skin through excessive scraping, cutting, or pressure.
[0084] As further described herein, automatically identifying material components of a graft product 1110 and removing unwanted materials therefrom may include an iterative system or method. Accordingly, graft product 1110 may repeatedly be input to and output from system 1150. For this purpose, the graft product 1110 may be transported from cutting device 1180 to the image capture device 1130, whether manually transported or conveyed via a conveyor. In other embodiments, the system 1150 may be replicated along a single processing path, such that the graft product 1110 passes through a plurality of image capture devices 1130 and cutting devices 1180 in order to complete processing thereof. In other embodiments, the system 1150 may be provided with a further image capture device following the cutting device 1180, such that the efficacy of the cutting or scraping operation may be evaluated prior to determining a subsequent operation.
[0085]
[0086] The method may further comprise determining whether unwanted materials are present on the graft product 1209. If no unwanted materials are identified, the method may proceed by outputting the graft product, whether by conveying the graft product or providing an indication to a user that the graft product is free of unwanted materials 1218. Where unwanted materials are identified on the graft product, the method may proceed by conveying the graft product to a cutting device 1213 and, based on the localization and classification of materials of the graft product from the artificial neural network, controlling a scraping or cutting device to remove unwanted materials from the graft product 1216, such as by using position information from the computing device to position or activate the cutting device at certain areas of the graft product classified as unwanted materials as the graft product is conveyed to or by the cutting device. Following operation of the cutting device, the method may be repeated 1218 using the same system or with the duplication of some or all components of the same system in a processing line. In some embodiments, where all unwanted materials were successfully removed in a previous step, only the steps of providing a graft product to a conveyor 1201 which conveys the graft product to an image capture device, illuminating the graft product using at least one ultraviolet light source 1203, capturing image data of the graft product 1202, transferring the image data to a computing device 1205, and processing the image data by the computing device using an artificial neural network 1207 may be repeated.
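The iterative flow described above (capture, classify, cut, then re-inspect until the graft is clean) can be sketched as a control loop. Every function name here is a hypothetical stand-in for the corresponding hardware or processing step, and the pass limit is an assumed safeguard not stated in the disclosure:

```python
def process_graft(capture_image, classify, remove_unwanted, max_passes=5):
    """Iteratively inspect and clean a graft product.

    capture_image()      -- stand-in for conveying the graft under UV
                            illumination and capturing image data
    classify(image)      -- stand-in for the neural network; returns
                            localized unwanted-material regions
    remove_unwanted(rs)  -- stand-in for the scraping or cutting device
    """
    for _ in range(max_passes):
        image = capture_image()
        regions = classify(image)
        if not regions:
            return True   # graft is free of unwanted materials: output it
        remove_unwanted(regions)
    return False          # flag for manual review after max_passes

# Toy simulation: two patches of unwanted material, one removed per pass.
state = {"patches": 2}
ok = process_graft(
    capture_image=lambda: state,
    classify=lambda img: list(range(img["patches"])),
    remove_unwanted=lambda regions: state.update(patches=state["patches"] - 1),
)
```

The same loop structure covers both the single-station embodiment (the graft returns to the image capture device) and the replicated processing line, where each pass corresponds to the next capture-and-cut station.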
[0087] Embodiments of the present disclosure may comprise or utilize a special-purpose or general-purpose computer system that includes computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions and/or data structures are computer storage media. Computer-readable media that carry computer-executable instructions and/or data structures are transmission media. Thus, by way of example, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.
[0088] Computer storage media are physical storage media that store computer-executable instructions and/or data structures. Physical storage media include computer hardware, such as RAM, ROM, EEPROM, solid state drives (“SSDs”), flash memory, phase-change memory (“PCM”), optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage device(s) which can be used to store program code in the form of computer-executable instructions or data structures, which can be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the disclosure.
[0089] Transmission media can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures, and which can be accessed by a general-purpose or special-purpose computer system. A “network” may be defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer system, the computer system may view the connection as transmission media. Combinations of the above should also be included within the scope of computer-readable media.
[0090] Further, upon reaching various computer system components, program code in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.
[0091] Computer-executable instructions may comprise, for example, instructions and data which, when executed by one or more processors, cause a general-purpose computer system, special-purpose computer system, or special-purpose processing device to perform a certain function or group of functions. Computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
[0092] The disclosure of the present application may be practiced in network computing environments with many types of computer system configurations, including, but not limited to, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. As such, in a distributed system environment, a computer system may include a plurality of constituent computer systems. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
[0093] The disclosure of the present application may also be practiced in a cloud-computing environment. Cloud computing environments may be distributed, although this is not required. When distributed, cloud computing environments may be distributed internationally within an organization and/or have components possessed across multiple organizations. In this description and the following claims, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services). The definition of “cloud computing” is not limited to any of the other numerous advantages that can be obtained from such a model when properly deployed.
[0094] A cloud-computing model can be composed of various characteristics, such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model may also come in the form of various service models such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). The cloud-computing model may also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth.
[0095] Some embodiments, such as a cloud-computing environment, may comprise a system that includes one or more hosts that are each capable of running one or more virtual machines. During operation, virtual machines emulate an operational computing system, supporting an operating system and perhaps one or more other applications as well. In some embodiments, each host includes a hypervisor that emulates virtual resources for the virtual machines using physical resources that are abstracted from view of the virtual machines. The hypervisor also provides proper isolation between the virtual machines. Thus, from the perspective of any given virtual machine, the hypervisor provides the illusion that the virtual machine is interfacing with a physical resource, even though the virtual machine only interfaces with the appearance (e.g., a virtual resource) of a physical resource. Examples of physical resources include processing capacity, memory, disk space, network bandwidth, media drives, and so forth.
[0096] By providing a system and method for identifying material components on graft products according to disclosed embodiments, the problem that existing 2D image recognition and manual identification approaches are expensive, time consuming, and poorly adapted to differentiating visually similar materials such as fascia, flesh, and skin within a graft product is addressed. The disclosed embodiments advantageously provide a system and method with increased accuracy, speed, and consistency in distinguishing between material components of graft products, providing actionable and quantifiable insights to a technician or automated processing system.
[0097] Various features of the disclosure may be better understood by reference to a specific example of a method for identifying material components on graft products according to the current disclosure, as detailed in the attached Appendix, the Appendix being expressly incorporated herein by this reference. The example provided is illustrative in nature of a single application of principles according to the disclosure and is not intended to be limiting. Notably, the Appendix illustrates a severely reduced scale of neural network for processing an input image in the form of a 4×4×3 frame, and many values are assumed for simplicity.
[0098] Not necessarily all such objects or advantages may be achieved under any embodiment of the disclosure. Those skilled in the art will recognize that the disclosure may be embodied or carried out to achieve or optimize one advantage or group of advantages as taught without achieving other objects or advantages as taught or suggested.
[0099] The skilled artisan will recognize the interchangeability of various components from different embodiments described. Besides the variations described, other equivalents for each feature can be mixed and matched by one of ordinary skill in this art to construct or use a system or method for identifying material components of a graft product under principles of the present disclosure. Therefore, the embodiments described may be adapted to material identification and localization for fascia, flesh, scales, skin or any other suitable material on a graft product.
Combinability of Embodiments and Features
[0100] This disclosure provides various examples, embodiments, and features which, unless expressly stated or which would be mutually exclusive, should be understood to be combinable with other examples, embodiments, or features described herein.
[0101] In addition to the above, further embodiments and examples include the following:
[0102] 1. A system for identifying material components on graft products, the system comprising: an image capture device configured to obtain image data of a graft product; and a processor configured to process the image data using an artificial neural network, the artificial neural network being configured to localize and classify materials of the graft product from the image data.
[0103] 2. The system according to any or a combination of 1 above or 3-19 below, wherein the image capture device comprises an ultraviolet light source, an optical filter and an image sensor.
[0104] 3. The system according to any or a combination of 1-2 above or 4-19 below, wherein the ultraviolet light source is configured to emit light having a wavelength of 365 nm to 395 nm.
[0105] 4. The system according to any or a combination of 1-3 above or 5-19 below, wherein the optical filter comprises a long-pass filter configured with a cut-on wavelength of 435 nm.
[0106] 5. The system according to any or a combination of 1-4 above or 6-19 below, wherein the optical filter has a transmittance of 85% for wavelengths greater than 435 nm.
[0107] 6. The system according to any or a combination of 1-5 above or 7-19 below, wherein the processor is configured to divide the image data into a plurality of image tiles.
[0108] 7. The system according to any or a combination of 1-6 above or 8-19 below, wherein the plurality of image tiles each have an identical size.
[0109] 8. The system according to any or a combination of 1-7 above or 9-19 below, wherein the plurality of image tiles comprises 35 image tiles.
[0110] 9. The system according to any or a combination of 1-8 above or 10-19 below, wherein the graft product comprises piscine skin having unwanted materials thereon, including at least one of fascia and flesh.
[0111] 10. The system according to any or a combination of 1-9 above or 11-19 below, wherein the artificial neural network comprises a convolutional neural network.
[0112] 11. The system according to any or a combination of 1-10 above or 12-19 below, wherein the convolutional neural network comprises a stepped contracting path, each step of the stepped contracting path comprising: a first contracting convolutional layer; a second contracting convolutional layer; a first contracting rectifier layer following the first contracting convolutional layer; a second contracting rectifier layer following the second contracting convolutional layer; a storage operation that stores an output following the second contracting rectifier layer; and a pooling layer following the storage operation.
[0113] 12. The system according to any or a combination of 1-11 above or 13-19 below, wherein the convolutional neural network comprises a stepped expanding path, each step of the stepped expanding path comprising: a first expanding convolutional layer; a second expanding convolutional layer; a first expanding rectifier layer following the first expanding convolutional layer; a second expanding rectifier layer following the second expanding convolutional layer; an up-sampling layer following the second expanding rectifier layer; and a concatenation operation that stacks an output of the up-sampling layer with the stored output of the stepped contracting path.
[0114] 13. The system according to any or a combination of 1-12 above or 14-19 below, wherein the stepped contracting path and the stepped expanding path comprise a same number of steps.
[0115] 14. The system according to any or a combination of 1-13 above or 15-19 below, wherein the stepped contracting path and the stepped expanding path each comprise six steps.
[0116] 15. The system according to any or a combination of 1-14 above or 16-19 below, wherein an output step comprises: a first output convolutional layer; a second output convolutional layer; a first output rectifier layer following the first output convolutional layer; a second output rectifier layer following the second output convolutional layer; and a sigmoid layer following the second output rectifier layer.
[0117] 16. The system according to any or a combination of 1-15 above or 17-19 below, wherein the convolutional neural network is configured to output an image defining an area of each material feature of the graft product.
[0118] 17. The system according to any or a combination of 1-16 above or 18-19 below, wherein the optical filter is configured with a cut-on wavelength of 400 nm to 600 nm.
[0119] 18. The system according to any or a combination of 1-17 above or 19 below, wherein the plurality of image tiles each have equal dimensions of 512×512 pixels.
[0120] 19. The system according to any or a combination of 1-18 above, wherein the image data is resized prior to being divided into the plurality of image tiles.
[0121] 20. A method for identifying material components on graft products, the method comprising the steps of: capturing, with an image capture device, image data of a graft product; and using a processor to process the image data using an artificial neural network to localize and classify materials of the graft product from the image data.
[0122] 21. The method according to any or a combination of 20 above or 22-38 below, wherein capturing the image data of the graft product further comprises: irradiating the graft product with an ultraviolet light source; filtering light emitted and reflected by the graft product with an optical filter of the image capture device; and capturing the filtered light using an image sensor of the image capture device.
[0123] 22. The method according to any or a combination of 20-21 above or 23-38 below, wherein the ultraviolet light source is configured to emit light having a wavelength of 365 nm to 395 nm.
[0124] 23. The method according to any or a combination of 20-22 above or 24-38 below, wherein the optical filter comprises a long-pass filter configured with a cut-on wavelength of 435 nm.
[0125] 24. The method according to any or a combination of 20-23 above or 25-38 below, wherein the optical filter has a transmittance of 85% for wavelengths greater than 435 nm.
[0126] 25. The method according to any or a combination of 20-24 above or 26-38 below, further comprising dividing the image data into a plurality of image tiles using the processor.
[0127] 26. The method according to any or a combination of 20-25 above or 27-38 below, wherein the plurality of image tiles each have an identical size.
[0128] 27. The method according to any or a combination of 20-26 above or 28-38 below, wherein the plurality of image tiles comprises 35 image tiles.
[0129] 28. The method according to any or a combination of 20-27 above or 29-38 below, wherein the graft product comprises piscine skin having unwanted materials thereon, including at least one of fascia and flesh.
[0130] 29. The method according to any or a combination of 20-28 above or 30-38 below, wherein the artificial neural network comprises a convolutional neural network.
[0131] 30. The method according to any or a combination of 20-29 above or 31-38 below, further comprising inputting each of the plurality of image tiles to a stepped contracting path of the artificial neural network, each step of the contracting path comprising: a first contracting convolutional layer; a second contracting convolutional layer; a first contracting rectifier layer following the first contracting convolutional layer; a second contracting rectifier layer following the second contracting convolutional layer; a storage operation that stores an output following the second contracting rectifier layer; and a pooling layer following the storage operation.
[0132] 31. The method according to any or a combination of 20-30 above or 32-38 below, further comprising inputting each of the plurality of image tiles to a symmetric stepped expanding path of the artificial neural network, each step of the stepped expanding path comprising: a first expanding convolutional layer; a second expanding convolutional layer; a first expanding rectifier layer following the first expanding convolutional layer; a second expanding rectifier layer following the second expanding convolutional layer; an up-sampling layer following the second expanding rectifier layer; and a concatenation operation that stacks an output of the up-sampling layer with the stored output of the stepped contracting path.
[0133] 32. The method according to any or a combination of 20-31 above or 33-38 below, wherein the stepped contracting path and the stepped expanding path comprise a same number of steps.
[0134] 33. The method according to any or a combination of 20-32 above or 34-38 below, wherein the stepped contracting path and the stepped expanding path each comprise six steps.
[0135] 34. The method according to any or a combination of 20-33 above or 35-38 below, further comprising inputting each of the plurality of image tiles to an output step of the artificial neural network, the output step comprising: a first output convolutional layer; a second output convolutional layer; a first output rectifier layer following the first output convolutional layer; a second output rectifier layer following the second output convolutional layer; and a sigmoid layer following the second output rectifier layer.
[0136] 35. The method according to any or a combination of 20-34 above or 36-38 below, further comprising outputting an image defining an area of each material feature of the graft product following the output step of the artificial neural network.
[0137] 36. The method according to any or a combination of 20-35 above or 37-38 below, wherein the optical filter is configured with a cut-on wavelength of 400 nm to 600 nm.
[0138] 37. The method according to any or a combination of 20-36 above or 38 below, wherein the plurality of image tiles each have equal dimensions of 512×512 pixels.
[0139] 38. The method according to any or a combination of 20-37 above, wherein the image data is resized prior to being divided into the plurality of image tiles.
[0140] 39. A non-transitory hardware storage device having stored thereon computer-executable instructions which, when executed by one or more processors of a computer, configure the computer to perform at least the following: capture, with an image capture device, image data of a graft product; and use a processor to process the image data using an artificial neural network to localize and classify materials of the graft product from the image data.
[0141] Although the system or method for identifying material components on graft products has been disclosed in certain preferred embodiments and examples, it therefore will be understood by those skilled in the art that the present disclosure extends beyond the disclosed embodiments to other alternative embodiments and/or uses of the system or method for identifying material components on graft products and obvious modifications and equivalents. It is intended that the scope of the present system or method for identifying material components on graft products disclosed should not be limited by the disclosed embodiments described above, but should be determined only by a fair reading of the claims that follow.