METHOD TO LOCATE DEFECTS IN E-COAT
20250182263 · 2025-06-05
Inventors
CPC classification
B05D7/142
PERFORMING OPERATIONS; TRANSPORTING
G06T7/80
PHYSICS
G06T2207/20182
PHYSICS
International classification
G06T7/80
PHYSICS
Abstract
A method of locating a defect in an e-coat on a surface can include acquiring an image of the surface. A correction coefficient can be applied to the image to form an adjusted image. The correction coefficient can relate pixel values of the image to a calibration value. The adjusted image can be separated into a spectral component, which can be modified by a block average determination to create a modified spectral component. The spectral component can be compared with the modified spectral component to form a difference image. The difference image can be dilated and eroded. A region of interest can be identified from an image region using a blob detection. The defect can be classified as a defect type. The defect can be repaired, or a coating parameter can be altered based on the defect.
Claims
1. A system for locating a defect in an e-coat on a surface, comprising: a camera configured to acquire an image; a conveyor configured to move the surface from a first location to a second location; a photo eye configured to establish the first location when tripped by the surface; and a tracking device configured to track the second location with respect to the first location; wherein the system is configured to calculate a distance the surface moves along the conveyor between images acquired via the camera, the calculated distance configured to allow the defect to be located on the surface.
2. The system of claim 1, further including a light configured to illuminate the surface to acquire the image.
3. The system of claim 1, wherein the second location is established by moving the surface along the conveyor a space away from the first location.
4. The system of claim 1, wherein the second location is established by a predetermined trigger table.
5. The system of claim 1, wherein the tracking device is a rotary encoder or a laser.
6. The system of claim 1, wherein the system is configured to apply a correction coefficient to the image to form an adjusted image.
7. The system of claim 6, wherein the correction coefficient is configured to relate pixel values of the image to a calibration value.
8. The system of claim 7, wherein the system is configured to convert the image into a color image format having a red value, a blue value, and a green value assigned to each pixel of a plurality of pixels of the image.
9. The system of claim 8, wherein the correction coefficient is identified using a calibration plate, the calibration plate configured to act as a normalization step to color correct the image.
10. The system of claim 9, wherein the calibration plate includes a plurality of calibration values.
11. The system of claim 10, wherein a spectral component is configured to be separated from the adjusted image.
12. The system of claim 11, further including separating the adjusted image into multiple spectral components, wherein the multiple spectral components are configured to: separate the adjusted image into a red monochrome to form a red image; separate the adjusted image into a blue monochrome to form a blue image; separate the adjusted image into a green monochrome to form a green image; and maximize at least one of the red image, the blue image, and the green image at each one of the plurality of pixels to form a grey image.
13. The system of claim 12, wherein the spectral component is configured to be modified to create a modified spectral component, the modification including a block average determination.
14. The system of claim 13, wherein the spectral component is compared with the modified spectral component to form a difference image.
15. The system of claim 14, wherein the difference image is configured to be dilated and eroded.
16. The system of claim 15, wherein a region of interest is identified from one or more image regions using a blob detection threshold value to locate the defect.
17. The system of claim 16, wherein the system is configured to perform one of: a repair of the defect; and a change of a parameter of an e-coating process.
18. The system of claim 1, wherein the camera includes a color digital camera.
19. The system of claim 18, wherein the color digital camera includes a Bayer pixel format.
20. The system of claim 1, wherein the system is configured to: acquire an image of the surface; apply a correction coefficient to the image to form an adjusted image, the correction coefficient relating pixel values of the image to a calibration value; separate a spectral component from the adjusted image for each of a plurality of pixels; modify the spectral component to create a modified spectral component, the modification configured to include a block average determination; compare the spectral component with the modified spectral component to form a difference image; dilate and erode the difference image; identify a region of interest from one or more image regions using a blob detection threshold value to locate the defect; classify the defect; and perform one of: repair the defect; and change a parameter of an e-coating process to form a modified e-coating process and coat another surface using the modified e-coating process.
Description
DRAWINGS
[0030] The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
DETAILED DESCRIPTION
[0041] The following description of technology is merely exemplary in nature of the subject matter, manufacture and use of one or more inventions, and is not intended to limit the scope, application, or uses of any specific invention claimed in this application or in such other applications as may be filed claiming priority to this application, or patents issuing therefrom. Regarding methods disclosed, the order of the steps presented is exemplary in nature, and thus, the order of the steps can be different in various embodiments, including where certain steps can be simultaneously performed, unless expressly stated otherwise. "A" and "an" as used herein indicate "at least one" of the item is present; a plurality of such items may be present, when possible. Except where otherwise expressly indicated, all numerical quantities in this description are to be understood as modified by the word "about" and all geometric and spatial descriptors are to be understood as modified by the word "substantially" in describing the broadest scope of the technology. "About" when applied to numerical values indicates that the calculation or the measurement allows some slight imprecision in the value (with some approach to exactness in the value; approximately or reasonably close to the value; nearly). If, for some reason, the imprecision provided by "about" and/or "substantially" is not otherwise understood in the art with this ordinary meaning, then "about" and/or "substantially" as used herein indicates at least variations that may arise from ordinary methods of measuring or using such parameters.
[0042] Although the open-ended term "comprising," as a synonym of non-restrictive terms such as "including," "containing," or "having," is used herein to describe and claim embodiments of the present technology, embodiments may alternatively be described using more limiting terms such as "consisting of" or "consisting essentially of." Thus, for any given embodiment reciting materials, components, or process steps, the present technology also specifically includes embodiments consisting of, or consisting essentially of, such materials, components, or process steps excluding additional materials, components or processes (for "consisting of") and excluding additional materials, components or processes affecting the significant properties of the embodiment (for "consisting essentially of"), even though such additional materials, components or processes are not explicitly recited in this application. For example, recitation of a composition or process reciting elements A, B and C specifically envisions embodiments consisting of, and consisting essentially of, A, B and C, excluding an element D that may be recited in the art, even though element D is not explicitly described as being excluded herein.
[0043] As referred to herein, disclosures of ranges are, unless specified otherwise, inclusive of endpoints and include all distinct values and further divided ranges within the entire range. Thus, for example, a range of from A to B or from about A to about B is inclusive of A and of B. Disclosure of values and ranges of values for specific parameters (such as amounts, weight percentages, etc.) are not exclusive of other values and ranges of values useful herein. It is envisioned that two or more specific exemplified values for a given parameter may define endpoints for a range of values that may be claimed for the parameter. For example, if Parameter X is exemplified herein to have value A and also exemplified to have value Z, it is envisioned that Parameter X may have a range of values from about A to about Z. Similarly, it is envisioned that disclosure of two or more ranges of values for a parameter (whether such ranges are nested, overlapping or distinct) subsume all possible combination of ranges for the value that might be claimed using endpoints of the disclosed ranges. For example, if Parameter X is exemplified herein to have values in the range of 1-10, or 2-9, or 3-8, it is also envisioned that Parameter X may have other ranges of values including 1-9, 1-8, 1-3, 1-2, 2-10, 2-8, 2-3, 3-10, 3-9, and so on.
[0044] When an element or layer is referred to as being on, engaged to, connected to, or coupled to another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being directly on, directly engaged to, directly connected to or directly coupled to another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., between versus directly between, adjacent versus directly adjacent, etc.). As used herein, the term and/or includes any and all combinations of one or more of the associated listed items.
[0045] Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as first, second, and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
[0046] Spatially relative terms, such as inner, outer, beneath, below, lower, above, upper, and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as below or beneath other elements or features would then be oriented above the other elements or features. Thus, the example term below can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
[0047] The present technology relates to a method 100 to locate defects in e-coat on a surface, shown in
[0048] As shown in
[0049] With continued reference to
[0050] As shown in
[0051] In certain embodiments, the correction coefficient can include a plurality of correction coefficients. Each of the plurality of correction coefficients can correspond to a color value. The red value can have a red correction coefficient. The blue value can have a blue correction coefficient. The green value can have a green correction coefficient. Advantageously, the plurality of correction coefficients can minimize the error between the colors of the adjusted image and the expected red value, blue value, and green value of the calibration plate. The plurality of correction coefficients can be applied to each of the plurality of pixels to form the adjusted image.
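The per-channel correction described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the least-squares fit of one multiplicative coefficient per channel, and the function and variable names, are assumptions.

```python
import numpy as np

def fit_correction_coefficients(measured_rgb, expected_rgb):
    """Fit one multiplicative coefficient per color channel minimizing the
    squared error between measured calibration-plate colors and their
    expected red, green, and blue values."""
    measured = np.asarray(measured_rgb, dtype=float)   # shape (n_patches, 3)
    expected = np.asarray(expected_rgb, dtype=float)   # shape (n_patches, 3)
    # Least-squares solution of c * measured ~= expected, per channel.
    return (measured * expected).sum(axis=0) / (measured ** 2).sum(axis=0)

def apply_correction(image, coefficients):
    """Apply the red, green, and blue coefficients to every pixel."""
    adjusted = image.astype(float) * coefficients      # broadcast over H x W x 3
    return np.clip(adjusted, 0, 255).astype(np.uint8)

# Hypothetical example: calibration patches measured about 10% dark.
measured = [[200, 100, 50], [100, 50, 25]]
expected = [[220, 110, 55], [110, 55, 27.5]]
coeffs = fit_correction_coefficients(measured, expected)  # ~[1.1, 1.1, 1.1]
```

Applying `coeffs` to each pixel of the acquired image then yields the adjusted image.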
[0052] As shown in
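The separation into spectral components recited in claim 12 (red, green, and blue monochromes, plus a grey image formed by maximizing the channels at each pixel) can be sketched as below; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def separate_spectral_components(adjusted):
    """Split an H x W x 3 RGB adjusted image into monochrome spectral
    components, plus a grey image taking the per-pixel channel maximum."""
    red = adjusted[:, :, 0]      # red monochrome
    green = adjusted[:, :, 1]    # green monochrome
    blue = adjusted[:, :, 2]     # blue monochrome
    grey = np.max(adjusted, axis=2)  # maximum of R, G, B at each pixel
    return red, green, blue, grey
```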
[0053] As shown in
[0054] As a non-limiting example, the block average determination can modify the spectral component by selecting a first block region having the block width of about 400 pixels, the block height of about 5 pixels, and the percentile limit of about 0.25. A percentile limit of about 0.25 can result in pixels with an intensity below 25% being discarded as well as pixels with an intensity above 75% being discarded. The pixels in the first block region can be sorted by pixel intensity into a histogram ranging in pixel intensity from 0-255. Advantageously, the histogram can be used to calculate the percentile of the pixel intensities. The pixels of the first block region that fall below the percentile limit of 25% intensity and fall above the percentile limit of 75% intensity can be discarded as outlier pixels. The block region can be replaced with the average pixel intensity found in the histogram to form a first background intensity image.
[0055] As another non-limiting example, the block average determination with different parameters can be used to modify the first background intensity image. A second block region can be selected having the block width of about 200 pixels, the block height of about 5 pixels, and the percentile limit of about 0.1. A percentile limit of about 0.1 can result in pixels with an intensity below 10% being discarded as well as pixels with an intensity above 90% being discarded. The pixels in the second block region can be sorted by pixel intensity into a histogram. Advantageously, the histogram can be used to calculate the percentile of the pixel intensities. The pixels of the second block region that fall below the percentile limit of 10% intensity and fall above the percentile limit of 90% intensity can be discarded as outlier pixels. The second block region can be replaced with the average pixel intensity found in the histogram to form a second background intensity image. Desirably, the second block average determination can help to smooth the intensity differences in the average.
[0056] Advantageously, and as detailed in the above examples, the wide block width and short block height can be useful in determining the background intensity image where a light intensity is variable in the y-direction of the adjusted image which commonly occurs when using bright horizontal light to illuminate the surface.
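The block average determination in the examples above can be sketched in Python as follows. This is an illustrative approximation under stated assumptions: a 2-D grayscale array, `numpy.percentile` in place of an explicit 0-255 histogram, and a fallback to the full block mean when every pixel is discarded.

```python
import numpy as np

def block_average(image, block_w=400, block_h=5, limit=0.25):
    """Estimate a background intensity image: replace each block with the
    mean of its inlier pixels, discarding intensities below the `limit`
    percentile and above the `1 - limit` percentile as outliers."""
    img = np.asarray(image, dtype=float)
    background = np.empty_like(img)
    for y in range(0, img.shape[0], block_h):
        for x in range(0, img.shape[1], block_w):
            block = img[y:y + block_h, x:x + block_w]
            lo, hi = np.percentile(block, [limit * 100, (1 - limit) * 100])
            inliers = block[(block >= lo) & (block <= hi)]
            # Fall back to the full block mean if everything was discarded.
            background[y:y + block_h, x:x + block_w] = (
                inliers.mean() if inliers.size else block.mean())
    return background
```

Per the second example above, a second pass with `block_w=200` and `limit=0.1` could then smooth the first background intensity image.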
[0057] As shown in
[0058] In a step 184, the difference image can be eroded to remove noise and form an eroded image. As a non-limiting example, the difference image can be eroded by about 1 pixel. The difference image can be eroded a second time by another about 1 pixel. One of ordinary skill in the art can select a suitable number of erosions within the scope of the present disclosure to remove noise from the difference image. Where the difference image is eroded to remove noise, an eroded image is formed. In a step 186, the eroded image and the difference image can be merged, according to a baseline value, to form a final eroded image. Where the pixel average intensity is less than the baseline value, the pixel value is zeroed out in the difference image. Where the pixel average intensity is equal to or greater than the baseline value or the pixel exists in the eroded image, the pixel in the difference image and pixel in the eroded image are merged in the final eroded image. One of ordinary skill in the art can select a suitable baseline value within the scope of the present disclosure.
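Steps 184 and 186 can be approximated as below. This sketch makes several assumptions not fixed by the disclosure: the "about 1 pixel" erosion is modeled as a 3 x 3 minimum filter applied twice, the baseline value of 10 is arbitrary, and the merge keeps the brighter of the two pixel values.

```python
import numpy as np

def erode(image):
    """Erode by about 1 pixel: each pixel becomes the minimum of its
    3 x 3 neighborhood (borders padded with edge values)."""
    padded = np.pad(image, 1, mode='edge')
    shifts = [padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
              for dy in range(3) for dx in range(3)]
    return np.min(shifts, axis=0)

def merge_with_baseline(difference, eroded, baseline=10):
    """Zero out difference pixels below the baseline unless the pixel
    survives in the eroded image; otherwise merge the two images."""
    keep = (difference >= baseline) | (eroded > 0)
    return np.where(keep, np.maximum(difference, eroded), 0)
```

Eroding twice removes isolated noise pixels, and the merge restores defect pixels that exceed the baseline, forming the final eroded image.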
[0059] As shown in
[0060] As shown in
[0061] As shown in
[0062] The defect region can have classification information associated with it. Classification information can include, but is not limited to, shape, position, size and the probability of noise being identified. More specifically, the classification information can include color information because different e-coat defects have different colors. Color information can be used to filter out noise because color is typically the same for the entire vehicle. Where classification information has been obtained and the defect has been classified, the frame identification of the image can be used to locate the defect.
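Gathering classification information for defect regions found by blob detection might look like the following sketch: 4-connected pixels above a threshold are grouped, and each region's size and position (bounding box) are recorded, with very small regions discarded as probable noise. The threshold, minimum size, and dictionary layout are illustrative assumptions.

```python
from collections import deque

def find_blobs(image, threshold=20, min_size=3):
    """Label 4-connected regions of pixels >= threshold and return
    classification information (size, bounding box) for each region of at
    least `min_size` pixels; smaller regions are treated as noise."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or image[sy][sx] < threshold:
                continue
            # Breadth-first flood fill from this seed pixel.
            queue, pixels = deque([(sy, sx)]), []
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and image[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(pixels) >= min_size:  # smaller regions assumed to be noise
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                blobs.append({'size': len(pixels),
                              'bbox': (min(ys), min(xs), max(ys), max(xs))})
    return blobs
```

Shape and color information for each blob could be added to the returned dictionaries in the same way to support the classification described above.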
[0063] As shown in
[0064] The method 100 of the present disclosure can be repeated multiple times as the surface moves along the conveyor in a step 280. As the surface moves along the conveyor and several images are taken, multiple images of the same defect can be captured, resulting in multiple defect regions for the same defect in a step 282. Where defect regions are repeated in more than one image, the defect regions can have the same hit test location on the surface. In a step 284, the regions clustered together are assumed to be the same defect and can be studied as a group for classification.
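The clustering in step 284 can be sketched as below: regions from successive images are grouped when their hit test locations on the surface fall within a tolerance, and each cluster is assumed to be one physical defect. The tolerance value and the region record layout are illustrative assumptions.

```python
def cluster_regions(regions, tolerance=5.0):
    """Group defect regions whose surface (x, y) hit test locations fall
    within `tolerance` of a cluster's first member; each cluster can then
    be studied as a group for classification."""
    clusters = []
    for region in regions:
        x, y = region['hit']
        for cluster in clusters:
            cx, cy = cluster[0]['hit']
            if abs(x - cx) <= tolerance and abs(y - cy) <= tolerance:
                cluster.append(region)
                break
        else:
            clusters.append([region])
    return clusters

# Three frames see a defect near (100, 50); one frame sees another at (300, 80).
regions = [{'frame': 1, 'hit': (100.0, 50.0)},
           {'frame': 2, 'hit': (101.5, 49.0)},
           {'frame': 3, 'hit': (99.0, 51.0)},
           {'frame': 2, 'hit': (300.0, 80.0)}]
clusters = cluster_regions(regions)
```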
[0065] The step 270 of changing a parameter of the e-coating process to form a modified e-coating process can include qualifying the defect region in a step 272 and using the data collected to measure and improve the painting process in a step 274. As a result, another surface can be e-coated using the modified e-coating process. Advantageously, collecting this data can allow for the improvement of e-coating efficiency and accuracy and, as a result, minimize the occurrence of defects. As a non-limiting example, a spray droplet size can be modified to optimize coverage. Further, a spray amount can be altered to minimize over application and/or under application.
[0066] The present disclosure also contemplates a system 300 for locating a defect in an e-coat on a surface as described hereinabove. The system 300 can include a surface 302 which can be present on a chassis 304. The system 300 can include a camera 306 for acquiring an image and a conveyor 308 for moving the chassis 304 from a first location to a second location. The system can calculate the distance the chassis moves along the conveyor between each image being acquired. The system can include one or more lights to illuminate the surface and allow for a better and more accurate image to be captured. The system can include a graphics processing unit (GPU) for locating a defect in an e-coat on the surface 302.
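The conveyor tracking used to locate a defect along the surface might be sketched as follows, assuming a rotary encoder as the tracking device of claim 5. The counts-per-millimeter calibration and the function names are hypothetical.

```python
COUNTS_PER_MM = 12.8  # assumed rotary encoder calibration, counts per mm

def surface_travel_mm(encoder_at_trip, encoder_now):
    """Distance the surface has moved along the conveyor since the photo
    eye was tripped (the first location)."""
    return (encoder_now - encoder_at_trip) / COUNTS_PER_MM

def defect_position_mm(encoder_at_trip, encoder_at_image, offset_in_image_mm):
    """Locate a defect along the surface: conveyor travel at the moment the
    image was acquired, plus the defect's offset within that image."""
    return surface_travel_mm(encoder_at_trip, encoder_at_image) + offset_in_image_mm
```

With each acquired image tagged by its encoder count, a defect region found in any frame can be mapped back to a position on the moving surface.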
[0067] In certain embodiments, the system can be communicatively coupled to one or more remote platforms. The communicative coupling can include communicative coupling through a networked environment. The networked environment can be a radio access network, such as LTE or 5G, a local area network (LAN), a wide area network (WAN) such as the Internet, or wireless LAN (WLAN), for example. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which one or more computing platforms and remote platforms can be operatively linked via some other communication coupling. The one or more computing platforms can be configured to communicate with the networked environment via wireless or wired connections. In addition, in an embodiment, the one or more computing platforms can be configured to communicate directly with each other via wireless or wired connections. Examples of one or more computing platforms can include, but are not limited to, smartphones, wearable devices, tablets, laptop computers, desktop computers, Internet of Things (IoT) devices, or other mobile or stationary devices. In certain embodiments, a system can be provided that can also include one or more hosts or servers, such as the one or more remote platforms connected to the networked environment through wireless or wired connections. According to one embodiment, remote platforms can be implemented in or function as base stations (which can also be referred to as Node Bs or evolved Node Bs (eNBs)). In certain embodiments, remote platforms can include web servers, mail servers, application servers, etc. According to certain embodiments, remote platforms can be standalone servers, networked servers, or an array of servers.
[0068] The system can include one or more processors for processing information and executing instructions or operations, including such instructions and/or operations stored on one or more non-transitory mediums. One or more processors can be any type of general or specific purpose processor. In some cases, multiple processors can be utilized according to other embodiments. In fact, the one or more processors can include one or more of general-purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and processors based on a multi-core processor architecture, as examples. In some cases, the one or more processors can be remote from the one or more computing platforms. The one or more processors can perform functions associated with the operation of system which can include, for example, precoding of antenna gain/phase parameters, encoding and decoding of individual bits forming a communication message, formatting of information, and overall control of the one or more computing platforms, including processes related to management of communication resources.
[0069] The system can further include or be coupled to a memory (internal or external), which can be coupled to one or more processors, for storing information and instructions that can be executed by one or more processors, including any instructions and/or operations stored on one or more non-transitory mediums. Memory can be one or more memories and of any type suitable to the local application environment, and can be implemented using any suitable volatile or nonvolatile data storage technology such as a semiconductor-based memory device, a magnetic memory device and system, an optical memory device and system, fixed memory, and removable memory. For example, memory can consist of any combination of random access memory (RAM), read only memory (ROM), static storage such as a magnetic or optical disk, hard disk drive (HDD), or any other type of non-transitory machine or computer readable media. The instructions stored in memory can include program instructions or computer program code that, when executed by one or more processors, enable the one or more computing platforms to perform tasks as described herein.
[0070] In some embodiments, one or more computing platforms can also include or be coupled to one or more antennas for transmitting and receiving signals and/or data to and from one or more computing platforms. The one or more antennas can be configured to communicate via, for example, a plurality of radio interfaces that can be coupled to the one or more antennas. The radio interfaces can correspond to a plurality of radio access technologies including one or more of LTE, 5G, WLAN, Bluetooth, near field communication (NFC), radio frequency identifier (RFID), ultrawideband (UWB), and the like. The radio interface can include components, such as filters, converters (for example, digital-to-analog converters and the like), mappers, a Fast Fourier Transform (FFT) module, and the like, to generate symbols for a transmission via one or more downlinks and to receive symbols (for example, via an uplink).
[0071] Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail. Equivalent changes, modifications and variations of some embodiments, materials, compositions and methods can be made within the scope of the present technology, with substantially similar results.