Method for determining print defects in a printing operation carried out on an inkjet printing machine for processing a print job
11752775 · 2023-09-12
CPC classification
B41J2/2142 (PERFORMING OPERATIONS; TRANSPORTING)
B41J2029/3935
B41J2/2139
Abstract
A method for determining print defects in a printing operation carried out on an inkjet printing machine for processing a print job includes using a camera system to record and digitize printed products generated during the printing operation, feeding the camera image thus generated to a detection algorithm on a computer, alerting a machine control unit when print defects are found, and ejecting the printed product through a waste ejector if necessary. The detection algorithm separates color separations of the camera images, detects the print defects in the color separations, links images of the individual color separations to form a candidate image, filters the candidate image, enters the remaining detected print defects into a list, and forwards the list to the machine control unit of the printing machine.
Claims
1. A method for determining print defects in a printing operation carried out on an inkjet printing machine for processing a print job, the method comprising the following steps: using a camera system to record and digitize printed products generated during the printing operation; feeding a camera image generated in the camera system to a detection algorithm on a computer, using the detection algorithm to separate color separations of the camera images, detect the print defects in the color separations, link images of individual color separations to form a candidate image, filter the candidate image, enter remaining detected print defects into a list, and forward the list to a machine control unit of the printing machine; alerting the machine control unit when print defects are found; and ejecting a printed product by using a waste ejector if necessary.
2. The method according to claim 1, wherein the print defects are white or dark line defects caused by defective printing nozzles in the inkjet printing machine.
3. The method according to claim 2, which further comprises using the computer to apply a specific testing method to filter out pseudo white or dark line defects from the list of white line or dark line defects before the step of forwarding to the machine control unit of the printing machine.
4. The method according to claim 2, which further comprises using the computer to: determine the defective printing nozzles that caused the defects on the basis of the list of remaining detected white line or dark line defects; and as a function of the determined defective printing nozzles that caused the defects, to compensate for the white or dark line defects by using respective suitable compensation methods.
5. The method according to claim 4, which further comprises using the computer to: employ pre-print data of the print job to create a reference image for the specific testing method; and apply the detection algorithm to the reference image and thus either: obtain information on resultant candidates for pseudo white or dark line defects and eliminate them from the list of white or dark line defects, or obtain information on areas in the camera image with probable pseudo white or pseudo dark line defects and therefore not apply the detection algorithm to these areas in the camera image.
6. The method according to claim 5, which further comprises using the computer to: create the reference image in at least one of multiple sizes or resolutions; accordingly apply the detection algorithm multiple times to the different reference images; and summarize and use the obtained information.
7. The method according to claim 6, which further comprises not applying the algorithm to areas characterized by great variation of the gray values in a limited local environment in the reference image or wherein results of such areas are excluded.
8. The method according to claim 1, which further comprises using the computer to create the list of white line or dark line defects through column totals in the filtered candidate image by applying a threshold value to the respective calculated column total in the candidate image.
9. The method according to claim 1, which further comprises using the computer to link the candidate images of the individual color separations by a mathematical OR operation.
10. The method according to claim 1, which further comprises using the computer to filter the candidate image using morphological operations.
11. The method according to claim 1, which further comprises using the computer to apply the detection algorithm to the generated camera image multiple times with different parameters to detect different manifestations of dark or white line defects, and logically interlinking results of all color separations of all applications of the method.
12. The method according to claim 11, which further comprises limiting every pixel of the generated camera image in advance to a maximum gray value, for a respective one of the applications of the method with different parameters.
13. The method according to claim 1, which further comprises creating the candidate image of a color channel by: dividing the generated camera image into horizontal stripes; reducing every stripe to an image signal by a suitable averaging of every one of its columns; searching for white or dark lines in a specific search process in the image signal; and using every analyzed row as a row of the white line candidate image.
14. The method according to claim 13, which further comprises using the white or dark line search process to detect a dark or white line at a position by examining a limited vicinity about a pixel in the image signal.
15. The method according to claim 14, which further comprises using the search process to initially convolute the image signal with different kernels and convert results into logic signals by a comparison with respective potentially different threshold values, and then converting the signals into a white or dark line candidate image signal by using a logic operation.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
DETAILED DESCRIPTION OF THE INVENTION
(11) Referring now in detail to the figures of the drawings, in which mutually corresponding elements have the same reference symbols, and first, particularly, to
(12) In contrast to the known methods of the prior art, the method of the invention proposes a different way of embedding the process of detecting white/dark lines 14 into the total sequence of steps of the printing operation and no longer requires any operator intervention. The sequence of steps of a first preferred embodiment is schematically shown in
(13) 1. After printing, a camera system 10 that is part of an in-line image recording system 12 digitizes the printed sheet.
2. The camera image 13 is forwarded to a white/dark line detection algorithm, which will be described in more detail below. In parallel, it may be used in further analyses.
3. When the detection algorithm detects white/dark lines 14, the image processor 9 alerts the control unit 6 of the printing machine 7 to their presence. In combination with other data from the printing machine 7, the control unit 6 then decides whether the printed sheet 2 is waste and needs to be ejected through a waste ejector.
4. The detected white/dark lines 14 may optionally be subjected to a more detailed analysis to identify the defective nozzle and to use this information to compensate for the defective nozzle.
(14) This sequence of steps illustrates that it is important for the entire system 12 that the processing of the camera images 13 keeps pace.
(15) In contrast to the prior art, the aforementioned algorithm for detecting white/dark lines is now only applied to the camera image 13.
(16) The detection algorithm is based on subdividing the recorded camera image 13 into horizontal stripes 15, 15a, 15b. The algorithm includes the following steps:
1. Separate the RGB color separations and, in a separate operation for every color separation C:
1.1 Divide the camera image 13 into stripes of a height of 1-10 mm (see
(17) ##STR00001## filters out very short white/dark lines 14. The SE level may be variable to allow a minimum length of the detected white/dark lines 14 to be preset.
4. In a further exemplary embodiment, the same analysis described in steps 1 to 3 may be applied in parallel to a potentially existing reference image, which is generated directly by the RIP as an RGB image. The resultant white/dark line candidates WLC_REF(x,y) 14 mark areas in the printed image in which detected white/dark lines are probably false positives triggered by the customer's image. These areas ought to be removed from the image WLC(x,y) 21 of the camera image 13. For this purpose, the areas in WLC_REF(x,y) are widened with the aid of morphological dilation. This corresponds to a smoothing of WLC_REF(x,y). Then WLC(x,y) 21 is filtered by WLC_REF(x,y):
WLC(x,y) ← WLC(x,y) AND (NOT WLC_REF(x,y))
5. Finally, all columns C_WL in WLC(x,y) 21 that contain a white/dark line 14 are detected. This may be done using a threshold value minWLPerColumn for the column total, encoded as: no white/dark line = 0, white/dark line = 1 in WLC(x,y) 21, i.e. counting the entries in WLC(x,y) 21 marked as white/dark line 14:
C_WL = { x | Σ_y WLC(x,y) > minWLPerColumn }
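The masking and column-total steps above can be sketched as follows. This is a minimal, non-authoritative illustration in numpy, not the patented implementation; the boolean array layout (rows = y, columns = x) and the dilation radius are assumptions made for the sketch:

```python
import numpy as np

def filter_and_collect_columns(wlc, wlc_ref, min_wl_per_column, dilate_radius=1):
    """wlc, wlc_ref: 2-D boolean arrays (True = white/dark line candidate)."""
    # Morphological dilation of the reference candidates along x,
    # implemented here as a simple shift-and-OR for brevity.
    dilated = wlc_ref.copy()
    for dx in range(1, dilate_radius + 1):
        dilated[:, dx:] |= wlc_ref[:, :-dx]
        dilated[:, :-dx] |= wlc_ref[:, dx:]
    # WLC(x,y) <- WLC(x,y) AND (NOT WLC_REF(x,y))
    wlc = wlc & ~dilated
    # C_WL = { x | sum_y WLC(x,y) > minWLPerColumn }
    column_totals = wlc.sum(axis=0)
    return np.flatnonzero(column_totals > min_wl_per_column)
```

With an empty reference mask, a full-height line survives while an isolated single candidate is rejected by the column-total threshold.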
In further preferred embodiments, the method of the invention may additionally be adapted; for instance, the subsequent filters may be varied:
- The number of white/dark line candidates 14 needs to reach a minimum number per column to be marked as a white/dark line 14.
- A maximum brightness value of the pixels is defined to prevent very bright pixels from falsifying the average. A white/dark line 14 does not have any very bright pixels in a 670 dpi camera image 13, i.e. all gray values in the image >50 are limited to 50.
- The reference image is analyzed to find out whether relevant locations in the reference image have strong structures that lead to structures similar to white/dark lines and therefore need to be excluded from the camera image 13. For this purpose, the reference image does not need to be present in full resolution because only a rough estimate is required to decide whether the reference image area has structures or is homogeneous (see step 4).
- The method described above may be implemented on a graphics processing unit (GPU) as a computing accelerator.
- The detection algorithm described above may be implemented as a component of the image recording system 12 that executes the image inspection process. The WLC(x,y) image 21 may then be used to obtain data for a report to an operator 8 or customer by recognizing coherent areas (blobs) in the image 13 and marking them in a survey image for the operator 8 in a later analysis.
Yet in most cases, these further preferred embodiments require a reference image, which affects the processing speed in addition to the disadvantages that have been indicated above. However, the use of a reference image may further improve the quality of the method of the invention because it helps to avoid false positives in the white/dark line 14 detection process.
Thus, the method of the invention has many advantages over the prior art. For instance, if there are considerable color deviations between the desired image and the camera image 13, for instance if the work flow has been wrongly calibrated in terms of cameras 10, white balance, or type of paper, short white/dark lines 14 are often submerged in the image/signal noise. The method of the invention overcomes this disadvantage. Furthermore, the prior art methods require the reference image to be supplied to the computer 9 at the full resolution of, for instance, 670 dpi. Using the technical measures that are available today, this is a very expensive process. Since the algorithm presented herein does without a reference image, or at least without a high-resolution reference image, these costs are saved. After all, the detection in principle does not require any reference image, even though a reference image may be used to eliminate false positives caused by structures in the customer's image from the white/dark line detection process. Specifically, no direct comparison is required between the reference image and the camera image 13 to detect the white/dark line candidates 14.
In addition, a particularly preferred exemplary embodiment of the method of the invention improves the method even further, proposing the following two-stage algorithm based on the previous embodiment:
Stage one is specifically to look for white/dark line candidates 14.
For this purpose, the algorithm presented in the previous exemplary embodiments is invoked a number of times using different parameters. The results of these runs of the algorithm are then logically linked. In addition, the individual steps of the algorithm are further improved. This is done as follows:
The algorithm is applied to the camera image 13 on the sheet 2 multiple times. For the different applications, the parameters are adapted as follows:
1. The gray scale/color channel values of the camera image 13 are compressed. In the compression process, brightness values above a threshold S_max are limited to the threshold S_max. This effectively suppresses all structures brighter than S_max in the image 13. This step detects white/dark lines 14 in dark areas, both homogeneous and inhomogeneous, very well. This compression is made before the first step of the previous exemplary embodiment.
2. In this case, too, the gray scale/color channel values of the camera image 13 are compressed. However, in this compression process, brightness values above a threshold K_max (K_max > S_max) are limited to the threshold K_max. The compression is made before the third step of the previous exemplary embodiment. In addition, the local homogeneity of the image 13 is calculated by computing the standard deviation of the column segment when the averaging is done in the second step of the previous exemplary embodiment. Only white/dark lines 14 in relatively homogeneous areas, i.e. at a standard deviation < σ_max, are entered into the candidate list. This filter may be applied in the third step of the previous embodiment. This approach detects white/dark lines 14 in bright homogeneous areas very well. In bright inhomogeneous areas, the human eye has difficulty detecting white/dark lines 14 anyway; thus they are ignored.
(18) Both results are linked using an OR operation and combined to form a white/dark line candidate list. Optionally, even more complex links with further information are conceivable.
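The two parameterized runs and their OR combination might be sketched as below. This is a hedged illustration only: S_max, K_max and σ_max are free parameters, and a plain per-column standard deviation stands in for the segment-wise homogeneity measure described above:

```python
import numpy as np

def compress(image, limit):
    # Brightness values above the threshold are limited to the threshold,
    # suppressing all structures brighter than the limit (runs 1 and 2).
    return np.minimum(image, limit)

def homogeneous_columns(stripe, sigma_max):
    # Local homogeneity: only columns whose standard deviation stays below
    # sigma_max may contribute white/dark line candidates (run 2).
    return stripe.std(axis=0) < sigma_max

def combine_runs(candidates_run1, candidates_run2):
    # Both results are linked using an OR operation.
    return candidates_run1 | candidates_run2
```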
(19) Furthermore, in the second step of the previous embodiment, different averaging processes with advantageous properties other than simple averaging may be applied to the generated image signal, among them, for instance:
- the median instead of the average; the advantage being that the method is not sensitive to outliers;
- the average only of pixels having a brightness value which does not exceed a maximum brightness value G_max,mean; the advantage being that bright outliers or paper-white areas that might falsify the average are filtered out. This is shown by way of example in
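The two averaging variants can be sketched like this (an illustrative, non-authoritative transcription; `g_max_mean` mirrors the G_max,mean limit from the text):

```python
import numpy as np

def column_signal_median(stripe):
    # Reduce a stripe to one value per column via the median,
    # which is insensitive to outliers.
    return np.median(stripe, axis=0)

def column_signal_capped_mean(stripe, g_max_mean):
    # Average only pixels not exceeding g_max_mean; bright outliers
    # (e.g. paper white) are excluded from the mean.
    masked = np.where(stripe <= g_max_mean, stripe, np.nan)
    return np.nanmean(masked, axis=0)
```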
(20) In the third step of the previous exemplary embodiment, white/dark lines 14 are detected by using a threshold L. In this further embodiment, two improvements for the threshold are proposed:
1. Two thresholds are used depending on whether the width of the detected white/dark line 14 is a single pixel or two pixels. Depending on the resolution of the camera, it may furthermore be expedient to find even white/dark lines 14 that are 3, 4, …, N pixels wide. In such a case, a corresponding number of thresholds needs to be applied. With the two thresholds L1 and L2, the detection expression from the third step is:
WLC(x,s) = ((I_C,s(x) − I_C,s(x−1) > L1) and (I_C,s(x) − I_C,s(x+1) > L1)) or ((I_C,s(x) − I_C,s(x−1) > L2) and (I_C,s(x) − I_C,s(x+2) > L2)) or ((I_C,s(x) − I_C,s(x−2) > L2) and (I_C,s(x) − I_C,s(x+1) > L2))
2. The threshold may be made to depend on the local environment of every pixel x, which means that higher thresholds are applied to find white/dark lines 14 in bright image areas than in less bright areas. As a measure for the local brightness, an average of the gray values in a close vicinity of position x may be calculated, excluding any white/dark line 14 that may be present.
(21) Alternatively, a sliding median filter may be applied to I_C,s(x).
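The two-threshold detection expression above can be transcribed directly; this is a sketch under the assumption of a 1-D float signal in which white lines appear as bright peaks (a dark-line search would negate the signal first):

```python
import numpy as np

def detect_line_candidates(signal, l1, l2):
    """signal: 1-D averaged image signal I_C,s; returns boolean candidates."""
    n = len(signal)
    wlc = np.zeros(n, dtype=bool)
    i = signal
    for x in range(2, n - 2):
        # Single-pixel-wide line: peak against both direct neighbors.
        one_px = (i[x] - i[x - 1] > l1) and (i[x] - i[x + 1] > l1)
        # Two-pixel-wide line: peak against neighbors two positions away.
        two_px_a = (i[x] - i[x - 1] > l2) and (i[x] - i[x + 2] > l2)
        two_px_b = (i[x] - i[x - 2] > l2) and (i[x] - i[x + 1] > l2)
        wlc[x] = one_px or two_px_a or two_px_b
    return wlc
```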
(22) As a further advantageous improvement of the previous exemplary embodiment, the algorithm need not be applied to the RGB image 13 directly. Instead, the RGB image 13 is first converted into a gray-scale image that has the best possible contrast for white/dark lines 14, using a suitable method. Suitable transformation operations for this purpose are:
- calculating the luminance channel from the Lab color space;
- calculating the brightness value or saturation value from the HSB color space;
- averaging the suitably weighted RGB color channels in a way adapted to the human eye.
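As one possible instance of the third option, a perceptually weighted RGB average could look as follows; the Rec. 601 luma weights are an assumption of this sketch, since the text does not fix specific weights:

```python
import numpy as np

def rgb_to_gray(rgb):
    """rgb: (..., 3) array; returns a gray-scale value weighted for the eye."""
    # Rec. 601 luma weights (assumed here for illustration).
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```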
(23) In stage 2, one or more filters are applied to filter the pseudo white/dark lines 14b out of the white/dark line candidates 14 that have been identified in stage 1. For this purpose, there are the following improvements over the previous exemplary embodiment:
(24) By applying a column filter to the white/dark line candidate list, all white/dark line candidates 14 that do not have at least a number N_col,min of further white/dark line candidates 14 in one and the same image column are removed from the white/dark line candidate list. The concept behind this filter is to eliminate very short or isolated defects, for in most realistic prints a white/dark line 14 will affect more than one area of a column, whereas false positives only occur in a locally isolated way.
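The column filter can be sketched on a boolean candidate image (a non-authoritative illustration; `n_col_min` mirrors N_col,min, and a candidate survives only if its column holds at least that many further candidates):

```python
import numpy as np

def column_filter(wlc, n_col_min):
    """wlc: 2-D boolean candidate image; returns the filtered image."""
    counts = wlc.sum(axis=0)            # candidates per image column
    # The candidate itself plus at least n_col_min further candidates.
    keep = counts >= (n_col_min + 1)
    return wlc & keep[np.newaxis, :]
```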
(25) The filter described above in step four of the previous exemplary embodiment and involving the aid of the reference image will be applied in this case, too, with all modifications described above. In this context, the size of the reference image is adapted in advance as an improvement. It may likewise be expedient to process the reference image multiple times at different resolutions and to combine the results of these stages before the filtering process. This simulates a loss of quality of the “perfect” reference image due to the camera system 10, thus effectively allowing the detection of different structures that may result in white/dark line-like structures in the camera image 13.
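The multi-resolution reference processing could be sketched as follows. This is purely illustrative: block-mean downscaling, the scale factors, and pixel-repetition upscaling are assumptions of the sketch, and `analyze` stands for any candidate detection applied to the reference image:

```python
import numpy as np

def downscale_mean(image, factor):
    # Block-mean downscaling (crops to a multiple of the factor).
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def multi_resolution_mask(reference, analyze, factors=(1, 2, 4)):
    """analyze: callable mapping an image to a boolean mask of its shape."""
    combined = np.zeros(reference.shape, dtype=bool)
    for f in factors:
        small = downscale_mean(reference, f)
        mask = analyze(small)
        # Upscale the mask back to full size by pixel repetition,
        # then OR-combine the results of all resolutions.
        full = np.kron(mask, np.ones((f, f), dtype=bool)).astype(bool)
        combined[:full.shape[0], :full.shape[1]] |= full
    return combined
```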
(26) A particular additional advantage which the particularly preferred further exemplary embodiment has over the previous exemplary embodiment is that the performance in terms of the detection of white/dark lines 14 is better while fewer pseudo white/dark lines 14b are detected at the same time. However, for this purpose, a reference image analysis is required, involving additional process steps and taking up more computing time on the computer 6, 9 that is used. Thus, a decision on which preferred exemplary embodiment is to be used ought to be made on the basis of the requirements of the specific application. For print jobs for which white/dark line detection is critical in terms of time or performance, the first exemplary embodiment presented herein ought to be used, whereas for print jobs that require especially thorough white/dark line 14 detection and/or that run an increased risk of a detection of pseudo white/dark lines 14b, the second exemplary embodiment presented herein ought to be used.
LIST OF REFERENCE SYMBOLS
(27)
1 feeder
2 printing substrate
3 delivery
4 inkjet printing unit
5 inkjet printing head
6 control computer of the inkjet printing machine
7 inkjet printing machine
8 operator
9 image processor
10 image sensor/camera
11 display
12 image recording system
13 recorded print image
14 white/dark line print defect
14a peak of a white/dark line in the generated image signal
14b peak of a pseudo white/dark line in the generated image signal
15 stripe of the recorded print image
15a stripe of a recorded print image with text content
15b stripe of a recorded print image at the image margin
16 detected and marked white/dark lines
17 enlarged section of the stripe of the recorded print image
18 generated image signal of the stripe of the recorded print image with text content
18a generated image signal of the stripe of the recorded print image at the image margin
18b generated image signal of the stripe of the recorded print image at the image margin
19 minimum detection threshold of a white/dark line in the generated image signal
20 marked white/dark line areas
21 candidate image composed of stripes