Print substrate optical motion sensing and dot clock generation
11260678 · 2022-03-01
CPC classification
B41J11/008
PERFORMING OPERATIONS; TRANSPORTING
B41J2/125
PERFORMING OPERATIONS; TRANSPORTING
International classification
B41J11/00
PERFORMING OPERATIONS; TRANSPORTING
B41J2/045
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A system to direct print onto print substrates has a print head, at least one optical image sensor arranged adjacent a print substrate, wherein the print substrate will move in a process direction past the print head, at least one telecentric lens arranged between the print substrate and the at least one optical image sensor, and control circuitry electrically connected to the at least one optical image sensor and the print head, the control circuitry to generate dot clocks to cause the print head to print onto the substrate based upon data from the at least one optical image sensor identifying a position of the print substrate.
Claims
1. A system to direct print onto print substrates, comprising: a print head; and at least one tracking system, each tracking system comprising: a linear optical image sensor arranged adjacent a print substrate, wherein the print substrate will move in a process direction past the print head; at least one telecentric lens arranged between the print substrate and the linear optical image sensor; and control circuitry electrically connected to the linear optical image sensor and the print head, the control circuitry to generate dot clocks to cause the print head to print onto the print substrate based upon data from the linear optical image sensor identifying a position of the print substrate, the control circuitry to determine the position of the print substrate based upon a velocity of the print substrate found by correlating pixel values from a previous frame of data previous to a current frame of data from the linear optical image sensor to pixel values of the current frame of data from the linear optical image sensor in a first coarse correlation and then a fine correlation, wherein the fine correlation for the current frame of data is centered near a coarse correlation determined for the previous frame of data.
2. The system as claimed in claim 1, further comprising at least one illumination source positioned to provide light to a region of the print substrate viewed by the linear optical image sensor.
3. The system as claimed in claim 2, wherein at least one of the at least one illumination sources is positioned at a glancing angle to the print substrate.
4. The system as claimed in claim 2, wherein the at least one illumination source consists of at least two illumination sources, one positioned at a glancing angle to the print substrate and the other positioned closer to perpendicular to the print substrate.
5. The system as claimed in claim 2, wherein the at least one illumination source is strobed.
6. The system as claimed in claim 1, wherein the at least one telecentric lens includes an aperture that is shorter in the process direction than in a cross-process direction.
7. The system as claimed in claim 1, wherein the linear optical image sensor has rectangular pixels, shorter in the process direction than in a cross-process direction.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
(8) The embodiments here consist of a combination of several components. The components may include a linear optical image sensor, a lens to project an image of a print substrate onto the image sensor, illumination for a portion of the substrate viewed by the image sensor, and electronics to generate dot clocks in response to the image sensor data. The combination of image sensor, lens, and illumination may be referred to as the print substrate tracking system, or just the tracking system. The tracking system will typically be mounted adjacent to a print head.
(9) The “process direction” is the direction in which the substrate moves under or in front of the print head. The “cross-process direction” is a direction that is substantially perpendicular to the process direction. The term “print substrate” comprises any movable surface upon which ink can be deposited.
(10) The term “print head” typically comprises a drop generator, such as a jet stack. A jet stack generally consists of a stack of plates having manifolds to route ink to an array of nozzles that selectively dispense ink depending upon the image data when they receive a signal. The signals are generated by a dot clock generator. As used here, the term ‘dot clock’ is the signal sent to the jet stack or other drop generator to cause the selected nozzles in the jet stack or drop generator to dispense dots of ink.
(11) One should note that the print head, print substrate tracking system, and print substrate are shown in various orientations here. No limitation to any particular orientation is intended nor should any be implied.
(14) Generally, the tracking system 14 of
(15) The linear image sensor 28 may have high-aspect ratio pixels. As shown in
(16) The image sensor also needs a reasonably high frame (line) rate, though it need not be higher than the dot clock frequency. In one embodiment, for example, the image sensor uses a 6 kHz frame rate to generate 64 kHz dot clocks; 6 kHz will typically be fast enough to track velocity changes of the print substrate. The measured velocity feeds a rate generator to produce the 64 kHz dot clock. The dot clock will vary from 64 kHz as the velocity of the print substrate varies, which assists with maintaining printed dot positions.
(17) The lens may comprise a telecentric lens, meaning its entrance pupil is at infinity in its object space. In this case the object space is the print substrate side of the lens, making optical magnification independent of substrate-to-lens distance. This improves tolerance to substrate warpage or placement error. Variation in substrate-to-lens distance corresponds to variation in substrate-to-print-head distance, so it should be reasonably well controlled to maintain print quality. The telecentricity precision of the lens need not be too high, keeping lens cost low. As long as the image sensor to lens distance is precise, there is no need for the lens to be doubly telecentric, although that may be an option.
(18) Variations in lens-to-substrate distance will cause focus errors, known as blur, even with a telecentric lens. A slower lens, meaning a smaller aperture, will increase the depth-of-field, but gather less light. For the embodiments here, focus is important along the motion axis or the process direction, but much less so along the perpendicular axis where the image sensor pixels are elongated. To optimize both depth-of-field and light gathering, a slit aperture may work better, with the slit width along the motion axis, or process direction, shorter than the slit height perpendicular to the motion direction, or cross-process direction. As shown in
(19) The tracking system may include at least one illumination source, as shown in
(20) Illumination may be strobed at the image sensor frame rate. The substrate may move up to some number of pixels from one frame to the next; in one embodiment, up to 51 pixels moved from one frame to the next, which is excessive motion blur for accurate velocity measurement. Some degree of motion blur has advantages, however, such as removing spatial frequencies above the Nyquist spatial frequency of the image sensor. The illumination strobe pulse width may be adjusted based on the expected or measured velocity to produce the optimum amount of low-pass filtering from motion blur. A simple on/off pulse of illumination will result in a rectangular impulse response filter, often called a “box-car” filter. Controlling illumination brightness in a Gaussian intensity-vs-time profile will result in Gaussian image blur. This approach filters effectively, but has a higher complexity of implementation. A Gaussian filter can be approximated using simple on/off control and duty-cycle modulation.
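As a rough illustration of the last point, a Gaussian intensity-vs-time strobe can be approximated by dividing the strobe window into sub-slots and giving each slot an on/off duty cycle that follows a Gaussian profile. This is a minimal sketch, not from the patent; the slot count and width parameter are assumed values.

```python
import math

def gaussian_duty_cycles(n_slots=16, sigma_fraction=0.25):
    """Approximate a Gaussian strobe profile with on/off duty-cycle
    modulation: each of n_slots sub-intervals of the strobe window
    gets a duty cycle proportional to a Gaussian centered in the
    window, normalized so the peak slot is fully on (duty = 1.0)."""
    center = (n_slots - 1) / 2.0
    sigma = sigma_fraction * n_slots
    duties = [math.exp(-((i - center) ** 2) / (2.0 * sigma ** 2))
              for i in range(n_slots)]
    peak = max(duties)
    return [d / peak for d in duties]
```

Averaged over the frame, the time-weighted exposure then approximates the Gaussian blur kernel that a true intensity-modulated strobe would produce.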
(21) Intensity vs. time plots of these three illumination options are shown in
(22) The electronics 20 from
(23) Linear image sensors typically output pixel values sequentially after the exposure time window is complete, often simultaneously with the next frame's exposure window. To minimize latency, the electronics may process each new pixel value as it comes in, providing updated position and velocity information as soon as the last pixel of an image frame is read from the sensor. Reduced latency provides faster response to changes in print substrate velocity, minimizing printed drop placement errors.
(25) This process may encompass several sub-processes. For example, the electronics may apply compensation such as subtracting per-pixel black offset, which compensates for image sensor black-level pattern noise. The electronics may also apply per-pixel scale to compensate for image sensor sensitivity and for illumination spatial variations.
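The two compensations above amount to a per-pixel subtract-and-scale. A minimal sketch, with the array names being illustrative rather than from the patent:

```python
def compensate(raw, black, scale):
    """Per-pixel correction for a line image: subtract the sensor's
    black-level offset (fixed-pattern noise), then multiply by a
    per-pixel gain that flattens sensor sensitivity and illumination
    variation across the line."""
    return [(r - b) * s for r, b, s in zip(raw, black, scale)]
```

The black offsets and gains would typically come from a calibration pass with the illumination off and with a uniform target, respectively.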
(26) Additionally, the image data may undergo filtering. For example, the process may apply a high-pass filter to the image or pixel data. In one embodiment, the filter is applied by dividing each pixel by a Gaussian-weighted average of its neighboring pixels. In one embodiment, 64 neighboring pixels are used, 32 on each side. Dividing by the weighted average rather than subtracting it reduces sensitivity to local variations in illumination intensity as might be caused by warp of the print substrate. It also cancels global illumination variation, such as from LED power supply noise.
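The divide-by-neighborhood high-pass filter described above can be sketched as follows, assuming numpy; the Gaussian width `sigma` is an assumed value (the patent specifies only the 64-neighbor window, 32 on each side):

```python
import numpy as np

def highpass_divide(pixels, half_window=32, sigma=10.0):
    """High-pass filter a line image by dividing each pixel by a
    Gaussian-weighted average of its 2*half_window neighbors
    (center pixel excluded). Division, rather than subtraction,
    cancels both local illumination variation (e.g. substrate warp)
    and global variation (e.g. LED supply noise)."""
    pixels = np.asarray(pixels, dtype=float)
    offsets = np.arange(-half_window, half_window + 1, dtype=float)
    weights = np.exp(-offsets ** 2 / (2.0 * sigma ** 2))
    weights[half_window] = 0.0            # exclude the pixel itself
    weights /= weights.sum()
    padded = np.pad(pixels, half_window, mode="edge")
    local_avg = np.convolve(padded, weights, mode="valid")
    return pixels / np.maximum(local_avg, 1e-9)
```

A uniform input maps to a flat output of 1.0, and any global gain applied to the input cancels out, which is the property the paragraph relies on.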
(27) The image data from the sensor may have its resolution increased by interpolating the image data. In one embodiment the data is interpolated by 8 times using a Gaussian low-pass interpolation filter. The process then stores the integer-pixel-position values for correlating with the next frame. In one embodiment, these values are after the weighted average low-pass filter, even though they are at the non-interpolated pixel locations. This keeps the frequency response identical between the interpolated and integer pixel values.
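An 8x Gaussian interpolation of a line image can be sketched as below, assuming numpy; the filter width `sigma` is an assumed value, and this direct-evaluation form trades efficiency for clarity:

```python
import numpy as np

def interpolate_8x(pixels, sigma=0.6):
    """Upsample a line image 8x using a Gaussian low-pass
    interpolation filter. Each output sample at position j/8 (in
    input-pixel units) is a normalized Gaussian-weighted sum of the
    input pixels, so constant signals pass through unchanged."""
    pixels = np.asarray(pixels, dtype=float)
    n = len(pixels)
    out_pos = np.arange(n * 8) / 8.0          # output positions, input-pixel units
    d = out_pos[:, None] - np.arange(n)[None, :]
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)         # normalize per output sample
    return w @ pixels
```

A production implementation would use a short finite kernel per output phase rather than the dense matrix shown here.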
(28) The process then calculates some predetermined number, such as 32, “coarse” correlations between the incoming pixel values and the ones stored from the previous frame. In one embodiment, the correlation is a sum of differences squared rather than a true correlation. These “coarse” correlations are done at position offsets, for example of −10 through +52 pixels, between the incoming line image and the stored previous line image. Negative offsets need not be included, but are convenient for testing.
(29) The process then finds the maximum correlation, in one embodiment meaning the minimum sum-of-differences-squared, of the coarse correlations. This may be achieved by fitting a parabola to this minimum and its neighboring two correlations, one on each side. The process can determine a coarse velocity, meaning the position change from the previous image frame to this frame, by the position of the parabola minimum.
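The coarse search and parabola refinement described in the two paragraphs above can be sketched as follows. This is a simplified illustration assuming numpy; the overlap handling and per-offset normalization are implementation choices, not from the patent:

```python
import numpy as np

def coarse_shift(prev, curr, offsets=range(-10, 53)):
    """Estimate the pixel shift of the current line relative to the
    previous one by minimizing a sum-of-differences-squared score
    over candidate offsets, then refining with a parabola fit through
    the minimum and its two neighboring scores."""
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    n = len(prev)
    offs, scores = [], []
    for off in offsets:
        lo, hi = max(0, -off), min(n, n - off)   # overlap of prev[x] vs curr[x+off]
        if hi - lo < 8:                          # require a minimum overlap
            continue
        diff = prev[lo:hi] - curr[lo + off:hi + off]
        offs.append(off)
        scores.append(np.mean(diff ** 2))        # normalized SSD score
    i = int(np.argmin(scores))
    frac = 0.0
    if 0 < i < len(scores) - 1:                  # parabola through three points
        y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom > 0:
            frac = 0.5 * (y0 - y2) / denom
    return offs[i] + frac
```

Minimizing the squared-difference score is the "maximum correlation" of the text, and the parabola vertex gives the sub-pixel position of the minimum.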
(30) The process can then calculate a predetermined number of “fine” correlations between the incoming interpolated pixel values and the pixel values stored from the previous frame. Again, the correlation may be a sum of differences squared rather than a true correlation. These “fine” correlations may be done at sub-pixel offsets, such as ⅛th pixel position offsets, centered near the coarse correlation position determined above for the previous frame. The process does not use the current frame's coarse correlation because it is not known until all of this frame's pixel data is read. Using the previous frame's coarse correlation position allows fine correlation to process pixel data as it comes in, reducing latency.
(31) The process then repeats finding the maximum correlation using the parabola analysis above, but now applied to the fine correlations. This results in a determination of a fine velocity, position change from the previous image frame to this frame, by the position of the parabola minimum. The process may add in the coarse offset from the previous frame used to locate the pixel-offset range for this set of fine correlations. Optionally, the process may apply a low-pass filter to the fine velocity values to smooth any jitter in velocity values, but such filtering does increase latency.
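The fine stage works the same way at 1/8-pixel steps against the interpolated current line. A simplified sketch assuming numpy, with the ±2-pixel search span as an assumed parameter:

```python
import numpy as np

def fine_shift(prev, curr_interp8, center, span=2.0):
    """Refine the shift estimate at 1/8-pixel offsets within +/-span
    pixels of `center` (the coarse offset carried over from the
    previous frame), minimizing a sum-of-differences-squared score
    between the stored integer-pixel previous line and the
    8x-interpolated current line, then parabola-fitting the minimum."""
    prev = np.asarray(prev, dtype=float)
    curr_interp8 = np.asarray(curr_interp8, dtype=float)
    n = len(prev)
    xs = np.arange(n)
    offs = np.arange(center - span, center + span + 1e-9, 0.125)
    scores = []
    for off in offs:
        k = int(round(off * 8))                  # offset in 1/8-pixel samples
        pos = xs * 8 + k                         # prev[x] vs interpolated curr at x+off
        ok = (pos >= 0) & (pos < len(curr_interp8))
        diff = prev[xs[ok]] - curr_interp8[pos[ok]]
        scores.append(np.mean(diff ** 2))
    i = int(np.argmin(scores))
    frac = 0.0
    if 0 < i < len(scores) - 1:                  # parabola through three points
        y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom > 0:
            frac = 0.5 * (y0 - y2) / denom
    return float(offs[i] + frac * 0.125)
```

Centering the search on the previous frame's coarse result is what lets this run on pixel data as it streams in, as the paragraph above notes.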
(32) Using the fine velocity, or the filtered fine velocity, in a rate generator produces dot clocks to trigger print head drop ejection. For example, add the fine velocity value to an accumulator at a fixed rate, for example 72 MHz. Each time the accumulator overflows, the process outputs a dot clock and subtracts a dot-spacing constant from the accumulator. The dot-spacing constant is proportional to the desired spacing between printed dots. For example, if the image sensor frame time is N 72 MHz clocks, then a dot-spacing constant value of N would generate dot clocks at the same spacing as image sensor pixel spacing, assuming a 1:1 lens magnification. A dot-spacing constant of 2N would generate dot clocks at twice the image sensor pixel spacing.
(33) One should note that there will be fractional-pixel bits to the fine velocity. The accumulator and constant need to have the same number of fraction bits. Dot clock spacing is not restricted to integer ratios of image sensor pixel spacing.
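The accumulator scheme in the two paragraphs above can be sketched in fixed-point form. This is a simplified software model of the rate generator, not a hardware implementation; the quantities are unitless fixed-point counts:

```python
def dot_clock_ticks(velocity_q, dot_spacing_q, n_ticks):
    """Rate generator: on every fixed-rate tick (72 MHz in the text),
    add the fixed-point fine velocity to an accumulator; whenever the
    accumulator reaches the dot-spacing constant, emit a dot clock
    and subtract the constant. Velocity and constant carry the same
    number of fraction bits, so dot spacing need not be an integer
    ratio of image sensor pixel spacing."""
    acc = 0
    ticks = []
    for t in range(n_ticks):
        acc += velocity_q
        if acc >= dot_spacing_q:
            acc -= dot_spacing_q
            ticks.append(t)
    return ticks
```

With a velocity of one count per tick and a constant of 1125, dot clocks land every 1125 ticks of a 72 MHz clock, i.e. at 64 kHz, matching the example in the text; a fractional velocity-to-spacing ratio simply produces non-uniform but average-correct tick spacing.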
(34) The number of coarse and fine correlations, 32 each in the embodiments above, could be smaller. The process may also skip the coarse correlations, using the previous frame's fine correlation location to center the current frame's fine correlation range. This embodiment may require a way to initialize the fine correlation range, such as a sweep-search after each new image substrate comes into view.
(35) Besides generating dot clocks, the image sensor can be used to detect the presence of a print substrate. When a substrate is not present, the illumination will not align with whatever surface(s) may lie past the normal print substrate location, so little light will reach the image sensor. This dark condition indicates the absence of a print substrate. In one embodiment, this feature is used to sense the beginning of each print substrate to trigger the start of a new image.
(36) Many variations and modifications exist and are contained within the scope of the claims. For example, it is desirable to sense print substrate motion at a location close to the print head, as this minimizes the unprinted margin at the end of each substrate. A small mirror could be added to the optical path close to the print substrate, allowing the remainder of the mechanism to pivot away from the print head.
(37) The optical sensor could be positioned above or below the print head(s) if there is more unprinted margin there. This is described for the case of a vertical print substrate orientation with horizontal print motion, or horizontal process direction. Other orientations are equally possible, such as a horizontal print substrate and down-shooting print head(s), as noted above.
(38) For use with thin print substrates, such as cardboard before being folded into a box, this sensor could view the print substrate from the opposite side relative to the print head. Sensing could be located immediately behind the print head(s), as long as no significant image bleed-through occurs, at least not until after the substrate moves past the sensor.
(39) Another option to reduce non-printed margins is to use two sensors, one on each side of the print head. The two sensors could each be part of their own tracking system or share the electronics and/or other components between them. Velocity inputs to the dot clock generator would switch from one sensor to the other as the substrate moves past the print head(s). The switching could be based on which sensor detects a higher average image intensity, as an indication of which sensor has a complete image of the print substrate.
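The switching criterion described above reduces to comparing mean line intensities between the two sensors. A minimal sketch; the function name and sensor labels are illustrative, not from the patent:

```python
def select_sensor(frame_up, frame_down):
    """Choose which of two tracking sensors feeds the dot clock
    generator: the one whose line image is brighter on average,
    taken as an indication that it sees a complete print substrate."""
    mean_up = sum(frame_up) / len(frame_up)
    mean_down = sum(frame_down) / len(frame_down)
    return "upstream" if mean_up >= mean_down else "downstream"
```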
(40) A SELFOC® lens array, comprising an array of microlenses, might be workable as a small low-cost alternative to a telecentric lens. It would likely have motion artifacts due to the lens element pitch, but such artifacts may be tolerable for some applications.
(41) In one embodiment the telecentric lens consisted of a pair of 35 mm slide projector lenses joined front-to-front, combined with an aperture located to make it object-side telecentric. This lens assembly may be much larger than a final system needs to use. The large lens could be advantageous in a few applications, with its 80 mm standoff from lens to print substrate, but any size lens could be employed and be within the scope of the claims.
(42) In this manner, a high efficiency, low latency printing system with print substrate tracking can be realized. The tracking system may have its electronics physically integrated with the image sensor and lens, or may share electronics with other parts of the printing system.
(43) It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.