SYSTEMS AND METHODS FOR COMPRESSING 3D PRINTER DATA
20200364907 · 2020-11-19
Inventors
CPC classification
G06K15/128
PHYSICS
B29C64/386
PERFORMING OPERATIONS; TRANSPORTING
B22F10/80
PERFORMING OPERATIONS; TRANSPORTING
B33Y50/00
PERFORMING OPERATIONS; TRANSPORTING
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
Y02P10/25
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
B29C64/393
PERFORMING OPERATIONS; TRANSPORTING
G06K15/1276
PHYSICS
International classification
B29C64/393
PERFORMING OPERATIONS; TRANSPORTING
B33Y50/02
PERFORMING OPERATIONS; TRANSPORTING
Abstract
Two-dimensional layer data representing successive layers of objects to be 3D-printed is reduced in size to cut storage requirements and to relieve transmission bottlenecks that increase costs and reduce the efficiency of 3D printers. The layer data is encoded in accordance with systems and methods that take into account the 2D and 3D regularities inherent in such layer data, and that allow efficient, lossless reconstruction of the original 3D layer data when it is needed by the 3D printer.
Claims
1. A method for reducing the amount of data required by a 3D printer to print an object, said data including a plurality of bitmap layers each comprising a 2D array of pixels representing a cross sectional layer of the object, wherein said method includes the steps of: a) forming a first differential bitmap representing differences between corresponding pixels of a bitmap layer and an adjacent bitmap layer; and b) forming a second differential bitmap representing the differences between corresponding pixels in adjacent rows or columns of the first differential bitmap.
2. The method of claim 1, wherein the first differential bitmap is formed by computing a bitwise XOR between corresponding pixels of the bitmap layer and the adjacent bitmap layer.
3. The method of claim 1, wherein the second differential bitmap is formed by computing a bitwise XOR operation between corresponding pixels in adjacent rows or columns of the first differential bitmap.
4. The method of claim 3, further comprising the step of quadtree tiling the second differential bitmap to form a quadtree bitmap and a quadtree tile array.
5. The method of claim 4, wherein the quadtree tile array includes pixel values of tiles having one or more non-zero pixels.
6. The method of claim 4 wherein the quadtree bitmap represents tiles having one or more non-zero pixels by a pixel of value 1, and tiles having all zero pixels by a pixel value of 0.
7. The method of claim 4, wherein the quadtree tile array and quadtree bitmap are entropy coded.
8. An encoding method for reducing the amount of data required by a 3D printer to print an object, said data including a plurality of bitmap layers, each comprising a 2D array of pixels representing a cross sectional layer of the object, wherein said method includes the steps of: a) forming a first differential bitmap representing the differences between adjacent rows of a first bitmap layer; b) forming a second differential bitmap representing the differences between adjacent rows of a second bitmap layer; and c) forming a third differential bitmap representing the differences between the first and second differential bitmaps.
9. The method of claim 8 wherein the first differential bitmap layer is formed by computing the bitwise XOR between corresponding pixels in adjacent rows of the first bitmap layer.
10. The method of claim 8 wherein the second differential bitmap layer is formed by computing the bitwise XOR between corresponding pixels in adjacent rows of the second bitmap layer.
11. The method of claim 8 wherein the third differential bitmap layer is formed by computing the bitwise XOR between corresponding pixels of the first and second differential bitmaps.
12. The method of claim 8, further comprising the steps of quadtree tiling the third differential bitmap to form a quadtree bitmap and a quadtree tile array.
13. The method of claim 12, wherein the quadtree bitmap and quadtree tile array are entropy coded.
14. The method of claim 1, further comprising the step of forming a third differential bitmap representing the differences between adjacent columns of the first differential bitmap layer.
15. A method of compressing data corresponding to a plurality of bitmap layers each comprising a 2D array of pixels, comprising the steps of: a) forming a first differential bitmap representing differences between adjacent bitmap layers, b) forming a second differential bitmap representing differences between adjacent rows or columns of the first differential bitmap, and c) quadtree tiling the second differential bitmap to form a quadtree bitmap and a quadtree tile array.
16. The method of claim 15, further including the step of entropy coding the quadtree tile array and quadtree bitmap.
17. A method of compressing data corresponding to a plurality of bitmap layers each comprising a 2D array of pixels, comprising the steps of: a) forming a first differential bitmap representing differences between adjacent rows of a first bitmap layer, b) forming a second differential bitmap representing differences between adjacent rows of a second bitmap layer, c) forming a third differential bitmap representing the differences between the first and second differential bitmaps, and d) quadtree tiling the third differential bitmap to form a quadtree bitmap and a quadtree tile array.
18. The method of claim 17, further including the step of entropy coding the quadtree tile array and quadtree bitmap.
19. A method of decompressing data compressed in accordance with claim 16, said method decompressing a bitmap layer back to its original content by performing the steps of: a) reversing the entropy coding to reconstruct the quadtree tile array and quadtree bitmap; b) using the reconstructed quadtree tile array and quadtree bitmap to reconstruct the second differential bitmap; c) using the second differential bitmap to reconstruct the first differential bitmap; and d) using the first and second differential bitmaps to reconstruct the bitmap layer.
20. A non-transitory computer-readable medium that stores program instructions capable of being executed by a processor to perform the steps of claim 15.
21. A non-transitory computer-readable medium that stores program instructions capable of being executed by a processor to perform the steps of claim 19.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The features and advantages of this disclosure can be more fully understood with reference to the following detailed description, considered in conjunction with the accompanying Figures, wherein:
DETAILED DESCRIPTION
[0045] As discussed above, the input to the compression process is a series of bitmap images that each represent a two-dimensional cross section of a layer of an object to be printed. The bitmaps are typically 1 bit per pixel, defining whether material should be deposited by the 3D printer at that pixel (e.g., bit value 1) or not (bit value 0). However, the methods and systems disclosed herein may equally well be applied to printers that provide grey scale printing, in which case there could be two or more bits per pixel.
[0046] In an exemplary 3D printer, the uncompressed layer bitmaps for an object may, by way of example, have 32K pixels in the X direction and 16K pixels in the Y direction, which, assuming 1 bit per pixel, results in 64 MB of raw data per layer. In an exemplary embodiment of an object being built from 6000 layers, the total amount of raw data per print job would, in that case, be 384 GB, and 768 GB if a pixel is defined by 2 bits of data.
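The raw-data arithmetic in the paragraph above may be checked directly. The sketch below is illustrative only; it uses 1 MB = 2^20 bytes for the per-layer figure, and the job total is quoted, as in the disclosure, as 6000 layers of 64 MB each (i.e., 384 GB):

```python
# Raw-size arithmetic for the exemplary 3D printer described above.
X_PIXELS = 32 * 1024          # 32K pixels in the X direction
Y_PIXELS = 16 * 1024          # 16K pixels in the Y direction
BITS_PER_PIXEL = 1
LAYERS = 6000

layer_bytes = X_PIXELS * Y_PIXELS * BITS_PER_PIXEL // 8
layer_mb = layer_bytes // 2**20   # 64 MB of raw data per layer
job_mb = layer_mb * LAYERS        # 384,000 MB, i.e. the 384 GB per print job
```

Doubling `BITS_PER_PIXEL` to 2 doubles both figures, matching the 768 GB total cited for 2-bit pixels.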
[0047] Before further describing methods for compressing and decompressing such 3D layer data in accordance with this disclosure,
[0048] With reference to
[0049] In the exemplary embodiment of
[0050] In the
[0051] One or more memories are included in the
[0052] In the exemplary embodiment of
[0053] While
[0054] In an exemplary embodiment, uncompressed 3D layer data for one or more print jobs may be compressed off site and transmitted to the 3D printer over an external LAN (e.g., over LAN 134 of
[0055] In either case, to process such compressed data, the 3D printer system preferably includes a non-volatile computer-readable medium that stores a program for implementing the decoding (or decompression) steps disclosed herein. This permits the 3D printer system to decompress the 3D layer data for an object to be printed, and to provide it in the appropriate sequence and timing, as may be needed by the 3D printer to print the desired object.
[0056] In particular, when an operator of the 3D printer selects a particular print job having parameters stored in database 130, the compressed 3D layer data corresponding to that print job may be retrieved from the file system 128 (or another memory location where it may be stored) and decompressed, e.g., by the printer server 110 in accordance with the steps disclosed herein so that the decompressed data may be supplied as needed when the print job is being executed.
[0057] For this purpose, in the exemplary system embodiment shown in
[0058] As used herein, the term non-transient is intended to describe a computer-readable storage medium that excludes propagating electromagnetic signals, but which may otherwise include storage devices such as volatile memory (e.g., RAM) that does not necessarily store information permanently. Program instructions and data stored on a tangible computer-accessible storage medium in non-transitory form may further be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link.
[0059] The disclosed methods for compressing the 3D layer data may be modelled in three basic phases, which are outlined in
[0060] Referring again to
[0061] In practice, it may not be necessary to difference both rows and columns during Phase 2. For example, the inventor has empirically determined that differencing of only the adjacent rows may often suffice and that subsequent column differencing may only provide a relatively small amount of compression that may be insufficient to justify the additional processing time (which would typically double the processing time for Phase 2).
[0062] Significantly, the use of bitwise XOR operations for computing and encoding differences in Phases 1 and 2 allows one to change the order of the two phases for convenience of the implementation (since A XOR B is the same as B XOR A).
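The order-independence of the two XOR-differencing phases may be illustrated with the following sketch, in which each row of a layer is packed into a Python integer bitmask (the helper names are illustrative only and not part of the disclosure):

```python
def row_diff(rows):
    """Phase 2: XOR each row with the next; the last row is kept unchanged
    so the transform is reversible."""
    return [rows[i] ^ rows[i + 1] for i in range(len(rows) - 1)] + [rows[-1]]

def layer_diff(a, b):
    """Phase 1: bitwise XOR of two layers, row by row."""
    return [x ^ y for x, y in zip(a, b)]

# Two tiny 4-row layers, each row packed into an int bitmask.
L1 = [0b11110000, 0b11110000, 0b00001111, 0b00001111]
L2 = [0b11110000, 0b11111000, 0b00001111, 0b00000111]

# Phase 1 followed by Phase 2 ...
a = row_diff(layer_diff(L1, L2))
# ... gives the same result as Phase 2 followed by Phase 1.
b = layer_diff(row_diff(L1), row_diff(L2))
assert a == b
```

The equality holds because XOR is commutative and associative, so the row-wise and layer-wise XOR operations may be interleaved freely.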
[0063] As further depicted in
[0064] By way of example, in a first level quadtree, the quadtree tiling method will first tile the 2D bitmap for each layer into 2×2 pixel squares. For each such 4-pixel square, a representative single pixel is created that represents the 2×2 square. This will become more evident in the following description.
[0065] In accordance with the methods and systems disclosed herein for 1 or 2 bits/pixel layer data, the value of the representative pixel in each tile is determined by applying the OR logical operator to the 4 pixels of the tile. Hence, this single pixel is 0 if all 4 pixels of the original 2×2 tile are zeros, and 1 if any one of the 4 pixels is 1. The result of this step is a 4-times-smaller (or coarse-grained) bitmap compared to the original bitmap, showing regions that have at least one non-zero pixel.
[0066] This process may be iterated by applying the same 2×2 tiling process to this representative smaller image, yielding an overall 16-times-smaller representative bitmap. The coarse-graining OR step may be similarly applied to each new representative bitmap, successively reducing the new bitmap size by a factor of 4.
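The iterated 2×2 OR coarse-graining described above may be sketched as follows (an illustrative Python sketch; the function name is not from the disclosure):

```python
def coarsen(bm):
    """One quadtree level: OR each 2x2 tile of a 2D 0/1 bitmap down to a
    single representative pixel, yielding a bitmap 4 times smaller."""
    h, w = len(bm), len(bm[0])
    return [[bm[2*y][2*x] | bm[2*y][2*x+1] | bm[2*y+1][2*x] | bm[2*y+1][2*x+1]
             for x in range(w // 2)]
            for y in range(h // 2)]

bm = [
    [0, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
level1 = coarsen(bm)       # 2x2 bitmap: only the top-right tile is non-empty
level2 = coarsen(level1)   # 1x1 bitmap: the image has at least one set pixel
```

Each additional call to `coarsen` adds one level to the quadtree hierarchy, so three or four calls correspond to the 8×8 or 16×16 tiling discussed below.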
[0067] The reversible raw output of the entire quadtree iterative process is a 4-way tree data structure, with each parent node branching into 4 child nodes. Each node holds, besides the links to its child nodes, the pixel value of the node.
[0068] The raw 4-way tree may then be reduced in size by pruning the subtree descending from every node that has a (coarse grained) pixel value of 0, since in that case the OR operator-based coarse graining implies that all descendants of this node will also have a pixel value of 0.
[0069] For well-clustered images, this quadtree tiling procedure effectively tiles the image into variable-size squares that separate the image into empty regions (pixel value 0) and non-empty regions (pixel value non-zero).
[0070] In practice, the depth of the quadtree hierarchy may be determined by taking into account a tradeoff between the amount of computation and compression processing speed. For layer data encountered in common print jobs on an exemplary 3D printer, the inventor has empirically found that a preferable tradeoff may typically be met by 3-4 levels of quadtree processing (i.e., 8×8 or 16×16 tiling).
[0071] As depicted in
[0072] The output of the quadtree phase is a top-level quadtree bitmap plus an array of non-zero tiles (e.g., 8×8 or 16×16 pixels per tile) showing the order of pixels with value 1 (or non-zero value for multibit pixels) encountered in the coarse-grained bitmap as it is scanned from left to right.
[0073] The quadtree tiles having all pixels 0 are implied and hence do not have to be transmitted. Since they all have value 0 for all of their pixels, the decoder can regenerate them without requiring any additional information other than the quadtree bitmap.
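The two outputs described above, a per-tile occupancy bitmap plus an array holding only the non-empty tiles, may be sketched as follows (illustrative Python; the function name and tile-size parameter are not part of the disclosure):

```python
def quadtree_encode(bm, t):
    """Split bm into t x t tiles; emit a per-tile occupancy bitmap plus the
    pixel content of the non-empty tiles, in left-to-right scan order."""
    h, w = len(bm), len(bm[0])
    qbitmap, tiles = [], []
    for ty in range(h // t):
        qrow = []
        for tx in range(w // t):
            tile = [bm[ty*t + y][tx*t : tx*t + t] for y in range(t)]
            nonzero = any(any(row) for row in tile)
            qrow.append(1 if nonzero else 0)
            if nonzero:
                tiles.append(tile)   # all-zero tiles are implied, not stored
        qbitmap.append(qrow)
    return qbitmap, tiles

bm = [
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
]
qb, tiles = quadtree_encode(bm, 2)
```

Only the two non-empty 2×2 tiles are stored; the decoder regenerates the all-zero tiles from the 0 pixels of the occupancy bitmap alone.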
[0074] Optionally, XOR-based bitwise differencing as described above may be applied to quadtree bitmaps for successive layers (analogous to layer differences). This further step may be useful for very large images with regular, repetitive parts.
[0075] Other optional variations for this phase of processing may include selection of different numbers of quadtree hierarchy levels, selection of the left or right scan order, using tree encoding data types, packaging of the output components (the quadtree bitmaps and the tiles), and the like.
[0076] Although not shown in
[0077] Since these two components are of a distinct nature, with very different densities and patterns of 1's in their one-dimensional data streams, the two components may be encoded separately.
[0078] The entropy encoding empirically found to work well with such sparse, well-clustered one-dimensional data is the BZIP algorithm. This coder, which is based on the Burrows-Wheeler Transform (BWT), can be implemented using the open source library libbz2.
[0079] The compressed 3D layer data that may be sent to the 3D printer will generally consist of thousands of layers that have been efficiently encoded to take into account the 2D and 3D regularities present in such data, in accordance with the disclosed methods and systems. Advantageously, the compressed layer data for each such job containing thousands of layers may be packaged into an archive file for each job.
[0080] Each compressed layer resulting from the above three phases of compression followed by entropy coding may form a single variable size record in the archive file. A small archive header (e.g., 16 bytes) may include basic information such as the X and Y dimensions, the number of bits per pixel, an image header common to all images in the archive, as well as the number of images in the archive, the maximum buffer sizes for the compressed image records and the maximum expanded size for the quadtree tile components of each image record (i.e., the compressed quadtree bitmap and the tile arrays). The latter information is useful since it allows the decoder to allocate the input and intermediate output buffers only once.
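The disclosure names the archive-header fields but not a byte-level layout; the following is therefore purely an illustrative 16-byte packing under assumed field widths, sketched with Python's struct module:

```python
import struct

# Hypothetical 16-byte layout: x_dim, y_dim, bits_per_pixel, padding,
# and number of images. The widths and order here are assumptions for
# illustration only; the patent does not specify them.
HEADER = struct.Struct('<IIHHI')

def pack_header(x_dim, y_dim, bpp, n_images):
    return HEADER.pack(x_dim, y_dim, bpp, 0, n_images)

hdr = pack_header(32 * 1024, 16 * 1024, 1, 6000)
assert len(hdr) == 16
x, y, bpp, _, n = HEADER.unpack(hdr)
```

Fields such as the maximum buffer sizes would, in practice, occupy additional header or per-record space; they are omitted here to keep the sketch within the 16-byte example size.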
[0081] The image records may be packed sequentially back-to-back after the archive header and the common image header. Each record may be prefixed by the record size (single 4-byte integer) followed by the compressed quadtree bitmap and tile array. It is not necessary to transmit the sizes of these two separately compressed components to the decoder since the BZIP output is self-terminating. In other words, the BZIP decoder is provided only the total size of the combined image record, and it terminates automatically when it encounters the end of the first compressed component, providing the output size and the size of the input consumed. A second call may then be issued to the BZIP decoder with the remaining compressed record data to expand the second component.
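The self-terminating decoding relied on above can be illustrated with Python's bz2 module (a wrapper over libbz2). The sample component contents are arbitrary placeholders:

```python
import bz2

# Two separately compressed components, packed back-to-back as in a record.
quadtree_bitmap = bytes(64)                 # sparse component (all zeros here)
tile_array = b'\x01\x00\x10\x01' * 8        # denser component
record = bz2.compress(quadtree_bitmap) + bz2.compress(tile_array)

# First call: the decompressor stops at the end of the first BZIP stream.
d1 = bz2.BZ2Decompressor()
part1 = d1.decompress(record)
assert d1.eof                               # stream self-terminated

# Second call: expand the remaining bytes of the combined record.
d2 = bz2.BZ2Decompressor()
part2 = d2.decompress(d1.unused_data)
```

As described above, only the total record size needs to be transmitted; the boundary between the two compressed components is recovered from the end of the first BZIP stream itself.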
[0082] Further, for the convenience and flexibility of archive creation and use, special sentinel records may be supported (controlled by library APIs that may be used by clients). The sentinel records are compressed image records that do not difference a layer with the previous layer. The usefulness of such records is that they allow decoding of images that follow the sentinel record without having to decode the entire archive from the beginning. They also allow building of an archive in separate runs, by appending a sentinel record at the start of each new batch, without the need to decode an entire archive up to the new append point (in order to provide the previous layer for layer differencing of the first appended image).
[0083] In order to distinguish sentinel records from regular layer-differenced records, the archiver may store the record length of the sentinel record as -RecordLength (i.e. as a negative integer), which signals the decoder to skip the layer differencing step.
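The negative-length convention for sentinel records may be sketched as follows (illustrative Python; the helper names are not from the disclosure):

```python
import struct

def pack_record(payload, sentinel=False):
    """Prefix a compressed record with a signed 4-byte length; a negative
    length marks a sentinel (non-layer-differenced) record."""
    length = -len(payload) if sentinel else len(payload)
    return struct.pack('<i', length) + payload

def read_record(buf):
    """Recover the payload and the sentinel flag from a packed record."""
    (length,) = struct.unpack_from('<i', buf)
    sentinel = length < 0
    size = abs(length)
    return buf[4:4 + size], sentinel

data, is_sentinel = read_record(pack_record(b'compressed-bytes', sentinel=True))
```

On seeing a negative length, the decoder simply skips the layer-differencing step for that record.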
[0084] Having now outlined the basic steps of the compression (encoding) process, the decoding process for decompressing the 3D layer data back to its original form proceeds in the reverse order of the encoding. After reading the archive header and decoding the compressed common image header, the decoder will fetch compressed image records sequentially. This action may be initiated via library API calls.
[0085] For each compressed image record, decoding may be performed by the following sequence of steps:
1) Decompress (via the BZIP decoder) the quadtree bitmap from the record. The BZIP decoder returns the decoded bitmap, plus the amount of compressed data consumed for the decoding.
2) Decompress the rest of the compressed record to reconstruct the quadtree tiles array.
3) Initialize the output image bitmap with 0 pixels
4) Scan the quadtree bitmap (loop in y, then x coordinates) and, whenever a non-zero pixel is encountered, fetch the next decompressed tile from the tile array and insert it at the location (x_img, y_img) of the output image. The variables x, y are the loop counters for the quadtree bitmap scan, while x_img, y_img are output image coordinates (e.g., x_img=8*x, y_img=8*y for 8×8 tiles).
5) Un-difference the scan rows (and/or columns, depending on the differencing type used for the archive). The un-differencing may be done by performing XOR operations in the reverse order of the differencing performed by the encoder. For example, if the encoder XORs the 2nd row into the 1st, the 3rd row into the 2nd, . . . , the n-th row into the (n−1)-th, then the decoder will XOR the n-th row into the (n−1)-th row, the (n−1)-th row into the (n−2)-th row, . . . , the 3rd row into the 2nd, and the 2nd row into the 1st.
6) If the image is the first record in the archive or a sentinel record, the image is returned to the caller (after prepending the common image header) and retained also by the decoder as the previous layer.
7) Otherwise, the image is a layer-differenced image. Hence it is additionally XOR-ed with the previous layer to obtain the original image. The common image header is prepended and the complete image is returned to the caller. The newly decoded image is then retained as the previous layer for the next layer.
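Steps 3 through 5 above may be sketched on a toy 4×4 image with 2×2 tiles (illustrative Python; the helper names and toy data are not from the disclosure):

```python
def place_tiles(qbitmap, tiles, t):
    """Steps 3-4: start from an all-zero output image, then copy each stored
    tile in at the position of its '1' pixel in the quadtree bitmap."""
    h, w = len(qbitmap) * t, len(qbitmap[0]) * t
    img = [[0] * w for _ in range(h)]
    it = iter(tiles)
    for qy, qrow in enumerate(qbitmap):
        for qx, bit in enumerate(qrow):
            if bit:
                tile = next(it)
                for y in range(t):
                    for x in range(t):
                        img[qy*t + y][qx*t + x] = tile[y][x]
    return img

def undiff_rows(img):
    """Step 5: reverse the row differencing, working from the bottom row up."""
    for i in range(len(img) - 2, -1, -1):
        img[i] = [a ^ b for a, b in zip(img[i], img[i + 1])]
    return img

qb = [[0, 1], [1, 0]]
tiles = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
img = undiff_rows(place_tiles(qb, tiles, 2))
```

For a layer-differenced record, a final per-pixel XOR with the retained previous layer (step 7) would then recover the original bitmap.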
[0086] Library routines may also provide options for formatting the output image, as, for example, a raw TIFF or LZW-encoded TIFF image. This option can be applied to the decoded images before they are returned to the caller.
[0087] In practical implementations of a decoder constructed according to the methods and systems disclosed herein, and for performance reasons, the conceptually distinct un-differencing steps 5 and 7 described above may be combined into a single pass over the image rather than performed as separate passes. In such a case, each output image pixel would be handled only once and XOR-ed either with a corresponding pixel from the previous row (step 5) or with two pixels, one from the previous row and another from the previous layer (steps 5 and 7).
[0088] Still further, in some cases, pseudo-random dithering may be used to simulate the effect of grey shades using black and white (i.e., 0 and 1) pixels. Since such a pseudo-random, noise-like transformation of layer images will tend to superficially increase the apparent differences between layers, it is preferable for the encoder to perform row and layer differencing on un-dithered bitmaps. Any required dithering may, however, be applied subsequently by the decoder on the extracted un-dithered images.
[0089] Alternatively, dithering effects may be modeled in the encoder. While this may be more flexible in allowing variations of dithering type for different regions of the image, it is more complex and costlier in machine resources (e.g., in memory and processing times).
[0090] The following detailed description provides a specific illustrative example of how the encoding of two layers, L1 and L2, would be done in accordance with the methods described above.
[0091] For purposes of this illustrative example, and as shown in the example of
[0092] Further, the pixels in this illustrative example (e.g.,
[0093]
[0094] Row[0] XOR Row[1] → Row[0]
[0095] Row[1] XOR Row[2] → Row[1]
[0096] Row[2] XOR Row[3] → Row[2], etc.
[0097] The bitmap that results from XORing the rows of L1 is denoted as RX1 and is shown in
[0098] A third pair of images in
[0099] Also, note that in the case of the first layer L1, which does not have a previous layer to difference it with, the final output of the Phase 1+2 processing of L1 is only its Row-XOR-ed bitmap, RX1.
[0100] As evident from
[0101] For processing efficiency, the latter path may be advantageously chosen, since it reduces the storage requirements and allows combining of the layer and row XOR operations into a single pass over each input image. Namely, when layer 2 is being processed, the encoder can simultaneously XOR the rows of L2 and then XOR the result with the already Row-XOR-ed corresponding pixels of the previous layer, RX1, to obtain the pixels of LRX12.
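The single-pass combination described above may be sketched as follows, with rows packed into integer bitmasks (illustrative Python; the names L1, L2, RX1, and the helper functions follow the example's notation but the toy values are assumptions):

```python
def row_diff(rows):
    """Two-pass reference: XOR each row with the next; keep the last row."""
    return [rows[i] ^ rows[i + 1] for i in range(len(rows) - 1)] + [rows[-1]]

def encode_layer(curr, prev_rx):
    """Single pass over layer `curr`: row-difference it and, in the same
    loop, XOR with the already row-differenced previous layer `prev_rx`."""
    n = len(curr)
    out = []
    for i in range(n):
        rx = curr[i] ^ curr[i + 1] if i < n - 1 else curr[i]
        out.append(rx ^ prev_rx[i])
    return out

L1 = [0b0110, 0b0110, 0b0000, 0b0110]
L2 = [0b0110, 0b1111, 0b0000, 0b0110]
RX1 = row_diff(L1)   # retained from processing the previous layer

# One pass over L2 matches the two-pass result RX1 XOR row_diff(L2).
assert encode_layer(L2, RX1) == [a ^ b for a, b in zip(RX1, row_diff(L2))]
```

Each input pixel of L2 is thus read once, rather than once for the row XOR and again for the layer XOR.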
[0102] As discussed above, in Phase 3, the output bitmap LRX12 (
[0103] As shown in
[0104] Since the images in our illustrative example are of size 32×16 pixels, the 4×4 tiling will produce an 8×4 pixel quadtree bitmap (32/4 × 16/4 = 8×4), plus a variably-sized array of 4×4 tiles, one for each tile that is represented by a pixel of value 1 in the quadtree bitmap.
[0105] The tiling procedure is illustrated in
[0106] As shown in
[0107] In this example, note that only 8 out of the 32 quadtree pixels in QB12 50 have a value of 1. Hence, Phase 3 also produces an array 56 showing the content of those 8 tiles T1 . . . T8, each tile containing 4×4=16 pixels as shown in
[0108] Accordingly, the final output of Phase 3 for layer 2 provides two objects: (1) the QB12 bitmap and (2) the array T[8]=Array(T1 . . . T8) representing the content of the 8 tiles that have at least one non-zero bit (i.e. pixel value 1), and showing the position within the tile of such non-zero bits.
[0109] As a final step in the encoding process, each of these two objects is then passed separately to a conventional BZIP2 encoder, yielding two BZIP2 compressed chunks which are the final output for Layer 2 that may be saved to the print job archive.
[0110] Having provided an illustrative example of the encoding process for compressing the 3D layer data, the following example illustrates the decoding process for reversing that compression. As exemplary input to the decoder, we use the compressed data of Layer 2 produced in the encoding example described above.
[0111] During the decoding process, the layers are decoded sequentially (1, 2, 3, etc.) since that is typically the order in which layer data is provided to a controller in the 3D printer that controls the print head. Therefore, we will already have available expanded Layer 1 data, the bitmaps L1 and RX1 as described in the encoder example provided above and shown in
[0112] In a first Step 1, the decoder reads the compressed record for Layer 2 and decompresses the record (by using the BZIP decoder), to obtain the quadtree bitmap QB12 from the record. The BZIP decoder returns the decoded bitmap data QB12, plus the amount of compressed data consumed (i.e. the size of the first compressed chunk). In our illustrative decoding example, this decoded quadtree bitmap QB12 is the 4×8 bitmap shown in
[0113] In Step 2 the decoder (by again using the BZIP decoder) decompresses the rest of the compressed Layer 2 record (i.e. the second compressed chunk), to obtain the quadtree tile array T[8]=Array(T1 . . . T8). In our illustrative decoding example, the resulting quadtree tile array is shown in flattened format in
[0114] In Step 3, the decoder reconstructs the bitmap LRX12 by starting with an empty bitmap (all pixels are 0). To do so, the QB12 quadtree bitmap is scanned from left to right, starting at the top left and proceeding in a zig-zag raster scan to the bottom right. In the exemplary case discussed herein, where the tiles have 4×4 bits, a destination tile pointer in the LRX12 bitmap is advanced 4 pixels in a corresponding zig-zag scan that starts at the top left corner of the LRX12 bitmap for each advance of 1 pixel in QB12. If the current QB12 pixel is 1, then the next tile from the tile array T[8] is retrieved and copied into the LRX12 bitmap at the current location of the destination tile pointer.
[0115] This method is illustrated in
[0116] This process continues until all tiles from tile array T[8] are retrieved and copied into the LRX12 bitmap, which results in reconstruction of the original LRX12 bitmap (as shown in
[0117] In Step 4, original bitmap LX12 (see,
[0118] Row[N−1] XOR Row[N−2] → Row[N−2]
[0119] Row[N−2] XOR Row[N−3] → Row[N−3]
[0120] . . .
[0121] Row[2] XOR Row[1] → Row[1]
[0122] Row[1] XOR Row[0] → Row[0]
[0123] The resulting reconstructed LX12 bitmap is shown in
[0124] In Step 5, original Layer 2 (L2) is reconstructed from the previous layer L1 and the now reconstructed LX12 bitmap by first noting (from the above discussion of the encoder example) that LX12=L1 XOR L2. Using the properties of the XOR operator, L2=LX12 XOR L1, where L1 is the previous layer (retained by decoder after Layer 1 was decoded). The result, using the bitmap L1 from the encoder example as the previous layer is shown in
[0125] The foregoing sequence of steps may be repeated to reconstruct all of the other original layers required by the 3D printer to print the object.
[0126] As evident from this disclosure and the related Figures, the foregoing steps performed by the decoder on all encoded layers, transforms the encoded layers back to their original form, and reverses the original compression performed by the encoder in a lossless manner.
[0127] Now that exemplary embodiments of the present disclosure have been shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art, all of which are intended to be covered by the following claims.