VISION PRODUCT INFERENCE BASED ON PACKAGE DETECT AND BRAND CLASSIFICATION WITH ACTIVE LEARNING
20220129836 · 2022-04-28
Inventors
- Peter Douglas Jackson (Alpharetta, GA, US)
- Robert Lee Martin, JR. (Kenosha, WI, US)
- Daniel James Thyer (Charlotte, NC, US)
- Justin Michael Brown (Coppell, TX, US)
CPC classification
- G06Q10/087 (Physics)
- G06N5/01 (Physics)
International classification
- G06Q10/08 (Physics)
- G06K7/14 (Physics)
Abstract
A delivery system generates a pick sheet containing a plurality of SKUs based upon an order. A loaded pallet is imaged to identify the SKUs on the loaded pallet, which are compared to the order prior to the loaded pallet leaving the distribution center. The loaded pallet may be imaged while being wrapped with stretch wrap. At the point of delivery, the loaded pallet may be imaged again and analyzed to compare with the pick sheet.
Claims
1. A computer-implemented method for creating machine learning models, including: a) creating a plurality of brand nodes each having an associated brand, a plurality of package nodes each having an associated package and a plurality of SKU links, wherein each SKU link connects one of the plurality of brand nodes to one of the plurality of package nodes, wherein each SKU link represents a SKU having the associated brand and the associated package, wherein each of the plurality of brand nodes in a first subset of the plurality of brand nodes is connected by a first subset of the plurality of SKU links to more than one of the plurality of package nodes, and wherein each of the plurality of package nodes in a second subset of the plurality of package nodes is connected by a second subset of the plurality of SKU links to more than one of the plurality of brand nodes; b) determining a cut line to divide the plurality of SKU links into a first machine learning model and a second machine learning model, wherein the step of determining is performed based upon reducing a number of SKU links intersected by the cut line and based upon a tendency toward an equal number of SKU links in each machine learning model defined by the cut line; c) duplicating the SKU links intersected by the cut line in the first machine learning model and in the second machine learning model; and d) duplicating the brand nodes and the package nodes directly connected by the SKU links intersected by the cut line in the first machine learning model and the second machine learning model.
2. The method of claim 1 further including the step of: e) training the first machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the first machine learning model; and f) training the second machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the second machine learning model.
3. The method of claim 2 wherein the cut line is a first cut line further including the step of: during said step b), determining a second cut line to further divide the plurality of SKU links into a third machine learning model, wherein the second cut line does not intersect any SKU links, the method further including the step of training the third machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the third machine learning model.
4. The method of claim 3 wherein the brand nodes each represent a flavor of a beverage and wherein the package nodes each represent a package type containing the beverage.
5. The method of claim 4 wherein the flavors represented by the brand nodes include flavors of soft drinks and wherein the package type represented by the package nodes includes a first package type in which a certain number of cans are contained in a box.
6. A computing system for creating machine learning models including: at least one processor; and at least one non-transitory computer-readable media storing: instructions that, when executed by the at least one processor, cause the computer system to perform the following operations: a) receiving SKU information including brand and package type for each of a plurality of SKUs; b) creating a plurality of brand nodes each having an associated brand, a plurality of package nodes each having an associated package, and a plurality of SKU links, wherein each SKU link connects one of the plurality of brand nodes to one of the plurality of package nodes, wherein each SKU link represents one of the plurality of SKUs having the associated brand and the associated package, wherein each of the plurality of brand nodes in a first subset of the plurality of brand nodes is connected by a first subset of the plurality of SKU links to more than one of the plurality of package nodes, and wherein each of the plurality of package nodes in a second subset of the plurality of package nodes is connected by a second subset of the plurality of SKU links to more than one of the plurality of brand nodes; c) determining a cut line to divide the plurality of SKU links into a first machine learning model and a second machine learning model, wherein the step of determining is performed based upon reducing a number of SKU links intersected by the cut line and based upon a tendency toward an equal number of SKU links in each machine learning model defined by the cut line; d) duplicating the SKU links intersected by the cut line in the first machine learning model and in the second machine learning model; and e) duplicating the brand nodes and the package nodes directly connected by the SKU links intersected by the cut line in the first machine learning model and the second machine learning model.
7. The computing system of claim 6 wherein the operations further include: e) training the first machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the first machine learning model; and f) training the second machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the second machine learning model.
8. The computing system of claim 7 wherein the cut line is a first cut line, the operations further including the step of: during said operation b), determining a second cut line to further divide the plurality of SKU links into a third machine learning model, wherein the second cut line does not intersect any SKU links, the operations further including training the third machine learning model with a plurality of images of the plurality of SKUs represented by the SKU links in the third machine learning model.
9. The computing system of claim 8 wherein the brand nodes each represent a flavor of a beverage and wherein the package nodes each represent a package type containing the beverage.
10. The computing system of claim 9 wherein the flavors represented by the brand nodes include flavors of soft drinks and wherein the package type represented by the package nodes includes a first package type in which a certain number of cans are contained in a box.
11. A computing system for identifying SKUs in a stack of a plurality of packages of beverage containers comprising: at least one processor; and at least one non-transitory computer-readable media storing: a plurality of machine learning models that have been trained with a plurality of images of packages of beverage containers; and instructions that, when executed by the at least one processor, cause the computer system to perform the following operations: a) receiving at least one image of the stack of the plurality of packages of beverage containers; b) inferring a package type of each of the plurality of packages of beverage containers; c) based upon the package type inferred for each of the plurality of packages of beverage containers, choosing at least one of the plurality of machine learning models; and d) using the machine learning model chosen in step c) for each of the plurality of packages of beverage containers, inferring a brand of each of the plurality of packages of beverage containers based upon the at least one image.
12. The computing system of claim 11 wherein said operations further include: e) identifying at least one inferred SKU for each of the plurality of packages of beverage containers based upon the package type inferred in step b) and the brand inferred in step d).
13. The computing system of claim 12 wherein said operations further include: f) comparing the at least one inferred SKUs for each of the plurality of packages of beverage containers with a pick list representing a plurality of expected SKUs in an order.
14. The computing system of claim 13 wherein said operations further include: g) identifying an extra inferred SKU; h) identifying a missing expected SKU; i) determining whether the extra inferred SKU and the missing expected SKU are associated with one another in a SKU set; and j) based upon a determination in said step i) that the extra inferred SKU and the missing expected SKU are associated with one another in a SKU set, substituting the expected SKU for the inferred SKU or otherwise ignoring errors associated with steps g) and h).
15. The computing system of claim 11 wherein the at least one image includes a plurality of images from different sides of the stack of packages of beverage containers, wherein said operations further include associating portions of each of the plurality of images with one another corresponding to the same ones of the plurality of packages of beverage containers.
16. The computing system of claim 15 wherein said steps b) to d) are performed for each of the portions of each of the plurality of images.
17. The computing system of claim 16 wherein said operations further include generating a confidence level for the package type inferred for each of the portions of each of the plurality of images.
18. The computing system of claim 17 wherein said operations further include generating a confidence level for the brand inferred for each of the portions of each of the plurality of images.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0060] Each distribution center 12 includes one or more pick stations 30, a plurality of validation stations 32, and a plurality of loading stations 34. Each loading station 34 may be a loading dock for loading the trucks 18.
[0061] Each distribution center 12 may include a DC computer 26. The DC computer 26 receives orders 60 from the stores 16 and communicates with a central server 14. Each DC computer 26 receives orders and generates pick sheets 64, each of which stores SKUs and associates them with pallet ids. Alternatively, the orders 60 can be sent from the DC computer 26 to the central server 14 for generation of the pick sheets 64, which are synced back to the DC computer 26.
[0062] Some or all of the distribution centers 12 may include a training station 28 for generating image information and other information about new products 20 which can be transmitted to the central server 14 for analysis and future use.
[0063] The central server 14 may include a plurality of distribution center accounts 40, including DC1-DCn, each associated with a distribution center 12. Each DC account 40 includes a plurality of store accounts 42, including store 1-store n. The orders 60 and pick sheets 64 for each store are associated with the associated store account 42. The central server 14 further includes a plurality of machine learning models 44, trained as will be described herein based upon SKUs. The models 44 may be periodically synced to the DC computers 26 or may be operated on the server 14.
[0064] The machine learning models 44 are used to identify SKUs. A “SKU” may be a single variation of a product that is available from the distribution center 12 and can be delivered to one of the stores 16. For example, each SKU may be associated with a particular package type, e.g. the number of containers (e.g. 12 pack) in a particular form (e.g. can vs. bottle) and of a particular size (e.g. 24 ounces), optionally with a particular secondary container (cardboard vs. reusable plastic crate, cardboard tray with plastic overwrap, etc.). In other words, the package type may include both primary packaging (can, bottle, etc., in direct contact with the beverage or other product) and any secondary packaging (crate, tray, cardboard box, etc., containing a plurality of primary packaging containers).
[0065] Each SKU may also be associated with a particular “brand” (e.g. the manufacturer and the specific variation, e.g. flavor). The “brand” may also be considered the specific content of the primary package and secondary package (if any) for which there is a package type. This information is stored by the server 14 and associated with the SKU along with the name of the product, a description of the product, dimensions of the product, and optionally the weight of the product. This SKU information is associated with image information for that SKU in the machine learning models 44.
[0066] It is also possible that more than one variation of a product may share a single SKU, such as where only the packaging, aesthetics, and outward appearance of the product varies, but the content and quantity/size is the same. For example, sometimes promotional packaging may be utilized, which would have different image information for a particular SKU, but it is the same beverage in the same primary packaging with secondary packaging having different colors, text, and/or images. Alternatively, the primary packaging may also be different (but may not be visible, depending on the secondary packaging). In general, all the machine learning models 44 may be generated based upon image information generated through the training module 28.
[0067] Referring to
[0069] Workers place items 20 on the pallets 22 according to the pick sheets 64, and report the pallet ids to the DC computer 26 in step 154 (
[0070] The DC computer 26 records the pallet ids of the pallet(s) 22 that have been loaded with particular SKUs for each pick sheet 64. The pick sheet 64 may associate each pallet id with each SKU.
[0071] After being loaded, each loaded pallet 22 is validated at the validation station 32, which may be adjacent to or part of the pick station 30. As will be described in more detail below, at least one still image, and preferably several still images or video, of the products 20 on the pallet 22 is taken at the validation station 32 in step 156 (
[0072] First, referring to
[0073] In one implementation, the camera 68 may be continuously determining depth while the turntable 67 is rotating. When the camera 68 detects that the two outer ends of the pallet 22 are equidistant (or otherwise that the side of the pallet 22 facing the camera 68 is perpendicular to the camera 68 view), the camera 68 records a still image. The camera 68 can record four still images in this manner, one of each side of the pallet 22.
[0074] The RFID reader 70 (or barcode reader, or the like) reads the pallet id (a unique serial number) from the pallet 22. The wrapper 66a includes a local computer 74 in communication with the camera 68 and RFID reader 70. The computer 74 can communicate with the DC computer 26 (and/or server 14) via a wireless network card 76. The image(s) and the pallet id are sent to the server 14 via the network card 76 and associated with the pick list 64 (
[0075] As an alternative, the turntable 67, camera 68, RFID reader 70, and computer 74 of
[0076] Alternatively, the validation station can include the camera 68 and RFID reader 70 (or barcode reader, or the like) mounted to a robo wrapper (not shown). As is known, instead of holding the stretch wrap 72 stationary and rotating the pallet 22, the robo wrapper travels around the loaded pallet 22 with the stretch wrap 72 to wrap the loaded pallet 22. The robo wrapper carries the camera 68, RFID reader 70, computer 74, and wireless network card 76.
[0077] Alternatively, referring to
[0078] Other ways can be used to gather images of the loaded pallet. In any of the methods, the image analysis and/or comparison to the pick list is performed on the DC computer 26, which has a copy of the machine learning models. Alternatively, the analysis and comparison can be done on the server 14, locally on a computer 74, or on the mobile device 78, or on another locally networked computer.
[0079] As mentioned above, the camera 68 (or the camera on the mobile device 78) can be a depth camera, i.e. it also provides distance information correlated to the image (e.g. pixel-by-pixel distance information or distance information for regions of pixels). Depth cameras are known and utilize various technologies such as stereo vision (i.e. two cameras) or more than two cameras, time-of-flight, or lasers, etc. If a depth camera is used, then the edges of the products stacked on the pallet 22 are easily detected (i.e. the edges of the entire stack and possibly edges of individual adjacent products either by detecting a slight gap or difference in adjacent angled surfaces). Also, the depth camera 68 can more easily detect when the loaded pallet 22 is presenting a perpendicular face to the view of the camera 68 for a still image to be taken.
[0080] However the image(s) of the loaded pallet 22 are collected, the image(s) are then analyzed to determine the SKU of every item 20 on the pallet 22 in step 158 (
[0082] In practice, there may be hundreds or thousands of such SKUs and there would likely be two to five models 231. If there are even more SKUs, there could be more models 231.
[0083] Within each of models 231a and 231b, all of the brand nodes 232 and package nodes 234 are connected in the graph, but this is not required. In fact, there may be one or more (four are shown) SKUs that are in both models 231a and 231b. There is a cut-line 238a separating the two models 231a and 231b. The cut-line 238a is positioned so that it cuts through as few SKUs as possible but also with an aim toward having a generally equal or similar number of SKUs in each model 231. Each brand node 232 and each package node 234 of the SKUs along the cut-line 238a are duplicated in both adjacent models 231a and 231b. For the separation of model 231c from models 231a and 231b, it was not necessary for the cut line 238b to pass through (or duplicate) any of the SKUs or nodes 232, 234.
[0084] In this manner, the models 231a and 231b both learn from the SKUs along the cut line 238a. The model 231b learns more about the brand nodes 232 in the overlapping region because it also learns from those SKUs. The model 231a learns more about the package types 234 in the overlapping region because it also learns from those SKUs. If those SKUs were only placed in one of the models 231a, 231b, then the other model would not have as many samples from which to learn.
[0085] In brand model 231c, for example, as shown, there are a plurality of groupings of SKUs that do not connect to other SKUs, i.e. they do not share either a brand or a package type. The model 231c may have many (dozens or more) of such non-interconnected groupings of SKUs. The model 231a and the model 231b may also have some non-interconnected groupings of SKUs (not shown).
[0086] Referring to
[0087] This process is performed initially when creating the machine learning models and again when new SKUs are added. Initially, a target number of SKUs per model or a target number of models may be chosen to determine a target model size. Then the largest subgraph (i.e. a subset of SKUs that are all interconnected) is compared to the target model size. If the largest subgraph is within a threshold of the target model size, then no cuts need to be made. If the largest subgraph is more than a threshold larger than the target model size, then the largest subgraph will be cut according to the following method. In step 240, the brand nodes 232, package nodes 234, and SKU links 236 are created. In steps 242 and 244, the cut line 238 is determined as the fewest number of SKU links 236 to cut (cross), while placing a generally similar number of SKUs in each model 231. The balance between these two factors may be adjusted by a user, depending on the total number of SKUs, for example. In step 246, any SKU links 236 intersected by the “cut” are duplicated in each model 231. In step 248, the brand nodes 232 and package nodes 234 connected to any intersected SKU links 236 are also duplicated in each model 231. In step 250, the models 231a, 231b, 231c are then trained according to one of the methods described herein, such as with actual photos of the SKUs and/or with the virtual pallets.
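The cut-line selection of steps 242-248 can be illustrated with a minimal sketch. This is a hypothetical Python illustration, not the patented implementation: the SKU and node names are invented, the balance weight is arbitrary, and the sketch brute-forces every two-sided node assignment, which is practical only for very small graphs. Duplicating an intersected SKU link in both models implicitly duplicates its endpoint brand and package nodes.

```python
from itertools import product

def split_models(sku_links, balance_weight=1.0):
    """Search for a cut that divides the SKU links into two models,
    minimizing the number of links the cut intersects while tending
    toward an equal link count per model (steps 242-244). Intersected
    links (and hence their endpoint nodes) are duplicated in both
    models (steps 246-248)."""
    brands = sorted({b for b, _ in sku_links})
    packages = sorted({p for _, p in sku_links})
    nodes = brands + packages
    best = None
    for bits in product([0, 1], repeat=len(nodes)):
        if len(set(bits)) < 2:          # both models must be non-empty
            continue
        side = dict(zip(nodes, bits))
        cut = [l for l in sku_links if side[l[0]] != side[l[1]]]
        m0 = [l for l in sku_links if side[l[0]] == 0 and side[l[1]] == 0]
        m1 = [l for l in sku_links if side[l[0]] == 1 and side[l[1]] == 1]
        # fewest intersected links, plus a penalty for imbalance
        score = len(cut) + balance_weight * abs(len(m0) - len(m1))
        if best is None or score < best[0]:
            best = (score, m0 + cut, m1 + cut, cut)  # duplicate cut links
    return best[1], best[2], best[3]

# two clusters of SKUs joined by a single bridging SKU link
links = [("colaA", "12can"), ("colaA", "2Lbottle"),
         ("colaB", "12can"), ("colaB", "2Lbottle"),
         ("lemonA", "6bottle"), ("lemonA", "1Lbottle"),
         ("lemonB", "6bottle"), ("lemonB", "1Lbottle"),
         ("colaA", "6bottle")]
model1, model2, cut_links = split_models(links)
```

In this toy graph the only balanced single-link cut is the bridging SKU, so that SKU link (and its brand and package nodes) ends up in both models.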
[0088] Referring to
[0091] Referring to
[0092] Referring to
[0094] Referring to
[0095] For each item (i.e. the images stitched together), the package face(s) with lower-confidence package types are overridden with the highest-confidence package type among the package face images for that item. In other words, the package type with the highest confidence out of all the package face images for that item overrides any different package type inferred for the rest of the package faces for that same item.
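The override in paragraph [0095] can be sketched in a few lines. This is an illustrative Python sketch with invented field names, not the actual system:

```python
def resolve_package_type(faces):
    """For one stitched item, override every face's inferred package type
    with the package type of the highest-confidence face."""
    best = max(faces, key=lambda f: f["confidence"])
    return [{**f, "package_type": best["package_type"]} for f in faces]

faces = [
    {"face": "front", "package_type": "12-pack can box", "confidence": 0.97},
    {"face": "side",  "package_type": "24-pack can box", "confidence": 0.41},
]
resolved = resolve_package_type(faces)
```

Here the low-confidence side face is overridden by the front face's package type.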
[0096] For the two examples shown in
[0097] In step 313 of
[0098] The machine learning model (e.g. models 231a, b, or c of
[0099] Referring to
[0100] The example shown in
[0101] It should be noted that some products are sold to stores in groups of loose packages. All of the packages are counted and divided by the number of packages sold in a case to get the inferred case quantity. The case quantity is the quantity that stores are used to dealing with on orders.
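The case-quantity arithmetic of paragraph [0101] is simply the loose-package count divided by the per-case package count. A one-line illustration (the example numbers are invented):

```python
def inferred_case_quantity(loose_package_count, packages_per_case):
    """Convert a count of loose packages into the case quantity that
    stores order in."""
    return loose_package_count / packages_per_case

# e.g. 18 loose six-packs, sold 4 packages to a case
qty = inferred_case_quantity(18, 4)
```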
[0102] The pick list, which has the expected results, is then leveraged against the actual inferred results. There should be high confidence that there is an error before an error is reported, so that too many false errors are not reported. Several example algorithms disclosed herein leverage the known results of the pick list to make corrections so that too many false errors are not reported: 1) Override Multiple Face View; 2) Indistinguishable SKU sets; 3) Low confident brand override; 4) Unverifiable Package Type Set; 5) Unverifiable SKU; 6) Override Single Face View; 7) SKU with unverifiable quantity; 8) Multiple Face View Count Both Products. The aforementioned sequence is preferred for a particular constructed embodiment. The sequence of the algorithm flow may be important because each algorithm consumes the extra and/or missing SKUs from the errors, such that those extra and/or missing SKUs will not be available further down the flow for another algorithm.
1) Override Multiple Face View Heuristic
[0103] The stitching algorithms associate all the visible faces of the same package. Sometimes one of the lower-confidence faces gives the correct package type or brand. The system leverages the expected SKUs from the pick list and makes corrections if the most confident face was not on the pick list but a less confident face was.
[0104] For the following example in
[0105] Referring to the flow chart of
[0106] In step 420, it is determined whether any other package face on the pallet matches the missing expected SKU.sub.A. If not, in step 422, it is determined if a lower-confidence package face of the package (the package previously determined to be an extra SKU.sub.1) matches the missing expected SKU.sub.A. If so, then the lower-confidence package face (same as the missing expected SKU.sub.A) is used to override the SKU.sub.1 in the inferred SKU set in step 424. If not, then SKU.sub.A and SKU.sub.1 are both flagged as errors in step 426.
[0107] Optionally, steps 420 to 424 are only performed if the confidence in the extra inferred SKU.sub.1, although the highest-confidence face of that package, is below a threshold. If not, the errors are generated in step 426.
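Steps 420-426 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the confidence ceiling of the optional gate in paragraph [0107] is an invented value, faces are modeled as (SKU, confidence) pairs, and error reporting is reduced to a list of strings:

```python
def override_multiple_face_view(package_faces, missing_sku,
                                other_pallet_faces, confidence_ceiling=0.9):
    """For one extra inferred package: if no other face on the pallet
    matches the missing expected SKU (step 420) but a lower-confidence
    face of this package does (step 422), substitute the expected SKU
    (step 424); otherwise flag both errors (step 426)."""
    best_sku, best_conf = max(package_faces, key=lambda f: f[1])
    errors = ["extra:" + best_sku, "missing:" + missing_sku]
    if best_conf >= confidence_ceiling:        # optional gate ([0107])
        return best_sku, errors
    if any(sku == missing_sku for sku, _ in other_pallet_faces):  # step 420
        return best_sku, errors
    if any(sku == missing_sku for sku, _ in package_faces):       # step 422
        return missing_sku, []                                    # step 424
    return best_sku, errors                                       # step 426

faces = [("SKU-1", 0.62), ("SKU-A", 0.35)]
resolved, errors = override_multiple_face_view(faces, "SKU-A",
                                               other_pallet_faces=[])
```

Here the package's lower-confidence face matches the missing expected SKU, so the expected SKU overrides the inferred one and no error is flagged.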
[0108] The multiple face view algorithm of
2) Indistinguishable SKU Sets
[0110] The inference sometimes has a difficult time distinguishing between particular sets of two or more SKUs. A flowchart regarding the handling of indistinguishable SKU sets is shown in
[0111] For example, as shown in
[0112] Referring to
[0113] Another example of an indistinguishable SKU set is the 700 ml Lifewater product, which presently looks identical to the 1 L Lifewater product, the latter being only slightly bigger. Apparent size is also dependent on placement on the pallet, and products farther away from the camera appear smaller. These SKUs are added as an indistinguishable SKU set so that adjustments can be made and too many false errors are not reported.
[0114] If an inferred result is updated based on the indistinguishable SKU set logic, and the quantity of that SKU now matches the quantity on the pick list, then a property is set for that SKU to indicate that the system cannot confirm that SKU. No error is flagged, but the SKU is labeled “unconfirmed.”
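The indistinguishable-set substitution can be sketched as below. This illustrative Python sketch simplifies paragraphs [0111]-[0114] (for instance, it does not separately check quantity matching before labeling a SKU unconfirmed); SKU names are invented:

```python
from collections import Counter

def apply_indistinguishable_sets(inferred, expected, sku_sets):
    """If an extra inferred SKU and a missing expected SKU belong to the
    same indistinguishable SKU set, rename the inferred SKU to the
    expected one and mark it 'unconfirmed' instead of flagging an error."""
    inf, exp = Counter(inferred), Counter(expected)
    extras = list((inf - exp).elements())
    missing = list((exp - inf).elements())
    result, unconfirmed = list(inferred), []
    for e in extras:
        for m in missing:
            if any(e in s and m in s for s in sku_sets):
                result[result.index(e)] = m   # substitute expected SKU
                missing.remove(m)
                unconfirmed.append(m)
                break
    return result, unconfirmed

sets = [{"Lifewater-700ml", "Lifewater-1L"}]
result, unconfirmed = apply_indistinguishable_sets(
    ["Lifewater-700ml", "Cola-12pk"], ["Lifewater-1L", "Cola-12pk"], sets)
```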
[0115] It may be a time-consuming process to identify all the required SKU sets. Additionally, different SKU sets need to be added and removed each time the models are trained. Further, as the active learning tool is used, some SKU sets are no longer needed. Therefore, a SKU set generation tool is provided that reviews the labeled pallets and automatically creates the SKU sets when the machine learning incorrectly identifies a SKU.
[0116] The following process scales the creation of the best SKU sets:
[0117] Manual Detect—Every time a new SKU set is discovered manually, the pallet is labeled and stored in a location used to generate SKU sets.
[0118] Discover best SKU sets from Virtual Pallets—However, it takes a long time to manually label pallets, and manually labeling pallet images is also prone to errors. Therefore, thousands of virtual pallets are built with labeled images, and the tool is used to find all the SKUs that the inference mixes up. In other words, virtual pallets are generated with images of known SKUs, and those virtual pallet images are analyzed using the machine learning models as described above. It is determined which SKUs are often confused with one another by the system, based upon an image having a known SKU but being inferred to have a different SKU. If that happens at a high enough rate, then those SKUs (two or more) are determined to be a SKU set. Indistinguishable SKU sets are generated automatically with those SKUs.
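The confusion-rate test of paragraph [0118] can be sketched as follows. This is an illustrative Python sketch: the rate and count thresholds are invented, and the input is reduced to (true SKU, inferred SKU) pairs from labeled virtual pallets:

```python
from collections import Counter

def generate_sku_sets(labeled_results, min_rate=0.2, min_count=5):
    """Count how often a known (true) SKU from a labeled virtual pallet
    is inferred as a different SKU; when a particular confusion happens
    often enough, emit the pair as an indistinguishable SKU set."""
    totals = Counter(true for true, _ in labeled_results)
    confusions = Counter((t, i) for t, i in labeled_results if t != i)
    sku_sets = []
    for (t, i), n in confusions.items():
        if n >= min_count and n / totals[t] >= min_rate:
            pair = {t, i}
            if pair not in sku_sets:
                sku_sets.append(pair)
    return sku_sets

# 700 ml product inferred as the 1 L product 6 times out of 10
results = ([("700ml", "1L")] * 6 + [("700ml", "700ml")] * 4
           + [("Cola", "Cola")] * 10)
sku_sets = generate_sku_sets(results)
```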
3) Low Confidence Brand Override
[0119] In an implemented embodiment, the package type model is more accurate than the brand models. If the package type expected from the pick list is inferred, then any brand error should not be reported unless there is sufficient confidence that there is a brand error. If the inferred package type matches the package type expected from the pick list, then the inferred brand will be overridden based on the expected brand from the pick list if the brand confidence of the inferred brand is less than the threshold.
[0120] A sample flowchart for handling the low confidence brand override is shown in
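The low confidence brand override of paragraph [0119] can be sketched as a small decision function. This is an illustrative Python sketch: the threshold value, brand names, and return codes are invented, and per-package-type thresholds (paragraph [0121]) are not modeled:

```python
def maybe_override_brand(inferred_pkg, inferred_brand, brand_conf,
                         expected_pkg, expected_brand, threshold=0.5):
    """When the inferred package type matches the pick list but the
    brand confidence is below the threshold, trust the expected brand
    from the pick list and mark the result 'unconfirmed' rather than
    flagging a brand error."""
    if inferred_pkg == expected_pkg and brand_conf < threshold:
        return expected_brand, "unconfirmed"
    if inferred_pkg == expected_pkg and inferred_brand != expected_brand:
        return inferred_brand, "error"   # confident brand mismatch
    return inferred_brand, "ok"

brand, status = maybe_override_brand("12-pack can box", "BrandX", 0.32,
                                     "12-pack can box", "BrandY")
```

Setting `threshold` to 1.0 for a given package type reproduces the unbranded-cardboard-box behavior of paragraph [0121], where the expected brand always overrides the inference.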
4) Unverifiable Package Type Set
[0121] Optionally, the low confidence threshold can be set based on the inferred package type, such that different package types have different low confidence thresholds. Some package types are unbranded cardboard boxes where it is impossible to infer the brand better than a guess. The threshold for these package types can be set to always override the brand inference with the expected brand from the pick list. In other words, if the inferred package type is unbranded cardboard box, and if the quantity of inferred unbranded cardboard boxes matches the expected quantity (from the pick list), then no error will be flagged, but they will be marked “unconfirmed.”
[0122] Any of the results from the inference that are updated and also match the quantity on the pick list are set to have a “cannot confirm” property (rather than “error”) so that the QA person knows that brand was unable to be confirmed.
[0123] If one or more of an inferred SKU is updated based upon the pick list, but not in the correct quantity expected from the pick list, then there will still be a confirmed error.
5) Unverifiable SKU
[0124] SKUs that the system is poor at identifying are marked as unverifiable in the database. This list should be kept small, as the logic can have negative repercussions as well.
[0125] If a SKU that is marked “unverifiable” in the database is on the pick list but missing from the inferred results, and there is at least one extra SKU in the inferred results, then the least confident extra SKU is overridden and renamed with the expected unverifiable SKU from the pick list. The SKU will still have an error if the quantity is short, but if the inferred quantity matches the pick list quantity then the SKU is set to “cannot confirm” and not flagged as an error.
[0126] A sample flowchart for handling unverifiable SKUs is shown in
[0127] In step 376 it is determined whether the missing SKU.sub.A is indicated as an “unverifiable SKU.” If not, then the missing SKU.sub.A is indicated as an error in step 384. If it is, then in step 378 it is determined if there is at least one extra SKU inferred. If not, then an error is indicated in step 384. If there is at least one extra SKU inferred, then in step 380 the lowest-confidence inferred extra SKU.sub.1 is selected from the extra inferred SKU(s). In step 382, the missing expected SKU.sub.A is substituted for the lowest-confidence inferred SKU.sub.1 in the inferred set of SKUs, marked as “unconfirmed,” but not as an error.
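Steps 376-384 can be sketched as below. This illustrative Python sketch models the extra SKUs as (SKU, confidence) pairs and returns the substitution to apply; the names and confidences are invented:

```python
def handle_unverifiable_sku(missing_sku, extra_skus, unverifiable):
    """If a missing expected SKU is marked unverifiable (step 376) and at
    least one extra SKU was inferred (step 378), substitute the expected
    SKU for the lowest-confidence extra (steps 380-382) and mark it
    'unconfirmed'; otherwise flag an error (step 384)."""
    if missing_sku not in unverifiable or not extra_skus:  # steps 376, 378
        return None, "error"                               # step 384
    lowest, _ = min(extra_skus, key=lambda s: s[1])        # step 380
    return (lowest, missing_sku), "unconfirmed"            # step 382

substitution, status = handle_unverifiable_sku(
    "NEW-SKU", [("SKU-9", 0.71), ("SKU-3", 0.44)], unverifiable={"NEW-SKU"})
```

Here the lowest-confidence extra (SKU-3) is replaced by the expected unverifiable SKU, which is marked unconfirmed rather than flagged as an error.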
[0128] One good way to leverage this functionality is for a new SKU that has not yet been trained in the models. The new SKU can be marked “unverifiable” in the database and/or the models. If the “missing SKU.sub.A” is the new product, and if the package detector model is able to detect the presence of the product without training, then the case count will still match the pick list. An extra inferred SKU.sub.1 will be overridden with the new SKU.sub.A. The unverifiable SKU logic will show that SKU.sub.A as “cannot confirm,” but not show a false error. All of this can occur before any machine learning training of that new SKU.
[0129] Optionally, in step 386, the images for a new SKU.sub.A can be used to train the machine learning models so that the new SKU.sub.A could be recognized in the future. Optionally, these images for the new SKU.sub.A would not be used to train the machine learning model until confirmed by a human.
6) Single Face View Heuristic
[0130] Most of the time, the stitching algorithm can connect two or more package faces of the same item. The inference is improved when there are multiple package faces, because the highest-confidence package type and highest-confidence brand are used to get the most confident package. Heuristic logic is also used in the multiple face view algorithm to make additional corrections.
[0131] The system is more likely to be wrong when there is only one package face to work with. The picker can place a package on the pallet in a position where only one package face is visible.
[0132] Referring to
[0133] If the inferred package type of a single face view package is not on the pick list, then other missing SKUs on the pick list with dimensions similar to the inferred one are examined. In step 439, if a package type missing from the pick list has a length and height very similar to those of the extra inferred package type, as determined in step 440, then the correction is made in step 442 to substitute the missing SKU for the extra inferred SKU. If there is more than one such missing SKU on the pick list, then the one with the greatest brand confidence is used for the correction.
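The dimension-matching correction of steps 439-442 can be sketched as below. This illustrative Python sketch assumes dimensions in inches and an invented similarity tolerance; missing pick-list SKUs are modeled as (SKU, length, height, brand confidence) tuples:

```python
def match_by_dimensions(extra_face, missing_skus, tolerance=0.5):
    """For a single-face package not on the pick list, find missing
    pick-list SKUs whose length and height are within tolerance of the
    inferred face (steps 439-440) and substitute the one with the
    greatest brand confidence (step 442)."""
    l, h = extra_face["length"], extra_face["height"]
    candidates = [(sku, conf) for sku, ml, mh, conf in missing_skus
                  if abs(ml - l) <= tolerance and abs(mh - h) <= tolerance]
    if not candidates:
        return None          # no dimensionally similar missing SKU
    return max(candidates, key=lambda c: c[1])[0]

face = {"length": 10.5, "height": 5.0}
missing = [("SKU-A", 10.4, 5.1, 0.30), ("SKU-B", 10.6, 4.9, 0.55),
           ("SKU-C", 16.0, 8.0, 0.90)]
best = match_by_dimensions(face, missing)
```

SKU-A and SKU-B are both dimensionally similar, so the one with the greater brand confidence is chosen; SKU-C is rejected despite its high confidence because its dimensions do not match.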
7) SKU with Unverifiable Quantity
[0134] The quantity of some SKUs on the top of the pallet cannot be determined from the images. The pallet weight is used to help determine the SKU quantity.
[0135] A sample flowchart for a SKU with unverifiable quantity is shown in
[0136] In step 390, the SKUs for all the items on the pallet (for example) are inferred according to any of the methods described herein. In step 392, the inferred SKUs are compared to the pick list. In step 394 it is determined if SKU.sub.1 (package faces 29 and 34) is on the top layer of the stack of products in the images. If not, the quantity is resolved in step 404 (i.e. there are two). If it is on the top layer, then it is determined in step 396 if SKU.sub.1 appears in the same mirror-image X coordinate position in the front and back images (within a tolerance threshold). If it does not, the quantity is resolved in step 404 (i.e. there are two).
[0137] In step 398, it is determined if SKU.sub.1 is visible on a perpendicular side (here, the left or right end) image. If so, the quantity would be resolvable in one of the perpendicular images in step 404 because the perpendicular image would show the quantity (e.g. one versus two).
[0138] If the SKU.sub.1 was not recognized in a perpendicular image, then it is determined in step 400 if the inferred SKU.sub.1 has the property (e.g. dimensionally and orientationally) that it must be visible on both the front and the back pallet face. If it must, then quantity is resolved in step 404 (e.g., there is one). For example, for a product having a shorter side and a longer side, it is determined whether the shorter side or the longer side is facing the front and/or back pallet faces. If the shorter side is facing the front and/or back pallet faces, and if the longer side dimension exceeds a threshold (e.g. 10.5 inches for a half-pallet), then it is determined that the same SKU.sub.1 is visible in both the front and back pallet faces and quantity is resolved as one in step 404. The total determined quantity (i.e. including any others stacked on the pallet) is then compared to the pick list.
[0139] On the other hand, if the longer side is facing the front and/or back pallet face (as in the illustrated example), and if the shorter side is less than the threshold, then it is determined that it is possible that there are two such SKUs side-by-side, such that the system may be seeing one on the front pallet face and a different one on the back pallet face, and the system proceeds to step 402. In step 402, weight is used to determine whether there is one or there are two. The weight of the plurality of products and the pallet can be compared to an expected weight of the plurality of products from the pick list (and/or the other verified SKUs) and the pallet to determine if the total weight suggests that there are two such SKUs or one such SKU. The determined quantity is then compared to the pick list.
[0140] It should also be recognized that the particular SKU may have two sides that are both greater than or both less than the threshold. If both are greater, the quantity is resolved as one in step 404. If both are less, then quantity is determined by weight in step 402.
[0141] It should also be noted that on all layers except for the top layer on the pallet, if dimensionally and orientationally possible, it is presumed that there are two items of SKU.sub.1.
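The quantity-resolution flow of steps 394-404 can be summarized as a small decision function; the boolean inputs stand in for the image analyses described above, and all names are illustrative rather than taken from the actual implementation.

```python
def must_span_depth(short_side, long_side, facing_long, threshold=10.5):
    """Step 400: the package must be visible on both the front and back
    pallet faces when its depth (the side pointing into the pallet)
    exceeds the threshold (e.g. 10.5 inches for a half-pallet)."""
    depth = short_side if facing_long else long_side
    return depth > threshold

def resolve_quantity(on_top_layer, mirrored_x, seen_perpendicular,
                     perp_count, spans_depth, weight_says_two):
    """Decision flow of steps 394-404 for one SKU."""
    if not on_top_layer:
        return 2                 # lower layers: presumed two (step 404)
    if not mirrored_x:
        return 2                 # different front/back positions: two
    if seen_perpendicular:
        return perp_count        # perpendicular view shows the count
    if spans_depth:
        return 1                 # same item seen on both faces
    return 2 if weight_says_two else 1   # step 402: resolve by weight
```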
[0142] Sometimes the multiple face view is needed to correct stitching errors of missing product. This can occur because of holes and other factors. The multiple face view can correct a stitching error where the case count shows a missing product because two products were incorrectly stitched together, reducing the count.
[0143] Unverifiable quantity logic is added to the multiple face view. If the highest-confidence inferred package face is on the pallet 22, but the lesser-confidence inferred package face is missing, then the missing product should also be corrected. The multiple face view can increase the case count on the pallet by counting both the highest-confidence package face and the lesser-confidence package face of the different package type.
[0144] Sometimes there could be more than one missing product on the pick list matching the lesser-confidence package type from the multiple face view inference. In that case the brand inference is used to match to the best missing one from the pick list.
[0145] Brand confidence is used, based on a threshold, to block the addition of extra products, but the threshold is ignored if the missing SKU has an underperforming brand.
[0146] The weight checksum is used to block the addition of a product when the weight does not make sense.
Weight Checksum
[0147] There are many heuristics that can make corrections between package types inferred and ones that are missing from the pick list:
[0148] Indistinguishable SKU sets
[0149] Override multiple face view
[0150] Override Single face view
[0151] Unverifiable Quantity
[0152] SKUs of different brands can have different weights too. In one implementation, the system would only allow overrides by the heuristic algorithms if the override makes sense from a weight perspective.
[0153] The heuristic is allowed to make the override if any of the following is true:
[0154] 1) The actual pallet weight (from the scale) and the expected pallet weight are within tolerance of each other. The expected weight is the sum of the pallet weight and the weight of all of the product. The tolerance is scaled based on the weight of the pallet so that heavier pallets with more weight have a greater tolerance, e.g. the tolerance could be a percentage.
[0155] 2) The inferred weight of the pallet is within the inferred tolerance. The system sums up the weight of all of the inferred product and adds in the weight of the pallet. If the inferred weight minus the expected weight is close to 0 and within a tolerance, then this indicates that the inference is close to being correct.
[0156] 3) The inferred pallet weight after making the correction with the extra and missing product is closer to the goal weight. The goal weight is the expected weight when the actual weight and the expected weight are in tolerance; otherwise, the goal weight is the actual scale weight.
[0157] 4) If the difference in weight is within a negligible weight difference threshold, then the override is allowed. One example of when this rule is needed is a 24-pack, which can be grouped as four groups of 6 in a tray or as all 24 in one tray. Both configurations weigh approximately the same (and can look visually the same too).
[0158] If all the above conditions are false, then the override correction from the heuristic is blocked.
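The four conditions above can be expressed as a single gate function. A minimal sketch follows; the percentage tolerance, the negligible-difference threshold, and all parameter names are illustrative assumptions.

```python
def override_allowed(actual_wt, expected_wt, inferred_wt,
                     corrected_inferred_wt, swap_delta,
                     tol_pct=0.02, negligible=0.2):
    """Allow a heuristic override if ANY of conditions 1)-4) holds.
    Weights are in lbs; swap_delta is the weight difference between
    the two SKUs being swapped."""
    tol = expected_wt * tol_pct     # tolerance scales with pallet weight
    # 1) actual scale weight vs expected weight within tolerance
    if abs(actual_wt - expected_wt) <= tol:
        return True
    # 2) inferred weight vs expected weight within tolerance
    if abs(inferred_wt - expected_wt) <= tol:
        return True
    # 3) the correction moves the inferred weight closer to the goal
    #    weight (the scale weight here, since condition 1 already failed)
    goal = actual_wt
    if abs(corrected_inferred_wt - goal) < abs(inferred_wt - goal):
        return True
    # 4) the two swapped SKUs differ by a negligible weight
    return abs(swap_delta) <= negligible
```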
[0159] A sample implementation of this is shown in
[0160] Additionally, if the inferred loaded pallet weight is determined in step 458 to be within a tolerance threshold of the expected loaded pallet weight, then the correction is made in step 456.
[0161] If the actual loaded pallet weight is determined in step 460 to be within a tolerance threshold of the expected loaded pallet weight, then the correction is made in step 456.
[0162] Additionally, if the correction is determined in step 462 to represent a negligible weight difference (e.g. the difference in weight between the two SKUs being corrected (i.e. swapped) is negligible, such as less than or equal to 0.2 lbs.), then the correction is made in step 456.
[0163] The number of false errors reported is reduced with a weight heuristic. The weight heuristic is particularly useful for removing false inferred counts like seeing the tops of the package as an extra count or detecting product beside the pallet in the background that is not part of the pallet.
[0164] Referring to
[0165] 1) In step 470, it is determined that the actual pallet weight (from the scale) and the expected pallet weight are in tolerance. The expected weight is the sum of the pallet weight and the weight from all the product. The tolerance may be scaled based on the weight of the pallet so that heavier pallets with more weight have a greater tolerance.
[0166] 2) In step 472, it is determined if the sum of the weights of the products in the inference plus the pallet weight is within a tolerance of the expected pallet weight. (The tolerance can be adjusted to tune the heuristic to run more or less often.) If so, then no correction is made in step 474. If not, then the correction is made in step 476.
[0167] The premise around the weight heuristic is that if the actual weight is close to the expected weight then the pallet is likely to be picked correctly. If the inferred weight is then out of alignment with the expected weight while the actual weight from the scale is in alignment, then the inference likely has a false error.
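That premise reduces to a simple predicate, sketched below; the tolerance handling is an assumption for illustration.

```python
def inference_likely_false_error(actual_wt, expected_wt, inferred_wt,
                                 tol_pct=0.02):
    """Steps 470-476: the inference likely contains a false error when
    the scale weight matches the pick-list expectation but the inferred
    weight does not (tolerance scaling is illustrative)."""
    tol = expected_wt * tol_pct
    picked_ok = abs(actual_wt - expected_wt) <= tol
    inferred_ok = abs(inferred_wt - expected_wt) <= tol
    return picked_ok and not inferred_ok
```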
[0168] In step 318 of
[0169] The stitching algorithm automatically makes the following types of corrections:
[0170] 1. Package type override—If the package type confidence from one package face is higher than that from another package face on the same item, then the highest-confidence package type is used.
[0171] 2. Brand override—If the brand confidence from one package face is higher than that from another package face on the same item, then the highest-confidence brand is used.
[0172] 3. Holes—Once a package face is detected for a pallet face, then the stitching algorithm understands the other pallet faces that the package face should be visible on. Sometimes the package face object detector does not detect the package face on other views of the pallet face. The geometry of the package and the stitching algorithm can be used to automatically label where the package face is in the pallet face, thus reducing the occurrence of “holes.”
[0173] 4. Ghosts—Sometimes the machine learning detects items that are not on the pallet. This most often occurs on the short side views of the pallet where there is a stair step of product visible and the images of two or more partial products are combined. The stitching algorithm determines based on the geometry of the pallet that those images are not products and labels them as ghosts. The ghosts are excised from the pallet inference.
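The first two corrections (package type and brand overrides) amount to taking the highest-confidence inference across the stitched faces of one item, as in the sketch below; the face-record fields are assumed, not taken from the actual system.

```python
def stitch_item(faces):
    """Merge per-face inferences for one stitched item: the package
    type and the brand each come from whichever face inferred them
    with the highest confidence."""
    best_pkg = max(faces, key=lambda f: f["package_conf"])
    best_brand = max(faces, key=lambda f: f["brand_conf"])
    return {
        "package_type": best_pkg["package_type"],
        "brand": best_brand["brand"],
    }
```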
[0174] There are some errors that stitching cannot fix, and a human is needed to label the pallet faces with the error. The results from the package face object detector, brand classifier and stitching algorithms are leveraged to feed a tool for a human to help out by making quick corrections. The normal labeling tools require much more effort and far more expert, knowledgeable humans to label and draw bounding boxes around the objects that they want to detect.
[0175] The image of the supervised labeling tool in
[0176] The tool corrects the brand and package type labels for all of the packages (items) on one pallet at a time across all four pallet face images. Packages are labeled rather than SKUs to handle the scenarios where some SKUs have more than one package per case. Each package is loose and requires bounding boxes and labels for the package type across the four pallet faces. These bounding boxes and labels can be used for package face detection model training; the labeling tool for brand training then segments the images at the bounding box coordinates and names the images based on the brand for brand training.
[0177] The error scenarios on each pallet are sorted so that errors where more package quantity is detected than expected are resolved first. These corrections provide the likely possibilities for the later scenarios where less package quantity is detected and it is necessary to identify the additional packages to add.
[0178] The tool also allows one to see all the detected product on the pallet and to filter the product by the inferred package type and brand to help with labeling. The idea is that someone who is not a Subject Matter Expert (SME) can quickly make the bulk of the corrections using this tool.
[0179] The alternative approach of using a standard open source tool would require an SME who understands the product to spend considerably more time manually making the corrections.
[0180]
[0181] As indicated in the first column, two packages of the SKU (16.9 oz 12 pk Lipton Green Tea white peach flavor) were expected. The QA person compares the “expected SKU” images to the adjacent “actual SKU” images and marks with a checkmark the correct two. Three were detected so only two of the three packages should be confirmed with a checkmark. The expected SKU images may come from previously labeled training images.
[0182] The expected images are shown next to the actual images so that the QA person can spot the differences. The QA person will notice that there are white peaches on the bottom two sets of images like the training images and the top set of actual images has watermelons. The QA person will uncheck the top watermelon because it has the wrong label. The unchecked watermelon image becomes a candidate for a later scenario where less is detected than was expected.
[0183]
[0184] Behind the scenes the tool will update the labels across the four pallet faces for each view that the package face is present.
[0185] Hovering over a package face image will pop up a view of all of the pallet faces where that package is visible, with bounding boxes around that package. This will help the QA person better understand what they are looking at.
[0186] The QA person can adjust the bounding boxes that were originally created automatically by the machine learning package detect. The QA person can also add or remove bounding boxes for that package.
[0187] As indicated above, it is currently preferred in the implemented embodiment that the packaging type is determined first and is used to limit the possible brand options (e.g. by selecting one of the plurality of brand models 231). However, alternatively, the branding could be determined and used to narrow the possible packaging options to be identified. Alternatively, the branding and packaging could be determined independently and cross-referenced afterward for verification. In any method, if one technique leads to an identification with more confidence, that result could take precedence over a contrary identification. For example, if the branding is determined with low confidence and the packaging is determined with high confidence, and the identified branding is not available in the identified packaging, the identified packaging is used and the next most likely branding that is available in the identified packaging is then used.
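The independent-then-cross-referenced variant can be sketched as follows, with ranked (name, confidence) candidate lists and a set of valid brand/package pairs as assumed inputs; this is illustrative, not the implemented embodiment.

```python
def resolve_sku(brand_ranked, package_ranked, valid_pairs):
    """brand_ranked / package_ranked: (name, confidence) lists, best
    first; valid_pairs: set of (brand, package) tuples that are SKUs."""
    top_brand, b_conf = brand_ranked[0]
    top_pkg, p_conf = package_ranked[0]
    if (top_brand, top_pkg) in valid_pairs:
        return top_brand, top_pkg
    # Keep the higher-confidence attribute; substitute the next most
    # likely counterpart that forms a valid SKU.
    if p_conf >= b_conf:
        for brand, _ in brand_ranked:
            if (brand, top_pkg) in valid_pairs:
                return brand, top_pkg
    else:
        for pkg, _ in package_ranked:
            if (top_brand, pkg) in valid_pairs:
                return top_brand, pkg
    return None
```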
[0188] After individual items 20 are identified on each of the four sides of the loaded pallet 22, duplicates are removed based upon the known dimensions of the items 20 and the pallet 22, i.e. it is determined which items are visible from more than one side and appear in more than one image. If some items are identified with less confidence from one side, but appear in another image where they are identified with more confidence, the identification with more confidence is used.
[0189] For example, if the pallet 22 is a half pallet, its dimensions would be approximately 40 to approximately 48 inches by approximately 20 to approximately 24 inches, including the metric 800 mm×600 mm. Standard size beverage crates, beverage cartons, and wrapped corrugated trays would all be visible from at least one side, most would be visible from at least two sides, and some would be visible on three sides.
[0190] If the pallet 22 is a full-size pallet (e.g. approximately 48 inches by approximately 40 inches, or 800 mm by 1200 mm), most products would be visible from one or two sides, but there may be some products that are not visible from any of the sides. The dimensions and weight of the hidden products can be determined as a rough comparison against the pick list. Optionally, stored images (from the SKU files) of SKUs not matched with visible products can be displayed to the user, who could verify the presence of the hidden products manually.
[0191] The computer vision-generated sku count for that specific pallet 22 is compared against the pick list 64 to ensure the pallet 22 is built correctly in step 162 of
[0192] If the loaded pallet 22 is confirmed, positive feedback is given to the worker (e.g.
[0193] After the loaded pallet 22 has been validated, it is moved to a loading station 34 (
[0194] Referring to
[0195] At each store 16 the driver's mobile device 50 indicates which of the loaded pallets 22 (based upon their pallet ids) are to be delivered to the store 16 (as verified by gps on the mobile device 50). The driver verifies the correct pallet(s) for that location with the mobile device 50 that checks the pallet id (rfid, barcode, etc). The driver moves the loaded pallet(s) 22 into the store 16 with the pallet sled 24.
[0196] At each store, the driver may optionally image the loaded pallets with the mobile device 50 and send the images to the central server 14 to perform an additional verification. More preferably, the store worker has gained trust in the overall system 10 and simply confirms that the loaded pallet 22 has been delivered to the store 16, without taking the time to go SKU by SKU and compare each to the list that he ordered and without any revalidation/imaging by the driver. In that way, the driver can immediately begin unloading the products 20 from the pallet 22 and placing them on shelves 54 or in coolers 56, as appropriate. This greatly reduces the time of delivery for the driver.
[0197]
[0198] In one possible implementation of training station 28, shown in
[0199] Whichever method is used to obtain the images of the items, the images of the items are received in step 190 of
[0200] The virtual pallets are built based upon a set of configurable rules, including, the dimensions of the pallet 22, the dimensions of the products 20, number of permitted layers (such as four, but it could be five or six), layer restrictions regarding which products can be on which layers (e.g. certain bottles can only be on the top layer), etc. The image of each virtual pallet is sized to be a constant size (or at least within a particular range) and placed on a virtual background, such as a warehouse scene. There may be a plurality of available virtual backgrounds from which to randomly select.
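A minimal validator for such configurable rules might look like the sketch below; the rule values (four layers, bottles only on the top layer) follow the examples in the text, while the data layout is assumed.

```python
def valid_virtual_pallet(layers, max_layers=4,
                         top_only=frozenset({"bottle"})):
    """layers: list of lists of package-type strings, bottom layer
    first. Checks the permitted layer count and the layer restriction
    that certain package types appear only on the top layer."""
    if len(layers) > max_layers:
        return False
    # package types restricted to the top layer must not appear lower
    for layer in layers[:-1]:
        if any(pkg in top_only for pkg in layer):
            return False
    return True
```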
[0201] The API creates thousands of images of randomly-selected sku images on a virtual pallet. The API uses data augmentation to create even more unique images. Either a single loaded virtual pallet image can be augmented many different ways to create more unique images, or each randomly-loaded virtual pallet can have a random set of augmentations applied. For example, the API may add random blur (random amount of blur and/or random localization of blur) to a virtual pallet image. The API may additionally introduce random noise to the virtual pallet images, such as by adding randomly-located speckles of different colors over the images of the skus and virtual pallet. The API may additionally place the skus and virtual pallet in front of random backgrounds. The API may additionally place some of the skus at random (within reasonable limits) angles relative to one another both in the plane of the image and in perspective into the image. The API may additionally introduce random transparency (random amount of transparency and/or random localized transparency), such that the random background is partially visible through the virtual loaded pallet or portions thereof. Again, the augmentations of the loaded virtual pallets are used to generate even more virtual pallet images.
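One pass of such augmentation could be sketched with NumPy alone; the noise density, blur widths, and transparency range below are illustrative choices, not the actual parameters of the API.

```python
import numpy as np

def augment(img, background, rng):
    """Apply random speckle noise, random box blur, and random
    transparency to a rendered pallet image.
    img, background: HxWx3 float arrays in [0, 1]."""
    out = img.copy()
    # randomly-located speckles of random colors
    mask = rng.random(out.shape[:2]) < 0.01
    out[mask] = rng.random((mask.sum(), 3))
    # random amount of blur: separable mean filter of random odd width
    k = rng.choice([1, 3, 5])
    if k > 1:
        kernel = np.ones(k) / k
        for axis in (0, 1):
            out = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    # random transparency: alpha-blend the pallet over the background
    alpha = rng.uniform(0.7, 1.0)
    out = alpha * out + (1 - alpha) * background
    return np.clip(out, 0.0, 1.0)
```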
[0202] The thousands of virtual pallet images are sent to the machine learning model 138 along with the bounding boxes indicating the boundaries of each product on the image and the SKU associated with each product. The virtual pallet images along with the bounding boxes and associated SKUs constitute the training data for the machine learning models.
[0203] In step 196, the machine learning model is trained in step 138 based upon the images of the virtual pallets and based upon the location, boundary, and sku tag information. The machine learning model is updated and stored in step 140. The machine learning model is deployed in step 142 and used in conjunction with the validation stations 32 (
[0204] It should be understood that each of the computers, servers or mobile devices described herein includes at least one processor and at least one non-transitory computer-readable media storing instructions that, when executed by the at least one processor, cause the computer, server, or mobile device to perform the operations described herein. The precise location where any of the operations described herein takes place is not important and some of the operations may be distributed across several different physical or virtual servers at the same or different locations.
[0205] In accordance with the provisions of the patent statutes and jurisprudence, exemplary configurations described above are considered to represent preferred embodiments of the inventions. However, it should be noted that the inventions can be practiced otherwise than as specifically illustrated and described without departing from its spirit or scope. Alphanumeric identifiers on method steps are solely for ease in reference in dependent claims and such identifiers by themselves do not signify a required sequence of performance, unless otherwise explicitly specified.