METHOD FOR IDENTIFYING BINARY DOT MATRIX CODE ON MOLD SURFACE

Abstract

The present disclosure relates to a process for identifying a binary dot matrix code on a mold surface, comprising: capturing a plurality of binary dot matrix code images of the mold surface; inputting the acquired data into a YOLOv8 network model for training; recapturing the images to be tested and inputting them into the upgraded YOLOv8 network model for detection and output; calculating positional coordinates of vertices at the outermost periphery of the four bounding boxes based on the output results to obtain a distorted quadrilateral, which is converted into a square pattern by perspective transformation; inputting the square pattern into the upgraded YOLOv8 network model for detection and outputting the results, and calculating a counterclockwise rotation angle θ; obtaining a standard pattern by rotating the square pattern by the angle θ; dividing the standard pattern into image blocks of the same size in equal portions, identifying the circular code points of the image blocks, and concatenating the identification results to obtain binary-encoded sequence data; and decoding the binary-encoded sequence data based on binary-encoded decoding rules, and deriving corresponding binary-encoded character information.

Claims

1. A method for identifying a binary dot matrix code on a mold surface, comprising: step S1, data acquisition: capturing a plurality of binary dot matrix code images of the mold surface by a camera, wherein a fiducial marker is arranged in each of four corners of each of the plurality of binary dot matrix code images, and a training set is constructed through annotating bounding boxes for each fiducial marker; step S2, model training: inputting data from the training set into a YOLOv8 network model to train the YOLOv8 network model to obtain optimal weight data of the YOLOv8 network model, and obtaining a trained and upgraded YOLOv8 network model by loading the optimal weight data into the YOLOv8 network model again; step S3, fiducial marker detection: capturing binary dot matrix code images of a mold surface to be tested by the camera to form a test set, inputting images of the test set to the trained and upgraded YOLOv8 network model, and outputting corner points of four bounding boxes of data of the test set, wherein each of the four bounding boxes is provided with 4 corner points, respectively, totaling 16 corner points; step S4, calculating peripheral vertices: filtering the 16 corner points by a filtering strategy, and calculating positional coordinates of vertices at the outermost periphery of the four bounding boxes; step S5, perspective correction: obtaining a distorted quadrilateral based on the positional coordinates of the vertices at the outermost periphery of the four bounding boxes, and converting the distorted quadrilateral into a standard L×L square pattern by perspective transformation; step S6, calculating a counterclockwise rotation angle: inputting the square pattern obtained in the step S5 into the trained and upgraded YOLOv8 network model of the step S2 for detection, and outputting four corner points of bounding boxes corresponding to a fiducial marker of the square pattern; obtaining a coordinate of a center point G of the bounding boxes based on the four corner points of the bounding boxes; comparing the coordinate of the center point G of the bounding boxes with a coordinate of a center point O(L/2, L/2) of the square pattern obtained in the step S5 to obtain a counterclockwise rotation angle θ of the square pattern; step S7, rotational correction: rotating the square pattern of the step S5 counterclockwise by the angle θ by perspective transformation to align the square pattern to a correct position to obtain a standard pattern of binary dot matrix code images of the test set; step S8, code point identification: dividing the standard pattern into image blocks with a same size in equal portions based on a dimension of the binary code dots; and inputting the image blocks into the trained and upgraded YOLOv8 network model of the step S2 sequentially in the order of top to bottom and left to right for identification; in response to determining that a code point exists in the image blocks, outputting 1, and in response to determining that there is no code point in the image blocks, outputting 0; concatenating identification results to obtain binary-encoded sequence data; and step S9, decoding: decoding the binary-encoded sequence data based on binary-encoded decoding rules and deriving corresponding binary-encoded character information.

2. The method of claim 1, wherein the filtering strategy is categorized into two types, and the filtering strategy includes a first filtering strategy and a second filtering strategy, wherein a filtering process of the first filtering strategy is as follows: step S4-a1: constructing the 16 corner points of the four bounding boxes output by the YOLOv8 network model in the step S3 into a set P, wherein each corner point is denoted as a.sub.ij, a coordinate of each corner point is denoted as (x.sub.ij,y.sub.ij), i denotes numbers of the bounding boxes, which takes a range of i∈{1,2,3,4}, and j denotes the jth point of the four corner points of the bounding boxes, which takes a range of j∈{1,2,3,4}; step S4-a2: coordinate information of four corner points of a first bounding box including: a coordinate of a.sub.11 being denoted as (x.sub.11,y.sub.11), a coordinate of a.sub.12 being denoted as (x.sub.12,y.sub.12), a coordinate of a.sub.13 being denoted as (x.sub.13,y.sub.13), and a coordinate of a.sub.14 being denoted as (x.sub.14,y.sub.14); step S4-a3: assuming that positional coordinates of vertices at the outermost periphery of the four bounding boxes are Q.sub.1(x.sub.Q1,y.sub.Q1), Q.sub.2(x.sub.Q2,y.sub.Q2), Q.sub.3(x.sub.Q3,y.sub.Q3), Q.sub.4(x.sub.Q4,y.sub.Q4), and Q.sub.1 is located at a topmost part of a binary dot matrix code pattern, Q.sub.2 is located at a bottom part of the binary dot matrix code pattern, Q.sub.3 is located at a leftmost part of the binary dot matrix code pattern, and Q.sub.4 is located at a rightmost part of the binary dot matrix code pattern, coordinates of individual vertices are calculated as follows: Q.sub.1 is located at the topmost part of the binary dot matrix code pattern, whose vertical coordinate y.sub.Q1 is the smallest vertical coordinate in the set P, as shown in equations (1) and (2) below: y.sub.Q1=min(y.sub.ij), i,j∈[1,4] (1); Q.sub.1={(x.sub.ij,y.sub.ij)∈P, y.sub.ij=y.sub.Q1}, i,j∈[1,4] (2);
Q.sub.2 is located at the bottom part of the binary dot matrix code pattern, whose longitudinal coordinate y.sub.Q2 is the largest longitudinal coordinate in the set P, as shown in equations (3) and (4) below: y.sub.Q2=max(y.sub.ij), i,j∈[1,4] (3); Q.sub.2={(x.sub.ij,y.sub.ij)∈P, y.sub.ij=y.sub.Q2}, i,j∈[1,4] (4); Q.sub.3 is located at the leftmost part of the binary dot matrix code pattern, whose transverse coordinate x.sub.Q3 is the smallest transverse coordinate in the set P, as shown in equations (5) and (6) below: x.sub.Q3=min(x.sub.ij), i,j∈[1,4] (5); Q.sub.3={(x.sub.ij,y.sub.ij)∈P, x.sub.ij=x.sub.Q3}, i,j∈[1,4] (6); Q.sub.4 is located at the rightmost part of the binary dot matrix code pattern, whose transverse coordinate x.sub.Q4 is the largest transverse coordinate in the set P, as shown in equations (7) and (8) below: x.sub.Q4=max(x.sub.ij), i,j∈[1,4] (7); Q.sub.4={(x.sub.ij,y.sub.ij)∈P, x.sub.ij=x.sub.Q4}, i,j∈[1,4] (8).
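The first filtering strategy of equations (1)-(8) reduces to four extreme-coordinate selections over the 16 corner points. A minimal Python sketch (the function name `outer_vertices` and the sample points are illustrative assumptions, not part of the claims; image coordinates are assumed, so the smallest y is the topmost point):

```python
def outer_vertices(points):
    """Select the topmost, bottommost, leftmost and rightmost corner
    points (Q1..Q4) from the bounding-box corners, per equations
    (1)-(8): each vertex has the extreme coordinate in the set P."""
    q1 = min(points, key=lambda p: p[1])  # smallest y -> topmost, eq. (1)-(2)
    q2 = max(points, key=lambda p: p[1])  # largest y  -> bottommost, eq. (3)-(4)
    q3 = min(points, key=lambda p: p[0])  # smallest x -> leftmost, eq. (5)-(6)
    q4 = max(points, key=lambda p: p[0])  # largest x  -> rightmost, eq. (7)-(8)
    return q1, q2, q3, q4
```

For a rotated code the four extreme points coincide with the outermost bounding-box corners, which is what makes this simple per-axis filter sufficient.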

3. The method of claim 1, wherein a filtering process of the second filtering strategy is as follows: constructing the 16 corner points of the four bounding boxes outputted by the YOLOv8 network model in the step S3 into a set W, wherein a coordinate of each point w.sub.m is (x.sub.m,y.sub.m); wherein for any point w.sub.m, a distance between the point w.sub.m and each of the other 15 corner points w.sub.n (m≠n) is defined as d(w.sub.m,w.sub.n), which is calculated using a Euclidean distance equation (9): d(w.sub.m,w.sub.n)=√((x.sub.m−x.sub.n).sup.2+(y.sub.m−y.sub.n).sup.2), m,n∈[1,16], m≠n (9); calculating a sum of distances between the point w.sub.m and the other 15 corner points using a following equation (10): S.sub.wm=Σ.sub.n=1,n≠m.sup.16 d(w.sub.m,w.sub.n), m,n∈[1,16] (10); by comparing all S.sub.wm, selecting points corresponding to the first 4 largest S.sub.wm as the 4 peripheral vertices of the binary dot matrix code, wherein an expression equation is as follows: assuming that all S.sub.wm are sorted in descending order to obtain a sorted sequence as shown in (11): S.sub.w1≥S.sub.w2≥ . . . ≥S.sub.w16 (11); and selecting the points corresponding to the first 4 largest S.sub.wm as the set P: P={w.sub.1,w.sub.2,w.sub.3,w.sub.4} (12).
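The second filtering strategy of equations (9)-(12) can be sketched in a few lines of Python: points far from the cluster accumulate the largest distance sums, so the four peripheral vertices are the top-4 by that sum (the function name `distance_sum_vertices` is an assumption for illustration):

```python
import math

def distance_sum_vertices(points, k=4):
    """Second filtering strategy: for each corner w_m, sum its
    Euclidean distances to all other corners (eq. (9)-(10)), then
    keep the k points with the largest sums (eq. (11)-(12))."""
    def dist_sum(p):
        # identity check excludes the point itself, mirroring m != n
        return sum(math.dist(p, q) for q in points if q is not p)
    return sorted(points, key=dist_sum, reverse=True)[:k]
```

Unlike the per-axis first strategy, this variant is robust when two peripheral vertices happen to share an extreme coordinate.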

4. The method of claim 1, wherein a transformation equation of the perspective transformation describes a transformation process of converting a pixel coordinate (u,v) to a point in a three-dimensional world coordinate system, and transforming the point to another pixel coordinate (x,y); a transformation equation (13) of the perspective transformation is as follows: [x y z]=[u v 1]·[b.sub.11 b.sub.12 b.sub.13; b.sub.21 b.sub.22 b.sub.23; b.sub.31 b.sub.32 b.sub.33] (13), wherein (u,v) is a coordinate of the original image in a two-dimensional plane, and (x,y,z) is a coordinate in the converted three-dimensional world; the perspective transformation matrix is split into 3 parts: (b.sub.11 b.sub.12; b.sub.21 b.sub.22) is configured as a linear transformation, (b.sub.13, b.sub.23) is configured as the perspective transformation, and (b.sub.31, b.sub.32) is configured as a translation operation.

5. The method of claim 3, wherein in the perspective transformation, a pixel coordinate (u,v) is transformed to a point in the three-dimensional world coordinate system, and then transformed to a new two-dimensional plane to obtain a corrected coordinate (x′,y′), as shown in equations (14) and (15) below: x′=x/z=(b.sub.11u+b.sub.21v+b.sub.31)/(b.sub.13u+b.sub.23v+b.sub.33) (14); y′=y/z=(b.sub.12u+b.sub.22v+b.sub.32)/(b.sub.13u+b.sub.23v+b.sub.33) (15).
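Equations (14)-(15) amount to a matrix-vector product followed by division by the third coordinate. A minimal sketch, assuming the row-vector indexing convention of those equations (b[row][col] with the translation terms in the last row; the function name is illustrative):

```python
def apply_homography(b, u, v):
    """Apply the 3x3 perspective matrix to pixel (u, v) and normalise
    by the third coordinate, following equations (14)-(15)."""
    x = b[0][0] * u + b[1][0] * v + b[2][0]  # numerator of eq. (14)
    y = b[0][1] * u + b[1][1] * v + b[2][1]  # numerator of eq. (15)
    z = b[0][2] * u + b[1][2] * v + b[2][2]  # shared denominator
    return x / z, y / z
```

With the identity matrix the point is unchanged; setting b31, b32 produces a pure translation, matching the three-part split described in claim 4.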

6. The method of claim 1, wherein the obtaining the counterclockwise rotation angle θ of the square pattern in the step S6 includes: step S6-1: assuming that the four corner points of the bounding boxes corresponding to the fiducial marker of the square pattern are denoted as T.sub.r and their coordinates are denoted as (x.sub.r,y.sub.r), wherein r∈[1,4], the coordinate of the center point G(x.sub.G,y.sub.G) of the bounding boxes is calculated by equations (16) and (17): x.sub.G=(Σ.sub.r=1.sup.4 x.sub.r)/4 (16); y.sub.G=(Σ.sub.r=1.sup.4 y.sub.r)/4 (17); step S6-2: determining the counterclockwise rotation angle θ of the square pattern by comparing the coordinate of the center point G(x.sub.G,y.sub.G) of the bounding boxes with the coordinate of the center point O(L/2, L/2) of the square pattern of the step S5, wherein a corresponding relationship is as follows: θ=0° if x.sub.G&gt;L/2 and y.sub.G&gt;L/2; θ=90° if x.sub.G&lt;L/2 and y.sub.G&gt;L/2; θ=180° if x.sub.G&lt;L/2 and y.sub.G&lt;L/2; θ=270° if x.sub.G&gt;L/2 and y.sub.G&lt;L/2 (18).
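The quadrant test of equation (18) is a pair of comparisons against the square's centre. A minimal sketch, assuming (since one condition in the source text appears garbled) that each of the four quadrants maps to a distinct angle; the function name `rotation_angle` is illustrative:

```python
def rotation_angle(xg, yg, L):
    """Map the fiducial-marker centre G to a counterclockwise
    rotation angle by comparing it with the square centre O(L/2, L/2),
    one angle per quadrant (cf. equation (18))."""
    half = L / 2
    if xg > half and yg > half:
        return 0
    if xg < half and yg > half:
        return 90
    if xg < half and yg < half:
        return 180
    return 270  # xg > half and yg < half
```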

7. The method of claim 6, wherein the obtaining the standard pattern of binary dot matrix code images of the test set in the step S7 includes: assuming that a point U(x.sub.u,y.sub.u) on the binary dot matrix code pattern after the perspective correction in the step S5 is rotated counterclockwise by the angle θ, wherein a rotation process is as follows: calculating a coordinate (x.sub.c,y.sub.c) of the point U(x.sub.u,y.sub.u) with respect to the center point O(L/2, L/2): x.sub.c=x.sub.u−L/2 (19); y.sub.c=y.sub.u−L/2 (20); applying a rotation matrix R to rotate (x.sub.c,y.sub.c) to obtain a coordinate (x.sub.h,y.sub.h): R=[cos(θ) −sin(θ); sin(θ) cos(θ)] (21); [x.sub.h; y.sub.h]=R·[x.sub.c; y.sub.c] (22); translating (x.sub.h,y.sub.h) back to an original position to obtain a final rotation point U(x.sub.new,y.sub.new): x.sub.new=x.sub.h+L/2 (23); y.sub.new=y.sub.h+L/2 (24); by combining the above steps, obtaining the coordinate U(x.sub.new,y.sub.new) by rotating the point U(x.sub.u,y.sub.u) about the center point by the angle θ: x.sub.new=cos(θ)·(x.sub.u−L/2)−sin(θ)·(y.sub.u−L/2)+L/2 (25); y.sub.new=sin(θ)·(x.sub.u−L/2)+cos(θ)·(y.sub.u−L/2)+L/2 (26).
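The translate-rotate-translate steps of equations (19)-(26) can be sketched directly (function and variable names are illustrative assumptions):

```python
import math

def rotate_about_center(xu, yu, theta_deg, L):
    """Rotate point U counterclockwise by theta about O(L/2, L/2):
    translate to the centre (eq. (19)-(20)), apply the 2x2 rotation
    matrix (eq. (21)-(22)), translate back (eq. (23)-(24))."""
    t = math.radians(theta_deg)
    xc, yc = xu - L / 2, yu - L / 2            # eq. (19), (20)
    xh = math.cos(t) * xc - math.sin(t) * yc   # eq. (21), (22)
    yh = math.sin(t) * xc + math.cos(t) * yc
    return xh + L / 2, yh + L / 2              # eq. (23), (24)
```

Since only multiples of 90° occur in step S7, in practice the trigonometric terms reduce to 0 and ±1 and the rotation is exact up to floating-point rounding.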

8. The method of claim 1, wherein the code point identification in the step S8 includes: step S8-1, dividing a binary dot matrix code standard pattern into the image blocks: based on the dimension of the binary code dots, dividing the binary dot matrix code standard pattern rotated to the correct position in the step S7 into the image blocks with the same size in equal portions; and step S8-2, image block code point identification: identifying circular binary code points of the image blocks sequentially in the order of top to bottom and left to right by using the trained and upgraded YOLOv8 network model in the step S2, in response to determining that a code point exists in the image blocks, outputting 1, and in response to determining that there is no code point in the image blocks, outputting 0, and identifying and obtaining a binary coding sequence of 0s and 1s.
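The block division and bit readout of step S8 can be sketched as follows, with the per-block detector abstracted behind a callback (in the claimed method this role is played by the trained YOLOv8 model; all names here are illustrative assumptions):

```python
def split_into_blocks(h, w, dot):
    """Divide an h x w standard pattern into equal dot-sized blocks,
    returned top-to-bottom, left-to-right as (row, col) offsets, the
    scan order used in step S8."""
    return [(r, c) for r in range(0, h, dot) for c in range(0, w, dot)]

def decode_bits(blocks, has_code_point):
    """Emit '1' for blocks containing a code point, '0' otherwise,
    and concatenate into the binary sequence of step S8."""
    return "".join("1" if has_code_point(b) else "0" for b in blocks)
```

The resulting 0/1 string is then decoded per step S9 using the binary-encoded decoding rules.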

9-12. (canceled)

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0058] The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are not limiting, and in these embodiments, the same numbering denotes the same structure, wherein:

[0059] FIG. 1 is a flowchart illustrating an exemplary process for identifying a binary dot matrix code on a mold surface according to some embodiments of the present disclosure;

[0060] FIG. 2 is a schematic diagram illustrating a standard pattern of a binary dot matrix code according to some embodiments of the present disclosure;

[0061] FIG. 3 is a schematic diagram illustrating target detection for a fiducial marker according to some embodiments of the present disclosure;

[0062] FIG. 4 is a schematic diagram illustrating vertices of a binary dot matrix code according to some embodiments of the present disclosure;

[0063] FIG. 5 is a schematic diagram illustrating a pixel coordinate system of a digital image according to some embodiments of the present disclosure; and

[0064] FIG. 6 is a schematic diagram illustrating a process for dividing image blocks according to some embodiments of the present disclosure.

DETAILED DESCRIPTION

[0065] In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. The accompanying drawings in the following description are only some examples or embodiments of the present specification, and a person of ordinary skill in the art may apply the present specification to other similar scenarios in accordance with these drawings without creative effort. Unless obviously obtained from the context or otherwise illustrated, the same numeral in the drawings refers to the same structure or operation.

[0066] Flowcharts are used in this specification to illustrate operations performed by a system in accordance with embodiments of this specification. It should be appreciated that the preceding or following operations are not necessarily performed in an exact sequence. Instead, steps may be processed in reverse order or simultaneously. Also, it is possible to add other operations to these processes or remove a step or steps from them.

[0067] In some embodiments, preparing rules of binary dot matrix code images on a mold surface include encoding using a mold encoding process based on a binary system, the process including: character encoding: converting identification (ID) characters used to mark the mold into character binary codes through a character encoding table; error correction code generation: generating error correction codes based on the Reed-Solomon encoding principle, and converting the error correction codes into error correction binary codes; marking pattern generation: arranging the error correction binary codes in sequence after the character binary codes to generate a combined binary code, and generating an N×M rectangular marking pattern; code marking: engraving the coding pattern onto the mold surface; and code identification and decoding: using image identification technology to identify the marking pattern, converting it to binary coding, and decoding.

[0068] Embodiments of the present disclosure provide a process for identifying a binary dot matrix code on the mold surface by encoding on the basis of the above.

[0069] FIG. 1 is a flowchart illustrating an exemplary process for identifying a binary dot matrix code on a mold surface according to some embodiments of the present disclosure.

[0070] In some embodiments, as shown in FIG. 1, a process for identifying the binary dot matrix code on the mold surface includes:

[0071] Step S1, data acquisition: capturing a plurality of binary dot matrix code images of the mold surface by a camera, wherein a fiducial marker is arranged in each of four corners of each of the plurality of binary dot matrix code images, and a training set is constructed through annotating bounding boxes for each fiducial marker.

[0072] The binary dot matrix code image is an image of the binary dot matrix code captured by the camera. More descriptions of preparing the binary dot matrix code images on the mold surface may be found in the above descriptions.

[0073] FIG. 2 is a schematic diagram illustrating a standard pattern of a binary dot matrix code according to some embodiments of the present disclosure.

[0074] In some embodiments, as shown in FIG. 2, a fiducial marker is arranged in each of four corners of each of a plurality of binary dot matrix code images. The fiducial marker refers to a fiducial symbol characterized by specific geometry and high visual contrast, which is fixedly arranged in each of the four corners of the binary dot matrix code, and serves as a spatial reference and a basis for orientation calibration for image identification. In some embodiments, the fiducial marker may be L-shaped, square, concentric rings, zigzags, or the like.

[0075] FIG. 3 is a schematic diagram illustrating target detection for a fiducial marker according to some embodiments of the present disclosure.

[0076] In some embodiments, a process of constructing a training set through annotating bounding boxes for each fiducial marker mainly includes the following operations: image preprocessing, annotation tool operation, and training set construction. The image preprocessing refers to uniformly scaling the captured binary dot matrix code images to the same size and removing the unclear images. The annotation tool operation refers to using an annotation tool to annotate the images after the image preprocessing; as shown in FIG. 3, a bounding box is drawn along the outer edge of the fiducial marker. The annotation tool includes LabelImg, CVAT, or the like. The training set construction refers to performing data augmentation (e.g., simulating oil stain occlusion, rotating images) after storing the preprocessed images and annotation files, and converting the annotations into a target framework format (e.g., YOLO txt), finally forming a structured directory to support the model training.

[0077] FIG. 4 is a schematic diagram illustrating vertices of a binary dot matrix code according to some embodiments of the present disclosure.

[0078] For example, as shown in FIG. 3 and FIG. 4, a gray box represents a bounding box. The small black dots within the bounding box represent corner points of the bounding box. 0 represents ordinary corner points, while 1 represents vertices at the outermost periphery of the binary dot matrix code image.

[0079] Step S2, model training: inputting data from the training set into a YOLOv8 network model to train the YOLOv8 network model to obtain optimal weight data of the YOLOv8 network model, and obtaining a trained and upgraded YOLOv8 network model by loading the optimal weight data into the YOLOv8 network model again.

[0080] It should be noted that the YOLOv8 network model may be an existing network model. The YOLOv8 network model is a state-of-the-art (SOTA) model, which builds on the success of previous YOLO versions and introduces new features and improvements, including a new backbone network, a new anchor-free detection head, and a new loss function. The model runs on various hardware platforms, from CPU to GPU. The YOLOv8 model supports a full range of visual AI tasks, including detection, segmentation, pose estimation, tracking, and classification.

[0081] Step S3, fiducial marker detection: capturing binary dot-matrix code images of a mold surface to be tested by a camera to form a test set, inputting images of the test set to the trained and upgraded YOLOv8 network model, and outputting corner points of four bounding boxes of data of the test set by model training, each of the four bounding boxes is provided with 4 corner points, respectively, totaling 16 corner points.

[0082] Exemplarily, as shown in FIG. 3, the points labeled as 0 and 1 in the figure are the corner points of the bounding box.

[0083] In some embodiments, a process for identifying the binary dot matrix code on the mold surface further includes: capturing a mold to be tested by the camera at a preset shooting angle, and obtaining the binary dot matrix code images of the mold surface to be tested.

[0084] The mold to be tested refers to a mold for which the binary dot matrix code needs to be identified.

[0085] The shooting angle refers to a deviation angle between a shooting plane of the camera and the mold surface to be tested. The deviation angle refers to an angle between the shooting plane of the camera and the plane where the mold surface to be tested is located, or an angle between a normal vector of the shooting plane of the camera and a normal vector of the plane where the mold surface to be tested is located. For example, if the camera is directly facing the mold to be tested, it may be understood that the shooting plane of the camera is parallel to the plane where the mold surface to be tested is located, and thus the preset shooting angle is 0°.

[0086] The preset shooting angle refers to a shooting angle that is set in advance. In some embodiments, a value of the preset shooting angle is controlled within a preset angle range, which may ensure that the obtained binary dot matrix code images of the mold surface to be tested completely contain corner point regions used for the subsequent perspective correction and rotational correction. The corner point regions refer to regions covered by the corner points of the fiducial marker in the binary dot matrix code images. For example, the corner point regions include the four corner points of the bounding box of each fiducial marker. In some embodiments, the value of the preset shooting angle may not be greater than 30°.

[0087] In some embodiments, the preset shooting angle is determined by querying a first preset table based on a mold material of the mold to be tested, a surface roughness degree, and a dot matrix code engraving process.

[0088] The mold material refers to a material used for the mold surface. For example, the mold materials include stainless steel, aluminum alloy, blackened steel, or the like. The mold material may be determined by a technician or obtained from a third party (e.g., a mold manufacturer).

[0089] The surface roughness degree refers to a quantitative index of microscopic unevenness of the mold surface. The surface roughness degree may be obtained by a third-party inspection or a stylus profiler.

[0090] The dot matrix code engraving process refers to a forming process of binary dot matrix codes on the mold. For example, the dot matrix code engraving process includes laser etching, mechanical etching/stamping, inkjet/thermal transfer printing, or the like. The dot matrix code engraving process may be determined by a skilled person or obtained from a third party.

[0091] The first preset table includes a plurality of reference data sets, each set of the plurality of reference data sets includes a reference mold material, a reference surface roughness degree, a reference dot matrix code engraving process, and a plurality of corresponding reference shooting angles and a priority of each of the plurality of reference shooting angles.

[0092] The first preset table may be determined in various ways. For example, the first preset table may be predetermined by a skilled person based on experience. As another example, the first preset table may be determined experimentally.

[0093] In some embodiments, the processor designates a corresponding reference shooting angle of a reference mold material, a reference surface roughness degree, and a reference dot matrix code engraving process corresponding to a current mold material, a current mold surface roughness degree, and a current dot matrix code engraving process in the first preset table as the preset shooting angle.

[0094] In some embodiments of the present disclosure, the processor performs a process for identifying the binary dot matrix code on the mold surface. The processor is configured to process data/information related to the identification of the binary dot matrix code for the mold surface. In some embodiments, the processor is also configured to control the mold to perform work, for example, the processor is configured to control the mold to perform die-casting, injection molding, and other tasks. Exemplarily, the processor may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), etc., or any combination thereof.

[0095] In some embodiments, the process for identifying the binary dot matrix code on the mold surface further includes determining the plurality of the reference shooting angles and a priority for each of the plurality of reference shooting angles of each set of the plurality of reference data sets.

[0096] In some embodiments, the determining the plurality of the reference shooting angles and the priority for each of the plurality of reference shooting angles of each set of the plurality of reference data sets includes: for the reference mold corresponding to the reference mold material, the reference surface roughness degree, and the reference dot matrix code engraving process, obtaining a plurality of the binary dot matrix images of the reference mold at a plurality of shooting angles, and determining a detection result of the binary dot matrix code images; dividing a plurality of successful connectivity domains and a plurality of failed connectivity domains based on the detection result; determining, for each of the plurality of successful connectivity domains, an effective identification value of each of the plurality of successful connectivity domains, based on an area of an inner circle of the successful connectivity domain and an area of a plurality of failed connectivity domains adjacent to the successful connectivity domain; determining a target successful connectivity domain based on the effective identification value; determining the plurality of the reference shooting angles and the priority for each of the plurality of reference shooting angles based on the target successful connectivity domain.

[0097] The reference mold refers to a mold used for experiments for determining the first preset table.

[0098] The reference data refers to data associated with the reference mold. For example, the reference mold material, the reference surface roughness degree, and the reference dot matrix code engraving process corresponds to the mold material, the surface roughness degree, and the dot matrix code engraving process of the reference mold, respectively.

[0099] The plurality of the reference shooting angles may be randomly selected.

[0100] In some embodiments, the fiducial markers on the four corners of the binary dot matrix code images are detected, and the validity of the encoded content is verified by a checksum or verification mechanism in the detection result. If the encoded content is valid, the detection result is successful, and the camera position at the shooting angle corresponding to the binary dot matrix code image is taken as a successful position. If the encoded content is invalid, the detection result is a failure, and the camera position under the shooting angle corresponding to the binary dot matrix code image is taken as the failed position.

[0101] Based on the detection result, the processor may divide multiple successful connectivity domains and multiple failed connectivity domains by multiple algorithms. For example, a connectivity domain analysis algorithm using the two-pass algorithm may divide a plurality of successful positions and a plurality of failed positions into a plurality of successful connectivity domains and a plurality of failed connectivity domains, respectively.
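Paragraph [0101] mentions the two-pass connected-component (connectivity domain) algorithm; a compact sketch over a binary grid of successful positions is given below (the grid layout and function name are illustrative assumptions; 4-connectivity is assumed):

```python
def label_components(grid):
    """Minimal two-pass connected-component labelling (4-connectivity)
    over a binary grid, as used to group successful or failed camera
    positions into connectivity domains."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}

    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    nxt = 1
    for r in range(h):                # first pass: provisional labels
        for c in range(w):
            if not grid[r][c]:
                continue
            up = labels[r - 1][c] if r else 0
            left = labels[r][c - 1] if c else 0
            if up and left:
                labels[r][c] = min(up, left)
                parent[find(max(up, left))] = find(min(up, left))
            elif up or left:
                labels[r][c] = up or left
            else:
                labels[r][c] = parent[nxt] = nxt
                nxt += 1
    for r in range(h):                # second pass: resolve equivalences
        for c in range(w):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

Running the same routine on the complemented grid yields the failed connectivity domains.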

[0102] An effective identification value of the successful connectivity domains may be expressed as a ratio of the area of an inner circle of the successful connectivity domains to a sum of the areas of the plurality of failed connectivity domains adjacent to the successful connectivity domains. The larger the area of the inner circle of the successful connectivity domains, and the smaller the area of the plurality of failed connectivity domains adjacent to the successful connectivity domains, the larger the effective identification value of the successful connectivity domains.
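The ratio described above can be sketched in one function (the name and the infinity convention for an empty neighbourhood are illustrative assumptions):

```python
def effective_identification_value(inner_circle_area, adjacent_failed_areas):
    """Ratio of a successful domain's inner-circle area to the summed
    areas of its adjacent failed domains; a domain with no adjacent
    failed domains is treated as maximally effective."""
    total_failed = sum(adjacent_failed_areas)
    return inner_circle_area / total_failed if total_failed else float("inf")
```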

[0103] In some embodiments, the processor may regard the successful connectivity domains with the largest effective identification value as the target successful connectivity domain.

[0104] Based on the target successful connectivity domain, the processor may determine the plurality of reference shooting angles in various ways. For example, for each target successful connectivity domain, a plurality of positions within the target successful connectivity domain are randomly selected, and the camera is placed at each of the plurality of positions, respectively. The deviation angle between the shooting plane of the camera corresponding to the plurality of positions and the reference mold surface is taken as the reference shooting angle.

[0105] In some embodiments, the processor may determine a geometric center point of the target successful connectivity domain and calculate the distance between the position of the reference shooting angle and the geometric center point. The priority may be determined based on the distance between the position of the preset shooting angle and the geometric center point. The priority of the preset shooting angle corresponding to the closest distance is set to the highest, and the priority of the preset shooting angle corresponding to the furthest distance is set to the lowest.
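The distance-based priority above is a simple sort toward the geometric center. A minimal sketch (function name and 2-D position tuples are illustrative assumptions):

```python
import math

def rank_by_distance(center, angle_positions):
    """Order candidate shooting-angle positions by distance to the
    geometric centre of the target successful connectivity domain;
    the closest position gets the highest priority (index 0)."""
    return sorted(angle_positions, key=lambda p: math.dist(p, center))
```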

[0106] In some embodiments, when using the first preset table, a reference shooting angle with the highest priority may be preferentially selected.

[0107] Based on factors affecting reflection such as the mold material and the surface roughness degree, and the dot matrix code engraving process, controlling the shooting angle when capturing the binary dot matrix code images can ensure the identification effect, which helps to improve the effectiveness of the binary dot matrix code images and the stability of the model identification from the source.

[0108] In some embodiments, the process for identifying the mold surface binary dot matrix code also includes: in response to obtaining the binary dot matrix code images captured by the camera, determining the detection result of the binary dot matrix code images; in response to the detection result being a failure, recording the current shooting angle into an invalid data set; querying the first preset table for the preset shooting angles not in the invalid data set based on the mold material of the mold to be tested, the mold surface roughness degree, and the dot matrix code engraving process; in response to the first preset table containing a reference shooting angle not in the invalid data set, obtaining a reference shooting angle with the highest priority corresponding to the reference mold material, the reference surface roughness degree, and the reference dot matrix code engraving process in the first preset table as an updated shooting angle; controlling the camera to capture the binary dot matrix code images of the mold surface to be tested at the updated shooting angle; in response to the first preset table not containing a reference shooting angle not in the invalid data set, clearing the invalid data set and controlling the manipulator to grasp the mold to a preset area for re-inspection; in response to the detection result being a success, clearing the invalid data set and executing subsequent operations.

[0109] The invalid data set refers to a data set of shooting angles that have already been used. For example, the first preset table stores a plurality of reference shooting angles corresponding to the reference mold material, the reference surface roughness degree, and the reference dot matrix code engraving process of each reference mold. If the reference shooting angles in the first preset table include shooting angles A, B, and C, and shooting angle A has already been used, then the invalid data set includes shooting angle A, and shooting angles B and C (the priority corresponding to shooting angle B is higher than that corresponding to shooting angle C) are the reference shooting angles not in the invalid data set.

[0110] In some embodiments, the processor, in response to the first preset table containing a reference shooting angle not in the invalid data set (e.g., shooting angles B and C), takes the shooting angle B with the highest priority as the updated shooting angle, and controls the camera to capture the binary dot matrix code images of the mold surface to be tested at the updated shooting angle (e.g., shooting angle B).

[0111] If the reference shooting angles in the first preset table include shooting angles A, B, and C, and shooting angles A, B, and C have all been used, then the invalid data set includes shooting angles A, B, and C. At this time, there is no reference shooting angle that is not in the invalid data set.

[0112] In some embodiments, the processor, in response to the first preset table not containing a reference shooting angle not in the invalid data set, clears the invalid data set and controls the manipulator to grasp the mold to the preset area for re-inspection.

[0113] The preset area may be determined by a technician based on experience.

[0114] The re-inspection is performed by a technician. For example, a technician identifies the binary dot matrix code on the mold surface by scanning.

[0115] By determining the shooting angle based on the mold material, the surface roughness degree, and the dot matrix code engraving process, when the identification accuracy of the binary dot matrix code images captured at this shooting angle is insufficient, a secondary calibration of the shooting angle or mechanical sorting may be automatically triggered, which can further prevent low-quality data from flowing into the production line.
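The shooting-angle selection and fallback flow described above can be sketched as follows. The table structure, keys, and function names are illustrative assumptions, not the actual implementation:

```python
def next_shooting_angle(preset_table, key, invalid_angles):
    """Return the highest-priority reference angle not in the invalid data
    set, or None when every reference angle has been tried."""
    # preset_table maps (mold material, roughness, engraving process) to a
    # list of reference shooting angles sorted by descending priority.
    for angle in preset_table.get(key, []):
        if angle not in invalid_angles:
            return angle
    return None


def capture_with_fallback(preset_table, key, capture, detect):
    """Retry capture at successively lower-priority angles; return None
    (manual re-inspection in the preset area) when all angles fail."""
    invalid = set()
    while True:
        angle = next_shooting_angle(preset_table, key, invalid)
        if angle is None:
            return None          # no angle left: route mold to re-inspection
        image = capture(angle)
        if detect(image):
            return image         # detection succeeded: clear and continue
        invalid.add(angle)       # record the failed angle, try the next one
```

A successful detection clears the invalid data set implicitly here, since the set is local to one capture session.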

[0116] Step S4, calculating peripheral vertices: filtering the 16 corner points by a filtering strategy, and calculating positional coordinates of vertices at the outermost periphery of the four bounding boxes.

[0117] Exemplarily, as shown in FIG. 4, the points marked as 1 are vertices at the outermost periphery of the four bounding boxes. The positions of the vertices at the outermost periphery of the four bounding boxes correspond to the topmost, bottommost, leftmost, and rightmost points of an entire binary dot matrix code pattern, respectively.

[0118] In some embodiments, the filtering strategy is categorized into two types: a first filtering strategy and a second filtering strategy.

[0119] A filtering process of the first filtering strategy is as follows: [0120] Step S4-a1: constructing the 16 corner points of the four bounding boxes outputted by the YOLOv8 network model in the step S3 into a set P, [0121] wherein each corner point is denoted as a_ij, the coordinate of each corner point is denoted as (x_ij, y_ij), i denotes the number of the bounding box, with i ∈ {1, 2, 3, 4}, and j denotes the j-th of the four corner points of the bounding box, with j ∈ {1, 2, 3, 4}.

[0122] It should be noted that the coordinates mentioned in the present disclosure refer to digital image coordinates. In this system, the coordinate origin (0, 0) is at the top-left corner. This is different from conventional two-dimensional coordinates, where the origin is at the bottom-left corner.

[0123] The coordinates mentioned in the present disclosure are pixel coordinates of a digital image. In the digital image, the origin of the pixel coordinates (0,0) is usually located in an upper left corner of the image. The coordinate x represents a horizontal position of the pixel: x=0 is a leftmost side of the image, and the pixel moves to the right as the value of x increases. The coordinate y represents a vertical position of the pixel: y=0 is a topmost side of the image, and the pixel moves downward as the value of y increases.

[0124] FIG. 5 is a schematic diagram illustrating a pixel coordinate system of a digital image according to some embodiments of the present disclosure.

[0125] Exemplarily, as shown in FIG. 5, the gray area is a digital image. The point labeled as R is a pixel coordinate (0,0). X indicates the X axis (the horizontal direction), and Y indicates the Y axis (the vertical direction).

[0126] Step S4-a2: coordinate information of the four corner points of the first bounding box includes: [0127] a coordinate of a_11 denoted as (x_11, y_11), a coordinate of a_12 denoted as (x_12, y_12), a coordinate of a_13 denoted as (x_13, y_13), and a coordinate of a_14 denoted as (x_14, y_14). [0128] Step S4-a3: assuming that the positional coordinates of the vertices at the outermost periphery of the four bounding boxes are Q_1(x_Q1, y_Q1), Q_2(x_Q2, y_Q2), Q_3(x_Q3, y_Q3), and Q_4(x_Q4, y_Q4), where Q_1 is located at the topmost part of the binary dot matrix code pattern, Q_2 at the bottommost part, Q_3 at the leftmost part, and Q_4 at the rightmost part, the coordinates of the individual vertices are calculated as shown in equations (1)-(8) below. [0129] Q_1 is located at the topmost part of the binary dot matrix code pattern, i.e., its vertical coordinate y_Q1 is the smallest vertical coordinate in the set P, as shown in equations (1) and (2):

[00023]
$$y_{Q_1} = \min(y_{ij}), \quad i, j \in [1, 4] \tag{1}$$
$$Q_1 = \{(x_{ij}, y_{ij}) \in P \mid y_{ij} = y_{Q_1}\}, \quad i, j \in [1, 4] \tag{2}$$

[0130] Q_2 is located at the bottommost part of the binary dot matrix code pattern, i.e., its vertical coordinate y_Q2 is the largest vertical coordinate in the set P, as shown in equations (3) and (4):

[00024]
$$y_{Q_2} = \max(y_{ij}), \quad i, j \in [1, 4] \tag{3}$$
$$Q_2 = \{(x_{ij}, y_{ij}) \in P \mid y_{ij} = y_{Q_2}\}, \quad i, j \in [1, 4] \tag{4}$$

[0131] Q_3 is located at the leftmost part of the binary dot matrix code pattern, i.e., its horizontal coordinate x_Q3 is the smallest horizontal coordinate in the set P, as shown in equations (5) and (6):

[00025]
$$x_{Q_3} = \min(x_{ij}), \quad i, j \in [1, 4] \tag{5}$$
$$Q_3 = \{(x_{ij}, y_{ij}) \in P \mid x_{ij} = x_{Q_3}\}, \quad i, j \in [1, 4] \tag{6}$$

[0132] Q_4 is located at the rightmost part of the binary dot matrix code pattern, i.e., its horizontal coordinate x_Q4 is the largest horizontal coordinate in the set P, as shown in equations (7) and (8):

[00026]
$$x_{Q_4} = \max(x_{ij}), \quad i, j \in [1, 4] \tag{7}$$
$$Q_4 = \{(x_{ij}, y_{ij}) \in P \mid x_{ij} = x_{Q_4}\}, \quad i, j \in [1, 4] \tag{8}$$
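The first filtering strategy amounts to four extreme-value selections over the 16 corner points. A minimal sketch in Python (pixel coordinates, y growing downward, per the coordinate system described above):

```python
def outermost_vertices(points):
    """First filtering strategy (equations (1)-(8)): select the topmost,
    bottommost, leftmost, and rightmost of the corner points.
    Points are (x, y) pixel coordinates with y increasing downward."""
    q1 = min(points, key=lambda p: p[1])  # Q1: smallest y (topmost)
    q2 = max(points, key=lambda p: p[1])  # Q2: largest y (bottommost)
    q3 = min(points, key=lambda p: p[0])  # Q3: smallest x (leftmost)
    q4 = max(points, key=lambda p: p[0])  # Q4: largest x (rightmost)
    return q1, q2, q3, q4
```

Applied to the 16 corner points of the worked example in paragraph [0186] later in this section, this returns a_23, a_41, a_11, and a_33 as Q_1 through Q_4.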

[0133] In some embodiments, a filtering process of the second filtering strategy is as follows: [0134] constructing the 16 corner points of the bounding boxes outputted by the YOLOv8 network model in the step S3 into a set W, wherein the coordinate of each point w_m is (x_m, y_m).

[0135] For any point w_m, the distance between w_m and each of the other 15 corner points w_n (m ≠ n) is defined as d(w_m, w_n), which is calculated using the Euclidean distance equation (9):

[00027]
$$d(w_m, w_n) = \sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}, \quad m, n \in [1, 16], \; m \neq n \tag{9}$$

[0136] The sum of distances between the point w_m and the other 15 corner points is calculated using the following equation (10):

[00028]
$$S_{w_m} = \sum_{n=1, n \neq m}^{16} d(w_m, w_n), \quad m, n \in [1, 16] \tag{10}$$

[0137] By comparing all S_wm, the points corresponding to the 4 largest S_wm are selected as the 4 peripheral vertices of the binary dot matrix code, expressed as follows: [0138] assuming that all S_wm are sorted in descending order to obtain the sorted sequence (11):

[00029]
$$S_{w_1} \geq S_{w_2} \geq \cdots \geq S_{w_{16}} \tag{11}$$

[0139] the set P corresponding to the points with the 4 largest S_wm is selected as:

[00030]
$$P = \{w_1, w_2, w_3, w_4\} \tag{12}$$
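The second filtering strategy can be sketched as a pairwise-distance sum followed by a top-4 selection:

```python
import math


def peripheral_vertices_by_distance(points):
    """Second filtering strategy (equations (9)-(12)): for each corner point,
    sum its Euclidean distances to all other points, then keep the 4 points
    with the largest sums as the peripheral vertices."""
    sums = []
    for m, (xm, ym) in enumerate(points):
        s_wm = sum(math.hypot(xm - xn, ym - yn)
                   for n, (xn, yn) in enumerate(points) if n != m)
        sums.append((s_wm, m))
    sums.sort(reverse=True)                  # descending S_wm
    return [points[m] for _, m in sums[:4]]  # points with the 4 largest sums
```

Applied to the 16 points of the worked example in paragraph [0192] later in this section, this yields {w_9, w_11, w_7, w_13}.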

[0140] The vertices at the outermost periphery of the four bounding boxes may be selected by either of the above two processes.

[0141] Step S5, perspective correction: obtaining a distorted quadrilateral based on the positional coordinates of the vertices at the outermost periphery of the four bounding boxes, and converting the distorted quadrilateral into a standard L×L square pattern by perspective transformation.

[0142] In some embodiments, the perspective transformation is a transformation process that maps a pixel coordinate (u, v) on the original two-dimensional plane, through an intermediate homogeneous three-dimensional coordinate, to a corrected two-dimensional plane pixel coordinate (x′, y′).

[0143] The transformation equation (13) for perspective transformation is as follows:

[00031]
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{13}$$

[0144] wherein (u, v) is the coordinate of a point of the original image in the two-dimensional plane, and (x, y, z) is the converted homogeneous three-dimensional coordinate.

[0145] The perspective transformation matrix

[00032]
$$\begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}$$

is split into 3 parts: the submatrix

[00033]
$$\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$$

performs the linear transformation (rotation, scaling, and shearing), the column

[00034]
$$\begin{pmatrix} b_{13} \\ b_{23} \end{pmatrix}$$

performs the translation operation, and the row (b_31, b_32) produces the perspective effect.

[0146] A pixel coordinate (u, v) is transformed to the homogeneous coordinate (x, y, z) and then projected back onto a new two-dimensional plane to obtain the corrected coordinate (x′, y′), as shown in equations (14) and (15):

[00035]
$$x' = \frac{x}{z} = \frac{b_{11}u + b_{12}v + b_{13}}{b_{31}u + b_{32}v + b_{33}} \tag{14}$$
$$y' = \frac{y}{z} = \frac{b_{21}u + b_{22}v + b_{23}}{b_{31}u + b_{32}v + b_{33}} \tag{15}$$

[0147] Assuming that the size of the standard binary dot matrix code is L, the 4 peripheral vertices of the standard binary dot matrix code defined on the new two-dimensional plane are Q′_1(0, 0), Q′_2(L, 0), Q′_3(0, L), and Q′_4(L, L); through the step S4, the 4 vertices Q_1, Q_2, Q_3, and Q_4 of the binary dot matrix code are obtained, and their corresponding points after the perspective transformation are Q′_1, Q′_2, Q′_3, and Q′_4.

[0148] Here is an embodiment of the perspective transformation: assuming that the size of the standard binary dot matrix code pattern is L=300, then the specification of the binary dot matrix code is 300×300, and the coordinates of the 4 standard vertices are Q′_1(0, 0), Q′_2(300, 0), Q′_3(0, 300), and Q′_4(300, 300), respectively.

[0149] Assuming that the coordinates of the peripheral vertices Q.sub.1, Q.sub.2, Q.sub.3, Q.sub.4 of the original binary dot matrix code pattern are: [0150] Q.sub.1(23.3625,10.4983), Q.sub.2(185.0730,206.0371), Q.sub.3(2.3291,189.7386), and Q.sub.4(211.6296,26.8318).

[0151] By substituting the peripheral vertices Q_1, Q_2, Q_3, and Q_4 of the original binary dot matrix code pattern and the peripheral vertices Q′_1, Q′_2, Q′_3, and Q′_4 of the standard binary dot matrix code pattern after the perspective transformation into the equation (13), the perspective transformation matrix may be obtained as follows:

[00036]
$$\begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} = \begin{bmatrix} 1.5798 & 0.18538 & -38.854 \\ -0.13943 & 1.6072 & -13.615 \\ 2.715 \times 10^{-5} & -1.584 \times 10^{-4} & 1 \end{bmatrix}$$
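The matrix in the example above can be checked numerically. The sketch below applies it to a source point via equations (13)-(15); as the numbers confirm, the matrix acts on column vectors [u, v, 1]^T (i.e., x = b_11·u + b_12·v + b_13, and so on):

```python
# Perspective transformation matrix from the worked example above.
B = [
    [1.5798,     0.18538,    -38.854],
    [-0.13943,   1.6072,     -13.615],
    [2.715e-05,  -0.0001584,  1.0],
]


def warp_point(B, u, v):
    """Map a source pixel (u, v) through equation (13), then normalize by z
    per equations (14)-(15) to get the corrected coordinate (x', y')."""
    x = B[0][0] * u + B[0][1] * v + B[0][2]
    y = B[1][0] * u + B[1][1] * v + B[1][2]
    z = B[2][0] * u + B[2][1] * v + B[2][2]
    return x / z, y / z
```

For instance, warping the topmost vertex Q_1(23.3625, 10.4983) gives approximately (0, 0), i.e., a corner of the 300×300 standard pattern.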

[0152] Step S6, calculating a counterclockwise rotation angle: inputting the square pattern obtained in the step S5 into the trained and upgraded YOLOv8 network model of the step S2 for detection, and outputting the four corner points of the bounding box corresponding to a fiducial marker of the square pattern; [0153] obtaining a coordinate of a center point G of the bounding box based on the four corner points of the bounding box; [0154] comparing the coordinate of the center point G of the bounding box with the coordinate of the center point O(L/2, L/2) of the square pattern obtained in the step S5 to obtain a counterclockwise rotation angle θ of the square pattern.

[0155] In some embodiments, the process for obtaining the counterclockwise rotation angle θ is as follows: [0156] Step S6-1: assuming that the four corner points of the bounding box corresponding to the fiducial marker of the square pattern are denoted as T_r with coordinates (x_r, y_r), wherein r ∈ [1, 4], the coordinate of the center point G(x_G, y_G) of the bounding box is calculated by equations (16) and (17):

[00038]
$$x_G = \frac{\sum_{r=1}^{4} x_r}{4} \tag{16}$$
$$y_G = \frac{\sum_{r=1}^{4} y_r}{4} \tag{17}$$

[0157] Step S6-2: determining the counterclockwise rotation angle θ of the square pattern by comparing the coordinate of the center point G(x_G, y_G) of the bounding box with the coordinate of the center point O(L/2, L/2) of the square pattern of the step S5, wherein the corresponding relationship is as follows:

[00040]
$$\theta = \begin{cases} 0°, & x_G > \frac{L}{2} \text{ and } y_G > \frac{L}{2} \\ 90°, & x_G < \frac{L}{2} \text{ and } y_G > \frac{L}{2} \\ 180°, & x_G < \frac{L}{2} \text{ and } y_G < \frac{L}{2} \\ 270°, & x_G > \frac{L}{2} \text{ and } y_G < \frac{L}{2} \end{cases} \tag{18}$$

[0158] Here is an embodiment of calculating the rotation angle. Assuming that the specification of the square pattern is 100×100 and the coordinate of the bounding box center point is G(40, 80), then L/2 = 50, and the following relationships hold for the coordinate: x_G < L/2 and y_G > L/2.

[0159] Therefore, it may be obtained that θ = 90°.
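Equation (18) is a simple quadrant test on the marker center relative to the pattern center. A sketch (pixel coordinates, y growing downward; the boundary case x_G = L/2 or y_G = L/2 is not covered by equation (18) and is left out here as well):

```python
def rotation_angle(x_g, y_g, L):
    """Counterclockwise rotation angle (degrees) from the quadrant of the
    fiducial-marker center G(x_g, y_g) relative to the pattern center
    O(L/2, L/2), per equation (18)."""
    half = L / 2
    if x_g > half and y_g > half:
        return 0
    if x_g < half and y_g > half:
        return 90
    if x_g < half and y_g < half:
        return 180
    return 270  # x_g > half and y_g < half
```

With the worked example above, rotation_angle(40, 80, 100) returns 90.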

[0160] Step S7, rotational correction: rotating the square pattern of the step S5 counterclockwise by the angle θ by affine transformation to align the square pattern to a correct position to obtain a standard pattern of the binary dot matrix code images of the test set.

[0161] In some embodiments, the operations for obtaining the standard pattern of the binary dot matrix code images of the test set are as follows: [0162] assuming that a point U(x_u, y_u) on the binary dot matrix code pattern after the perspective correction in the step S5 is rotated counterclockwise by the angle θ, a rotation process is as follows: [0163] calculating the coordinate (x_c, y_c) of the point U(x_u, y_u) with respect to the center point O(L/2, L/2):

[00044]
$$x_c = x_u - \frac{L}{2} \tag{19}$$
$$y_c = y_u - \frac{L}{2} \tag{20}$$

[0164] applying a rotation matrix R to rotate (x_c, y_c) to obtain a coordinate (x_h, y_h):

[00045]
$$R = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} \tag{21}$$
$$\begin{bmatrix} x_h \\ y_h \end{bmatrix} = R \begin{bmatrix} x_c \\ y_c \end{bmatrix} \tag{22}$$

[0165] translating (x_h, y_h) back to the original position to obtain a final rotation point U′(x_new, y_new):

[00046]
$$x_{new} = x_h + \frac{L}{2} \tag{23}$$
$$y_{new} = y_h + \frac{L}{2} \tag{24}$$

[0166] by combining the above steps, the coordinate U′(x_new, y_new) obtained by rotating the point U(x_u, y_u) about the center point by the angle θ is:

[00047]
$$x_{new} = \cos(\theta)\left(x_u - \frac{L}{2}\right) - \sin(\theta)\left(y_u - \frac{L}{2}\right) + \frac{L}{2} \tag{25}$$
$$y_{new} = \sin(\theta)\left(x_u - \frac{L}{2}\right) + \cos(\theta)\left(y_u - \frac{L}{2}\right) + \frac{L}{2} \tag{26}$$
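The translate-rotate-translate composition of equations (19)-(24) can be sketched directly:

```python
import math


def rotate_about_center(x_u, y_u, L, theta_deg):
    """Rotate the point U(x_u, y_u) counterclockwise by theta about the
    pattern center O(L/2, L/2), per equations (19)-(24)."""
    t = math.radians(theta_deg)
    x_c, y_c = x_u - L / 2, y_u - L / 2          # translate to the center
    x_h = math.cos(t) * x_c - math.sin(t) * y_c  # apply rotation matrix R
    y_h = math.sin(t) * x_c + math.cos(t) * y_c
    return x_h + L / 2, y_h + L / 2              # translate back
```

For instance, rotating U(40, 80) by 90° in a 100×100 pattern yields (20, 40).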

[0167] Step S8, code point identification: dividing the standard pattern into image blocks with the same size in equal portions based on a dimension of the binary code dots; and inputting the image blocks into the trained and upgraded YOLOv8 network model of the step S2 sequentially in the order of top to bottom and left to right for identification;

[0168] in response to determining that a code point exists in an image block, outputting 1, and in response to determining that there is no code point in an image block, outputting 0; [0169] finally, concatenating the identification results to obtain binary-encoded sequence data.

[0170] In some embodiments, the code point identification includes the following steps:

[0171] FIG. 6 is a schematic diagram of dividing image blocks according to some embodiments of the present disclosure.

[0172] Step S8-1, dividing a binary dot matrix code standard pattern into image blocks: based on the dimension of the binary code dots, the binary dot matrix code standard pattern rotated to the correct position in step S7 is divided into the image blocks with the same size in equal portions, as shown in FIG. 6. The dimension of the binary code dots refers to a quantity distribution of the binary code dots in the horizontal and vertical directions, which is the matrix dimension formed by the code points. When dividing a standard pattern into the image blocks with the same size in equal portions, the quantity of the binary code points in the horizontal and vertical directions may determine the overall size and structure of the pattern, which in turn affects the image block division process.

[0173] Step S8-2, identifying code points in the image blocks: identifying circular binary code points of the image blocks sequentially in the order of top to bottom and left to right by using the trained and upgraded YOLOv8 network model in the step S2; in response to determining that a code point exists in an image block, outputting 1, and in response to determining that there is no code point in an image block, outputting 0, thereby identifying and obtaining a binary coding sequence of 0s and 1s.
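The block division and scan order can be sketched as follows. A mean-intensity threshold stands in for the YOLOv8 code-point detector, and the row-major scan (rows top to bottom, columns left to right within each row) is an assumption about the intended reading order:

```python
def read_code_points(pattern, n, detect=None):
    """Divide an L-by-L pattern (2-D list of pixel values in [0, 1]) into an
    n-by-n grid of equal blocks and concatenate one bit per block:
    '1' when a code point is detected, '0' otherwise."""
    if detect is None:
        # Placeholder detector: a block "contains a code point" when its
        # mean intensity exceeds 0.5 (the method itself uses YOLOv8 here).
        def detect(block):
            flat = [p for row in block for p in row]
            return sum(flat) / len(flat) > 0.5
    L = len(pattern)
    step = L // n
    bits = []
    for r in range(n):          # rows, top to bottom
        for c in range(n):      # columns, left to right
            block = [row[c * step:(c + 1) * step]
                     for row in pattern[r * step:(r + 1) * step]]
            bits.append('1' if detect(block) else '0')
    return ''.join(bits)
```

Passing the real detector in as `detect` keeps the division and concatenation logic independent of the model.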

[0174] Step S9, decoding: decoding the binary-encoded sequence data based on binary-encoded decoding rules, and deriving corresponding binary-encoded character information.
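The decoding rules themselves are application-specific and not fixed by the method. As an illustrative assumption only, a sketch that treats each 8-bit group of the sequence as an ASCII character code:

```python
def decode_bits(bits):
    """Decode binary-encoded sequence data into character information,
    assuming (purely for illustration) 8-bit ASCII groups; the actual
    binary-encoded decoding rules are defined by the application."""
    if len(bits) % 8 != 0:
        raise ValueError("bit sequence length must be a multiple of 8")
    return ''.join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))
```

For example, decode_bits("0100110101000100") returns "MD".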

[0175] In some embodiments, the mold surface binary dot matrix code identification process further includes: querying a second preset table based on the binary-encoded character information in the step S9 to determine working parameters; the working parameters including a heating rate of a heating unit and a hydraulic pressure of a stamping press hydraulic valve; and based on the working parameters, controlling the heating unit to heat the mold at the heating rate, and adjusting the stamping press hydraulic valve to the hydraulic pressure.

[0176] The binary-encoded character information refers to readable character data obtained by decoding the binary dot matrix code, which is used for the full lifecycle management of the mold (e.g., the working parameters such as controlling heating and stamping).

[0177] The second preset table includes a corresponding relationship between the binary-encoded character information and the working parameters. In some embodiments, the second preset table is preset based on experience.

[0178] The working parameters refer to parameters related to the work execution of the mold. The working parameters may include the heating rate of the heating unit and the hydraulic pressure of the stamping press hydraulic valve. The working parameters may also include the heating temperature of the mold, the molding frequency, or the like.

[0179] The heating unit refers to a device used for heating the mold. For example, the heating unit includes a heating plate, a ceramic heater, an oil temperature machine, or the like.

[0180] The stamping press refers to a device that performs stamping and forming of molds driven by hydraulic pressure. In some embodiments, according to binary-encoded character information, the working parameters of the stamping press (e.g., hydraulic pressure, stroke) may be adjusted to adapt to the production needs of different molds.

[0181] The stamping press hydraulic valve refers to a component that adjusts the hydraulic pressure applied to the mold and is the core control element of the stamping press power system. For example, the stamping press hydraulic valve includes a proportional relief valve, a servo flow valve, a high-frequency directional valve, or the like.

[0182] In some embodiments, the processor, based on the working parameters, controls the heating unit to heat the mold at the heating rate corresponding to the working parameter, and adjusts the stamping press hydraulic valve to the hydraulic pressure corresponding to the working parameter.
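The second-preset-table lookup described above can be sketched as a simple mapping. The keys and parameter names below are hypothetical, not taken from the disclosure:

```python
# Hypothetical second preset table: decoded character information ->
# working parameters (names and values are illustrative only).
SECOND_PRESET_TABLE = {
    "MOLD-A1": {"heating_rate_c_per_min": 5.0, "hydraulic_pressure_mpa": 12.0},
    "MOLD-B2": {"heating_rate_c_per_min": 3.5, "hydraulic_pressure_mpa": 15.0},
}


def working_parameters(char_info):
    """Query the second preset table for the working parameters that
    correspond to the decoded binary-encoded character information."""
    try:
        return SECOND_PRESET_TABLE[char_info]
    except KeyError:
        raise KeyError(f"no working parameters preset for {char_info!r}")
```

The processor would then drive the heating unit and the stamping press hydraulic valve from the returned values.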

[0183] The embodiments of the present disclosure, based on the binary-encoded character information, determine the working parameters, and then, based on the working parameters, control the mold to perform production operations, which can avoid damage to the mold caused by inappropriate working parameters.

[0184] The specific embodiments of the two types of filtering processes in the step S4 of the embodiments of the present disclosure are as follows:

[0185] For the first filtering process, assuming that after the fiducial marker detection of the binary dot matrix code image, the YOLOv8 network outputs coordinates of the 16 corner points from 4 bounding boxes as follows:

[0186] The first bounding box: a_11(2.3291, 189.7386), a_12(31.8235, 190.5836), a_13(32.6759, 160.8312), a_14(3.1815, 159.9861); the second bounding box: a_21(55.0101, 42.9026), a_22(56.1172, 11.6590), a_23(23.3625, 10.4983), a_24(22.2553, 41.7419); the third bounding box: a_31(180.4171, 57.0028), a_32(210.5101, 58.0810), a_33(211.6296, 26.8318), a_34(181.5367, 25.7537); the fourth bounding box: a_41(185.0730, 206.0371), a_42(185.9736, 177.9098), a_43(154.1791, 176.8919), a_44(153.2786, 205.0192).

[0187] The vertex filtering rule is applied to the coordinates of the 16 corner points to obtain: [0188] Q_1 is located at the topmost part of the binary dot matrix code pattern, i.e., its vertical coordinate y_Q1 is the smallest vertical coordinate in the set P, and the corresponding corner point is a_23(23.3625, 10.4983). [0189] Q_2 is located at the bottommost part of the binary dot matrix code pattern, i.e., its vertical coordinate y_Q2 is the largest vertical coordinate in the set P, and the corresponding corner point is a_41(185.0730, 206.0371). [0190] Q_3 is located at the leftmost part of the binary dot matrix code pattern, i.e., its horizontal coordinate x_Q3 is the smallest horizontal coordinate in the set P, and the corresponding corner point is a_11(2.3291, 189.7386). [0191] Q_4 is located at the rightmost part of the binary dot matrix code pattern, i.e., its horizontal coordinate x_Q4 is the largest horizontal coordinate in the set P, and the corresponding corner point is a_33(211.6296, 26.8318).

[0192] For the second filtering process, an embodiment is given for detailed description: assuming that after the fiducial marker detection of the binary dot matrix code image, the YOLOv8 network outputs the coordinates of the 16 corner points of the four bounding boxes as follows: w_1(9,3), w_2(6,9), w_3(1,5), w_4(7,10), w_5(3,7), w_6(3,10), w_7(9,2), w_8(7,6), w_9(5,0), w_10(2,6), w_11(1,10), w_12(3,6), w_13(8,2), w_14(9,7), w_15(3,5), w_16(7,8).

[0193] The sum of distances S_wm between the point w_m and the other points w_n (retaining two decimal places) is shown as follows:

[0194] The sum of distances between the point w_1(9,3) and the other points is 90.35; the sum of distances between the point w_2(6,9) and the other points is 72.77; the sum of distances between the point w_3(1,5) and the other points is 84.92; the sum of distances between the point w_4(7,10) and the other points is 86.08; the sum of distances between the point w_5(3,7) and the other points is 66.07; the sum of distances between the point w_6(3,10) and the other points is 86.36; the sum of distances between the point w_7(9,2) and the other points is 97.78; the sum of distances between the point w_8(7,6) and the other points is 66.12; the sum of distances between the point w_9(5,0) and the other points is 108.03; the sum of distances between the point w_10(2,6) and the other points is 71.95; the sum of distances between the point w_11(1,10) and the other points is 101.11; the sum of distances between the point w_12(3,6) and the other points is 64.81; the sum of distances between the point w_13(8,2) and the other points is 90.86; the sum of distances between the point w_14(9,7) and the other points is 82.82; the sum of distances between the point w_15(3,5) and the other points is 67.90; the sum of distances between the point w_16(7,8) and the other points is 70.17.

[0195] By sorting all the S_wm, the points with the top 4 largest S_wm are selected as follows:

[0196] The sum of distances between the point w_9(5,0) and the other points is 108.03; the sum of distances between the point w_11(1,10) and the other points is 101.11; the sum of distances between the point w_7(9,2) and the other points is 97.78; and the sum of distances between the point w_13(8,2) and the other points is 90.86; therefore, P = {w_9, w_11, w_7, w_13}.

[0197] The embodiments of the present disclosure, through the YOLOv8 network model, perform fiducial marker detection on the mold surface binary dot matrix code images, and then, through a filtering strategy, calculate the outermost vertices of the bounding box. Subsequently, through perspective correction, calculating the counterclockwise rotation angle, rotational correction, code point identification, and decoding, the identification of mold binary codes with different lighting, rotation, and size variations is achieved, which can greatly improve the identification rate of mold surface binary codes.

[0198] The basic concepts have been described above, and it is apparent to those skilled in the art that the foregoing detailed disclosure is intended only as an example and does not constitute a limitation of the present disclosure. While not expressly stated herein, a person skilled in the art may make various modifications, improvements, and amendments to the present disclosure. Those types of modifications, improvements, and amendments are suggested in the present disclosure, so those types of modifications, improvements, and amendments remain within the spirit and scope of the exemplary embodiments of the present disclosure.

[0199] In addition, the order of processing elements and sequences, the use of numerical letters, or the use of other names described in the present disclosure are not intended to qualify the order of the processes and methods of the present disclosure, unless expressly stated in the claims. While some embodiments of the invention that are currently considered useful are discussed in the foregoing disclosure by way of various examples, it should be appreciated that such details serve only illustrative purposes, and that additional claims are not limited to the disclosed embodiments; rather, the claims are intended to cover all amendments and equivalent combinations that are consistent with the substance and scope of the embodiments of the present disclosure.

[0200] For each patent, patent application, patent application disclosure, and other materials cited in the present disclosure, such as articles, books, manuals, publications, documents, etc., the entire contents of which are hereby incorporated by reference herein. Application history documents that are inconsistent with or conflict with the contents of the present disclosure are excluded, as are documents (currently or hereafter appended to the present disclosure) that limit the broadest scope of the claims of the present disclosure. It should be noted that in the event of any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials appended to the present disclosure and those set forth herein, the descriptions, definitions, and/or use of terms in the present disclosure shall prevail.

[0201] Finally, it should be understood that the embodiments described in the present disclosure are used only to illustrate the principles of the embodiments of the present disclosure. Other variations may also fall within the scope of the present disclosure. As such, alternative configurations of embodiments of the present disclosure may be viewed as consistent with the teachings of the present disclosure as an example, not as a limitation. Correspondingly, the embodiments of the present disclosure are not limited to the embodiments expressly presented and described herein.