METHOD OF CAPTURING AND RECONSTRUCTING COURT LINES
20170337714 · 2017-11-23
CPC classification: G06T11/005 · G06V10/48 · G06V10/507 (PHYSICS)
Abstract
A method of extracting and reconstructing court lines includes the steps of binarizing a court image of a court including court lines to form a binary image; performing a horizontal projection of the binary image; searching for plural corners in the binary image and defining a court line range by the corners; forming plural linear segments from the image within the court line range by linear transformation; defining at least one first cluster and at least one second cluster according to the characteristics of the linear segments and categorizing the linear segments into plural groups; taking an average of each group as a standard court line and creating a linear equation of the standard court line to locate the points of intersection of the standard court lines; and reconstructing the court lines according to the points of intersection. This method can quickly extract the portion of the image containing the court lines from a dynamic or static image, eliminating interference from noise outside the court lines, such as the background color, ambient brightness, people, or advertisements, and can reconstruct the court lines quickly and accurately to facilitate determining the boundary of a court line or computing related data.
Claims
1. A method of extracting and reconstructing court lines, comprising the steps of: binarizing a court image of a court including a court line to form a binary image; searching for a plurality of corners in the binary image and defining a court line range by the corners; forming a plurality of linear segments from an image within the court line range by linear transformation; defining at least one first cluster and at least one second cluster according to the characteristics of the linear segments, and categorizing the linear segments into a plurality of groups according to the first cluster and the second cluster; taking an average of each group as a standard court line, and creating a linear equation of the standard court line to locate the position of a point of intersection of the standard court lines; and reconstructing the court line according to the point of intersection.
2. The method of extracting and reconstructing court lines according to claim 1, further comprising the sub-steps of performing a gradient computation of the court image to produce a horizontal gradient image and a vertical gradient image, and combining the horizontal gradient image and the vertical gradient image to form the binary image.
3. The method of extracting and reconstructing court lines according to claim 2, further comprising the sub-steps of defining a threshold according to the color of the court line, and forming the binary image by the threshold screening when the horizontal gradient image and the vertical gradient image are combined.
4. The method of extracting and reconstructing court lines according to claim 1, further comprising the sub-steps of: performing a horizontal projection of the binary image to form a first horizontal projection image; defining a range of the first horizontal projection image with a horizontal cumulative value greater than a cumulative threshold to be a search range, and searching for the corners in the search range.
5. The method of extracting and reconstructing court lines according to claim 4, further comprising the sub-steps of: using Equation 1 to filter out the noise of the first horizontal projection image to form a second horizontal projection image:
6. The method of extracting and reconstructing court lines according to claim 4, wherein the first horizontal projection image is formed by performing a horizontal projection at a middle third of the binary image.
7. The method of extracting and reconstructing court lines according to claim 1, further comprising the sub-steps of: setting the court line as a quadrilateral, and creating a binary search image and its search coordinates, and dividing the search image into an upper left blank, an upper right blank, a lower left blank and a lower right blank through the search coordinates; slidably searching the search image in the binary image, and performing a convolution of the binary image; and defining the maximum after convolution takes place as the corner.
8. The method of extracting and reconstructing court lines according to claim 7, wherein the convolution of the search image and the binary image is carried out by Equation 3:
O(i,j) = Σ_{s=−4}^{4} Σ_{t=−4}^{4} m(s,t) × p(i+s, j+t); (i*, j*) = argmax O(i,j) (Equation 3), wherein O(i,j) is a corner response; m(s,t) is a search image; and p(i,j) is a binary image.
9. The method of extracting and reconstructing court lines according to claim 1, further comprising the sub-steps of performing a thinning process after the image in the court line range is processed by a closing process, and then forming the linear segment by linear transformation.
10. The method of extracting and reconstructing court lines according to claim 1, further comprising a sub-step of performing a Hough transform of the court line in the court line range to form the linear segment.
11. The method of extracting and reconstructing court lines according to claim 1, further comprising the sub-steps of: using the first cluster to classify a horizontal segment in the linear segment according to the slope and the position of the Y-axis coordinate; and using the second cluster to classify a vertical segment in the linear segment according to the slope and the intercept.
12. The method of extracting and reconstructing court lines according to claim 11, wherein the court line is a tennis court line, and six first clusters and five second clusters are defined according to the characteristics of the linear segment.
13. The method of extracting and reconstructing court lines according to claim 1, wherein the point of intersection is used to reconstruct the court line according to the court line position by a line function.
14. The method of extracting and reconstructing court lines according to claim 1, wherein the court line is a tennis court line, and the standard court line is provided for computing 30 points of intersection.
15. The method of extracting and reconstructing court lines according to claim 14, further comprising the sub-steps of: setting the court image as a dynamic continuous image, defining a first constant value, a second constant value and a computing value, computing the distance value between the point of intersection of the current court image position and the point of intersection of the previous court image at the corresponding position, and increasing the computing value if the distance value is smaller than the first constant value, and computing an error threshold T.sub.e by Equation 4 if the computing value is greater than the second constant value:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0050] The technical contents of this disclosure will become apparent with the detailed description of preferred embodiments accompanied with the illustration of related drawings as follows. It is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than restrictive.
[0051] With reference to
[0052] S001: Capture a court image including a court line 1. It is noteworthy that the court image may include interference caused by a complicated background, including the audience, advertising signs, referees, a net, and players. To reduce the interference of the site background and the distortion of a compressed video, and to compensate for portions of the court line that are stepped on by players or worn out, an embodiment of the present invention adopts the Sobel algorithm for gradient computation to obtain a high-quality binary image 2 while converting the court image into the binary image 2, as detailed in Steps S101 to S110 shown in
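The gradient binarization of Step S001 can be sketched as follows. The 3×3 Sobel kernels are standard; the magnitude threshold used here is a hypothetical tuning parameter (the patent instead screens with a threshold defined according to the color of the court line, per claim 3):

```python
import numpy as np

def sobel_binarize(img, threshold=100.0):
    """Convert a grayscale court image into a binary image (Steps S101-S110).

    Computes horizontal and vertical Sobel gradient images, combines
    their magnitudes, and keeps pixels above `threshold` (an assumed
    tuning parameter, not the patent's color-derived threshold).
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for s in range(3):                 # correlate with each 3x3 kernel cell
        for t in range(3):
            win = pad[s:s + h, t:t + w]
            gx += kx[s, t] * win
            gy += ky[s, t] * win
    mag = np.hypot(gx, gy)             # combine the two gradient images
    return (mag > threshold).astype(np.uint8)
```

Bright court lines against a darker surface produce strong gradient magnitudes along their edges, so the surviving white pixels trace the line borders.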
[0053] S002: Perform a horizontal projection of the binary image 2 to form a first horizontal projection image 3 as shown in
[0054] Define a cumulative threshold, and define a range of the first horizontal projection image 3 having a horizontal cumulative value greater than the cumulative threshold as a search range 5, and locate a corner in the search range 5.
[0055] In a preferred embodiment, the noises of the first horizontal projection image 3 are filtered by the following Mathematical Equation 1 to form a second horizontal projection image 4 as shown in
[0056] Wherein, F_i is the second horizontal projection image 4; p_i is the horizontal cumulative value of the corresponding first horizontal projection image 3; μ is the average of the horizontal cumulative values of the first horizontal projection image 3; and σ is the standard deviation.
[0057] The cumulative threshold is defined by Mathematical Equation 2 as follows:
[0058] wherein, ρ is a magnification constant;
[0059] Define a range of the second horizontal projection image 4 having a horizontal cumulative value greater than the cumulative threshold as the search range 5.
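The projection and search-range selection can be sketched as below. The bodies of Mathematical Equations 1 and 2 are not reproduced in the source text, so this sketch assumes the cumulative threshold takes the form μ + ρσ (with ρ the magnification constant) and simply zeroes out rows below it; both choices are assumptions, not the patented formulas:

```python
import numpy as np

def horizontal_projection(binary):
    """Row-wise sum of white pixels: the horizontal cumulative value."""
    return binary.sum(axis=1).astype(float)

def search_range_from_projection(p, rho=1.0):
    """Pick the corner search range from a horizontal projection.

    Assumes the cumulative threshold is mu + rho*sigma (an assumed
    form of Equation 2) and suppresses rows below it (an assumed
    stand-in for the Equation 1 noise filter).
    """
    mu, sigma = p.mean(), p.std()
    threshold = mu + rho * sigma                # assumed form of Equation 2
    filtered = np.where(p > threshold, p, 0.0)  # noise-suppressed projection
    rows = np.nonzero(filtered)[0]
    if rows.size == 0:
        return filtered, None
    return filtered, (int(rows.min()), int(rows.max()))
```

Rows dense with court-line pixels stand out far above the image-wide mean, so the surviving row interval brackets the court and becomes the search range 5.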
[0060] In
[0061] In an embodiment, even if the binary image 2 is clear enough to distinguish the background from the court line 1, or the image contains the court line 1 only, or the corner of the court line 1 can be searched directly from the binary image 2, it is still preferable to process with horizontal projection, since distinguishing the background from the court line 1 is difficult in general.
[0062] In another embodiment, the horizontal projection for the first horizontal projection image 3 is performed on the middle third of the binary image 2, since in most court images that include audience and advertising the court line 1 occupies only the middle third of the court image; this expedites the computation. However, the present invention is not limited to such an arrangement.
[0063] S003: After the search range 5 is created, search for the corners of the court line 1 within the search range 5 of the binary image 2, and use the corners to surround and define a court line range.
[0064] In a preferred embodiment, the plurality of court lines 1 is arranged into a rectangular shape. Due to a possible angular deviation of the camera while capturing an image, the court line 1 may be distorted into a trapezoid or a general quadrilateral. In this embodiment, a binary search image 6 and its search coordinates are created, and the search image 6 is divided into an upper left blank 61, an upper right blank 62, a lower left blank 63 and a lower right blank 64 by the search coordinates. For example, the search image 6 of this embodiment is “” shaped.
[0065] In the binary image 2, a white pixel is represented by “1” and a black pixel by “0”; in the pattern (which is the search image 6), a white pixel above is represented by “1” and a black pixel below by “−1”. The search image 6 is slid across the binary image 2, and a convolution with the binary image 2 is performed as shown in
O(i,j) = Σ_{s=−4}^{4} Σ_{t=−4}^{4} m(s,t) × p(i+s, j+t);
(i*, j*) = argmax O(i,j); <Mathematical Equation 3>
Wherein, O(i,j) is the convolution response whose maximum locates a corner; m(s,t) is the search image 6; and p(i,j) is the binary image 2.
[0066] Therefore, the position of the maximum obtained after the convolution is defined as a corner, and the corners are used to define a court line range, so as to eliminate the noise outside the court line range. The image within the court line range then contains essentially only the court line 1.
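Mathematical Equation 3 can be sketched directly. A 9×9 template with +1 along the expected line arms and −1 elsewhere responds most strongly where both arms meet, which is the corner:

```python
import numpy as np

def find_corner(binary, template):
    """Locate one court-line corner by Equation 3.

    Slides the 9x9 search image m(s,t) over the binary image p(i,j),
    accumulates O(i,j) = sum_s sum_t m(s,t) * p(i+s, j+t), and returns
    (i*, j*) = argmax O(i,j). Border positions are skipped so the
    window stays inside the image.
    """
    h, w = binary.shape
    best, best_ij = -np.inf, None
    for i in range(4, h - 4):
        for j in range(4, w - 4):
            window = binary[i - 4:i + 5, j - 4:j + 5]
            score = float((template * window).sum())  # Equation 3
            if score > best:
                best, best_ij = score, (i, j)
    return best_ij, best
```

In practice one template per corner type (upper left, upper right, lower left, lower right) would be slid over the search range, matching the four blanks of the search image 6.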
[0067] S004: Search for the points of intersection 9 of the court lines in order to reconstruct the court line 1. Since the court line 1 photographed in the original court image may be covered by dust, or blocked because the lighting varies with time while the photos are taken, a closing process is applied to the image within the court line range as shown in Steps S201 to S206 of
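A minimal sketch of the closing process on a 0/1 uint8 image follows. The 3×3 structuring element is an assumption (the patent does not specify one); the thinning process and the Hough transform that follow are not shown here, though library implementations such as OpenCV's `cv2.HoughLinesP` are one common way to obtain the linear segments:

```python
import numpy as np

def dilate(b):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for s in range(3):
        for t in range(3):
            out |= p[s:s + b.shape[0], t:t + b.shape[1]]
    return out

def erode(b):
    """3x3 binary erosion: a pixel stays 1 only if all neighbours are 1."""
    p = np.pad(b, 1, constant_values=1)
    out = np.ones_like(b)
    for s in range(3):
        for t in range(3):
            out &= p[s:s + b.shape[0], t:t + b.shape[1]]
    return out

def close_gaps(b):
    """Closing = dilation followed by erosion; it bridges small worn or
    occluded breaks in the court line before thinning and the Hough
    transform are applied."""
    return erode(dilate(b))
```

A one-pixel break in a line, such as a spot worn away by play, is filled by the dilation and survives the subsequent erosion.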
[0068] S005: Due to the depression of the net and the fisheye effect of the camera, the image of the court line 1 is distorted, so segments with repetitions, superimpositions, or noise are produced after the Hough transform, and a filtering process is required. Therefore, the present invention defines at least one first cluster and at least one second cluster according to the characteristics of the linear segments 7 by K-means clustering, and classifies the linear segments 7 into a plurality of groups according to the first cluster and the second cluster. Since the tennis court line includes vertical lines and horizontal lines only, the first cluster is used to classify the horizontal segments in the linear segments 7 according to the slope and the position of the Y-axis coordinate, and the second cluster is used to classify the vertical segments according to the slope and the intercept.
[0069] Among the linear segments 7, the court line 1 of a tennis court includes six transverse lines and five vertical lines, so six first clusters and five second clusters are defined, and the horizontal segments and the vertical segments are marked in
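The grouping of near-duplicate segments can be sketched with a plain 1-D k-means. Quantile seeding is a deterministic simplification chosen for this sketch; the patent only states that K-means clustering is used, without specifying initialization:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Group near-duplicate segment parameters with 1-D k-means.

    `values` would be, e.g., the Y-axis positions of horizontal
    segments (k=6) or the intercepts of vertical segments (k=5).
    Quantile seeding keeps the sketch deterministic.
    """
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers
```

Repeated or superimposed Hough segments along the same physical line fall into one cluster, whose mean then serves as the standard court line of Step S006.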
[0070] S006: Take an average of each group as a standard court line 8, and create linear equations of the standard court line 8 as shown in
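Locating a point of intersection 9 from two standard-line equations is elementary. A minimal sketch, assuming each standard court line is expressed as y = ax + b:

```python
import numpy as np

def intersect(a1, b1, a2, b2):
    """Intersection of the lines y = a1*x + b1 and y = a2*x + b2.

    Each (slope, intercept) pair is the linear equation of one
    standard court line; parallel lines return None.
    """
    if np.isclose(a1, a2):
        return None
    x = (b2 - b1) / (a1 - a2)
    return (x, a1 * x + b1)
```

Intersecting each of the six horizontal standard lines with each of the five vertical ones yields the 30 points of intersection recited in claim 14.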
[0071] S007: With reference to
[0072] However, errors may occur in some situations while reconstructing the court line 1, so an error filtering step is required. If the court image is a dynamic continuous image and a reconstruction error occurs, the distance between each point of intersection 9 and the corresponding point in the adjacent previous frame is computed and compared by Mathematical Equation 4 as follows:
|P_i(k+1) − P_i(k)| < T_1, i = 1, 2, . . . , 30 <Mathematical Equation 4>
[0073] Wherein, T_1 is a first constant value.
[0074] If a point of intersection 9, P_i, fits Mathematical Equation 4, a computing value is accumulated; if the computing value is greater than the second constant value, an error threshold T_e is computed. In this embodiment, the second constant value is set to 2, as shown in Mathematical Equation 5:
[0075] Wherein, α is an error magnification constant;
[0076] If the distance between the points of intersection 9 at corresponding positions of the previous and current frames is greater than the error threshold (that is, matches Mathematical Equation 6), the erroneously reconstructed court line 1 of the current frame is replaced by the court line 1 reconstructed from the previous frame.
|P_i(k+1) − P_i(k)| > T_e <Mathematical Equation 6>
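The temporal error filter can be sketched as follows. The body of Mathematical Equation 5 is not reproduced in the source, so deriving T_e by scaling the mean stable displacement with the error magnification constant α is an assumption of this sketch, not the patented formula:

```python
import math

def reconstruction_is_bad(prev_pts, curr_pts, t1, alpha=2.0, second_const=2):
    """Flag a frame whose reconstructed intersections jumped too far.

    Counts intersections whose displacement from the previous frame is
    below the first constant value T1 (Equation 4). When the count
    exceeds the second constant value, an error threshold T_e is
    derived; scaling the mean stable displacement by alpha is an
    ASSUMED stand-in for Equation 5. Any point moving beyond T_e
    (Equation 6) marks the frame as bad, and the caller should reuse
    the previous frame's court lines.
    """
    dists = [math.dist(p, c) for p, c in zip(prev_pts, curr_pts)]
    stable = [d for d in dists if d < t1]          # Equation 4
    if len(stable) <= second_const:
        return True                                 # too few stable points
    te = alpha * (sum(stable) / len(stable))        # assumed Equation 5
    return any(d > te for d in dists)               # Equation 6
```

In a real tennis broadcast the lists would hold the 30 intersection points of consecutive frames; a single point leaping while the rest stay put is the signature of a reconstruction error.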
[0077] In summation, this embodiment as shown in
[0078] In this embodiment, the accuracy of reconstructing the court line 1 before performing the error filtering step is approximately 98.4% to 99.7%, and the accuracy of reconstructing the court line 1 after performing the error filtering step reaches 100% (such statistics are obtained from ten thousand videos of open competitions). Obviously, the present invention can locate the point of intersection 9 in the court image accurately to facilitate the reconstruction of the court line 1.
[0079] While this disclosure has been described by means of specific embodiments, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope and spirit of this disclosure set forth in the claims.