IMAGE STITCHING METHOD
20220414825 · 2022-12-29
CPC Classification
G06T3/4038 (PHYSICS)
Abstract
An image stitching method is proposed to include: A) acquiring a plurality of segment images for a target scene, each of the segment images containing a part of the target scene; B) for two adjacent segment images, which are two of the segment images that have overlapping fields of view, comparing the two adjacent segment images to determine a stitching position for the two adjacent segment images from a common part of the overlapping fields of view; and C) stitching the two adjacent segment images together based on the stitching position thus determined.
Claims
1. An image stitching method, comprising steps of: A) acquiring a plurality of segment images for a target scene, each of the segment images containing a part of the target scene; B) for two adjacent segment images, which are two of the segment images that have overlapping fields of view, comparing the two adjacent segment images to determine a stitching position for the two adjacent segment images from a common part of the overlapping fields of view; and C) stitching the two adjacent segment images together based on the stitching position thus determined.
2. The image stitching method of claim 1, wherein the segment images are captured line by line in sequence along a first direction and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); and wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; said image stitching method further comprising a step of D) for each of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing steps B) and C) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to stitch the first to N.sup.th images together in the second direction to form a stitch image for said each of the first to M.sup.th groups.
3. The image stitching method of claim 2, further comprising steps of: E) for a specific value of i and for each value of m, performing step B) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain the stitching position for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; and F) for the specific value of i and for each value of m, stitching the stitch images of the m.sup.th group and the (m+1).sup.th group together in the first direction based on the stitching position obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to obtain a full image of the target scene.
4. The image stitching method of claim 1, wherein the segment images are classified into first to N.sup.th groups, where N is a positive integer greater than one; wherein each of the first to N.sup.th groups includes M number of the segment images, which are referred to as first to M.sup.th images and which are captured one by one in sequence along a first direction, where M is a positive integer greater than one; wherein the first to N.sup.th groups are captured line by line in sequence along a second direction transverse to the first direction, and the segment images are classified into the first to N.sup.th groups according to an order in which the segment images are captured; wherein, for each of the first to N.sup.th groups, an m.sup.th image and an (m+1).sup.th image have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1); and wherein a j.sup.th image of an n.sup.th group of the segment images and a j.sup.th image of an (n+1).sup.th group of the segment images have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1), and j is a variable that takes a positive integer value ranging from one to M; said image stitching method further comprising steps of: D) for each of the first to N.sup.th groups, after each of the first to M.sup.th images is captured, rotating said each of the first to M.sup.th images by 90 degrees in a rotational direction, so as to obtain rotated first to M.sup.th images; E) for each of the first to N.sup.th groups and for each value of m, after the rotated m.sup.th image and the rotated (m+1).sup.th image are obtained, performing steps B) and C) on the rotated m.sup.th image and the rotated (m+1).sup.th image that serve as the two adjacent segment images, so as to stitch the rotated first to M.sup.th images together in the second direction to form a stitch image for said each of the first to N.sup.th groups.
5. The image stitching method of claim 4, further comprising steps of: F) for a specific value of j and for each value of n, performing step B) on the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain the stitching position for the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images; and G) for the specific value of j and for each value of n, stitching the stitch images of the n.sup.th group and the (n+1).sup.th group together in the first direction based on the stitching position obtained for the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images, so as to obtain a full image of the target scene.
6. The image stitching method of claim 1, wherein the segment images are captured line by line in sequence along a first direction, and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; and wherein, for each of the first to M.sup.th groups of the segment images, the common parts of the overlapping fields of view of the n.sup.th image and the (n+1).sup.th image for different values of n have a same size; said image stitching method further comprising a step of: D) for each of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing step B) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to obtain a relative stitching position for the n.sup.th image and the (n+1).sup.th image; E) for a specific value of i and for each value of m, performing step B) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a relative stitching position for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group; F) for the specific value of i, correcting the relative stitching positions obtained for the segment images based on a reference segment image that is one of the i.sup.th images of the first to M.sup.th groups of the segment images, so as to obtain, for each of the segment images, an absolute stitching position relative to the reference segment image; G) for each of the first to M.sup.th groups of the segment images and for each value of n, performing step C) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images based on the absolute stitching positions of the n.sup.th image and the (n+1).sup.th image, and, for each value of i and for each value of m, performing step C) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images based on the absolute stitching positions of the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to stitch the segment images together to form a full image of the target scene.
7. The image stitching method of claim 1, wherein the segment images are captured line by line in sequence along a first direction and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; and wherein, for each of the first to M.sup.th groups of the segment images, the common parts of the overlapping fields of view of the n.sup.th image and the (n+1).sup.th image for different values of n have a same size; said image stitching method further comprising a step of: D) for a specific one of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing step B) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to obtain a relative stitching position for the n.sup.th image and the (n+1).sup.th image; E) for a specific value of i and for each value of m, performing step B) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a relative stitching position for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group; F) for the specific value of i, correcting, based on a reference segment image that is the i.sup.th image of the specific one of the first to M.sup.th groups, the relative stitching positions obtained for the first to N.sup.th images of the specific one of the first to M.sup.th groups, and the relative stitching positions obtained for the i.sup.th images of the first to M.sup.th groups of the segment images, so as to obtain, for each of the first to N.sup.th images of the specific one of the first to M.sup.th groups and the i.sup.th images of the first to M.sup.th groups of the segment images, an absolute stitching position relative to the reference segment image; G) determining, for the specific value of i, for each value of a variable k, which takes a positive integer value ranging from one to N except for said specific value of i, and for each value of j, which is a variable that takes a positive integer value ranging from one to M, an absolute stitching position relative to the reference segment image for a k.sup.th image of a j.sup.th group of the segment images based on the k.sup.th image of the specific one of the first to M.sup.th groups and the i.sup.th image of the j.sup.th group of the segment images, where the j.sup.th group is different from the specific one of the first to M.sup.th groups; and H) for each of the first to M.sup.th groups of the segment images and for each value of n, performing step C) on the n.sup.th
image and the (n+1).sup.th image that serve as the two adjacent segment images based on the absolute stitching positions of the n.sup.th image and the (n+1).sup.th image, and, for each value of i and for each value of m, performing step C) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images based on the absolute stitching positions of the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to stitch the segment images together to form a full image of the target scene.
8. The image stitching method of claim 1, wherein step B) includes sub-steps of: B-1) obtaining a convolution kernel from one of the two adjacent segment images, and defining a convolution region in the other one of the two adjacent segment images, wherein the convolution kernel includes, at least in part, data of the common part of the overlapping fields of view, and the convolution region includes, at least in part, data of the common part of the overlapping fields of view; and B-2) using the convolution kernel to perform convolution on the convolution region to obtain a plurality of convolution scores for different sections of the convolution region; and step C) includes stitching the two adjacent segment images together based on the convolution scores.
9. The image stitching method of claim 8, wherein the segment images are captured line by line in sequence along a first direction, and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); and wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; said image stitching method further comprising a step of D) for each of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing steps B) and C) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to stitch the first to N.sup.th images together in the second direction to form a stitch image for said each of the first to M.sup.th groups.
10. The image stitching method of claim 9, further comprising steps of: E) for a specific value of i and for each value of m, performing sub-steps B-1) and B-2) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; and F) for the specific value of i and for each value of m, stitching the stitch images of the m.sup.th group and the (m+1).sup.th group together in the first direction based on the convolution scores obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to obtain a full image of the target scene.
11. The image stitching method of claim 9, wherein, for each of the first to M.sup.th groups of the segment images, the common parts of the overlapping fields of view of the n.sup.th image and the (n+1).sup.th image vary in size for different values of n; and wherein, in step D), sub-steps B-1) and B-2) are repeatedly performed on the n.sup.th image and the (n+1).sup.th image of said each of the first to M.sup.th groups, and, for each of the repetitions of sub-steps B-1) and B-2), at least one of the convolution kernel or the convolution region is different in size from that of another repetition.
12. The image stitching method of claim 9, wherein step D) further includes, before step C), normalizing the convolution scores obtained in each of the repetitions of sub-steps B-1) and B-2) based on a size of the convolution kernel used in the repetition; and wherein the stitching in step C) is performed based on the convolution scores thus normalized for all of the repetitions of sub-steps B-1) and B-2).
13. The image stitching method of claim 8, wherein the segment images are classified into first to N.sup.th groups, where N is a positive integer greater than one; wherein each of the first to N.sup.th groups includes M number of the segment images, which are referred to as first to M.sup.th images and which are captured one by one in sequence along a first direction, where M is a positive integer greater than one; wherein the segment images are captured line by line in sequence along a second direction transverse to the first direction, and are classified into the first to N.sup.th groups according to an order in which the segment images are captured; wherein, for each of the first to N.sup.th groups, an m.sup.th image and an (m+1).sup.th image have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1); and wherein a j.sup.th image of an n.sup.th group of the segment images and a j.sup.th image of an (n+1).sup.th group of the segment images have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1), and j is a variable that takes a positive integer value ranging from one to M; said image stitching method further comprising steps of: D) for each of the first to N.sup.th groups, after each of the first to M.sup.th images is captured, rotating said each of the first to M.sup.th images by 90 degrees in a rotational direction, so as to obtain rotated first to M.sup.th images; E) for each of the first to N.sup.th groups and for each value of m, after the rotated m.sup.th image and the rotated (m+1).sup.th image are obtained, performing steps B) and C) on the rotated m.sup.th image and the rotated (m+1).sup.th image that serve as the two adjacent segment images, so as to stitch the rotated first to M.sup.th images together in the second direction to form a stitch image for said each of the first to N.sup.th groups.
14. The image stitching method of claim 13, further comprising steps of: F) for a specific value of j and for each value of n, performing sub-steps B-1) and B-2) on the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images; and G) for the specific value of j and for each value of n, stitching the stitch images of the n.sup.th group and the (n+1).sup.th group together in the first direction based on the convolution scores obtained for the j.sup.th images of the n.sup.th group and the (n+1).sup.th group of the segment images, so as to obtain a full image of the target scene.
15. The image stitching method of claim 8, wherein the segment images are captured line by line in sequence along a first direction, and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; and wherein, for each of the first to M.sup.th groups of the segment images, the common parts of the overlapping fields of view of the n.sup.th image and the (n+1).sup.th image for different values of n have a same size; said image stitching method further comprising a step of: D) for each of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing sub-steps B-1) and B-2) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the n.sup.th image and the (n+1).sup.th image of said each of the first to M.sup.th groups; E) for each of the first to M.sup.th groups and for each value of n, determining relative stitching coordinates for the n.sup.th image and the (n+1).sup.th image based on the convolution scores obtained for the n.sup.th image and the (n+1).sup.th image; F) for a specific value of i and for each value of m, performing sub-steps B-1) and B-2) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; G) for the specific value of i and for each value of m, determining relative stitching coordinates for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images based on the convolution scores obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; H) for the specific value of i, correcting the relative stitching coordinates obtained for the segment images based on a reference segment image that is one of the i.sup.th images of the first to M.sup.th groups of the segment images, so as to obtain, for each of the segment images, a stitching coordinate set relative to the reference segment image; and I) for each of the first to M.sup.th groups of the segment images and for each value of n, performing step C) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images based on the stitching coordinate sets of the n.sup.th image and the (n+1).sup.th image, and, for each value of i and for each value of m, performing step C) on the i.sup.th images of the m.sup.th group and the
(m+1).sup.th group of the segment images that serve as the two adjacent segment images based on the stitching coordinate sets of the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to stitch the segment images together to form a full image of the target scene.
16. The image stitching method of claim 8, wherein the segment images are captured line by line in sequence along a first direction, and are classified into first to M.sup.th groups according to an order in which the segment images are captured, where M is a positive integer greater than one; wherein each of the first to M.sup.th groups includes N number of the segment images, which are referred to as first to N.sup.th images and which are captured one by one in sequence along a second direction transverse to the first direction, where N is a positive integer greater than one; wherein, for each of the first to M.sup.th groups, an n.sup.th image and an (n+1).sup.th image have overlapping fields of view, where n is a variable that takes a positive integer value ranging from one to (N−1); wherein an i.sup.th image of an m.sup.th group of the segment images and an i.sup.th image of an (m+1).sup.th group of the segment images have overlapping fields of view, where m is a variable that takes a positive integer value ranging from one to (M−1), and i is a variable that takes a positive integer value ranging from one to N; and wherein, for each of the first to M.sup.th groups of the segment images, the common parts of the overlapping fields of view of the n.sup.th image and the (n+1).sup.th image for different values of n have a same size; said image stitching method further comprising a step of: D) for a specific one of the first to M.sup.th groups and for each value of n, after the n.sup.th image and the (n+1).sup.th image are captured, performing sub-steps B-1) and B-2) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the n.sup.th image and the (n+1).sup.th image of the specific one of the first to M.sup.th groups; E) for the specific one of the first to M.sup.th groups and for each value of n, determining relative stitching coordinates for the n.sup.th image and the (n+1).sup.th image based on the convolution scores obtained for the n.sup.th image and the (n+1).sup.th image; F) for a specific value of i and for each value of m, performing sub-steps B-1) and B-2) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; G) for the specific value of i and for each value of m, determining relative stitching coordinates for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images based on the convolution scores obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images; H) for the specific value of i, correcting, based on a reference segment image that is the i.sup.th image of the specific one of the first to M.sup.th groups, the relative stitching coordinates obtained for the first to N.sup.th images of the specific one of the first to M.sup.th groups, and the relative stitching coordinates obtained for the i.sup.th images of the first to M.sup.th groups of the segment images, so as to obtain, for each of the first to N.sup.th images of the specific one of the first to M.sup.th groups and the i.sup.th images of the first to M.sup.th groups of the segment images, a stitching coordinate set relative to the reference segment image; I) determining, for each value of a variable k, which takes a positive
integer value ranging from one to N except for said specific value of i, and for each value of a variable j, which takes a positive integer value ranging from one to M, a stitching coordinate set relative to the reference segment image for a k.sup.th image of a j.sup.th group of the segment images based on the k.sup.th image of the specific one of the first to M.sup.th groups and the i.sup.th image of the j.sup.th group of the segment images, where the j.sup.th group is different from the specific one of the first to M.sup.th groups; and J) for each of the first to M.sup.th groups of the segment images and for each value of n, performing step C) on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images based on the stitching coordinate sets of the n.sup.th image and the (n+1).sup.th image, and, for each value of i and for each value of m, performing step C) on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images based on the stitching coordinate sets of the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to stitch the segment images together to form a full image of the target scene.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Other features and advantages of the disclosure will become apparent in the following detailed description of the embodiment(s) with reference to the accompanying drawings, of which:
DETAILED DESCRIPTION
[0020] Before the disclosure is described in greater detail, it should be noted that where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar characteristics.
[0021] Referring to
[0022] The moving mechanism 2 is controlled by the computer device 3 to move the camera device 1 to capture segment images of a target scene 100. In the illustrative embodiment, the target scene 100 is a planar scene such as a semiconductor circuit formed on a wafer. In other embodiments, the target scene 100 may be, for example, a wide view or a 360-degree panorama of a landscape, and this disclosure is not limited in this respect.
[0023] Reference is further made to
[0024]
[0025] In step S31, the computer device 3 obtains a convolution kernel from one of the two adjacent segment images, and defines a convolution region in the other one of the two adjacent segment images. The convolution kernel includes, at least in part, data of a common part of the overlapping fields of view, and the convolution region includes, at least in part, data of the common part of the overlapping fields of view. Usually, the convolution region is greater than the convolution kernel in size.
[0026]
[0027] Referring to
[0028] Briefly, in steps S31 and S32, the computer device 3 compares the two adjacent segment images to determine a stitching position (e.g., the stitching section) for the two adjacent segment images from the common part of the overlapping fields of view of the two adjacent segment images.
[0029] In step S33, the computer device 3 stitches the two adjacent segment images together based on the stitching position determined in step S32 by, for example but not limited to, aligning a section of the right segment image from which the convolution kernel is obtained with the stitching section that is determined based on the convolution scores.
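For illustration only, the comparison and stitching of steps S31 to S33 may be sketched as follows. The sketch assumes equally sized grayscale segment images stored as NumPy arrays and a horizontally adjacent pair, takes the convolution kernel from the leading strip of the right image and the convolution region from the trailing strip of the left image, and uses a plain sum-of-products score; the function names, strip widths, and scoring rule are assumptions for illustration rather than the only way to implement the disclosed steps.

```python
# Illustrative sketch of steps S31-S33 for one horizontally adjacent pair.
import numpy as np

def find_stitch_offset(left_img, right_img, kernel_width=100, region_width=300):
    """Return the column of left_img where the left edge of right_img should align."""
    # Step S31: kernel = leading strip of the right image (part of the common
    # field of view); convolution region = trailing strip of the left image.
    kernel = right_img[:, :kernel_width].astype(np.float64)
    region = left_img[:, -region_width:].astype(np.float64)

    # Step S32: slide the kernel over the region and score every section.
    scores = [np.sum(region[:, x:x + kernel_width] * kernel)
              for x in range(region_width - kernel_width + 1)]
    best = int(np.argmax(scores))                 # the stitching section

    # Convert the section index back to a column index in left_img.
    return left_img.shape[1] - region_width + best

def stitch_pair(left_img, right_img, kernel_width=100, region_width=300):
    # Step S33: align the kernel strip of the right image with the best section.
    offset = find_stitch_offset(left_img, right_img, kernel_width, region_width)
    return np.hstack([left_img[:, :offset], right_img])
```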
[0030] The flow introduced in
[0031]
[0032] In the first embodiment, for each of the first to M.sup.th groups (corresponding to the first to M.sup.th rows in
[0033] However, for each of the first to M.sup.th groups, since the first to N.sup.th images are captured using continuous shooting while the camera device 1 is moving, the common parts of the overlapping fields of view may vary in size for different pairs of adjacent segment images (e.g., the n.sup.th image and the (n+1).sup.th image in the same row) because of mechanical errors and/or tolerances. Accordingly, multiple convolution kernels of different sizes and multiple convolution regions of different sizes may be obtained and defined for use in the following steps. The convolution kernels may be obtained to have sizes that correspond to different predetermined kernel ratios of a size of the segment images. For example, assuming that the segment images have a resolution of 1000×1000 and the predetermined kernel ratios are 10%, 20% and 40% of a side length of the segment images, the convolution kernels could be 800×100, 800×200 and 800×400 in size (noting that the heights of the convolution kernels may be predetermined by users, and can be different for different convolution kernels in some embodiments). Similarly, the convolution regions may be defined to have sizes that correspond to different predetermined region ratios of a size of the segment images. In the above example, assuming that the predetermined region ratios for the convolution regions are 80%, 90% and 100% of the side length of the segment images, the convolution regions could be 1000×800, 1000×900 and 1000×1000 (i.e., the entire segment image) in size (noting that the heights of the convolution regions may be predetermined by users, and can be different for different convolution regions in some embodiments). Then, for each pair of adjacent segment images, convolution may be performed several times using different region-kernel combinations constituted by the convolution regions of different sizes and the convolution kernels of different sizes. In other words, steps S31 and S32 may be repeatedly performed on each pair of adjacent segment images (e.g., the n.sup.th image and the (n+1).sup.th image of the same row), and, for each of the repetitions of steps S31 and S32, at least one of the convolution kernel or the convolution region is different in size from that of another repetition.
[0034] For each combination of the convolution region and the convolution kernel (i.e., for each region-kernel combination), multiple convolution scores may be obtained for multiple sections of the convolution region used in the combination. However, a larger convolution kernel may lead to higher convolution scores. Therefore, in step S61, the computer device 3 may normalize the convolution scores obtained in each of the repetitions of steps S31 and S32 based on the size of the convolution kernel used in the repetition, so as to eliminate the influence of the size of the convolution kernel. Then, the computer device 3 performs step S33 based on the convolution scores thus normalized for all of the repetitions of steps S31 and S32. In one implementation, the computer device 3 may take the section that corresponds to the highest normalized convolution score as the stitching section.
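A hedged sketch of the repeated convolution described in paragraphs [0033] and [0034] follows, assuming NumPy arrays and the example 1000×1000 segment size; the specific ratios, the fixed kernel height, and the choice to normalize each score by the number of pixels in the kernel are assumptions for illustration.

```python
# Several region-kernel size combinations are tried; every score is normalized
# by the kernel area before comparison so larger kernels gain no advantage.
import itertools
import numpy as np

def best_offset_multi_scale(left_img, right_img,
                            kernel_ratios=(0.1, 0.2, 0.4),
                            region_ratios=(0.8, 0.9, 1.0),
                            kernel_height=800):
    side = right_img.shape[1]                     # e.g. 1000 for a 1000x1000 segment
    best_score, best_offset = -np.inf, None
    for kr, rr in itertools.product(kernel_ratios, region_ratios):
        kw = int(side * kr)                       # kernel widths, e.g. 100 / 200 / 400
        rw = int(side * rr)                       # region widths, e.g. 800 / 900 / 1000
        kernel = right_img[:kernel_height, :kw].astype(np.float64)
        region = left_img[:kernel_height, -rw:].astype(np.float64)
        for x in range(rw - kw + 1):
            # Dividing by kernel.size removes the influence of the kernel size.
            score = np.sum(region[:, x:x + kw] * kernel) / kernel.size
            if score > best_score:
                best_score, best_offset = score, left_img.shape[1] - rw + x
    return best_offset
```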
[0035] In step S62, a plurality of convolution scores are obtained for each pair of segment images that have the same ordinal number but are in two consecutive groups (put simply, a pair of segment images that are adjacent in a specific column from the perspective of the target scene 100, such as the first images of the first and second rows in
[0036] In step S63, the stitch images of the groups are combined together in the Y-direction based on the convolution scores to form a full image of the target scene 100. Specifically, for the case depicted in
[0037] It is noted that, in a case that requires real time processing, for any value of m, steps S62 and S63 may be performed once the stitch images of the m.sup.th row and the (m+1).sup.th row (i.e., S.sub.m, S.sub.(m+1)) are obtained. In a case that does not require real time processing, steps S62 and S63 may be performed after all of the stitch images S.sub.1 to S.sub.M are obtained. In some cases, steps S61-S63 may be performed after all of the segment images are captured, and this disclosure is not limited in this respect.
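Putting steps S61 to S63 together, a simplified end-to-end sketch of the first embodiment might look as follows. It reuses the hypothetical stitch_pair() and find_stitch_offset() helpers from the sketch above, finds the vertical stitching position from the first segment images of consecutive rows, and crops the row stitch images to a common width before stacking; these simplifications are assumptions for illustration, and real code could also run as the images arrive rather than after the fact.

```python
# Simplified sketch of the first embodiment (steps S61-S63).
import numpy as np

def stitch_all(groups):
    # Step S61: stitch each group (row of segment images) left to right.
    rows = []
    for group in groups:
        stitch = group[0]
        for img in group[1:]:
            stitch = stitch_pair(stitch, img)
        rows.append(stitch)

    # Steps S62-S63: find the vertical stitching position from the first
    # segment images of consecutive rows, then apply it to the stitch images.
    full = rows[0]
    for m in range(1, len(groups)):
        seg_h = groups[m - 1][0].shape[0]
        offset = find_stitch_offset(groups[m - 1][0].T, groups[m][0].T)
        cut = full.shape[0] - seg_h + offset       # cut row within the running image
        width = min(full.shape[1], rows[m].shape[1])
        full = np.vstack([full[:cut, :width], rows[m][:, :width]])
    return full
```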
[0038] Referring to
[0039] In practice, without altering the flow literally described in
[0040] In order to fit the prescribed flow, the segment images that are captured column by column are rotated by 90 degrees, and the rotated segment images could be treated as if they were captured row by row, as illustrated in
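As a small illustration (assuming NumPy arrays), the rotation can be performed with np.rot90; whether the rotation is clockwise or counter-clockwise depends on the capture geometry, so the direction below is only an assumption.

```python
# Minimal illustration of rotating a column-captured group of segment images.
import numpy as np

def rotate_group(column_captured_images, clockwise=False):
    k = -1 if clockwise else 1            # np.rot90: k=1 counter-clockwise, k=-1 clockwise
    return [np.rot90(img, k) for img in column_captured_images]

# After the rotated images are stitched, the resulting rotated full image is
# rotated back in the opposite direction (cf. step S105), e.g.:
#   full_image = np.rot90(rotated_full_image, -k)
```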
[0041] In some cases where the prescribed flow is designed to combine the stitch images in the specific sequence from top to bottom, the computer device 3 may number the stitch images for the first to N.sup.th rows of the rotated segment images in
[0042]
[0043] In step S101, for each of the first to N.sup.th groups (corresponding to the first to N.sup.th columns in
[0044] In step S102, for each of the first to N.sup.th groups and for each value of m, once the rotated m.sup.th image and the rotated (m+1).sup.th image are obtained, steps S31 to S33 are performed on the rotated m.sup.th image and the rotated (m+1).sup.th image that serve as the two adjacent segment images (i.e., this operation is performed (M−1) times, each with the variable m being a corresponding integer (from 1 to M−1)), so as to stitch the rotated first to M.sup.th images together in the X-direction to form a stitch image for the corresponding group of the segment images (i.e., the corresponding row of the rotated segment images in
[0045] In step S103, for a specific value of j (e.g., j=1) and for each value of n, the computer device 3 performs steps S31 and S32 on the rotated j.sup.th image of the n.sup.th group and the rotated j.sup.th image of the (n+1).sup.th group (noting that the rotated j.sup.th images of the n.sup.th group and the (n+1).sup.th group are two adjacent segment images in the same column in
[0046] In step S104, for the specific value of j and for each value of n, the computer device 3 stitches the stitch image of the n.sup.th group (corresponding to the n.sup.th row in
[0047] In step S105, the computer device 3 rotates the rotated full image in another rotational direction (e.g., the clockwise direction) by 90 degrees, so as to obtain a full image of the target scene 100.
[0048] In some cases, steps S101-S105 may be performed after all of the segment images are captured when real time operation is not required.
[0049] According to the flow in
[0050] Referring to
[0051] In step S111, for each of the first to M.sup.th groups and for each value of n, once the n.sup.th image and the (n+1).sup.th image are captured, the computer device 3 performs steps S31 and S32 on the n.sup.th image and the (n+1).sup.th image (i.e., two adjacent segment images in the same row in
[0052] In step S112, for each of the first to M.sup.th groups and for each value of n, the computer device 3 determines relative stitching coordinates (a relative stitching position) for the n.sup.th image and the (n+1).sup.th image based on the convolution scores obtained for the n.sup.th image and the (n+1).sup.th image.
[0053] In step S113, for a specific value of i (e.g., i=1) and for each value of m, the computer device 3 performs steps S31 and S32 on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images (i.e., two adjacent segment images of the same column in
[0054] In step S114, for the specific value of i and for each value of m, the computer device 3 determines relative stitching coordinates (a relative stitching position) for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images based on the convolution scores obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images.
[0055] In step S115, the computer device 3 corrects the relative stitching coordinates obtained for the segment images based on a reference segment image, so as to obtain, for each of the segment images, a stitching coordinate set relative to the reference segment image, where the stitching coordinate set serves as an absolute stitching position. The stitching coordinate sets (absolute stitching positions) obtained in step S115 include those corrected from the relative stitching coordinates, and a stitching coordinate set that is predefined for the reference segment image. The reference segment image is one of the i.sup.th images of the first to M.sup.th groups of the segment images. Referring to
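One possible way to realize the correction of step S115 is to accumulate the relative offsets outward from the reference segment image. The sketch below assumes the reference segment image is the first image of the first group and sits at (0, 0), that row_offsets[m][n] holds the relative (dx, dy) of an image with respect to its left neighbour in row m, and that col_offsets[m] holds the relative offset between the first images of rows m and m+1; the data layout and names are assumptions, not taken from the disclosure.

```python
# Hedged sketch of step S115: accumulate relative offsets into absolute
# stitching coordinate sets relative to the reference segment image.
def absolute_positions(row_offsets, col_offsets):
    num_rows = len(row_offsets)
    num_cols = len(row_offsets[0]) + 1
    positions = [[(0, 0)] * num_cols for _ in range(num_rows)]
    x = y = 0
    for m in range(num_rows):
        if m > 0:                          # walk down the reference column
            dx, dy = col_offsets[m - 1]
            x, y = x + dx, y + dy
        positions[m][0] = (x, y)
        rx, ry = x, y
        for n, (dx, dy) in enumerate(row_offsets[m], start=1):
            rx, ry = rx + dx, ry + dy      # walk along the row
            positions[m][n] = (rx, ry)
    return positions
```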
[0056] In step S116, for each of the first to M.sup.th groups of the segment images and for each value of n, the computer device 3 performs step S33 on the n.sup.th image and the (n+1).sup.th image (i.e., two adjacent segment images in the same row in
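Step S116 can then be pictured as pasting every segment onto a canvas at its stitching coordinate set. The sketch below assumes equally sized grayscale segments, non-negative (x, y) coordinates (reference image at the origin), and simple overwriting of the overlap; any blending of the overlap is not shown and is left as an assumption.

```python
# Minimal placement sketch for step S116.
import numpy as np

def compose_full_image(segments, positions):
    h, w = segments[0][0].shape            # all segments assumed the same size
    xs = [p[0] for row in positions for p in row]
    ys = [p[1] for row in positions for p in row]
    canvas = np.zeros((max(ys) + h, max(xs) + w), dtype=segments[0][0].dtype)
    for row_imgs, row_pos in zip(segments, positions):
        for img, (x, y) in zip(row_imgs, row_pos):
            canvas[y:y + h, x:x + w] = img
    return canvas
```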
[0057] In some cases, steps S111-S116 may be performed after all of the segment images are captured when real time operation is not required.
[0058] In the case where the camera device 1 captures the segment images along the route as shown in
[0059] It is noted that, in the second embodiment, generation of the stitch images (see step S61 in
[0060] Referring to
[0061] In step S131, for a specific one of the first to M.sup.th groups and for each value of n, once the n.sup.th image and the (n+1).sup.th image are captured, the computer device 3 performs steps S31 and S32 on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the n.sup.th image and the (n+1).sup.th image of the specific one of the first to M.sup.th groups.
[0062] In step S132, for the specific one of the first to M.sup.th groups and for each value of n, the computer device 3 determines relative stitching coordinates (a relative stitching position) for the n.sup.th image and the (n+1).sup.th image based on the convolution scores obtained for the n.sup.th image and the (n+1).sup.th image. As an example, the computer device 3 may determine the relative stitching coordinates for each pair of adjacent segment images of the first row in steps S131 and S132 (i.e., the specific one of the first to M.sup.th groups is the first group, which corresponds to the first row in
[0063] In step S133, for a specific value of i and for each value of m, the computer device 3 performs steps S31 and S32 on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images, so as to obtain a plurality of convolution scores for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images.
[0064] In step S134, for the specific value of i and for each value of m, the computer device 3 determines relative stitching coordinates for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images based on the convolution scores obtained for the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images. As an example, the computer device 3 may determine the relative stitching coordinates for each pair of adjacent segment images of the first column (i.e., i=1) in
[0065] In step S135, for the specific value of i, based on a reference segment image that is the i.sup.th image of the specific one of the first to M.sup.th groups, the computer device 3 corrects the relative stitching coordinates obtained for the first to N.sup.th images of the specific one of the first to M.sup.th groups, and the relative stitching coordinates obtained for the i.sup.th images of the first to M.sup.th groups of the segment images, so as to obtain, for each of the first to N.sup.th images of the specific one of the first to M.sup.th groups and the i.sup.th images of the first to M.sup.th groups of the segment images, a stitching coordinate set (absolute stitching position) relative to the reference segment image. Taking
[0066] In step S136, for the specific value of i (a specific positive integer selected from one to N), for each value of a variable k, which takes a positive integer value ranging from one to N except for said specific value of i, and for each value of j (recall that j is a variable that takes a positive integer value ranging from one to M), the computer device 3 determines a stitching coordinate set relative to the reference segment image for a k.sup.th image of the j.sup.th group of the segment images based on the k.sup.th image of the specific one of the first to M.sup.th groups and the i.sup.th image of the j.sup.th group of the segment images, where the j.sup.th group is different from the specific one of the first to M.sup.th groups. As an example, assuming that the reference segment image is the first image (the specific value of i is 1) of the first group (corresponding to the first row in
[0067] In step S137, for each of the first to M.sup.th groups of the segment images and for each value of n, the computer device 3 performs step S33 on the n.sup.th image and the (n+1).sup.th image that serve as the two adjacent segment images based on the stitching coordinate sets of the n.sup.th image and the (n+1).sup.th image, and for each value of i and for each value of m, the computer device 3 performs step S33 on the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images that serve as the two adjacent segment images based on the stitching coordinate sets of the i.sup.th images of the m.sup.th group and the (m+1).sup.th group of the segment images, so as to stitch the segment images together to form a full image of the target scene 100.
[0068] In this variation, convolution is performed on only one row and one column (from the perspective of the target scene 100) of the segment images, and the stitching coordinate sets of the other segment images can be acquired using simple elementary arithmetic (e.g., addition and subtraction), so the computation load is reduced.
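The elementary arithmetic described above can be expressed compactly: the coordinate set of the k.sup.th image of the j.sup.th group is the coordinate set of the k.sup.th image of the reference row plus the coordinate set of the i.sup.th image of the j.sup.th group, minus that of the reference segment image. A hedged sketch follows, assuming the specific group is the first group and the specific value of i is 1; names and indexing are assumptions.

```python
# ref_row holds the coordinate sets of the images of the reference row;
# ref_col holds those of the first images of every group, so that
# ref_row[0] == ref_col[0] == the reference segment image.
def derive_positions(ref_row, ref_col):
    base_x, base_y = ref_row[0]
    return [[(rx + cx - base_x, ry + cy - base_y) for rx, ry in ref_row]
            for cx, cy in ref_col]
```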
[0069] In summary, an image stitching method is proposed and described with reference to several embodiments. In the first embodiment, the segment images in the same line (row or column) are stitched together to form multiple stitch images of the lines, and the stitch images are stitched together to form the full image. As an example, convolution is performed to determine a stitching position for two adjacent segment images. In the second embodiment, the stitching coordinate sets of the segment images are calculated first, and the segment images are stitched together based on the stitching coordinate sets at the end, so as to save memory capacity. In a variation of the second embodiment, the stitching coordinate sets are calculated only for the segment images of one row and one column, so as to reduce computation load. Furthermore, because the user may define the convolution region, parts of the segment image that the user deems cannot possibly contain the stitching position can be excluded from the convolution region, thereby reducing the chances of misjudging the stitching position; the embodiments of this disclosure are therefore suitable for a target scene that has duplicated features.
[0070] In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiment(s). It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. It should also be appreciated that reference throughout this specification to “one embodiment,” “an embodiment,” an embodiment with an indication of an ordinal number and so forth means that a particular feature, structure, or characteristic may be included in the practice of the disclosure. It should be further appreciated that in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects, and that one or more features or specific details from one embodiment may be practiced together with one or more features or specific details from another embodiment, where appropriate, in the practice of the disclosure.
[0071] While the disclosure has been described in connection with what is (are) considered the exemplary embodiment(s), it is understood that this disclosure is not limited to the disclosed embodiment(s) but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.