METHOD FOR CALIBRATING IMAGE DISTORTION, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
20220392027 · 2022-12-08
Abstract
A method for calibrating image distortion, an apparatus, an electronic device, and a computer readable storage medium are disclosed. The method for calibrating image distortion includes: obtaining an original image captured by a camera; calculating a deformation degree of foreground objects when the original image includes the foreground objects; and performing a distortion calibration and a spherical projection on the original image to obtain a target image when the deformation degree of the foreground objects is greater than a predetermined threshold. The method for calibrating image distortion provided by the disclosure can realize fast distortion calibration of ultra-wide-angle images with low computational complexity and obtain a better calibration effect.
Claims
1. A method for calibrating image distortion, comprising: obtaining an original image captured by a camera; calculating a deformation degree of foreground objects when the original image includes the foreground objects; and performing a distortion calibration and a spherical projection on the original image to obtain a target image when the deformation degree of the foreground objects is greater than a predetermined threshold.
2. The method of claim 1, further comprising: performing the distortion calibration on the original image to obtain the target image when the original image does not include the foreground objects; or performing the distortion calibration on the original image to obtain a target image when the deformation degree of the foreground objects is not greater than the predetermined threshold.
3. The method of claim 1, wherein the step of calculating the deformation degree of the foreground objects when the original image includes foreground objects comprises: obtaining a foreground object border in the original image, a position parameter of the foreground object border, and a size parameter of the foreground object border; and calculating the deformation degree of the foreground object border based on the position parameter of the foreground object border and the size parameter of the foreground object border.
4. The method of claim 3, wherein the position parameter of the foreground object border comprises: a distance between the foreground object border and a center point of the original image in the original image; the size parameter of the foreground object border comprises: a width of the foreground object border and a height of the foreground object border, the deformation degree of the foreground object is calculated based on
S=w.sub.1×l.sub.1+w.sub.2×l.sub.2 wherein the S is the deformation degree of the foreground object, the l.sub.1 is the distance between the foreground object border and the center point of the original image in the original image, the l.sub.2 is a larger value of the width of the foreground object border and the height of the foreground object border, the w.sub.1 is a first weight value, and the w.sub.2 is a second weight value.
5. The method of claim 1, wherein the step of performing the distortion calibration and a spherical projection on the original image to obtain the target image when the deformation degree of the foreground objects is greater than a predetermined threshold comprises: calculating a corresponding relationship between pixel points of the target image and pixel points of the original image based on a spherical projection transformation formula and a distortion calibration transformation formula; and assigning pixel values of the pixel points of the original image to the pixel points of the target image corresponding to the pixel points of the original image to obtain pixel values of the pixel points in the target image.
6. The method of claim 5, wherein the step of calculating a corresponding relationship between pixel points of the target image and pixel points of the original image based on a spherical projection transformation formula and a distortion calibration transformation formula comprises: calculating coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image corresponding to coordinates (u.sub.i, v.sub.i) of the pixel points of the target image based on the spherical projection transformation formula, wherein the pixel points of the target image, the corresponding pixel points after performing distortion calibration on the original image, and a center point of the target image are on a same straight line; calculating coordinates (u.sub.i″, v.sub.i″) of the pixel points of the original image corresponding to coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image based on the distortion calibration transformation formula; wherein the spherical projection transformation formula is:
7. An image calibration apparatus, comprising: an image obtaining module, configured to obtain an original image captured by a camera; a deformation calculation module, configured to calculate a deformation degree of foreground objects when the original image includes the foreground objects; and a calibration calculation module, configured to perform a distortion calibration and a spherical projection on the original image when the deformation degree of the foreground objects is greater than a predetermined threshold to obtain a target image.
8. The image calibration apparatus of claim 7, wherein the calibration calculation module is further configured to: perform the distortion calibration on the original image to obtain the target image when the original image does not include the foreground objects; or perform the distortion calibration on the original image to obtain a target image when the deformation degree of the foreground object is not greater than the predetermined threshold.
9. The image calibration apparatus of claim 7, wherein the deformation calculation module is further configured to: obtain a foreground object border in the original image, a position parameter of the foreground object border, and a size parameter of the foreground object border; and calculate a deformation degree of the foreground object border based on the position parameter of the foreground object border and the size parameter of the foreground object border.
10. The image calibration apparatus of claim 9, wherein the position parameter of the foreground object border comprises: a distance between the foreground object border and a center point of the original image in the original image; the size parameter of the foreground object border comprises: a width of the foreground object border and a height of the foreground object border; the deformation degree of the foreground object is calculated based on
S=w.sub.1×l.sub.1+w.sub.2×l.sub.2 wherein the S is the deformation degree of the foreground object, the l.sub.1 is the distance between the foreground object border and the center point of the original image in the original image, the l.sub.2 is a larger value of the width of the foreground object border and the height of the foreground object border, the w.sub.1 is a first weight value, and the w.sub.2 is a second weight value.
11. The image calibration apparatus of claim 7, wherein the calibration calculation module comprises: a mapping calculation unit, configured to calculate a corresponding relationship between pixel points of the target image and pixel points of the original image based on a spherical projection transformation formula and a distortion calibration transformation formula; and a pixel assignment unit, configured to assign pixel values of the pixel points of the original image to the pixel points of the target image corresponding to the pixel points of the original image to obtain pixel values of the pixel points in the target image.
12. The image calibration apparatus of claim 11, wherein the mapping calculation unit is configured to: calculate coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image corresponding to coordinates (u.sub.i, v.sub.i) of the pixel points of the target image based on the spherical projection transformation formula, wherein the pixel points of the target image, the corresponding pixel points after performing distortion calibration on the original image, and a center point of the target image are on a same straight line; and calculate coordinates (u.sub.i″, v.sub.i″) of the pixel points of the original image corresponding to coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image based on the distortion calibration transformation formula; wherein the spherical projection transformation formula is:
13. An electronic device, comprising a memory and a processor, wherein the memory is connected to the processor; the memory stores a computer program; and the processor implements the method of claim 1 when executing the computer program.
14. A computer readable storage medium having stored therein a computer program, wherein the method of claim 1 is implemented when the computer program is executed by a processor.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0067] In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present disclosure and are not intended to limit the present disclosure.
[0068] The original image captured by the ultra-wide-angle camera module generally has image distortion.
[0069] In order to eliminate the distortion in the image, the original image may be subjected to distortion calibration by using the internal parameters of the ultra-wide-angle camera module, and the distortion calibrated image is shown in
[0070] In some embodiments, the original image may be distortion calibrated using a mesh point optimization method based on the least square method. However, this method involves a large amount of calculation, places high demands on the computing capability of the computing platform, and is time-consuming: it usually takes several seconds to complete the calibration. In other embodiments, a method based on face keypoint detection may be adopted, in which shape adjustment is performed on a face area when deformation of the face is detected. However, face keypoint detection may be erroneous, resulting in a poor image calibration effect.
[0071] The method for calibrating image distortion provided in the present disclosure can be applied to the electronic device 300 shown in
[0072] In an embodiment, a method for calibrating image distortion is provided, which can be applied to the electronic device 300 shown in
[0073] S420: obtaining an original image captured by a camera.
[0074] In this embodiment, the camera may be an ultra-wide-angle camera, and the lens in the ultra-wide-angle camera may be an ultra-wide-angle lens. In various embodiments of the present disclosure, the cameras may include various types of devices capable of capturing images, such as a camera, a camera module, etc.
[0075] The original image is an unprocessed image captured by a camera. In this embodiment, taking the method applied to the electronic device 300 as an example, the camera 301 of the electronic device 300 captures the original image in real time and transmits it to the processor of the electronic device 300, so that the electronic device 300 acquires the original image. In other embodiments, the original image may also be downloaded from the network or transmitted from other terminal devices to the electronic device 300, or the electronic device 300 may read the original image from its own memory.
[0076] S440: calculating a deformation degree of the foreground objects when the original image includes the foreground objects.
[0077] The original image may or may not include foreground objects. The foreground objects refer to target objects captured within the field of view of the camera, such as a human image, an animal, food, and the like. In the original image, the portion other than the foreground objects is the background. The background refers to content other than the target objects photographed within the field of view of the camera, such as a distant mountain, the sky, a building, or an indoor or outdoor environment. Compared with the foreground objects, the background is generally farther from the camera in the object space; conversely, the foreground objects are generally closer to the camera.
[0078] The deformation degree of the foreground objects refers to the deformation degree of the form of the foreground objects presented in the original image relative to the original form of the foreground object (for example, the form presented by photographing the foreground object with a standard lens).
[0079] S460: performing distortion calibration and spherical projection on the original image to obtain a target image when the deformation degree of the foreground objects is greater than a predetermined threshold.
[0080] The distortion calibration refers to calibrating the distortion of the captured image caused by camera lens distortion. The distortion mainly includes radial distortion and tangential distortion. The internal parameters of the camera of the camera module can be used to correct the distortion of the original image. The internal parameters are inherent parameters of the camera: once the manufacture of the camera is completed, the internal parameters are determined. The internal parameters may be obtained from the manufacturer or by calibrating the camera.
[0081] The camera may be calibrated by using a linear calibration method, a nonlinear optimization calibration method, the calibration method proposed by Zhengyou Zhang, or other common calibration methods; the calibration method is not limited in the present disclosure as long as the internal parameters of the camera can be acquired. After the internal parameters are acquired, the deformation of the captured original image caused by the radial distortion and tangential distortion of the camera lens may be calibrated according to these parameters. An existing distortion calibration technology may be used to perform distortion calibration on the original image, and the distortion calibration algorithm is not limited in this embodiment.
[0082] Spherical projection deforms an image to obtain the visual effect of projecting the planar image onto a spherical surface. It corrects the image by using a spherical perspective projection model and is a common image processing method.
[0083] In this step, the distortion calibration and the spherical projection are performed on all regions of the original image. For example, the distortion calibration and the spherical projection may be performed on all pixel points in the original image. In this way, the foreground objects and the background in the original image do not need to be distinguished, and the image calibration speed is accelerated. Further preferably,
[0084] In the above embodiment, when foreground objects are present in the original image and the deformation degree of the foreground objects is greater than the predetermined threshold, the distortion calibration and the spherical projection are performed on the original image, so as to avoid deformation of the foreground objects due to the distortion calibration, so that the calibration effect of the foreground objects in the target image is good and the imaging is beautiful and natural. In addition, since the processing method of distortion calibration and spherical projection requires less computation, the demand on the computing capability of the computing platform is low, and the target image can be previewed in real time.
[0085] For example, taking the method for calibrating image distortion applied to the electronic device 300 shown in
[0086] Referring to
[0087] S520: obtaining an original image captured by a camera.
[0088] Step S520 is the same as step S420 described above and is not described herein again.
[0089] S530: determining whether the foreground object is included in the original image.
[0090] Taking a human image as an example of the foreground objects, a face detection technology is applied to the original image to detect whether a face is included. The face detection technology is, for example, Adaboost+Haar detection, depth model detection, etc. If it is detected that the original image includes a face, it is determined that the original image includes a human image; otherwise, it is determined that the original image does not include a human image.
[0091] In other embodiments, the foreground objects may be other target objects, such as animals, foods, etc., and may be detected using corresponding neural network recognition techniques. It should be understood that the original image may or may not include one or more foreground objects.
[0092] When it is determined that the original image includes the foreground objects, the processing proceeds to S540; otherwise, the processing proceeds to S545.
[0093] S540: calculating the deformation degree of the foreground object.
[0094] In an embodiment, calculating the deformation degree of the foreground object may include the following steps.
[0095] A foreground object border in the original image, a position parameter of the foreground object border, and a size parameter of the foreground object border are obtained.
[0096] The deformation degree of the foreground object border is calculated based on the position parameter of the foreground object border and the size parameter of the foreground object border.
[0097] Taking a human image as an example of the foreground objects, the foreground object border may be a face border. Exemplarily, the face border may be obtained by a method based on deep learning. After the foreground object border is acquired, the coordinates of the pixel points of the foreground object border may be acquired, thereby obtaining the position parameter of the foreground object border and the size parameter of the foreground object border. It should be understood that when the original image includes multiple foreground objects, multiple foreground object borders corresponding to the multiple foreground objects are obtained respectively.
[0098] The coordinates of the pixel points refer to the coordinates of each pixel point in the image. For example, the coordinates of the pixel point at the leftmost upper corner of the image may be set as (0, 0), the coordinates of the pixel points adjacent to the right side of the pixel point at the leftmost upper corner are set as (1, 0), and the coordinates of the pixel points adjacent to the lower side of the pixel point at the leftmost upper corner are set as (0,1), and so on. It should be understood that the coordinates of the pixel points may also be set according to other rules, for example, the coordinates of the center point of the image may be set as (0, 0), etc.
[0099] In a preferred embodiment, referring to
[0100] The size parameter of the foreground object border 602 includes a width w of the foreground object border 602 and a height h of the foreground object border 602. It should be understood that the above size parameters may also be determined by the coordinates of the pixel points of the foreground object border 602. For example, the height h of the foreground object border 602 is obtained by subtracting the minimum value of the ordinate from the maximum value of the ordinate in the coordinates of the pixel points of the foreground object border 602. The width w of the foreground object border 602 is obtained by subtracting the minimum value of the abscissa from the maximum value of the abscissa in the coordinates of the pixel points of the foreground object border 602.
[0101] In an embodiment, the deformation degree of the foreground object is calculated based on
S=w.sub.1×l.sub.1+w.sub.2×l.sub.2
[0102] Where the S is the deformation degree of the foreground object. The l.sub.1 is the distance between the foreground object border and the center point of the original image in the original image. The l.sub.2 is a larger value of the width of the foreground object border and the height of the foreground object border. The w.sub.1 is a first weight value. And the w.sub.2 is a second weight value.
[0103] The w.sub.1 and the w.sub.2 reflect the respective contributions of the l.sub.1 and the l.sub.2 to the deformation degree. It should be understood that the values of the w.sub.1 and the w.sub.2 are associated with the value of the predetermined threshold and can be set according to the actual situation. In a preferred embodiment, the w.sub.2 may be greater than the w.sub.1. As shown in
[0104] When it is detected in step S530 that multiple foreground objects are included in the original image, the above formula can be applied to the foreground object border corresponding to the multiple foreground objects to calculate the deformation degree of the multiple foreground objects respectively.
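As an illustrative sketch of the calculation in paragraphs [0099] to [0104], the deformation degree S may be computed from a bounding border as follows. The weight values w.sub.1 = 0.4 and w.sub.2 = 0.6 and the border representation are hypothetical choices for illustration only; the disclosure leaves the weights to be set according to the actual situation (with w.sub.2 preferably greater than w.sub.1).

```python
import math

def deformation_degree(border, image_w, image_h, w1=0.4, w2=0.6):
    """Deformation score S = w1*l1 + w2*l2.

    border: (x_min, y_min, x_max, y_max) of the foreground object border.
    l1: distance from the border's center to the image center (position parameter).
    l2: the larger of the border's width and height (size parameter).
    The weights w1, w2 are illustrative assumptions only.
    """
    x_min, y_min, x_max, y_max = border
    bw, bh = x_max - x_min, y_max - y_min                 # width and height
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2     # border center
    l1 = math.hypot(cx - image_w / 2, cy - image_h / 2)   # distance to image center
    l2 = max(bw, bh)
    return w1 * l1 + w2 * l2

# A border near the image edge scores higher than a same-sized border
# at the center, so off-center foreground objects trigger the spherical
# projection branch more readily.
center_box = (940, 520, 980, 560)   # 40x40 box at the center of a 1920x1080 image
edge_box = (0, 0, 40, 40)           # same size in the corner
```

With multiple foreground objects, the function is simply applied to each border, and the largest score is compared against the predetermined threshold, as described in paragraph [0108].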
[0105] S545: performing distortion calibration on the original image to obtain a target image.
[0106] When no foreground object is included in the original image, since the process of distortion calibration does not cause serious distortion of the content of the original image, only the distortion calibration may be performed on the original image to obtain the target image. Thus, the time for performing the image calibration processing is saved.
[0107] S550: determining whether the deformation degree is greater than a predetermined threshold.
[0108] As can be seen from the foregoing steps, when multiple foreground objects are detected in step S530, the deformation degrees of the multiple foreground objects are calculated in step S540 respectively. In this case, the foreground object having the largest deformation degree among the multiple foreground objects is acquired, and it is determined whether the deformation degree of the foreground object having the largest deformation degree is greater than the predetermined threshold.
[0109] If it is determined that the deformation degree is greater than the predetermined threshold, the processing proceeds to step S560; otherwise, the processing proceeds to step S565.
[0110] S560: performing the distortion calibration and the spherical projection on the original image to obtain a target image.
[0111] This step is similar to step S460 in the foregoing embodiment and is not described herein again.
[0112] S565: performing the distortion calibration on the original image to obtain a target image.
[0113] When the deformation degree of the current foreground objects does not exceed the predetermined threshold, since the processing of distortion calibration does not cause serious distortion of the foreground object, only the original image may be distortion calibrated to obtain the target image. Thus, the time for performing the image calibration processing is saved.
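The branching of steps S530 through S565 can be sketched as a dispatch function. The detection routine, the deformation calculation, and the two calibration routines are passed in as opaque callables, since their concrete implementations are not fixed by this description; only the decision flow is shown.

```python
def calibrate(original, detect_foreground, deformation_degree,
              distortion_calibrate, spherical_project, threshold):
    """Decision flow of steps S530-S565. All callables are placeholders
    supplied by the caller; only the branching logic is shown."""
    borders = detect_foreground(original)                 # S530
    if not borders:                                       # no foreground object
        return distortion_calibrate(original)             # S545
    worst = max(deformation_degree(b) for b in borders)   # S540 (per border)
    if worst > threshold:                                 # S550
        # S560: distortion calibration followed by spherical projection
        return spherical_project(distortion_calibrate(original))
    return distortion_calibrate(original)                 # S565
```

Note that spherical projection is applied only on the S560 branch; the other two branches perform distortion calibration alone, saving processing time.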
[0114] For example, referring to
[0115] Referring to
[0116] S720: based on a spherical projection transformation formula and a distortion calibration transformation formula, calculating a corresponding relationship between pixel points of the target image and pixel points of the original image.
[0117] Referring further to
[0118] After distortion calibration on the original image is performed, the coordinates (u.sub.i″, v.sub.i″) of the pixel points of the original image are converted into the coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image. Then, the coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration are converted into the coordinates (u.sub.i, v.sub.i) of the pixel points of the target image by spherical projection. (u.sub.i, v.sub.i) corresponds to (u.sub.i′, v.sub.i′) by the spherical projection transformation formula, and (u.sub.i′, v.sub.i′) corresponds to (u.sub.i″, v.sub.i″) by the distortion calibration transformation formula. Referring to
[0119] In brief, the pixel points (u.sub.i, v.sub.i) in the target image are converted into pixel points (u.sub.i″, v.sub.i″) of the original image by the distortion calibration transformation formula and the spherical projection transformation formula. The pixel value of the pixel points represented by (u.sub.i, v.sub.i) in the target image corresponds to the pixel value of the pixel points represented by (u.sub.i″, v.sub.i″) in the original image. Each pixel point in the target image is mapped to a pixel point in the original image.
[0120] After the corresponding relationship between the pixel points of the target image and the pixel points of the original image is calculated, the pixel values of the pixel points of the original image may be acquired. However, the coordinates of the pixel points in the original image corresponding to the coordinates of the pixel points in the target image calculated by the spherical projection transformation formula and the distortion calibration transformation formula are generally not integers, i.e., u.sub.i″ and v.sub.i″ are generally not integers. Therefore, the “pixel point of the original image” calculated according to the present disclosure may not be a standard pixel in an image and may be considered as a point in the original image. At this time, the pixel values of the pixel points of the original image whose coordinates are not integers may be obtained by using an interpolation algorithm (for example, a bilinear interpolation algorithm, a bicubic interpolation algorithm, or a nearest-neighbor interpolation algorithm). Taking a bilinear interpolation algorithm as an example, if the coordinates of the pixel point in the corresponding original image calculated by using the spherical projection transformation formula and the distortion calibration transformation formula are (1.1, 2.3), bilinear interpolation calculation is performed using the four pixel points with integer coordinates (1, 2), (2, 2), (1, 3), and (2, 3) in the original image, to obtain the pixel value at coordinates (1.1, 2.3) in the original image. Calculating a pixel value by using an interpolation algorithm is a common technique in image processing, and the specific calculation method is not described herein again. It should be understood that various interpolation algorithms can be used to calculate pixel values, which is not limited in the present disclosure.
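A minimal sketch of the bilinear interpolation described above, for sampling a pixel value at non-integer original-image coordinates:

```python
def bilinear_sample(image, x, y):
    """Sample `image` at non-integer (x, y) using the four surrounding
    integer pixel points, as in the (1.1, 2.3) example above.

    image: 2D list of grayscale pixel values, indexed image[row][col],
    with row corresponding to y and col to x."""
    x0, y0 = int(x), int(y)          # top-left integer neighbor
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0          # fractional offsets in [0, 1)
    # Weight each of the four neighbors by the area of the opposite cell.
    return (image[y0][x0] * (1 - dx) * (1 - dy) +
            image[y0][x1] * dx * (1 - dy) +
            image[y1][x0] * (1 - dx) * dy +
            image[y1][x1] * dx * dy)
```

For the (1.1, 2.3) example, this blends the values at (1, 2), (2, 2), (1, 3), and (2, 3) with weights 0.63, 0.07, 0.27, and 0.03 respectively.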
[0121] In some embodiments, all pixel points in the target image are traversed, and the spherical projection transformation and the distortion calibration transformation are applied to the coordinates of all pixel points in the target image to calculate the coordinates of the corresponding pixel points in the original image.
[0122] In other embodiments, preferably, the spherical projection transformation formula and the distortion calibration transformation formula may be applied only to the coordinates of some pixel points in the target image. In this case, the target image may be divided into multiple rectangular blocks according to a certain width interval and height interval, and the spherical projection transformation formula and the distortion calibration transformation formula are applied to the vertices of the multiple rectangular blocks in the target image, so as to calculate the coordinates of the pixel points in the original image corresponding thereto. For the vertices of the rectangular blocks, this process is similar to that of the above-described embodiments and is not described herein.
[0123] For the other pixel points (non-vertex pixel points) in the target image, the coordinates of the corresponding pixel point in the original image are calculated by a bilinear interpolation algorithm, using the coordinates of the four pixel points in the original image obtained by mapping the four vertices closest to the pixel point.
[0124] As shown in
[0125] In this way, by applying the spherical projection transformation formula and the distortion calibration transformation formula to some pixel points and the bilinear interpolation algorithm to the other pixel points, a corresponding relationship between all pixel points in the target image and pixel points in the original image is obtained; that is, the coordinates of the pixel points in the original image corresponding to all pixel points in the target image are obtained, and then the pixel values of the pixel points of the original image are obtained by the interpolation algorithm. In this embodiment, the spherical projection transformation formula and the distortion calibration transformation formula need not be applied to the coordinates of all the pixel points in the target image, and the calculation amount is further reduced.
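The block-vertex optimization of paragraphs [0122] to [0125] can be sketched as follows. The composed spherical projection and distortion calibration mapping is treated as an opaque callable `transform`, and the edge handling is simplified: the sketch assumes width−1 and height−1 are multiples of the block size.

```python
def grid_mapping(width, height, block, transform):
    """Evaluate `transform` (the composed spherical projection and
    distortion calibration mapping, an opaque callable here) only at
    rectangular-block vertices, then fill all other target pixels by
    bilinear interpolation of the four nearest mapped vertices.
    Assumes width-1 and height-1 are multiples of `block`."""
    # Expensive mapping applied at grid vertices only.
    verts = {(x, y): transform(x, y)
             for y in range(0, height, block)
             for x in range(0, width, block)}
    out = {}
    for y in range(height):
        for x in range(width):
            # Top-left vertex of the block containing (x, y), clamped
            # so the bottom/right edges reuse the last full block.
            x0 = min((x // block) * block, width - 1 - block)
            y0 = min((y // block) * block, height - 1 - block)
            fx, fy = (x - x0) / block, (y - y0) / block
            p00 = verts[(x0, y0)]
            p10 = verts[(x0 + block, y0)]
            p01 = verts[(x0, y0 + block)]
            p11 = verts[(x0 + block, y0 + block)]
            # Bilinear blend of the four mapped vertex coordinates.
            top = (p00[0] + (p10[0] - p00[0]) * fx, p00[1] + (p10[1] - p00[1]) * fx)
            bot = (p01[0] + (p11[0] - p01[0]) * fx, p01[1] + (p11[1] - p01[1]) * fx)
            out[(x, y)] = (top[0] + (bot[0] - top[0]) * fy,
                           top[1] + (bot[1] - top[1]) * fy)
    return out
```

The per-pixel work is reduced to a few multiplications; the mapping is exact wherever it is locally close to affine, which is the premise of the block approximation.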
[0126] S740: assigning pixel values of the pixel points of the original image to the pixel points of the target image corresponding to the pixel points of the original image to obtain pixel values of the pixel points in the target image.
[0127] In this step, distortion calibration and spherical projection are performed on the original image to obtain a target image. In practical processing, a reverse calculation is generally performed. That is, for each pixel point in the target image, the corresponding pixel point of the original image is obtained by using the spherical projection transformation formula and the distortion calibration transformation formula, and the pixel value of that pixel point of the original image is assigned to the pixel point of the target image, so that the pixel value of each pixel point in the target image is obtained, thereby obtaining the target image with pixel values. In other words, before the reverse calculation is performed, the pixel points in the target image do not have pixel values; the pixel values are assigned to them by the reverse calculation.
[0128] For example, for the pixel points (u.sub.0, v.sub.0) of the target image, the coordinates of the corresponding pixel points of the original image are (u.sub.0″, v.sub.0″) calculated by distortion calibration transformation formula and spherical projection transformation formula. The pixel value (also known as the color value) of the pixel point with coordinates (u.sub.0″, v.sub.0″) in the original image is obtained, and then the pixel value is assigned to the pixel point (u.sub.0, v.sub.0) of the target image so that the pixel value corresponding to the pixel point (u.sub.0, v.sub.0) of the target image is the same as the pixel value corresponding to the pixel point (u.sub.0″, v.sub.0″) of the original image.
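The reverse calculation of paragraphs [0127] and [0128] can be sketched as follows. For brevity this sketch samples the original image by nearest-neighbor rounding rather than the bilinear interpolation discussed earlier, and `to_original` stands in for the composed spherical projection and distortion calibration mapping:

```python
def backward_map(original, width, height, to_original):
    """Build the target image by reverse calculation: for each target
    pixel (u, v), `to_original` returns the corresponding (possibly
    non-integer) original coordinates, whose pixel value is sampled
    here by nearest-neighbor rounding for brevity (the text describes
    bilinear interpolation instead).

    original: 2D list indexed original[row][col]."""
    rows, cols = len(original), len(original[0])
    target = [[0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            x, y = to_original(u, v)
            # Clamp to valid indices and round to the nearest pixel.
            xi = min(max(int(round(x)), 0), cols - 1)
            yi = min(max(int(round(y)), 0), rows - 1)
            target[v][u] = original[yi][xi]   # assign the pixel value
    return target
```

Every target pixel receives exactly one value this way, which is why the reverse direction is preferred over forward-mapping original pixels into the target (which can leave holes).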
[0129] In an embodiment, the above-mentioned spherical projection transformation formula is:
[0130] Wherein, the d is the smaller of the width and the height of the original image. The f is a focal length of the camera. The r.sub.1 is a distance from the pixel point of the target image to the center point of the target image. The r.sub.2 is a distance from the corresponding pixel point of the distortion calibrated image to the center point of the target image. The pixel points (u.sub.i, v.sub.i) of the target image, the corresponding pixel points (u.sub.i′, v.sub.i′) after performing distortion calibration on the original image, and a center point of the target image are on a same straight line.
[0131] In an embodiment, the above-mentioned distortion calibration transformation formula is:
[0132] Wherein, the f.sub.x is a first focal length of the camera. The f.sub.y is a second focal length of the camera. The c.sub.x is a lateral offset of an image origin relative to an optical center imaging point. The c.sub.y is a longitudinal offset of the image origin relative to the optical center imaging point. The k.sub.1 is a first radial distortion coefficient of the camera. The k.sub.2 is a second radial distortion coefficient of the camera. The k.sub.3 is a third radial distortion coefficient of the camera. The k.sub.4 is a fourth radial distortion coefficient of the camera. The k.sub.5 is a fifth radial distortion coefficient of the camera. The k.sub.6 is a sixth radial distortion coefficient of the camera. The p.sub.1 is a first tangential distortion coefficient of the camera. The p.sub.2 is a second tangential distortion coefficient of the camera. f.sub.x, f.sub.y, c.sub.x, and c.sub.y are the intrinsic parameters of the camera; k.sub.1, k.sub.2, k.sub.3, k.sub.4, k.sub.5, k.sub.6, p.sub.1, and p.sub.2 are the distortion coefficients of the camera. These are inherent parameters of the camera and are obtained by calibrating the camera.
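The parameter set listed above (f.sub.x, f.sub.y, c.sub.x, c.sub.y, six radial coefficients, two tangential coefficients) matches the widely used rational distortion model (e.g., as implemented in OpenCV). A sketch of that model's mapping from distortion-calibrated pixel coordinates (u′, v′) to original-image coordinates (u″, v″) is shown below, assuming the patent's formula follows this standard form; the function name is illustrative.

```python
def distort_point(u1, v1, fx, fy, cx, cy, k, p):
    """Map a distortion-calibrated pixel (u', v') to original-image
    coordinates (u'', v'') with the standard rational distortion model
    that coefficients k1..k6 and p1, p2 describe (an assumption; the
    patent's own formula is not reproduced in this text)."""
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    # Normalize to the ideal (undistorted) camera plane.
    x = (u1 - cx) / fx
    y = (v1 - cy) / fy
    r2 = x * x + y * y
    # Rational radial factor.
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    # Radial plus tangential distortion.
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Back to pixel coordinates in the original (distorted) image.
    return fx * xd + cx, fy * yd + cy
```

With all distortion coefficients set to zero the mapping reduces to the identity, which is the expected degenerate case.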
[0133] Referring again to
[0134] In addition, since the processing method for distortion calibration and spherical projection has a smaller calculation amount, the calculation requirement of the calculation platform is low, and the target image can be previewed in real time. Exemplarily, the method for calibrating image distortion according to the present disclosure may be applied to the electronic device 300 shown in
[0135] The method for calibrating image distortion provided by the disclosure can realize fast distortion calibration of ultra-wide-angle images with low computational complexity and obtain a good calibration effect.
[0136] Referring to
[0137] In an embodiment, the calibration calculation module 960 is further configured to perform the distortion calibration on the original image to obtain the target image when the original image does not include a foreground object, or to perform the distortion calibration on the original image to obtain the target image when the deformation degree of the foreground object is not greater than the predetermined threshold.
[0138] In an embodiment, the deformation calculation module 940 is further configured to acquire a foreground object border in the original image, a position parameter of the foreground object border, and a size parameter of the foreground object border and calculate a deformation degree of the foreground object border based on the position parameter of the foreground object border and the size parameter of the foreground object border.
[0139] In an embodiment, the position parameter of the foreground object border includes a distance between the foreground object border and a center point of the original image in the original image. The size parameter of the foreground object border comprises: a width of the foreground object border and a height of the foreground object border.
[0140] The deformation calculation module 940 is further configured to calculate the deformation degree of the foreground object based on
S=w.sub.1×l.sub.1+w.sub.2×l.sub.2.
[0141] Wherein, the S is the deformation degree of the foreground object. The l.sub.1 is the distance between the foreground object border and the center point of the original image. The l.sub.2 is the larger of the width of the foreground object border and the height of the foreground object border. The w.sub.1 is a first weight value, and the w.sub.2 is a second weight value.
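The weighted sum above is straightforward to compute; the sketch below implements S = w.sub.1×l.sub.1 + w.sub.2×l.sub.2 exactly as defined, with illustrative parameter names.

```python
def deformation_degree(center_dist, border_w, border_h, w1, w2):
    """Compute S = w1*l1 + w2*l2, where l1 is the distance from the
    foreground object border to the center of the original image and
    l2 is the larger of the border's width and height."""
    l1 = center_dist
    l2 = max(border_w, border_h)
    return w1 * l1 + w2 * l2
```

For example, a border 10 units from the center with width 3 and height 4, using equal weights of 0.5, yields S = 0.5*10 + 0.5*4 = 7.0; the result would then be compared against the predetermined threshold.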
[0142] In an embodiment, the calibration calculation module 960 includes a mapping calculation unit 962 configured to calculate a corresponding relationship between pixel points of the target image and pixel points of the original image based on a spherical projection transformation formula and a distortion calibration transformation formula, and a pixel assignment unit 964 configured to assign pixel values of the pixel points of the original image to the corresponding pixel points of the target image to obtain pixel values of the pixel points in the target image.
[0143] In an embodiment, the mapping calculation unit 962 is configured to calculate coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image corresponding to coordinates (u.sub.i, v.sub.i) of the pixel points of the target image based on the spherical projection transformation formula, wherein the pixel points of the target image, the corresponding pixel points after performing distortion calibration on the original image, and a center point of the target image are on a same straight line: and calculate coordinates (u.sub.i″, v.sub.i″) of the pixel points of the original image corresponding to coordinates (u.sub.i′, v.sub.i′) of the pixel points after performing distortion calibration on the original image based on the distortion calibration transformation formula.
[0144] The spherical projection transformation formula is:
[0145] Wherein the d is the smaller of the width and the height of the original image. The f is a focal length of the camera. The r.sub.1 is a distance from the pixel point of the target image to the center point of the target image. The r.sub.2 is a distance from the corresponding pixel point of the distortion-calibrated image to the center point of the target image.
[0146] The distortion calibration transformation formula is:
[0147] Wherein the f.sub.x is a first focal length of the camera, and the f.sub.y is a second focal length of the camera. The c.sub.x is a lateral offset of an image origin relative to an optical center imaging point. The c.sub.y is a longitudinal offset of the image origin relative to the optical center imaging point. The k.sub.1 is a first radial distortion coefficient of the camera. The k.sub.2 is a second radial distortion coefficient of the camera. The k.sub.3 is a third radial distortion coefficient of the camera. The k.sub.4 is a fourth radial distortion coefficient of the camera. The k.sub.5 is a fifth radial distortion coefficient of the camera. The k.sub.6 is a sixth radial distortion coefficient of the camera. The p.sub.1 is a first tangential distortion coefficient of the camera. And the p.sub.2 is a second tangential distortion coefficient of the camera.
[0148] The image calibration apparatus of the disclosure corresponds one-to-one with the method for calibrating image distortion of the disclosure. The technical features and beneficial effects described in the embodiments of the above method for calibrating image distortion are equally applicable to the embodiments of the image calibration apparatus.
[0149] For specific definition of the image distortion calibration device, reference may be made to the definition of the above method for calibrating image distortion, and details are not described herein again. Each module in the image distortion calibration device may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules can be embedded in or independent of the processor in the computer device in the form of hardware or stored in the memory in the computer device in the form of software, so as to facilitate the processor to call and execute the corresponding operations of the above modules.
[0150] According to another aspect of the present disclosure, an electronic device is provided, and the electronic device may be a terminal, and an internal structural diagram thereof may be shown in
[0151] A person skilled in the art would understand that the structure shown in
[0152] In an embodiment, an electronic device is further provided, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the foregoing embodiments of the above method when executing the computer program.
[0153] In an embodiment, a computer readable storage medium is provided, on which a computer program is stored, and the computer program is executed by a processor to implement the steps in the foregoing embodiments of the above method.
[0154] A person of ordinary skill in the art would understand that all or part of the processes of the method in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer readable storage medium. When the computer program is executed, the computer program may include the processes of the embodiments of the above method. Any reference to memory, storage, database, or other media used in the embodiments provided by the present disclosure may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. As an illustration and not a limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), rambus direct RAM (RDRAM), direct rambus dynamic RAM (DRDRAM), and rambus dynamic RAM (RDRAM), and so on.
[0155] The technical features of the above embodiments can be combined arbitrarily. In order to make the description concise, all possible combinations of the technical features in the above embodiments are not described. However, as long as there is no contradiction in the combination of these technical features, it shall be considered to be the scope recorded in the specification.
[0156] The above embodiments merely express several embodiments of the present disclosure, and the description thereof is more specific and detailed, but cannot be construed as limiting the scope of the present disclosure. It should be noted that, for a person of ordinary skill in the art, several modifications and improvements can also be made without departing from the inventive concept, which all belong to the scope of protection of the present disclosure. Therefore, the scope of protection of the present disclosure shall be subject to the appended claims.