IMAGE PROCESSING METHOD AND APPARATUS
20200267297 · 2020-08-20
CPC classification
H04N23/683 (ELECTRICITY)
G06T7/80 (PHYSICS)
G06T3/604 (PHYSICS)
Abstract
An image processing method includes obtaining two-dimensional coordinate points of an input image, and according to a camera imaging model or a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain a first processing result. The method also includes performing at least one of virtual reality processing or electronic image stabilization on the first processing result to obtain a second processing result, and mapping the second processing result to a two-dimensional image coordinate system to obtain an output image.
Claims
1. An image processing method, comprising: obtaining two-dimensional coordinate points of an input image; according to a camera imaging model or a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain a first processing result; performing at least one of virtual reality processing or electronic image stabilization on the first processing result to obtain a second processing result; and mapping the second processing result to a two-dimensional image coordinate system to obtain an output image.
2. The method according to claim 1, wherein the performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result further includes: according to camera parameters and the distortion correction model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result; or according to the camera parameters and the camera imaging model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result.
3. The method according to claim 1, wherein the virtual reality processing is performed on the first processing result according to a first rotation matrix.
4. The method according to claim 2, wherein the electronic image stabilization is performed on the first processing result according to a second rotation matrix.
5. The method according to claim 3, wherein: the first rotation matrix is determined according to an attitude-angle parameter of an observer; and according to the first rotation matrix, the first processing result is processed to obtain the second processing result.
6. The method according to claim 5, further comprising: obtaining the attitude-angle parameter of the observer.
7. The method according to claim 4, wherein: the second rotation matrix is determined according to measurement parameters obtained from an inertial measurement unit connected to a camera; and the first processing result is processed to obtain the second processing result according to the second rotation matrix.
8. The method according to claim 7, further comprising: obtaining the measurement parameters from the inertial measurement unit connected to the camera, and determining the second rotation matrix according to the measurement parameters; or obtaining the second rotation matrix from the inertial measurement unit connected to the camera, wherein the second rotation matrix is determined by the inertial measurement unit according to the measurement parameters.
9. The method according to claim 2, wherein the camera imaging model includes any one of a pinhole imaging model, an isometric rectangular model, a stereo imaging model, a fisheye lens model, or a wide-angle lens model.
10. An image processing apparatus, comprising: a lens, an image sensor, and a processor, wherein: the image sensor acquires a two-dimensional image through the lens, and the two-dimensional image is used as an input image; and the processor is configured to perform: obtaining two-dimensional coordinate points of the input image; according to a camera imaging model or a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain a first processing result; performing at least one of virtual reality processing or electronic image stabilization on the first processing result to obtain a second processing result; and mapping the second processing result to a two-dimensional image coordinate system to obtain an output image.
11. The apparatus according to claim 10, wherein the performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result further includes: according to camera parameters and the distortion correction model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result; or according to the camera parameters and the camera imaging model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result.
12. The apparatus according to claim 10, wherein the virtual reality processing is performed on the first processing result according to a first rotation matrix.
13. The apparatus according to claim 11, wherein the electronic image stabilization is performed on the first processing result according to a second rotation matrix.
14. The apparatus according to claim 12, wherein: the first rotation matrix is determined according to an attitude-angle parameter of an observer; and according to the first rotation matrix, the first processing result is processed to obtain the second processing result.
15. A non-transitory computer-readable storage medium containing computer-executable instructions for, when executed by one or more processors, performing an image processing method, the method comprising: obtaining two-dimensional coordinate points of an input image; according to a camera imaging model or a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain a first processing result; performing at least one of virtual reality processing or electronic image stabilization on the first processing result to obtain a second processing result; and mapping the second processing result to a two-dimensional image coordinate system to obtain an output image.
16. The storage medium according to claim 15, wherein the performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result further includes: according to camera parameters and the distortion correction model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result; or according to the camera parameters and the camera imaging model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points to obtain the first processing result.
17. The storage medium according to claim 15, wherein the virtual reality processing is performed on the first processing result according to a first rotation matrix.
18. The storage medium according to claim 16, wherein the electronic image stabilization is performed on the first processing result according to a second rotation matrix.
19. The storage medium according to claim 17, wherein: the first rotation matrix is determined according to an attitude-angle parameter of an observer; and according to the first rotation matrix, the first processing result is processed to obtain the second processing result.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The following drawings are merely examples for illustrative purposes according to various disclosed embodiments and are not intended to limit the scope of the present disclosure.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0022] To make the objectives, technical solutions, and advantages of the present disclosure clearer and more explicit, the present disclosure is described in further detail below with reference to the accompanying drawings. It should be understood that the specific exemplary embodiments described herein are only intended to explain the present disclosure and are not intended to limit it.
[0023] Reference will now be made in detail to exemplary embodiments of the present disclosure, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
[0024] It should be noted that the relative arrangements of components and steps, numerical expressions, and numerical values set forth in the exemplary embodiments are for illustration purposes only and are not intended to limit the present disclosure unless otherwise specified. Techniques, methods, and apparatus known to those skilled in the relevant art may not be discussed in detail, but these techniques, methods, and apparatus should be considered as a part of the specification, where appropriate.
[0026] The image processing apparatus may include a lens 1, an image sensor 2, and an image processor 3. The lens 1 is connected to the image sensor 2, and the image sensor 2 is connected to the image processor 3. Light may enter the image sensor 2 through the lens 1, and the image sensor 2 may perform an imaging function, and thus an input image may be obtained. The image processor 3 may perform at least two operations of distortion correction, electronic image stabilization, or virtual reality processing, on the input image, and thus an output image may be obtained.
[0027] An image processing method provided by the present disclosure may reduce calculation complexity, shorten calculation time, and improve the image processing efficiency of the image processor when performing at least two processing operations of distortion correction, electronic image stabilization, or virtual reality processing.
[0028] It should be noted that, in the present disclosure, the image processor 3, the lens 1, and the image sensor 2 may be located on a same electronic device or on different electronic devices.
[0030] S101: obtaining two-dimensional coordinate points of an input image. Specifically, when light enters an image sensor through a lens, the image sensor may perform an imaging function and thus an input image may be obtained. Since the input image is a two-dimensional image, two-dimensional coordinate points of all pixel points of the input image may be obtained.
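As an illustrative sketch of S101 (not part of the claimed subject matter), the two-dimensional coordinate points of all pixel points may be enumerated as a coordinate grid. The image size and the NumPy-based helper below are assumptions for illustration only.

```python
import numpy as np

# Assumed image size for illustration; a real sensor supplies it.
height, width = 4, 6

# One (u, v) coordinate pair per pixel, row-major.
u, v = np.meshgrid(np.arange(width), np.arange(height))
p2d = np.stack([u.ravel(), v.ravel()], axis=1)  # shape: (height*width, 2)
```

Each row of `p2d` is one two-dimensional coordinate point of the input image, ready for the two-dimension to three-dimension conversion of S102.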
[0031] S102: according to a camera imaging model or a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points, and obtaining a first processing result.
[0032] Specifically, performing the two-dimension to three-dimension conversion operation refers to establishing one-to-one correspondence between the two-dimensional coordinate points and incident rays. The two-dimensional coordinate points of all pixel points of the input image may be mapped as incident rays, and the first processing result refers to the incident rays corresponding to the two-dimensional coordinate points of all the pixel points of the input image.
[0033] In one embodiment, S102 may include, according to camera parameters and the camera imaging model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points, and obtaining the first processing result. In some other embodiments, S102 may include, according to the camera parameters and the distortion correction model, performing the two-dimension to three-dimension conversion operation on the two-dimensional coordinate points, and obtaining the first processing result.
[0034] The camera parameters may include a focal length of the camera and an optical-center position of the camera, etc. The camera imaging model may include one of a pinhole imaging model, an isometric rectangular model, a stereo imaging model, a fisheye lens model, or a wide-angle lens model. The camera imaging model may be set according to actual requirements.
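As a hedged sketch of the two-dimension to three-dimension conversion under a pinhole imaging model, the helper below maps pixel coordinates to unit-length incident rays. The focal length, optical-center position, and the name `unproject_pinhole` are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

fx = fy = 100.0          # focal length in pixels (assumed)
cx, cy = 320.0, 240.0    # optical-center position (assumed)

def unproject_pinhole(p2d):
    """Map pixel coordinates to unit-length incident rays (the role of f_pin)."""
    p2d = np.asarray(p2d, dtype=float)
    x = (p2d[:, 0] - cx) / fx
    y = (p2d[:, 1] - cy) / fy
    rays = np.stack([x, y, np.ones_like(x)], axis=1)
    # Normalize so each row is a unit direction vector (an incident ray).
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

rays = unproject_pinhole([[320.0, 240.0], [420.0, 240.0]])
```

The pixel at the optical center maps to the optical axis, and every other pixel maps to its own incident ray, giving the one-to-one correspondence described in paragraph [0032].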
[0035] S103: performing at least one of virtual reality processing or electronic image stabilization on the first processing result, and obtaining a second processing result. The virtual reality processing may refer to producing a computer-simulated environment that may simulate a physical presence in places in the real world or in imagined worlds. The electronic image stabilization may refer to an image enhancement technique using electronic processing, which may minimize blurring and compensate for device shake. The virtual reality processing may be performed on the first processing result according to a first rotation matrix, and the electronic image stabilization may be performed on the first processing result according to a second rotation matrix. The second processing result may be obtained by processing the first processing result obtained in S102 according to at least one of the first rotation matrix or the second rotation matrix.
[0036] Specifically, the first rotation matrix may be determined according to an attitude-angle parameter of an observer, and the second rotation matrix may be determined according to a measurement parameter obtained from an inertial measurement unit connected to a camera. The camera may specifically refer to the lens and the image sensor shown in
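A first rotation matrix determined from an attitude-angle parameter might be constructed as below. The yaw-pitch-roll (Z-Y-X) Euler convention is an assumption, since the disclosure does not fix one.

```python
import numpy as np

def rotation_from_attitude(yaw, pitch, roll):
    """Build a rotation matrix from attitude angles (assumed Z-Y-X order)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy_, sy_ = np.cos(pitch), np.sin(pitch)
    cx_, sx_ = np.cos(roll), np.sin(roll)
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    ry = np.array([[cy_, 0, sy_], [0, 1, 0], [-sy_, 0, cy_]])
    rx = np.array([[1, 0, 0], [0, cx_, -sx_], [0, sx_, cx_]])
    return rz @ ry @ rx

# Illustrative attitude-angle parameters of an observer (radians).
r_vr = rotation_from_attitude(0.1, -0.05, 0.02)
```

Any such matrix is orthonormal with determinant 1, so applying it to the incident rays rotates them without changing their lengths.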
[0037] S104: mapping the second processing result to a two-dimensional image coordinate system. Specifically, an output image may be obtained by mapping each adjusted incident ray to the two-dimensional image coordinate system. The output image is an image after undergoing at least two operations of distortion correction, electronic image stabilization, or virtual reality processing.
[0038] In one embodiment, the first processing result is obtained by performing a two-dimension to three-dimension conversion operation on the obtained two-dimensional coordinate points of an input image. The first processing result is processed according to at least one of a first rotation matrix or a second rotation matrix, and a second processing result may thus be obtained. The second processing result is mapped to a two-dimensional image coordinate system, and an output image may thus be obtained. Accordingly, fast processing of the input image may be realized, such that at least two operations of distortion correction, electronic image stabilization, or virtual reality processing may be completed. This processing method may reduce calculation complexity, shorten calculation time, and improve image processing efficiency. For the camera imaging model, the distortion correction model, the first rotation matrix, and the second rotation matrix involved in the present disclosure, reference may be made to existing technologies.
[0039] Technical solutions of the image processing method provided by the present disclosure are described in detail with following embodiments.
[0041] S201: obtaining two-dimensional coordinate points of an input image. For a specific explanation of S201, reference may be made to S101 in the embodiment shown in
[0042] S202: according to camera parameters and a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points and obtaining a first processing result.
[0043] S202 may realize a conversion from 2D to 3D shown in
[0044] S203: performing a virtual reality processing on the first processing result and obtaining a second processing result. Specifically, the first rotation matrix is a rotation matrix used in a virtual reality processing, and may be determined according to an attitude-angle parameter of an observer. In S203, a 3D to 3D rotation processing shown in
[0045] Specifically, with P'_3D denoting the second processing result and R_VR denoting the first rotation matrix, S203 may obtain the second processing result according to a formula P'_3D = R_VR · P_3D. By inserting the formula P_3D = f_pin(P_2D) of S202 into the formula P'_3D = R_VR · P_3D, a formula P'_3D = R_VR · f_pin(P_2D) may be obtained.
[0046] S204: mapping the second processing result to a two-dimensional image coordinate system. Specifically, by mapping the incident rays after a rotation processing in S203 to the two-dimensional image coordinate system, an output image may be obtained. The output image is an image that has undergone the distortion correction and the virtual reality processing. S204 may realize a 3D to 2D mapping shown in
[0047] Specifically, with P'_2D denoting the coordinate points mapped to the two-dimensional image coordinate system, S204 may map the second processing result to the two-dimensional image coordinate system according to a formula P'_2D = f_cam^-1(P'_3D). The function f_cam^-1( ) may be set according to actual requirements. By inserting the formula P'_3D = R_VR · f_pin(P_2D) of S203 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_VR · f_pin(P_2D)) may be obtained.
[0048] In one embodiment, according to the camera parameters and the distortion correction model, the first processing result may be obtained by performing the two-dimension to three-dimension conversion operation on the obtained two-dimensional coordinate points of the input image. The second processing result may be obtained by performing the virtual reality processing on the first processing result. The output image may be obtained by mapping the second processing result to the two-dimensional image coordinate system. Accordingly, fast processing of the input image may be realized, such that the distortion correction and the virtual reality processing may be completed. As such, calculation complexity may be reduced, calculation time may be shortened, and image processing efficiency may be improved.
[0049] In addition, in the present disclosure, since the distortion correction and the virtual reality processing are completed in the way described above, the operations P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) are not required between the formula P_3D = f_pin(P_2D) and the formula P'_3D = R_VR · P_3D. Thus, the calculation may be simplified. In addition, since the calculations of f_cam^-1( ) and f_cam( ) are usually performed through fixed-point arithmetic or lookup tables, P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) may not be exactly inverse operations of each other, and after repeated calculations, cumulative errors may increase. Accordingly, by simplifying the calculation in the manner described above, the cumulative errors may be eliminated, and calculation accuracy may be improved.
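A minimal numeric sketch of the combined mapping P'_2D = f_cam^-1(R_VR · f_pin(P_2D)): one rotation is applied directly to the unprojected rays, followed by a single final projection, with no intermediate 3D-to-2D round trip. The pinhole stand-ins for f_pin and f_cam and all parameter values are assumptions; the disclosure permits other imaging and distortion-correction models.

```python
import numpy as np

fx, fy, cx, cy = 100.0, 100.0, 320.0, 240.0  # assumed camera parameters

def f_pin(p2d):
    """Stand-in 2D -> 3D conversion (pinhole unprojection)."""
    p2d = np.asarray(p2d, dtype=float)
    return np.stack([(p2d[:, 0] - cx) / fx,
                     (p2d[:, 1] - cy) / fy,
                     np.ones(len(p2d))], axis=1)

def f_cam_inv(p3d):
    """Stand-in 3D -> 2D mapping (pinhole projection)."""
    return np.stack([fx * p3d[:, 0] / p3d[:, 2] + cx,
                     fy * p3d[:, 1] / p3d[:, 2] + cy], axis=1)

r_vr = np.eye(3)  # identity attitude: the output should equal the input
p2d_in = np.array([[320.0, 240.0], [400.0, 200.0]])
p2d_out = f_cam_inv((r_vr @ f_pin(p2d_in).T).T)
```

With a non-identity `r_vr`, the same single pass applies the virtual reality rotation to every incident ray before the one final projection.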
[0051] S301: obtaining two-dimensional coordinate points of an input image. For a specific explanation of S301, reference may be made to S101 in the embodiment shown in
[0052] S302: according to camera parameters and a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points and obtaining a first processing result.
[0053] S302 may realize a conversion from 2D to 3D shown in
[0054] With P_3D denoting the first processing result and P_2D denoting the two-dimensional coordinate points, S302 may obtain the first processing result according to a formula P_3D = f_pin(P_2D), where the function f_pin( ) may be a polynomial.
[0055] S303: performing electronic image stabilization on the first processing result and obtaining a second processing result. A second rotation matrix is a rotation matrix used in the electronic image stabilization, and may be determined according to a measurement parameter obtained from an inertial measurement unit connected to the camera. S303 may realize a 3D to 3D rotation processing shown in
[0056] With P'_3D denoting the second processing result and R_IS denoting the second rotation matrix, S303 may obtain the second processing result according to a formula P'_3D = R_IS · P_3D. By inserting the formula P_3D = f_pin(P_2D) of S302 into the formula P'_3D = R_IS · P_3D, a formula P'_3D = R_IS · f_pin(P_2D) may be obtained.
[0057] S304: mapping the second processing result to a two-dimensional image coordinate system. Specifically, by mapping the incident rays after the rotation processing in S303 to the two-dimensional image coordinate system, an output image may be obtained. The output image is an image that has undergone the distortion correction and the electronic image stabilization. S304 may realize a 3D to 2D mapping shown in
[0058] Specifically, with P'_2D denoting the coordinate points mapped to the two-dimensional image coordinate system, S304 may map the second processing result to the two-dimensional image coordinate system according to a formula P'_2D = f_cam^-1(P'_3D). The function f_cam^-1( ) may be set according to actual requirements. By inserting the formula P'_3D = R_IS · f_pin(P_2D) of S303 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_IS · f_pin(P_2D)) may be obtained.
[0059] In one embodiment, according to the camera parameters and the distortion correction model, the first processing result may be obtained by performing the two-dimension to three-dimension conversion operation on the obtained two-dimensional coordinate points of the input image. The second processing result may be obtained by performing the electronic image stabilization on the first processing result. The output image may be obtained by mapping the second processing result to the two-dimensional image coordinate system. Accordingly, fast processing of the input image may be realized, such that the distortion correction and the electronic image stabilization may be completed. As such, calculation complexity may be reduced, calculation time may be shortened, and image processing efficiency may be improved.
[0060] In addition, in the present disclosure, since the distortion correction and the electronic image stabilization are completed in the way described above, the operations P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) are not required between the formula P_3D = f_pin(P_2D) and the formula P'_3D = R_IS · P_3D. Thus, the calculation may be simplified. In addition, since the calculations of f_cam^-1( ) and f_cam( ) are usually performed through fixed-point arithmetic or lookup tables, P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) may not be exactly inverse operations of each other, and after repeated calculations, cumulative errors may increase. Accordingly, by simplifying the calculation in the manner described above, the cumulative errors may be eliminated, and calculation accuracy may be improved.
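How the second rotation matrix is derived from inertial-measurement-unit readings is left to existing technologies; one common (assumed) approach integrates the measured angular velocity over a sample interval and converts the result to a rotation matrix via Rodrigues' formula. The sample values below are illustrative.

```python
import numpy as np

def rodrigues(axis_angle):
    """Convert an axis-angle vector to a rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    kx = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * kx + (1 - np.cos(theta)) * (kx @ kx)

omega = np.array([0.0, 0.0, 0.5])  # assumed gyroscope reading, rad/s about z
dt = 0.02                          # assumed sample interval, seconds
r_is = rodrigues(-omega * dt)      # counter-rotation that cancels the shake
```

Applying `r_is` to the incident rays counter-rotates them by exactly the camera motion measured over the interval, which is the stabilizing effect attributed to the second rotation matrix.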
[0062] S401: obtaining two-dimensional coordinate points of an input image. For a specific explanation of S401, reference may be made to S101 in the embodiment shown in
[0063] S402: according to camera parameters and a camera imaging model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points and obtaining a first processing result.
[0064] S402 may realize a conversion from 2D to 3D as shown in
[0065] With P_3D denoting the first processing result and P_2D denoting the two-dimensional coordinate points, S402 may obtain the first processing result according to a formula P_3D = f_cam(P_2D).
[0066] S403: performing virtual reality processing and electronic image stabilization on the first processing result and obtaining a second processing result.
[0067] Specifically, a first rotation matrix is a rotation matrix used in the virtual reality processing, and may be determined according to an attitude-angle parameter of an observer. A second rotation matrix is a rotation matrix used in the electronic image stabilization, and may be determined according to a measurement parameter obtained from an inertial measurement unit connected to the camera. S403 may realize a 3D to 3D to 3D rotation processing shown in
[0068] With P'_3D denoting the second processing result, R_VR denoting the first rotation matrix, and R_IS denoting the second rotation matrix, S403 may obtain the second processing result according to a formula P'_3D = R_IS · R_VR · P_3D. That is, the virtual reality processing is performed first, and then the electronic image stabilization is performed. By inserting the formula of S402 into the formula P'_3D = R_IS · R_VR · P_3D, a formula P'_3D = R_IS · R_VR · f_cam(P_2D) may be obtained.
[0069] It should be noted that S403 may also obtain the second processing result according to a formula P'_3D = R_VR · R_IS · P_3D. That is, the electronic image stabilization is performed first, and then the virtual reality processing is performed. By inserting the formula of S402 into the formula P'_3D = R_VR · R_IS · P_3D, a formula P'_3D = R_VR · R_IS · f_cam(P_2D) may be obtained.
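The two orderings of S403 generally produce different results, since rotation matrices do not commute; the small numeric check below uses illustrative stand-in rotations and angles.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

r_vr = rot_z(0.2)            # stand-in first rotation matrix
r_is = rot_x(0.1)            # stand-in second rotation matrix
vr_then_is = r_is @ r_vr     # virtual reality processing first
is_then_vr = r_vr @ r_is     # electronic image stabilization first
```

Either composed matrix is still a single rotation, so whichever order is chosen, the whole of S403 remains one matrix multiply per incident ray.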
[0070] S404: mapping the second processing result to a two-dimensional image coordinate system. Specifically, by mapping the incident rays after the rotation processing of S403 to the two-dimensional image coordinate system, the output image may be obtained. The output image is an image that has undergone the virtual reality processing and the electronic image stabilization. S404 may realize a 3D to 2D mapping shown in
[0071] Specifically, with P'_2D denoting the coordinate points mapped to the two-dimensional image coordinate system, S404 may map the second processing result to the two-dimensional image coordinate system according to a formula P'_2D = f_cam^-1(P'_3D). The function f_cam^-1( ) may be set according to actual requirements.
[0072] By inserting the formula P'_3D = R_IS · R_VR · f_cam(P_2D) of S403 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_IS · R_VR · f_cam(P_2D)) may be obtained. By inserting the formula P'_3D = R_VR · R_IS · f_cam(P_2D) of S403 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_VR · R_IS · f_cam(P_2D)) may be obtained.
[0073] In one embodiment, according to the camera parameters and the camera imaging model, the first processing result may be obtained by performing the two-dimension to three-dimension conversion operation on the obtained two-dimensional coordinate points of the input image. The second processing result may be obtained by performing the virtual reality processing and the electronic image stabilization on the first processing result. The output image may be obtained by mapping the second processing result to the two-dimensional image coordinate system. Accordingly, fast processing of input images may be realized, such that the virtual reality processing and the electronic image stabilization may be completed. As such, calculation complexity may be reduced, calculation time may be shortened, and image processing efficiency may be improved.
[0074] In addition, in the present disclosure, since the virtual reality processing and the electronic image stabilization are completed in the way described above, the operations P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) are not required between the formula P_3D = f_cam(P_2D) and the formula P'_3D = R_IS · R_VR · P_3D (or P'_3D = R_VR · R_IS · P_3D). Thus, the calculation may be simplified. In addition, since the calculations of f_cam^-1( ) and f_cam( ) are usually performed through fixed-point arithmetic or lookup tables, P'_2D = f_cam^-1(P_3D) and P_3D = f_cam(P_2D) may not be exactly inverse operations of each other, and after repeated calculations, cumulative errors may increase. Accordingly, by simplifying the calculation in the manner described above, the cumulative errors may be eliminated, and calculation accuracy may be improved.
[0076] S501: obtaining two-dimensional coordinate points of an input image. For a specific explanation of S501, reference may be made to S101 in the embodiment shown in
[0077] S502: according to camera parameters and a distortion correction model, performing a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points and obtaining a first processing result.
[0078] S502 may realize a conversion from 2D to 3D as shown in
[0079] With P_3D denoting the first processing result and P_2D denoting the two-dimensional coordinate points, S502 may obtain the first processing result according to a formula P_3D = f_pin(P_2D).
[0080] It should be noted that, unlike the embodiment shown in
[0081] S503: performing virtual reality processing and electronic image stabilization on the first processing result and obtaining a second processing result.
[0082] The first rotation matrix is a rotation matrix used in the virtual reality processing, and may be determined according to an attitude-angle parameter of an observer. The second rotation matrix is a rotation matrix used in the electronic image stabilization, and may be determined according to a measurement parameter obtained from an inertial measurement unit connected to the camera. S503 may realize a 3D to 3D to 3D rotation processing shown in
[0083] In some other embodiments, in S503, electronic image stabilization may be performed first and then the virtual reality processing is performed.
[0084] With P'_3D denoting the second processing result, R_VR denoting the first rotation matrix, and R_IS denoting the second rotation matrix, S503 may obtain the second processing result according to a formula P'_3D = R_IS · R_VR · P_3D. By inserting the formula of S502 into the formula P'_3D = R_IS · R_VR · P_3D, a formula P'_3D = R_IS · R_VR · f_pin(P_2D) may be obtained.
[0085] It should be noted that S503 may also obtain the second processing result according to a formula P'_3D = R_VR · R_IS · P_3D. By inserting the formula of S502 into the formula P'_3D = R_VR · R_IS · P_3D, a formula P'_3D = R_VR · R_IS · f_pin(P_2D) may be obtained.
[0086] S504: mapping the second processing result to a two-dimensional image coordinate system. Specifically, by mapping the incident rays after the rotation processing in S503 to the two-dimensional image coordinate system, the output image may be obtained. The output image is an image that has undergone the distortion correction, virtual reality processing and electronic image stabilization. S504 may realize a 3D to 2D mapping shown in
[0087] With P'_2D denoting the coordinate points mapped to the two-dimensional image coordinate system, S504 may map the second processing result to the two-dimensional image coordinate system according to a formula P'_2D = f_cam^-1(P'_3D). The function f_cam^-1( ) may be set according to actual requirements.
[0088] By inserting the formula P'_3D = R_IS · R_VR · f_pin(P_2D) of S503 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_IS · R_VR · f_pin(P_2D)) may be obtained. By inserting the formula P'_3D = R_VR · R_IS · f_pin(P_2D) of S503 into the formula P'_2D = f_cam^-1(P'_3D), a formula P'_2D = f_cam^-1(R_VR · R_IS · f_pin(P_2D)) may be obtained.
[0089] In one embodiment, according to the camera parameters and the distortion correction model, the first processing result may be obtained by performing the two-dimension to three-dimension conversion operation on the obtained two-dimensional coordinate points of the input image. The second processing result may be obtained by performing the virtual reality processing and the electronic image stabilization on the first processing result. The output image may be obtained by mapping the second processing result to the two-dimensional image coordinate system. Accordingly, fast processing of input images may be realized, such that the distortion correction, the virtual reality processing, and the electronic image stabilization may be completed. As such, calculation complexity may be reduced, calculation time may be shortened, and image processing efficiency may be improved.
[0090] In addition, in the present disclosure, since the distortion correction, the virtual reality processing, and the electronic image stabilization are completed in the composed manner described above, the operations P_2D = f_cam^(-1)(P_3D) and P_3D = f_cam(P_2D) are not required to be performed between the two rotation operations in P_3D′ = R_IS R_VR P_3D (or P_3D′ = R_VR R_IS P_3D). Thus, calculation may be simplified. In addition, since the calculations of P_2D = f_cam^(-1)(P_3D) and P_3D = f_cam(P_2D) are usually performed by fixed-point arithmetic or lookup tables, f_cam^(-1)(·) and f_cam(·) may not be completely equivalent inverse operations of each other. After repeated calculations, cumulative errors may increase. Accordingly, by simplifying the calculation in the manner described above, the cumulative errors may be eliminated, and calculation accuracy may be improved.
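The error-accumulation argument can be illustrated numerically. In the sketch below (all values are assumptions, not from the patent: the intrinsics, the 1/16-pixel quantization step, and the rotation angles), table-based fixed-point behavior is emulated by quantizing pixel coordinates; a two-pass pipeline inserts an extra project/lift round trip between the two rotations, while the composed single pass does not:

```python
import numpy as np

F, CX, CY = 1000.0, 640.0, 360.0  # assumed camera intrinsics
STEP = 1.0 / 16.0                 # assumed fixed-point resolution (pixels)

def quantize(v):
    """Emulate fixed-point / lookup-table storage of pixel coordinates."""
    return np.round(np.asarray(v) / STEP) * STEP

def lift(p2d):
    """2D -> 3D (stand-in for f_cam): pixel to ray, reading quantized input."""
    q = quantize(p2d)
    return np.array([(q[0] - CX) / F, (q[1] - CY) / F, 1.0])

def project(p3d):
    """3D -> 2D (stand-in for f_cam^(-1)): ray to pixel, quantized output."""
    return quantize([F * p3d[0] / p3d[2] + CX, F * p3d[1] / p3d[2] + CY])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

R_VR, R_IS = rot_x(0.02), rot_x(-0.01)

p = np.array([653.3125, 371.6875])  # input pixel, chosen on the 1/16 grid

# Two-pass pipeline: an extra project/lift round trip between the rotations.
two_pass = project(R_IS @ lift(project(R_VR @ lift(p))))
# Single-pass pipeline: rotations composed, one lift and one projection.
one_pass = project((R_IS @ R_VR) @ lift(p))

# Reference result computed without any quantization.
ray = np.array([(p[0] - CX) / F, (p[1] - CY) / F, 1.0])
q = (R_IS @ R_VR) @ ray
ref = np.array([F * q[0] / q[2] + CX, F * q[1] / q[2] + CY])

err_one = np.max(np.abs(one_pass - ref))  # bounded by one rounding step
err_two = np.max(np.abs(two_pass - ref))  # extra round trip can add error
```

On any single input the two quantization errors may partially cancel, but the extra round trip can only add error on average, which is the cumulative effect the single-pass formulation avoids.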
[0092] In one embodiment, the processor 12 is configured to perform a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points according to camera parameters and a camera imaging model to obtain a first processing result. In some other embodiments, the processor 12 may be configured to perform a two-dimension to three-dimension conversion operation on the two-dimensional coordinate points according to camera parameters and a distortion correction model to obtain a first processing result.
[0093] The processor 12 may be configured to perform virtual reality processing on the first processing result according to a first rotation matrix.
[0094] The processor 12 may be configured to perform electronic image stabilization on the first processing result according to a second rotation matrix.
[0095] The first rotation matrix may be determined according to an attitude-angle parameter of an observer, and the first processing result may be processed according to the first rotation matrix to obtain the second processing result.
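One plausible construction of the first rotation matrix from an observer's attitude-angle parameter is sketched below; the Z-Y-X (yaw-pitch-roll) convention and the sample angles are assumptions, since the patent does not fix a convention:

```python
import numpy as np

def attitude_to_rotation(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from attitude angles in radians,
    assuming a Z-Y-X (yaw-pitch-roll) convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]], dtype=float)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]], dtype=float)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]], dtype=float)
    return Rz @ Ry @ Rx  # applied right-to-left: roll, then pitch, then yaw

# Hypothetical observer attitude (radians).
R_first = attitude_to_rotation(0.10, -0.05, 0.02)
```

Any valid construction must yield a proper rotation: orthonormal with determinant +1.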
[0096] The processor 12 may also be configured to obtain an attitude-angle parameter of the observer.
[0097] The second rotation matrix may be determined according to measurement parameters obtained from an inertial measurement unit connected to the camera. The processor 12 may be configured to obtain a second processing result by processing the first processing result according to the second rotation matrix.
[0098] In one embodiment, the processor 12 is used to obtain the measurement parameters from an inertial measurement unit connected to the camera, and the processor 12 is also used to determine the second rotation matrix according to the measurement parameters. In some other embodiments, the processor 12 may be configured to obtain the second rotation matrix from an inertial measurement unit connected to the camera, where the second rotation matrix is determined by the inertial measurement unit according to the measurement parameters.
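A simplified sketch of how a second rotation matrix might be formed from inertial-measurement-unit readings; the gyroscope-only integration, the sample values, and the time step are all assumptions for illustration, not the patent's specified procedure:

```python
import numpy as np

def rodrigues(axis_angle):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def integrate_gyro(omegas, dt):
    """Integrate angular-velocity samples (rad/s) over fixed steps dt into
    one rotation matrix; a simplified stand-in for IMU processing."""
    R = np.eye(3)
    for w in omegas:
        R = rodrigues(np.asarray(w, dtype=float) * dt) @ R
    return R

# Two hypothetical gyroscope samples, both about the z-axis.
samples = [[0.0, 0.0, 0.5], [0.0, 0.0, 0.5]]
R_second = integrate_gyro(samples, dt=0.01)
```

Either the processor or the inertial measurement unit itself could run this kind of integration, matching the two embodiments described above.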
[0099] The camera imaging model includes any one of a pinhole imaging model, an isometric rectangular model, a stereo imaging model, a fisheye lens model, or a wide-angle lens model.
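The listed models differ in how an incident angle θ maps to an image radius r. As an illustration (the focal length is assumed, and these are the textbook radial forms rather than formulas quoted from the patent), the pinhole model uses r = f·tan(θ), while the equidistant fisheye-style model uses r = f·θ:

```python
import numpy as np

F = 1000.0  # assumed focal length in pixels

def radius_pinhole(theta):
    """Pinhole imaging model: r = f * tan(theta)."""
    return F * np.tan(theta)

def radius_equidistant(theta):
    """Equidistant (fisheye-style) model: r = f * theta."""
    return F * theta

theta = np.deg2rad(30.0)           # incident angle of 30 degrees
r_pin = radius_pinhole(theta)      # grows without bound as theta -> 90 deg
r_fish = radius_equidistant(theta)
```

Whichever model is chosen determines the concrete form of the two-dimension to three-dimension conversion and its inverse mapping.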
[0100] The image processing apparatus provided by the present disclosure may be used to implement the technical solutions of the present disclosure.
[0101] It should be noted that division of modules in the present disclosure is schematic, and is only a type of division based on logical functions. In actual implementations, modules may be divided in other ways. In one embodiment, all functional modules may be integrated into an integrated processing module. In some other embodiments, each functional module may separately exist physically, or two or more functional modules may be integrated into one integrated processing module. The functional modules may be implemented in a form of hardware or software, and the integrated processing modules may also be implemented in a form of hardware or software.
[0102] When the integrated processing module is implemented in a form of software, and sold or used as an independent product, the integrated processing module may be stored in a non-transitory computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure essentially, or the part of the technical solutions that contributes to existing technologies, or all or part of the technical solutions, may be embodied in a form of a software product. The software product may be stored in a storage medium and may include several instructions, such that a computer device (which may be a personal computer, a server, or a network device) or a processor may perform all or part of the steps of the image processing method provided by the present disclosure. The storage medium may include any medium that may be used to store program code, such as a USB flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
[0103] The embodiments of the present disclosure may be implemented in whole or in part by one or a combination of software, hardware, or firmware. When implemented by software, the embodiment may be implemented in whole or in part in a form of a computer program product. The computer program product may include one or more computer instructions. When the computer program instructions are loaded and executed on a computer, processes or functions according to the embodiment may be wholly or partially realized.
[0104] The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a non-transitory computer-readable storage medium, or may be transmitted from one non-transitory computer-readable storage medium to another non-transitory computer-readable storage medium. For example, the computer instructions may be transmitted from a website site, computer, server, or data center to another website site, computer, server, or data center via a wired approach (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless approach (for example, infrared, wireless, microwave, etc.).
[0105] The non-transitory computer-readable storage medium may be any usable medium that may be accessed by a computer, or a data storage device such as a server, a data center, or the like that includes one usable medium or a plurality of usable media that are integrated. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).
[0106] In the present disclosure, division of the functional modules is exemplary, and is for a purpose of description convenience and brevity only. Those skilled in the art may understand that, functions in the present disclosure may be allocated to different functional modules according to practical applications. That is, an internal structure of an image processing apparatus provided by the present disclosure may be divided into different functional modules, such that all or part of the functions may be achieved. For specific working process of the apparatus, references may be made to processes of corresponding embodiments in the present disclosure, and details are not described herein again.
[0107] Accordingly, the technical solutions of the present disclosure may have the following advantages. The image processing method and apparatus provided by the present disclosure may obtain a first processing result by performing a two-dimension to three-dimension conversion operation on two-dimensional coordinate points of an acquired input image. A second processing result may be obtained by processing the first processing result, according to at least one of a first rotation matrix or a second rotation matrix. The second processing result may be mapped to a two-dimensional image coordinate system, and an output image may thus be obtained. Accordingly, rapid processing of the input image may be realized, such that at least two operations of distortion correction, virtual reality processing and electronic image stabilization may be completed. As such, calculation complexity may be reduced, calculation time may be shortened, and image processing efficiency may be improved.
[0108] Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, but do not limit the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the present disclosure.