Method and system for generating a surround view
10075634 · 2018-09-11
CPC classification
B60R2300/306; B60R2300/303; G08G1/168; B60R2300/607; G06T3/4038; B60R2300/60; H04N13/282; B60R2300/602; B60R2300/30; B60R2300/00; H04N13/243
International classification
B60R11/00
Abstract
A method and system for generating a surround view are provided. The method may include: establishing a surround surface; obtaining a plurality of images of surroundings; and projecting the images onto the surround surface based on a projection relationship between points on the surround surface and pixels on the images to generate a surround view, where the projection relationship may change with heights of the points on the surround surface. An improved projection effect may be obtained.
Claims
1. A method for generating a surround view, comprising: establishing a surround surface; obtaining one or more images of surroundings; and generating a surround view by projecting the one or more images onto the surround surface based on a projection relationship between a point on the surround surface and a pixel on the images, wherein the projection relationship changes with a height of the point on the surround surface from a bottom surface of the surround surface, and wherein the projection relationship is derived at least by applying a translation matrix from a world coordinate system to a camera coordinate system, where the translation matrix used for each projected pixel is weighted differently based on a height of the projected pixel, wherein the projection relationship is obtained based on an equation: [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + W·T, where X_w, Y_w, and Z_w are world coordinates of the point on the surround surface, X_c, Y_c, and Z_c are camera coordinates of the point in the camera coordinate system, R is a rotation matrix from the world coordinate system to the camera coordinate system, T is the translation matrix, and W is a weighting factor changing with the height of the point.
2. The method according to claim 1, wherein the weighting factor W is equal to 1 when the height of the point relative to the bottom surface or a lowest tangent plane of the surround surface is equal to 0.
3. The method according to claim 1, wherein the weighting factor W is equal to 0 when the height of the point is greater than a height of the origin of the world coordinate system from the bottom surface of the surround surface.
4. The method according to claim 1, wherein the weighting factor W is greater than 0 and less than 1 when the height of the point is between 0 and the height of the origin of the world coordinate system.
5. The method according to claim 4, wherein the weighting factor W decreases as the height of the point increases.
6. A system for generating a surround view, comprising: a processing device, configured to: establish a surround surface, obtain one or more images of surroundings, and project the one or more images onto the surround surface based on a projection relationship between a point on the surround surface and a pixel on the images to generate a surround view, where the projection relationship changes with a height of the point on the surround surface from a bottom surface of the surround surface, wherein the projection relationship is derived at least by applying a translation matrix from a world coordinate system to a camera coordinate system, where the translation matrix used for each projected pixel is weighted differently based on a height of the projected pixel; and a display device for showing the surround view, wherein the projection relationship between the images and the surround surface is obtained based on an equation: [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + W·T, where X_w, Y_w, and Z_w are world coordinates of the point on the surround surface, X_c, Y_c, and Z_c are camera coordinates of the point in the camera coordinate system, R is a rotation matrix from the world coordinate system to the camera coordinate system, T is the translation matrix, and W is a weighting factor changing with the height of the point.
7. The system according to claim 6, wherein the weighting factor W is equal to 1 when the height of the point relative to a lowest tangent plane of the surround surface is equal to 0.
8. The system according to claim 6, wherein the weighting factor W is equal to 0 when the height of the point is greater than a height of the origin of the world coordinate system from the bottom surface of the surround surface.
9. The system according to claim 6, wherein the weighting factor W is greater than 0 and less than 1 when the height of the point is between 0 and the height of the origin of the world coordinate system.
10. The system according to claim 9, wherein the weighting factor W decreases as the height of the point increases.
11. A system for generating a surround view of an object, comprising: one or more cameras for acquiring one or more images of surroundings of the object, and a processing device, configured to: establish a surround surface of the object; and generate a surround view by projecting the one or more images onto the surround surface based on a projection relationship between a point on the surround surface in a world coordinate system and a pixel on the images in a camera coordinate system, wherein the projection relationship includes a rotation matrix and a translation matrix, and the translation matrix is weighted based on a height of the point on the surround surface from a bottom surface of the surround surface, wherein the projection relationship is derived at least by applying the translation matrix from the world coordinate system to the camera coordinate system, wherein the projection relationship between the images and the surround surface is obtained based on an equation: [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + W·T, where X_w, Y_w, and Z_w are world coordinates of the point on the surround surface, X_c, Y_c, and Z_c are camera coordinates of the point in the camera coordinate system, R is the rotation matrix, T is the translation matrix, and W is a weighting factor changing with the height of the point.
12. The system according to claim 11, wherein the weighting factor W is equal to 1 when the height of the point relative to the bottom surface or a lowest tangent plane of the surround surface is equal to 0.
13. The system according to claim 11, wherein the weighting factor W is equal to 0 when the height of the point is greater than a height of the origin of the world coordinate system from the bottom surface of the surround surface.
14. The system according to claim 11, wherein the weighting factor W decreases as the height of the point increases when the height of the point is between 0 and the height of the origin of the world coordinate system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
DETAILED DESCRIPTION
(10) In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.
(12) In S101, establishing a surround surface.
(13) A surround surface is a simulated 3D surface with a specific shape that at least encompasses an object. The object may be a vehicle, a detector, or the like.
(15) It should be noted that the specific configuration of the surround surface 200, for example, its size, position, shape, and the like, may be set based on practical requirements. In some embodiments, the surround surface 200 may have a bottom plane 201 (its lowest tangent plane) that coincides with the ground plane 400.
(16) The surround surface 200 may be established in a world coordinate system. Therefore, points on the surround surface 200 may have world coordinates, which may be used in the calculation for projection. In some embodiments, the origin may be a center of a vehicle, or set at a position of a driver in the vehicle. One axis of the world coordinate system may be set in parallel with the ground plane.
(17) In S103, obtaining a plurality of images of surroundings.
(18) Images of surroundings are images that capture the scenery around the vehicle 300. In some embodiments, the images may be captured by a plurality of cameras oriented in different directions. In some embodiments, the cameras may be fisheye cameras with a field of view of about 190°; to cover the full surroundings, at least two fisheye cameras are needed, preferably, but not limited to, four.
(19) In S105, projecting the images onto the surround surface 200 based on a projection relationship between points on the surround surface 200 and pixels on the images, where the projection relationship may change with heights of the points on the surround surface 200.
(22) In some embodiments, extrinsic transformation and intrinsic transformation may be applied to establish a projection relationship between points on the surround surface 200 and pixels on the images.
(23) In existing solutions, extrinsic transformation may be performed based on Equation (1):
(24) [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + T  (1)
where X_w, Y_w, and Z_w are world coordinates of a point on the surround surface 200; X_c, Y_c, and Z_c are camera coordinates of the point in a corresponding camera coordinate system; R is the rotation matrix from the world coordinate system to the camera coordinate system; and T is the translation matrix from the world coordinate system to the camera coordinate system.
(25) The camera coordinate system may be established based on the position of the camera's optical center and the direction of its optical axis. The rotation matrix, the translation matrix, and the configuration of the camera coordinate system are well known in the art and will not be described in detail hereunder.
(26) Therefore, the point's world coordinates may be transformed into camera coordinates. Thereafter, in the intrinsic transformation, the camera coordinates may be transformed into image coordinates, which depend on the camera's intrinsic parameters such as focal length. As a result, a pixel corresponding to the point may be identified in the image and then projected onto the point.
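The two-step transformation described above can be sketched in a few lines of code. This is an illustrative sketch only: it uses a plain pinhole intrinsic model with hypothetical parameters (fx, fy, cx, cy), whereas the patent's fisheye cameras would use a distortion model in the intrinsic step.

```python
import numpy as np

def world_to_image(p_world, R, T, fx, fy, cx, cy):
    """Project a world-coordinate point to image coordinates.

    Extrinsic step per Equation (1): camera = R @ world + T.
    Intrinsic step: plain pinhole model (fisheye distortion omitted).
    """
    p_cam = R @ p_world + T                # extrinsic: world -> camera
    x, y, z = p_cam
    u = fx * x / z + cx                    # intrinsic: camera -> image
    v = fy * y / z + cy
    return np.array([u, v])

# Toy example: identity rotation, camera translated 1 unit along Z.
R = np.eye(3)
T = np.array([0.0, 0.0, 1.0])
uv = world_to_image(np.array([0.5, 0.0, 1.0]), R, T,
                    fx=800, fy=800, cx=640, cy=360)
```

Once the image coordinates (u, v) are known, the pixel at that location is sampled and drawn onto the corresponding point of the surround surface.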
(27) However, based on the extrinsic and intrinsic transformation, distortion may occur.
(28) Specifically, the influence of the translation matrix T will be illustrated with reference to the drawings.
(29)
where N stands for a distance from the origin of the world coordinate system to the optical center of the front camera 310, i.e., the origin of the camera coordinate system.
(30) The surround view is intended to simulate a human's field of sight. Ideally, a point B on the ground plane 400 should be projected onto a point A on the surround surface 200, because the origin of the world coordinate system, the point A, and the point B lie on a same straight line. Therefore, the pixel C on an image 311 of the front camera 310 that corresponds to the point B should be projected onto the point A. However, based on conventional Equation (1), a different pixel C′ on the image 311, corresponding to a different point B′ on the ground plane 400, is projected onto the point A, because the camera's optical center, the pixel C′, the point A, and the point B′ lie on a same straight line.
(31) For a point D on the lowest tangent plane of the surround surface 200, ideally, a point E on the ground plane 400 should be shown at the point D. Since the lowest tangent plane is normally configured to coincide with the ground plane 400, the point D and the point E are at the same position. As a result, the pixel F on the image corresponding to the point E is correctly projected onto the point D based on Equation (1).
(32) In light of the above, the ground region near the vehicle 300, within the region coincident with the lowest tangent plane of the surround surface 200, may be correctly projected in the surround view. However, scenery farther away may be distorted.
(33) Therefore, the projection relationship needs adjustment. In some embodiments, the projection relationship may change with the heights of the points on the surround surface 200. The height is calculated from the bottom surface of the surround surface 200, or from the lowest tangent plane if the surround surface has no bottom surface.
(34) In some embodiments, the projection relationship may be calculated based on Equation (3):
(35) [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + W·T  (3)
where a weighting factor W, which may change with heights of the points on the surround surface 200, is introduced into Equation (3) to adjust the projection relationship.
(36) Normally, higher regions of the surround surface 200 show scenery that is far away from the vehicle 300, and the translation matrix T has less influence on farther scenery. Therefore, in some embodiments, the weighting factor W may decrease as the height of the point on the surround surface 200 increases.
(37) Specifically, in some embodiments, the weighting factor W may be equal to 0 when the height of the point is greater than the height of the origin of the world coordinate system. Normally, the height of the origin may be configured to be the same as the installation height of the camera. Therefore, these points (higher than the origin of the world coordinate system) generally stand for scenery high up, for example, the sky or buildings far away. The translation matrix T has essentially no influence on such scenery, so the weighting factor W may be set to zero.
(38) In some embodiments, the weighting factor W may be equal to 1 when the height of the point is 0. These points (with heights being 0) generally stand for the ground region near the vehicle 300. The translation matrix T has the greatest influence on the points. Therefore, the weighting factor W may be determined to be 1.
(39) In some embodiments, the weighting factor W may be greater than 0 and less than 1 when the height of the point is between 0 and the height of the origin. These points may stand for obstacles around the vehicle 300. In some embodiments, the weighting factor W may be a constant for these points. In other embodiments, the weighting factor W may decrease as the height of the point increases; for example, the weighting factor W may be calculated based on Equation (4):
(40) W = (H_0 − H_1)/H_0  (4)
where H_0 stands for the height of the origin of the world coordinate system and H_1 stands for the height of the point on the surround surface 200. Therefore, for points higher than the bottom surface of the surround surface 200 but lower than the origin of the world coordinate system, the projection relationship may be obtained based on Equation (5):
(41) [X_c, Y_c, Z_c]^T = R·[X_w, Y_w, Z_w]^T + ((H_0 − H_1)/H_0)·T  (5)
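The piecewise behavior of W described in paragraphs (37) to (39) can be sketched as follows. This is an illustrative sketch under an added assumption not stated in the source: the world Z axis points up and Z = 0 lies on the bottom surface, so a point's height is simply its Z coordinate.

```python
import numpy as np

def weight(h_point, h_origin):
    """Weighting factor W per Equation (4) and its boundary cases:
    W = 1 at height 0, W = 0 at or above the origin height H_0,
    and W = (H_0 - H_1) / H_0 in between."""
    if h_point <= 0.0:
        return 1.0
    if h_point >= h_origin:
        return 0.0
    return (h_origin - h_point) / h_origin

def world_to_camera_weighted(p_world, R, T, h_origin):
    """Height-adjusted extrinsic transform per Equation (3):
    camera = R @ world + W * T, with W taken from the point's
    height (assumed here to be the world Z coordinate)."""
    W = weight(p_world[2], h_origin)
    return R @ p_world + W * T

# A point halfway up to the origin height gets W = 0.5, so only
# half of the translation T is applied.
p_cam = world_to_camera_weighted(np.array([0.0, 0.0, 0.75]),
                                 np.eye(3),
                                 np.array([1.0, 0.0, 0.0]),
                                 h_origin=1.5)
```

Points on the ground thus keep the full translation (matching Equation (1) near the vehicle), while points above the origin height ignore it entirely.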
(42) By employing the above described method, distortion may be reduced, especially for the ground plane 400 and obstacles near the vehicle 300.
(43) A derivation process of Equation (4) is also provided with reference to the drawings.
(45) Suppose the point A translates a distance M along the X axis to a virtual point A′ as a result of the transformation based on Equation (3). Putting the world coordinates of the point A, the camera coordinates of the virtual point A′, the rotation matrix, and the translation matrix into Equation (3) yields Equation (6).
(46)
(47) Solving Equation (6) with a constraint, Equation (7), obtained based on the geometric relationship of the lines in the drawings,
(48)
Equation (4) may be calculated.
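Equations (6) and (7) are not recoverable from this text, but the form of Equation (4) can be motivated by a simple similar-triangles sketch (an illustrative reading, not the patent's own derivation):

```latex
% A view ray from the world origin O (at height H_0) to a ground
% point B at horizontal distance d crosses height H_1 at offset:
\[
  d(H_1) \;=\; d \cdot \frac{H_0 - H_1}{H_0}
\]
% The parallax to be compensated by T thus shrinks linearly from d
% (at height 0) to 0 (at height H_0), matching the weight
\[
  W \;=\; \frac{H_0 - H_1}{H_0} \tag{4}
\]
```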
(50) According to one embodiment of the present disclosure, a system for generating a surround view is provided. The system may include: a plurality of cameras adapted for capturing images of surroundings; a processing device configured to conduct S101 to S105 of method 100 to generate a surround view; and a display device adapted for showing the surround view. The system may be mounted on a vehicle 300, a detector, or the like.
(51) There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally a design choice representing cost vs. efficiency tradeoffs. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware implementation; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
(52) While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.