Method and system for handling images
10861139 · 2020-12-08
Assignee
Inventors
CPC classification
B60R2300/802
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/10
PERFORMING OPERATIONS; TRANSPORTING
G06T3/4038
PHYSICS
International classification
G06T3/40
PHYSICS
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method performed by a vehicle system for handling images of surroundings of a vehicle. An image of surroundings of the vehicle is obtained. The image is obtained from at least one image capturing device mounted in or on the vehicle, and the image capturing device comprises a fisheye lens. At least a part of distortions in the image is corrected to obtain a corrected image. The corrected image is rotationally transformed using a first rotational transformation to obtain a first transformed image. The corrected image is rotationally transformed using a second rotational transformation to obtain a second transformed image. The first and second rotational transformations are different from each other, and the first and second transformed images are consecutive images.
Claims
1. A method performed by a vehicle system for handling images of surroundings of a vehicle, the method comprising: obtaining an image of surroundings of the vehicle, wherein the image is obtained from at least one image capturing device mounted in or on the vehicle, wherein the image capturing device comprises a fisheye lens; correcting at least a part of distortions in the image to obtain a corrected image; rotationally transforming the corrected image using a first rotational transformation to obtain a first transformed image; and rotationally transforming the corrected image using a second rotational transformation to obtain a second transformed image, wherein the first and second rotational transformations are different from each other, wherein an amount of rotation applied in the first and second rotational transformations is such that it appears that the first and second transformed images are captured by separate image capturing devices facing different directions or having different orientations, and wherein the first and second transformed images are consecutive images.
2. The method according to claim 1, further comprising: removing redundant overlapping areas from at least one of the first and second transformed images.
3. The method according to claim 1, wherein the first transformed image is mapped on one planar surface and the second transformed image is mapped on another planar surface.
4. The method according to claim 1, further comprising: providing the first and second transformed images as input to another vehicle system for further processing.
5. The method according to claim 1, further comprising: displaying at least one of the first and second transformed images to a user of the vehicle.
6. The method according to claim 1, wherein the image is of at least 180 degrees of the surroundings of the vehicle.
7. The method according to claim 1, wherein the image capturing device is a fisheye camera.
8. A vehicle system for handling images, wherein the vehicle system is adapted to: obtain an image of surroundings of the vehicle, wherein the image is obtained from at least one image capturing device mounted in or on the vehicle, wherein the image capturing device comprises a fisheye lens; correct at least a part of distortions in the image to obtain a corrected image; rotationally transform the corrected image using a first rotational transformation to obtain a first transformed image; and rotationally transform the corrected image using a second rotational transformation to obtain a second transformed image, wherein the first and second rotational transformations are different from each other, wherein an amount of rotation applied in the first and second rotational transformations is such that it appears that the first and second transformed images are captured by separate image capturing devices facing different directions or having different orientations, and wherein the first and second transformed images are consecutive images.
9. A vehicle comprising the vehicle system of claim 8.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The embodiments herein will now be further described in more detail in the following detailed description by reference to the appended drawings illustrating the embodiments and in which:
(12) The drawings are not necessarily to scale and the dimensions of certain features may have been exaggerated for the sake of clarity. Emphasis is instead placed upon illustrating the principle of the embodiments herein.
DETAILED DESCRIPTION
(13) The embodiments herein relate to a way of dewarping and undistorting images (e.g. 180-degree images, 360-degree images) that enhances both the visualization of the view for a human vehicle user (e.g. a driver) and the performance of general computer vision and machine learning algorithms.
(15) The vehicle 100 comprises at least one image capturing device 105.
(16) Recall that a fisheye lens may produce strong visual distortion in the obtained images; it is intended to create a fisheye image, i.e. a wide panoramic or hemispherical image. Fisheye lenses achieve extremely wide angles of view in the obtained images. Instead of images with straight lines of perspective (rectilinear images), as obtained by rectilinear lenses, fisheye lenses use a special mapping, which gives images a characteristic convex non-rectilinear appearance.
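The two mappings can be contrasted with a small numeric sketch. It assumes the common equidistant fisheye model (r = f·θ) against the rectilinear model (r = f·tan θ); the description does not name a specific fisheye mapping, so this is an illustration only:

```python
import math

def rectilinear_radius(f, theta):
    """Image radius of a ray at angle theta under a rectilinear lens (r = f*tan(theta))."""
    return f * math.tan(theta)

def equidistant_radius(f, theta):
    """Image radius of the same ray under an equidistant fisheye lens (r = f*theta)."""
    return f * theta

# A ray 80 degrees off the optical axis, focal length normalized to 1:
theta = math.radians(80)
print(rectilinear_radius(1.0, theta))  # ~5.67: the radius explodes toward 90 degrees
print(equidistant_radius(1.0, theta))  # ~1.40: stays bounded, hence the wide field of view
```

The bounded fisheye radius is what lets a single sensor cover 180 degrees or more, at the cost of the convex non-rectilinear appearance described above.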
(17) A user 110 of the vehicle 100 may be referred to as a vehicle operator, a passenger, an occupant, a driver etc.
(18) The vehicle 100 comprises at least one vehicle system (not shown). One such vehicle system may be a system for handling images of surroundings of the vehicle 100. Another such system may be a processing system, a brake system, a steering system etc.
(19) The method performed by the vehicle system for handling images of surroundings of a vehicle 100, according to some embodiments will now be described with reference to the flowchart depicted in
(20) Step 301
(21) The vehicle system obtains an image 400 of surroundings of the vehicle 100. The image 400 is obtained from at least one image capturing device 105 mounted in or on the vehicle 100. The image capturing device 105 comprises a fisheye lens, and therefore the obtained image may be referred to as a fisheye image. The image 400 may be obtained by receiving it from the image capturing device 105. In another embodiment, the image capturing device 105 captures the image 400 and stores it in a memory, and the vehicle system then obtains the image 400 from the memory. The image may be obtained upon request from the vehicle system, on a regular basis, or continuously.
(22) The obtained image 400 comprises distortions. The image 400 may be of at least 180 degrees of the surroundings of the vehicle 100. The image capturing device 105 may be a fisheye camera.
(23) Step 302
(24) The vehicle system corrects at least a part of the distortions in the image 400 to obtain a corrected image.
(25) The correcting may also be referred to as dewarping or base dewarping. A base dewarp may be carried out by any appropriate camera calibration algorithm or model, executed by the vehicle system, that obtains the mapping from warped coordinates to dewarped coordinates, as well as the intrinsic parameters of the image capturing device 105. Dewarping may be described as correcting the obtained image 400 to reverse the effects of geometric distortions caused by the image capturing device 105, e.g. by the fisheye lens of the image capturing device 105.
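As an illustration only (the embodiments leave the calibration model open), a per-pixel base dewarp under an assumed equidistant fisheye model could be sketched as follows; the function name and the calibration matrix values are hypothetical:

```python
import numpy as np

def dewarp_point(u, v, K):
    """Map one pixel from an equidistant fisheye image (r = f*theta)
    to rectilinear (pinhole) coordinates (r = f*tan(theta)).
    Valid for rays less than 90 degrees off the optical axis."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x, y = (u - cx) / fx, (v - cy) / fy
    r = np.hypot(x, y)               # equals theta under the equidistant model
    if r < 1e-9:
        return (u, v)                # the principal point maps to itself
    scale = np.tan(r) / r            # rectilinear radius / fisheye radius
    return (cx + fx * x * scale, cy + fy * y * scale)

# Illustrative calibration matrix (values are not from the embodiments)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

print(dewarp_point(420, 240, K))  # the pixel moves outward: x-coordinate > 420
```

Note that tan(θ) diverges at 90 degrees, so mapping a full 180-degree fisheye view onto a single plane is impractical; this motivates the multiple rotated views produced in steps 303 and 304.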
(26) Step 303
(27) The vehicle system rotationally transforms the corrected image using a first rotational transformation to obtain a first transformed image 403.
(28) Step 304
(29) The vehicle system rotationally transforms the corrected image using a second rotational transformation to obtain a second transformed image 405. The first and second rotational transformations are different from each other. The first and second transformed images 403, 405 are consecutive images. The term consecutive images may refer to images that are following, sequential, serial, succeeding etc. For example, the first transformed image 403 and the second transformed image 405 are consecutive in that the first transformed image 403 represents the right part of the obtained image and the second transformed image 405 represents the left part of the obtained image. When the two transformed images 403, 405 are placed together, they may form one image which corresponds to the obtained image 400, but undistorted.
(30) The first transformed image 403 may be mapped on one planar surface and the second transformed image 405 may be mapped on another planar surface, as illustrated in
(31) Steps 303 and 304 may be referred to as a 2-fold mapping, where two rotational transformations are applied separately after the base dewarp. This generates two different views. The amount of rotation applied in the rotational transform is set such that the dewarped images look natural, as if they were captured by two cameras facing different directions or having different orientations. The structure of the 2-fold mapping is demonstrated in FIGS. 4a and 4b.
(32) Step 305
(33) The vehicle system may remove redundant overlapping areas from at least one of the first and second transformed images 403, 405.
(34) Step 305 may also be described as applying appropriate cropping to remove part of redundant overlapping areas between the first and second transformed images 403, 405. This step may also remove a part of highly distorted areas (usually at the edges of the views). Some overlapping areas may be preserved between the first and second transformed images 403, 405 to allow for potential stitching/porting of algorithm results between the first and second transformed images 403, 405.
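A minimal sketch of this cropping step (the overlap width and the number of preserved columns are illustrative choices, not values from the embodiments):

```python
import numpy as np

def crop_with_overlap(left_view, right_view, overlap, keep):
    """Crop two consecutive views that overlap by `overlap` columns,
    keeping `keep` shared columns to allow later stitching/porting
    of algorithm results between the views."""
    assert 0 <= keep <= overlap
    trim = (overlap - keep) // 2          # columns removed from each view's inner edge
    left_cropped = left_view[:, : left_view.shape[1] - trim]
    right_cropped = right_view[:, trim:]
    return left_cropped, right_cropped

# Two 100-column views sharing 20 columns; preserve 10 of the shared columns.
left = np.zeros((4, 100))
right = np.zeros((4, 100))
l, r = crop_with_overlap(left, right, overlap=20, keep=10)
print(l.shape[1], r.shape[1])  # 95 95
```

Trimming symmetrically also tends to discard the most distorted columns, which sit at the outer edges of each dewarped view.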
(35) The first and second transformed images 403, 405 (possibly also after removal of the redundant overlapping areas) may also be referred to as the resulting undistorted dewarped images. The resulting undistorted dewarped images provide human beings (especially the user 110 in the vehicle 100) with more natural and understandable views towards at least two directions defined by planar surfaces.
(36) Step 306
(37) The vehicle system may provide the first and second transformed images 403, 405 as input to another vehicle system for further processing. Such other vehicle system may be for example an autonomous driving system, a lane detection system, a vehicle detection system etc.
(38) The first and second transformed images 403, 405 allow for general machine learning/computer vision algorithms/models being applied by the vehicle system or by other vehicle systems. Here, general algorithms/models may refer to those designed for and/or trained on images that are usually captured by normal cameras, i.e. non-fisheye/non-omnidirectional cameras.
(39) Step 307
(40) The vehicle system may display at least one of the first and second transformed images 403, 405 to a user 110 of the vehicle 100. In one embodiment, all transformed images 403, 405 may be displayed on a display in the vehicle 100 at the same time. In another embodiment, one image 403, 405 may be displayed at a time, and the user 110 may then be able to switch between the different images.
(41) The method in
(44) The original images before the 3-fold dewarp (before steps 302-307) are shown in
(45) In both
(46) The embodiments herein aim at achieving undistorted images without losing much field-of-view of the image capturing device 105, so that they can be better used by both human beings (e.g. users 110) and vehicle systems (AI/machine learning/computer vision algorithms for ADAS and AD).
(47) The embodiments relate to computer vision for a vehicle 100, more specifically to an image processing method that benefits both the human driver and the computer systems of autonomous driving vehicles.
(48) Steps 302, 303 and 304 will now be described in more detail using three rotational transformations as an example:
(49) Step 302
(50) The base dewarp may be described as the process of estimating a set of intrinsic parameters K, ξ, and D, as well as extrinsic parameters r and t, from a set of images containing a calibration pattern such as a chessboard. Here, K is a generalized image capturing device matrix, ξ is a single value parameter, D comprises the distortion coefficients, and r and t characterize rotations and translations between the set of images and the image capturing device 105, respectively. K, ξ, and D are used to undistort the images taken by the image capturing device 105. The image capturing device matrix for the rectified images, K_new, is usually a scaled identity matrix.
(51) Steps 303 and 304
(52) The rotational transform is applied after the base dewarp in step 302, by multiplying a rotational matrix R with the image capturing device matrix for rectified images K_new:
K_R = K_new·R
(53) The new image capturing device matrix K_R replaces K_new, and is used together with the previous K, ξ and D to obtain the rotated views of the undistorted images (i.e. the first, second and third transformed images 403, 405, 408).
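The matrix product above can be sketched in a few lines of Python with NumPy; the calibration values and the rotation angle are illustrative only:

```python
import numpy as np

def rotated_camera_matrix(K_new, R):
    """K_R = K_new * R: the camera matrix used to render the rotated view."""
    return K_new @ R

# Illustrative rectified camera matrix (scaled identity plus principal point)
K_new = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])

a = 0.3  # rotation around the vertical (y) axis, in radians
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])

K_R = rotated_camera_matrix(K_new, R)

# The original optical axis (0, 0, 1) now lands right of the image centre:
p = K_R @ np.array([0.0, 0.0, 1.0])
print(p[0] / p[2])  # 400*tan(0.3) + 320, roughly 443.7
```

Replacing K_new by K_R in the undistortion thus re-aims the virtual camera without touching the distortion parameters themselves.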
(54) Here, the rotational transform can be decomposed into three rotational transforms around the x (horizontal), y (vertical), and z (optical axis of the fisheye lens in the image capturing device 105) axes.
(55) For the left fold among the 3-fold dewarps (the angles are in radians):
θ_x = 0
θ_y ∈ [0.65, 0.95]
θ_z ∈ [0.4, 0.7]
For the center fold among the 3-fold dewarps (the angles are in radians):
θ_x ∈ [0, 0.3]
θ_y = 0
θ_z = 0
For the right fold among the 3-fold dewarps (the angles are in radians):
θ_x = 0
θ_y ∈ [-0.95, -0.65]
θ_z ∈ [-0.7, -0.4]
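Under the x/y/z decomposition above, a per-fold rotation matrix can be assembled as follows. The composition order z·y·x is an assumption (the text does not fix one), as is the choice of concrete angles inside the quoted ranges:

```python
import numpy as np

def rot_x(a):
    """Rotation by angle a (radians) around the horizontal x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation around the vertical y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Rotation around the optical (z) axis of the fisheye lens."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def fold_rotation(tx, ty, tz):
    """Compose the three per-axis rotations into one fold rotation
    (z * y * x order; an assumed convention)."""
    return rot_z(tz) @ rot_y(ty) @ rot_x(tx)

# Example angles picked from inside the quoted ranges (radians)
R_left = fold_rotation(0.0, 0.8, 0.55)
R_center = fold_rotation(0.15, 0.0, 0.0)
R_right = fold_rotation(0.0, -0.8, -0.55)
```

Each of these matrices would then be multiplied into K_new as in the K_R = K_new·R formula above to produce the left, center, and right views.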
(56) To perform the method steps shown in
(57) The vehicle system is adapted to, e.g. by means of an obtaining module 901, obtain an image of surroundings of the vehicle 100. The image is obtained from at least one image capturing device 105 mounted in or on the vehicle 100. The image capturing device 105 comprises a fisheye lens. The image may be of at least 180 degrees of the surroundings of the vehicle 100. The image capturing device 105 may be a fisheye camera. The obtaining module 901 may also be referred to as an obtaining unit, an obtaining means, an obtaining circuit, means for obtaining etc. The obtaining module 901 may be, or may be comprised in, a processor 903 of the vehicle system. In some embodiments, the obtaining module 901 may be referred to as a receiving module.
(58) The vehicle system is adapted to, e.g. by means of a correcting module 905, correct at least a part of distortions in the image to obtain a corrected image. The correcting module 905 may also be referred to as a correcting unit, a correcting means, a correcting circuit, means for correcting etc. The correcting module 905 may be, or may be comprised in, the processor 903 of the vehicle system.
(59) The vehicle system is adapted to, e.g. by means of a transforming module 908, rotationally transform the corrected image using a first rotational transformation to obtain a first transformed image. The transforming module 908 may also be referred to as a transforming unit, a transforming means, a transforming circuit, means for transforming etc. The transforming module 908 may be, or may be comprised in, the processor 903 of the vehicle system.
(60) The vehicle system is adapted to, e.g. by means of the transforming module 908, rotationally transform the corrected image using a second rotational transformation to obtain a second transformed image. The first and second rotational transformations are different from each other, and the first and second transformed images are consecutive images. The first transformed image may be mapped on one planar surface and the second transformed image may be mapped on another planar surface.
(61) The vehicle system may be adapted to, e.g. by means of a removing module 910, remove redundant overlapping areas from at least one of the first and second transformed images. The removing module 910 may also be referred to as a removing unit, a removing means, a removing circuit, means for removing etc. The removing module 910 may be, or may be comprised in, the processor 903 of the vehicle system.
(62) The vehicle system may be adapted to, e.g. by means of a providing module 913, provide the first and second transformed images as input to another vehicle system for further processing. The providing module 913 may also be referred to as a providing unit, a providing means, a providing circuit, means for providing etc. The providing module 913 may be, or may be comprised in, the processor 903 of the vehicle system. In some embodiments, the providing module 913 may be referred to as a transmitting module.
(63) The vehicle system may be adapted to, e.g. by means of a displaying module 915, display at least one of the first and second transformed images to a user 110 of the vehicle 100. The images may be displayed on a display in the vehicle 100. The displaying module 915 may also be referred to as a displaying unit, a displaying means, a displaying circuit, means for displaying etc. The displaying module 915 may be, or may be comprised in, the processor 903 of the vehicle system.
(64) In some embodiments, the vehicle system comprises the processor 903 and a memory 918. The memory 918 comprises instructions executable by the processor 903. The memory 918 comprises one or more memory units. The memory 918 is arranged to be used to store data, received data streams, power level measurements, images, parameters, distortion information, transformation information, vehicle information, vehicle surrounding information, threshold values, time periods, configurations, schedulings, and applications that, when executed by the vehicle system, perform the methods herein.
(65) A vehicle 100 may comprise the vehicle system as described above.
(66) The embodiments herein for handling images of surroundings of a vehicle 100 may be implemented through one or more processors, such as a processor 903, in the vehicle system arrangement depicted in
(67) A computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method described above. A carrier may comprise the computer program, and the carrier may be one of an electronic signal, optical signal, radio signal or computer readable storage medium.
(68) Those skilled in the art will also appreciate that the obtaining module 901, the correcting module 905, the transforming module 908, the removing module 910, the providing module 913 and the displaying module 915 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in a memory, that when executed by the one or more processors such as the processor 903 perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
(69) The following terminologies are used interchangeably herein: dewarping, undistortion, and mapping. They all describe the process of some geometric transformation of an image, usually from the two-dimensional (2D) images captured by the image capturing device 105 to at least two planar images that do not have distortion effects introduced by the image capturing device 105.
(70) Computer vision and machine learning algorithms refer to general algorithms that use images captured by the image capturing device 105 as input and, based on machine learning/artificial intelligence technology, output decisions that are relevant for ADAS and/or AD. Some examples are lane detection, pedestrian detection, vehicle detection, distance measurement, etc.
(71) Directions as used herein, e.g. horizontal, vertical, lateral, relate to when the vehicle system is mounted in the vehicle 100, which stands on flat ground. The vehicle system may be manufactured, stored, transported and sold as a separate unit. In that case, the directions may differ from when mounted in the vehicle 100.
(72) The embodiments herein are not limited to the above described embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the embodiments herein, which is defined by the appended claims. A feature from one embodiment may be combined with one or more features of any other embodiment.
(73) It should be emphasized that the term comprises/comprising when used in this specification is taken to specify the presence of stated features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. It should also be noted that the words a or an preceding an element do not exclude the presence of a plurality of such elements. The terms consisting of or consisting essentially of may be used instead of the term comprising.
(74) The term configured to used herein may also be referred to as arranged to, adapted to, capable of or operative to.
(75) It should also be emphasized that the steps of the methods defined in the appended claims may, without departing from the embodiments herein, be performed in another order than the order in which they appear in the claims.