FOVEATED STITCHING

20230216981 · 2023-07-06

    Abstract

    The present disclosure relates to a computer-implemented method for stitching images representing the surroundings of an automated vehicle into a stitched view and an image stitching system for an automated vehicle for use in said method. The method comprises the steps of: providing, by means of respective image capturing units, two images representing surroundings of the automated vehicle, wherein the two images share an overlapping region of the surroundings from different viewpoints of the respective image capturing units; determining an image transformation between the two images based on pre-calculated calibration information, or feature matching of discernible features of the surroundings visible in said two images; stitching the two images into a stitched view with a respective image seam between the two images based on said image transformation; displaying the stitched view to an operator of the automated vehicle, and receiving, by means of an operator input device, operator input data indicating the operator's viewpoint in the stitched view; determining a region of interest within the stitched view based on said operator input data; wherein the step of stitching the two images into a stitched view involves determining a set of stitching solutions between the two images and selecting a stitching solution that results in a stitched view with a stitching seam that is displaced a distance away from a point in the region of interest in a direction towards the outside of the region of interest.

    Claims

    1. A computer-implemented method for stitching images representing the surroundings of an automated vehicle into a stitched view, the method comprising the steps of: providing, by means of respective image capturing units, two images representing surroundings of the automated vehicle, wherein the two images share an overlapping region of the surroundings from different viewpoints of the respective image capturing units; determining an image transformation between the two images based on pre-calculated calibration information, or feature matching of discernible features of the surroundings visible in said two images; stitching the two images into a stitched view with a respective image seam between the two images based on said image transformation; displaying the stitched view to an operator of the automated vehicle, and receiving, by means of an operator input device, operator input data indicating the operator's viewpoint in the stitched view; determining a region of interest within the stitched view based on said operator input data; wherein the step of stitching the two images into a stitched view involves determining a set of stitching solutions between the two images and selecting a stitching solution that results in a stitched view with a stitching seam that is displaced a distance away from a point in the region of interest in a direction towards the outside of the region of interest.

    2. Method according to claim 1, wherein said point is a center point of said region of interest and said distance from said point is within the interval of 25% to 50% of the horizontal or vertical extension of the region of interest.

    3. Method according to claim 1, wherein the step of stitching the two images into a stitched view is based on the location of the region of interest within the stitched view so that when the two images are stitched together to form the stitched view, the respective image seam is displaced fully outside the region of interest.

    4. Method according to claim 1, further comprising a step of displaying the stitched view at a remote terminal.

    5. Method according to claim 1, wherein the stitched view within the region of interest is provided at a first resolution and the stitched view outside the region of interest is provided at a second resolution lower than the first resolution.

    6. Method according to claim 1, wherein the two images are selected from an image group of mutually different images, each provided by a respective image capturing unit, each image of the image group sharing an overlapping region of the surroundings from different viewpoints of the respective image capturing units, wherein the selection of which two images are to be stitched together to create the stitched view is determined so that when the two images are stitched together to form the stitched view, the respective image seam is displaced a distance away from a point in the region of interest in a direction towards the outside of the region of interest.

    7. Method according to claim 1, wherein the stitched view is stitched together using more than two images, wherein each adjacent image pair is selected so that when each adjacent image pair is stitched together to form the stitched view, the respective image seams are displaced a distance away from the point in the region of interest in a direction towards the outside of the region of interest.

    8. Method according to claim 7, wherein the number of images stitched together to form the stitched view is 3, 4, 5, 6, 7, 8, 9, 10 or more.

    9. Method according to claim 1, wherein the vertical and horizontal extension of the overlapping region between any two images is such that the region of interest does not extend across the boundary of any image of said any two images.

    10. Method according to claim 1, wherein at least a portion of a stitching seam of any two images is moved to an opposite horizontal or vertical side of the region of interest when the proximate side of the region of interest is within a predetermined horizontal or vertical distance to said at least a portion of said stitching seam.

    11. Method according to claim 10, wherein the region of interest is characterized by a horizontal extension and a vertical extension and said predetermined horizontal or vertical distance is between 1% and 10% of the horizontal extension or the vertical extension, respectively.

    12. Method according to claim 1, wherein the operator input device is either: a gaze tracking device; a VR headset; a pointing device such as a computer mouse, joystick, or the like; a motion sensing input device; or a voice recognition input device.

    13. An image stitching system for an automated vehicle, comprising a processing unit adapted to perform the computer-implemented method according to claim 1, at least two image capturing units configured to capture the respective at least two images, and an operator input device configured to receive operator input data indicating an operator's viewpoint of the automated vehicle's surroundings.

    14. The image stitching system according to claim 13, wherein the operator input device is either: a gaze tracking device; a VR headset; a pointing device such as a computer mouse, joystick, or the like; a motion sensing input device; or a voice recognition input device.

    15. The image stitching system according to claim 13, wherein two or more of the at least two image capturing units are members of a single image capturing device configured to capture images in at least two directions.
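
    By way of a non-limiting illustration of claim 5, the following Python sketch renders the region of interest at a first, full resolution while subsampling the remainder of the view to a second, lower resolution. The function name `foveate`, its arguments, and the row/column subsampling strategy are assumptions of this sketch, not implementation details disclosed herein.

```python
# Illustrative sketch only: full resolution inside the region of interest
# (ROI), every `factor`-th sample elsewhere. Operates on a 2D list of pixels.
def foveate(view, roi, factor=2):
    """`roi` is (top, bottom, left, right) in row/column indices.
    Rows and columns intersecting the ROI band are kept in full (first
    resolution); the rest keep only every `factor`-th sample (second,
    lower resolution)."""
    top, bottom, left, right = roi
    keep_rows = [r for r in range(len(view))
                 if top <= r < bottom or r % factor == 0]
    keep_cols = [c for c in range(len(view[0]))
                 if left <= c < right or c % factor == 0]
    return [[view[r][c] for c in keep_cols] for r in keep_rows]
```

    In a real system the lower-resolution periphery would typically be produced upstream in the image pipeline (e.g. by lower-resolution encoding) rather than by discarding samples after the fact.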

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0034] The invention is defined by the appended independent claims, with embodiments being set forth in the appended dependent claims, in the following description and in the drawings.

    [0035] The invention will in the following be described in more detail with reference to the enclosed drawings, wherein:

    [0036] FIG. 1 shows a schematic representation of the method according to one embodiment of the invention;

    [0037] FIG. 2 shows an illustration of a stitched view stitched together from two images according to one embodiment of the invention;

    [0038] FIG. 3 shows an illustration of some of the steps of the method according to one embodiment of the invention.

    DESCRIPTION OF EMBODIMENTS

    [0040] The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, like numbers refer to like elements.

    [0041] FIG. 1 shows a schematic representation of a computer-implemented method M100 according to one embodiment of the invention. The method M100 enables stitching of images 1, 2 representing the surroundings of an automated vehicle into a stitched view 3. The method M100 comprises a step of providing M101, by means of respective image capturing units, two images 1, 2 representing surroundings of the automated vehicle, wherein the two images 1, 2 share an overlapping region 1a, 2a of the surroundings from different viewpoints of the respective image capturing units. The method M100 comprises a step of determining M102 an image transformation between the two images 1, 2 based on pre-calculated calibration information, or feature matching of discernible features 6a, 6b, 6c of the surroundings visible in said two images 1, 2. The method M100 comprises a step of stitching M103 the two images 1, 2 into a stitched view 3 with a respective image seam 5 between the two images 1, 2 based on said image transformation. The method M100 comprises a step of displaying M104 the stitched view 3 to an operator of the automated vehicle. The method M100 comprises a step of receiving M105, by means of an operator input device, operator input data indicating the operator's viewpoint in the stitched view 3. The method M100 comprises a step of determining M106 a region of interest 4 within the stitched view 3 based on said operator input data. The method M100 is such that the step of stitching the two images 1, 2 into a stitched view 3 involves determining a set of stitching solutions between the two images 1, 2 and selecting a stitching solution that results in a stitched view 3 with a stitching seam 5 that is displaced a distance d away from a point 4c in the region of interest 4 in a direction towards the outside of the region of interest 4.
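
    By way of a non-limiting illustration, the seam-selection step of the method M100 can be reduced to choosing, from a set of candidate stitching solutions, one whose seam 5 lies outside the region of interest 4. In the following Python sketch a stitching solution is simplified to a candidate vertical seam position; the names `Roi` and `select_seam` are assumptions of this sketch and do not appear in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Roi:
    cx: float      # center point 4c, x coordinate in stitched-view pixels
    cy: float      # center point 4c, y coordinate
    half_w: float  # half of the ROI's horizontal extension
    half_h: float  # half of the ROI's vertical extension

def select_seam(candidate_seam_xs, roi):
    """From a set of stitching solutions (here reduced to vertical seam
    x-positions), pick one whose seam lies outside the region of interest,
    preferring the seam farthest from the ROI center point."""
    outside = [x for x in candidate_seam_xs
               if abs(x - roi.cx) > roi.half_w]  # seams clear of the ROI
    pool = outside if outside else candidate_seam_xs
    return max(pool, key=lambda x: abs(x - roi.cx))
```

    The vertical case is analogous, using `cy` and `half_h` for horizontal seams.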

    [0042] The automated vehicle may for instance be a car, a truck, a motorcycle, a construction vehicle, a train, a UAV or drone, an airplane, etc.

    [0043] The operator may be a user with some operative control over the automated vehicle or at least provided with means allowing the user to assume at least some operative control of the automated vehicle when desired.

    [0044] The operator input device may comprise a gaze tracking device; a VR headset; a pointing device such as a computer mouse, joystick, or the like; a motion sensing input device; or a voice recognition input device.

    [0045] By this method, the operator is provided with a stitched view where stitching seams are displaced away from a region of interest 4, preferably entirely outside the region of interest. This reduces the risk that the operator operates the automated vehicle, or allows it to carry out an operation, based on erroneous images caused by stitching artefacts. Safer driving is thus realized.

    [0046] The method is implemented to be executed on hardware that allows very fast updating of the stitched view. Preferably, the stitched view is updated at an updating frequency of 30-60 Hz, or more.

    [0047] FIG. 2 shows an illustration of a stitched view 3 stitched together from two images 1, 2 according to one embodiment of the invention. The images 1, 2 are provided by image capturing units which are arranged and oriented such that the images 1, 2 have overlapping regions 1a, 2a and respective non-overlapping regions 1b, 2b. When stitched together by the stitching method M100 herein disclosed, a stitching solution is found wherein the stitching seam 5 is displaced a distance d from a point 4c of the region of interest 4. The point 4c may be a center point of the region of interest 4. The distance d from said point 4c may be within the interval of 25% to 50% of the horizontal or vertical extension of the region of interest 4. The stitching solution is found by first determining an image transformation between the two images 1, 2. This may preferably be based on pre-calculated calibration information, which may involve the relative position and orientation of the respective image capturing units. Alternatively, it may be based on feature matching of discernible features 6a, 6b, 6c of the surroundings visible in said two images 1, 2.
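
    The displacement interval described above, with the distance d lying within 25% to 50% of the extension of the region of interest 4, can be expressed as a simple predicate. The following one-dimensional Python sketch is a non-limiting illustration; the function name and arguments are assumptions of this sketch.

```python
def seam_distance_ok(seam_x, roi_center_x, roi_extension):
    """Return True if the seam's horizontal distance d from the ROI center
    point 4c lies within 25% to 50% of the ROI's horizontal extension
    (the vertical case is analogous)."""
    d = abs(seam_x - roi_center_x)
    return 0.25 * roi_extension <= d <= 0.50 * roi_extension
```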

    [0048] FIG. 3 shows an illustration of some of the steps of the method M100 according to one embodiment of the invention. In this embodiment, the two images 1, 2 are selected from an image group 10 of mutually different images, each provided by a respective image capturing unit, wherein each image of the image group 10 shares an overlapping region of the surroundings from different viewpoints of the respective image capturing units. The selection of which two images 1, 2 are to be stitched together to create the stitched view 3 is determined so that when the two images 1, 2 are stitched together to form the stitched view 3, the respective image seam 5 is displaced a distance d away from a point 4c in the region of interest 4 in a direction towards the outside of the region of interest 4. The stitching solution is found by first determining an image transformation between the two images 1, 2. This may preferably be based on pre-calculated calibration information, which may involve the relative position and orientation of the respective image capturing units. Alternatively, it may be based on feature matching of discernible features 6a, 6b, 6c of the surroundings visible in said two images 1, 2.
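
    By way of a non-limiting illustration, the pair-selection step of FIG. 3 can be sketched as choosing the adjacent image pair whose overlap region, and hence whose seam, lies farthest from the region of interest 4 while its combined extent still covers it. The one-dimensional representation of images as horizontal extents and the function name `choose_pair` are assumptions of this sketch.

```python
def choose_pair(extents, roi_cx):
    """`extents` is a list of (left_edge, right_edge) horizontal extents in
    stitched-view coordinates, ordered left to right. Return the index i of
    the adjacent pair (i, i+1) that covers the ROI center and whose overlap
    midpoint (the natural seam location) is farthest from the ROI center."""
    best, best_dist = None, -1.0
    for i in range(len(extents) - 1):
        l0, r0 = extents[i]
        l1, r1 = extents[i + 1]
        lo, hi = max(l0, l1), min(r0, r1)
        if lo >= hi:
            continue  # no overlap: this pair cannot be stitched
        if not (min(l0, l1) <= roi_cx <= max(r0, r1)):
            continue  # pair does not cover the region of interest
        seam_mid = 0.5 * (lo + hi)
        dist = abs(seam_mid - roi_cx)
        if dist > best_dist:
            best, best_dist = i, dist
    return best
```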

    [0049] The image group subject to selection may comprise anywhere from 3 to 10 or more images.

    [0050] Further, although not shown, the stitched view 3 may be formed by stitching together more than two images 1, 2. The images may be stitched together in consecutive pairs of adjacent images in a horizontal direction and/or a vertical direction. The number of stitching seams thus depends on the number of images being stitched together. Each stitching seam 5 may be independently adjusted so as to displace it relative to the position of the region of interest 4. The number of images stitched together to form the stitched view 3 may be 3, 4, 5, 6, 7, 8, 9, 10 or more.
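
    The independent per-seam adjustment described above can, in the manner of claims 10 and 11, be illustrated by the following non-limiting Python sketch, in which any vertical seam falling inside the region of interest 4 or within a small margin of its nearer side is moved to just past the opposite side. The function name, its arguments, and the 5% margin (within the 1-10% interval of claim 11) are assumptions of this sketch.

```python
def adjust_seams(seam_xs, roi_left, roi_right, margin_frac=0.05):
    """Adjust each vertical seam position independently: a seam inside the
    ROI, or within `margin_frac` of the ROI's horizontal extension of its
    nearer side, is snapped to the opposite horizontal side of the ROI."""
    width = roi_right - roi_left
    margin = margin_frac * width
    adjusted = []
    for x in seam_xs:
        if roi_left - margin <= x <= roi_right + margin:
            # seam is inside the ROI or too close to one of its sides:
            # move it to the opposite horizontal side of the ROI
            if x <= (roi_left + roi_right) / 2:
                adjusted.append(roi_right + margin)  # near left -> right side
            else:
                adjusted.append(roi_left - margin)   # near right -> left side
        else:
            adjusted.append(x)  # already displaced outside: keep as-is
    return adjusted
```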

    [0051] In the drawings and specification, there have been disclosed preferred embodiments and examples of the invention and, although specific terms are employed, they are used in a generic and descriptive sense only and not for the purpose of limitation, the scope of the invention being set forth in the following claims.