A METHOD AND AN APPARATUS FOR COMPUTER-IMPLEMENTED ANALYZING OF A ROAD TRANSPORT ROUTE

20230033780 · 2023-02-02

    Abstract

    A method for analyzing a road transport route for transport of a heavy load from an origin to a destination includes i) obtaining images of the transport route, the images being taken by a drone or satellite camera system, where each of the images includes a different road section of the complete transport route and a peripheral area adjacent to the respective road section; ii) determining objects and their location in the peripheral area of the road section by processing each of the images with a first trained data driven model, where the images are fed as a digital input to the first trained data driven model and where the first trained data driven model provides the objects, if any, and their location as a digital output; and iii) determining critical objects from the number of determined objects along the road transport route.

    Claims

    1. A method for computer-implemented analyzing of a road transport route intended to be used for transport of a heavy load from an origin to a destination, the method comprising: i) obtaining a plurality of images of the road transport route, the plurality of images being images taken by a camera system installed on a drone or satellite, where each of the images comprises a different road section of a complete transport route and a peripheral area adjacent to the respective road section; ii) determining objects and a location of the objects in the peripheral area of the road section by processing each of the images by a first trained data driven model, where the images are fed as a digital input to the first trained data driven model and where the first trained data driven model provides the objects, if any, and the location of the objects as a digital output; and iii) determining critical objects from the objects along the road transport route, the critical objects being potential obstacles for road transportation due to overlap with the heavy load, by a simulation of the transport along the road transport route by processing at least those images, as relevant images, of the images having at least one determined object, using a second trained data driven model, where the relevant images are fed as a digital input to the second trained data driven model and the second trained data driven model provides the critical objects for further evaluation.

    2. The method according to claim 1, wherein the first trained data driven model and/or the second trained data driven model is a neural network.

    3. The method according to claim 1, wherein the first trained data driven model is based on semantic segmentation.

    4. The method according to claim 1, wherein the location of the objects is defined in a given coordinate system and/or by a given relation information defining a distance relative to the road section.

    5. The method according to claim 1, wherein a height of a determined object is determined by processing an additional image of the road section, the additional image being an image taken from a street-level perspective.

    6. The method according to claim 1, wherein steps i) to iii) are conducted for a plurality of different road transportation routes where the road transportation route having the least number of critical objects is provided for further evaluation.

    7. The method according to claim 1, wherein an information about the critical object and a location of the critical object is output via a user interface.

    8. An apparatus for computer-implemented analysis of a road transport route for transport of a heavy load from an origin to a destination, the apparatus comprising: a processor configured to perform the following steps: i) obtaining images of the road transport route, the images being images taken by a camera system installed on a drone or satellite, where each of the images comprises a different road section of a complete transport route and a peripheral area adjacent to the respective road section; ii) determining objects and a location of the objects in the peripheral area of the road section by processing each of the images by a first trained data driven model, where the images are fed as a digital input to the first trained data driven model and where the first trained data driven model provides the objects, if any, and the location of the objects as a digital output; and iii) determining critical objects from the objects along the road transport route, the critical objects being potential obstacles for road transportation due to overlap with the heavy load, by a simulation of the transport along the road transport route by processing at least those images, as relevant images, of the images having at least one determined object, using a second trained data driven model, where the relevant images are fed as a digital input to the second trained data driven model and the second trained data driven model provides the critical objects for further evaluation.

    9. The apparatus according to claim 8, wherein the apparatus is configured to perform a method for computer-implemented analyzing of the road transport route.

    10. A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement a method according to claim 1 when the program code is executed on a computer.

    Description

    BRIEF DESCRIPTION

    [0022] Some of the embodiments will be described in detail, with reference to the following figures, wherein like designations denote like members, wherein:

    [0023] FIG. 1 shows a schematic illustration of a road section as a part of a road transport route with objects in the peripheral area of the road section where at least some of the objects are critical with respect to the transport of a heavy load; and

    [0024] FIG. 2 is a schematic illustration of a controller for performing an embodiment of the invention.

    DETAILED DESCRIPTION

    [0025] FIG. 1 shows an image IM taken by a camera or camera system installed on a drone or satellite or satellite system. The image IM illustrates a road section RS of a road transport route TR intended to be used for transport of a heavy load HL from an origin (not shown) to a destination (not shown). The heavy load may, in particular, be a component of a wind turbine, such as a rotor blade or nacelle, or any other large component. The road section RS shown in the image IM consists of two curves, a right turn followed by a left turn. The direction of transport of the heavy load HL is indicated by arrow ToD. Peripheral areas PA close to the right turn comprise three different objects O, e.g. trees, masts, or walls. As can easily be seen in FIG. 1, the objects O additionally denoted with CO constitute critical objects CO, i.e. potential obstacles for road transportation due to overlap with the heavy load HL. Hence, further investigation by an analyst is necessary to determine whether the critical objects CO are insurmountable obstacles or obstacles which can be passed by the heavy load HL, e.g. after temporary removal.

    [0026] For analyzing the road transport route TR intended to be used for transport of the heavy load HL from the origin to the destination, a plurality of images IM has to be analyzed for potential critical objects. The method described in the following provides a simple way to detect potential critical objects, which are then subject to further evaluation by a data analyst.

    [0027] To do so, a number of images IM of the transport route TR is obtained. The images are taken by a camera or camera system installed on a drone or satellite, where each of the images IM comprises a different road section RS of the complete transport route TR and the peripheral area PA adjacent to the respective road section RS. The respective images of the camera or cameras of the drone or satellite or satellite system are transferred via a suitable communication link to a controller 100 (see FIG. 2) implemented for carrying out embodiments of the present invention. The controller 100 illustrated in FIG. 2 comprises the processor PR implementing a first and a second trained data driven model MO_1, MO_2, where the first trained data driven model MO_1 receives the respective images IM as a digital input and provides the objects O in the peripheral areas PA adjacent to the respective road section RS, if any, and their location as a digital output. The location of detected objects O can be defined in a given coordinate system (such as a coordinate system using latitude and longitude coordinates or any other suitable coordinate system) and/or by a given relation information defining, for example, a distance of each of the objects O relative to the road section RS.
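    The per-image processing of steps i) and ii) can be sketched as follows. This is a minimal illustration only: the call to the first trained data driven model is replaced by a placeholder (`first_model`) that reads back pre-annotated objects, since the actual network is not part of this description, and all names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str             # object class, e.g. "tree" or "mast"
    lat: float             # latitude of the object's location
    lon: float             # longitude of the object's location
    dist_to_road_m: float  # distance to the road section in metres

def first_model(image):
    """Placeholder for the first trained data driven model MO_1.

    A real implementation would run a segmentation network on the
    image; here, objects pre-annotated on the image dictionary are
    simply read back for illustration."""
    return [DetectedObject(**o) for o in image.get("objects", [])]

def detect_objects(images):
    """Step ii): feed every road-section image to MO_1 and keep the
    indices of images containing at least one object -- these become
    the relevant images for the second model."""
    detections = {}
    for idx, image in enumerate(images):
        objs = first_model(image)
        if objs:
            detections[idx] = objs
    return detections

images = [
    {"objects": []},  # road section without objects in the peripheral area
    {"objects": [{"label": "tree", "lat": 53.1, "lon": 8.2,
                  "dist_to_road_m": 4.0}]},
]
relevant = detect_objects(images)
print(sorted(relevant))  # → [1]
```

    Only the second image yields a detection and is therefore retained as a relevant image.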

    [0028] In the embodiment described herein, the first trained data driven model MO_1 is based on a convolutional neural network trained beforehand with training data. In particular, the first trained data driven model MO_1 is based on semantic segmentation, a known technique to detect and classify objects O as output of the data driven model MO_1. The training data comprise a plurality of images of different road sections taken by a drone or satellite camera system together with information about the objects and their classes occurring in the respective image. Convolutional neural networks as well as semantic segmentation are well-known from the prior art and are particularly suitable for processing digital images. A convolutional neural network comprises convolutional layers, typically followed by further convolutional or pooling layers as well as fully connected layers, in order to determine at least one property of the respective image, where the property according to embodiments of the invention is an object and its class.
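    The post-processing of a semantic-segmentation output into a list of objects and their locations can be illustrated as follows. This is a hedged sketch under stated assumptions: the network is assumed to emit a per-pixel class mask, same-class pixels are grouped into objects via 4-connected component labelling, and the component centroid serves as the object location (in pixel coordinates; the mapping to geographic coordinates is omitted).

```python
def objects_from_mask(mask, background=0):
    """Turn a semantic-segmentation mask (2-D grid of class ids, as a
    segmentation network outputs per pixel) into a list of objects
    via 4-connected component labelling."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    objects = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] == background or seen[y][x]:
                continue
            cls, pixels, stack = mask[y][x], [], [(y, x)]
            seen[y][x] = True
            while stack:  # flood-fill one connected component
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not seen[ny][nx] and mask[ny][nx] == cls:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            # the centroid of the component approximates the object location
            cy = sum(p[0] for p in pixels) / len(pixels)
            cx = sum(p[1] for p in pixels) / len(pixels)
            objects.append({"class": cls, "centroid": (cy, cx),
                            "size": len(pixels)})
    return objects

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 2]]
# two objects: a 4-pixel class-1 blob and a single class-2 pixel
print(objects_from_mask(mask))
```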

    [0029] In the embodiment of FIG. 2, the objects O produced as an output of the first data driven model MO_1 are used as further input to be processed by the second data driven model MO_2. The second data driven model MO_2 receives those images, as relevant images RIM, of the number of images IM having at least one determined object O in order to output critical objects CO from the number of determined objects O along the road transport route TR. The critical objects CO are potential obstacles for the road transportation due to overlap with the heavy load HL. The image IM shown in FIG. 1 would therefore be regarded as a relevant image to be evaluated by the second data driven model MO_2. The second data driven model MO_2 aims to simulate the transport of the heavy load HL along the road transport route TR. The second trained data driven model MO_2 provides the critical objects CO as output for further evaluation by the data analyst.
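    For illustration, the decision made by the second data driven model MO_2 can be approximated by a simple geometric rule: an object is critical when the half-width swept by the heavy load, plus a safety clearance, exceeds the object's distance to the road section. This plain rule stands in for the trained model and is not the model itself; the function name and the clearance value are illustrative assumptions.

```python
def critical_objects(objects, load_half_width_m, clearance_m=0.5):
    """Stand-in for the second model MO_2: flag an object as a
    potential obstacle when its distance to the road section is
    smaller than the half-width swept by the heavy load plus a
    safety clearance."""
    return [o for o in objects
            if o["dist_to_road_m"] < load_half_width_m + clearance_m]

objs = [{"label": "tree", "dist_to_road_m": 1.0},
        {"label": "mast", "dist_to_road_m": 6.0}]
# the tree (1.0 m from the road) overlaps a 2.5 m half-width; the mast does not
print(critical_objects(objs, load_half_width_m=2.5))
```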

    [0030] In the embodiment described herein, the second trained data driven model MO_2 is based on a convolutional neural network trained beforehand with training data. The training data comprise, as before, a plurality of images of road sections RS together with the information whether objects occurring in the respective image are critical objects.

    [0031] In the embodiment of FIG. 2, the critical objects CO produced as an output of the second model MO_2 lead to an output on a user interface UI, which is only shown schematically. The user interface UI comprises a display and provides information for a human operator or analyst. The output based on the critical objects CO may include the type of the object, its location with respect to the road section RS, and the relevant image RIM to enable further investigation.

    [0032] In addition, the height of an object determined in step ii) can be determined by processing an additional image of the road section, where the additional image is an image taken from a street-level perspective. For example, the additional image can be taken by a car-installed camera. An object detection algorithm can detect and classify objects in these images and, together with location information from a satellite navigation system, provide precise coordinates for the objects in each image and match these coordinates with the objects found by the first data driven model MO_1. Using street-level images thus makes it possible to derive the heights of determined objects in the peripheral area of the road sections of the road transport route.
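    The matching of street-level detections to the objects found in the aerial images can be sketched as a nearest-neighbour search over geographic coordinates. The helper names and the 10 m matching radius below are illustrative assumptions, not part of the described method.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def attach_heights(aerial_objects, street_detections, max_dist_m=10.0):
    """Match each object found by MO_1 in the aerial images with the
    nearest street-level detection; if one lies within max_dist_m,
    copy its estimated height onto the aerial object."""
    for obj in aerial_objects:
        best, best_d = None, max_dist_m
        for det in street_detections:
            d = haversine_m(obj["lat"], obj["lon"], det["lat"], det["lon"])
            if d < best_d:
                best, best_d = det, d
        if best is not None:
            obj["height_m"] = best["height_m"]
    return aerial_objects

aerial = [{"label": "tree", "lat": 53.10000, "lon": 8.20000}]
street = [{"lat": 53.10002, "lon": 8.20003, "height_m": 12.5}]
print(attach_heights(aerial, street))  # tree gains height_m 12.5
```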

    [0033] By the method as described above, one possible route can be evaluated. In another preferred embodiment, several possible routes may be evaluated. For each proposed route, drone or satellite images are obtained for the complete route and analyzed as described above. The route having the fewest critical objects may then be suggested as a suitable route on the user interface UI.
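    The selection among several candidate routes then reduces to taking the minimum over the per-route counts of critical objects; a minimal sketch, with illustrative route names:

```python
def best_route(routes):
    """Pick the route with the fewest critical objects; 'routes' maps
    a route name to the list of critical objects found along it."""
    return min(routes, key=lambda name: len(routes[name]))

routes = {
    "route A": ["tree", "mast", "wall"],
    "route B": ["tree"],
}
print(best_route(routes))  # → route B
```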

    [0034] Embodiments of the invention as described in the foregoing have several advantages. Particularly, an easy and straightforward method is provided to detect critical objects along a road transport route for a heavy load and thereby to detect potential overlaps. To do so, objects and critical objects are determined based on images of a drone or satellite camera system via two different suitably trained data driven models. A suitable route for road transport of a heavy load can thus be determined in less time than with a manual investigation. The process is also less error-prone because human analysts are supported and can concentrate on critical locations.

    [0035] Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.

    [0036] For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.

    REFERENCES

    [0037] [1] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully Convolutional Networks for Semantic Segmentation,” published under https://people.eecs.berkeley.edu/˜jonlong/long_shelhamer_fcn.pdf

    [0038] [2] James Le, “How to do Semantic Segmentation using Deep learning,” published on May 3, 2018 under https://medium.com/nanonets/how-to-do-image-segmentation-using-deep-learning-c673cc5862ef