METHOD OF IMAGING A WIND TURBINE ROTOR BLADE

20230105991 · 2023-04-06

    Abstract

    A method of imaging a wind turbine rotor blade is provided, which method includes the steps of controlling a camera to capture a plurality of images, each image showing a part of the rotor blade surface; augmenting each image with geometry metadata; generating a three-dimensional model of the rotor blade from the image metadata; and re-projecting the images on the basis of the three-dimensional model to obtain a composite re-projection image of the rotor blade. Also provided is a wind turbine rotor blade imaging arrangement.

    Claims

    1. A method of imaging a wind turbine rotor blade, the method comprising: controlling a camera to capture a plurality of images, each image showing a part of a rotor blade surface; augmenting each image with geometry metadata; generating a three-dimensional model of the rotor blade from the image metadata; re-projecting the images on the basis of the three-dimensional model to obtain a composite re-projection image of the rotor blade; and using the composite re-projection image of the rotor blade to identify a defect in the rotor blade surface.

    2. The method according to claim 1, wherein the re-projecting the images comprises applying a homograph matrix to each image.

    3. The method according to claim 2, wherein the homograph matrix of an image (10i) is compiled on the basis of geometry metadata of that image.

    4. The method according to claim 1, wherein the re-projecting the images is assisted by a neural network.

    5. The method according to claim 4, wherein the neural network is pre-trained using a plurality of annotated datasets.

    6. The method according to claim 1, wherein geometry metadata of an image comprises the spatial coordinates of the camera at an instant of image capture, and/or comprises a camera viewing angle at the instant of image capture.

    7. The method according to claim 1, wherein geometry metadata of an image comprises a distance between the camera and the rotor blade surface at an instant of image capture.

    8. The method according to claim 1, further comprising mapping an image feature to a coordinate system of the rotor blade.

    9. A wind turbine rotor blade imaging arrangement, comprising: a camera configured to capture a plurality of images, each image showing part of a rotor blade surface; a plurality of metadata generators for generating geometry metadata for an image; an image augmentation module configured to augment each image with the geometry metadata provided by the plurality of metadata generators; a model generation unit configured to generate a three-dimensional model of the rotor blade from the image metadata of the images; and a reprojection module configured to re-project the images on the basis of the three-dimensional model to obtain a composite re-projection image of the rotor blade.

    10. The imaging arrangement according to claim 9, wherein the reprojection module comprises a neural network trained to align an image with respect to a root end of the rotor blade.

    11. The imaging arrangement according to claim 9, wherein the plurality of metadata generators comprises a position tracking unit configured to obtain camera spatial coordinates and/or a viewing angle tracking unit configured to obtain a camera viewing angle and/or a range-finding unit configured to measure distance between the camera and rotor blade surface.

    12. The imaging arrangement according to claim 9, comprising a camera controller configured to adjust a position of the camera and/or an orientation of the camera and/or a focal length of the camera.

    13. The imaging arrangement according to claim 9, configured to identify a finding in an image and to determine coordinates of the finding in a reference frame of the rotor blade.

    14. The imaging arrangement according to claim 9, wherein the camera is carried by a drone, and/or the camera is mounted on a fixed track.

    15. A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement the method according to claim 1 when the computer program product is loaded into a memory of a programmable device.

    16. The method according to claim 1, wherein the step of re-projecting the images comprises re-projecting the images at the same scale and viewing angle on the basis of the three-dimensional model to obtain the composite re-projection image of the rotor blade.

    17. The method according to claim 1, wherein the three-dimensional model provides a reference frame from which to carry out the re-projection of the images, to accurately relate any pixel of a specific image to a point on the re-projected composite image.

    18. The method according to claim 1, comprising the step of identifying a defect on the rotor blade surface by applying an image processing algorithm, in particular an algorithm configured to detect color anomalies and/or edge anomalies.

    19. A wind turbine rotor blade imaging arrangement configured to perform the method according to claim 1.

    Description

    BRIEF DESCRIPTION

    [0030] Some of the embodiments will be described in detail, with reference to the following figures, wherein like designations denote like members, wherein:

    [0031] FIG. 1 is a simplified block diagram of an embodiment of the inventive wind turbine rotor blade imaging arrangement;

    [0032] FIG. 2 shows an implementation of the inventive method;

    [0033] FIG. 3 shows a further implementation of the inventive method;

    [0034] FIG. 4 is a simplified schematic of a plurality of images of a rotor blade surface;

    [0035] FIG. 5 is a simplified schematic showing the result of a conventional image stitching procedure;

    [0036] FIG. 6 is a simplified schematic of a rotor blade model generated during the inventive method;

    [0037] FIG. 7 illustrates the principle of homograph transformation applied by the inventive method;

    [0038] FIG. 8 is a simplified schematic of a composite reprojection image of a rotor blade generated by the inventive imaging arrangement; and

    [0039] FIG. 9 is a flowchart illustrating steps of the inventive method.

    DETAILED DESCRIPTION

    [0040] FIG. 1 is a simplified block diagram of an embodiment of the inventive imaging arrangement 1 and shows a camera 10 that can capture images 10i of a rotor blade (not shown). The camera 10 is part of an assembly that also comprises a viewing angle tracking unit GM1 that outputs the camera viewing angles GM_θ, GM_ψ, GM_φ at the instant an image 10i is captured, a position tracking unit GM2 that outputs the camera spatial coordinates GM_xyz at the instant an image 10i is captured, and a range-finding unit GM3 that measures the distance GM_d between camera 10 and rotor blade surface at the instant an image 10i is captured. An image augmentation module 11 augments each image 10i with its geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d and forwards the augmented image 10i_GM to a model generation unit 12.
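    The augmentation step described above can be sketched as a simple pairing of image data with its capture-time metadata. The class and field names below are illustrative assumptions, not identifiers from the actual arrangement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeometryMetadata:
    """Illustrative container for the geometry metadata of one image 10i."""
    x: float      # camera spatial coordinates GM_xyz at capture time
    y: float
    z: float
    theta: float  # camera viewing angles GM_theta, GM_psi, GM_phi
    psi: float
    phi: float
    d: float      # measured camera-to-blade distance GM_d

@dataclass(frozen=True)
class AugmentedImage:
    """An image 10i paired with its geometry metadata (10i_GM)."""
    pixels: object            # raw image data, e.g. an H x W x 3 array
    meta: GeometryMetadata

def augment(image, meta: GeometryMetadata) -> AugmentedImage:
    """Attach the metadata recorded at the instant of capture to an image."""
    return AugmentedImage(pixels=image, meta=meta)
```

    A set of such augmented images is what the model generation unit 12 would consume.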

    [0041] After receiving a sufficient number of augmented images 10i_GM, the model generation unit 12 generates an accurate three-dimensional model 2_3D of the rotor blade from the geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of the images 10i.

    [0042] The reference system of the 3D model 2_3D is used as a basis from which to compile a homograph matrix 10i_HM for each image 10i. Each image 10i of the plurality of images then undergoes a re-projection by its homograph matrix 10i_HM. This can be done using any of various suitable mathematical algorithms. The result is a set of images all at the same scale, all of which appear to have been captured from the same camera angle and at the same distance to the rotor blade. A neural network 13_NN can assist in identifying which “end” of an image is closest to a reference such as the rotor blade root end. The result of the image processing is a composite re-projection image 2_rpi showing the entire surface that was captured by the plurality of images.
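    At the pixel level, re-projecting an image by its homograph matrix amounts to mapping each pixel coordinate through a 3×3 matrix in homogeneous coordinates. The following is a minimal numpy sketch of that core operation; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def reproject_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homograph matrix H to an (N, 2) array of pixel coordinates.

    Points are lifted to homogeneous coordinates, multiplied by H, and
    de-homogenised again; a full implementation would also resample the
    image intensities at the mapped positions.
    """
    n = pts.shape[0]
    homog = np.hstack([pts, np.ones((n, 1))])   # (N, 3) homogeneous points
    mapped = (H @ homog.T).T                    # transformed points, (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]       # back to (N, 2) pixel coords
```

    For example, the identity matrix leaves all points unchanged, while a matrix with diagonal (2, 2, 1) doubles all pixel coordinates.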

    [0043] FIG. 2 shows an implementation of the inventive method. Here, a camera-carrying drone 4 is used to collect images 10i of a rotor blade 2. The rectangular bounding box indicates an exemplary field of view of the camera 10. In addition to the camera 10, the drone 4 can also carry the metadata generators GM1, GM2, GM3 described above, so that metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of an image 10i is defined relative to a main reference frame 2xyz. Here, a vertical such as the longitudinal axis of the wind turbine tower 21 may serve as a principal axis (the Y-axis in this case) of the main reference frame 2xyz. The images 10i may be used to identify a defect F on the rotor blade surface and to report its location y.sub.F relative to the rotor blade root 2R.

    [0044] FIG. 3 shows a further implementation of the inventive method. Here, a camera assembly is mounted to a fixed support 5, so that the camera coordinates GM_xyz remain constant. Again, the rectangular bounding box indicates an exemplary field of view of the camera 10. The camera assembly may be assumed to also include the metadata generators GM1, GM2, GM3 described above, so that metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of an image 10i is defined relative to a main reference frame 2xyz. Here, a horizontal such as the longitudinal axis of the rotor blade being imaged may serve as a principal axis of the main reference frame 2xyz.

    [0045] FIG. 4 is a simplified schematic of a plurality of images 10i of a rotor blade surface 20S, collected by a camera 10 as described in FIG. 2 or FIG. 3, for example. In this exemplary embodiment, 23 images 10i are shown, labelled #1 to #23. Each image 10i shows a portion of the rotor blade 2, and in this case each image also shows some background. While the images are arranged in a root-to-tip sequence for clarity, the inventive method does not require that the images 10i be captured in that order.

    [0046] The diagram also indicates a “finding” F or anomaly in an image 10i. One aspect of wind turbine maintenance is the identification of defects on the rotor blade, in order to assess the severity of damage and whether repair is necessary.

    [0047] A conventional art approach may rely on global GPS coordinates (without reference to the wind turbine's location) and/or time-stamps to identify the correct order of the images prior to “stitching” them together, so that the location of a defect F may be estimated. Another approach may be to identify common regions of adjacent images; for example, the light/dark transition in image #19 and image #20 might be used as a basis from which to “stitch” these images together. Image features can also be problematic: the foundation structure, for example, is visible in each of the images labelled #5-#12, yet these images were all collected at different viewing angles and at different distances to the rotor blade. For these reasons, such conventional techniques can be quite inaccurate, since it is difficult to identify the correct arrangement of the images. The result of such a conventional stitching procedure is illustrated in FIG. 5. The cumulative error that accrues during the image-stitching procedure means that a reported position of a defect F, e.g., its estimated distance y.sub.F from the root, may differ significantly from its actual location. When (at a later stage) a service technician abseils from the hub to inspect/repair the defect, the erroneous reported position can result in long delays while the technician searches for the defect, and in additional service costs.

    [0048] The inventive method takes a different approach, as explained in the following:

    [0049] FIG. 6 is a simplified schematic of a rotor blade model 2_3D generated by the inventive imaging arrangement 1. Here, the model 2_3D is a boundary model built on the basis of the geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of a plurality of augmented images 10i_GM as explained in FIG. 1 above. The images 10i will now be re-projected according to a desired re-projection scheme, for example to re-project all images 10i in a plane that is parallel to the rotor blade long axis, and at a certain distance from the rotor blade. From a reference frame 3DM.sub.XYZ of the model 2_3D and the geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of an image 10i, a homograph transformation matrix 10i_HM is compiled for each image 10i.
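    One standard way to compile such a matrix from camera pose and a locally planar surface patch is the plane-induced homography H = K (R - t n^T / d) K^(-1). The sketch below assumes a shared pinhole intrinsic matrix K and is only one possible realisation of the compilation step, not necessarily the one used in the inventive method:

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Plane-induced homography H = K (R - t n^T / d) K^(-1).

    Maps pixels of the real camera onto pixels of a virtual camera,
    for a planar patch with unit normal n at distance d. R and t are
    the rotation and translation of the real camera relative to the
    virtual one; K is the shared pinhole intrinsic matrix. All inputs
    here are illustrative stand-ins for quantities derived from the
    geometry metadata and the 3D model reference frame.
    """
    t = np.asarray(t, dtype=float).reshape(3, 1)
    n = np.asarray(n, dtype=float).reshape(1, 3)
    return K @ (R - (t @ n) / d) @ np.linalg.inv(K)
```

    As a sanity check, a camera already coincident with the virtual view (R = I, t = 0) yields the identity homography, i.e. no re-projection is needed.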

    [0050] When applied to an image 10i, the homograph transformation matrix 10i_HM will re-project or transform that image according to the re-projection scheme.

    [0051] This is illustrated in FIG. 7, which shows a camera 10 in the process of capturing an image 10i of a region of the rotor blade 2. Various sensors (not shown) of the camera 10 record its position GM_xyz in a reference frame, its Euler angles GM_θ, GM_ψ, GM_φ, and the distance to the rotor blade surface GM_d. Using this information along with the 3D model, as explained in FIG. 1, a homograph transformation matrix 10i_HM can be compiled for that image 10i so that the image can be re-projected according to the chosen re-projection scheme. The re-projected image will appear as though taken from a virtual camera 10′ with all Euler angles at 0° and at a specific distance from the rotor blade 2. The original image 10i shown in the top left, distorted because of the camera's orientation relative to the blade 2, can then appear “correctly” after re-projection as shown in the top right of the diagram.
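    Converting the recorded Euler angles into a rotation matrix is a prerequisite for such a pose-based re-projection. The sketch below uses a Z-Y-X (yaw-pitch-roll) convention, which is an assumption; the actual tracking unit may report its angles in a different order. Re-projecting to the virtual camera 10′ with all Euler angles at 0° then amounts to applying the inverse (transpose) of this rotation:

```python
import numpy as np

def rotation_from_euler(theta, psi, phi):
    """Compose a 3x3 rotation matrix from Z-Y-X Euler angles in radians.

    theta, psi, phi play the roles of GM_theta, GM_psi, GM_phi; the
    Z-Y-X order is an illustrative convention, not taken from the source.
    """
    cz, sz = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(psi), np.sin(psi)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx
```

    With all three angles at zero, the result is the identity matrix, matching the virtual camera's orientation.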

    [0052] This homograph transformation and re-projection is done for all images 10i, and the resulting cumulative image 2_rpi of the rotor blade 2 is shown in FIG. 8. Because of the precise geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d provided with each image 10i and the associated precision of the model 2_3D, the image mapping step also results in a precise composite image 2_rpi. As the composite image 2_rpi obtained using the inventive method shows, background features visible in multiple images taken from different viewing angles have no effect on the image stitching. The diagram shows that the dimensions of the rotor blade 2 in the composite re-projected image 2_rpi correspond to the dimensions of the actual rotor blade 2. This means that a reported position of a defect F is favorably accurate, so that (at some later stage) an inspection technician can go directly to the defect, avoiding unnecessary costs.

    [0053] FIG. 9 is a flowchart illustrating steps of the inventive method. In a first step 91, the input data is collected, i.e., the plurality of images 10i collected for a rotor blade surface, and the geometry metadata GM_xyz, GM_θ, GM_ψ, GM_φ, GM_d of each image 10i. In a subsequent step 92, a reference frame or coordinate system 2xyz is defined for the rotor blade. In a subsequent step 93, a 3D model 2_3D is constructed from the geometry metadata of the images. In a subsequent step 94, homograph transformation matrices 10i_HM are compiled for the images, which are then re-projected onto a common plane. In step 95, each re-projected image is aligned in the reference frame of the rotor blade, i.e., to “line up” the image correctly relative to a fixed reference such as the blade root. To this end, a neural network 13_NN can be applied to assist in correct alignment. In a subsequent step 96, a composite re-projection image 2_rpi is output for the entire rotor blade.
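    Steps 94 and 96 can be sketched in miniature as a loop that re-projects each image by its own matrix and collects the results. Images are represented here only as arrays of pixel coordinates; a real implementation would resample intensities and handle overlaps, and the function name is an assumption:

```python
import numpy as np

def composite_reprojection(images, homographies):
    """Re-project each image's pixel coordinates by its own 3x3 matrix
    and stack the results into one composite point set.

    images: list of (N_i, 2) arrays of pixel coordinates.
    homographies: one 3x3 matrix per image (the per-image 10i_HM role).
    """
    composite = []
    for pts, H in zip(images, homographies):
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        mapped = (H @ homog.T).T
        composite.append(mapped[:, :2] / mapped[:, 2:3])  # de-homogenise
    return np.vstack(composite)
```

    In a full pipeline, the stacked coordinates would then be aligned to the blade reference frame (step 95) before the composite image is written out.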

    [0054] Although the present invention has been disclosed in the form of embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.

    [0055] For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements. The mention of a “unit” or a “module” does not preclude the use of more than one unit or module.