METHOD, AERIAL VEHICLE AND SYSTEM FOR DETECTING A FEATURE OF AN OBJECT WITH A FIRST AND A SECOND RESOLUTION
20230366775 · 2023-11-16
Inventors
CPC classification
F03D17/00
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
B64U2101/26
PERFORMING OPERATIONS; TRANSPORTING
F03D17/003
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
B64U2201/10
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0094
PHYSICS
G05D1/0088
PHYSICS
B64U2101/30
PERFORMING OPERATIONS; TRANSPORTING
F05B2260/80
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F05B2270/8041
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
G01M5/00
PHYSICS
G05D1/00
PHYSICS
Abstract
Embodiments according to a first and a second aspect of the present invention are based on the core idea of flying along an object for detecting a feature of the object, optically detecting at least a part of the object with a capturing unit at a first resolution, and providing, for those areas of the object that comprise the feature, images at a second resolution that is higher than the first resolution.
Claims
1. A method for detecting a damage of an object, comprising: (a) flying along the object and optically detecting at least a part of the object by at least one capturing unit with a first resolution to generate a plurality of images, wherein each image represents an at least partly different area of the object, (b) evaluating the plurality of images to classify the generated images into images that do not comprise the damage and into images that comprise the damage, and (c) optically detecting again those areas of the object whose allocated images comprise the damage with a second resolution that is higher than the first resolution.
2. The method according to claim 1, wherein step (b) is performed after flying along the object, and step (c) comprises approaching those areas of the object whose allocated images comprise the damage.
3. The method according to claim 2, wherein in step (a) and in step (c), the capturing unit generates one image each with the same focal length, in step (a), the object is approached such that the capturing unit has a first distance to the object when generating an image, and in step (c), the object is approached such that the capturing unit has a second distance to the object that is smaller than the first distance when generating an image.
4. The method according to claim 2, wherein in step (a) and in step (c), the object is approached such that the capturing unit has the same or a similar distance to the object when generating an image, in step (a), the capturing unit generates an image with a first focal length, and in step (c), the capturing unit generates an image with a second focal length that is greater than the first focal length.
5. The method according to claim 2, wherein optically detecting the area again in step (c) comprises generating a plurality of partial images of the area, each with the second resolution.
6. The method according to claim 2, wherein position and/or location information of the capturing unit is allocated to each image generated in step (a), and in step (c), the areas of the object that are to be flown along are determined by using the position and/or location information of the images comprising the damage.
7. The method according to claim 2, wherein an unmanned aerial vehicle, such as a drone comprising the capturing unit, flies along the object, and step (b) comprises transmitting the images generated in step (a) from the unmanned aerial vehicle to a computer, for example a laptop computer, and evaluating the images by the computer; and evaluating the images comprises evaluating the images in an automated manner.
8. The method according to claim 7, wherein in step (a), the unmanned aerial vehicle flies along the object autonomously, step (b) comprises generating waypoints by using the position and/or location information of the images comprising the damage and transmitting the waypoints to the unmanned aerial vehicle, and in step (c), the unmanned aerial vehicle approaches the areas of the object autonomously by using the waypoints.
9. The method according to claim 2, wherein an unmanned aerial vehicle, e.g., a drone comprising the capturing unit, flies along the object autonomously, the unmanned aerial vehicle comprises a computer, wherein step (b) comprises evaluating the images and generating waypoints, by using the position and/or location information of the images comprising the damage, by the computer of the unmanned aerial vehicle, wherein evaluating the images comprises evaluating the images in an automated manner, and in step (c), the unmanned aerial vehicle approaches the areas of the object autonomously by using the waypoints.
10. The method according to claim 1, wherein flying along the object in step (a) is flying along the object autonomously by an unmanned aerial vehicle, wherein the unmanned aerial vehicle comprises the at least one capturing unit; and wherein step (b) comprises generating waypoints by using the position and/or location information of the images comprising the damage and transmitting the waypoints to the unmanned aerial vehicle; and step (c) comprises flying along the area of the object autonomously by using the waypoints.
11. The method according to claim 1, wherein steps (a) to (c) are performed while flying along the object, such that in step (a), an image of an area is generated; in step (b), the image generated in step (a) is classified prior to generating an image for a further area; when the image is classified as comprising the damage in step (b), the area is optically detected again in step (c) before an image is generated for the further area; and when the image is classified as not comprising the damage in step (b), an image is generated for the further area.
12. A method for detecting a damage of an object, the method comprising: (a) flying along the object and optically detecting at least a part of the object by at least one capturing unit to generate a plurality of images, wherein each image represents an at least partly different area of the object, and wherein, for one area, an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution, are generated, (b) evaluating the plurality of images to classify the generated images into images that do not comprise the damage and into images that comprise the damage, and (c) providing the partial images of those areas of the object whose allocated images comprise the damage.
13. The method according to claim 12, wherein an unmanned aerial vehicle, e.g., a drone comprising the capturing unit, flies along the object autonomously, step (b) comprises transmitting the images and partial images generated in step (a) from the unmanned aerial vehicle to a computer, e.g., a laptop computer, and evaluating the images by the computer, wherein evaluating the images comprises evaluating the images in an automated manner, and step (c) comprises providing the partial images of the area allocated to the image by the computer.
14. The method according to claim 12, wherein an unmanned aerial vehicle, e.g., a drone comprising the capturing unit, flies along the object autonomously, the unmanned aerial vehicle comprises a computer, wherein step (b) comprises evaluating the images and the partial images by the computer of the unmanned aerial vehicle, wherein evaluating the images and the partial images comprises evaluating the images and the partial images in an automated manner, and in step (c), the unmanned aerial vehicle transmits the partial images to an evaluating unit, e.g., for classifying or cataloging the detected damages.
15. The method according to claim 1, wherein step (b) comprises AI or machine learning.
16. An unmanned aerial vehicle, e.g., a drone, for detecting a damage of an object, comprising: at least one capturing unit for generating images by optical detection, wherein the unmanned aerial vehicle can be controlled to fly along the object and to optically detect at least a part of the object by the capturing unit with a first resolution to generate a plurality of images, wherein each image represents an at least partly different area of the object, and to optically detect again those areas of the object whose allocated images comprise the damage with a second resolution that is higher than the first resolution; wherein the unmanned aerial vehicle is configured to transmit the plurality of images to an external computer, e.g., a laptop computer, that classifies the generated images into images that do not comprise the damage and into images that comprise the damage, and to receive information from the external computer that indicates the areas of the object to be optically detected with the second resolution, or wherein the unmanned aerial vehicle comprises a computer that is configured to evaluate the plurality of images to classify the generated images into the images that do not comprise the damage and into the images that comprise the damage.
17. An unmanned aerial vehicle, e.g., drone, for detecting a damage of an object comprising: at least one capturing unit for generating images by optical detection, wherein the unmanned aerial vehicle can be controlled to fly along the object and optically detect at least a part of the object by the capturing unit to generate a plurality of images, wherein each image represents an at least partly different area of the object and generate, for each area, an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution.
18. A system for detecting a damage of an object, comprising: an unmanned aerial vehicle, e.g., a drone, wherein the unmanned aerial vehicle can be controlled to fly along the object to optically detect at least a part of the object by at least one capturing unit with a first resolution to generate a plurality of images, wherein each image represents an at least partly different area of the object, wherein the system is configured to evaluate the plurality of images to classify the generated images into images that do not comprise the damage and into images that comprise the damage, and wherein the unmanned aerial vehicle can be controlled to optically detect again those areas of the object whose allocated images comprise the damage with a second resolution that is higher than the first resolution.
19. The system according to claim 18, wherein the unmanned aerial vehicle comprises the at least one capturing unit and wherein the unmanned aerial vehicle can be controlled to fly along the object autonomously and to approach autonomously, by using the waypoints, those areas of the object whose allocated images comprise the damage; wherein the unmanned aerial vehicle is configured to transmit the plurality of images to a computer; wherein the system comprises the computer; wherein the computer is configured to evaluate the plurality of images at least partly in an automated manner to classify the generated images into images that do not comprise the damage and into images that comprise the damage; wherein the computer is configured to generate waypoints by using the position and/or location information of the images comprising the damage; and wherein the computer is configured to transmit the waypoints to the unmanned aerial vehicle.
20. A system for detecting a damage of an object, comprising: an unmanned aerial vehicle, e.g., a drone, wherein the unmanned aerial vehicle can be controlled to fly along the object and optically detect at least a part of the object by at least one capturing unit to generate a plurality of images, wherein each image represents an at least partly different area of the object, and to generate, for each area, an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution, wherein the system is configured to evaluate the plurality of images to classify the generated images into images that do not comprise the damage and into images that comprise the damage, and to provide the partial images of those areas of the object whose allocated images comprise the damage, e.g., for classifying or cataloging the detected damages.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0056] Embodiments of the present invention will be detailed subsequently referring to the appended drawings.
DETAILED DESCRIPTION OF THE INVENTION
[0063] Before embodiments of the present invention are discussed in more detail below based on the drawings, it should be noted that identical, functionally identical or equivalent elements, objects and/or structures are provided with the same or similar reference numbers in the different figures, such that the description of these elements illustrated in different embodiments is interchangeable or mutually applicable.
[0065] Starting from a starting point S, the aerial vehicle 120 flies along the object to detect one or several features 110a. The flight trajectory is indicated by waypoints WP1 to WP4. The flight trajectory can originate, for example, from a preceding trajectory planning. Here, waypoint generation can be performed, for example by the computer 140, based on a 3D model of the object, which is provided to the aerial vehicle 120 via a connection 140a. It should be noted that the computer 140 could also be part of the aerial vehicle 120, such that the aerial vehicle plans its own trajectory, for example autonomously. Further, the trajectory can also be manually predetermined by a human pilot, for example due to the lack of a 3D model of the object 110.
[0066] When flying along the object 110, the capturing unit 130 detects the front side 110b of the object 110 or, generally, a part of the object. Here, the capturing unit 130 generates a plurality of images B1-B4 with a first resolution, wherein each image represents an at least partly different area of the object 110 or of the front side 110b of the object. Accordingly, the images B1-B4 can partly overlap.
[0067] According to the invention, for detecting the feature 110a, a plurality of methods and method steps are available whose features, unless stated otherwise, are inter-exchangeable and can be used together in any combination. Some inventive options will be discussed below.
[0069] Again, the plurality of images B1-B4 is evaluated in order to classify the generated images into images that do not comprise the feature 110a and into images that comprise the feature 110a.
[0070] Based on the classification, the capturing unit 130 can optically detect again that area of the object 110 whose allocated image B2 comprises the feature 110a, with a second resolution that is higher than the first resolution.
[0071] If the classification is performed while flying along the object 110, the aerial vehicle can stay at the waypoint WP2 after generating the image B2 and after detecting the feature on image B2, and can detect the respective area of the object again. However, the aerial vehicle 120 does not have to stay at the waypoint WP2 but can, for example, merely maintain the same or a similar distance to the object 110. For increasing the resolution when detecting again, the capturing unit 130 can increase the focal length, for example by adapting a zoom setting or by changing a lens. Thereby, an image B21 with higher resolution can be generated, which is, for example, a partial image of the image B2. For this, apart from the evaluation regarding a classification of the images B1-B4 that include the feature 110a, the position of the feature can be evaluated by using position and location data of the aerial vehicle 120 in order to direct the capturing unit 130 to the feature 110a for generating the image B21. Here, it should be noted that the image B21 does not necessarily have to include a partial area of the image B2. The area of the object detected with image B21 can be selected, for example, only in dependence on a position of the feature 110a, such that the image section of image B21 can be selected independently of the areas of the images B1-B4.
[0072] If, for example, the position of the feature 110a is not known, a set of partial images B21-B24 can be generated with the second resolution. Further, also independently of a classification and detection of the feature 110a on the images B1-B4, a plurality of areas of the object, or, for example, each area of the object, can be detected by a plurality of partial images of the area, each with the second resolution (for example, for image B1 the partial images B11-B14, for image B2 the partial images B21-B24, etc.).
[0073] Respective partial images can be transmitted, for example via the connection 140b, to an evaluation unit, for example in the form of the computer 140, e.g., for classifying or cataloging the detected feature 110a.
[0074] Alternatively, the classification of the images B1-B4 can also take place after flying along the waypoints WP1-WP4 and subsequently landing at the starting point S. Via the connection 140a, based on the classification and the allocated position and/or location information of the feature 110a, a second flight trajectory 150 can then be provided to the aerial vehicle 120.
[0075] For increasing the resolution when detecting the object again, the aerial vehicle 120 or the capturing unit 130 can also reduce its distance d to the object.
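The effect of both levers, reducing the distance and increasing the focal length, follows from the usual pinhole-camera relationship between sensor size, focal length and distance. The following minimal Python sketch is purely illustrative and not part of the described method; the names are chosen freely, and the numerical values are taken from the examples further below:

    # Illustrative sketch: pinhole-camera relationship between distance,
    # focal length and achievable resolution on the object surface.

    def image_section_mm(sensor_mm: float, distance_mm: float, focal_mm: float) -> float:
        """Width (or height) of the object area covered by one image."""
        return sensor_mm * distance_mm / focal_mm

    def resolution_px_per_mm(pixels: int, sensor_mm: float,
                             distance_mm: float, focal_mm: float) -> float:
        """Pixels of object-surface detail per millimetre."""
        return pixels / image_section_mm(sensor_mm, distance_mm, focal_mm)

    # Sensor data of the example below: 35.9 mm width, 8197 pixels, 50 mm lens.
    print(resolution_px_per_mm(8197, 35.9, 9000, 50))  # ~1.27 px/mm at 9 m
    print(resolution_px_per_mm(8197, 35.9, 3000, 50))  # ~3.8 px/mm at 3 m
    # Equivalently, keeping a distance of 8 m and zooming: a focal length of
    # about 3.87 * 35.9 * 8000 / 8197 ~ 136 mm yields the same ~3.87 px/mm.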
[0076] Adapting the trajectory of the aerial vehicle 120 can optionally take place during the flight.
[0077] Further embodiments include an AI-supported inspection of wind turbines with drones and will be discussed below.
[0078] For the case that the feature is the damage 110a, the partial image 470 is shown as an example for illustration. According to embodiments, areas with images 220a including the feature can be detected again with higher resolution; however, not the entire previously scanned area of the object has to be scanned again, but merely a partial area of the original image section or of the area of the object can be detected.
[0079] The basic concept of defect detection or pattern detection according to embodiments is to detect and subsequently exclude the defect-free areas with the help of AI. In other words, defect detection, i.e., for example, detection of the damage 110a, takes place by detecting defect-free areas (image 210), which in this context are referred to as patterns, wherein with this pattern detection, for example, the defect 110a can be inferred. Subsequently, the areas of the wind power plant 400, or patterns, that have been detected by the AI as not defect-free, i.e., defective and therefore not excluded, are approached again for generating high-resolution defect images. For approaching again, automatic generation of waypoints can be used, as sketched below.
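Purely for illustration, the classify-exclude-revisit flow described above can be sketched in a few lines of Python; the classifier, the data model and all names are hypothetical assumptions and do not stem from the description:

    # Hypothetical sketch of the classify-and-revisit concept: defect-free
    # areas (patterns) are excluded; the remaining areas become waypoints
    # for a high-resolution defect flight.
    from dataclasses import dataclass

    @dataclass
    class CapturedImage:
        pixels: bytes
        position: tuple[float, float, float]  # pose of the capturing unit

    def is_defect_free(img: CapturedImage) -> bool:
        """Stand-in for the AI pattern detection described above."""
        raise NotImplementedError

    def plan_defect_flight(images: list[CapturedImage]) -> list[tuple[float, float, float]]:
        # Areas that were not excluded as defect-free are approached again.
        return [img.position for img in images if not is_defect_free(img)]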
[0080] Regarding the wind power plant 400, the AI-supported image/defect detection or pattern detection can be used, for example, in the following tasks or missions:
[0081] 1. Calibration flight
[0082] 2. Tower inspection
[0083] 3. Blade inspection
[0084] According to embodiments, the AI support can be used in several stages. As an example, three stages will be explained below, wherein features of the individual stages are inter-exchangeable or combinable in any manner, unless indicated otherwise. They merely serve to explain the idea regarding the usage of AI with respect to feature detection and are therefore not to be considered limiting.
[0085] Stage 1:
[0086] After landing of the drone and transmission of the image data, further processing of the image data is performed by the AI, which runs on a remote computer (laptop). The AI generates waypoints for an inspection flight, for example after a calibration flight, or waypoints for a defect flight, i.e., approaching the wind power plant 400 for detecting the defect 110a or another feature (for example, the rotor tips 540) with increased resolution after an inspection flight.
[0087] The drone performs the inspection flight autonomously and transmits the images to the remote computer after landing, for example during a battery change. The image/defect detection or pattern detection takes place on the remote computer. The results, for example waypoints for a subsequent inspection flight based on a generically generated CAD model of the wind power plant after the calibration flight, or waypoints for the subsequent defect flight (i.e., approaching the detected defects 110a at a short distance and capturing the defects 110a with high resolution), are retransmitted to the drone after the calculation.
[0088] Stage 2 Local Intelligence:
[0089] The AI runs or computes in real time on an additional computing unit on the drone and controls the inspection flight after calibration, or the defect approach during the inspection flight. The additional computing unit can be, for example, an add-on GPU (graphics processing unit) board, a CPU (central processing unit), or a dedicated AI board. Further, the above-described computer can also be part of the drone and hence provide the hardware for operating the AI.
[0090] The drone is provided with its own local intelligence, for example by the additional computing unit, and performs calculations onboard, for example locally on its own drone hardware in real time. Beyond that, the drone can perform actions such as calculating the waypoints for the subsequent inspection flight directly during the calibration flight and can, for example, perform the inspection flight directly afterwards. Defects are detected in real time, the detected defects are instantaneously approached directly or zoomed in on, and the defect is captured with correspondingly high resolution. Subsequently, the inspection flight is continued up to the next defect.
[0091] By using a DJI 300 drone with a P1 full-format camera and a 50 mm lens, or alternatively a zoom lens, the inspection flight can be performed, for example, with a distance of 8 m to the blade 440, and the detected defect can be approached again at a distance of 3 m.
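A stage-2 control loop along these lines could be sketched as follows; all drone and detector calls are hypothetical placeholders, and only the flow (capture at the inspection distance, approach or zoom in on a detection, capture with high resolution, resume) follows the description above:

    # Illustrative onboard loop for stage 2 ("local intelligence").
    # All drone/detector methods are hypothetical stand-ins.

    def inspection_flight(drone, detector, waypoints,
                          inspect_dist_m=8.0, defect_dist_m=3.0):
        for wp in waypoints:
            drone.fly_to(wp)
            image = drone.capture()                       # first (lower) resolution
            if detector.has_defect(image):
                drone.approach(distance_m=defect_dist_m)  # or adapt the zoom instead
                drone.store(drone.capture())              # second (higher) resolution
                drone.retreat(distance_m=inspect_dist_m)  # resume inspection distance
        drone.return_to_start()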
[0092] After the inspection flight is terminated, the drone transmits the data, for example all image data of the inspection flight having, for example, a low or the first resolution (for example corresponding to the images B1-B4), as well as the high-resolution defect images, for example to a cloud for the subsequent evaluation.
[0093] Stage 3 Evaluation:
[0094] Detected high-resolution defect images are categorized by the AI. The defect images stored in the cloud can be automatically categorized with the help of AI, and the defect protocols can be generated automatically. According to the different stages of the AI support, the AI can be used as discussed below in the three above-stated tasks.
[0095] 1. Calibration flight: In embodiments, the basic principle of the AI support is the optical recognition and detection of the blade tips 450 by AI and the calculation of the position of the blade tips 450, as well as the optical recognition and detection of the blade flanges 460 by AI and the calculation of the positions, distances and angles of the blade flanges 460.
[0096] Alternatively or additionally, the pitch angles of the blades can be detected and/or calculated. With these values, a final calculation or modification of a generic model, for example a CAD model of the wind power plant 400, including the positioning and orientation of the plant as well as the bending of the blades 440, can take place. From these data, the waypoints for the inspection flights can be calculated. This can take place with an intermediate landing (remote, stage 1) or in real time without an intermediate landing (local intelligence, stage 2).
[0097] 2. Tower inspection: In tower inspection, the usage of AI can be particularly advantageous due to the large number of images. Based on the above-stated hardware (DJI 300 drone with P1 full-format camera and 50 mm lens, or alternatively a zoom lens), for example 400 images can be generated at a distance of 9 m between the capturing unit and the wind power plant, with a resolution of 1.25 pixel/mm at a tower height of 145 m. On the other hand, the possible variations of damages 110a, in other words the defect classes, are manageable, and the defects are mostly large-scale, such that the AI can be trained relatively quickly.
[0098] 3. Blade inspection: Inventive methods for blade inspection are similar to the methods for tower inspection, for example with a significantly lower number of images. For obtaining a needed or advantageous resolution of approximately 1.6 pixel/mm for a first inspection, for example for the usage of AI, approximately 25 images can be generated or needed per side, for example at a blade length of approximately 70 m and a distance of the drone or capturing unit of 7 m to the blade. For obtaining an improved resolution, for example the above-discussed second resolution, for example a resolution of more than 3.5 pixel/mm as requested by a reviewer, detected defects should be approached again, or immediately, at a distance of approximately 3 m to the blade. Alternatively, as described above, the improvement of the resolution can be obtained by changing a zoom setting or changing the lens used.
[0099] Based on the following table, aspects of embodiments according to the invention will be briefly summarized again and their advantages illustrated based on numerical examples. The numerical values are based on the above-described DJI 300 drone with P1 full-format camera and 50 mm lens. The image sensor has a width of 35.9 mm with 8197 pixels and a height of 24 mm with 5460 pixels.
TABLE-US-00001
P1 image sensor: full format, width 35.9 mm (8,197 pixels), height 24 mm (5,460 pixels), 45 Mpixels in total.

Task                             Distance   Image section (W x H)  Pixel/mm  Images
Tower inspection (height 145 m)  e.g. 9 m   6,400 x 4,320 mm       1.28      38 per side; 8 sides per 45 degrees; 304 in total (20 cm overlap)
Blade inspection (length 70 m)   e.g. 7.0 m 5,026 x 3,360 mm       1.63      25 per blade side; 3 sides; 75 in total (20 cm overlap)
Defect flight                    3.0 m      2,120 x 1,420 mm       3.87      -
Calibration flight               25 m       18,000 x 12,000 mm     0.46      -
[0100] In the table, the above-described possible tasks or missions of inventive methods regarding a wind power plant are listed in the form of a tower inspection, a blade inspection and a calibration (calibration flight). Further, an example of an above-described defect flight is entered. For each of these tasks, the second column gives the distance of the aerial vehicle or the drone to the wind power plant, the third column gives the image width and image height covered by the respective image section with respect to the surface of the wind power plant in mm, and the fourth column gives the respective resolution in pixel/mm.
[0101] The inspection of a wind power plant can start, for example, with the calibration or the calibration flight. For this, a pilot flies along the wind power plant at a distance of 25 m with the aerial vehicle, for example the drone. Here, the wind power plant is optically detected, wherein an image section corresponds to a width of 18 m and a height of 12 m in reality. Accordingly, images of this optical detection have a resolution of 0.46 pixel/mm. Based on known position and location information of the aerial vehicle associated with the captured images, and by recognizing characteristic features of the wind power plant, such as the blade tips, a CAD model of the wind power plant can be generated or a generated model can be modified. Due to the large distance and the low resolution, this step can be performed with little time effort. Detecting the features can be performed in particular by using methods of machine learning. It should be noted that the distance can also lie in a range or interval, such that the distance is, for example, at most 25 m or at most 20 m, or is in a range of 20 m to 25 m. Further, the distance can also be, for example, 20 m.
[0102] Based on the CAD model, waypoints for the inspection flights, for example for tower inspection and/or blade inspection, can subsequently be generated. These waypoints can again be generated by using AI. Both the evaluation and the waypoint generation can be performed after landing following the calibration flight, or during the calibration flight by the aerial vehicle itself. In order to be able to detect defects with sufficient accuracy, the distance of the aerial vehicle to the wind power plant is reduced in the, for example, subsequently autonomously performed inspection flights. For blade inspection, for example, a distance of 7 m can be set, such that a generated image corresponds to a width of approximately 5 m and a height of approximately 3.3 m in reality, which results in a resolution of 1.63 pixel/mm. With 25 images per blade side, 3 sides, an overlap of 20 cm, and an exemplary blade length of 70 m, 75 images result. Analogously, during tower inspection, 304 images result at a resolution of 1.28 pixel/mm.
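The image counts follow from the covered image section and the 20 cm overlap. The following sketch of this arithmetic is illustrative only; the slightly higher counts in the table presumably include rounding and safety margins:

    # Illustrative coverage arithmetic: number of overlapping images needed
    # to cover a given length of tower or blade.
    import math

    def images_for_length(length_m: float, section_m: float, overlap_m: float = 0.2) -> int:
        return math.ceil(length_m / (section_m - overlap_m))

    # Blade: 70 m length, ~3.36 m image height -> 23 per side, 3 sides -> 69
    # (the table states 25 per side and 75 in total).
    print(3 * images_for_length(70, 3.36))
    # Tower: 145 m height, ~4.32 m image height -> 36 per side, 8 sides -> 288
    # (the table states 38 per side and 304 in total).
    print(8 * images_for_length(145, 4.32))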
[0103] For evaluating the images of the inspection flights, AI can again be used. As already mentioned above, the AI can itself categorize the images already during the flight and divide them into images showing damages and images showing no damages. Alternatively, this can also take place after landing, on an external computer. The great advantage of methods of machine learning becomes obvious given the large number of images, which would entail a high time effort if the evaluation were performed by persons. By the inventive idea of separately detecting again those areas of the wind power plant comprising damages, the direct time-intensive and data-intensive generation of high-resolution images during tower and/or blade inspection, i.e., for the entire surface and/or blades, can be omitted. By the position and location information of the drone that can be associated with the respective images on which damages have been detected, the respective parts can be approached again during a defect flight. Alternatively, detecting with the second or higher resolution can also take place during the tower or blade inspection. The distance of the aerial vehicle to the object can be reduced to 3 m, or a respective lens can be attached (e.g., during an intermediate landing), or a zoom setting can be adapted accordingly (e.g., during the flight). Thereby, a resolution of 3.87 pixel/mm can be obtained.
[0104] With such a high resolution, even the smallest damages can be detected and categorized. Thus, for example, strict legal requirements regarding the safety of plants can be met. By reducing the number of detailed images during the renewed optical detection with the increased or second resolution, an inventive method entails low time and resource effort due to the preselection of the areas of the wind power plant to be considered. Beyond that, a respective method scales very well due to the possible usage of a plurality of autonomously flying drones. A further option for scaling is the accuracy, wherein both the inspection flights and the defect flights can be improved with even further reduced distances and improved image sensors. Further, with zoom lenses, more images with higher resolution can also be generated.
[0108] Further, it should be noted that optically detecting according to embodiments could also include detection in the infrared range, for example by means of infrared cameras.
[0109] All listings of materials, environmental influences, electric characteristics and optical characteristics stated herein are to be considered as exemplary and not as limiting.
[0110] Although some aspects have been described in the context of an apparatus, it is obvious that these aspects also represent a description of the corresponding method, such that a block or device of an apparatus also corresponds to a respective method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or detail or feature of a corresponding apparatus. Some or all of the method steps may be performed by a hardware apparatus (or using a hardware apparatus), such as a microprocessor, a programmable computer or an electronic circuit. In some embodiments, some or several of the most important method steps may be performed by such an apparatus.
[0111] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard drive or another magnetic or optical memory having electronically readable control signals stored thereon, which cooperate or are capable of cooperating with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
[0112] Some embodiments according to the invention include a data carrier comprising electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
[0113] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
[0114] The program code may, for example, be stored on a machine-readable carrier.
[0115] Other embodiments comprise the computer program for performing one of the methods described herein, wherein the computer program is stored on a machine readable carrier.
[0116] In other words, an embodiment of the inventive method is, therefore, a computer program comprising a program code for performing one of the methods described herein, when the computer program runs on a computer.
[0117] A further embodiment of the inventive method is, therefore, a data carrier (or a digital storage medium or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium, or the computer-readable medium are typically tangible or non-volatile.
[0118] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transmitted via a data communication connection, for example via the Internet.
[0119] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
[0120] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
[0121] A further embodiment in accordance with the invention includes an apparatus or a system configured to transmit a computer program for performing at least one of the methods described herein to a receiver. The transmission may be electronic or optical, for example. The receiver may be a computer, a mobile device, a memory device or a similar device, for example. The apparatus or the system may include a file server for transmitting the computer program to the receiver, for example.
[0122] In some embodiments, a programmable logic device (for example a field programmable gate array, FPGA) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus. This can be universally applicable hardware, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.
[0123] The apparatuses described herein may be implemented, for example, by using a hardware apparatus or by using a computer or by using a combination of a hardware apparatus and a computer.
[0124] The apparatuses described herein or any components of the apparatuses described herein may be implemented at least partly in hardware and/or software (computer program).
[0125] The methods described herein may be implemented, for example, by using a hardware apparatus or by using a computer or by using a combination of a hardware apparatus and a computer.
[0126] The methods described herein or any components of the methods described herein may be performed at least partly by hardware and/or by software.
[0127] While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.