METHOD AND APPARATUS FOR DETERMINING DEFORMATIONS ON AN OBJECT

Abstract

The invention relates to a method for determining deformations on an object, wherein the object is illuminated and moved while being illuminated. In the process, the object is observed by means of at least one camera, and at least two camera images are generated at different times by said camera. In the camera images, polygonal chains are ascertained in each case for reflections at the object caused by shape features. The shape features are classified on the basis of the behavior of the polygonal chains over the at least two camera images, and a two-dimensional representation is generated which represents a spatial distribution of deformations. Moreover, the invention relates to a corresponding apparatus.

Claims

1-15. (canceled)

16. A method for determining deformations on an object, comprising: in an illumination process, irradiating the object by at least one illumination device with electromagnetic radiation of at least such a frequency that the object reflects the electromagnetic radiation as reflected radiation; moving the object and the at least one illumination device relative to each other in a direction of movement during the illumination process; observing the object by at least one camera and producing at least two camera images by the at least one camera by observing at different times t.sub.i, i=1, . . . , n, which image the respectively reflected radiation; determining, in the camera images, at least one reflection of the radiation on the object caused by a shape feature of the object; determining respectively one polygonal chain for at least one of the at least one reflections in the at least two camera images; producing a two-dimensional representation from the at least two camera images, in which, in one of the two dimensions, the times t.sub.i are plotted, at which the at least two camera images are produced, and, in the other of the two dimensions, termed x-dimension, a spatial coordinate perpendicular to the direction of movement is plotted, and at least one property of the at least one polygonal chain in the camera image is entered as value at the points of the two-dimensional representation at the time t.sub.i at the location x; and classifying at least one of the shape features as deformation or non-deformation on the basis of the behavior of the at least one polygonal chain over the at least two camera images.

17. The method according to claim 16, wherein an image is produced which images a spatial distribution of those shape features which are classified as deformations.

18. The method according to claim 16, wherein the at least one property of the polygonal chain is an average incline of the polygonal chain at the spatial coordinate in the x-dimension, and/or a spacing between two sections of the polygonal chain at the spatial coordinate in the x-dimension and/or a position of the polygonal chain in the direction of movement.

19. The method according to claim 18, a background being present which essentially does not reflect or emit electromagnetic radiation of the frequency with which the object is irradiated in the illumination process, and being disposed such that the object reflects the background in the direction of the at least one camera where it does not reflect the light of the at least one illumination device in the direction of the at least one camera.

20. The method according to claim 16, comprising determining a distance between the at least one camera and the object by at least one distance sensor disposed at a prescribed location, and scaling the two-dimensional representation on the basis of the distance in the direction of the t.sub.i, and/or the method being controlled by measuring values of at least one control sensor, preferably at least one light barrier, and/or determining the speed of movement of the object during the illumination process by at least one speed sensor and/or by image processing in the camera images, and scaling the two-dimensional representation on the basis of the speed in the direction of the t.sub.i.

21. The method according to claim 16, wherein the illumination device is at least one, or precisely one, light strip which at least partially surrounds a region through which the object is moved during the illumination process.

22. The method according to claim 16, which includes a further determining step in which, from the two-dimensional representation, a position and/or size of the deformation is determined.

23. The method according to claim 16, further including an assignment process in which the two-dimensional representation is assigned to individual parts of the object.

24. The method according to claim 16, wherein the object is a motor vehicle and/or the deformations are dents in a surface of the object.

25. The method according to claim 16, wherein the shape features are classified on the basis of the behavior of the at least one polygonal chain over the at least two camera images by at least one neural network.

26. The method according to claim 25, wherein the neural network is trained by an object that is irradiated by at least one illumination device with electromagnetic radiation of at least such a frequency that the object reflects the electromagnetic radiation as reflected radiation, the object is moved during the illumination relative to the at least one illumination device in the direction of movement, the object is observed by the at least one camera and at least two camera images are produced by the at least one camera by the observation at different times t′.sub.i, i=1, . . . , m, which image the respectively reflected radiation, reflections of the radiation on the object caused by shape features of the object are determined in the camera images, respectively one polygonal chain is determined for at least one of the reflections in the at least two camera images, a two-dimensional representation is produced from the at least two camera images, in which, in one of the two dimensions, the times t′.sub.i are plotted, at which the at least two camera images are produced, and, in the other of the two dimensions, termed x-dimension, a spatial coordinate perpendicular to the direction of movement is plotted, and at least one property of the polygonal chain in the camera image is entered as value at the points of the two-dimensional representation at the time t′.sub.i at the location x, and at least some of the shape features are prescribed as deformations of the object and the behavior of the polygonal chains corresponding to these deformations over the at least two camera images is prescribed to the neural network as characteristic for the deformations.

27. A device for determining deformations on an object having at least one illumination device with which a measuring region, through which the object can be moved, can be illuminated with electromagnetic radiation of at least such a frequency that the object reflects the electromagnetic radiation as reflected radiation, at least one camera with which the object can be observed, whilst it is moved through the measuring region, and with which, by the observation, at least two camera images can be produced at different times t.sub.i, i=1, . . . n, which image the respectively reflected radiation, having furthermore an evaluation unit with which at least one reflection on the object, caused by a shape feature of the object, can be detected in the camera images, for at least one of the at least one reflections in the at least two camera images respectively a polygonal chain being able to be determined, from the at least two camera images, a two-dimensional representation being able to be produced, in which, in one dimension of the two dimensions, the times t.sub.i are plotted, at which the at least two camera images are produced and, in the other of the two dimensions, termed x-dimension, a spatial coordinate perpendicular to the direction of movement being plotted, and at least one property of the at least one polygonal chain in the camera image being entered as value at the points of the two-dimensional representation at the time t.sub.i at the location x, and the at least one of the shape features being able to be classified as deformation on the basis of the behavior of the at least one polygonal chain over the at least two camera images.

28. The device according to claim 27, wherein the illumination device is at least one, or precisely one, light strip which surrounds the measuring region at least partially.

29. The device according to claim 27, which has a background which is configured such that it does not reflect or emit electromagnetic radiation of the frequency with which the object can be illuminated by the at least one illumination device, the background being disposed such that the object reflects the background in the direction of the at least one camera, where it does not reflect the at least one light source in the direction of the at least one camera.

30. The device according to claim 28, which has a background which is configured such that it does not reflect or emit electromagnetic radiation of the frequency with which the object can be illuminated by the at least one illumination device, the background being disposed such that the object reflects the background in the direction of the at least one camera, where it does not reflect the at least one light source in the direction of the at least one camera.

Description

[0064] The invention is intended to be explained subsequently by way of example with reference to some Figures.

[0065] There are shown:

[0066] FIG. 1 an embodiment of the device according to the invention, by way of example,

[0067] FIG. 2 a process diagram, by way of example, for determining a polygonal chain in the method according to the invention, and

[0068] FIG. 3 a procedure, by way of example, for producing a two-dimensional representation,

[0069] FIG. 4 by way of example, a two-dimensional representation which is producible in the method according to the invention,

[0070] FIG. 5 a camera image, by way of example, and

[0071] FIG. 6 an end result of a method according to the invention, by way of example.

[0072] FIG. 1 shows an example of a device according to the invention with which a method according to the invention for determining deformations on an object can be implemented. In the example shown in FIG. 1, the device has a background 1 which here is configured as a tunnel with two walls parallel to each other and a round roof, for example a roof in the shape of a circular-cylinder section. The background 1, in the shown example, has a colour on its inner surface which differs significantly from the colour with which an illumination device 2, here a light arc 2, illuminates an object in the interior of the tunnel. If, for example, the illumination device 2 produces visible light, then the background can advantageously have a dark or black colour on its inner surface which is orientated towards the object. The object is not illustrated in FIG. 1.

[0073] The light arc 2 extends, in the shown example, in a plane which is perpendicular to the direction of movement with which the object moves through the tunnel 1. The light arc here extends essentially over the entire extension of the background 1 in this plane, which is however not necessary. It is also adequate if the light arc 2 extends only over a partial section of the extension of the background in this plane. Alternatively, the illumination device 2 can also have one or more individual light sources.

[0074] In the example shown in FIG. 1, three cameras 3a, 3b and 3c are disposed on the light arc, which cameras observe a measuring region in which the object, when it is moved in the direction of movement through the tunnel 1, is illuminated by the at least one illumination device. The cameras then respectively detect the light emanating from the illumination device 2 and reflected by the object and produce camera images of the reflections at at least two times respectively. In the shown example, the viewing directions of the cameras 3a, 3b, 3c extend in the plane in which the light arc 2 extends, or in a plane parallel thereto. The central camera 3b looks perpendicularly downwards, and the lateral cameras 3a and 3c look, perpendicular to the viewing direction of the camera 3b and at the same height, towards each other. It should be noted that fewer or more cameras can also be used, and their viewing directions can also be orientated differently.

[0075] The cameras 3a, 3b and 3c respectively produce camera images 21 in which, as shown by way of example in FIG. 2, polygonal chains can be determined. FIG. 2 shows how a polygonal chain of the reflection of the light arc 2 on the surface of the object can be determined in one of the camera images 21. The reflections are thereby produced by shape features of the object. The camera image 21 is first processed by a filter 22 which produces, for example, a grey-scale image 23 from the coloured camera image 21; this can be, for example, a false-colour image. From the resulting grey scales, a binary image can then be produced by comparison with a threshold value, for example by all the pixels with grey scales above the threshold value assuming the one value and all the pixels with grey scales below the threshold value the other value. In a further filtering, in addition all pixels which were not produced by a reflection can be set to zero. Thus, for example, a black-white camera image 23 can be produced by the filter.

[0076] On the black-white camera image 23 thus produced, together with the original camera image 21, an edge recognition 24 can be implemented. The edge image thus determined can then be entered into a further filter 25 which produces a polygonal chain 26 of the reflection of the light arc 2.

[0077] The edge recognition runs, for example, through the RGB camera image on the basis of the white pixels in the black-white image and detects, for each x-position, the two most pronounced edges (upper and lower edge of the reflection). Filter 25 combines these edges to form a polygonal chain. Further plausibility tests can exclude false reflections so that, at the end, only the polygonal chain of the reflection of the illumination source remains.
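Purely for illustration, the per-column edge extraction described above can be sketched as follows. This is a simplified sketch, not the patent's implementation: it assumes a single grey-scale image array, and the function name, threshold value and the use of simple thresholding instead of a full edge detector are choices made here for clarity.

```python
import numpy as np

def extract_polygonal_chain(gray, threshold=128):
    """For each x-column of a thresholded reflection image, find the
    uppermost and lowermost reflection pixel (the two most pronounced
    edges) and collect them into upper/lower chains.
    Columns without any reflection pixel are marked with -1."""
    binary = gray >= threshold                 # binarisation step
    h, w = binary.shape
    upper = np.full(w, -1, dtype=int)
    lower = np.full(w, -1, dtype=int)
    for x in range(w):
        ys = np.flatnonzero(binary[:, x])      # reflection pixels in column x
        if ys.size:
            upper[x] = ys[0]                   # upper edge of the reflection
            lower[x] = ys[-1]                  # lower edge of the reflection
    return upper, lower

# toy image: a horizontal bright band occupying rows 3..5
img = np.zeros((10, 6), dtype=np.uint8)
img[3:6, :] = 200
up, lo = extract_polygonal_chain(img)
```

In a real pipeline, the plausibility tests mentioned above would additionally discard columns whose edges do not connect smoothly to their neighbours, so that only the reflection of the illumination source remains.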

[0078] FIG. 3 shows, by way of example, how a two-dimensional representation 31 is produced from the camera images 21 produced at different times t.sub.i. In the two-dimensional representation 31, each line corresponds to a camera image at a time t.sub.i, i=1, . . . , n. In the horizontal direction, an x-coordinate can be plotted in the two-dimensional representation 31, which preferably corresponds to a coordinate of the camera images 21, particularly preferably one perpendicular to the direction of movement in the camera image 21. At each point of the two-dimensional representation 31, for example an average gradient or an average incline of the polygonal chain 26 in the camera image at the time t.sub.i at the point x can now be entered, and/or, for example, a vertical thickness of the polygonal chain, i.e., a thickness in the direction perpendicular to the x-direction in the camera image. As a further value, a y-position of the polygonal chain, i.e., a position in the direction perpendicular to the x-direction in the camera image, could also be entered, for example coded as a third property. A coding of the y-position of the polygonal chain as vertical y-shift in the 2D representation, in combination with the y-positions of the camera images t.sub.i, is likewise possible, however not plotted in FIG. 3. Advantageously, the two-dimensional representation 31 can be stored as a colour image in which the colour components red, blue and green carry the values of different properties of the polygonal chain. For example, the gradient or the mentioned average incline of the polygonal chain could be stored in the green component, and the vertical thickness of the polygonal chain in the blue component.
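The construction of the 2D representation described above can be sketched as follows, again purely as an illustration: one image row per time t.sub.i, the chain's average incline in the green channel and its vertical thickness in the blue channel. The function names and the use of `np.gradient` as the incline estimate are assumptions made here, not taken from the patent.

```python
import numpy as np

def frame_row(upper, lower):
    """Per-x properties of one polygonal chain in one camera image:
    the average local incline of the chain and the vertical thickness
    of the reflection."""
    slope = np.gradient(upper.astype(float))    # incline at each x
    thickness = (lower - upper).astype(float)   # vertical extent at each x
    return slope, thickness

def build_representation(chains):
    """Stack one row per time t_i into an RGB image: t along one
    dimension, x along the other, chain properties as colour values."""
    n, w = len(chains), len(chains[0][0])
    rep = np.zeros((n, w, 3))
    for i, (up, lo) in enumerate(chains):
        g, b = frame_row(up, lo)
        rep[i, :, 1] = g        # green channel: incline
        rep[i, :, 2] = b        # blue channel: vertical thickness
    return rep

# two synthetic frames: a flat chain, then a uniformly tilted one
up0 = np.array([4, 4, 4, 4]); lo0 = up0 + 2
up1 = np.array([4, 5, 6, 7]); lo1 = up1 + 2
rep = build_representation([(up0, lo0), (up1, lo1)])
```

A deformation would show up in such an image as a localised change in the green (incline) or blue (thickness) values over a few consecutive rows.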

[0079] FIG. 4 shows, by way of example, a two-dimensional representation produced in this way. FIG. 4 contains recognisable curved lines which result from the y-deformation of the reflections in the camera images. Numerous shape features, three of which are particularly pronounced as deformations 41a, 41b and 41c, can be recognised. These appear in the two-dimensional representation with a colour value different from that of the regions in which no deformation is present.

[0080] Such two-dimensional representations can be used to train a neural network. In a concrete example, the behaviour of the reflections is converted automatically into this 2D representation. There, the deformations are determined and noted (for example manually): markers are painted directly onto the 2D representation (e.g., by copy/paste). Since they preferably always have the same shape, these markers can easily be recognised automatically and, for example, converted into an XML representation of the dent positions on the 2D representation. Only the 2D representation with its marks then needs to be learned; this forms the basis for the training of the neural network (NN). In the later application of the NN, there is then only the 2D representation and no longer any markers.
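The automatic recognition of painted markers can be sketched as follows. The sketch assumes, as the text suggests, that all markers have the same fixed shape; the specific marker colour, the fixed 3×2 pixel size and the function name are illustrative assumptions only.

```python
import numpy as np

MARKER = (255, 0, 255)       # assumed pure marker colour (magenta)
MARK_W, MARK_H = 3, 2        # assumed fixed marker size in pixels

def find_markers(image):
    """Locate painted markers of a known colour and fixed size in the
    annotated 2D representation; return bounding boxes (x, y, w, h).
    A marker is found at each top-left corner of a marker-coloured
    region, i.e. a marker pixel with no marker pixel above or left."""
    mask = np.all(image == MARKER, axis=-1)
    boxes = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and (y == 0 or not mask[y - 1, x]) \
                          and (x == 0 or not mask[y, x - 1]):
                boxes.append((x, y, MARK_W, MARK_H))
    return boxes

img = np.zeros((8, 8, 3), dtype=np.uint8)
img[1:3, 2:5] = MARKER       # one painted marker
boxes = find_markers(img)
```

The resulting box list could then be serialised, for instance into the XML dent-position representation mentioned above, as training annotations for the neural network.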

[0081] FIG. 5 shows, by way of example, a camera image which was recorded, here of the bonnet of a car. Numerous reflections, some of which are marked as 51a, 51b, 51c and 52, can be recognised. The reflections are produced by shape features of the bonnet, such as bends and surfaces, reflecting the illumination source. On the planar parts of the bonnet, a strip-shaped illumination unit is reflected and produces the reflections 52 in the camera image; the illumination unit appears here as two strips in the flat region, but has steps at the bends.

[0082] The reflections can then be surrounded by polygons which can be further processed as described above.

[0083] FIG. 6 shows an end result of a method according to the invention, given by way of example. Here, a two-dimensional representation forms the background of the image, on which recognised deformations are marked by rectangles. Hail dents are prescribed here as the deformations to be recognised. By applying the neural network, all of the hail dents were determined as deformations and provided with a rectangle.

[0084] The invention presented here is advantageously aimed at the mobile low-cost market, which requires assembly and dismantling as rapid as possible and also measurements as rapid as possible, and hence eliminates all of the above-mentioned disadvantages. For example, the assessment of hail damage on vehicles can preferably be effected following the weather event, at variable locations and with a high throughput. Some existing approaches use, comparably with the present invention, the recording of reflections of light patterns, in which the object can also be moved partially (an expert or even the owner himself drives the car through under the device).

[0085] The special feature of the invention presented here, in contrast to existing approaches, resides in calculating a 2D reconstruction or 2D representation as a description of the behaviour of the reflection over time, in which shape deviations can be recognised particularly well. This behaviour arises only by moving the object to be examined or the device. Since only the behaviour of the reflection over time is relevant here, it is possible, in contrast to existing systems, to restrict the setup, for example, to a single light arc as the source of the reflection.

[0086] The reconstruction or representation is a visualisation of this behaviour which can be interpreted by humans and need not necessarily be assignable proportionally to the examined object shape. Thus, for example, not the depth of a deviation but preferably its size is determined, which proves sufficient for an assessment.

[0087] In the following, a course of the method, given by way of example, is summarised briefly. This course is advantageous but can also be configured differently.

[0088] 1. One or more light sources of a prescribed shape (e.g., strip-like) are provided and span a space provided for the measurement. The light sources can have, for example, the shape of a light strip and surround the provided space in the form of an arc. The light can be in the non-visible spectrum, white, or radiate in any other colour.

[0089] 2. A material which is contrast-rich relative to the light is provided in the background of the light sources. The material can be, for example in the case of white light, a dark material which spans the provided space before use.

[0090] 3. Objects to be measured pass through this space. For this purpose, the object can move through the provided space, or a device can travel along the provided space over the stationary object.

[0091] 4. One or more sensors are advantageously provided inside the spanned space for measuring the distance to the object surface.

[0092] 5. One or more sensors for controlling the measurement can be provided. This can be, for example, a light barrier with which the measurement is started and stopped as soon as the object enters or leaves the spanned space.

[0093] 6. One or more cameras are present which are directed towards the object to be examined inside the spanned space and detect the reflections of the light sources. The cameras can be high-resolution (e.g., 4K or more) or also operate with higher frame rates (e.g., 100 Hz or more).

[0094] 7. An algorithm or sensor which determines the direction and speed of the object in the spanned space can be used.

[0095] 8. An algorithm for quality measurement of the calculated 2D representation based on
[0096] i. image processing,
[0097] ii. markers fitted on the object surface,
[0098] iii. sensor values
[0099] can be used.

[0100] Such sensors can measure, for example, the speed at which the object passes the cameras and hence give an indication of the minimally visible movement step between two images of the camera, or of the movement blur in the images.

[0101] 9. An algorithm can be used which calculates the surrounding polygonal chain of each light reflection in the camera image.

[0102] The algorithm can comprise, inter alia, methods for binarisation, edge recognition (e.g., Canny edge), and also noise filters or heuristic filters.

[0103] 10. An algorithm can be used which calculates a 2D representation of the object surface from the behaviour of the polygonal chains.

[0104] This representation visualises the behaviour of the polygonal chains on object surfaces which correspond to the expected shape, and also the deviations therefrom, in a different illustration. One embodiment can be, for example, a false-colour or grey-scale illustration. The representation need not be proportional to the object surfaces or of the same detail precision. Information about a shape to be expected is not required; if this information is present, it can be used.

[0105] 11. An algorithm can be used which determines, on the basis of the 2D representation, application-specific deviations of the object surface shapes. The deviations arise, e.g., from

[0106] a. application-specific assumptions about the shape and behaviour of the reflections. Smooth surfaces produce, for example, smooth, low-noise and low-distortion reflections. In this case, expected shapes would be available.

[0107] b. comparison of the shape and behaviour of the reflection with a reference measurement of an identically shaped object series, or of the same object prior to use. For this purpose, the reference measurement can be stored, for example in conjunction with an object series number or type number (in the case of vehicles, a number plate), in a database. Here also, expected shapes could be used.

[0108] c. application of a trained neural network which recognises precisely this type of deviation of the object surface shapes on the 2D representation. For this purpose, no expected shape of the deviation is present.

[0109] In order to recognise the deviation with a trained algorithm (neural network) and to classify it, it is advantageous if it is known how it looks; it is therefore advantageous to know an expected shape of the deviation. The precise shape of the object is, however, not necessary (e.g., dents on the roof and bonnet of a motor vehicle can be recognised without the information roof/bonnet being present).

[0110] 12. An algorithm can optionally be used to determine position and size of the deviations, based on a measurement of
[0111] a. the shape or size of the light reflection,
[0112] b. a marker of known shape and size fitted on the object surface. The marker can have, for example, the form of a circle, rectangle, cross, etc. and be present in a known colour,
[0113] c. sensor values (e.g., distance sensors).

[0114] 13. An algorithm can be used to assign the 2D representation to various individual parts of the object on the basis of
[0115] i. segmentation of the object image in the camera image (e.g., by trained neural networks),
[0116] ii. comparison with existing 2D or 3D shape information about the object (e.g., CAD data, 3D scans),
[0117] iii. sensor measurements,
[0118] iv. markers fitted on the object surface.

[0119] If the speed of the object is measured, the 2D colour illustration can be normalised in the vertical size by each pixel row being written into the image a number of times corresponding to the speed.
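This speed normalisation can be sketched as follows. The sketch assumes one measured row and one speed value per camera image; the function name and the linear rows-per-speed factor are assumptions made for illustration.

```python
import numpy as np

def write_rows(rows, speeds, px_per_unit_speed=1.0):
    """Normalise the vertical size of the 2D representation: each
    measured row is written into the image a number of times
    proportional to the object speed at that frame, so that equal
    object distances map to equal image heights."""
    out = []
    for row, v in zip(rows, speeds):
        repeats = max(1, round(v * px_per_unit_speed))
        out.extend([row] * repeats)     # duplicate row according to speed
    return np.array(out)

# two frames; the object moves three times faster during the second one
rows = [np.array([1.0, 1.0]), np.array([2.0, 2.0])]
img = write_rows(rows, speeds=[1.0, 3.0])
```

With this normalisation, a dent of a given physical size occupies roughly the same number of rows regardless of how fast the object passed the cameras.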

[0120] In addition to the gradient and the vertical thickness of the polygonal chain, advantageously also its y-position at any place x in the camera image can be coded in the 2D colour illustration. This means that the y-position of the coding of a polygonal chain in the 2D colour illustration can depend, e.g., upon:
[0121] the frame number of the camera video,
[0122] the speed of the object (=number of vertical pixels used per camera image),
[0123] and, in addition, the y-position of the polygonal chain in the camera image respectively at the position x.

[0124] This variant is not illustrated in FIG. 3, but is in FIG. 4. It reinforces the appearance of object shape deviations in the 2D colour illustration.
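The three dependencies listed above can be combined into a single row index, sketched here under the assumption of a fixed number of rows per camera image and an illustrative scale factor (both names are hypothetical):

```python
def coded_row(frame_index, rows_per_frame, y_chain, y_scale=1.0):
    """Vertical position of a chain's coding in the 2D illustration:
    a base row derived from the frame number and the speed-dependent
    number of rows per camera image, shifted by the chain's
    y-position in the camera image."""
    return frame_index * rows_per_frame + round(y_chain * y_scale)

# frame 4, 3 rows per image (speed-dependent), chain at y = 2
r = coded_row(frame_index=4, rows_per_frame=3, y_chain=2)
```

The y-shift term is what makes the curved lines of FIG. 4 visible, since a dent displaces the reflection vertically in consecutive camera images.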

[0125] In order to support the production of training sets (=annotated videos) for the neural network, virtual 3D objects (for hail damage recognition, 3D car models) can be rendered graphically, or finished images of the object surface (for hail damage recognition, car images) can be used, on which, for example, artificial hail damage is produced with mathematical 2D functions (for hail damage recognition, WARP functions).
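One possible such 2D function is sketched below: a simple radial "pinch" warp applied to a surface image to imitate a hail dent. This is only a stand-in for the WARP functions mentioned above; the nearest-neighbour inverse mapping, the function name and the strength parameter are illustrative choices.

```python
import numpy as np

def warp_dent(img, cx, cy, radius, strength=0.5):
    """Apply a radial 'pinch' warp around (cx, cy) to imitate a hail
    dent on a rendered surface image. Pixels inside the radius sample
    their value from a point pulled towards the centre
    (nearest-neighbour inverse mapping); pixels outside are untouched."""
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = (dx * dx + dy * dy) ** 0.5
            if 0 < r < radius:
                f = 1.0 - strength * (1.0 - r / radius)  # pull inward
                sx = int(round(cx + dx * f))
                sy = int(round(cy + dy * f))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y, x] = img[sy, sx]
    return out

base = np.tile(np.arange(16, dtype=float), (16, 1))  # smooth gradient surface
dented = warp_dent(base, cx=8, cy=8, radius=5)
```

Applied to rendered car surfaces, such warps distort the reflections locally in the same way a real dent would, which is what the training data needs to exhibit.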

[0126] In the following, it is explained by way of example how the two-dimensional representation, or parts of it, can be assigned to individual parts of the object.

[0127] 1. A trained neural network obtains the camera image as input, detects possible components of the object and marks their size and position (possible annotation types are bounding boxes and black-white masks for each component).

[0128] 2. The behaviour of the reflection is examined: if it is known that specific components of the object cause a certain reflection behaviour (straight surfaces cause a straight reflection, highly curved surfaces a curved reflection), a classification of the components can be undertaken on the basis of the behaviour of the reflection.

[0129] 3. If CAD data or general spacing information relating to the construction are known, it can be predicted, on the basis of the distance information, whether no object, or a certain component of the object, is situated directly opposite the camera.

[0130] 4. Known markers which can be detected easily in the camera image can be applied at the beginning/end/corners of the component. On the basis of the known positions of the markers, conclusions can then be drawn about the position of the component of the object.