METHOD FOR RECOGNIZING SURFACES

20230206637 · 2023-06-29

Abstract

The invention relates to a method for recognizing surfaces (2), in particular for optically recognizing or identifying structured and/or pictorial surfaces (2). The method comprises the steps of: focusing a camera (5) onto a prominent image dot (3) on the surface (2) to be sensed; creating at least one live video stream of a recognizable, high-contrast area of the image dot (3); creating at least two images of the recognizable, high-contrast area of the image dot (3); analyzing each image frame of the live video stream; guiding the camera (5) to the image frame of the video having the greatest depth of detail; storing the image having the greatest depth of detail as a reference image; and comparing the image frame having the greatest depth of detail with a target image or reference image.

Claims

1-4. (canceled)

5. A method for recognizing surfaces, the method comprising: focusing a camera on a prominent image dot of the surface to be assessed; creating at least one live video stream of a recognizable, high-contrast area of the image dot; creating at least two images of the recognizable, high-contrast area of the image dot; analyzing each frame of the live video stream; guiding the camera to the single frame of the video with the highest level of detail; storing the image with the highest level of detail as a reference image; and comparing the single image with the highest level of detail with a target or reference image.

6. The method according to claim 5, wherein the position of the target image in the single frame of the video is determined.

7. The method according to claim 5, wherein a distance of the camera from the surface is 9-10 cm.

8. The method according to claim 6, wherein a distance of the camera from the surface is 9-10 cm.

9. The method according to claim 5, wherein the reference image is compared with a photograph of the prominent image dot of the surface created at a later time.

10. The method according to claim 6, wherein the reference image is compared with a photograph of the prominent image dot of the surface created at a later time.

11. The method according to claim 7, wherein the reference image is compared with a photograph of the prominent image dot of the surface created at a later time.

12. The method according to claim 8, wherein the reference image is compared with a photograph of the prominent image dot of the surface created at a later time.

13. The method according to claim 9, wherein the reference image is compared with a photograph of the prominent image dot of the surface created at a later time.

Description

[0021] The figures show:

[0022] FIG. 1: an arrangement of a camera according to the invention over the surface to be detected,

[0023] FIG. 2: the camera according to FIG. 1 with glare protection.

[0024] A surface 2 to be assessed, in the example a painting 1, is first aligned horizontally or vertically (FIG. 1) and illuminated in an optimal, shadow-free manner by means of daylight and/or artificial light.

[0025] Subsequently, the auto-focusing of a camera 5 of a mobile device 4, for example a tablet or a smartphone, is activated, and the mobile device is aligned approximately parallel (horizontally or vertically) to the surface 2, above a prominent image dot, at a distance 6 of, for example, 9-10 cm.

[0026] The camera 5 is considered focused if, for example, the lens of the camera 5 does not re-focus within about 0.5 seconds. For this purpose, the physical position of the lens is continuously monitored. If the average of the lens positions of the camera 5 obtained over the last approx. 0.5 seconds corresponds to the most recently obtained lens position, the camera 5 is classified as focused and triggered by the software of the mobile device 4.
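The focus criterion of paragraph [0026] can be sketched as follows. This is an illustrative reconstruction, not the specified implementation: the class name, window size, and tolerance are assumptions, and a real mobile OS exposes lens positions through its own camera API.

```python
from collections import deque

class FocusDetector:
    """Sketch of the focus criterion from paragraph [0026]: the camera
    counts as focused when the mean of the lens positions sampled over
    the last ~0.5 s matches the newest sample within a small tolerance."""

    def __init__(self, window_size=15, tolerance=0.01):
        # ~15 samples cover roughly 0.5 s at ~30 fps (assumed frame rate)
        self.samples = deque(maxlen=window_size)
        self.tolerance = tolerance

    def update(self, lens_position):
        """Feed one lens-position sample; return True once focused."""
        self.samples.append(lens_position)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        mean = sum(self.samples) / len(self.samples)
        return abs(mean - lens_position) <= self.tolerance
```

A steady lens position fills the window with near-identical samples and trips the criterion; a hunting autofocus keeps the mean away from the latest sample.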

[0027] From the last determined physical lens position, the actual distance to the focused object (image dot 3 of the surface 2) can be calculated by the aforementioned software, provided that the camera 5 has already been calibrated.

[0028] Based on the calibrated reference distances and the corresponding lens positions, the control electronics of the mobile device 4 calculate the current distance to the focused object. If this distance corresponds to a defined specification, a video recording of the recognizable area is created automatically.
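The distance calculation of paragraphs [0027]-[0028] can be sketched as interpolation between calibrated reference pairs. The function names, the linear interpolation, and the calibration values are assumptions for illustration; the specification only states that reference distances and lens positions are used.

```python
def estimate_distance(lens_position, calibration):
    """Estimate the camera-to-object distance (cm) from the current lens
    position by linear interpolation between calibrated
    (lens_position, distance_cm) reference pairs ([0027]-[0028])."""
    calibration = sorted(calibration)
    # clamp to the calibrated range at the edges
    if lens_position <= calibration[0][0]:
        return calibration[0][1]
    if lens_position >= calibration[-1][0]:
        return calibration[-1][1]
    for (p0, d0), (p1, d1) in zip(calibration, calibration[1:]):
        if p0 <= lens_position <= p1:
            t = (lens_position - p0) / (p1 - p0)
            return d0 + t * (d1 - d0)

def in_capture_range(distance_cm, lo=9.0, hi=10.0):
    """Trigger the automatic video recording only inside the
    9-10 cm window named in claims 7 and 8."""
    return lo <= distance_cm <= hi
```

The recording would then be started only while `in_capture_range(estimate_distance(...))` holds.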

[0029] Each individual image of the live video stream is sent to a subroutine of the control electronics for processing (if necessary, individual frames can remain unprocessed at the expense of accuracy). In each image to be processed, high-contrast image dots 3 are identified; colours, contrasts, distances and/or depths of structures are determined automatically. These image dots 3 are surrounded by image dots of significantly stronger or weaker intensity.

[0030] Geometric shapes are then projected on the basis of the identified high-contrast image dots 3. These image dots 3 form the corners of the projected geometric figures. The number, positions, and sizes of the geometric figures are stored in a data set for each image.
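Paragraphs [0029]-[0030] can be sketched with a minimal pure-Python example. Everything here is an assumed concretization: the neighbourhood contrast test, the threshold, and the choice of triangles as the projected geometric figures are illustrative stand-ins for whatever the actual implementation uses.

```python
from itertools import combinations

def high_contrast_dots(image, threshold=60):
    """Flag pixels whose intensity differs strongly from the mean of
    their 4-neighbourhood -- a stand-in for the high-contrast image
    dots 3 of paragraph [0029]. `image` is a 2-D list of grayscale
    values; `threshold` is an assumed tuning value."""
    h, w = len(image), len(image[0])
    dots = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = (image[y-1][x] + image[y+1][x] +
                          image[y][x-1] + image[y][x+1]) / 4
            if abs(image[y][x] - neighbours) >= threshold:
                dots.append((x, y))
    return dots

def figure_data_set(dots):
    """Project triangles whose corners are the identified dots and
    record their number, corner positions, and size (area), as the
    data set described in paragraph [0030]."""
    figures = []
    for (ax, ay), (bx, by), (cx, cy) in combinations(dots, 3):
        area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
        if area > 0:  # skip collinear triples
            figures.append({"corners": [(ax, ay), (bx, by), (cx, cy)],
                            "size": area})
    return {"count": len(figures), "figures": figures}
```

One such data set is produced per processed frame of the live video stream.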

[0031] The target image or reference image, which is to be found in the video stream, is prepared. The data sets of the target image and of the single image with the highest level of detail are compared with each other. Whether the target image is found in the single frame of the video can be determined by comparing the data sets. The number of projected geometric figures indicates the level of detail of an image.
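The frame selection and data-set comparison of paragraphs [0031]-[0032] reduce to two small operations, sketched below. The relative-tolerance match on figure counts is an assumption; per the description, a full comparison would also match the positions and sizes stored in the data sets.

```python
def best_frame(frame_data_sets):
    """Pick the frame with the highest level of detail, i.e. the largest
    number of projected geometric figures ([0031]-[0032])."""
    return max(frame_data_sets, key=lambda ds: ds["count"])

def matches_target(frame_ds, target_ds, tolerance=0.1):
    """Crude data-set comparison sketch: the target counts as found when
    the figure counts agree within a relative tolerance (assumed
    criterion; positions and sizes are ignored here)."""
    if target_ds["count"] == 0:
        return False
    deviation = abs(frame_ds["count"] - target_ds["count"]) / target_ds["count"]
    return deviation <= tolerance
```

`best_frame` selects the reference candidate; `matches_target` stands in for the comparison that locates the target image in the video frame.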

[0032] The image with the highest level of detail, i.e., the largest number of details, is selected for further processing and stored as a reference (target image), unless such a reference has already been created or stored at an earlier point in time.

[0033] Using the coordinates of the image dot 3 determined in this way, a marking for the target image can be drawn into the single frame with the highest level of detail, and the position of the target image in the single frame of the video can be determined.

[0034] By subsequently importing the modified individual image into the video stream, the user can be guided to the desired position (target image), or the position of the target image is displayed.

[0035] If the surface 2 to be assessed is reflective and/or is located behind a reflective, transparent cover, the mobile device 4 can be provided with a removable glare protection 7. In the example, this glare protection 7 is a flat square frame with a cut-out 8 for the camera 5. The glare protection 7 can, for example, be mounted magnetically and positioned on the mobile device 4 by means of a guide rail or click connection (FIG. 2).

[0036] In order to assess a surface for the first time, the aforementioned software/app must be installed on the mobile device 4 and this must be registered and authenticated.

[0037] Registration Procedure:

[0038] 1. Collection of the personal data of the owner of the object to be recorded (artwork)

[0039] 2. Acquisition of the key data of the object, such as name, creator, year of creation, dimensions in centimetres

[0040] 3. Photography of the front side of the entire object

[0041] 4. Optional photography of the back side/remaining sides of the object

[0042] 5. Selection of an area on the object that is to be used as a recognizable area (and/or fingerprint)

[0043] 6. Capture of the surface of the object using the mobile device 4 according to the aforementioned description (creation of an image with the highest level of detail of the selected area)

[0044] 7. To further increase the quality of the selected reference image (target image), the aforementioned live video stream is created with the same camera 5 and, if necessary, further individual images of the selected area are created

[0045] 8. The ultimately best image from steps 5-7 is saved

[0046] 9. To ensure that the stored image is suitable as a reference for a fingerprint, another image of the selected area is created according to the above description and stored as a secondary reference image

[0047] 10. The reference image from step 8 and the secondary reference image from step 9 are used for future comparisons of the selected area

[0048] Authentication Process:

[0049] The user wants to determine whether a work of art/object in his possession corresponds to, or is identical with, the object originally recorded during the registration process.

[0050] A1. The user selects the artwork/object to be authenticated from his collection

[0051] A2. The reference image from step 8 of the registration process is used to create the best possible image of the same area (image dot 3) from a distance 6 of, for example, 9-10 cm, according to steps 6 and/or 7 of the registration

[0052] A3. The best image from step A2 is saved as an authentication image for further processing

[0053] A4. The stored authentication image is compared with the reference image from the registration process by the control electronics/app of the mobile device 4

[0054] A5. On the basis of the matches between the authentication image and the reference image, the user receives a statement as to whether or not the artwork/object is the one originally recorded during the registration process

LIST OF REFERENCE NUMBERS

[0055] 1 painting

[0056] 2 surface

[0057] 3 image dot

[0058] 4 mobile device

[0059] 5 camera

[0060] 6 distance

[0061] 7 glare protection

[0062] 8 cut-out