CONFIGURING A VISUALIZATION DEVICE FOR A MACHINE ZONE

20220097238 · 2022-03-31

    Abstract

    A method of configuring a visualization device for a machine zone is provided in which at least one sensor is arranged, wherein a reference marker is attached in the machine zone and an object marker is attached to the sensor, a respective at least two markers are detected by a detection device, and the two markers are linked to one another abstractly in their proximity relationship and/or geometrically in their mutual spatial locations.

    Claims

    1. A method of configuring a visualization device for a machine zone in which at least one sensor is arranged, wherein a reference marker is attached in the machine zone and an object marker is attached to the sensor; wherein a respective at least two markers are detected by a detection device; and wherein the two markers are linked to one another abstractly in their proximity relationship and/or geometrically in their mutual spatial locations.

    2. The method in accordance with claim 1, wherein markers are detected pair-wise and are linked with one another until all the markers attached in the machine zone have been detected.

    3. The method in accordance with claim 2, wherein a check is made whether one of the two markers respectively detected pair-wise had already been previously detected.

    4. The method in accordance with claim 2, wherein the detection device prompts to detect first the one marker and then the other marker and subsequently shows the generated link between the two markers to have it acknowledged.

    5. The method in accordance with claim 1, wherein the abstract linking of the markers takes place in the form of a graph.

    6. The method in accordance with claim 5, wherein the graph is arranged or rearranged such that adjacent nodes in the graph are also geometrically adjacent.

    7. The method in accordance with claim 1, wherein the geometrical link of the markers takes place by evaluating a value and/or a format of the detected markers.

    8. The method in accordance with claim 1, wherein the geometrical link of the markers takes place in that the movement of the detection device between the detections of different markers is monitored; and wherein at least two markers are detected at the same time.

    9. The method in accordance with claim 1, wherein the geometrical link of the markers takes place in that the movement of the detection device between the detections of different markers is monitored; and detections are evaluated during the alignment of the detection device from the one marker to the other marker.

    10. The method in accordance with claim 1, wherein a reference location is associated with a detected reference marker and/or a detected object marker has the sensor represented by it associated with it.

    11. The method in accordance with claim 1, wherein an object marker is arranged on a template with a mount for attachment to the sensor.

    12. The method in accordance with claim 11, wherein information on the position of the sensor relative to the object marker is encoded in the object marker.

    13. A method of visualizing a machine zone using a visualization device for a machine zone in which at least one sensor is arranged, wherein a reference marker is attached in the machine zone and an object marker is attached to the sensor; wherein a respective at least two markers are detected by a detection device; and wherein the two markers are linked to one another abstractly in their proximity relationship and/or geometrically in their mutual spatial locations, in which method first the reference marker is detected and then virtual sensor information from the environment of the reference marker is presented.

    14. The method in accordance with claim 13, wherein only sensor information from sensors is presented that are neighbors of the detected reference marker in accordance with the abstract link.

    15. The method in accordance with claim 13, wherein the sensor information is shown as a superposition with a live image.

    16. The method in accordance with claim 13, wherein the sensor information to be presented is read by the sensor, by a controller connected to the sensor, and/or from a database for sensors.

    17. The method in accordance with claim 13, wherein the sensor information comprises at least one of the following pieces of information: name of the sensor, address of the sensor, type of the sensor, a graphical model of the sensor, an alignment and/or a detection zone of the sensor, a sensor parameter, and/or measurement data of the sensor.

    18. The method in accordance with claim 17, wherein the detection zone of the sensor is a protected field or a region of interest.

    19. A template having an object marker for a configuration method of configuring a visualization device for a machine zone in which at least one sensor is arranged, wherein a reference marker is attached in the machine zone and an object marker is attached to the sensor; wherein a respective at least two markers are detected by a detection device; and wherein the two markers are linked to one another abstractly in their proximity relationship and/or geometrically in their mutual spatial locations, wherein the template has, in addition to the object marker, a mount suitable for a sensor and information encoded in the object marker with which a location of the object marker can be converted into a location of the sensor.

    Description

    [0030] The invention will be explained in more detail in the following also with respect to further features and advantages by way of example with reference to embodiments and to the enclosed drawing. The Figures of the drawing show in:

    [0031] FIG. 1 an overview representation of a robot cell with a plurality of sensors;

    [0032] FIG. 2a a template with an object marker and a mount for attachment to a sensor;

    [0033] FIG. 2b a template similar to FIG. 2a with a different object marker and a different mount for a different sensor type;

    [0034] FIG. 3 an overview representation of the robot cell in accordance with FIG. 1 now with reference markers attached in the robot cell and object markers attached to the sensors;

    [0035] FIG. 4 an exemplary flowchart for the detection of the markers during a configuration of a visualization of the robot cell;

    [0036] FIG. 5 an exemplary representation of the determined path between two detected markers for checking and confirming the path; and

    [0037] FIG. 6 an exemplary graph that is generated during a configuration from the successive detected markers in the robot cell in accordance with FIG. 3.

    [0038] FIG. 1 shows an overview representation of a machine zone 10 that is here designed by way of example as a robot cell. Further elements apart from a robot 12, of which a conveyor belt 14 and a switch cabinet 16 are shown as representative, are located therein. A plurality of sensors 18 are mounted in the machine zone 10 to monitor the robot 12, the conveyor belt 14, and further elements of the machine zone 10, for example access paths or materials that are supplied to the robot 12 and are processed by it. The sensors 18 can work autonomously, but are as a rule connected to one another and/or to a higher-ranking controller 20 marked as F1. The higher-ranking controller 20 or cell controller is preferably likewise connected to the robot controller of the robot 12 or at least partly acts as this robot controller.

    [0039] The following sensors 18 are installed in the machine zone 10 of FIG. 1: four laser scanners S1, S2, T1, and T2 around the robot 12, four cameras C1 to C4 at the conveyor belt 14, and a light grid L1.1-L1.2 that secures an access. The cameras C1 to C4 are of the same sensor type as one another. Of the laser scanners, two are safe laser scanners S1 and S2 for securing or for accident prevention and two are unsafe laser scanners T1 and T2 for general monitoring or automation work of the robot 12. The light grid with its safety function is likewise designed as safe. Safe sensors are shown by gray shading. The distinction between safe and unsafe sensors is as a rule very significant in practice, but is here only one possibility for distinguishing between sensor types. The selection and arrangement of the sensors 18 in FIG. 1 is to be understood purely by way of example overall.

    [0040] The invention is not concerned with the design of a robot cell or, more generally, of a machine zone 10 and the selection and mounting of the required sensors 18. It should rather be of assistance in the configuration of the sensors 18, in particular as part of the putting into operation, diagnosis, or servicing, and should provide a visualization of the machine zone 10 together with additional information on or from the sensors 18 for this purpose. This naturally does not preclude the fitter determining the need for additional sensors 18 or a different arrangement of the sensors 18 with reference to the visualization.

    [0041] FIG. 2a shows a template 22 having an object marker 24 that is here designed as an optical 2D code. The object marker 24 can be read by image processing, for example by the camera of a smartphone. The specific design of the optical code and the reading process are not the subject matter of the invention; there are conventional solutions for this. It is in principle conceivable to use a non-optical object marker 24, for instance an RFID tag, but the localization works most reliably with optical codes and common end devices have cameras, but not necessarily an RFID reader.

    [0042] The template 22 is furthermore provided with a mount 26 that is adapted to a specific sensor type. The template 22 can be attached to a sensor 18 of the matching sensor type in a well-defined and reliable manner with the aid of the mount 26, independently of the accessibility and size of the sensor 18. The object marker 24 is then located in a known relative position to the sensor 18 thanks to the template 22. The transformation from the location of the object marker 24 to the location of the sensor 18 is encoded in the object marker 24, either directly, for example in the form of relative coordinates, or indirectly in that a piece of identity information of the object marker 24 is linked in a database or the like to the associated relative coordinates. Thanks to the template 22 and the now known offset between the object marker 24 and the sensor 18 caused by the template 22, the sensor 18 is later shown in the visualization at the correct location and not, for instance, at that of the object marker 24. The template 22 is only required during the configuration of the visualization and can therefore be used multiple times.
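    The encoding just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the JSON payload format and the field names (`sensor_type`, `offset_mm`) are assumptions, and rotation of the offset is omitted for brevity.

```python
import json

# Hypothetical payload: the object marker encodes the sensor type and the
# offset from the marker origin to the sensor origin (here in millimetres).
def decode_object_marker(payload: str) -> dict:
    data = json.loads(payload)
    return {
        "sensor_type": data["sensor_type"],
        "offset_mm": tuple(data["offset_mm"]),  # marker -> sensor translation
    }

def marker_to_sensor_location(marker_xy, decoded):
    """Apply the encoded offset so the sensor, not the marker, is visualized."""
    dx, dy = decoded["offset_mm"]
    return (marker_xy[0] + dx, marker_xy[1] + dy)

payload = '{"sensor_type": "safe_laser_scanner", "offset_mm": [40, -15]}'
decoded = decode_object_marker(payload)
print(marker_to_sensor_location((1000, 500), decoded))  # sensor at (1040, 485)
```

    The indirect variant mentioned above would instead store only an identity in the marker and look the offset up in a database keyed by that identity.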

    [0043] FIG. 2b shows a template 22 having a different object marker 24 for a different sensor type. Apart from the code content of the object marker 24, the mount 26 is also varied so that the template 22 can be fastened to a sensor 18 of the different sensor type. A respective template 22 having a matching object marker 24 and a matching mount 26 is thus preferably generated, and this only has to be done once per sensor type and not per individual sensor 18. If a plurality of sensors 18 of the same sensor type are present, as in FIG. 1, a plurality of templates 22 are still required, but these are then preferably of the same type as one another. As an alternative to designing a separate template 22 for every sensor type, generic templates are also conceivable. Their mounts are as flexible as possible or they are attached with wire, adhesive, or the like. The generic template indicates the location, for instance with the aid of an arrow tip, with respect to which the object marker 24 encodes the offset. With careful attachment of the template, there is then likewise no offset between the object marker 24 and the sensor 18. The templates 22 can also be designed for objects other than sensors 18, for example machine parts, or a generic template 22 can be attached to another object. Such objects are thus included in the visualization.

    [0044] FIG. 3 again shows the machine zone 10 of FIG. 1, now after object markers 24 have been attached to the sensors 18 and, as an example of a different object, also to the controller 20, and reference markers 28 have been attached at a plurality of positions of the machine zone 10, for example on the hall floor.

    [0045] One object marker 24 is attached per object to be localized, that is per sensor 18, but also per machine part, controller 20, or the like. This is preferably done via templates 22 and alternatively directly on the sensor 18 or another object. Object markers 24 are preferably not prepared individually for a sensor 18, but for a sensor type. In the example of FIG. 3, there are five different object markers 24 that are marked by D1 for safe laser scanners S1, S2, by D2 for unsafe laser scanners T1, T2, by D3 for (unsafe) cameras C1 to C4, by D4 for (safe) light grids L1.1-L1.2, and by D5 for the controller type of the controller 20 that acts as an example for a further object to be visualized that is not a sensor 18. Two respective object markers 24 are shown at the sensors S1 and T2; they would not actually be attached in duplicate, but only illustrate that there are a plurality of possibilities or directions to fasten a template 22. This direction can reflect the direction of gaze of a sensor 18 or another object. With the light grid L1.1-L1.2, in contrast, both columns should actually be provided with a separate object marker 24 between which the monitoring beams run. As already mentioned in connection with FIGS. 2a-b, an object marker 24 preferably contains the information of which object or which sensor 18 it is, preferably at the level of types, variants, or families and not of individual objects or sensors, and a transformation from the location or origin of the object marker 24 to the location or origin of the object or sensor 18. The transformation enables a visualization at the desired or correct location and not at that of the object marker 24.

    [0046] At least one reference marker 28 is attached in the machine zone 10 in addition to the object markers 24. The reference markers 28 can be positioned as desired by the fitter. They contain a unique code, for example a 32-digit identification number (universally unique identifier, UUID), to preclude confusion with other markers in the machine zone 10. The reference markers 28 serve as reference points. In the visualization, a reference marker 28 read in the proximity later determines where the origin of the visualization lies and which sensors 18 are in the environment.
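    A 32-digit identifier of this kind can, for example, be produced from a version-4 UUID; a minimal sketch (the choice of UUID version is an assumption, any collision-free scheme would do):

```python
import uuid

# A reference marker only needs a collision-free identity. A random
# version-4 UUID yields 32 hex digits, matching the 32-digit number above.
def new_reference_marker_id() -> str:
    return uuid.uuid4().hex  # 32 lowercase hex digits, no hyphens

marker_id = new_reference_marker_id()
print(len(marker_id))  # 32
```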

    [0047] FIG. 4 shows an exemplary flowchart for a configuration of a visualization of the machine zone 10 on the basis of the object markers 24 and reference markers 28. This flow is based on a pair-wise reading of markers 24, 28. This is a simple procedure, but alternatively more than two markers 24, 28 can be read and put into relationship with one another per configuration step. The configuration takes place in a mobile end device that is called a detection device at some points, for example a smartphone or a tablet that automatically guides the fitter through the configuration.

    [0048] In a step S1, a first marker 24, 28 is read; the fitter is therefore prompted to direct the detection device at a marker 24, 28 to be read and, for example, to trigger an image recording by a camera. It is of advantage at the start of the configuration for a reference marker 28 to be read first. This then forms the reference point or point of origin. Alternatively, however, the anchoring can take place at a later point in time after a reference marker 28 has been detected.

    [0049] In a step S2, a second marker 24, 28 is read. A pair of two markers has thus then been read, and indeed, by choice of the fitter, a pair of two object markers 24, of an object marker 24 and a reference marker 28, or of two reference markers 28. As already stated with respect to step S1, the fitter can be prompted at the first pair to choose at least one reference marker 28 so that there is a point of origin from the start. In later iterations, during the reading of further pairs, the detection device can require that a respective one of the read markers 24, 28 is already known in order to successively expand the link structure of the markers 24, 28 read during the configuration. Alternatively, two or even more initially separate link structures are generated that can then be joined together as soon as they overlap one another in at least one marker 24, 28 that has become known.

    [0050] In a step S3, a relationship between the two read markers 24, 28 is automatically determined. There is already an abstract relationship in that the two markers 24, 28 are read together and are now automatically referenced to one another. A graph can, for example, be produced with this relationship; it will be explained below with reference to FIG. 6. The geometrical relationship between the two read markers 24, 28 should, however, also be determined, that is a transformation or a path from the one marker 24, 28 to the other marker 24, 28. Different image evaluation processes that are known per se can be used for this and will not be further explained here. A conclusion on the mutual distance and position can, for example, be drawn, in outline, from the size of a marker 24, 28 or from its perspective distortion. The transformation can thus be found that transforms the one marker 24, 28, or its code elements or envelope, into the other marker 24, 28. A special design of the markers 24, 28 can support such image processing.
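    The geometrical linking can be illustrated with a toy calculation: if the poses of the two markers have been measured in the camera frame of the detection device, the transformation from the one marker to the other is the composition inv(cam_T_a) · cam_T_b. A minimal 2D sketch with homogeneous matrices follows; a real system would estimate full 6-DoF poses by image processing, and the numeric poses here are assumed.

```python
import math

def pose2d(x, y, theta):
    """Homogeneous 3x3 pose matrix for a 2D position and heading (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Invert a rigid 2D transform (transpose rotation, back-rotate translation)."""
    (r00, r01, tx), (r10, r11, ty), _ = t
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

# Marker poses as measured in the camera frame of the detection device.
cam_T_a = pose2d(1.0, 0.0, 0.0)
cam_T_b = pose2d(3.0, 1.0, 0.0)

# Relative transform from marker A to marker B: a_T_b = inv(cam_T_a) @ cam_T_b
a_T_b = matmul(invert(cam_T_a), cam_T_b)
print(a_T_b[0][2], a_T_b[1][2])  # translation A -> B: 2.0 1.0
```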

    [0051] In an optional step S4, the geometrical relationship is shown to have it acknowledged by the fitter. This is illustrated in FIG. 5 for two exemplary object markers 24. The calculated path 30 between the two object markers 24 is displayed so that the fitter can check whether this path 30 actually transforms the two object markers 24 into one another. The geometrical relationship is stored after the acknowledgment by the fitter. If the fitter does not agree with the path 30, the detection device returns to one of the steps S1 to S3. An attempt is consequently made to recalculate the geometrical relationship with the read markers 24, 28, or at least one of the markers 24, 28 is read again, i.e. preferably at least one other marker 24, 28 is first integrated. A further remedy is the attachment of an additional reference marker 28.

    [0052] A check is made in a step S5 whether one of the read markers 24, 28 is an object marker 24. If so, a sensor 18 or another object is associated with it in a step S6. In this process, user inputs can optionally also take place with which, for example, the sensor 18 or the object is provided with a name of its own. A reference location, for example a coordinate in a coordinate system, can be associated with reference markers 28. The first read reference marker 28 per link structure preferably fixes the coordinate system, with the origin being able to be displaced as desired. If a plurality of link structures are combined with one another once a marker 24, 28 has appeared in both, the coordinate systems are also aligned.
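    The joining of two link structures at a marker that has appeared in both can be sketched as follows. For brevity the coordinate systems are aligned by a pure translation; a real implementation would also align rotation, and the marker names are illustrative.

```python
# Merge two separately grown link structures: once a marker appears in both,
# the second structure's coordinates are shifted so the shared marker
# coincides in the coordinate system of the first structure.
def merge_structures(coords_a, coords_b, shared):
    ax, ay = coords_a[shared]
    bx, by = coords_b[shared]
    dx, dy = ax - bx, ay - by            # offset aligning the shared marker
    merged = dict(coords_a)
    for marker, (x, y) in coords_b.items():
        merged.setdefault(marker, (x + dx, y + dy))
    return merged

a = {"M1": (0.0, 0.0), "D1": (1.0, 2.0)}
b = {"D1": (0.0, 0.0), "M3": (3.0, 0.0)}
print(merge_structures(a, b, "D1"))  # M3 lands at (4.0, 2.0)
```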

    [0053] In a step S7, the configuration ends if all the markers 24, 28 have been detected once. Otherwise, the flow returns to step S1 and a new pair of markers 24, 28 is detected and processed in a further iteration. It is preferably part of the responsibility of the fitter to take all the markers 24, 28 into account. It is, however, also conceivable that the detection device has knowledge of the total number of markers 24, 28, for example via a specification, from an overview image of the machine zone 10 with all the markers 24, 28, or by communication with the sensors 18 or with the controller 20.
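    The overall flow of steps S1 to S7 can be summarized in a short Python sketch. Marker reading and transform estimation are stubbed out as callables, the acknowledgment step S4 is omitted, and all names are illustrative rather than taken from any real implementation.

```python
# Sketch of the pair-wise configuration loop (steps S1-S7).
def configure(read_marker, estimate_transform, total_markers):
    graph = {}                            # marker id -> {neighbor id: transform}
    seen = set()
    while len(seen) < total_markers:      # S7: loop until all markers detected
        a = read_marker()                 # S1: read first marker
        b = read_marker()                 # S2: read second marker
        t = estimate_transform(a, b)      # S3: geometric relationship
        graph.setdefault(a, {})[b] = t    # abstract + geometric link
        graph.setdefault(b, {})[a] = None # inverse transform would go here
        seen.update((a, b))               # S5/S6: association bookkeeping
    return graph

# Usage with stubbed reads: markers M1, D1, D2 detected in two pairs.
markers = iter(["M1", "D1", "M1", "D2"])
g = configure(lambda: next(markers), lambda a, b: (a, b), 3)
print(sorted(g))  # ['D1', 'D2', 'M1']
```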

    [0054] At the end of this configuration, the geometrical relationships between all the markers 24, 28 are known and thus all the sensors 18 and other objects such as the controller 20 with object markers 24 are localized. The object markers 24 or the templates 22 can now be removed.

    [0055] FIG. 6 shows by way of example a graph produced after a completed configuration for the machine zone 10 shown in FIG. 3. The nodes correspond to the object markers 24, with the sensors 18 represented thereby and other objects such as the controller 20, and to the reference markers 28. The edges correspond to the proximity relationships. If the fitter had selected a non-adjacent pair of markers 24, 28 in steps S1 and S2, the edges can be reordered using the geometrical relationships so that the proximity relationships in the graph agree with the actual geometry. The geometrical transformation is preferably also stored for the edges. It is thus, for example, not only known that the reference marker M1 is the neighbor of the reference marker M3 in the graph, but also how M1 is geometrically transformed into M3, or where M3 is localized with respect to M1. Edges do not necessarily only represent relationships between a reference marker 28 and an object marker 24, but possibly also a relationship of one reference marker 28 to a further reference marker 28. Such an edge serves as a bridge to span distances that are too large for an optical link. Differing from the representation in FIG. 6, there can also be a plurality of graphs instead of only one single contiguous graph.

    [0056] After a completed configuration, sensor data of the sensors 18 can now be visualized. A mobile end device that can, but does not have to, correspond to the detection device of the configuration, for example a smartphone, a tablet, or VR glasses, in turn serves as the visualization device. The user scans a reference marker 28 in his proximity. Sensors 18 and possible further objects such as the controller 20 are localized in the environment of the scanned reference marker 28 using the graph, in particular the direct or indirect neighbors of the scanned reference marker 28 in the graph. The geometrical transformations required for this are stored in the edges of the graph from the configuration. The sensor information or object information can thus be visualized at the correct location. It is preferably superposed on a live camera image (augmented reality).
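    The lookup at visualization time can be sketched as a breadth-first search over the configuration graph that composes the stored edge transforms starting from the scanned reference marker. For brevity a transform is here a pure 2D translation; the marker names follow FIG. 6, but the coordinates are assumed for illustration.

```python
from collections import deque

# Localize every marker reachable from the scanned reference marker by
# composing the translations stored on the graph edges.
def localize_from(graph, start):
    poses = {start: (0.0, 0.0)}          # the scanned marker is the origin
    queue = deque([start])
    while queue:
        node = queue.popleft()
        x, y = poses[node]
        for neighbor, (dx, dy) in graph.get(node, {}).items():
            if neighbor not in poses:
                poses[neighbor] = (x + dx, y + dy)   # compose along the path
                queue.append(neighbor)
    return poses

# Assumed example: M1 and M3 are reference markers, D1 an object marker.
graph = {
    "M1": {"M3": (4.0, 0.0), "D1": (1.0, 2.0)},
    "M3": {"M1": (-4.0, 0.0)},
    "D1": {"M1": (-1.0, -2.0)},
}
print(localize_from(graph, "M1"))
```

    The visualization device would then draw the sensor information of each localized object marker at its computed pose, transformed by the encoded marker-to-sensor offset described with FIGS. 2a-b.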

    [0057] A large variety of visualized information is conceivable. In addition to the name and type of a sensor 18, its configuration can be illustrated, for example a protected field of a laser scanner can be displayed, an operating parameter such as the temperature of the sensor 18 can be shown, or measurement data of the sensor 18 can be visualized. In the case of other objects such as the controller 20, the data are translated into a generic description and are provided with an associated visualization. This description can be loaded in dependence on the kind of the visualization.

    [0058] The invention has up to now been described for the example of a robot cell as the machine zone 10. The concept can be transferred to a vehicle, preferably an automated vehicle. Reference markers 28 and sensors 18 are located on the vehicle and are thus in a fixed geometrical relationship to one another in the reference system of the vehicle, so that the movement of the vehicle with respect to the external environment does not play a role and the configuration in accordance with the invention and the visualization remain comparable with a stationary machine zone 10. A machine zone 10 is, however, restricted neither to a robot cell nor to a vehicle, but rather describes a zone in which at least one sensor 18 is located and in which interventions by a machine take place at least at times; there are innumerable further examples of this, such as a conveyor belt or a railroad crossing.

    [0059] In an expansion, a link to CAD information of the machine zone 10 can take place; such information as a rule exists anyway in the case of a robot cell or of a vehicle. Markers 24, 28 can thus be localized even more exactly, or an optimum number and positioning of reference markers 28 can be planned. 3D models can additionally be used to localize the sensors 18 themselves in addition to the markers 24, 28.