Optoelectronic sensor and method for safe detection of objects of a minimum size
10404971 · 2019-09-03
Assignee
Inventors
CPC classification
G06V20/52
PHYSICS
G06V40/10
PHYSICS
F16P3/142
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
H04N13/239
ELECTRICITY
H04N13/271
ELECTRICITY
International classification
H04N13/239
ELECTRICITY
H04N13/271
ELECTRICITY
Abstract
An optoelectronic sensor (10) for safe detection of objects (32, 34) of a minimum size in a monitoring area (12), the sensor (10) having an image sensor (16a-b) for acquiring image data of the monitoring area (12) and an evaluation unit (24) configured to detect, in the image data, both finely detected objects (34, 36) with a fine detection capability and coarsely detected objects (32) with a coarse detection capability, the coarse detection capability being coarser than the fine detection capability, and to ignore finely detected objects (36) which are not in a vicinity (38) of a coarsely detected object (32).
Claims
1. An optoelectronic sensor (10) for safe detection of objects (32, 34) within a monitoring area (12) and having at least a predetermined minimum size, the optoelectronic sensor comprising: an image sensor (16a-b) for acquiring image data of the monitoring area (12); and an evaluation unit (24) configured to detect, in the image data, both finely detected objects (34, 36) with a fine detection capability and coarsely detected objects (32) with a coarse detection capability, the coarse detection capability being coarser than the fine detection capability, to determine a search area with a specified distance around a geometric shape as a search range vicinity, and to ignore finely detected objects (36) not in the search range vicinity (38) of a coarsely detected object (32).
2. The sensor (10) according to claim 1, wherein the evaluation unit (24) is configured to form a common object (42) from a coarsely detected object (32) and finely detected objects (34) in its vicinity (38).
3. The sensor (10) according to claim 1, wherein the sensor (10) is a 3D camera which detects a depth map as image data by means of its image sensor (16a-b).
4. The sensor (10) according to claim 1, wherein the fine detection capability enables leg detection, arm detection, hand detection, or finger detection, and wherein the coarse detection capability enables body detection, leg detection, or arm detection.
5. The sensor (10) according to claim 1, wherein the fine detection capability is 14 mm, 20 mm, 40 mm, or 55 mm, and wherein the coarse detection capability is 70 mm, 120 mm, 150 mm, or 200 mm.
6. The sensor (10) according to claim 1, wherein the evaluation unit (24), when locating finely detected objects (34) in the search range vicinity (38) of a coarsely detected object (32), at first defines a circumscribing geometric shape (40) for the coarsely detected object (32) and then determines a search area (38) with a specified distance (r) around the geometric shape (40) as the search range vicinity (38).
7. The sensor (10) according to claim 6, wherein the geometric shape (40) is a circle, a rectangle, a hexagon, or another polygon.
8. The sensor (10) according to claim 6, wherein the specified distance (r) is derived from safety margins (C₁, C₂) in dependence on a detection capability as defined in safety standards.
9. The sensor (10) according to claim 8, wherein the safety standard is ISO 13855, IEC 61496, or a relevant equivalent.
10. The sensor (10) according to claim 8, wherein the specified distance (r) is derived from a difference of the safety margin (C₁) in dependence on the coarse detection capability minus the safety margin (C₂) in dependence on the fine detection capability.
11. The sensor according to claim 1, wherein the evaluation unit (24), when locating finely detected objects (34) in the search range vicinity (38) of a coarsely detected object (32), places a mask (44) having a geometric shape with the size of a specified distance (r) at each image pixel of the coarsely detected object (32), and thus a sum of the masks (44) determines the search range vicinity (38).
12. The sensor (10) according to claim 11, wherein the geometric shape (40) is a circle, a rectangle, a hexagon, or another polygon.
13. The sensor (10) according to claim 11, wherein the specified distance (r) is derived from safety margins (C₁, C₂) in dependence on a detection capability as defined in safety standards.
14. The sensor (10) according to claim 13, wherein the specified distance (r) is derived from a difference of the safety margin (C₁) in dependence on the coarse detection capability minus the safety margin (C₂) in dependence on the fine detection capability.
15. The sensor (10) according to claim 1, wherein the evaluation unit (24) is configured to evaluate whether a detected object (32, 34, 42) is located within a predefined protection field (28) or too close to a source of danger (26), and to output a safety-related shutdown signal via a safe output (30) in this case.
16. The sensor (10) according to claim 15, wherein the evaluation unit (24) is configured to evaluate different protection fields (28) for the coarse detection capability and the fine detection capability in that the protection field (28) is enlarged by a safety extension for the coarse detection capability.
17. A method for safe detection of objects (32, 34) within a monitoring area (12) and having at least a predetermined minimum size, the method comprising: acquiring and evaluating image data to detect, in the image data, both finely detected objects (34, 36) with a fine detection capability and coarsely detected objects (32) with a coarse detection capability, the coarse detection capability being coarser than the fine detection capability, and to determine a search area with a specified distance around a geometric shape as a search range vicinity; and ignoring finely detected objects (36) not in the search range vicinity (38) of a coarsely detected object (32).
18. The method according to claim 17, wherein a common object (42) is formed from a coarsely detected object (32) and finely detected objects (34) in its vicinity (38), and wherein it is evaluated whether the common object (42) intrudes into a protection field (28) or is located too close to a source of danger (26).
19. The method according to claim 17, wherein the search range vicinity (38) is defined by a specified distance (r) with respect to each image pixel of a coarsely detected object (32) or a circumscribed geometric shape (40) of the coarsely detected object (32).
20. The method according to claim 19, wherein the specified distance (r) is derived from safety margins (C₁, C₂) in dependence on a detection capability as defined in safety standards.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention will be explained in the following, also with respect to further advantages and features, with reference to exemplary embodiments and the enclosed drawing.
DETAILED DESCRIPTION
(11) For detecting a spatial area 12, two camera modules 14a, 14b are mounted at a known fixed distance from one another and each acquire images of the spatial area 12. In each camera, an image sensor 16a, 16b is provided, usually a matrix-shaped acquisition chip which acquires a rectangular pixel image, for example a CCD or a CMOS sensor. The two image sensors 16a, 16b together form a 3D image sensor for acquiring a depth map. An objective 18a, 18b with imaging optics is arranged in front of each of the image sensors 16a, 16b; in practice, this can be any known imaging objective. The maximum viewing angle of the optics is represented by dashed lines in the drawing.
(12) An illumination unit 22 is provided between the two image sensors 16a, 16b in order to illuminate the spatial area 12 with a structured pattern. The stereo camera as shown is thus configured for active stereoscopy where the pattern provides contrasts that can be evaluated even in a scene which is inherently without structure. Alternatively, no illumination or a homogeneous illumination is provided in order to evaluate the natural object structures in the spatial area 12, but this generally results in additional image defects.
(13) A control and evaluation unit 24 is connected to the two image sensors 16a, 16b and the illumination unit 22. The control and evaluation unit 24 can be implemented in various hardware, for example an external standard computer, an internal digital device such as a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Graphics Processing Unit (GPU), or hybrids thereof. Since the generation of the depth map and its evaluation are computationally expensive, an at least partially parallel architecture is preferred.
(14) The control and evaluation unit 24 generates the structured illumination pattern by means of the illumination unit 22 and receives image data from the image sensors 16a, 16b. From these image data, it calculates the three-dimensional image data or the depth map of the spatial area 12 by stereoscopic disparity estimation. The overall spatial area 12, or the working area, can be restricted by configuration, for example to exclude interfering or unnecessary regions. The remaining working area is also called a configured working area.
(15) A prominent application of the sensor 10 in safety technology is the monitoring of a source of danger 26, which is symbolized by a robot in the drawing.
(16) The sensor 10 is designed to be failsafe in safety-related applications. This may for example mean that the sensor 10 tests itself, in particular in cycles faster than a required response time, detects defects of the image sensors 16a-b or the illumination unit 22, or that the control and evaluation unit 24 and/or the output 30 is safe, for example by two-channel design, or uses self-checking algorithms. Such measures are standardized for general contactless protective devices in EN 61496-1 or IEC 61496 as well as in DIN EN ISO 13849 and EN 61508. A corresponding standard for safety cameras is in preparation. IEC/TS 61496-4-3 and ISO 13855 also define the outer limits of the protection fields 28 towards the worker as regards the distance from the source of danger 26.
(17) Now, on the one hand, the sensor 10 is to detect objects with a fine detection capability in order to meet higher safety requirements like arm or even finger protection, and because then smaller safety distances are sufficient according to the standards. On the other hand, the fine detection capability leads to unnecessary shutdowns caused by gaps in the depth maps, and thus to a reduced availability.
(18) The invention resolves this double requirement, namely correctly handling gaps as potentially relevant objects while nevertheless remaining available, based on the consideration that small objects representing fingers, hands and arms cannot occur isolated in space. Rather, a large object representing the body must be in a spatial relationship with the small object. This does not necessarily imply direct contact, i.e. it need not be a single object.
(20) In a vicinity 38 of the large object 32, all small objects 34 are now added to the large object 32, and all isolated or singular small objects 36 are ignored. A singular small object 36 can only have been detected from erroneous data such as gaps in the depth map. For such an object, the safe presence detection is not or no longer triggered, although a consideration of the fine detection capability alone would require its triggering. In the evaluation with the fine detection capability there are thus relevant objects 34 and irrelevant objects 36.
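The basic filtering rule of this paragraph, ignoring small objects that are not near any large object, can be sketched as follows. This Python fragment is illustrative only, not the patented implementation: it reduces objects to centroid coordinates, and all names are hypothetical.

```python
from math import hypot

def filter_small_objects(large_objects, small_objects, search_radius):
    """Keep only small objects lying within search_radius of some large object.

    Objects are simplified to (x, y) centroids in millimetres; a real
    implementation would compare pixel regions of the depth map, not centroids.
    """
    kept = []
    for sx, sy in small_objects:
        # a small object is relevant only in the vicinity of a large object
        if any(hypot(sx - lx, sy - ly) <= search_radius
               for lx, ly in large_objects):
            kept.append((sx, sy))
    return kept
```

With one large object at the origin and a search radius of 462 mm, a small object 100 mm away is kept while one 1000 mm away is ignored as a singular detection.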
(21) A corresponding general object detection in the working area, in which an object is detected and its position and/or movement trajectory is determined, will now be explained with reference to the drawing.
(22) In a first step, the spatial area 12 or the configured working area is examined once with a coarse detection capability and once with a fine detection capability. The coarse detection capability is typically 120-200 mm, the fine detection capability 20-55 mm or less, 55 mm being the upper limit. These are only exemplary values; the inventive approach is general. The evaluation results in the positions of the finely and coarsely detected objects, for example in a binary map showing all objects whose size matches or exceeds the respective detection capability. A possible algorithm for this evaluation with a specific detection capability and resulting binary maps is disclosed in EP 2 819 109 A1 mentioned in the introduction, which is hereby incorporated by reference.
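The evaluation step can be approximated, purely for illustration, by a connected-component pass over a binary map that keeps only regions whose pixel count reaches the detection capability expressed in pixels. The actual algorithm referenced is that of EP 2 819 109 A1; the following sketch is a simplified stand-in with hypothetical names.

```python
def detect_objects(binary_map, min_pixels):
    """Return a binary map keeping only 4-connected regions of at least
    min_pixels pixels, a simplified stand-in for a detection-capability
    test on a depth map (lists of 0/1 rows)."""
    h, w = len(binary_map), len(binary_map[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary_map[y][x] and not seen[y][x]:
                # flood-fill one connected component
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary_map[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # keep the component only if it meets the capability
                if len(comp) >= min_pixels:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out
```

Running the same map through this function twice, once with the coarse and once with the fine minimum size, yields the two binary maps discussed above.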
(28) In an alternative approach to locating finely detected objects 34 in the vicinity of a coarsely detected object 32, a mask 44 having a geometric shape with the size of the specified distance r is placed at each image pixel of the coarsely detected object 32. The masks 44 are OR-combined in the filter map. The result, shown on the right-hand side of the corresponding figure, is the relevant search area or the vicinity 38.
(29) The surrounding region is formed locally, i.e. image-pixel-wise, by the masks 44, which is more complex than a one-step definition of the vicinity 38 by means of a circumscribing geometric shape 40.
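The pixel-wise placement and OR-combination of the masks 44 amounts to a morphological dilation of the coarse object map, followed by an AND with the fine object map. A minimal illustrative sketch, assuming a disk-shaped mask and hypothetical names:

```python
def vicinity_map(coarse_map, radius_px):
    """OR-combine a disk-shaped mask placed at every object pixel of the
    coarse map, i.e. a morphological dilation with a disk of radius_px."""
    h, w = len(coarse_map), len(coarse_map[0])
    out = [[0] * w for _ in range(h)]
    # all offsets inside a disk of the given pixel radius
    offsets = [(dy, dx) for dy in range(-radius_px, radius_px + 1)
               for dx in range(-radius_px, radius_px + 1)
               if dy * dy + dx * dx <= radius_px * radius_px]
    for y in range(h):
        for x in range(w):
            if coarse_map[y][x]:
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def relevant_fine_objects(fine_map, vicinity):
    """AND the fine object map with the vicinity: finely detected
    objects outside the vicinity are ignored."""
    return [[f & v for f, v in zip(frow, vrow)]
            for frow, vrow in zip(fine_map, vicinity)]
```

In production such a dilation would typically use an image-processing library rather than nested Python loops; the sketch only mirrors the mask logic of the text.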
(30) When defining the vicinity 38, a search radius r was used. Its derivation from the general safety distance formula is explained in the following:
S₀ = (K·T) + C + C_tz + d,
where
(31) S₀ is the minimal safety distance to the source of danger,
(32) C is a margin depending on the detection capability, C = f(d),
(33) C_tz is a margin for system tolerances,
(34) d is the detection capability, and
(35) K·T is the movement term, the approach speed K multiplied by the stopping time T.
(36) For the coarse detection capability, this results in
S₁ = (K·T) + C₁ + C_tz + d₁
and correspondingly, for the fine detection capability, in
S₂ = (K·T) + C₂ + C_tz + d₂.
(37) The movement term K·T can be eliminated for the determination of the search radius r, because the consideration is not about reaching the source of danger 26 or a safety distance from the source of danger 26, but about deriving a search radius in a momentary view.
(38) Thus it generally holds for S that
S = C + C_tz + d.
(39) The standard assumes that the safety distance includes a detection capability d so that the sensor triggers. Since, however, it is the outer edge of the object that is located here, this d can be eliminated.
(40) For further simplification, the measurement error, that is, the margin for system tolerances, can be assumed to be the same for both detection capabilities, so that for the purpose of the difference it holds that
S₁ = C₁
S₂ = C₂
(41) Therefore, the search radius is
r = C₁ − C₂.
(42) C is to be set in dependence on the detection capability according to the standard. For example, with a fine detection capability of 55 mm and a coarse detection capability of more than 70 mm, this yields a minimal search radius of r = 462 mm.
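The relation r = C₁ − C₂ can be sketched in code. The margin function below uses the widely cited ISO 13855 approximation C = 8·(d − 14) mm for detection capabilities d ≤ 40 mm and C = 850 mm (arm reach) for larger d; this formula is an assumption for illustration, and the concrete margins, and hence values such as the 462 mm of the example, must be taken from the applicable standard.

```python
def c_margin(d_mm):
    """Intrusion margin C as a function of detection capability d in mm.

    Simplified ISO 13855 rule of thumb (an assumption for this sketch):
    C = 8 * (d - 14), not below 0, for d <= 40 mm; 850 mm otherwise.
    """
    if d_mm <= 40:
        return max(0, 8 * (d_mm - 14))
    return 850

def search_radius(d_coarse_mm, d_fine_mm):
    """Search radius r = C1 - C2, the difference of the margins."""
    return c_margin(d_coarse_mm) - c_margin(d_fine_mm)
```

For instance, a coarse capability of 120 mm combined with a fine capability of 40 mm would give r = 850 − 208 = 642 mm under this approximation.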
(43) There are some final remarks regarding an edge problem. The invention can in principle be used with conventional protection fields 28. However, there is a difficulty at the edge of the monitored protection field, because the limits of the minimum safety distance of a protection field 28 depend on the detection capability: the distance is smaller for the fine detection capability than for a coarser one. However, the advantage of deciding about the relevance of objects detected with the fine detection capability on the basis of the coarse detection capability should not be lost. This can easily be resolved by minimally extending the outer edge for the evaluation with the coarse detection capability by the search radius plus the coarse detection capability minus the fine detection capability: Extension = r + d₁ − d₂. A protection field 28 of the coarse detection capability thus is at least as large as a protection field 28 of the fine detection capability. A shutdown of the machine depends on a relevant object being detected in the protection field 28 of the fine detection capability.
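The edge extension Extension = r + d₁ − d₂ is a one-line computation; as a sketch with hypothetical names, all values in millimetres:

```python
def protection_field_extension(r_mm, d_coarse_mm, d_fine_mm):
    """Outward extension of the coarse protection-field edge:
    Extension = r + d1 - d2 (search radius plus coarse capability
    minus fine capability), per the edge-problem discussion."""
    return r_mm + d_coarse_mm - d_fine_mm
```

With the example values r = 462 mm, d₁ = 70 mm and d₂ = 55 mm, the coarse evaluation edge would be extended by 477 mm.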