DETECTOR FOR OBJECT RECOGNITION
20230078604 · 2023-03-16
Inventors
- Benjamin Rein (Ludwigshafen, DE)
- Patrick Schindler (Ludwigshafen, DE)
- Friedrich Schick (Ludwigshafen, DE)
- Jakob Unger (Ludwigshafen, DE)
- Peter Schillen (Ludwigshafen, DE)
- Nils Berner (Ludwigshafen, DE)
CPC Classification
- G06V10/60 (Physics)
- G06T7/521 (Physics)
International Classification
- G06T7/521 (Physics)
Abstract
A detector for object recognition includes an illumination source for projecting an illumination pattern on an area including at least one object; an optical sensor having a light-sensitive area and configured for determining a first image including a two-dimensional image of the area, and a second image including a plurality of reflection features generated in response to illumination, each reflection feature including a beam profile; an evaluation device for determining beam profile information for each reflection feature by analyzing their beam profiles, determining a three-dimensional image using the determined beam profile information, identifying the reflection features located inside and/or outside an image region, determining a depth level from the beam profile information of the reflection features located inside and/or outside of the image region, determining a material property of the object from the beam profile information, and determining a position and/or orientation of the object.
Claims
1. A detector for object recognition comprising at least one illumination source configured for projecting at least one illumination pattern comprising a plurality of illumination features on at least one area comprising at least one object; an optical sensor having at least one light sensitive area, wherein the optical sensor is configured for determining at least one first image comprising at least one two dimensional image of the area, wherein the optical sensor is configured for determining at least one second image comprising a plurality of reflection features generated by the area in response to illumination by the illumination features; at least one evaluation device, wherein the evaluation device is configured for evaluating the first image and the second image, wherein each of the reflection features comprises at least one beam profile, wherein the evaluation device is configured for determining beam profile information for each of the reflection features by analysis of their beam profiles, wherein the beam profile information is information about an intensity distribution of a light spot on the light sensitive area of the optical sensor, wherein the evaluation device is configured for determining at least one three-dimensional image using the determined beam profile information, wherein the evaluation of the first image comprises identifying at least one pre-defined or pre-determined geometrical feature, wherein the evaluation device is configured for identifying the reflection features which are located inside an image region of the geometrical feature and/or for identifying the reflection features which are located outside the image region of the geometrical feature, wherein the evaluation device is configured for determining at least one depth level from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature, wherein the evaluation device is configured for determining at least one material property of the object from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature, wherein the evaluation device is configured for determining at least one position and/or orientation of the object by considering the depth level and/or the material property and pre-determined or predefined information about shape and/or size of the object.
2. The detector according to claim 1, wherein the first image and the second image are determined at different time points.
3. The detector according to claim 1, wherein the geometrical feature is at least one characteristic element of the object selected from the group consisting of: a shape, a relative position of at least one edge, at least one borehole, at least one reflection point, at least one line, at least one surface, at least one circle, at least one disk, the full object, and a part of the object.
4. The detector according to claim 1, wherein the evaluation device comprises at least one data storage device, wherein the data storage device comprises at least one table and/or at least one lookup table of geometrical features and/or pre-determined or predefined information about shape and/or size of the object.
5. The detector according to claim 1, wherein the detector comprises at least one first filter element, wherein the first filter element is configured for transmitting light in the infrared spectral range and for at least partially blocking light of other spectral ranges.
6. The detector according to claim 1, wherein the illumination pattern comprises at least one periodic point pattern having a low point density, wherein the illumination pattern has ≤2500 points per field of view.
7. The detector according to claim 1, wherein the detector comprises at least one control unit, wherein the control unit is configured for controlling the optical sensor and/or the illumination source, wherein the control unit is configured for triggering projecting of the illumination pattern and/or imaging of the second image.
8. The detector according to claim 7, wherein the control unit is configured for adapting exposure time for projection of the illumination pattern.
9. The detector according to claim 1, wherein the evaluation device is configured for determining the beam profile information for each of the reflection features by using depth-from-photon-ratio technique.
10. The detector according to claim 1, wherein the optical sensor comprises at least one CMOS sensor.
11. A method for object recognition, wherein at least one detector according to claim 1 is used, wherein the method comprises the following steps: a) projecting at least one illumination pattern comprising a plurality of illumination features on at least one area comprising at least one object; b) determining at least one first image comprising at least one two dimensional image of the area using an optical sensor, wherein the optical sensor has at least one light sensitive area; c) determining at least one second image comprising a plurality of reflection features generated by the area in response to illumination by the illumination features by using the optical sensor; d) evaluating the first image by using at least one evaluation device, wherein the evaluating of the first image comprises identifying at least one pre-defined or pre-determined geometrical feature; e) evaluating the second image by using the evaluation device, wherein each of the reflection features comprises at least one beam profile, wherein the evaluation of the second image comprises determining beam profile information for each of the reflection features by analysis of their beam profiles and determining at least one three-dimensional image using the determined beam profile information; f) identifying the reflection features which are located inside the geometrical feature and/or identifying the reflection features which are located outside of the geometrical feature by using the evaluation device; g) determining at least one depth level from the beam profile information of the reflection features located inside and/or outside of the geometrical feature by using the evaluation device; h) determining at least one material property of the object from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature by using the evaluation device; and i) determining at least one position and/or orientation of the object by considering the depth level and/or the material property and pre-determined or predefined information about shape and/or size of the object by using the evaluation device.
12. A method of using the detector according to claim 1, for a purpose selected from the group consisting of: a position measurement in traffic technology; an entertainment application; a security application; a surveillance application; a safety application; a human-machine interface application; a tracking application; a photography application; an imaging application or camera application; a mapping application for generating maps of at least one space; a homing or tracking beacon detector for vehicles; an outdoor application; a mobile application; a communication application; a machine vision application; a robotics application; a quality control application; and a manufacturing application.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0209] Further optional details and features of the invention are evident from the description of preferred exemplary embodiments which follows in conjunction with the dependent claims. In this context, the particular features may be implemented in an isolated fashion or in combination with other features. The invention is not restricted to the exemplary embodiments. The exemplary embodiments are shown schematically in the figures. Identical reference numerals in the individual figures refer to identical elements or elements with identical function, or elements which correspond to one another with regard to their functions.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0213] The detector 110 comprises at least one illumination source 114 configured for projecting at least one illumination pattern comprising a plurality of illumination features on at least one area 116 comprising at least one object 112. The object 112 may be located within a scene and/or may have a surrounding environment. Specifically, the object 112 may be located in the at least one area 116. The area 116 may be at least one surface and/or region. The area 116 may comprise additional elements such as the surrounding environment.
[0214] The illumination source 114 may be adapted to illuminate the object 112 directly or indirectly, wherein the illumination pattern is reflected or scattered by the object 112 and, thereby, is at least partially directed towards the detector 110. The illumination source 114 may be adapted to illuminate the object 112, for example, by directing a light beam towards the object 112, which reflects the light beam. The illumination source 114 may be configured for generating an illuminating light beam for illuminating the object 112.
[0215] The illumination source 114 may comprise at least one light source. The illumination source 114 may comprise a plurality of light sources. The illumination source may comprise an artificial illumination source, in particular at least one laser source and/or at least one incandescent lamp and/or at least one semiconductor light source, for example, at least one light-emitting diode, in particular an organic and/or inorganic light-emitting diode. As an example, the light emitted by the illumination source 114 may have a wavelength of 300 to 1100 nm, especially 500 to 1100 nm. Additionally or alternatively, light in the infrared spectral range may be used, such as in the range of 780 nm to 3.0 μm. Specifically, light in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1100 nm, may be used. The illumination source 114 may be configured for generating the at least one illumination pattern in the infrared region. Using light in the near infrared region allows the light to be not detected, or only weakly detected, by human eyes while still being detectable by silicon sensors, in particular standard silicon sensors.
[0216] The illumination source 114 may be or may comprise at least one multiple beam light source. For example, the illumination source 114 may comprise at least one laser source and one or more diffractive optical elements (DOEs). Specifically, the illumination source 114 may comprise at least one laser and/or laser source. Various types of lasers may be employed, such as semiconductor lasers, double heterostructure lasers, external cavity lasers, separate confinement heterostructure lasers, quantum cascade lasers, distributed Bragg reflector lasers, polariton lasers, hybrid silicon lasers, extended cavity diode lasers, quantum dot lasers, volume Bragg grating lasers, indium arsenide lasers, transistor lasers, diode pumped lasers, distributed feedback lasers, quantum well lasers, interband cascade lasers, gallium arsenide lasers, semiconductor ring lasers, or vertical cavity surface-emitting lasers. Additionally or alternatively, non-laser light sources may be used, such as LEDs and/or light bulbs. The illumination source 114 may comprise one or more diffractive optical elements (DOEs) adapted to generate the illumination pattern. For example, the illumination source 114 may be adapted to generate and/or to project a cloud of points, for example the illumination source may comprise one or more of at least one digital light processing projector, at least one LCoS projector, at least one spatial light modulator, at least one diffractive optical element, at least one array of light emitting diodes, or at least one array of laser light sources. On account of their generally defined beam profiles and other handling properties, the use of at least one laser source as the illumination source is particularly preferred. The illumination source 114 may be integrated into a housing of the detector 110.
[0217] The illumination pattern may comprise a plurality of illumination features. The illumination pattern may be selected from the group consisting of: at least one point pattern; at least one line pattern; at least one stripe pattern; at least one checkerboard pattern; at least one pattern comprising an arrangement of periodic or non-periodic features. The illumination pattern may comprise a regular and/or constant and/or periodic pattern such as a triangular pattern, a rectangular pattern, a hexagonal pattern or a pattern comprising further convex tilings. The illumination pattern may exhibit at least one illumination feature selected from the group consisting of: at least one point; at least one line; at least two lines such as parallel or crossing lines; at least one point and one line; at least one arrangement of periodic or non-periodic features; at least one arbitrarily shaped feature. The illumination pattern may comprise at least one pattern selected from the group consisting of: at least one point pattern, in particular a pseudo-random point pattern; a random point pattern or a quasi-random pattern; at least one Sobol pattern; at least one quasiperiodic pattern; at least one pattern comprising at least one pre-known feature; at least one regular pattern; at least one triangular pattern; at least one hexagonal pattern; at least one rectangular pattern; at least one pattern comprising convex uniform tilings; at least one line pattern comprising at least one line; at least one line pattern comprising at least two lines such as parallel or crossing lines. For example, the illumination source 114 may be adapted to generate and/or to project a cloud of points. The illumination source 114 may comprise at least one light projector adapted to generate a cloud of points such that the illumination pattern may comprise a plurality of point patterns. The illumination source 114 may comprise at least one mask adapted to generate the illumination pattern from at least one light beam generated by the illumination source 114.
[0218] Specifically, the illumination source 114 comprises at least one laser source and/or at least one laser diode which is designated for generating laser radiation. The illumination source 114 may comprise the at least one diffractive optical element (DOE). The detector 110 may comprise at least one point projector, such as the at least one laser source and the DOE, adapted to project at least one point pattern.
[0219] For example, the projected illumination pattern may be a periodic point pattern. The projected illumination pattern may have a low point density. For example, the illumination pattern may comprise at least one periodic point pattern having a low point density, wherein the illumination pattern has ≤2500 points per field of view. In comparison with structured light, which typically has a point density of 10k to 30k points in a field of view of 55×38°, the illumination pattern according to the present invention may be less dense. This may allow more power per point, such that the proposed technique is less dependent on ambient light compared to structured light. For example, at equal total optical power, distributing the light over 2500 points instead of 25000 points yields ten times the power per point.
[0220] The detector 110 may comprise at least one further illumination source 118. The further illumination source 118 may comprise one or more of at least one further light source such as at least one light emitting diode (LED) or at least one vertical-cavity surface-emitting laser (VCSEL) array. The further illumination source 118 may comprise at least one optical element such as at least one diffusor or at least one lens. The further illumination source 118 may be configured for providing additional illumination for imaging of the first image. For example, the further illumination source 118 may be used in situations in which recording the reflection pattern is difficult or impossible, e.g. in the case of highly reflective metallic surfaces, in order to ensure good illumination and, thus, sufficient contrast for two-dimensional images such that two-dimensional image recognition is possible.
[0221] The detector 110 comprises an optical sensor 120 having at least one light sensitive area 122. The optical sensor 120 is configured for determining at least one first image comprising at least one two dimensional image of the area 116. The optical sensor 120 is configured for determining at least one second image comprising a plurality of reflection features generated by the area 116 in response to illumination by the illumination features. The detector 110 may comprise a single camera comprising the optical sensor 120. The detector 110 may comprise a plurality of cameras each comprising an optical sensor 120 or a plurality of optical sensors 120.
[0222] The optical sensor 120 specifically may be or may comprise at least one photodetector, preferably inorganic photodetectors, more preferably inorganic semiconductor photodetectors, most preferably silicon photodetectors. Specifically, the optical sensor 120 may be sensitive in the infrared spectral range. All pixels of the matrix, or at least a group of the optical sensors of the matrix, specifically may be identical. Groups of identical pixels of the matrix specifically may be provided for different spectral ranges, or all pixels may be identical in terms of spectral sensitivity. Further, the pixels may be identical in size and/or with regard to their electronic or optoelectronic properties. Specifically, the optical sensor 120 may be or may comprise at least one inorganic photodiode which is sensitive in the infrared spectral range, preferably in the range of 700 nm to 3.0 micrometers. Specifically, the optical sensor 120 may be sensitive in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1100 nm. The infrared optical sensors used may be commercially available infrared optical sensors, such as those available under the brand name Hertzstueck™ from trinamiX GmbH, D-67056 Ludwigshafen am Rhein, Germany. Thus, as an example, the optical sensor 120 may comprise at least one optical sensor of an intrinsic photovoltaic type, more preferably at least one semiconductor photodiode selected from the group consisting of: a Ge photodiode, an InGaAs photodiode, an extended InGaAs photodiode, an InAs photodiode, an InSb photodiode, a HgCdTe photodiode. Additionally or alternatively, the optical sensor 120 may comprise at least one optical sensor of an extrinsic photovoltaic type, more preferably at least one semiconductor photodiode selected from the group consisting of: a Ge:Au photodiode, a Ge:Hg photodiode, a Ge:Cu photodiode, a Ge:Zn photodiode, a Si:Ga photodiode, a Si:As photodiode. Additionally or alternatively, the optical sensor 120 may comprise at least one photoconductive sensor such as a PbS or PbSe sensor, a bolometer, preferably a bolometer selected from the group consisting of a VO bolometer and an amorphous Si bolometer.
[0223] The optical sensor 120 may be sensitive in one or more of the ultraviolet, the visible or the infrared spectral range. Specifically, the optical sensor may be sensitive in the visible spectral range from 500 nm to 780 nm, most preferably at 650 nm to 750 nm or at 690 nm to 700 nm. Specifically, the optical sensor may be sensitive in the near infrared region. Specifically, the optical sensor 120 may be sensitive in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1000 nm. The optical sensor 120, specifically, may be sensitive in the infrared spectral range, specifically in the range of 780 nm to 3.0 micrometers. For example, each optical sensor, independently, may be or may comprise at least one element selected from the group consisting of a CCD sensor element, a CMOS sensor element, a photodiode, a photocell, a photoconductor, a phototransistor or any combination thereof. Any other type of photosensitive element may be used. The photosensitive element generally may fully or partially be made of inorganic materials and/or may fully or partially be made of organic materials. Most commonly, one or more photodiodes may be used, such as commercially available photodiodes, e.g. inorganic semiconductor photodiodes.
[0224] The optical sensor 120 may comprise at least one sensor element comprising a matrix of pixels. Thus, as an example, the optical sensor 120 may be part of or constitute a pixelated optical device. For example, the optical sensor 120 may be and/or may comprise at least one CCD and/or CMOS device. As an example, the optical sensor 120 may be part of or constitute at least one CCD and/or CMOS device having a matrix of pixels, each pixel forming a light-sensitive area. The sensor element may be formed as a unitary, single device or as a combination of several devices. The matrix specifically may be or may comprise a rectangular matrix having one or more rows and one or more columns. The rows and columns specifically may be arranged in a rectangular fashion. However, other arrangements are feasible, such as non-rectangular arrangements. As an example, circular arrangements are also feasible, wherein the elements are arranged in concentric circles or ellipses about a center point. For example, the matrix may be a single row of pixels. Other arrangements are feasible.
[0225] The pixels of the matrix specifically may be equal in one or more of size, sensitivity and other optical, electrical and mechanical properties. The light-sensitive areas 122 of all optical sensors 120 of the matrix specifically may be located in a common plane, the common plane preferably facing the object 112, such that a light beam propagating from the object to the detector 110 may generate a light spot on the common plane. The light-sensitive area 122 may specifically be located on a surface of the respective optical sensor 120. Other embodiments, however, are feasible.
[0226] The optical sensor 120 is configured for determining at least one first image comprising at least one two dimensional image of the area 116. The image itself, thus, may comprise pixels, the pixels of the image correlating to pixels of the matrix of the sensor element. The optical sensor 120 is configured for determining at least one second image comprising a plurality of reflection features generated by the area 116 in response to illumination by the illumination features.
[0227] The first image and the second image may be determined, in particular recorded, at different time points. Recording of the first image and the second image may be performed with a temporal shift. Specifically, a single camera comprising the optical sensor 120 may record with a temporal shift a two-dimensional image and an image of a projected pattern. Recording the first and the second image at different time points may ensure that an evaluation device 124 can distinguish between the first and the second image and can apply the appropriate evaluation routine. Moreover, it is possible to adapt the illumination situation for the first image if necessary, in particular independently of the illumination for the second image. The detector 110 may comprise at least one control unit 126. The control unit 126 may be designed as a hardware component of the detector 110. In particular, the control unit 126 may comprise at least one microcontroller. The control unit 126 may be configured for controlling the optical sensor 120 and/or the illumination source 114. The control unit 126 may be configured for triggering projecting of the illumination pattern and/or imaging of the second image. Specifically, the control unit 126 may be configured for controlling the optical sensor 120, in particular frame rate and/or illumination time, via trigger signals. The control unit 126 may be configured for adapting and/or adjusting the illumination time from frame to frame. This may allow adapting and/or adjusting the illumination time for the first image, e.g. in order to obtain contrast at the edges, and at the same time adapting and/or adjusting the illumination time for the second image to maintain contrast of the reflection features. Additionally, the control unit 126 may, at the same time and independently, control the elements of the illumination source 114 and/or the further illumination source 118.
[0228] Specifically, the control unit 126 may be configured for adapting exposure time for projection of the illumination pattern. The second image may be recorded with different illumination times. Dark regions of the area 116 may require more light in comparison to lighter regions, which may cause the lighter regions to run into saturation. Therefore, the detector 110 may be configured for recording a plurality of images of the reflection pattern, wherein the images may be recorded with different illumination times. The detector 110 may be configured for generating and/or composing the second image from said images. The evaluation device 124 may be configured for performing at least one algorithm on said images which were recorded with different illumination times.
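The disclosure does not prescribe a specific composition algorithm; the following Python sketch shows one plausible scheme in which saturated pixels are discarded and the remaining intensities are normalized by their illumination time before averaging. The function name and the saturation threshold are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def compose_second_image(frames, illumination_times, saturation=250):
    """Compose one reflection image from frames recorded with different
    illumination times (a sketch; not a prescribed algorithm)."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, t in zip(frames, illumination_times):
        valid = frame < saturation        # discard saturated pixels
        acc[valid] += frame[valid] / t    # normalize to a common radiance scale
        weight[valid] += 1.0
    weight[weight == 0] = 1.0             # avoid division by zero
    return acc / weight
```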
[0229] The control unit 126 may be configured for controlling the further illumination source 118. The control unit 126 may be configured for triggering illumination of the area by light generated by the further illumination source 118 and imaging of the first image. The control unit 126 may be configured for adapting exposure time for projection of the illumination pattern and illumination by light generated by the further illumination source 118.
[0230] The detector 110 may comprise at least one first filter element 128. The first filter element 128 may be configured for transmitting light in the infrared spectral range and for at least partially blocking light of other spectral ranges. The first filter element 128 may be a monochromatic bandpass filter configured for transmitting light in a small spectral range. For example, the spectral range or bandwidth may be ±100 nm, preferably ±50 nm, most preferably ±35 nm or even less. For example, the first filter element 128 may be configured for transmitting light having a central wavelength of 808 nm, 830 nm, 850 nm, 905 nm or 940 nm. For example, the first filter element 128 may be configured for transmitting light having a central wavelength of 850 nm with a bandwidth of 70 nm or less. The first filter element 128 may have a minimal angle dependency such that the spectral range can be small. This may result in a low dependency on ambient light, while at the same time an enhanced vignetting effect can be prevented. For example, the detector 110 may comprise the single camera having the optical sensor 120 and, in addition, the first filter element 128. The first filter element 128 may ensure that recording of the reflection pattern is possible even in the presence of ambient light, while at the same time keeping the laser output power low enough to ensure eye-safe operation in laser class 1.
[0231] Additionally or alternatively, the detector 110 may comprise at least one second filter element, not shown here. The second filter element may be a band-pass filter. For example, the second filter element may be a long-pass filter configured for blocking visible light and for transmitting light above a wavelength of 780 nm. The band-pass filter may be positioned between the light-sensitive area 122, for example of a CMOS chip, and the transfer device 129.
[0232] The spectrum of the illumination source 114 and/or of the further illumination source 118 may be selected depending on the used filter elements. For example, in case of the first filter element 128 having a central wavelength of 850 nm, the illumination source 114 may comprise at least one light source generating a wavelength of 850 nm such as at least one infrared (IR)-LED.
[0233] The detector 110 may comprise at least one transfer device 129 comprising one or more of: at least one lens, for example at least one lens selected from the group consisting of at least one focus-tunable lens, at least one aspheric lens, at least one spheric lens, at least one Fresnel lens; at least one diffractive optical element; at least one concave mirror; at least one beam deflection element, preferably at least one mirror; at least one beam splitting element, preferably at least one of a beam splitting cube or a beam splitting mirror; at least one multi-lens system. In particular, the transfer device 129 may comprise at least one collimating lens adapted to focus at least one object point in an image plane.
[0234] The evaluation device 124 is configured for evaluating the first image and the second image.
[0235] The evaluation of the first image comprises identifying at least one pre-defined or pre-determined geometrical feature. The geometrical feature may be at least one characteristic element of the object 112 selected from the group consisting of: a shape, a relative position of at least one edge, at least one borehole, at least one reflection point, at least one line, at least one surface, at least one circle, at least one disk, the full object, a part of the object and the like. The evaluation device 124 may comprise at least one data storage device 130. The data storage device 130 may comprise at least one table and/or at least one lookup table of geometrical features and/or pre-determined or predefined information about shape and/or size of the object 112. Additionally or alternatively, the detector 110 may comprise at least one user interface 132 via which a user can enter the at least one geometrical feature.
[0236] The evaluation device 124 may be configured for evaluating the second image in a first step. The evaluation of the second image may provide, as will be outlined in more detail below, 3D information of the reflection features. The evaluation device 124 may be configured for estimating a location of the geometrical feature in the first image by considering the 3D information of the reflection features. This may significantly reduce the effort of searching for the geometrical feature in the first image.
[0237] The evaluation device 124 may be configured for identifying the geometrical feature by using at least one image processing process. The image processing process may comprise one or more of: at least one template matching algorithm; at least one Hough-transformation; applying a Canny edge filter; applying a Sobel filter; applying a combination of filters. The evaluation device may be configured for performing at least one plausibility check. The plausibility check may comprise comparing the identified geometrical feature to at least one known geometrical feature of the object. For example, a user may enter a known geometrical feature via the user interface for the plausibility check.
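As a non-limiting illustration of such an image processing process, the following Python sketch uses OpenCV's Hough transform to locate circular geometrical features (e.g. bottle caps) in the first image; all parameter values are assumptions chosen for illustration.

```python
import cv2

def find_circular_features(first_image_gray):
    """Locate circular geometrical features in the 2D first image using
    the Hough transform (one of the processes named above)."""
    blurred = cv2.GaussianBlur(first_image_gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=40, param1=120, param2=40,
                               minRadius=10, maxRadius=80)
    return [] if circles is None else circles[0]  # rows of (x, y, radius)
```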
[0238] The evaluation device 124 is configured for evaluating the second image. The evaluation of the second image may comprise generating a three-dimensional image.
[0239] Each of the reflection features comprises at least one beam profile. The beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles. The evaluation device 124 is configured for determining beam profile information for each of the reflection features by analysis of their beam profiles.
[0240] The evaluation device 124 may be configured for determining the beam profile of each of the reflection features. Determining the beam profile may comprise identifying at least one reflection feature provided by the optical sensor 120 and/or selecting at least one reflection feature provided by the optical sensor 120 and evaluating at least one intensity distribution of the reflection feature. As an example, a region of the matrix may be used and evaluated for determining the intensity distribution, such as a three-dimensional intensity distribution or a two-dimensional intensity distribution, such as along an axis or line through the matrix. As an example, a center of illumination by the light beam may be determined, such as by determining the at least one pixel having the highest illumination, and a cross-sectional axis may be chosen through the center of illumination. The intensity distribution may be an intensity distribution as a function of a coordinate along this cross-sectional axis through the center of illumination. Other evaluation algorithms are feasible.
[0241] The evaluation device 124 may be configured for performing at least one image analysis and/or image processing in order to identify the reflection features. The image analysis and/or image processing may use at least one feature detection algorithm. The image analysis and/or image processing may comprise one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between an image created by the sensor signals and at least one offset; an inversion of sensor signals by inverting an image created by the sensor signals; a formation of a difference image between images created by the sensor signals at different times; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a blob detector; applying a corner detector; applying a Determinant of Hessian filter; applying a principal curvature-based region detector; applying a maximally stable extremal regions detector; applying a generalized Hough-transformation; applying a ridge detector; applying an affine invariant feature detector; applying an affine-adapted interest point operator; applying a Harris affine region detector; applying a Hessian affine region detector; applying a scale-invariant feature transform; applying a scale-space extrema detector; applying a local feature detector; applying a speeded-up robust features algorithm; applying a gradient location and orientation histogram algorithm; applying a histogram of oriented gradients descriptor; applying a Deriche edge detector; applying a differential edge detector; applying a spatio-temporal interest point detector; applying a Moravec corner detector; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon-transformation; applying a Hough-transformation; applying a wavelet-transformation; a thresholding; creating a binary image. The region of interest may be determined manually by a user or may be determined automatically, such as by recognizing an object within the image generated by the optical sensor 120.
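For instance, a blob detector from the list above can be used to identify the projected spots in the second image. The following Python sketch uses OpenCV's simple blob detector; the color and area thresholds are illustrative assumptions.

```python
import cv2

def detect_reflection_features(second_image_gray):
    """Identify bright projected spots (reflection features) in the
    8-bit second image using a blob detector."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255        # bright spots on a dark background
    params.filterByArea = True
    params.minArea = 4
    params.maxArea = 400
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(second_image_gray)
    return [(kp.pt[0], kp.pt[1], kp.size) for kp in keypoints]
```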
[0242] For example, the illumination source 114 may be configured for generating and/or projecting a cloud of points such that a plurality of illuminated regions is generated on the optical sensor, for example the CMOS detector. Additionally, disturbances may be present on the optical sensor such as disturbances due to speckles and/or extraneous light and/or multiple reflections. The evaluation device 124 may be adapted to determine at least one region of interest, for example one or more pixels illuminated by the light beam which are used for determination of the longitudinal coordinate of the object 112. For example, the evaluation device 124 may be adapted to perform a filtering method, for example, a blob-analysis and/or an edge filter and/or object recognition method.
[0243] The evaluation device 124 may be configured for performing at least one image correction. The image correction may comprise at least one background subtraction. The evaluation device 124 may be adapted to remove influences from background light from the reflection beam profile, for example, by an imaging without further illumination.
[0244] The analysis of the beam profile may comprise evaluating the beam profile. The analysis of the beam profile may comprise at least one mathematical operation and/or at least one comparison and/or at least one symmetrizing and/or at least one filtering and/or at least one normalizing. For example, the analysis of the beam profile may comprise at least one of a histogram analysis step, a calculation of a difference measure, application of a neural network, application of a machine learning algorithm. The evaluation device 124 may be configured for symmetrizing and/or for normalizing and/or for filtering the beam profile, in particular to remove noise or asymmetries from recording under larger angles, recording edges or the like. The evaluation device 124 may filter the beam profile by removing high spatial frequencies such as by spatial frequency analysis and/or median filtering or the like. Symmetrizing may be performed using the center of intensity of the light spot and averaging all intensities at the same distance from the center. The evaluation device 124 may be configured for normalizing the beam profile to a maximum intensity, in particular to account for intensity differences due to the recorded distance. The evaluation device 124 may be configured for removing influences from background light from the reflection beam profile, for example, by an imaging without illumination.
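A minimal Python sketch of the symmetrizing and normalizing steps, assuming `spot` is a small 2D intensity patch around one reflection feature and `center` its center of intensity in (row, column) pixel coordinates:

```python
import numpy as np

def symmetrize_and_normalize(spot, center):
    """Average all intensities at the same (rounded) distance from the
    center of intensity, then normalize to the maximum intensity."""
    ys, xs = np.indices(spot.shape)
    r = np.hypot(ys - center[0], xs - center[1]).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=spot.ravel())
    radial_cnt = np.bincount(r.ravel())
    radial_profile = radial_sum / np.maximum(radial_cnt, 1)
    return radial_profile / radial_profile.max()  # normalized 1D beam profile
```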
[0245] The reflection feature may cover or may extend over at least one pixel of the image. For example, the reflection feature may cover or may extend over a plurality of pixels. The evaluation device 124 may be configured for determining and/or for selecting all pixels connected to and/or belonging to the reflection feature, e.g. a light spot. The evaluation device 124 may be configured for determining the center of intensity by

R_coi = (1 / I_total) · Σ_j r_pixel,j · I_j,

wherein R_coi is the position of the center of intensity, r_pixel,j is the position of pixel j, and I_total = Σ_j I_j is the total intensity, with j running over the pixels connected to and/or belonging to the reflection feature and I_j being the intensity of pixel j.
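In Python, the center of intensity can be computed directly from the formula above; as an assumption for this sketch, `spot` contains only the pixels belonging to the reflection feature, with the background set to zero:

```python
import numpy as np

def center_of_intensity(spot):
    """R_coi = (1/I_total) * sum_j r_pixel,j * I_j, returned in
    (row, column) pixel coordinates."""
    ys, xs = np.indices(spot.shape)
    i_total = spot.sum()
    return np.array([(ys * spot).sum(), (xs * spot).sum()]) / i_total
```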
[0246] The evaluation device 124 is configured for determining the beam profile information for each of the reflection features by analysis of their beam profiles. The beam profile information may comprise information about the longitudinal coordinate of the surface point or region having reflected the illumination feature. Additionally, the beam profile information may comprise information about a material property of said surface point or region having reflected the illumination feature.
[0247] The beam profile information may be the longitudinal coordinate of the surface point or region having reflected the illumination feature. The evaluation device 124 may be configured for determining the beam profile information for each of the reflection features by using the depth-from-photon-ratio technique. With respect to the depth-from-photon-ratio (DPR) technique, reference is made to WO 2018/091649 A1, WO 2018/091638 A1 and WO 2018/091640 A1, the full content of which is incorporated by reference.
[0248] The analysis of the beam profile of one of the reflection features may comprise determining at least one first area and at least one second area of the beam profile. The first area of the beam profile may be an area A1 and the second area of the beam profile may be an area A2. The evaluation device 124 may be configured for integrating the first area and the second area. The evaluation device 124 may be configured to derive a combined signal, in particular a quotient Q, by one or more of dividing the integrated first area and the integrated second area, dividing multiples of the integrated first area and the integrated second area, dividing linear combinations of the integrated first area and the integrated second area. The evaluation device 124 may be configured for determining at least two areas of the beam profile and/or to segment the beam profile in at least two segments comprising different areas of the beam profile, wherein overlapping of the areas may be possible as long as the areas are not congruent. For example, the evaluation device 124 may be configured for determining a plurality of areas such as two, three, four, five, or up to ten areas. The evaluation device 124 may be configured for segmenting the light spot into at least two areas of the beam profile and/or to segment the beam profile in at least two segments comprising different areas of the beam profile. The evaluation device 124 may be configured for determining for at least two of the areas an integral of the beam profile over the respective area. The evaluation device 124 may be configured for comparing at least two of the determined integrals. Specifically, the evaluation device 124 may be configured for determining at least one first area and at least one second area of the reflection beam profile. The first area of the beam profile and the second area of the reflection beam profile may be one or both of adjacent or overlapping regions. The first area of the beam profile and the second area of the beam profile may be not congruent in area. For example, the evaluation device 124 may be configured for dividing a sensor region of the CMOS sensor into at least two sub-regions, wherein the evaluation device may be configured for dividing the sensor region of the CMOS sensor into at least one left part and at least one right part and/or at least one upper part and at least one lower part and/or at least one inner and at least one outer part. Additionally or alternatively, the detector 110 may comprise at least two optical sensors 120, wherein the light-sensitive areas 122 of a first optical sensor and of a second optical sensor may be arranged such that the first optical sensor is adapted to determine the first area of the reflection beam profile of the reflection feature and that the second optical sensor is adapted to determine the second area of the reflection beam profile of the reflection feature. The evaluation device 124 may be adapted to integrate the first area and the second area. The evaluation device 124 may be configured for using at least one predetermined relationship between the quotient Q and the longitudinal coordinate for determining the longitudinal coordinate. The predetermined relationship may be one or more of an empiric relationship, a semi-empiric relationship and an analytically derived relationship. The evaluation device 124 may comprise at least one data storage device for storing the predetermined relationship, such as a lookup list or a lookup table.
[0249] The first area of the beam profile may comprise essentially edge information of the beam profile and the second area of the beam profile comprises essentially center information of the beam profile, and/or the first area of the beam profile may comprise essentially information about a left part of the beam profile and the second area of the beam profile comprises essentially information about a right part of the beam profile. The beam profile may have a center, i.e. a maximum value of the beam profile and/or a center point of a plateau of the beam profile and/or a geometrical center of the light spot, and falling edges extending from the center. The second region may comprise inner regions of the cross section and the first region may comprise outer regions of the cross section. Preferably, the center information has a proportion of edge information of less than 10%, more preferably of less than 5%, most preferably the center information comprises no edge content. The edge information may comprise information of the whole beam profile, in particular from center and edge regions. The edge information may have a proportion of center information of less than 10%, preferably of less than 5%, more preferably the edge information comprises no center content. At least one area of the beam profile may be determined and/or selected as second area of the beam profile if it is close or around the center and comprises essentially center information. At least one area of the beam profile may be determined and/or selected as first area of the beam profile if it comprises at least parts of the falling edges of the cross section. For example, the whole area of the cross section may be determined as first region.
[0250] Other selections of the first area A1 and second area A2 may be feasible. For example, the first area may comprise essentially outer regions of the beam profile and the second area may comprise essentially inner regions of the beam profile. For example, in case of a two-dimensional beam profile, the beam profile may be divided in a left part and a right part, wherein the first area may comprise essentially areas of the left part of the beam profile and the second area may comprise essentially areas of the right part of the beam profile.
[0251] The evaluation device 124 may be configured to derive the quotient Q by one or more of dividing the first area and the second area, dividing multiples of the first area and the second area, dividing linear combinations of the first area and the second area. The evaluation device 124 may be configured for deriving the quotient Q by

Q = ∫_A1 E(x,y) dx dy / ∫_A2 E(x,y) dx dy,

wherein x and y are transversal coordinates, A1 and A2 are the first and second area of the beam profile, respectively, and E(x,y) denotes the beam profile.
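For a circularly symmetric light spot, the first (edge) and second (center) areas can be realized as an outer ring and an inner disk around the center of intensity. The following Python sketch approximates the quotient Q by pixel sums; the split radius r_inner is an illustrative assumption:

```python
import numpy as np

def photon_ratio_quotient(spot, center, r_inner):
    """Q = integral of E over A1 (edge) / integral of E over A2 (center),
    approximated on the pixel grid."""
    ys, xs = np.indices(spot.shape)
    r = np.hypot(ys - center[0], xs - center[1])
    a2 = spot[r <= r_inner].sum()   # center information
    a1 = spot[r > r_inner].sum()    # edge information
    return a1 / a2
```

The longitudinal coordinate then follows from the predetermined relationship between Q and the longitudinal coordinate, e.g. a stored lookup table.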
[0252] The evaluation device 124 may be configured for determining at least one three-dimensional image and/or 3D-data using the determined beam profile information. The image or images recorded by the camera comprising the reflection pattern may be a two-dimensional image or two-dimensional images. As outlined above, the evaluation device 124 may be configured for determining for each of the reflection features a longitudinal coordinate. The evaluation device 124 may be configured for generating 3D-data and/or the three-dimensional image by merging the two-dimensional image or images of the reflection pattern with the determined longitudinal coordinate of the respective reflection feature.
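A sketch of this merging step in Python, assuming a simple pinhole model with focal lengths fx, fy and principal point (cx, cy) in pixels; these intrinsics are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def to_point_cloud(features_xy, z_coords, fx, fy, cx, cy):
    """Merge 2D pixel positions of reflection features with their
    longitudinal coordinates into 3D points."""
    points = []
    for (u, v), z in zip(features_xy, z_coords):
        x = (u - cx) * z / fx    # back-project transversal coordinates
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)
```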
[0253] The evaluation device 124 may be configured for merging and/or fusing the determined 3D-data and/or the three-dimensional image and the information determined from the first image, i.e. the at least one geometrical feature and its location, in order to identify the object in a scene, in particular in the area.
[0254] The evaluation device 124 is configured for identifying the reflection features which are located inside an image region of the geometrical feature and/or for identifying the reflection features which are located outside the image region of the geometrical feature. The evaluation device 124 may be configured for determining an image position of the identified geometrical feature in the first image. The image position may be defined by pixel coordinates, e.g. x and y coordinates, of pixels of the geometrical feature. The evaluation device 124 may be configured for determining and/or assigning and/or selecting at least one border and/or limit of the geometrical feature in the first image. The border and/or limit may be given by at least one edge or at least one contour of the geometrical feature. The evaluation device 124 may be configured for determining the pixels of the first image inside the border and/or limit and their image position in the first image. The evaluation device 124 may be configured for determining at least one image region of the second image corresponding to the geometrical feature in the first image by identifying the pixels of the second image corresponding to the pixels of the first image inside the border and/or limit of the geometrical feature.
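With the border of the geometrical feature available as an image contour, the inside/outside sorting can be sketched in Python using OpenCV's point-in-polygon test; `contour` is assumed to come from a contour extraction step such as cv2.findContours:

```python
import cv2

def split_features_by_region(contour, features_xy):
    """Sort reflection features into those inside and outside the image
    region bounded by the contour of the geometrical feature."""
    inside, outside = [], []
    for (u, v) in features_xy:
        # pointPolygonTest returns >0 inside, 0 on the border, <0 outside
        if cv2.pointPolygonTest(contour, (float(u), float(v)), False) >= 0:
            inside.append((u, v))
        else:
            outside.append((u, v))
    return inside, outside
```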
[0255] The evaluation device 124 is configured for determining the at least one depth level from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature. The area comprising the object may comprise a plurality of elements at different depth levels. The depth level may be a bin or step of a depth map of the pixels of the second image. As outlined above, the evaluation device 124 may be configured for determining for each of the reflection features a longitudinal coordinate from their beam profiles. The evaluation device 124 may be configured for determining the depth levels from the longitudinal coordinates of the reflection features located inside and/or outside of the image region of the geometrical feature. Metallic objects often cannot be identified correctly in the second image. However, depth levels defined by the ground or the cover of such metallic objects can be identified correctly, since these are often made of cardboard.
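A minimal sketch of the depth-level determination in Python, assuming the longitudinal coordinates of the selected reflection features are quantized into bins of an assumed width and the most populated bin is taken as the depth level:

```python
import numpy as np

def dominant_depth_level(z_coords, bin_width=5.0):
    """Quantize longitudinal coordinates into depth levels (bins of a
    depth map) and return the most populated level."""
    z = np.asarray(z_coords)
    bins = np.round(z / bin_width) * bin_width  # snap to bin centers
    levels, counts = np.unique(bins, return_counts=True)
    return levels[np.argmax(counts)]
```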
[0256] The evaluation device 124 is configured for determining the position and/or the orientation of the object by considering the depth level and pre-determined or predefined information about shape and/or size of the object 112. For example, the information about shape and/or size may be entered by a user via the user interface 132. For example, the information about shape and size may be measured in an additional measurement. As outlined above, the evaluation device 124 is configured for determining the depth level on which the object 112 is located. If, in addition, the shape and/or size of the object 112 are known, the evaluation device 124 can determine the position and orientation of the object.
[0257] For example, a task may be to detect and measure, with the detector 110, at least one object 112 such as bottles in a box. The detector 110, in particular the optical sensor 120, may be installed on a robot arm 142 such that the detector 110 can move to different positions with respect to the objects in the box. The task may be that the robot should move to the objects 112 and take them out of the box. Additionally, the user may know the object 112, in this example the bottles, in detail, such that the size, form and shape may also be known and may be programmed into the evaluation device 124.
[0258] The optical sensor 120 may determine the two-dimensional image and a resulting 3D depth map. From the depth map, the relative position of the detector 110 and the objects 112 may be estimated. The depth map can also be distorted by different effects, e.g. by shiny objects such as metal, and/or the 3D depth map may be too sparse. The present invention proposes to obtain additional information from a 2D image that corresponds to the 3D depth map. In the example with the bottles, the task is to detect bottles in a box. In addition, it may be known that the bottles are rotationally symmetric. Certain features of the bottle can help with object detection, e.g. round bottle caps. This may lead to searching for circles or ellipsoids in the 2D image for the object detection with image processing algorithms. A rough estimation of the size of the ellipsoids may be computed from the 3D depth information. For a detailed object detection, the detected ellipsoids in the 2D image and the known projection relation between detector 110 and the real world can be used to determine the size and position of the circles in the real world. The projection relation between detector 110 and the real world can be used to determine size, position and orientation by using at least one system of equations.
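For a pinhole model, the projection relation mentioned above reduces to a simple proportionality between image size, depth and real-world size; a sketch with illustrative numbers:

```python
def real_world_radius(radius_px, z, f_px):
    """Real-world radius of a detected circle (e.g. a bottle cap) from
    its image radius, the depth level z of the supporting reflection
    features, and the focal length in pixels (pinhole model)."""
    return radius_px * z / f_px

# Example: a cap imaged with a radius of 25 px at z = 600 mm through a
# lens with f = 1200 px has a real radius of 25 * 600 / 1200 = 12.5 mm.
```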
[0259] The evaluation device 124 is configured for determining at least one material property of the object from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature. The beam profile information may comprise information about a material property of the surface point or region having reflected the illumination feature. The object 112 may comprise at least one surface on which the illumination pattern is projected. The surface may be adapted to at least partially reflect the illumination pattern back towards the detector 110. For example, the material property may be a property selected from the group consisting of: roughness, penetration depth of light into the material, a property characterizing the material as biological or non-biological material, a reflectivity, a specular reflectivity, a diffuse reflectivity, a surface property, a measure for translucence, a scattering, specifically a back-scattering behavior or the like. The at least one material property may be a property selected from the group consisting of: a scattering coefficient, a translucency, a transparency, a deviation from a Lambertian surface reflection, a speckle, and the like.
[0260] The evaluation device 124 may be configured for determining the material property of the surface point having reflected the illumination feature. The detector 110 may comprise at least one database 136 comprising a list and/or table, such as a lookup list or a lookup table, of predefined and/or predetermined material properties. The list and/or table of material properties may be determined and/or generated by performing at least one test measurement using the detector 110 according to the present invention, for example by performing material tests using samples having known material properties. The list and/or table of material properties may be determined and/or generated at the manufacturer site and/or by the user of the detector 110. The material property may additionally be assigned to a material classifier such as one or more of a material name, a material group such as biological or non-biological material, translucent or non-translucent materials, metal or non-metal, skin or non-skin, fur or non-fur, carpet or non-carpet, reflective or non-reflective, specular reflective or non-specular reflective, foam or non-foam, hair or non-hair, roughness groups or the like. The database 136 may comprise a list and/or table comprising the material properties and associated material name and/or material group.
[0261] The evaluation device 124 may be configured for determining the material property m by evaluation of the respective beam profiles of the reflection features. The evaluation device 124 may be configured for determining at least one material feature ϕ_2m by applying at least one material dependent image filter ϕ_2 to the reflection feature. The image may be a two-dimensional function, f(x,y), wherein brightness and/or color values are given for any x,y-position in the image. The position may be discretized corresponding to the recording pixels. The brightness and/or color may be discretized corresponding to a bit-depth of the optical sensors. The image filter may be at least one mathematical operation applied to the beam profile and/or to the at least one specific region of the beam profile. Specifically, the image filter ϕ maps an image f, or a region of interest in the image, onto a real number, ϕ(f(x,y))=φ, wherein φ denotes a feature, in particular a distance feature in case of distance dependent image filters and a material feature in case of material dependent image filters. Images may be subject to noise and the same holds true for features. Therefore, features may be random variables. The features may be normally distributed. If features are not normally distributed, they may be transformed to be normally distributed such as by a Box-Cox transformation. The evaluation device 124 may be configured for determining the material property m by evaluating the material feature ϕ_2m. The material feature may be or may comprise at least one item of information about the at least one material property of the object 112.
[0262] The material dependent image filter may be at least one filter selected from the group consisting of: a luminance filter; a spot shape filter; a squared norm gradient; a standard deviation; a smoothness filter such as a Gaussian filter or median filter; a grey-level-occurrence-based contrast filter; a grey-level-occurrence-based energy filter; a grey-level-occurrence-based homogeneity filter; a grey-level-occurrence-based dissimilarity filter; a Laws' energy filter; a threshold area filter; or a linear combination thereof; or a further material dependent image filter ϕ_2other which correlates to one or more of the luminance filter, the spot shape filter, the squared norm gradient, the standard deviation, the smoothness filter, the grey-level-occurrence-based energy filter, the grey-level-occurrence-based homogeneity filter, the grey-level-occurrence-based dissimilarity filter, the Laws' energy filter, or the threshold area filter, or a linear combination thereof by |ρ_ϕ2other,ϕm| ≥ 0.40, with ϕ_m being one of the luminance filter, the spot shape filter, the squared norm gradient, the standard deviation, the smoothness filter, the grey-level-occurrence-based energy filter, the grey-level-occurrence-based homogeneity filter, the grey-level-occurrence-based dissimilarity filter, the Laws' energy filter, or the threshold area filter, or a linear combination thereof. The further material dependent image filter ϕ_2other may correlate to one or more of the material dependent image filters ϕ_m by |ρ_ϕ2other,ϕm| ≥ 0.60, preferably by |ρ_ϕ2other,ϕm| ≥ 0.80.
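As a hedged illustration, the following Python sketch evaluates one simple material dependent image filter from the list above, the standard deviation of the normalized beam profile, and maps it to a coarse material classifier. The decision threshold and the assignment of feature values to named material groups are assumptions; in practice such a mapping would be stored in the lookup table of the database 136.

```python
import numpy as np

def material_feature_std(spot):
    """Material feature: standard deviation of the beam profile after
    normalization to its maximum intensity."""
    profile = spot / spot.max()
    return float(np.std(profile))

def classify_material(spot, threshold=0.25):
    """Map the material feature to one of two assumed material groups
    via an assumed predetermined threshold."""
    feature = material_feature_std(spot)
    return "material group A" if feature > threshold else "material group B"
```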
[0263] As outlined above, the detector 110 may be configured for classifying the material of the elements of the area 116 comprising the object 112. In contrast to structured light, the detector 110 according to the present invention may be configured for evaluating each of the reflection features of the second image such that for each reflection feature it may be possible to determine information about its material property.
[0264] The evaluation device 124 is configured for determining at least one position and/or orientation of the object by considering the material property and the pre-determined or predefined information about shape and/or size of the object. Generally, identification of the object 112 may be possible using only the 2D image information or the 3D depth map. However, quality can be enhanced by fusion of 2D and 3D information. Reflecting surfaces are generally problematic for optical 3D measurements. In the case of reflecting surfaces, using 2D image information only may be possible. In the case of highly reflective objects, 3D measurements may result in an erroneous depth map. For identification of such objects, the 2D information may be essential.
[0265] The detector 110 may fully or partially be integrated into at least one housing 138. As depicted in
[0266] The components of the evaluation device 124 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector 110. Besides the possibility of fully or partially combining two or more components, the optical sensor 120 and one or more of the components of the evaluation device 124 may be interconnected by one or more connectors 154 and/or by one or more interfaces, as symbolically depicted in
[0267] With regard to the coordinate system for determining the position of the object 112, which may be a coordinate system of the detector 110, the detector may constitute a coordinate system 140 in which an optical axis of the detector 110 forms the z-axis and in which, additionally, an x-axis and a y-axis may be provided which are perpendicular to the z-axis and which are perpendicular to each other. As an example, the detector 110 and/or a part of the detector may rest at a specific point in this coordinate system, such as at the origin of this coordinate system. In this coordinate system, a direction parallel or antiparallel to the z-axis may be regarded as a longitudinal direction, and a coordinate along the z-axis may be considered a longitudinal coordinate. An arbitrary direction perpendicular to the longitudinal direction may be considered a transversal direction, and an x- and/or y-coordinate may be considered a transversal coordinate.
[0268] The present invention may be applied in the field of machine control such as for robotic application. For example, as shown in
LIST OF REFERENCE NUMBERS
[0269] 110 detector
[0270] 112 object
[0271] 114 illumination source
[0272] 116 area
[0273] 118 further illumination source
[0274] 120 optical sensor
[0275] 122 light-sensitive area
[0276] 124 evaluation device
[0277] 126 control unit
[0278] 128 first filter element
[0279] 129 transfer device
[0280] 130 data storage device
[0281] 132 user interface
[0282] 134 surface
[0283] 136 database
[0284] 138 detector system
[0285] 140 coordinate system
[0286] 142 robot arm
[0287] 154 connector
[0288] 162 opening
CITED REFERENCES
[0289] US 2016/0238377 A1
[0290] WO 2018/091649 A1
[0291] WO 2018/091638 A1
[0292] WO 2018/091640 A1
[0293] Jürgen Eichler, Theo Seiler: “Lasertechnik in der Medizin: Grundlagen, Systeme, Anwendungen”, chapter “Wirkung von Laserstrahlung auf Gewebe”, Springer Verlag, 1991, pages 171 to 266, ISBN 0939-097
[0294] R. A. Street (Ed.): Technology and Applications of Amorphous Silicon, Springer-Verlag Heidelberg, 2010, pp. 346-349
[0295] WO 2014/198629 A1
[0296] Chen Guo-Hua et al.: “Transparent object detection and location based on RGB-D camera”, Journal of Physics: Conference Series, vol. 1183, 1 Mar. 2019, page 012011, XP055707266, GB ISSN: 1742-6588, DOI: 10.1088/1742-6596/1183/1/012011