System for creating a vehicle surroundings model
11530931 · 2022-12-20
CPC classification
G01C21/3602 (PHYSICS)
G01C21/367 (PHYSICS)
G01C21/3841 (PHYSICS)
Abstract
A system for creating a surroundings model of a motor vehicle is or can be connected with: at least one navigation unit equipped to provide information about the instantaneous position of the vehicle and about at least one segment of road in front of the vehicle in time and space, the navigation unit providing this information in a digital map format and/or as absolute position information; at least one interface equipped to communicate with at least one object to be merged in the surroundings of the vehicle, the information received by the interface including absolute position information on that object; and/or at least one sensor unit equipped to detect at least one object to be merged in the surroundings of the vehicle and to provide relative position information on that object relative to the vehicle. The system is equipped to ascertain the geometry of a segment of road in front of the vehicle by using the information supplied by the at least one navigation unit about that segment. The system is further equipped to merge the absolute position information and/or the relative position information on the at least one object to be merged with the information provided by the at least one navigation unit in the digital map format, based on the road geometry thereby ascertained, to create a vehicle surroundings model.
Claims
1. A system for creating a surroundings model for a vehicle, the system comprising: at least one navigation unit configured to provide instantaneous position information for the vehicle and road segment information about at least one segment of road in front of the vehicle in time and space, wherein the navigation unit provides the road segment information in a digital map format, the road segment information comprising course information about a course of one of a road border and a lane marking of the at least one segment of the road; at least one sensor unit configured to detect at least one object and to provide relative position information indicative of a distance between the vehicle and the at least one object, wherein the system is configured to ascertain a geometry of the at least one segment of the road in front of the vehicle based on the course information, the geometry of the at least one segment of the road comprising geometry points distributed and representative of one of the road border and the lane marking, each geometry point comprising absolute position information, wherein the system is configured to merge absolute position information for the at least one object with information provided by the at least one navigation unit in the digital map format to create a vehicle surroundings model, the merging comprising: identifying a plurality of alternative points for the at least one object based on the relative position information, the plurality of alternative points being located on a circle around the vehicle, the circle having a radius that corresponds to a distance from the vehicle to the at least one object; identifying a respective alternative point of the plurality of alternative points for the at least one object based on an evaluation of the plurality of alternative points and geometry points, wherein the respective alternative point comprises absolute alternative point information corresponding to the absolute position information for the 
at least one object; and creating the vehicle surroundings model based on the geometry of a road and the absolute position information for the at least one object, wherein the vehicle surroundings model includes at least one object model representative of the at least one object, and wherein the vehicle is controlled based on the vehicle surroundings model.
2. The system according to claim 1, wherein the system is configured to transform the relative or absolute position information on the at least one object to be merged into information in the digital map format and/or wherein the system is configured to transform the relative or absolute position information on the at least one object to be merged and the information in the digital map format into a predefined coordinate format.
3. The system according to claim 2, wherein the system is configured to ascertain the absolute position information based on a distance and additional information with respect to the segment of road.
4. The system according to claim 1, wherein the system is configured to ascertain geometry points whose absolute position information and/or whose position in the digital map format are known by using the information provided by the at least one navigation unit for ascertaining the geometry of the road.
5. The system according to claim 1, wherein the system is configured to ascertain an offset of the at least one object to be merged by merging instantaneous position information for the vehicle in the digital map format and the absolute or relative position information for the at least one object to be merged.
6. The system according to claim 5, wherein the system is configured to ascertain the offset of the at least one object to be merged by using a respective geometry point thereby ascertained or the geometry points thereby ascertained.
7. The system according to claim 1, wherein the system is configured to ascertain at least one node point whose absolute or relative position information and/or whose position in the digital map format is/are known by using the course information provided by the at least one navigation unit for ascertaining the geometry of the road.
8. The system according to claim 7, wherein the system is configured to estimate the geometry of the segment of road between the at least one object to be merged and the at least one node point closest to the object to be merged, wherein the system is further configured to estimate a distance between the at least one object to be merged and the at least one node point based on the estimated geometry of that segment of road.
9. The system according to claim 8, wherein the system is configured to ascertain an offset between the at least one node point and the at least one object to be merged based on the estimated distance.
10. The system according to claim 8, wherein the system is configured to estimate the course of the segment of road based on information detected by at least one sensor unit and/or based on information provided by at least one interface.
11. The system according to claim 1, wherein the system is configured to ascertain whether the at least one object to be merged is located on a same path or in a same lane as the vehicle.
12. The system according to claim 1, wherein the system is configured to ascertain information in the digital map format of an object to be merged whose absolute position information is known by means of a relative displacement vector starting from the instantaneous absolute position information on the vehicle.
13. A vehicle comprising a system according to claim 1.
14. The system according to claim 1, wherein a driver assistance system is configured to access the vehicle surroundings model for controlling the vehicle.
15. The system according to claim 1, wherein the system is a driver assistance system.
16. A method for creating a surroundings model of a vehicle, wherein the method comprises the steps: providing instantaneous position information for the vehicle and road segment information about at least one segment of road in front of the vehicle in time and space, wherein the road segment information is provided in a digital map format and comprises course information about a course of one of a road border and a lane marking of the at least one segment of the road; detecting at least one object to be merged in a surroundings of the vehicle to provide relative position information for the at least one object indicative of a distance between the vehicle and the at least one object; ascertaining a geometry of at least one segment of road in front of the vehicle based on the course information about the segment of road in front of the vehicle, the geometry of the at least one segment of the road comprising geometry points distributed and representative of one of the road border and the lane marking, each geometry point comprising absolute position information; merging absolute position information for the at least one object with information provided by at least one navigation unit in the digital map format to create a surroundings model, the merging comprising identifying a plurality of alternative points for the at least one object based on the relative position information, the plurality of alternative points being located on a circle around the vehicle, the circle having a radius that corresponds to a distance from the vehicle to the at least one object; identifying a respective alternative point of the plurality of alternative points for the at least one object based on an evaluation of the plurality of alternative points and geometry points, wherein the respective alternative point comprises absolute alternative point information corresponding to the absolute position information for the at least one object; creating the surroundings model based on the 
geometry of the road and the absolute position information for the at least one object, wherein the surroundings model includes at least one object model representative of the at least one object; and controlling the vehicle based on the vehicle surroundings model.
17. The method according to claim 16, wherein the method comprises at least one of the steps: transforming the relative or absolute position information on the at least one object to be merged into information in the digital map format and/or transforming the relative or absolute position information on the at least one object to be merged and the information in the digital map format into a predefined coordinate format.
18. The method according to claim 17, wherein the absolute position information is ascertained based on additional information with respect to the segment of road and/or the at least one object to be merged.
19. The method according to claim 16, wherein the geometry points whose absolute position information and/or whose position in the digital map format are known are ascertained by using the information provided by the at least one navigation unit for ascertaining the geometry of the road.
20. The method according to claim 16, wherein an offset of the at least one object to be merged is ascertained by merging the instantaneous position information for the vehicle in the digital map format and the absolute or relative position information on the at least one object to be merged.
21. The method according to claim 20, wherein an offset of the at least one object to be merged is ascertained by using the geometry point thereby ascertained.
22. The method according to claim 16, wherein node points whose absolute or relative position information and/or whose position in the digital map format is/are known are ascertained by using the information provided by the at least one navigation unit for ascertaining the geometry of the road.
23. The method according to claim 22, wherein the geometry of the segment of road between the at least one object to be merged and a respective node point closest to the at least one object to be merged is estimated such that a distance between the at least one object to be merged and the respective node point is estimated to provide an estimated distance based on the estimated geometry of the segment of road.
24. The method according to claim 23, wherein an offset between a respective node point and the at least one object to be merged is ascertained based on the estimated distance.
25. The method according to claim 23, wherein the course of the segment of road is estimated based on information detected by at least one sensor unit and/or based on at least one item of information provided by at least one interface.
26. The method according to claim 16, wherein it is ascertained whether the at least one object to be merged is located on a same path or in a same lane as the vehicle.
27. The method according to claim 16, wherein the information in the digital map format of an object to be merged whose absolute position information is known is ascertained by means of a relative displacement vector starting from the instantaneous absolute position information on the vehicle.
Description
BRIEF DESCRIPTION OF THE FIGURES
(1) Additional details, features, advantages and effects of the method and devices described here can be derived from the following description of currently preferred variants as well as from the drawings.
DETAILED DESCRIPTION OF THE DRAWINGS
(17) The sensor unit 110 may be, for example, a camera unit, a radar unit, a lidar unit or the like. The system 120 may also be connected to a plurality of sensor units 110, e.g., to a camera unit, a radar unit and a lidar unit at the same time. The sensor unit 110 supplies relative position information on an object to be merged (not shown) in the surroundings of the vehicle to the system 120. If the sensor unit 110 is a camera unit, it may be a time-of-flight (TOF) camera unit. A time-of-flight camera can detect the surroundings of the vehicle in 3D based on the distance measurement method it carries out: it illuminates the surroundings of the vehicle with pulses of light and measures, for each pixel, the time needed by the light to travel to the object and back. The measured time is then used to determine the distance from the detected object. The sensor unit 110 can additionally be equipped to detect the course of a road border and/or a lane marking. Furthermore, the sensor unit 110 may be equipped to detect the width of the road.
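The time-of-flight principle described above reduces to a one-line computation: distance is half the round-trip time multiplied by the speed of light. The sketch below is illustrative only; the function and constant names are chosen here and do not appear in the patent.

```python
# Illustrative sketch of the time-of-flight distance measurement described
# in the text: each pixel measures the round-trip time of a light pulse.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting object from the measured round-trip time:
    the light travels to the object and back, hence the factor 1/2."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse returning after roughly 66.7 ns corresponds to an object about 10 m away.
```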
(18) The navigation unit 130 is equipped to supply information about the instantaneous position of the vehicle and about at least one segment of road in front of the vehicle in time and space based on position information on the vehicle and/or map information. This information can be supplied in a digital map format. The navigation unit 130 may accordingly be equipped to ascertain the instantaneous position of the motor vehicle based on a signal, in particular a GPS signal. In addition, the navigation unit 130 may access map data in a digital map format stored in a memory in the navigation unit 130, supplied on an external data medium and/or by a cloud system. The map data may also contain information about the course of the road border and/or the course of the lane marking and/or the width of the road. The current vehicle position can be supplied to the navigation unit 130 in a digital map format. The map data may also include information about the geometry of the road and the topology of the segment of road in front of the vehicle.
(19) The interface 140 is equipped to communicate with at least one object to be merged in the surroundings of the vehicle. The information received by the interface 140 includes absolute position information on the at least one object to be merged. The interface 140 may also be an interface for the so-called “V2X” communication. V2X refers to the communication of a vehicle with objects. This expression thus includes communication of the vehicle with other vehicles, with infrastructure objects, but also with humans (pedestrians). Infrastructure objects may be, for example, traffic lights, traffic signs, mobile and stationary road surface borders, buildings, signs, billboards or the like.
(20) The system 120 is equipped to ascertain geometry points and/or node points with known absolute position information and/or with known position information in the path/offset format from the information supplied by the navigation unit 130. With the geometry points and/or node points thereby ascertained, the system 120 can ascertain the geometry of the segment of road in front of the vehicle. The system is additionally equipped to merge the absolute position information and/or the relative position information on the at least one object to be merged with the information supplied by the at least one navigation unit 130 in a path/offset format, based on the road geometry thereby ascertained, in order to create a vehicle surroundings model.
(22) An embodiment of a method for creating a surroundings model of a vehicle, which can be carried out by the system 120, for example, is described below with reference to
Offset_object = Offset_ego vehicle + ΔOffset
(24) The general procedure for merging information in the path/offset format with relative position information was explained above. A prerequisite for the merger according to the example shown in
(26) The ego vehicle 10 has a sensor unit (not shown), such as a camera unit or a radar unit, that serves to detect objects to be merged such as the traffic sign 22 (speed limit 60 km/h). According to this example, the traffic sign represents the object 22 to be merged. However, an object to be merged may also be another traffic participant, a street light, another traffic sign or a pedestrian, for example.
(27) The sensor unit (not shown) can supply relative position information with only one coordinate with respect to the object 22 to be merged. This position information may be, for example, the distance of the object 22 relative to the ego vehicle 10 (e.g., object is 10 meters away). However, it is also possible for the sensor unit to supply more accurate position information with at least two coordinates. Such coordinates may be given, for example, in polar coordinates or Cartesian coordinates. The position information may then include a distance and an angle such as, for example, the object is 50 meters away at an angle of 5° to the direction of travel.
(28) The variables shown in the following table are used.
(29)
  Symbol              Meaning
  x_w, y_w            World coordinates, e.g., WGS84 coordinates
  x_o                 Offset coordinate on a path in a path/offset format
  x_wo, y_wo or x_oo  World coordinates or offset coordinate of the object to be merged
  O                   Object point of the object to be merged (e.g., traffic sign), either in world coordinates O(x_wo, y_wo) or with offset coordinate O(x_oo)
  G                   Geometry point (e.g., lane marking), either in world coordinates G(x_w, y_w) or with offset coordinate G(x_o)
  S                   Node point for the merger (e.g., landmark), either with world coordinates S(x_w, y_w) or with offset coordinate S(x_o)
(30) The digital map usually provides information on the geometry of the road as absolute position information, for example in world coordinates. The geometry of the road includes, among other things, the geometry of the road markings and the geometry of lane markings. The geometry points G of a lane marking are shown in
(31) The sensor unit detects the object 22 to be merged and represents its relative position information in relation to the ego vehicle 10 either one-dimensionally, specifying only the distance from the ego vehicle 10, or by more accurate position information, for example, the angle and distance in relation to the ego vehicle 10 or a displacement vector.
(32) First, absolute position information is ascertained from the relative position information on the object 22 supplied by the sensor unit. The absolute position information can be given in world coordinates. The world coordinates of the object 22 (O(x_wO, y_wO)) are ascertained from the relative position information on the object 22. World coordinates are usually given not as Cartesian coordinates but, to a first approximation, as spherical coordinates. The WGS84 model uses an oblate spheroid to describe the earth's surface; for a simpler illustration of the merging, a spherical representation of the earth is assumed here. This approximation is accurate enough for the short distances between the vehicle and the object that are relevant for the merger.
(33) If the sensor unit supplies the angle α and the distance d to the object 22 to be merged, the world coordinates of the object 22 (O(x_wO, y_wO)) result from the world coordinates of the ego vehicle E(x_wE, y_wE):
(34)
x_wO = arcsin(sin x_wE · cos(d/R) + cos x_wE · sin(d/R) · cos α)
y_wO = y_wE + arctan2(sin α · sin(d/R) · cos x_wE, cos(d/R) − sin x_wE · sin x_wO)
(35) where d is the distance from the ego vehicle 10 to the object 22, α is the angle in the direction of the object 22, measured from the connecting line between the ego vehicle and the north pole, and R is the radius of the earth.
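Assuming x_w denotes latitude and y_w longitude in radians, this spherical destination-point computation can be sketched as follows. The sketch is illustrative only, not the patent's reference implementation, and the function and constant names are chosen here.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean earth radius for the spherical approximation

def object_world_coords(lat_e, lon_e, d, alpha, radius=EARTH_RADIUS_M):
    """Destination point on a sphere: starting at the ego position
    (lat_e, lon_e) in radians, travel the distance d at bearing alpha,
    measured from the direction of the north pole."""
    delta = d / radius  # angular distance d/R
    lat_o = math.asin(math.sin(lat_e) * math.cos(delta)
                      + math.cos(lat_e) * math.sin(delta) * math.cos(alpha))
    lon_o = lon_e + math.atan2(
        math.sin(alpha) * math.sin(delta) * math.cos(lat_e),
        math.cos(delta) - math.sin(lat_e) * math.sin(lat_o))
    return lat_o, lon_o
```

Heading due north (α = 0) from the equator, for example, increases only the latitude by d/R.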
(36) If the sensor unit supplies a displacement vector v⃗ = (a, b), then d and α must first be calculated from this vector. Next, O(x_wO, y_wO) can be determined using the equations given above. For example, the geometric relationships can yield the following:
d = √(a² + b²)
α = α_F + α_O
(37) where α_F is the orientation of the ego vehicle (measured from the connecting line between the ego vehicle and the north pole) and α_O is the angle between the longitudinal axis of the ego vehicle 10 and the object 22. This angle is derived from the ascertained vector v⃗.
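The conversion of the displacement vector into d and α can be sketched as below. The frame convention (component a along the vehicle's longitudinal axis, b lateral) is an assumption made for illustration; the patent only states that α_O is derived from the vector.

```python
import math

def vector_to_distance_and_bearing(a, b, alpha_f):
    """Convert a displacement vector (a, b) in the vehicle frame into the
    distance d and the bearing alpha measured from north, given the ego
    vehicle's orientation alpha_f (also measured from north).
    Assumption: a points along the longitudinal axis, b to the side."""
    d = math.hypot(a, b)        # d = sqrt(a^2 + b^2)
    alpha_o = math.atan2(b, a)  # angle between longitudinal axis and object
    return d, alpha_f + alpha_o # alpha = alpha_F + alpha_O
```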
(39) The sensor unit (not shown) supplies the distance d between the ego vehicle 10 and the object 22 to be merged, which is a traffic sign according to the example of a diagram in
(40) If, as described above, the absolute position information O(x_wO, y_wO) on the object 22 to be merged cannot be determined directly in world coordinates because of inadequate sensor information, then various types of additional information may be used to determine the best possible alternative point for the absolute position information on the object 22 to be merged. This additional information can be ascertained or detected by the following steps, for example:
- determining the type of the object to be merged (for example, traffic sign, other traffic participant), for example, by evaluating the camera information and/or position information;
- calculating a plurality of possible alternative points O_possible(x_w, y_w). This was described above with reference to the example of a diagram according to
(41) With the help of this information, a suitable alternative point is selected from the possible alternative points.
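The construction of candidate points on a circle of radius d around the vehicle, and the selection among them, might be sketched as follows. The nearest-geometry-point criterion used here is only one possible evaluation of the additional information, and all names are chosen for illustration.

```python
import math

def candidate_points(ego_x, ego_y, d, n=36):
    """Candidate positions for the object: n points on a circle of radius d
    around the ego vehicle (planar approximation for short ranges)."""
    return [(ego_x + d * math.cos(2 * math.pi * k / n),
             ego_y + d * math.sin(2 * math.pi * k / n)) for k in range(n)]

def select_alternative_point(candidates, geometry_points):
    """Pick the candidate closest to any known geometry point, e.g. when the
    object type suggests that it stands at the road border."""
    def score(p):
        return min(math.dist(p, g) for g in geometry_points)
    return min(candidates, key=score)
```

With geometry points along the right road border, the candidate on that border side of the circle would be selected.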
(43) After calculating the absolute position information in world coordinates of the object to be merged O(x_wO, y_wO), a suitable geometry point G_search must be selected that corresponds to O(x_wO, y_wO) as closely as possible. For example, the geometry point G at the shortest distance from O(x_wO, y_wO) may be chosen. If there is a large number of geometry points G (high data density), then in the simplest case a geometry point G may be assigned directly to the object to be merged O(x_wO, y_wO):
O(x_wO, y_wO) = G_search(x_w, y_w)
(44) If the distance of the object to be merged O(x_wO, y_wO) from the nearest geometry point G is greater than a predetermined threshold, then a direct correspondence cannot be found. In this case, a new geometry point G_interpol(x_wO, y_wO), which has a better correspondence with the object point O, may be determined by interpolation (or extrapolation) between two or more geometry points, so that it is then possible to determine:
O(x_wO, y_wO) = G_interpol(x_w, y_w)
(45) Depending on the course of the road geometry, the density of the geometry points G, the required precision and additional criteria, a linear method, a higher-order polynomial or some other suitable method may be used for interpolation and/or extrapolation. Because of the prerequisites described here, both the absolute position information in world coordinates x_w, y_w and the course of the offset x_o are known for the geometry point G_search. For this reason, the offset of the object 22 to be merged corresponds to the offset of G_search:
G_search(x_o) = O(x_oo)
(46) The offset of the object point O of the object to be merged is determined in this way. If an interpolated or extrapolated geometry point G_interpol is used as the reference, then the offset for G_interpol must first be interpolated or extrapolated. The basis is one or more known neighboring geometry points of the road geometry. Then the following assignment can be made:
G_interpol(x_o) = O(x_oo)
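The direct-assignment and interpolation logic described above can be sketched as below. The linear interpolation, the distance weighting and the threshold value are illustrative assumptions; the text explicitly allows higher-order methods.

```python
import math

def offset_for_object(obj_xy, geometry, threshold=2.0):
    """geometry: list of (x_w, y_w, x_o) tuples for geometry points G.
    If a geometry point lies within `threshold` of the object, its offset
    x_o is assigned directly (O(x_oo) = G_search(x_o)); otherwise the offset
    is linearly interpolated between the two nearest geometry points, a
    simple stand-in for G_interpol."""
    ranked = sorted(geometry, key=lambda g: math.dist(obj_xy, (g[0], g[1])))
    nearest = ranked[0]
    if math.dist(obj_xy, (nearest[0], nearest[1])) <= threshold:
        return nearest[2]
    g1, g2 = ranked[0], ranked[1]
    d1 = math.dist(obj_xy, (g1[0], g1[1]))
    d2 = math.dist(obj_xy, (g2[0], g2[1]))
    w = d1 / (d1 + d2)                  # weight toward the nearer point
    return g1[2] + w * (g2[2] - g1[2])  # interpolated offset
```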
(47) With reference to
G_105(x_o,105) = O(x_oo)
(50) In the present case, too, the absolute position information on the object to be merged O(x_wO, y_wO) can first be determined if it is not known. This was already described in detail above.
(51) After determination of the absolute position information on the object to be merged O(x_wO, y_wO), the node point S_search(x_w, y_w) at the smallest distance from the object O(x_wO, y_wO) can be found. In the example according to
(52)
d = R · arccos(sin x_wO · sin x_w + cos x_wO · cos x_w · cos(y_wO − y_w))
(53) For these equations, a spherical model of the earth with the radius R is assumed as the basis. Such a model of the earth is expected to meet the precision requirements of most applications. If greater precision is nevertheless required, other models of the earth (e.g., a rotational ellipsoid) may be used. For many applications, it is sufficient if node points and objects are referenced in a Cartesian coordinate system in the surroundings of the ego vehicle 10. This is true in particular of node points and objects within a close radius around the ego vehicle 10. When using a Cartesian coordinate system, d is obtained from
d = √((x_wO − x_w)² + (y_wO − y_w)²)
(54) Which model of the earth and which coordinate system are used for referencing the node points and objects will depend mainly on the precision requirements of the respective application and also on the available resources (processor speed, memory, available computation time, etc.). The distance d between the object to be merged O(x_wO, y_wO) and a node point S_n is an important criterion for the selection of a suitable node point S_n. For example, the node point S_n at the smallest distance from the object to be merged O(x_wO, y_wO) may be selected, but other parameters can also influence the choice of a suitable node point S_n. The node points S_n may thus have one (or more) confidence indicators. Such an indicator may state, for example, how high the confidence is that the node point is actually located at the stored position. A high confidence is obtained, for example, when the position of the node point S_n has been reported by an official authority (e.g., a highway authority reports the position of a speed limit sign in world coordinates) or when the position of the node point S_n has been confirmed by many different participants. If the confidence level is low, a node point can be ruled out for further processing, and a node point S_n at a greater distance d but with a higher confidence may be selected. Either one or more confidence parameters supplied by a data provider may be used as the confidence indicator, or the confidence parameter may be calculated before using the node point. For example, time stamps (e.g., the last confirmation of the position of the node point S_n), control parameters (e.g., the variance of the measured node point position), the type of data source (e.g., other traffic participants or a public authority) and the type of node point S_n (e.g., a traffic sign erected temporarily or permanently) may be used as input variables for calculating a confidence parameter.
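The confidence-based node point selection could be sketched as follows. The data layout, the threshold value and the names are assumptions made for illustration; the text allows richer confidence models.

```python
import math

def select_node_point(obj_xy, nodes, min_confidence=0.5):
    """nodes: list of dicts with 'xy', 'offset' and 'confidence' (0..1).
    Nodes below the confidence threshold are ruled out; among the remaining
    nodes, the one at the smallest distance d from the object is chosen, so
    a more distant but better-confirmed node can win over a closer,
    uncertain one."""
    trusted = [n for n in nodes if n["confidence"] >= min_confidence]
    if not trusted:
        return None
    return min(trusted, key=lambda n: math.dist(obj_xy, n["xy"]))
```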
(55) Because of the low data density of the node points S, it is to be expected that the offset O(x_o,o) of the object to be merged O(x_wO, y_wO) cannot be deduced directly from the offset S_n(x_o) of the selected node point S. A direct assignment of the offset of the selected node point S as the offset of the object to be merged O(x_wO, y_wO) is therefore impossible in most cases, so that the following holds:
S_search(x_o) ≠ O(x_o,o)
x_o,o = x_o,S_…
(57) It may optionally be necessary to take into account that the object 22 to be merged and the node point S_search are not located on the same side of the road. In
(58) In the case of b_road, other geometric variables in addition to the width of the road may be involved, such as the lane width or other distances that can be derived from the sensor information detected by the at least one sensor unit or from digital maps. Furthermore, the sensor units (for example, camera unit, radar unit) on the vehicle may be used to verify whether the prerequisites of a straight road course are met. If the road has a tight curve, the determination of
x_o,o = x_o,S_…
(60) The most precise possible method of determining
ΔOffset = ∫_a^b ds
(62) where a denotes the starting point of the path integration (e.g., a node point or the vehicle) and b denotes the location of the object 22 to be merged. Different coordinate systems (e.g., Cartesian or polar coordinates) may be used to solve this problem. The choice of coordinate system depends on the respective conditions, in particular on which input parameters can be used to estimate the course of the road geometry.
(63) As shown in
s_g = f(x_F)
(64) In the present case, this yields:
(66) If the distance d and angle α of the object 22 to be merged from the ego vehicle 10 are known (e.g., from the information from at least one sensor unit, such as the radar unit), then the following holds for b in the vehicle coordinate system:
b = d · cos α
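The path integral ΔOffset = ∫ds along an estimated road geometry s_g = f(x) can be evaluated numerically, for example with a trapezoidal rule over the arc-length element ds = √(1 + f′(x)²) dx. The sketch below is illustrative only; for a straight road (f ≡ 0) the offset change reduces to b = d·cos α as in the text.

```python
import math

def delta_offset(f, a, b, n=1000):
    """Numerically evaluate ΔOffset = ∫_a^b ds along the estimated road
    geometry s_g = f(x) in vehicle coordinates, using
    ds = sqrt(1 + f'(x)^2) dx and a simple trapezoidal rule."""
    def fprime(x, h=1e-6):
        # central finite difference for the slope of the road geometry
        return (f(x + h) - f(x - h)) / (2 * h)
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [math.sqrt(1.0 + fprime(x) ** 2) for x in xs]
    return (b - a) / n * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Integration limit from the sensed object: b = d * cos(alpha).
d, alpha = 50.0, math.radians(5)
b_limit = d * math.cos(alpha)
```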
(68) The path/offset representation of the relevant object 24 (e.g., the traffic light system) can be found in different ways. In the simplest case, for example, when the distance between the vehicle and the relevant object is small and the course of the road is approximately straight, a relative displacement vector (from the vehicle 10 to the object 24 to be merged) can be calculated from the absolute position of the vehicle and the absolute position of the object 24 to be merged. The relative displacement between the vehicle 10 and the object 24 is thus known. It was explained above with reference to
(69) Both the position information in the path/offset format and the absolute position information in world coordinates (WGS84 coordinates) are known for the ego vehicle 10. The relative displacement vector between the ego vehicle 10 and the traffic light system 24, which represents the object to be merged, can be calculated from the world coordinates of the ego vehicle 10 and of the traffic light system 24. This vector can then be used to calculate the position information of the traffic light system in the path/offset format from the position information of the ego vehicle 10 in the path/offset format. After calculating the position information in the path/offset format, the traffic light system can be inserted into a surroundings model of the vehicle. This vehicle surroundings model may be an electronic horizon (e.g., according to the ADASIS protocol), for example.
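For the short-range, straight-road case described here, the offset computation can be sketched as below. This is a planar approximation with assumed names and an assumed heading convention; it simply projects the displacement vector onto the driving direction and applies Offset_object = Offset_ego vehicle + ΔOffset.

```python
import math

def object_path_offset(ego_xy, ego_offset, obj_xy, heading):
    """Straight-road sketch: the relative displacement vector between ego
    vehicle and object (both already in a local metric frame) is projected
    onto the driving direction; the object's offset is then the ego offset
    plus that longitudinal component (ΔOffset)."""
    dx = obj_xy[0] - ego_xy[0]
    dy = obj_xy[1] - ego_xy[1]
    # longitudinal component of the displacement along the path direction
    along = dx * math.cos(heading) + dy * math.sin(heading)
    return ego_offset + along
```

An object 30 m straight ahead of a vehicle at offset 100 m would thus be inserted at offset 130 m on the path.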
(72) In addition to this approach, as shown in
(73) According to the embodiment of the method for creating a surroundings model of a vehicle as described with reference to
(75) By integrating these different types of information into a surroundings model (e.g., ADASIS), it is possible to implement the following applications in the ADAS/AD range, for example:
- Warning the driver of hazardous sites or traffic obstacles along the path traveled. Due to the merger of information, it is possible to ascertain whether the hazardous locations are on the path currently being driven and whether they are relevant for the vehicle.
- Taking into account information about the instantaneous condition of traffic lights and information about variable speed limits in the choice of speed for an intelligent cruise control (e.g., green ACC). Through the merger of information, real-time data from sources such as V2X can be merged and correlated with the other objects (traffic lights, traffic signs) from a digital map to form a shared surroundings model and/or horizon. Without this correlation, the information received from a V2X data source cannot be interpreted.
- Displaying information about the instantaneous traffic light condition and information about variable speed limits on a human-machine interface (HMI) in the vehicle. By merging information, it is possible to correlate and interpret data from different sources in a shared surroundings model.
- Choosing an alternative route due to traffic obstacles. Through the merger of information, it is possible to determine whether a traffic obstacle is relevant for the vehicle.
(76) The variants of the method or the devices described above as well as their functional aspects and operational aspects serve only to provide a better understanding of the structure, functioning and properties. They do not limit the disclosure to these embodiments. The figures are partially drawn schematically with a definite emphasis on important properties and effects in some cases to illustrate the functions, active principles, technical embodiments and features. Any functioning, any principle, any technical embodiment and any feature which is/are disclosed in the figures or text may be combined freely and at will with any claims, any feature in the text and/or in the other figures, other functioning, principles, technical embodiments and features contained in this disclosure or derivable therefrom so that all conceivable combinations can be attributed to and associated with the methods and/or devices as described. This also includes combinations of all the individual embodiments in the text, i.e., in any section of the description, the claims and also combinations of different variants in the text, the claims and the figures. For the value ranges given here, it holds that all numerical values in between are also disclosed.
(77) The claims also do not limit the disclosure content and thus the possible combinations of all the features presented here among one another. All the features disclosed are also disclosed explicitly individually and in combination with any other features.