Method and arrangement for determining a condition of a road surface
12559110 · 2026-02-24
Assignee
Inventors
Cpc classification
G01S7/4802
PHYSICS
G08B19/02
PHYSICS
International classification
G01S17/86
PHYSICS
Abstract
A method for determining a classification of a condition of a road surface for vehicle traffic includes: defining a reference surface of the road surface ahead; illuminating the reference surface with a light source on the vehicle; detecting light reflected off the reference surface; and determining classification of the condition of the road surface by analyzing detected information related to the reference surface. Furthermore, the method includes: illuminating the reference surface using laser or LED light; providing image information for the reference surface by detecting the reflected light with a Lidar unit; providing further image information for the reference surface by scanning the reference surface using an RGB sensor; and classifying the road surface by combining image information from the Lidar unit and the RGB sensor. The disclosure also relates to an arrangement for determining a classification of a condition of a road surface for vehicle traffic.
Claims
1. A method for determining a classification of a condition of a road surface for vehicle traffic, said method comprising: defining a reference surface of said road surface ahead of the vehicle; illuminating said reference surface with at least one light source positioned on the vehicle, the at least one light source being a laser or LED light; detecting light which is reflected off said reference surface by a Lidar unit to provide image information related to the reference surface; scanning said reference surface by an RGB sensor to provide additional image information related to said reference surface; determining said classification of the condition of the road surface by combining the image information and the additional image information related to said reference surface from said Lidar unit and said RGB sensor; defining a first image of the road surface by the image information provided by the Lidar unit; defining a second image of said road surface by the additional image information provided by the RGB sensor; and defining a combined image from the first and second images.
2. The method according to claim 1, further comprising: providing said first image by detected information regarding radiation reflected off the road surface within a predetermined wavelength interval.
3. The method according to claim 2, further comprising: providing said second image by detected information regarding edges, colors and contours and similar visual properties relating to the road surface.
4. The method according to claim 1, further comprising: providing said second image by detected information regarding edges, colors and contours and similar visual properties relating to the road surface.
5. The method according to claim 1, further comprising: modelling the laser or LED light from said light source in the form of either one or more light points or one or more lines.
6. The method according to claim 1, further comprising: determining a road surface condition selected from at least one of the following: a dry and non-covered road surface, a road surface which is covered with water, a road surface which is covered with snow, and a road surface which is covered with ice.
7. The method according to claim 1, further comprising: identifying, by said image data, one or more of the following road area sections: a left side lane, a right side lane, a left wheel track, a right wheel track; and a middle road section.
8. The method according to claim 1, further comprising: determining said road condition by using measurements of additional operational conditions related to said vehicle.
9. The method according to claim 1, further comprising: combining information, obtained within said reference surface ahead of said vehicle and related to color or similar optical properties detected by said RGB sensor with light intensity information detected by said Lidar unit, in order to provide said classification of the road surface.
10. The method according to claim 1, further comprising: determining which of the following conditions applies to the road surface: a dry and non-covered road surface, a road surface which is covered with water, a road surface which is covered with snow, and a road surface which is covered with ice.
11. The method according to claim 1, further comprising: identifying, by said image data, a plurality of different road area sections forming said road surface, and determining which of the following conditions applies to each one of the road area sections: a dry and non-covered road surface, a road surface which is covered with water, a road surface which is covered with snow, and a road surface which is covered with ice.
12. The method according to claim 1, further comprising: determining said classification of the road surface by combining image information related to said reference surface in the form of light intensity values from said Lidar unit and light intensity values, corresponding to a number of colors or color ranges, from said RGB sensor.
13. The method according to claim 1, further comprising: using color data in three channels in the form of different wavelength intervals generated by said RGB sensor, in said determining said classification of the road surface.
14. The method according to claim 13, further comprising: using additional color data in a fourth channel corresponding to 980 nm.
15. An arrangement for determining a classification of a condition of a road surface for vehicle traffic and within a reference surface of said road surface ahead of the vehicle, said arrangement comprising: at least one light source positioned on said vehicle and being configured to illuminate said reference surface; a sensor configured to detect light which is reflected off said reference surface; a light unit configured to illuminate said reference surface; a Lidar unit configured to provide image information related to said reference surface by detecting said reflected light; an RGB sensor configured to provide further image information related to said reference surface by scanning said reference surface by the RGB sensor; and a controller configured to determine said classification of the road surface by combining image information related to said reference surface from said Lidar unit and said RGB sensor by defining a first image of the road surface by the image information provided by the Lidar unit, defining a second image of said road surface by the additional image information provided by the RGB sensor, and defining a combined image from the first and second images.
16. The arrangement according to claim 15, wherein the controller is configured to determine which of the following conditions applies to the road surface: a dry and non-covered road surface, a road surface which is covered with water, a road surface which is covered with snow, and a road surface which is covered with ice.
17. A vehicle comprising: the arrangement for classification of a condition of the road surface according to claim 15.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further objects, features, and advantages of the present disclosure will appear from the following detailed description, wherein certain aspects of the disclosure will be described in more detail with reference to the accompanying drawings, in which:
DETAILED DESCRIPTION
(10) Different embodiments of the present invention will now be described with reference to the accompanying drawings. The arrangements described below and defined in the appended claims can be realized in different forms and should not be construed as being limited to the embodiments described below.
(11) With initial reference to
(12) In the embodiments below, it is assumed that the road surface 3 is constituted by asphalt.
(13) This disclosure is based on a requirement to determine a classification of a condition of the road surface 3. In particular, there is a need to determine which condition or conditions are present across the entire width of the road 2. As discussed initially, and as described in
(14) The purpose of
(15) In order to determine the road condition for the entire road surface 3, i.e. for all the road sections 3a-3e, the vehicle 1 is equipped with an arrangement comprising a light source 4, suitably in the form of a laser light source or an LED light source, a Lidar (light detection and ranging) unit 5 and an RGB camera 6, as shown in the embodiment of
(16) The purpose of the light source 4 is to generate light (i.e. laser light, alternatively LED light) which is used to illuminate the road surface 3. According to an embodiment, the light source 4 should be configured for a wavelength which is within an interval of approximately 900-2000 nm, and preferably a reference wavelength which is approximately 980 nm. In particular, it should be noted that light from the light source 4 at the wavelength 980 nm is not influenced by light absorption in water, snow or ice in the same manner as the light from the Lidar unit 5 (which is preferably of the wavelength 1550 nm). For this reason, the light from the light source 4 will be used as a reference. According to further embodiments, the wavelength can extend to lower values, towards approximately 400 nm, in order to provide illumination also in a visible wavelength range.
(17) According to an embodiment shown in
(18) Furthermore, the Lidar unit 5 is based on technology which is known as such and which is configured for detecting an object (and measuring the distance to the object) by first illuminating the object with laser light at a particular distance, and then measuring the time for detecting a reflection of the emitted light off the object which returns to the Lidar sensor. Lidar technology can be used to produce three-dimensional images of objects, and is suitable for example in the field of object recognition, obstacle detection and navigation systems in vehicles, both autonomous vehicles and regular, non-autonomous vehicles.
(19) The Lidar unit 5 is based on certain components, such as a laser emitter, a photodetector and control electronics. The laser emitter is configured for scanning a particular area, target or environment, and the photodetector is configured for detecting laser light which is reflected off any object being scanned by laser light and which is returned to the photodetector. In this manner, an image of the object can be produced in the Lidar unit 5. The basic technology behind a Lidar unit is previously known as such, and for this reason it is not described in greater detail here.
(20) According to an embodiment, the Lidar unit 5 is configured with a laser light source having a wavelength of 1550 nm, which is a wavelength which is eye-safe at relatively high power levels which are suitable for the embodiments according to this disclosure. However, the concept according to the disclosure is not limited to this particular wavelength only.
(21) Generally, and according to further embodiments, the Lidar unit 5 may be configured with a laser light source which has a wavelength which is 1450 nm or more.
(22) Furthermore, the RGB camera 6 comprises an image sensor of generally conventional type, i.e. which is arranged for sensing and capturing image information. One suitable type of image sensor is the so-called CMOS sensor. The image information is provided in accordance with the so-called RGB colour model, based on the principle that red, green and blue light is combined in order to produce a large number of colours. The technology related to today's RGB sensors is well-known, and for this reason it is not described in further detail here.
(23) In the following, it is assumed that the system comprises a separate light source 4, either in the form of a laser or LED light source. The light source 4 is suitably used in order to improve measurements when the vehicle is used during nighttime. However, according to a further embodiment (not shown in the drawings), this light source can be omitted.
(24) With reference to
(25) The Lidar unit 5 has a laser emitter device which is configured for generating a beam of modulated laser light towards the road surface 3 during operation of the vehicle 1. More precisely, the laser light is arranged so as to strike the road surface 3 at a confined area, i.e. a reference area 9, as shown in
(26) The disclosure is not limited to an embodiment in which one single laser light unit in a Lidar unit is used for illuminating the road surface 3. According to further embodiments, two or three such laser light units can be used.
(27) The laser light which strikes the road surface 3 will be reflected off the road surface 3 and will be scattered in different directions. A certain amount of the reflected light will be detected by the Lidar unit 5. The scattering of light from the road surface 3 and the absorption of light by the road surface 3 vary depending on the wavelength of the light and on the condition of the road surface 3. This means that the road condition can be determined by measuring the light intensity as a function of the wavelength.
(28) The Lidar unit 5 is arranged to capture point data within a wavelength interval which is adapted to the emitted light from the laser light unit in the Lidar unit 5, i.e. the data is the intensity of the reflected light from the illumination source.
(29) According to different embodiments, the wavelength can be chosen to be 980 nm, 1550 nm or 1940 nm. According to particular embodiments, the wavelengths can be chosen within the visible range, i.e. from approximately 400 nm and upwards.
(30) The positioning of the Lidar unit 5, and consequently also its laser light source, is suitably chosen so that it is situated relatively high up on the vehicle 1. In this manner, it will be arranged so that the light strikes the road surface 3 ahead of the vehicle 1, either relatively close to or relatively far away from the vehicle 1, and so that the intensity of the reflected light is sufficient for detection by means of the Lidar unit 5. The Lidar unit 5 can be configured so that the laser light strikes the road surface 3 within a large range, from approximately 2-5 meters ahead of the vehicle 1 and up to approximately 200-250 meters ahead of the vehicle 1, depending for example on the power of the laser light unit in the Lidar unit 5.
(31) Reflected laser light is measured using the Lidar unit 5. Based on the detected signal, it can for example be determined whether the road surface is covered with snow, ice or water.
(32) According to an embodiment, the Lidar unit 5 is used for determining whether the road surface 3, i.e. each one of the road surface sections 3a-3e, has one of a number of possible road surface conditions. For example: i) the road surface 3 may be dry and non-covered, i.e. which corresponds to a relatively warm and dry weather without any snow, ice or water which covers the road surface 3; or ii) the road surface 3 may be covered with water, i.e. which can be the case just after a rainfall; or iii) the road surface 3 may be covered with snow, which can be the case after a snowfall; or iv) the road surface 3 may be covered with ice, i.e. in case that snow or water covering the road surface 3 has transformed to ice.
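The four-way classification per road section described above can be sketched in Python as follows. This is a minimal illustration only; the enum, the section names and the function are not taken from the disclosure.

```python
from enum import Enum

class RoadCondition(Enum):
    """The four main road conditions named in the text."""
    DRY = "dry"
    WATER = "water"
    SNOW = "snow"
    ICE = "ice"

# Hypothetical labels mirroring road sections 3a-3e.
SECTIONS = ["left_lane", "right_lane", "left_track", "right_track", "middle"]

def classify_sections(per_section_condition):
    """Validate raw condition strings and map each road section to one
    of the four RoadCondition values."""
    return {s: RoadCondition(per_section_condition[s]) for s in SECTIONS}
```

Invalid condition strings raise a `ValueError`, so a downstream consumer only ever sees one of the four defined states.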
(33) According to an aspect of the invention, a method is provided which is configured for determining which particular road condition of the above-mentioned alternative road conditions exists at a given point in time on the road surface 3. According to an aspect, the invention is configured for identifying which one of the above-mentioned road conditions exists in each one of a number of road area sections. According to an aspect, these road area sections are constituted by a left side lane 3a, a right side lane 3b, a left wheel track 3c, a right wheel track 3d and a middle road section 3e. Other configurations of road area sections may occur within the principles of the invention.
(34) In addition to the above-mentioned four main types of road surface 3 coverings, the road surface 3 can be covered by combinations or mixtures of different types, for example a mixture of snow and water, i.e. sleet or slush, or a mixture of ice and water, i.e. a road surface covered with ice which in turn is covered with a layer of water. Furthermore, a particular type of covering is so-called black ice, i.e. clear ice which is generally transparent so that the road surface 3 can be seen through it.
(35) In case of snow covering the road surface 3, the snow can be for example in the form of bright white snow, which corresponds to a case where snow has just fallen, or it can be grey or dark, which corresponds to a case where the snow has been covering the road surface 3 for a relatively long period of time so that it is dirty from pollution and other substances. Both these conditions are relevant when determining the friction of the road surface 3 and for determining for example whether the road surface condition requires caution for drivers travelling along such roads.
(36) Furthermore, according to an embodiment, the vehicle 1 is equipped with the above-mentioned RGB sensor 6 for capturing digital images and storing image data related to said images for later analysis and image processing. The RGB sensor 6 is arranged in the vehicle 1 so as to generate said image data within the above-mentioned reference surface 9 which is located ahead of the vehicle 1. The direction of scanning of the RGB sensor 6 defines a predetermined angle with respect to a horizontal plane along which the road 2 is oriented.
(37) The RGB sensor 6 is used to define a scanning window which corresponds to a digital image which is formed by an array of a large number of image pixels. The scanning window is configured so as to cover the road surface 3 ahead of the vehicle 1, as described in
(38) According to an embodiment, the RGB sensor 6 and the control unit 7 are configured for detecting the RGB colour code for each image pixel in its scanning window. This corresponds to the optical properties of the image in question. In this manner, the control unit 7 may differentiate between different areas of the road surface 3 by comparing RGB color codes for the pixels corresponding to the entire scanning window.
(39) In particular, the RGB sensor 6 is arranged for scanning the entire transversal width of the road 2. Also, the image data generated by the RGB sensor 6 is combined with the image data generated by the Lidar unit 5 so as to determine a classification of the condition of the entire road surface 3. This will be described in greater detail below.
(40) As mentioned, the Lidar unit 5 and the RGB sensor 6 are connected to the control unit 7 which is arranged for analyzing incoming sensor data so as to determine which road condition applies in each road section 3a-3e. In particular, the control unit 7 comprises stored software for digital image treatment which is used for treatment of the image data from the RGB sensor 6. Accordingly, the RGB sensor 6 is configured for detecting certain characteristics on the road surface 3 and for identifying elements in a particular scene. For example, transitions between different road sections, such as for example an asphalt road area and an adjacent roadside with another surface covering, for example grass, can be detected. Also, the RGB sensor 6 can for example be used for distinguishing between side lanes covered with snow and dry wheel tracks. The RGB sensor 6 can also be used for recognizing other visual variations, such as edges and boundaries, in the scene which is represented by the road surface 3. Furthermore, the control unit 7 can be provided with software for example for filtering and enhancing image data from the RGB sensor 6 in order to contribute to an accurate image of the road surface 3. Also, the RGB sensor 6 can be used for detecting other obstacles in the vicinity of the road surface 3, such as for example other vehicles, bicycles and pedestrians.
(41) Generally, the image treatment software used in the control unit 7 can be used for identifying different road area sections by recognizing optical properties related to brightness or colour, or positions of edges and borders, or pattern recognition, extraction of image features or other image treatment in the different road areas. In this manner, the different road area sections 3a-e can be separated and identified based on their optical properties, as detected through the image data contained in the images as captured by the RGB camera unit 6.
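The separation of road areas by optical properties of their pixels can be illustrated with a minimal Python sketch. The brightness thresholds and the label names here are assumptions for illustration; they are not values from the disclosure.

```python
def label_pixels(rgb_image, white_threshold=200, dark_threshold=60):
    """Roughly partition pixels by the brightness of their RGB colour code.

    rgb_image: list of rows, each row a list of (r, g, b) tuples (0-255).
    Returns a grid of labels: 'bright' (e.g. snow-like), 'dark'
    (e.g. wet-looking asphalt) or 'mid'. Thresholds are illustrative.
    """
    labels = []
    for row in rgb_image:
        out = []
        for (r, g, b) in row:
            brightness = (r + g + b) / 3
            if brightness >= white_threshold:
                out.append("bright")
            elif brightness <= dark_threshold:
                out.append("dark")
            else:
                out.append("mid")
        labels.append(out)
    return labels
```

Grouping adjacent pixels with the same label would then yield candidate road area sections, in the spirit of the comparison of RGB colour codes described above.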
(42) Through the use of image data from the RGB sensor 6 and digital image treatment in the control unit 7, a precise image of the road surface 3 can be defined (as will be further described below with reference to
(43)
(44) The image data which is captured by the Lidar unit 5 corresponds to the detected light intensity resulting from reflections of the laser light from the laser light unit 4, as reflected in the road surface 3. The reflected laser light corresponds as such to a spectral response which in turn depends on the material of the road 3 and any material located on top of the road surface 3, such as ice, water or snow. This means that the Lidar unit 5 can be used to detect and distinguish between different areas or sections of the road surface 3 having different road conditions. However, the boundaries between the different areas 10, 11 are not entirely sharp but can rather be described as continuous transitions, which makes it difficult to provide an accurate image of the entire road surface 3.
(45)
(46) The RGB sensor 6 is configured for providing measurement data in four channels, more precisely in four different wavelength ranges corresponding to red, green and blue and also a NIR (near infrared) range. The measurement data is in particular constituted by values of the measured light intensity. The measurement data is provided in the form of images with spectral information in each of said wavelength ranges. To this end, a CMOS-based sensor is preferably used. Also, it is preferable that the RGB sensor 6 is of the type which is configured for so-called hyperspectral imaging. In particular, a wavelength of 980 nm is advantageous to use, primarily since it will not be visible to the human eye and consequently will not disturb other drivers or pedestrians.
(47) The image from the Lidar unit 5 and the spectral images from the RGB sensor 6 are combined or superposed by means of the control unit 7. This means that the combined information from the combined images can be used for example to add information from the RGB sensor 6 image regarding edges and boundaries to the Lidar unit 5 image. This combining of image data from the Lidar unit 5 and the RGB sensor 6 are suitably arranged as layers which are arranged on top of each other and which are aligned with each other as regards the position of the road surface 3 and its various sections and transitions. In this manner, corresponding objects in the image data from the Lidar unit 5 and the RGB sensor 6 can be aligned. For example, data representing for example a left wheel track 3c from the Lidar unit 5 can then be aligned with data representing the same left wheel track 3c from the RGB sensor 6. Such layered information can be displayed clearly on a computer screen so that the information from both the Lidar unit 5 and the RGB sensor 6 is clearly visible at the same time and in an aligned manner. In this manner, higher accuracy, improved resolution and improved contrast as compared with previously known systems can be obtained. This leads to an increased accuracy and consequently to improvements as regards road safety.
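The layering of Lidar and RGB image data described above can be sketched as follows, assuming both layers have already been aligned to the same pixel grid. The data layout is an illustrative choice, not the patented implementation.

```python
def combine_layers(lidar_intensity, rgb_image):
    """Superpose an aligned Lidar intensity grid and an RGB image so that
    each pixel carries both kinds of information, as layered data."""
    if len(lidar_intensity) != len(rgb_image):
        raise ValueError("layers must be aligned to the same grid")
    combined = []
    for li_row, rgb_row in zip(lidar_intensity, rgb_image):
        # Each combined pixel holds the Lidar intensity plus the RGB code.
        combined.append([{"lidar": li, "rgb": px}
                         for li, px in zip(li_row, rgb_row)])
    return combined
```

A classifier can then read both the intensity value and the colour code of the same aligned pixel, for example for the left wheel track region in both layers.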
(48) In summary, the system and method shown in
(49) An important feature of the present invention is that it uses a reference area 9 which covers a high number of laser lines, for example 256 laser lines, each of which consists of 256 measuring spots. The specific number of laser lines and measuring spots may obviously vary depending on which type of Lidar unit is used. This allows measurements along a relatively large part of a road surface, and at a relatively long distance from the vehicle 1. Consequently, early information regarding the road surface condition ahead of the vehicle 1 can be provided.
(50) According to an aspect of the invention, different wavelengths are used for road condition classification. When using light with different wavelengths for road condition classification, one of the main physical properties that is exploited is light scattering which is dependent on the surface roughness. For example, for a smooth surface there will be a specular reflection and for a rough surface there will be a diffuse reflection, changing the reflection intensities for a detector such as a photodetector, lidar or camera. This can be used for classifying wet and dry asphalt.
(51) In order to enable classification of several road conditions such as dry asphalt or asphalt covered with either water, ice or snow, or any combination of these different conditions, the light absorption is the main physical property that is exploited.
(52) In an RGB camera, the wavelengths for red (0.67 μm), blue (0.47 μm) and green (0.55 μm) are combined into an image, but by separating each part and combining them with a Lidar wavelength (suitably around 0.8-0.9 μm or 1.55 μm) it is possible to calculate parameters that enable classification of different road conditions, and especially road conditions that are slippery. See the markings of the different wavelengths in the form of corresponding vertical lines in
(53) According to an embodiment, a process of separating each part from the RGB sensor 6 and combining them with the Lidar wavelength may comprise the following steps: i) the light intensities for three selected frequencies or frequency intervals are detected by the RGB sensor 6; ii) the light intensity for one of the Lidar wavelengths (either 0.8-0.9 μm or 1.55 μm) is detected; iii) quotients are calculated by combining the measured light intensity of each one of the selected RGB sensor 6 frequencies with the measured light intensity from the Lidar unit 5; and iv) determining, from such a calculation step, a number of different quotients by means of which different road conditions can be determined (for example by defining limits for each one of the quotients which correspond to certain road conditions, or by other signal processing methods).
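Steps i) to iv) above can be sketched in Python as follows. The per-condition limit ranges passed in are hypothetical placeholders for the quotient limits mentioned in step iv).

```python
def classify_by_quotients(rgb_intensities, lidar_intensity, limits):
    """Compute one quotient per RGB channel intensity against the Lidar
    intensity (steps i-iii), then return the first condition whose limit
    ranges contain all three quotients (step iv).

    rgb_intensities: (I_red, I_green, I_blue) measured light intensities.
    lidar_intensity: measured intensity at the Lidar wavelength.
    limits: mapping condition -> three (lo, hi) ranges, one per quotient.
    """
    quotients = tuple(i / lidar_intensity for i in rgb_intensities)
    for condition, ranges in limits.items():
        if all(lo <= q <= hi for q, (lo, hi) in zip(quotients, ranges)):
            return condition, quotients
    return "unknown", quotients
```

Because quotients are ratios of intensities, they are less sensitive to overall illumination level than the raw intensities themselves, which is one reason ratio-based features are attractive here.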
(54)
(55) As another example, if 1690 nm is not available, 980 nm can be used as well, giving the results shown in
(56) Quotients which correspond to the different angles in the 3D space are different for each one of the road conditions. This allows a classification so as to distinguish between said road conditions. A further parameter which can be used is the distance from each cluster to the origin. It is however preferable to use the quotients in order to determine the road conditions. The term mV which is indicated in
(57) The values representing the wavelengths shown in
(58)
(59) According to an embodiment, the RGB sensor 6 generates, for each detected pixel, colour data in three channels in the form of different wavelength intervals, and if needed, the addition of a further channel corresponding to 980 nm. Referring to embodiments above, the wavelength may in some cases extend down to approximately 400 nm, i.e. within the range of visible light. Furthermore, the Lidar unit 5 will measure the intensity in the reflected light from the road surface. Different intensity values will be provided depending on the condition of the road surface, i.e. depending on whether there is for example ice, snow or water on the road surface. The information from the RGB sensor 6 will then be combined with the information from the Lidar unit 5.
(60) If, for example, the RGB sensor 6 should provide information stating that the detected colour in a given pixel is white (i.e. an RGB colour code corresponding to white), the information will then be combined with the information from the Lidar unit 5, i.e. intensity-based information which will determine whether the detected white colour corresponds for example to snow or ice (which both may appear to have the same white colour). In this situation, the Lidar unit 5 preferably uses the 1550 nm wavelength. The Lidar unit 5 will detect intensity-based values in which a relatively low intensity corresponds to snow, a medium intensity corresponds to ice and a higher intensity corresponds to water. An even higher intensity will correspond to a dry road surface.
(61) The colour values detected by the RGB sensor 6 (in the form of light intensity values) could consequently be combined with the intensity values from the Lidar unit 5 in order to accurately determine the road condition in each detected pixel. Values related to detected light intensity being generated by the Lidar unit 5 can be used in several intensity ranges which are used in combination with a number of colours or colour ranges (represented by detected light intensity values) being detected by the RGB sensor 6.
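The combination of a white RGB reading with Lidar intensity ranges can be sketched as follows. The ordering (snow lowest, then ice, then water, then dry) follows the text, while the numeric thresholds and the normalisation of intensity to 0-1 are assumptions made for the example.

```python
def interpret_white_pixel(lidar_intensity,
                          snow_max=0.3, ice_max=0.6, water_max=0.85):
    """For a pixel the RGB sensor reports as white, use the normalised
    Lidar intensity to decide between snow, ice, water and dry surface.
    Threshold values are illustrative, not from the disclosure."""
    if lidar_intensity < snow_max:
        return "snow"        # lowest reflected intensity
    if lidar_intensity < ice_max:
        return "ice"         # medium intensity
    if lidar_intensity < water_max:
        return "water"       # higher intensity
    return "dry"             # highest intensity
```

This resolves the ambiguity noted above, where snow and ice may produce the same white colour code in the RGB image but different reflected intensities at 1550 nm.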
(62) According to a further embodiment, a light source such as a LED light having the wavelength of 980 nm is used for transmitting such light on the road surface 3. Such light is not visible to the human eye and will consequently not disturb or distract other vehicle drivers or pedestrians. Furthermore, when such 980 nm light is used for illuminating snow or ice, there will be stronger reflections and a more accurate detection of colour values by means of the RGB sensor 6. Consequently, a higher quality of the process for determining the road surface condition will be provided.
(63) As another option, the control unit 7 may be configured for transmitting information regarding the road surface condition to external companies, for example road freight companies. Such information can be of assistance for example when planning which routes to travel.
(64) According to a further embodiment, self-learning algorithms can be used on both the Lidar and the RGB data, and also on the superimposed image-related data, in order to obtain an improved classification of the road surface. Also, other data, such as air temperature from the vehicle and similar information, could be used.
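As one example of such a self-learning approach, a minimal nearest-neighbour classifier over combined Lidar/RGB feature vectors can be sketched. This is only one of many possible algorithms and is not prescribed by the disclosure.

```python
def nearest_neighbour_classify(sample, training_data):
    """Classify a feature vector (e.g. combined Lidar and RGB quotients)
    by the label of its nearest labelled training example.

    training_data: iterable of (feature_vector, label) pairs.
    """
    def dist2(a, b):
        # Squared Euclidean distance; monotone with true distance.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_label, _ = min(
        ((label, dist2(sample, feats)) for feats, label in training_data),
        key=lambda t: t[1],
    )
    return best_label
```

New labelled measurements can simply be appended to the training set, so the classifier improves as more road conditions are observed.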
(65) In addition, the classification of the road surface condition can be further improved using other means of measurements, data and parameters which relate to the operation and condition of the vehicle 1. For example, it can be determined whether the windshield wipers are actuated in the vehicle. In such case, it can be assumed that there is either snow or rain falling on the road surface 3. According to a further example, it can be detected whether an arrangement of anti-lock braking system (ABS) (not shown in the drawings) arranged in the vehicle 1 is actuated. In such case, it can be assumed that the friction between the wheels and the road surface is relatively low, which may be the result of ice or snow covering the road surface. Other units, such as a traction control system (TCS) or an electronic stability control (ESC) system, determining parameters relating to the operation of the vehicle, can be used in order to determine the road surface condition, i.e. to determine whether the road surface 3 is covered with ice, water, snow or whether it is dry. This information can also be used for providing information related to the friction of the road surface 3 and its sections.
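The fusion of a preliminary optical classification with vehicle signals such as wiper and ABS status, in the spirit of the paragraph above, can be sketched as follows. The specific rules and the zero-degree threshold are illustrative assumptions.

```python
def refine_with_vehicle_signals(optical_condition, wipers_on,
                                abs_active, temp_c):
    """Adjust a preliminary optical road classification using vehicle
    signals: active wipers suggest precipitation, ABS activity suggests
    low friction. Rules are illustrative only."""
    condition = optical_condition
    if abs_active and condition == "dry":
        # Low measured friction contradicts a dry reading.
        condition = "ice" if temp_c <= 0 else "water"
    if wipers_on and condition == "dry":
        # Precipitation contradicts a dry reading.
        condition = "snow" if temp_c <= 0 else "water"
    return condition
```

Signals from TCS or ESC units could be folded in the same way, each acting as a cross-check on the optically determined condition.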
(66) The invention is not limited to the embodiments described above, but can be varied within the scope of the appended claims. For example, the invention is not limited to processing image data according to the RGB colour coding system. Another useful system is the so-called CMYK system, which is a subtractive colour system which uses four colours (cyan, magenta, yellow and black), which are normally used during colour printing. The CMYK system is based on a principle in which colours are partially or entirely masked on a white background.
(67) Also, data related to the classification of the road surface condition can be associated with a time stamp and also with position data. In other words, information can be generated which indicates when and where the road surface condition was classified. This is particularly useful if said data is to be used in applications for example for generating maps with information relating to the road surface condition along certain roads on such maps. Such map-generating applications can for example be used in other vehicles, in order to present relevant road-related status information.
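The association of a classification with a time stamp and position data can be sketched as a small record type suitable for feeding a map-generating application. Field names are illustrative, not from the patent.

```python
import time
from dataclasses import dataclass

@dataclass
class RoadConditionRecord:
    """A road condition classification stamped with when and where it
    was made, so it can later be placed on a map."""
    condition: str
    latitude: float
    longitude: float
    timestamp: float

def make_record(condition, latitude, longitude, clock=time.time):
    """Build a record; the clock is injectable to ease testing."""
    return RoadConditionRecord(condition, latitude, longitude, clock())
```

A stream of such records from many vehicles could then be aggregated into per-road-segment condition maps.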
(68) For example, other parameters than data from the Lidar unit 5 and the RGB sensor 6 can be used. Such an example is data related to the temperature of the road surface 3, which can be crucial when determining for example the friction of the different road area sections 3a-e. As an example, if the Lidar unit 5 indicates that the road surface condition (in the right wheel track 3d) corresponds to a dry surface and the RGB sensor 6 indicates that the middle road section 3e is darker than the right wheel track 3d, it can be assumed that the middle road section 3e is covered with water. If a temperature sensor also indicates that the temperature is relatively low, possibly also that the temperature is rapidly decreasing over time, there may be a considerable risk for very slippery road conditions.
(69) According to a further example, if the road condition sensor and the camera unit indicate that the wheel tracks are covered with water even though the temperature is below zero degrees Centigrade, it can be assumed that the wet road surface is the result of a use of road salt having been spread out on the road surface.
(70) Furthermore, the RGB sensor 6 can be used for generating image data also relating to the sky (see
(71) Also, the image data mentioned above can be data generated both in the form of still pictures and a video signal.
(72) Finally, the inventive concept is not limited to use in vehicles such as cars, trucks and buses, but can be used in fixed, i.e. non-movable, monitoring stations for carrying out measurements in the same manner as explained above.