Method for visualising sensor data and/or measurement data
11858347 · 2024-01-02
Assignee
Inventors
CPC classification
B60Q3/66; B60K35/29; B60K35/00; B60K35/28; B60Q3/745; B60Q3/20 (all Section B: PERFORMING OPERATIONS; TRANSPORTING)
International classification
B60K35/00; B60Q3/20; B60Q3/66 (all Section B: PERFORMING OPERATIONS; TRANSPORTING)
Abstract
A method for visualizing sensor data from the surroundings of a vehicle and/or measuring data from the vehicle uses light modules in the interior of the vehicle to visualize the sensor data. The sensor data is detected as video data, which is then analyzed for relevant recognizable structures; the relevant structures are transferred to a video sequence in a format fitting the respective light module. Alternatively or additionally, sensor data not detected as video data and/or measuring data is recalculated into video sequences via an algorithm. The video sequences from the different data are then superimposed and displayed on the light modules.
Claims
1. A method for visualizing sensor data from the surroundings of a vehicle or measuring data from the vehicle, wherein the sensor data is visualized using light modules in an interior of the vehicle, the method comprising: detecting the sensor data, wherein the detected sensor data includes video data from a camera of the vehicle and additional sensor data or measuring data from at least one additional sensor, wherein the at least one additional sensor detects settings inside of an interior of the vehicle, is a sensor of a driver assistance system of the vehicle other than the camera, or is a sensor of a telematics system of the vehicle; analyzing the video data to identify recognizable structures; transferring the recognizable structures to a video sequence with a format for the light modules in the interior of the vehicle or recalculating sensor data not detected as video data or measuring data into a video sequence using an algorithm; requesting, from a storage in the vehicle and based on the additional sensor data or the measuring data from the at least one additional sensor, pre-stored video sequences; and superimposing the video sequence with the pre-stored video sequences and displaying the video sequence superimposed with the pre-stored video sequences using the light modules.
2. The method of claim 1, wherein the video sequence is displayed using the light modules as live data.
3. The method of claim 1, wherein the vehicle interior further comprises ambient illumination lights, which are controlled depending on the video sequence displayed using the light modules.
4. The method of claim 1, wherein the superimposition of the video sequence is performed depending on a prioritization of sources of the sensor data.
5. The method of claim 1, wherein the light modules are controlled by a transducer.
6. A device for visualizing sensor data from the surroundings of a vehicle or measuring data from the vehicle, wherein the sensor data is visualized using light modules in an interior of the vehicle, the device comprising: an ambient vehicle illumination, which comprises the light modules; a plurality of sensors configured to detect the sensor data, wherein one of the plurality of sensors is a camera that captures video data as the sensor data, wherein a second one of the plurality of sensors is configured to detect settings inside of an interior of the vehicle, is a sensor of a driver assistance system of the vehicle other than the camera, or is a sensor of a telematics system of the vehicle; a storage configured to store pre-stored video sequences related to additional sensor data or measurement data from the second one of the plurality of sensors; and a central controller configured to analyze the video data to identify relevant recognizable structures; transfer the recognizable structures to a video sequence with a format for the light modules in the interior of the vehicle or recalculate sensor data not detected as video data or measuring data into a video sequence using an algorithm; and superimpose the video sequence with the pre-stored video sequences, wherein the light modules are configured to display the video sequence superimposed with the pre-stored video sequences using the light modules, wherein the central controller is directly or indirectly connected to the light modules using a solitary high-speed bus, wherein the storage is in the central controller or is directly connected to the central controller.
7. The device according to claim 6, wherein the light modules are a light band on a periphery of the interior of the vehicle.
8. The device according to claim 6, wherein the ambient vehicle illumination further comprises ambient illumination elements, wherein the ambient illumination elements are connected to a base controller that has a data connection to the central controller.
9. The method of claim 1, wherein when the storage in the vehicle does not have pre-stored video sequences for the sensor data or the measuring data from the at least one additional sensor, the method further comprises: generating, using the algorithm, a new video sequence; and storing the new video sequence in the storage as one of the pre-stored video sequences.
Description
BRIEF DESCRIPTION OF THE DRAWING FIGURES
(1) Here are shown:
DETAILED DESCRIPTION
(6) In the depiction of
(7) For its part, the central control device 9 is connected to a base control device 11 for the ambient vehicle illumination via the ethernet bus 10. Via a linear bus 12, this control device 11 can individually address up to 150 illumination elements 4, for example, and control each of them in terms of color and intensity depending on the location in the vehicle interior 1 at which the respective illumination element 4 is arranged.
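The location-dependent control of the individually addressable illumination elements 4 via the linear bus 12 could be sketched as follows; the 5-byte frame layout, field sizes, and function names are illustrative assumptions and not part of the disclosure:

```python
# Sketch of location-dependent control of individually addressable ambient
# illumination elements; the 5-byte frame layout is an assumed example.
NUM_ELEMENTS = 150  # up to 150 elements addressable via control device 11

def build_bus_frame(address: int, rgb: tuple, intensity: int) -> bytes:
    """Pack one hypothetical control frame: element address, color, intensity."""
    if not 0 <= address < NUM_ELEMENTS:
        raise ValueError("element address out of range")
    r, g, b = rgb
    return bytes([address, r, g, b, intensity])

# Set element 42 (e.g. a specific location on the periphery) to warm orange.
frame = build_bus_frame(42, (255, 128, 0), 200)
```

Each element receives only the frames addressed to it, so color and intensity can differ per location without the video sequence itself encoding the geometry.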
(8) Moreover, the central control device 9 is connected to the individual light modules 3, here thus the eight light modules 3 of the light band 2, via a solitary high-speed CAN-FD bus as a video link. In this way, up to 100 LEDs per light module 3, for example, can be controlled very quickly in a single-line or multi-line video display.
(9) The central control device 9 now essentially takes on three different tasks, which are schematically indicated in the depiction of
(10) Moreover, as already mentioned, the central control device 9 also processes data from the comfort (5), driver assistance (6) and telematics (7) domains via the ethernet bus 10. This data can likewise be processed as needed, via algorithms, into video sequences, each formatted to suit the control of the individual light modules 3 of the light band 2. The video sequences from the video processor 14 then reach a video parser 15, in which they are superimposed. The superimposition of the videos, for example of up to five individual videos compiled from different data sources in the regions 14.1 and/or 14.2 of the video processor 14, can thus be carried out in a priority-controlled manner to form an overall video sequence. The priority control is useful here in order to rank information relevant to safety more highly and to weight it more strongly than information relevant to comfort. In this way, an overall video emerges which can in principle present all the information, but which prioritizes the information most important to the user of the vehicle and thus makes it easier to recognize by means of a corresponding choice of light intensities and contrasts in the overall video. The data of this overall video is then transferred directly or, as in the exemplary embodiment depicted in
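The priority-controlled superimposition of several source videos into one overall video could be sketched as follows; the source names, the priority weights, and the weighted-average blending rule are assumptions for illustration only, not the disclosed implementation:

```python
import numpy as np

# Illustrative priorities: safety-relevant sources outweigh comfort sources.
# The source names and weights are assumptions for this sketch.
PRIORITY = {"driver_assistance": 3, "telematics": 2, "comfort": 1}

def superimpose(frames: dict) -> np.ndarray:
    """Blend per-source video frames (H x W x 3 arrays, values in [0, 1])
    into one overall frame, weighting higher-priority sources more strongly."""
    total = np.zeros_like(next(iter(frames.values())), dtype=float)
    weight_sum = 0.0
    for source, frame in frames.items():
        w = PRIORITY.get(source, 1)
        total += w * frame
        weight_sum += w
    return total / weight_sum

frames = {
    "driver_assistance": np.full((1, 8, 3), 0.9),  # bright warning band
    "comfort": np.full((1, 8, 3), 0.2),            # dim ambient video
}
overall = superimpose(frames)  # warning dominates: (3*0.9 + 1*0.2)/4 = 0.725
```

The weighted average is only one possible blending rule; the patent leaves the concrete superimposition open, requiring only that safety-relevant content remains the most recognizable in the overall video.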
(11) The transducer or mapper 16 in effect maps the individual video pixels of the overall video coming from the video parser 15 onto the individual light modules 3 or the CAN-FD busses 13 allocated to them. In this way, light modules of different lengths can also be controlled without the video sequences having to take this into consideration in advance. The individual video pixels of the overall video are thus prepared directly for the light modules 3 via the central control device 9 with the mapper 16, such that the light modules 3 themselves can be designed exceptionally simply.
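The mapping performed by the mapper 16, distributing the overall video's pixels onto light modules 3 of different lengths, could be sketched as follows; the eight module lengths in LEDs are illustrative assumptions:

```python
# Sketch of the mapper: slice one row of the overall video into per-module
# pixel runs. The module lengths (in LEDs) are assumed for illustration.
MODULE_LENGTHS = [100, 100, 60, 100, 100, 60, 100, 100]  # eight light modules

def map_to_modules(row: list, lengths: list) -> list:
    """Split a flat pixel row onto consecutive light modules so that the
    video sequence itself need not know the module geometry."""
    segments, start = [], 0
    for n in lengths:
        segments.append(row[start:start + n])
        start += n
    return segments

row = list(range(sum(MODULE_LENGTHS)))   # one row of overall-video pixels
segments = map_to_modules(row, MODULE_LENGTHS)
```

Because only the mapper knows the per-module lengths, the upstream video processing stays geometry-agnostic, which is exactly what lets the light modules be designed simply.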
(12) Optionally, an external storage 17 can also be provided, which is arranged in the central control device 9 or is directly connected to it. This storage 17 can hold pre-stored video sequences which, in certain situations detected via sensor data and/or measuring data, make a useful visualization of this data possible. In this case, the computational cost in the portion 14.2 of the video processor 14 is saved. These videos from the storage 17 are also superimposed in the video parser 15 in addition to the other videos. This is correspondingly indicated in the depiction of
(13) Moreover, a newly generated video sequence can be stored in the storage 17 via the video parser 15 or also via the video processor 14, in order to be able to use it as a pre-stored video sequence at a future point in time.
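The lookup with a generate-and-store fallback described in paragraphs (12) and (13), and claimed in claim 9, could be sketched as follows; the key format, the generator callback, and the frame representation are hypothetical:

```python
# Sketch of the pre-stored-sequence lookup with a generate-and-cache
# fallback. Keys and the generator function are hypothetical examples.
class SequenceStorage:
    """Stands in for storage 17: returns a pre-stored video sequence for a
    detected situation, or generates one via the algorithm and keeps it."""

    def __init__(self):
        self._sequences = {}

    def get_or_generate(self, sensor_key: str, generate) -> list:
        if sensor_key not in self._sequences:
            # No pre-stored sequence: run the algorithm and store the result
            # for future use, as in claim 9.
            self._sequences[sensor_key] = generate(sensor_key)
        return self._sequences[sensor_key]

storage = SequenceStorage()
seq1 = storage.get_or_generate("low_fuel", lambda k: [f"{k}_frame_{i}" for i in range(3)])
seq2 = storage.get_or_generate("low_fuel", lambda k: ["should_not_run"])
# The second call returns the cached sequence; the generator is not invoked.
```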
(14) Moreover, the central control device 9 is connected to the base control device 11 via the ethernet bus 10, as also correspondingly emerges from the depiction of
(15) Of course, the mapper 16 can also be dispensed with if the corresponding processing of the video sequences has already been carried out in the region of the video processor 14 and of the overall video in the video parser 15. The data can then be transferred directly from the video parser 15 to the individual light modules 3, in particular when these are identical to one another and have the same dimensions and pixel resolution.
(16) Although the invention has been illustrated and described in detail by way of preferred embodiments, the invention is not limited by the disclosed examples, and other variations can be derived from them by the person skilled in the art without leaving the scope of the invention. It is therefore clear that a plurality of possible variations exists. It is also clear that embodiments stated by way of example are really only examples that are not to be seen as limiting the scope, possible applications, or configuration of the invention in any way. Rather, the preceding description and the description of the figures enable the person skilled in the art to implement the exemplary embodiments in a concrete manner, wherein, with knowledge of the disclosed inventive concept, the person skilled in the art is able to undertake various changes, for example with regard to the functioning or arrangement of individual elements stated in an exemplary embodiment, without leaving the scope of the invention, which is defined by the claims and their legal equivalents, such as further explanations in the description.