METHOD FOR GENERATING A DIGITAL MODEL-BASED REPRESENTATION OF A VEHICLE
20230229120 · 2023-07-20
Abstract
A method for generating a digital model-based representation of a vehicle. The method includes: receiving sensor data of a plurality of acoustic sensors of a vehicle, wherein the sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle, and wherein the sensor data have been recorded for a plurality of trips of the vehicle; evaluating the sensor data and determining relations between the recorded sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment causing the sounds; and storing the determined relations in a model-based representation of the vehicle.
Claims
1. A method of generating a digital model-based representation of a vehicle, comprising the following steps: receiving sensor data of a plurality of acoustic sensors of the vehicle, wherein the sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle, and wherein the sensor data are recorded for a plurality of trips of the vehicle; evaluating the sensor data and determining relations between: (i) the recorded sounds of the vehicle and/or of the environment, and (ii) respective states of the vehicle and/or of the environment causing the respective sounds; and storing, in a model-based representation of the vehicle, the determined relations between the sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment.
2. The method of claim 1, wherein the sounds of the vehicle include: sounds of a motor and/or a transmission and/or a chassis and/or a shock absorption and/or a wheel suspension and/or of brakes, and/or of tires and/or a body of the vehicle, and wherein the respective states of the vehicle include: functional states of the motor and/or the transmission and/or the chassis and/or the shock absorption and/or the wheel suspension and/or the tires and/or the body and/or a speed and/or a loading state of the vehicle and/or a rolling resistance of the tires on a travel lane and a state of the travel lane and/or a coating of the body with moisture or snow or dust or leaves.
3. The method of claim 1, wherein the sounds of the environment include: sounds of further vehicles and/or sounds of pedestrians and/or sounds of animals and/or sounds of the vehicle reflected by buildings or vegetation situated in the environment and/or sounds of precipitation and/or sounds of snowfall and/or sounds of hail and/or sounds of wind, and wherein states of the environment of the vehicle include: a presence of vehicles and/or a presence of pedestrians and/or a presence of buildings and/or a presence of vegetation and/or a presence of precipitation and/or a presence of hail and/or a presence of snow.
4. The method of claim 3, further comprising detection of the objects in the environment including a position determination of the objects in the environment and/or a determination of a distance of the objects and/or a determination of a speed of the objects relative to the vehicle and/or a characterization of the objects.
5. The method of claim 1, wherein the sensor data include acoustic data of a plurality of microphones and/or data of a plurality of ultrasonic sensors.
6. The method of claim 1, wherein the determining of the relations between the sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment includes performing machine learning techniques on the sensor data, and wherein the storing of the determined relations includes storing a correspondingly trained artificial intelligence or a plurality of correspondingly trained artificial intelligences.
7. The method of claim 1, wherein the model-based representation of the vehicle is formed as a digital twin of the vehicle based on the acoustic sensor data.
8. A method of controlling a vehicle, comprising the following steps: receiving acoustic sensor data of a plurality of acoustic sensors of the vehicle, wherein the acoustic sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle; executing a model-based representation of the vehicle on the acoustic sensor data, wherein the model-based representation of the vehicle is generated by: receiving sensor data of a plurality of acoustic sensors of the vehicle, wherein the sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle, and wherein the sensor data are recorded for a plurality of trips of the vehicle, evaluating the sensor data and determining relations between: (i) the recorded sounds of the vehicle and/or of the environment, and (ii) respective states of the vehicle and/or of the environment causing the respective sounds, and storing, in the model-based representation of the vehicle, the determined relations between the sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment; determining a state of the vehicle and/or a state of the environment of the vehicle based on the acoustic sensor data of the vehicle and the relations stored in the model-based representation of the vehicle; and outputting control signals for controlling the vehicle taking into account the determined state of the vehicle and/or the determined state of the environment of the vehicle.
9. A computing unit configured to generate a digital model-based representation of a vehicle, the computing unit configured to: receive sensor data of a plurality of acoustic sensors of the vehicle, wherein the sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle, and wherein the sensor data are recorded for a plurality of trips of the vehicle; evaluate the sensor data and determine relations between: (i) the recorded sounds of the vehicle and/or of the environment, and (ii) respective states of the vehicle and/or of the environment causing the respective sounds; and store, in a model-based representation of the vehicle, the determined relations between the sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment.
10. A computer-readable storage medium on which is stored a computer program for generating a digital model-based representation of a vehicle, the computer program, when executed by a data processor, causing the data processor to perform the following steps: receiving sensor data of a plurality of acoustic sensors of the vehicle, wherein the sensor data describe sounds of the vehicle and/or sounds of an environment of the vehicle, and wherein the sensor data are recorded for a plurality of trips of the vehicle; evaluating the sensor data and determining relations between: (i) the recorded sounds of the vehicle and/or of the environment, and (ii) respective states of the vehicle and/or of the environment causing the respective sounds; and storing, in a model-based representation of the vehicle, the determined relations between the sounds of the vehicle and/or of the environment and the respective states of the vehicle and/or of the environment.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]
[0033]
[0034]
[0035]
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0036]
[0037]
[0038] In diagram a), the scene of a vehicle 301 traveling in a lane 305 is shown. The vehicle 301 includes a plurality of acoustic sensors 303 formed at various locations of the vehicle 301. For example, the acoustic sensors 303 may be configured as microphones or ultrasonic sensors. The acoustic sensors 303 may be arranged on the vehicle such that sounds within the vehicle or sounds within an environment 312 of the vehicle may be recorded. Distances or speeds of objects in the environment of the vehicle 301 may be determined by the ultrasonic sensors.
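The distance and speed determination mentioned above can be sketched, for example, as follows. This is a minimal illustration only: the function names, the constant speed of sound, and the two-measurement speed estimate are assumptions for this sketch, not details from the disclosure.

```python
# Illustrative sketch: how an ultrasonic sensor (303) could derive the
# distance and relative speed of an object (313) in the environment (312).
# All names and the simplified constant speed of sound are assumptions.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 deg C

def echo_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting object: the pulse travels there and back."""
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0

def relative_speed_m_s(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Relative speed from two consecutive distance measurements
    (negative = object approaching the vehicle)."""
    return (d2_m - d1_m) / dt_s
```

For example, an echo returning after 10 ms corresponds to an object roughly 1.7 m away.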
[0039] In order to generate a model-based representation 311 of the vehicle 301 based on the acoustic data of the acoustic sensors 303 of the vehicle 301, a plurality of different trips along different travel lanes 305 are performed by the vehicle 301, according to the present invention. For example, the trips may be configured as test drives and serve solely to record corresponding data through the acoustic sensors 303 of the vehicle 301. Thus, based on the plurality of trips, corresponding sensor data 304 are recorded by the acoustic sensors 303 formed at various locations in the vehicle 301. Various sounds within the vehicle or the environment of the vehicle may be recorded in the sensor data 304. Alternatively or additionally, when the acoustic sensors 303 are configured as ultrasonic sensors, distance or speed information of objects 313 situated in the environment 312 relative to the vehicle 301 is determined by the sensor data 304.
[0040] Depending on the placement of the acoustic sensors 303 at different positions of the vehicle, different sounds of different parts of the vehicle may be recorded by the measurements of the acoustic sensors. For example, sounds of the motor, transmission, chassis, shock absorption, wheel suspension, brakes, tires, and/or body of the vehicle may be recorded. In graph c) of
[0041] By acoustic sensors 303 arranged correspondingly towards the environment 312 of the vehicle 301, sounds within the environment, for example by objects 313 within the environment, can be recorded during travel. For example, in
[0042] To evaluate the sensor data 304 of the plurality of acoustic sensors 303 recorded during the test trips of the vehicle 301 and to generate a model-based representation 311 of the vehicle 301, the sensor data 304 are transmitted to a computing unit 302. The computing unit 302 is configured to perform the method according to the present invention of generating a model-based representation 311 of the vehicle 301. In addition to the sensor data 304 of the plurality of acoustic sensors 303, state data 315 are also provided. State data 315 describe states of the vehicle 301 or states of the environment 312 of the vehicle 301 that the vehicle 301 or the environment of the vehicle experienced during the recording of the sensor data 304 by the acoustic sensors 303.
[0043] Based on the sensor data 304 and the state data 315, a plurality of relations between the sensor data 304 and the corresponding state data 315 may be determined to generate the model-based representation 311 of the vehicle 301. The relations between the sensor data 304 and the state data 315 describe connections between the sounds of the vehicle 301 or of the environment 312 recorded in the sensor data 304 and the states of the vehicle 301 or environment 312 causing the sounds of the vehicle 301 and the environment 312, respectively.
[0044] For the example of the acoustic sensor 303 arranged near the tire 307 of the vehicle 301, the state data 315 may include a tire pressure or model of tire 307 or information relating to the roadway surface 306 of the travel lane 305. Thus, by evaluating the corresponding sensor data 304 of the acoustic sensor 303 arranged on the tire 307 and the corresponding state data 315, relations may be established between tire sounds of the tire 307 recorded by the respective acoustic sensor 303 and, for example, the tire pressure of the tire 307 or the tire model of the tire 307 or the respective lane surface 306 of lane 305, which was traveled by the vehicle 301 at the time of recording the sensor data 304. Thus, by way of the corresponding relations between the recorded tire sounds and the respective state of the tire 307, upon successful generation of the model-based representation 311 of the vehicle 301 based on the tire sounds of the tire 307 recorded by the acoustic sensor 303, it is possible to draw conclusions as to the state of the tire 307. Thus, speed information of the vehicle included in the state data 315 and/or information regarding the lane surface 306 can be considered as well.
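A simple feature that could be extracted from the tire sound in this example is the signal level of a recorded audio frame; this is a hedged sketch, since the disclosure leaves the concrete features open, and a real system would likely use richer spectral features.

```python
# Illustrative feature extraction for the tire example: the root-mean-square
# (RMS) level of a frame of recorded tire sound, which could then be related
# to tire pressure or lane surface via the state data 315.
# The function name and the choice of feature are assumptions for this sketch.

import math

def rms_level(samples):
    """Root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```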
[0045] Analogously, the sensor data 304 of acoustic sensors 303 arranged on the motor of the vehicle and state data 315 comprising motor states, may be used to establish corresponding relations between the motor sounds recorded by the sensor data 304 and corresponding motor states.
[0046] The established relations yield mappings between the sounds of the sensor data 304 and the states of the state data 315. The mappings can be used to determine corresponding states based on recorded sensor data 304 upon successful generation of the model-based representation 311.
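One possible realization of such a mapping, sketched here under the assumption that stored relations are (feature, state) pairs, is a nearest-neighbour lookup: a newly recorded sound is assigned the state of the most similar stored sound. The data layout and function name are hypothetical.

```python
# Minimal sketch of using the stored mappings: given new sound features,
# find the most similar stored sound and return its associated state.
# stored_relations: list of (features, state) tuples; layout is an assumption.

def infer_state(stored_relations, features):
    """Return the state of the stored relation whose features are closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(stored_relations, key=lambda rel: dist(rel[0], features))
    return best[1]
```

A nearest-neighbour lookup is only one conceivable realization; the disclosure also covers trained neural networks.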
[0047] Analogously, corresponding relations may be derived between the sounds of the environment 312 recorded in the sensor data 304 of the acoustic sensors 303 facing the environment 312 and the states of the environment 312 described in the state data 315. Such state data 315 may comprise information regarding buildings, vegetation, or road routing obtained from representations of the respective travel lanes 305.
[0048] For the shown example of the further vehicle 314 located in travel lane 305, state data 315 may include information of a camera data-based object detection, in which it was possible to detect the further vehicle 314 traveling in lane 305. The sounds recorded in the corresponding sensor data 304 within the environment 312 may thus be uniquely associated with the respective state of the environment 312, taking into account the state data 315 describing the further vehicle 314, wherein, for example, the state of the environment 312 describes the presence of the further vehicle 314.
[0049] Thus, by evaluating the sensor data 304 recorded during test drives and the state data 315 accordingly provided, the relations between vehicle sounds and/or environmental sound recorded in the sensor data 304 and the respective states of the vehicle and/or states of the environment can be determined. The determined and stored relations between recorded vehicle or environmental sounds and the respectively associated states of the vehicle or of the environment represent, according to the present invention, the model-based representation 311 of the vehicle 301.
[0050] According to one embodiment, the evaluation of the sensor data 304 or state data 315 and the generation of the relations between the recorded vehicle or environmental sounds and the respectively associated states of the vehicle or of the environment may be achieved by applying machine learning techniques. For example, for the plurality of acoustic sensors 303, a correspondingly constructed neural network, or a plurality of neural networks, may be used to generate the relations between the vehicle or environmental sounds recorded in the sensor data 304 and the respective states of the vehicle or of the environment. By correspondingly training the neural network or the plurality of neural networks on the sensor data 304 and the state data 315, the relations between the recorded sounds and the states of the vehicle or of the environment can be generated.
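As a deliberately simplified stand-in for such a trained network, the following sketch trains a single perceptron to map one sound feature (e.g. a mean tire-noise level) to a binary vehicle state (e.g. normal vs. low tire pressure). This is not the neural network of the disclosure, only an illustration of the learning step; feature, labels, and hyperparameters are invented for the example.

```python
# Hedged sketch of the machine-learning step: a minimal perceptron trained on
# (sound feature, state label) pairs, standing in for the neural networks
# mentioned in the text. Everything here is illustrative.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """samples: list of floats; labels: 0/1 state labels. Returns (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x   # update only on misclassification
            b += lr * (y - pred)
    return w, b

def predict(w, b, x):
    """Predicted state label (0 or 1) for a new sound feature."""
    return 1 if w * x + b > 0 else 0
```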
[0051] Relations between distance and speed information and corresponding states of the environment can be determined analogously based on distance or speed information of sensor data from acoustic sensors 303 configured as ultrasonic sensors.
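One physically plausible source of such speed information, sketched here purely as an illustration, is the Doppler shift of the ultrasonic echo; the disclosure does not specify the measurement principle, so the formula and names below are assumptions.

```python
# Illustrative Doppler-based speed estimate for an acoustic sensor configured
# as an ultrasonic sensor: a reflector moving toward the sensor shifts the
# echo frequency upward. Constants and names are assumptions for this sketch.

SPEED_OF_SOUND_M_S = 343.0

def relative_speed_from_doppler(f_emitted_hz, f_received_hz):
    """Approximate relative speed (positive = approaching) of the reflector,
    using the two-way Doppler shift: delta_f ~ 2 * v * f0 / c."""
    shift = f_received_hz - f_emitted_hz
    return shift * SPEED_OF_SOUND_M_S / (2.0 * f_emitted_hz)
```

For a 40 kHz pulse, an echo shifted by about 233 Hz corresponds to a reflector approaching at roughly 1 m/s.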
[0052] According to one embodiment, the sounds of the vehicle recorded by the sensor data 304 may include sounds of a motor, a transmission, a chassis, a shock absorption, a wheel suspension, brakes, tires, and a body of the vehicle, as well as other sounds of other components of the vehicle. The respective states of the vehicle described in the state data 315 can in this case include functional states of the motor, the transmission, the chassis, the shock absorption, the wheel suspension, the tires, the body, as well as information regarding the speed, the loading state, the rolling resistance of the tires 307 on a travel lane 305, as well as a state of the travel lane 305 or information regarding a coating of the body with moisture, snow, hail, leaves, or dust or other states of the vehicle 301.
[0053] According to one embodiment, the sounds of the environment 312 recorded by the sensor data 304 of the acoustic sensors 303 may include sounds of further vehicles 314, sounds of pedestrians, sounds of animals, sounds of the vehicle 301 reflected by buildings or vegetation situated in the environment 312, sounds of precipitation, snowfall, hail, wind, and other sounds detectable in the environment 312 of the vehicle. The states of the environment 312 may therefore include the presence of objects 313, such as other vehicles 314, pedestrians, buildings, vegetation, or the presence of specific weather conditions such as precipitation, hail, snowfall, or other states of the environment.
[0054] Diagram b) of
[0055] According to one embodiment, the model-based representation 311 can be configured as a digital twin of the vehicle 301.
[0056] In the embodiment shown, the computing unit 302 configured to generate the model-based representation 311 of the vehicle 301 is configured as an external computing unit. For example, it may be configured in the form of a data center for creating the model-based representations 311, which is structured, for example, as a corresponding server unit that can communicate with the respective vehicle 301 via data transmission. In the embodiment shown, the computing unit 310 configured to execute the generated model-based representation 311 of the vehicle 301 is configured in the vehicle 301. Alternatively, the computing unit 310 may also be configured as an external computing unit. To this end, a data communication is provided between the external computing unit 310 and the vehicle 301, via which the states of the vehicle 301 or of the environment 312 determined by executing the model-based representation 311 on the sensor data 304 of the vehicle 301 can be provided to the vehicle 301 while traveling.
[0057]
[0058] According to the present invention, in a first method step 101, sensor data 304 of a plurality of acoustic sensors 303 of the vehicle 301 are received. For this purpose, the sensor data 304 are recorded during a plurality of trips, for example, test trips, of the vehicle 301 along a plurality of different travel lanes 305. The sensor data 304 here describe vehicle sounds and sounds of the environment 312 of the vehicle 301 and may additionally and/or alternatively include distance or speed information of objects 313 in the environment 312 of the vehicle 301.
[0059] In a further method step 103, the sensor data 304 are evaluated and relations between the recorded sounds of the sensor data 304 and the distance and speed information of the sensor data 304 and states of the vehicle 301 and states of the environment of the vehicle 301 are determined. To this end, state data 315 may be considered that include information regarding the particular states of the vehicle or the states of the environment 312 of the vehicle 301.
[0060] In a further method step 105, the relations between the sounds and/or the distance and speed information of the sensor data 304 and the states of the vehicle 301 and/or the states of the environment 312 of the state data 315 are stored in the corresponding model-based representation 311 of the vehicle 301.
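The three method steps 101, 103, and 105 can be summarized in one compact sketch. The time-aligned pairing of sensor data 304 with state data 315 and the dictionary result format are assumptions for illustration; the disclosure leaves the concrete data layout open.

```python
# Compact sketch of the generation method: receive sensor data (step 101),
# determine relations using the provided state data (step 103), and store
# them as the model-based representation 311 (step 105).
# sensor_data: list of (t, features); state_data: list of (t, state).

def generate_representation(sensor_data, state_data):
    # Step 101: sensor data recorded over a plurality of trips are received.
    states = dict(state_data)  # t -> state logged at that time
    # Step 103: relate each recording to the state logged at the same time.
    relations = [(features, states[t])
                 for t, features in sensor_data if t in states]
    # Step 105: the stored relations form the model-based representation.
    return {"relations": relations}
```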
[0061]
[0062] According to the present invention, in a first method step 201, sensor data 304 of a plurality of acoustic sensors 303 of the vehicle 301 are recorded while driving the vehicle 301.
[0063] In a further method step 203, the model-based representation 311 of the vehicle 301, generated according to the method 100 of the present invention for generating a model-based representation 311 of a vehicle 301, is executed on the recorded sensor data 304.
[0064] In a further method step 205, by executing the model-based representation 311 of the vehicle 301 on the sensor data 304 of the acoustic sensors 303, at least one state of the vehicle 301 or one state of the environment 312 of the vehicle 301 is determined.
[0065] In a further method step 207, corresponding control signals are output based on the determined state of the vehicle 301 or the state of the environment 312 of the vehicle 301. The control signals are configured to control the vehicle 301 taking into account the determined states of the vehicle 301 and of the environment 312, respectively.
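The control method steps 201 through 207 can likewise be sketched end to end. The nearest-match state determination and the state-to-signal table are hypothetical placeholders: the disclosure specifies neither how the stored relations are evaluated nor which concrete control signals are output.

```python
# Sketch of the control method: determine a state by executing the stored
# representation on new sound features (steps 203/205), then output a
# control signal taking that state into account (step 207).
# representation: {"relations": [(features, state), ...]} -- layout assumed.

def control_step(representation, features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Steps 203/205: state of the closest stored sound.
    state = min(representation["relations"],
                key=lambda rel: dist(rel[0], features))[1]
    # Step 207: hypothetical state-to-signal mapping.
    signal = {"wet_road": "increase_braking_distance"}.get(state, "no_action")
    return state, signal
```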
[0066]
[0067] The computer program product 400 in the shown embodiment is stored on a storage medium 401. The storage medium 401 can be any storage medium available in the related art.