Method for generating an audio signal, in particular for active control of the sound of the engine of a land vehicle, and corresponding apparatus

11282494 · 2022-03-22

Abstract

A method for generating an audio signal representing the dynamic state of a land vehicle as a function of the value of at least one first parameter comprises the steps of receiving an input audio signal and dividing the input audio signal into a set of audio frames of given duration; selecting an audio frame representing the input audio signal; subdividing the selected audio frame into a set of audio sub-frames; determining a probability of transition between orderly pairs of the audio sub-frames and synthesizing the audio signal representing the dynamic state of a land vehicle by generating a sequence of audio sub-frames; and computing a centre frequency of the selected audio frame and carrying out a frequency shift of the centre frequency of each subsequent audio sub-frame as a function of the centre frequency of the selected audio frame and of the value of the at least one first parameter that indicates the dynamic state of the vehicle, thereby generating a frequency-shifted subsequent audio sub-frame.

Claims

1. A method for generating an audio signal (SG) representing a dynamic state of a land vehicle as a function of a value of at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle and on the basis of an input audio signal, said method comprising the steps of: receiving an input audio signal (GSS) and dividing said input audio signal (GSS) into a set of audio frames (F.sub.1, F.sub.2, . . . , F.sub.M) of given duration; selecting, in said set of audio frames (F.sub.1, F.sub.2, . . . , F.sub.M), an audio frame (SF) representing said input audio signal (GSS); subdividing said selected audio frame (SF) into a set of audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N) of given duration; determining probabilities of transition (TP) between orderly pairs of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N); synthesizing said audio signal (SG) representing the dynamic state of a land vehicle by generating a sequence of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N), each subsequent audio sub-frame (SSBF) in said sequence being selected as a function of a current audio sub-frame and of the transition probability (TP) associated thereto; computing a centre frequency (FCF) of a frequency spectrum of said selected audio frame (SF); and said operation of synthesizing said audio signal (SG) representing the dynamic state of a land vehicle further comprising carrying out a frequency shift of a centre frequency of said subsequent audio sub-frame (SSBF) as a function of said centre frequency (FCF) of the frequency spectrum of said selected audio frame (SF) and of the value of said at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle, thereby generating a frequency-shifted subsequent audio sub-frame (PSBF).

2. The method as set forth in claim 1, wherein said centre frequency (FCF) of the frequency spectrum of said selected audio frame (SF) is computed as a spectral centroid of the selected audio frame (SF).

3. The method as set forth in claim 1, wherein said at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle comprises one between an engine speed (ESP) of said land vehicle and a speed (VSP) of said land vehicle, and carrying out a frequency shift of the centre frequency of said subsequent audio sub-frame (SSBF), thereby generating a frequency-shifted subsequent audio sub-frame (PSBF), comprises: receiving the value of said at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle and determining a characteristic frequency as a function of said value of said at least one first parameter (ESP, VSP); determining a frequency scale factor (FSF) between said characteristic frequency and said centre frequency (FCF) of the frequency spectrum of said selected audio frame (SF); and applying said frequency scale factor (FSF) to the frequency spectrum of said subsequent audio sub-frame (SSBF), thereby generating a frequency-shifted subsequent audio sub-frame (PSBF).

4. The method as set forth in claim 1, wherein said at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle comprises one between an engine speed (ESP) of said land vehicle and a speed (VSP) of said land vehicle, and carrying out a frequency shift of the centre frequency of said subsequent audio sub-frame (SSBF), thereby generating a frequency-shifted subsequent audio sub-frame (PSBF), comprises: receiving the value of said at least one first parameter (ESP, VSP) that indicates the dynamic state of said land vehicle; associating a frequency scale factor (FSF) to said value of said at least one first parameter (ESP, VSP) as a function of a configurable map; and applying said frequency scale factor (FSF) to the frequency spectrum of said subsequent audio sub-frame (SSBF), thereby generating a frequency-shifted subsequent audio sub-frame (PSBF).

5. The method as set forth in claim 1, comprising applying to said frequency-shifted subsequent audio sub-frame (PSBF) at least one amplitude gain as a function of a value of a second parameter (ESP, VSP, GPP, ET, EST) that indicates the dynamic state of said land vehicle.

6. The method as set forth in claim 5, wherein said second parameter (ESP, VSP, GPP, ET, EST) that indicates the dynamic state of said land vehicle comprises at least one from among: an engine speed (ESP) of said land vehicle, a speed (VSP) of said land vehicle, a position of an accelerator pedal (GPP) of said land vehicle, an engine torque (ET) of an engine of said land vehicle, and a state (EST) of said engine of said land vehicle.

7. The method as set forth in claim 5, wherein said at least one amplitude gain is a function of the value of said second parameter (ESP, VSP, GPP, ET, EST) that indicates the dynamic state of said land vehicle according to a configurable map.

8. The method as set forth in claim 1, wherein: said operation of selecting, in said set of audio frames (F.sub.1, F.sub.2, . . . , F.sub.M), an audio frame (SF) representing said input audio signal (GSS) comprises computing, as a function of the values of at least one characteristic quantity computed for said input audio signal (GSS) and for each of said audio frames (F.sub.1, F.sub.2, . . . , F.sub.M), a measurement of similarity between each of said audio frames in said set of audio frames (F.sub.1, F.sub.2, . . . , F.sub.M) and said input audio signal (GSS), and selecting said selected audio frame (SF) in said set of audio frames (F.sub.1, F.sub.2, . . . , F.sub.M) as a function of said measurement of similarity between each of said audio frames (F.sub.1, F.sub.2, . . . , F.sub.M) and said input audio signal (GSS); and said operation of determining probabilities of transition (TP) between orderly pairs of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N) comprises computing, as a function of the values of at least one characteristic quantity computed for said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N), a measurement of similarity between pairs of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N), and determining said probabilities of transition (TP) between orderly pairs of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N) as a function of said measurements of similarity between pairs of said audio sub-frames (SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N).

9. The method as set forth in claim 1, comprising modifying spectral characteristics and/or amplitude characteristics of said audio signal (SG) representing the dynamic state of a land vehicle as a function of at least one from among a state (EST) of an engine of said land vehicle, a state of said land vehicle, conditions external to said land vehicle, a location of said land vehicle, and a schedule of use of said land vehicle.

10. The method as set forth in claim 1, comprising receiving different input audio signals (GSS) and/or selecting each subsequent audio sub-frame (SSBF) in said sequence as a function of the current audio sub-frames and of the transition probabilities (TP) associated thereto according to different rules of selection as a function of at least one from among a state (EST) of an engine of said land vehicle, a state of said land vehicle, conditions external to said land vehicle, a location of said land vehicle, and a schedule of use of said land vehicle.

11. A method for active control of sound of an engine of a land vehicle comprising using the audio signal (SG) representing the dynamic state of a land vehicle generated by applying the method as set forth in claim 1 as a driving audio signal (SG) for driving one or more acoustic actuators provided in said land vehicle so as to reproduce sound (OS) of the engine represented by said audio signal (SG).

12. An apparatus for active control of sound of a land vehicle, comprising at least one processor that generates an audio signal (SG) representing the dynamic state of said land vehicle by applying the method as set forth in claim 1 and for using said audio signal (SG) representing the dynamic state of said land vehicle as a driving audio signal (SG) for driving one or more acoustic actuators of said land vehicle so as to reproduce sound (OS) of the engine represented by said audio signal (SG).

13. The apparatus as set forth in claim 12, wherein said land vehicle comprises a plurality of acoustic actuators, and said at least one processor modulates differently said driving audio signal (SG) according to the acoustic actuator to which it is sent.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Other advantages of the invention will be readily appreciated as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:

(2) FIG. 1 is a block diagram exemplifying embodiments;

(3) FIG. 2 is a block diagram exemplifying possible details of implementation of embodiments; and

(4) FIG. 3 exemplifies a possible context of use of embodiments.

DETAILED DESCRIPTION OF THE INVENTION

(5) In the ensuing description, one or more specific details are illustrated, aimed at providing an in-depth understanding of examples of embodiments of this description. The embodiments may be provided without one or more of the specific details, or with other methods, components, materials, etc. In other cases, known structures, materials, or operations are not illustrated or described in detail so that certain aspects of the embodiments will not be obscured.

(6) Reference to “an embodiment” or “one embodiment” in the framework of the present description is intended to indicate that a particular configuration, structure, or characteristic described in relation to the embodiment is comprised in at least one embodiment. Hence, phrases such as “in an embodiment”, “in one embodiment”, or the like that may be present in one or more points of the present description do not necessarily refer to one and the same embodiment.

(7) Moreover, particular conformations, structures, or characteristics may be combined in any appropriate way in one or more embodiments.

(8) The references used herein are provided merely for convenience and consequently do not define the sphere of protection or the scope of the embodiments.

(9) In summary, the present invention is directed toward a method and an apparatus for active control of the sound of an (electric) land vehicle that can receive as input an audio signal, which may be arbitrary, and reproduce an output signal in a way consistent with the state of the vehicle and/or the driving conditions.

(10) The solution described herein envisages operating on the aforesaid input signal using an audio-texture procedure, by synthesizing an audio sequence, or stream, on the basis of a given standard audio signal.

(11) In the aforesaid context, it is envisaged that an input audio signal, i.e., a standard signal, of possibly short duration, will be supplied at input to generate at output a smooth sound of arbitrary length, the aforesaid output sound being correlated to the dynamic behaviour of the vehicle in real time.

(12) According to the solution described herein, it is hence envisaged to apply the aforesaid audio-texture procedure, in particular, as explained in greater detail below, via the operations of receiving the aforesaid input audio signal and dividing it into a set of audio frames of given duration; selecting, in the aforesaid set of audio frames, an audio frame representing the input audio signal; subdividing said selected audio frame into a set of audio sub-frames of given duration; determining a probability of transition within each orderly pair of said audio sub-frames; and synthesizing the audio signal representing the dynamic state of a land vehicle by generating a sequence of the aforesaid audio sub-frames, each subsequent audio sub-frame in said sequence being selected as a function of the current audio sub-frame and of the transition probabilities associated thereto.

(13) In this context, FIG. 1 illustrates a principle diagram of an apparatus for active control of the sound of a land vehicle, for example an electric vehicle, designated as a whole by the reference number 1. The components and modules comprised in the aforesaid apparatus 1 are electronic modules that exchange signals, in particular electrical signals, and can be implemented via one or more processor modules, which may even be comprised in one or more vehicle electronic control units, such as the ECU (Engine Control Unit), the Head Unit (Infotainment Module), and/or electronic modules of a DSP (Digital Signal Processing) type, or microcontrollers.

(14) The aforesaid apparatus 1 is divided into three main processing modules, designated by the reference numbers 11, 12, 13.

(15) Each of the aforesaid modules 11, 12, 13 is configured for carrying out a step of a corresponding method for generating a synthetic sound or driving audio signal.

(16) The module 11 for selection of an audio frame is configured for selecting, starting from an arbitrary audio track GSS at input coming from a generic source, an audio frame SF that will be processed by the apparatus for active control of the sound 1. This arbitrary input audio track GSS in general corresponds to an audio signal, in particular in the time domain, whereas the audio frame SF corresponds to a segment or time interval of the arbitrary input audio track GSS.

(17) The duration of the above selected audio frame SF may mainly depend upon the storage and/or processing capacity of the electronic devices that implement the processing modules 11, 12, 13.

(18) The module 12 for generation of sub-frames is configured for analysing the selected audio frame SF, supplied at output by the module 11, and dividing the aforesaid selected audio frame SF into a number N of sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N of given duration.

(19) The above number N of sub-frames may be chosen, for example, as a function of the total duration of the selected audio frame SF and/or of the storage and/or calculation capacity of the electronic devices that implement the processing modules 11, 12, 13.

(20) The module 13 for generation of the output audio signal is configured for synthesizing the driving audio signal SG to be produced at output by reproducing in a certain order the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N supplied by the module 12 and processing these sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N in order to render the spectral and/or amplitude characteristics thereof consistent with the state of the vehicle, the state of the engine, and/or the driving conditions.

(21) Of the three modules 11, 12, and 13, only the module 13 for generation of the output audio signal presents stringent constraints of real-time operation, whereas the modules 11 and 12 (included in the rectangle drawn with a dashed line in FIG. 1) can carry out the corresponding processing operations in off-line mode or on-line mode, but without any constraints of real-time operation.

(22) In one or more embodiments, via the operations of processing of audio signals implemented in the processing modules 11 and 12, characteristic quantities are extracted, which represent the audio data at input to the respective processing modules.

(23) The aforesaid input audio data correspond to the audio signal GSS supplied at input to the module 11 in a first step of the method, and to the selected audio frame SF supplied at output from the module 11 to the module 12 in a second step of the method.

(24) In addition, in one or more embodiments, characteristic quantities are also extracted, which represent audio data generated by the respective processing modules.

(25) Both for the processing operations implemented in the module 11 and for those implemented in the module 12, the characteristic quantities extracted from the audio signals GSS, SF, and SBF may comprise mel-frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), correlograms, or other characteristics.

(26) In a first step of the method for generating a synthetic sound, implemented in the module 11 of FIG. 1, an input audio signal GSS coming from an arbitrary source is analysed in order to extract one or more of the characteristic quantities thereof listed previously, for example MFCCs.

(27) In addition, the input audio signal GSS is divided into a number M of successive audio frames F.sub.1, F.sub.2, . . . , F.sub.M of given duration. As anticipated, the aforesaid duration may be chosen as a function of the storage and/or processing capacity of the electronic devices that implement the modules 11, 12.

(28) As for the input audio signal GSS, also for each of the audio frames F.sub.1, F.sub.2, . . . , F.sub.M, one or more of the characteristic quantities listed previously, for example MFCCs, are computed via processing in the module 11.

(29) Once again via processing in the module 11, a measurement of similarity is computed, for example a similarity index, between each of the M audio frames F.sub.1, F.sub.2, . . . , F.sub.M of given duration and the input audio signal GSS, as a function of the characteristic quantity or quantities extracted for the input audio signal GSS and for each of the audio frames F.sub.1, F.sub.2, . . . , F.sub.M.

(30) The aforesaid measurement of similarity may be expressed, for example, by a real number, such as a correlation coefficient.

(31) The similarity measurements thus calculated make it possible to determine which of the aforesaid frames F.sub.1, F.sub.2, . . . , F.sub.M is the most representative of the audio signal GSS and can hence be selected for being supplied at output from the module 11.

(32) Consequently, the frame SF with the highest computed measurement of similarity to the input audio signal GSS from among the frames F.sub.1, F.sub.2, . . . , F.sub.M is selected and supplied at output from the module 11 to the module 12.
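The frame-selection step of paragraphs (27)-(32) can be sketched as follows. As an assumption for self-containment, the characteristic quantity is the magnitude spectrum rather than MFCCs, the input signal GSS is represented by its mean frame spectrum, and the similarity index is a correlation coefficient, as suggested in paragraph (30); the function name is illustrative only.

```python
import numpy as np

def select_frame(gss, frame_len):
    """Divide the input signal into M frames and select the frame
    most similar to the whole signal (paragraphs 27-32)."""
    m = len(gss) // frame_len                      # number of frames M
    frames = gss[:m * frame_len].reshape(m, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # per-frame features
    ref = spectra.mean(axis=0)                     # stand-in for GSS features
    # similarity index: correlation coefficient with the reference
    sims = np.array([np.corrcoef(ref, s)[0, 1] for s in spectra])
    return frames[int(np.argmax(sims))], sims
```

In a real system the features would be MFCCs, LPC coefficients, or correlograms, as listed in paragraph (25); the structure of the selection step is the same.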

(33) Moreover, the processing module 11 is also configured for computing a frame centre frequency FCF of the acoustic spectrum of the aforesaid selected frame SF, for example for computing it as a spectral centroid of the aforesaid selected frame SF.
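The spectral centroid mentioned in paragraph (33) is the magnitude-weighted mean of the frequency bins of the frame spectrum; a minimal sketch:

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Frame centre frequency FCF computed as the spectral centroid
    of the frame's magnitude spectrum (paragraph 33)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)  # bin frequencies in Hz
    return float(np.sum(freqs * mag) / np.sum(mag))
```

For a frame dominated by a single tone, the centroid falls close to that tone's frequency.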

(34) The selected frame SF and the corresponding frame centre frequency FCF represent the outputs of the module for selection of an audio frame 11, which implements a first step of the method for generating a driving audio signal SG according to one or more embodiments.

(35) In a second step of the aforesaid method, implemented in the module 12 for generation of sub-frames, the selected audio frame SF supplied at output from the module 11 is subdivided into a number N of audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N of a duration shorter than or equal to the duration of the selected audio frame SF. If the number N is greater than 1, the module 12 for generation of sub-frames is configured to divide the selected audio frame SF into a set of audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N of given duration, comprising a plurality of sub-frames.

(36) Of course, in the limit case where the number N of audio sub-frames is equal to 1, the duration of the audio sub-frames is equal to the duration of the selected audio frame SF. In this case, no subdivision of the selected audio frame SF is made, and the audio signal representing the dynamic state of a land vehicle is synthesized by generating a repeated sequence of the selected audio frame SF.

(37) In a way similar to what has been described for the first step of the method, also for each of the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N at least one of the characteristic quantities described previously is computed, for example MFCCs.

(38) It will be noted that in general the aforesaid characteristic quantity computed for the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N may be different from the characteristic quantity computed for the audio frames F.sub.1, F.sub.2, . . . , F.sub.M. On the other hand, in a preferred embodiment, the characteristic quantity computed for the audio sub-frames is the same as the one computed for the audio frames, thereby simplifying the overall processing of the audio signals.

(39) Moreover, measurements of similarity between the sub-frames are calculated, for example one measurement of similarity for each pair of audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N, once again as a function of the characteristic quantity or quantities computed for each of the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N.

(40) Unlike the first step of the method, regarding processing of the frame, the second step of the method envisages an additional step of processing of the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N.

(41) In this additional step, probabilities of transition TP between sub-frames are computed, for example one transition probability for each orderly pair of audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N.

(42) The aforesaid transition probabilities TP may be calculated, for example, as a function of the measurements of similarity computed previously between pairs of audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N.
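The derivation of transition probabilities TP from sub-frame similarities, described in paragraphs (39)-(42), can be sketched as follows. As above, magnitude spectra and correlation coefficients stand in for the characteristic quantities; the normalisation scheme (clipping negative similarities, excluding self-transitions, rows summing to one) is an assumption, since the source does not fix one.

```python
import numpy as np

def transition_probabilities(subframes):
    """One transition probability per orderly pair of sub-frames,
    derived from pairwise similarity of their magnitude spectra."""
    spec = np.abs(np.fft.rfft(subframes, axis=1))
    sim = np.corrcoef(spec)               # N x N pairwise similarities
    tp = np.clip(sim, 0.0, None)          # keep non-negative similarities
    np.fill_diagonal(tp, 0.0)             # exclude self-transitions
    return tp / tp.sum(axis=1, keepdims=True)  # each row sums to 1
```

Row i of the resulting matrix is then the probability distribution over the next sub-frame when SBF.sub.i is the current one.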

(43) The transition probabilities TP are useful for defining a possible order, or sequence, of reproduction of the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N in a third step of the method, implemented in the module 13, for generating a certain driving audio signal SG.

(44) In fact, in a third step of the method, the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N are selected, modified in real time as a function of some signals that indicate the dynamic state of the vehicle, and reproduced in a certain sequence according to a certain rule of ordering.

(45) If no rule for determining the order of reproduction of the aforesaid sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N is defined in the third step of the method, the sub-frames are reproduced in the same order with which they are extracted from the original frame SF.

(46) It will on the other hand be noted that the same result would be obtained if the rule for determining the order of reproduction of the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N were a simple most-probable-sub-frame rule, i.e., a rule whereby, at the end of each sub-frame, the sub-frame with the highest associated transition probability is reproduced. This is because the sub-frame having the highest probability of transition with respect to the current sub-frame is (reasonably) always the immediately following one, by virtue of its greatest similarity to the current sub-frame. In practice, given any current sub-frame SBF.sub.i, the transition with the highest probability is usually the transition towards the immediately following sub-frame SBF.sub.i+1.

(47) In the case described above, i.e., in the case where a rule of ordering for the reproduction of the audio sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N is not defined or the rule, if defined, coincides with the choice of the most probable sub-frame, the driving audio signal SG generated at output from the apparatus 1 is highly likely to be a complete repetition of the selected frame SF at input to the module 12 (albeit with characteristics of frequency and/or amplitude modified as a function of some signals that indicate the dynamic state of the vehicle).

(48) Even though the spectral characteristics of the driving audio signal SG generated at output from the apparatus 1 are modified, instant by instant (or in real time), as a function of the instantaneous dynamic behaviour of the vehicle, the periodic reproduction of one and the same frame SF may result in generation of sound artefacts that can adversely affect the quality of the generated sound perceived by users.

(49) Consequently, the transition probabilities TP computed in the module 12 in a second step of the method are used in the subsequent step following upon definition of a rule of ordering of the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N that is based upon the values of the aforesaid transition probabilities TP.

(50) The set of sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N and the corresponding transition probabilities TP represent the outputs of the second step of the method, implemented in the module 12 for generation of sub-frames.

(51) A third step of the method is implemented in the module 13 for generation of the output audio signal.

(52) FIG. 2 illustrates a principle diagram of a possible implementation of the above processing module 13, which comprises some sub-modules 131, 132, 133, 134.

(53) The processing module 13 receives at input the set of sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N and the transition probabilities TP associated thereto supplied by the module 12, the frame centre frequency FCF of the acoustic spectrum of the selected frame SF supplied by the module 11, and signals indicating the values of some parameters of the vehicle, such as the engine speed or r.p.m. ESP, the vehicle speed VSP, the position of the accelerator pedal GPP, the engine torque ET, and the engine state EST.

(54) Via processing of audio signals in the module 13, a driving audio signal SG is synthesized starting from a concatenation of sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N, the order of reproduction of which is determined by a certain ordering rule based upon the transition probabilities TP associated thereto.

(55) Moreover, the frequency spectrum of the aforesaid driving audio signal SG is varied as a function of the engine speed ESP and/or of the vehicle speed VSP.

(56) In a preferred embodiment, the amplitude of the driving audio signal SG at output can be varied as a function of a gain that depends upon at least one of the following parameters: engine speed ESP, vehicle speed VSP, position of the accelerator pedal GPP, and engine torque ET.

(57) Optionally, different sound profiles can be applied to the driving audio signal SG as a function of the engine state EST, and/or of the state of the vehicle, and/or of the external conditions.

(58) The above different sound profiles can be identified as pre-defined sound configurations, or tuning, that have an effect on the driving audio signal SG and/or on other parameters, such as the input audio signal GSS.

(59) For instance, different sound profiles may be such as to create a different volume of the driving audio signal SG, or else a different sound print of the driving audio signal SG, for example obtained by selecting a different input audio track GSS, or else by imposing a different rule of ordering of the sub-frames.

(60) Consider by way of example the possibility of applying different sound profiles according to the vehicle speed (whether moderate as in the case of urban driving, or sustained as in the case of driving on a motorway), or else according to the geographical location of the vehicle (for example, limiting the intensity of the sound emissions in the proximity of protected areas, such as schools or hospitals), or else again according to the schedule, envisaging a different processing of the audio signals according to whether the vehicle is used during daytime or night-time.

(61) Processing of audio signals implemented in the processing module 13 hence comprises a plurality of steps, amongst which selection of a sub-frame in block 131, calculation of a frequency scale factor in block 132, a frequency (or pitch) shifting in block 133, and composition of sub-frames in block 134.

(62) Block 131, illustrated in FIG. 2, receives at input the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N and the transition probabilities TP computed in the module 12 and determines the next sub-frame SSBF to be reproduced as a function of the sub-frame currently being reproduced and of the transition probability associated thereto.

(63) In order to prevent, as anticipated, sound loops and/or the periodic repetition of a restricted subset of sub-frames from among the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N (given that the most probable transitions are usually the ones between consecutive sub-frames), a rule of selection of the next sub-frame SSBF is defined.

(64) Given a certain current sub-frame, the aforesaid selection rule imposes that the next sub-frame in the reproduction sequence be chosen randomly from among the sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N, with the exclusion of the i sub-frames “closest” to the current one (to the left and to the right), where i is an integer that can vary in a given range. The range of variability of the integer i may be a function of the ratio between the duration of the frame SF and the duration of the sub-frames SBF.

(65) In a similar way, the aforesaid range of variability may be a function of the number N of sub-frames SBF.sub.1, SBF.sub.2, . . . , SBF.sub.N.

(66) It will be noted that the expression “closest sub-frames”, used previously, is meant to indicate the sub-frames that are closest to the current one in order of time, both the previous ones and the subsequent ones.
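The selection rule of paragraphs (64)-(66) can be sketched as follows; the function name and the fixed exclusion width i_excl are illustrative (in the method, i varies within a range tied to the frame/sub-frame duration ratio).

```python
import numpy as np

def next_subframe(current, n, i_excl, rng):
    """Pick the next sub-frame uniformly at random among the N
    sub-frames, excluding the current one and the i_excl sub-frames
    closest to it in time on either side (paragraph 64)."""
    candidates = [k for k in range(n) if abs(k - current) > i_excl]
    return int(rng.choice(candidates))
```

Excluding the temporal neighbours of the current sub-frame is what prevents the sound loops and periodic repetitions discussed in paragraph (63).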

(67) Block 132, illustrated in FIG. 2, receives at input the frame centre frequency FCF of the acoustic spectrum of the selected frame SF supplied by the module 11, and one between the value of engine speed ESP and the value of vehicle speed VSP, in order to determine a frequency scale factor FSF.

(68) It will be noted that the parameter to be used to determine a frequency scale factor FSF, namely the engine speed ESP or the vehicle speed VSP, may be fixed, for example determined on the basis of the type of vehicle on which the system is installed, or else configurable, for example selectable by the user.

(69) In one or more embodiments, by dividing the value of the engine speed ESP expressed in r.p.m. (revolutions per minute) by the factor 60 a characteristic frequency of the engine can be obtained, and the frequency scale factor FSF can be determined as the ratio between the aforesaid characteristic frequency of the engine and the frame centre frequency FCF of the acoustic spectrum of the selected frame SF.
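The computation of paragraph (69) reduces to two divisions:

```python
def frequency_scale_factor(esp_rpm, fcf_hz):
    """FSF per paragraph (69): the engine characteristic frequency is
    the engine speed in r.p.m. divided by 60; FSF is the ratio of that
    frequency to the frame centre frequency FCF."""
    characteristic_hz = esp_rpm / 60.0
    return characteristic_hz / fcf_hz
```

For example, at 3000 r.p.m. the characteristic frequency is 50 Hz, so a frame whose centre frequency is 50 Hz yields FSF = 1 (no shift).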

(70) In alternative embodiments, the values of the frequency scale factor FSF can be mapped as a function of a generic input quantity, such as the vehicle speed VSP. This mapping enables association of an output value of the frequency scale factor FSF to each possible value of an input quantity via a correspondence map (for example, a look-up table), this map being possibly configurable, for example via in-field calibration.
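The mapping of paragraph (70) can be sketched as a look-up table with linear interpolation between calibration breakpoints; the breakpoints and FSF values below are purely illustrative, not taken from the source.

```python
import numpy as np

# Illustrative correspondence map from vehicle speed VSP (km/h) to FSF;
# in practice these breakpoints would come from in-field calibration.
VSP_BREAKPOINTS = np.array([0.0, 30.0, 60.0, 90.0, 130.0])
FSF_VALUES = np.array([0.5, 0.8, 1.0, 1.4, 2.0])

def map_fsf(vsp):
    """Look up the frequency scale factor for a vehicle speed,
    interpolating linearly and clamping outside the table."""
    return float(np.interp(vsp, VSP_BREAKPOINTS, FSF_VALUES))
```

`np.interp` clamps to the first and last table values outside the breakpoint range, which is a reasonable behaviour for a calibration map.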

(71) Block 133, illustrated in FIG. 2, receives at input the sub-frame SSBF selected in block 131 and the frequency scale factor FSF computed in block 132, to carry out a frequency, or pitch, shift of the aforesaid sub-frame SSBF and generate a modified, i.e., pitch-shifted, sub-frame PSBF.

(72) Consequently, the frequency spectrum of the sub-frame PSBF at output from block 133 is obtained via pitch-shifting. For instance, pitch-shifting can be carried out using re-sampling techniques, where the frequency scale factor FSF is used as re-sampling factor.
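A minimal sketch of the re-sampling approach mentioned above, assuming linear interpolation and treating FSF as the read-rate factor (reading the sub-frame FSF times faster scales its spectrum by FSF and its duration by 1/FSF):

```python
def pitch_shift_resample(subframe, fsf):
    """Re-sample a sub-frame by reading it at `fsf` times the original
    rate with linear interpolation: pitch is scaled by FSF, duration
    by 1/FSF."""
    n = len(subframe)
    m = max(1, int(n / fsf))  # length of the re-sampled sub-frame
    out = []
    for k in range(m):
        pos = k * fsf          # fractional read position
        i0 = int(pos)
        i1 = min(i0 + 1, n - 1)
        frac = pos - i0
        out.append((1.0 - frac) * subframe[i0] + frac * subframe[i1])
    return out
```

Production implementations would typically combine this with time-stretching so that duration is preserved, but that refinement is beyond this sketch.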

(73) Block 134, illustrated in FIG. 2, receives at input the modified sub-frame PSBF generated in block 133 and at least one parameter that indicates the operating conditions of the vehicle selected from among engine speed ESP or vehicle speed VSP, position of the accelerator pedal GPP, engine torque ET, and a signal indicating the engine state EST, and generates the driving audio signal SG via concatenation of the sub-frames received from block 133, and appropriately processed.

(74) It is in fact possible to apply to the modified sub-frame PSBF (block 134) amplitude gains as a function of at least one of the following: engine speed ESP, vehicle speed VSP, position of the accelerator pedal GPP, engine torque ET, and engine state EST.

(75) It will be noted that different configurable maps can be defined to establish a correspondence between the possible values of different input quantities, such as engine speed ESP, vehicle speed VSP, position of the accelerator pedal GPP, and engine torque ET, and the corresponding values of the gains applied to the modified sub-frame PSBF. These maps, which associate output values with possible input values, can be configured, for example, via calibration during testing on the actual product.
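Combining per-parameter gain maps could look as follows; the map contents and the choice to multiply the per-parameter gains together are assumptions made for this example, as the patent specifies only that configurable maps associate gains with input values.

```python
def map_gain(x, xs, ys):
    """Clamped piecewise-linear gain map (a configurable look-up table)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for j in range(1, len(xs)):
        if x <= xs[j]:
            t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
            return ys[j - 1] + t * (ys[j] - ys[j - 1])

# hypothetical calibration maps: gain vs engine speed and pedal position
ESP_MAP = ([0.0, 2000.0, 6000.0], [0.2, 0.6, 1.0])
GPP_MAP = ([0.0, 0.5, 1.0], [0.5, 0.8, 1.0])

def apply_gains(psbf, esp, gpp):
    """Scale the modified sub-frame PSBF by the product of the gains
    read from the per-parameter maps."""
    g = map_gain(esp, *ESP_MAP) * map_gain(gpp, *GPP_MAP)
    return [g * s for s in psbf]
```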

(76) Moreover, a smoothing filter is applied to the sequence of processed sub-frames, which constitutes the driving audio signal SG, in order to eliminate any sharp variations of the sound signal between the end of one sub-frame and the start of the next.
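The patent does not specify the smoothing filter; one simple way to suppress discontinuities at the junctions is a linear cross-fade over a short overlap, sketched below under that assumption.

```python
def smooth_concat(frames, overlap):
    """Concatenate processed sub-frames, linearly cross-fading `overlap`
    samples at each junction to avoid sharp variations between the end
    of one sub-frame and the start of the next."""
    out = list(frames[0])
    for f in frames[1:]:
        for k in range(overlap):
            w = (k + 1.0) / (overlap + 1.0)  # fade-in weight of new frame
            idx = len(out) - overlap + k
            out[idx] = (1.0 - w) * out[idx] + w * f[k]
        out.extend(f[overlap:])
    return out
```

An alternative, equally consistent with the text, would be a low-pass (e.g., one-pole) filter run across the concatenated signal.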

(77) Via the procedure described herein, the steps of which are implemented by the modules 11, 12, 13 (in a concentrated or distributed form), an apparatus for active control of the sound of a land vehicle enables complete design of the sound OS produced by an electric vehicle, generating a “captivating” output sound both inside and outside of the passenger compartment.

(78) The aforesaid sound can be produced using an electro-acoustic device that generates a synthetic sound OS, inside and/or outside a vehicle, the electro-acoustic device being driven by a driving signal SG generated according to the method implemented by an apparatus for active control of the sound of a land vehicle, as described herein.

(79) The aforesaid output sound OS can be defined freely since there are no constraints on the characteristics of the sound source that can be used for obtaining the audio signal GSS at input to the apparatus 1.

(80) The above audio signal GSS may represent an ambient sound, such as the sound of rain, of a hurricane, of the wind, or a cinematographic sound effect, or any other sound.

(81) Of course, the audio signal GSS may be a signal representing the sound of an engine, for example an internal-combustion engine, in the case where the aim is to reproduce the sound of an internal-combustion engine on an electric vehicle.

(82) Moreover, the driving audio signal SG at output from the apparatus 1 is also such as to be correlated to the engine state, the engine speed, the vehicle speed, the position of the accelerator pedal, and/or engine torque.

(83) It will moreover be noted that such an apparatus 1 can be implemented as software code in a dedicated electronic control unit (ECU) inside a vehicle, or else in the infotainment system already present in the vehicle, or else again in any other electronic control unit present in the vehicle that is provided with a power audio output or in any case with an audio output line, it thus being possible to implement a solution completely integrated in a vehicle.

(84) FIG. 3 illustrates an example of installation of the apparatus 1 that is implemented via software in an ECU 33 of a motor vehicle 30 and drives via a driving signal SG both an acoustic actuator 31 in the passenger compartment of the motor vehicle 30 and an acoustic actuator 32 in the outer rear part of the motor vehicle, as well as an acoustic actuator 34 in the outer front part of the motor vehicle. These acoustic actuators 31, 32, 34 may, for example, be speakers or shakers for emitting the sound OS of the vehicle.

(85) Of course, the number and arrangement of the acoustic actuators provided in the vehicle 30 and driven by a driving audio signal SG may vary with respect to what has been illustrated herein purely by way of example.

(86) For instance, in the case of an electric vehicle without gas-exhaust terminals, an acoustic actuator 32 in the outer rear part of the motor vehicle can be installed in an area different from the one illustrated by way of example in FIG. 3. For instance, an acoustic actuator 32 may be installed in the area of the back lights of the motor vehicle 30.

(87) In variant embodiments, the driving audio signal SG can be modulated differently according to the acoustic actuator to which it is sent, and also the actuators in one and the same context, for example, in the passenger compartment, can receive different driving audio signals SG; for example, the right-hand speaker in the passenger compartment may receive a driving audio signal different from the one received by the left-hand speaker in the passenger compartment.

(88) It will be noted that a pure tone (for example, a sinusoidal audio signal), as likewise a composition of pure tones, can be considered as a simple audio texture, and consequently is suited to being processed according to the method as described herein for supplying a certain driving audio signal SG.

(89) Consequently, one or more embodiments may be used not only in the case of electric vehicles but also in the case of hybrid vehicles or vehicles provided with an internal-combustion engine, for example by using an audio texture made up of pure tones corresponding to the engine orders.

(90) Without prejudice to the underlying principles, the details and embodiments may vary even considerably with respect to what has been described herein exclusively by way of example, without departing from the sphere of protection, which is defined in the annexed claims. Thus, the invention has been described in an illustrative manner. It is to be understood that the terminology which has been used is intended to be in the nature of words of description rather than of limitation. Many modifications and variations of the invention are possible in light of the above teachings. Therefore, within the scope of the appended claims, the invention may be practiced other than as specifically described.