Evaluation of beats, chords and downbeats from a musical audio signal
09653056 · 2017-05-16
CPC classification: G10H1/383 · G10H2210/066 · G10H2210/051 · G10H2230/015
Abstract
A server system 500 is provided for receiving video clips having an associated audio/musical track for processing at the server system. The system comprises a beat tracking module for identifying beat time instants (t.sub.i) in the audio signal and a chord change estimation module for determining a chord change likelihood from chroma accent information in the audio signal at the beat time instants (t.sub.i). Further, first and second accent-based estimation modules are provided for determining respective first and second accent-based downbeat likelihood values from the audio signal at the beat time instants (t.sub.i) using respective different algorithms. A final stage of processing identifies downbeats occurring at beat time instants (t.sub.i) using a predefined score-based algorithm that takes as input numerical representations of chord change likelihood and the first and second accent-based downbeat likelihood values at the beat time instants (t.sub.i).
Claims
1. An apparatus, the apparatus having at least one processor and at least one memory having computer-readable code stored thereon which when executed causes the at least one processor to: identify beat time instants (t.sub.i) in an audio signal; determine a chord change likelihood from the audio signal at or between the beat time instants by using a predefined algorithm that takes as input a value of pitch chroma at or between the current beat time instant (t.sub.i) and one or more values of pitch chroma at or between preceding and/or succeeding beat time instants, wherein the predefined algorithm is defined as:
2. The apparatus according to claim 1, wherein the apparatus caused to identify downbeats is further caused to use a predefined score-based algorithm that takes as input numerical representations of the determined chord change likelihood and the first accent-based downbeat likelihood at or between the beat time instants (t.sub.i).
3. The apparatus according to claim 1, wherein the apparatus caused to identify downbeats is further caused to use a decision-based logic circuit that takes as input numerical representations of the determined chord change likelihood and the first accent-based downbeat likelihood at or between the beat time instants (t.sub.i).
4. The apparatus according to claim 1, wherein the apparatus caused to identify beat time instants (t.sub.i) is further caused to extract accent features from the audio signal to generate an accent signal, to estimate from the accent signal the tempo of the audio signal and to estimate from the tempo and the accent signal the beat time instants (t.sub.i).
5. The apparatus according to claim 4, wherein the apparatus is caused to generate the accent signal by being further caused to extract chroma accent features based on fundamental frequency (f.sub.0) salience analysis.
6. The apparatus according to claim 4, wherein the apparatus is caused to generate the accent signal by being further caused to use a multi-rate filter bank-type decomposition of the audio signal.
7. The apparatus according to claim 5, wherein the apparatus caused to generate the accent signal is further caused to extract chroma accent features based on fundamental frequency salience analysis in combination with a multi-rate filter bank-type decomposition of the audio signal.
8. The apparatus according to claim 1, wherein the predefined algorithm takes as input values of pitch chroma at or between the current beat time instant (t.sub.i) and at or between a predefined number of preceding and succeeding beat time instants to generate a chord change likelihood using a sum of differences or similarities calculation.
9. The apparatus according to claim 1, wherein the predefined algorithm takes as input values of average pitch chroma at or between the current and preceding and/or succeeding beat time instants.
10. The apparatus according to claim 1, wherein the apparatus caused to determine the change likelihood is further caused to calculate the pitch chroma or average pitch chroma by means of extracting chroma features based on fundamental frequency (f.sub.0) salience analysis.
11. The apparatus according to claim 1, wherein the apparatus caused to determine one of the accent-based downbeat likelihoods is further caused to apply, to a predetermined likelihood algorithm or transform, chroma accent features extracted from the audio signal for or between the beat time instants (t.sub.i), the chroma accent features being extracted using fundamental frequency (f.sub.0) salience analysis.
12. The apparatus according to claim 11, wherein the apparatus caused to determine one of the accent-based downbeat likelihoods is further caused to apply, to a predetermined likelihood algorithm or transform, accent features extracted from each of a plurality of sub-bands of the audio signal.
13. The apparatus according to claim 11, wherein the apparatus caused to determine the accent-based downbeat likelihoods is further caused to apply the accent features to a linear discriminant analysis (LDA) transform at or between the beat time instants (t.sub.i) to obtain a respective accent-based numerical likelihood.
14. The apparatus according to claim 1, wherein the apparatus caused to normalise is further caused to divide each of the values by its maximum absolute value.
15. The apparatus according to claim 1, wherein the apparatus caused to identify downbeats is further caused to apply an algorithm:
16. A method comprising: identifying beat time instants (t.sub.i) in an audio signal; determining a chord change likelihood from the audio signal at or between the beat time instants by using a predefined algorithm that takes as input a value of pitch chroma at or between the current beat time instant (t.sub.i) and one or more values of pitch chroma at or between preceding and/or succeeding beat time instants, wherein the predefined algorithm is defined as:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the invention will now be described by way of non-limiting example with reference to the accompanying drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
(8) Embodiments described below relate to systems and methods for audio analysis, primarily the analysis of music and its musical meter in order to identify downbeats. As noted above, downbeats are defined as the first beat in a bar or measure of music; they are considered to represent musically meaningful points that can be used for various practical applications, including music recommendation algorithms, DJ applications and automatic looping. The specific embodiments described below relate to a video editing system which automatically cuts video clips using downbeats identified in their associated audio track as video angle switching points.
(9) Referring to
(10) External terminals 100, 102, 104 in use communicate with the analysis server 500 via the network 300, in order to upload video clips having an associated audio track. In the present case, the terminals 100, 102, 104 incorporate video camera and audio capture (i.e. microphone) hardware and software for the capturing, storing and uploading and downloading of video data over the network 300.
(11) Referring to
(13) The memory 112 may be a non-volatile memory such as read only memory (ROM), a hard disk drive (HDD) or a solid state drive (SSD). The memory 112 stores, amongst other things, an operating system 126 and may store software applications 128. The RAM 114 is used by the controller 106 for the temporary storage of data. The operating system 126 may contain code which, when executed by the controller 106 in conjunction with RAM 114, controls operation of each of the hardware components of the terminal.
(14) The controller 106 may take any suitable form. For instance, it may be a microcontroller, plural microcontrollers, a processor, or plural processors.
(15) The terminal 100 may be a mobile telephone or smartphone, a personal digital assistant (PDA), a portable media player (PMP), a portable computer or any other device capable of running software applications and providing audio outputs. In some embodiments, the terminal 100 may engage in cellular communications using the wireless communications module 122 and the antenna 124. The wireless communications module 122 may be configured to communicate via several protocols such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS), Bluetooth and IEEE 802.11 (Wi-Fi).
(16) The display part 108 of the touch sensitive display 102 is for displaying images and text to users of the terminal and the tactile interface part 110 is for receiving touch inputs from users.
(17) As well as storing the operating system 126 and software applications 128, the memory 112 may also store multimedia files such as music and video files. A wide variety of software applications 128 may be installed on the terminal including Web browsers, radio and music players, games and utility applications. Some or all of the software applications stored on the terminal may provide audio outputs. The audio provided by the applications may be converted into sound by the speaker(s) 118 of the terminal or, if headphones or speakers have been connected to the headphone port 120, by the headphones or speakers connected to the headphone port 120.
(18) In some embodiments the terminal 100 may also be associated with an external software application not stored on the terminal. This may be an application stored on a remote server device, which may run partly or exclusively on the remote server device. Such applications can be termed cloud-hosted applications. The terminal 100 may be in communication with the remote server device in order to utilise the software application stored there. This may include receiving audio outputs provided by the external software application.
(19) In some embodiments, the hardware keys 104 are dedicated volume control keys or switches. The hardware keys may for example comprise two adjacent keys, a single rocker switch or a rotary dial. In some embodiments, the hardware keys 104 are located on the side of the terminal 100.
(20) One of said software applications 128 stored on memory 112 is a dedicated application (or App) configured to upload captured video clips, including their associated audio track, to the analysis server 500.
(21) The analysis server 500 is configured to receive video clips from the terminals 100, 102, 104 and to identify downbeats in each associated audio track for the purposes of automatic video processing and editing, for example to join clips together at musically meaningful points. Instead of identifying downbeats in each associated audio track, the analysis server 500 may be configured to analyse the downbeats in a common audio track which has been obtained by combining parts from the audio track of one or more video clips.
(22) Referring to
(23) Users of the terminals 100, 102, 104 subsequently upload their video clips to the analysis server 500, either using their above-mentioned App or from a computer with which the terminal synchronises. At the same time, users are prompted to identify the event, either by entering a description of the event, or by selecting an already-registered event from a pull-down menu. Alternative identification methods may be envisaged, for example by using associated GPS data from the terminals 100, 102, 104 to identify the capture location.
(24) At the analysis server 500, received video clips from the terminals 100, 102, 104 are identified as being associated with a common event. Subsequent analysis of each video clip can then be performed to identify downbeats which are used as useful video angle switching points for automated video editing.
(25) Referring to
(26) The memory 206 (and mass storage device 208) may be a non-volatile memory such as read only memory (ROM), a hard disk drive (HDD) or a solid state drive (SSD). The memory 206 stores, amongst other things, an operating system 210 and may store software applications 212. RAM (not shown) is used by the controller 202 for the temporary storage of data. The operating system 210 may contain code which, when executed by the controller 202 in conjunction with RAM, controls operation of each of the hardware components.
(27) The controller 202 may take any suitable form. For instance, it may be a microcontroller, plural microcontrollers, a processor, or plural processors.
(28) The software application 212 is configured to control and perform the video processing, including processing the associated audio signal to identify downbeats.
(29) The downbeat identification process will now be described with reference to
(30) It will be seen that three processing paths are defined (left, middle, right); the reference numerals applied to each processing stage are not indicative of order of processing. In some implementations, the three processing paths might be performed in parallel allowing fast execution. In overview, beat tracking is performed to identify or estimate beat times in the audio signal. Then, at the beat times, each processing path generates a numerical value representing a differently-derived likelihood that the current beat is a downbeat. These likelihood values are normalised and then summed in a score-based decision algorithm that identifies which beat in a window of adjacent beats is a downbeat.
(31) Fundamental Frequency-Based Chroma Feature Extraction
(32) The method starts in step 6.1 by generating two signals calculated based on fundamental frequency (f.sub.0) salience estimation.
(33) One signal represents the chroma accent signal which in step 6.2 is extracted from the salience information using the method described in [2]. The chroma accent signal is considered to represent musical change as a function of time. Since this accent signal is extracted based on the f.sub.0 information, it emphasises harmonic and pitch information in the signal.
(34) The chroma accent signal serves two purposes. Firstly, it is used for tempo estimation and beat tracking. Secondly, it is used for generating a downbeat likelihood value, to be described later on.
(35) Beat Tracking
(36) The chroma accent signal is employed to calculate an estimate of the tempo (BPM) and for beat tracking. For BPM determination, the method described in [2] is also employed. Alternatively, other methods for BPM determination can be used.
(37) To obtain the beat time instants, a dynamic programming routine as described in [7] is employed. Alternatively, the beat tracking method described in [3] can be employed. Alternatively, any suitable beat tracking routine can be utilized, which is able to find the sequence of beat times over the music signal given one or more accent signals as input and at least one estimate of the BPM of the music signal. Instead of operating on the chroma accent signal, the beat tracking might operate on the multirate accent signal or any combination of the chroma accent signal and the multirate accent signal. Alternatively, any suitable accent signal analysis method, periodicity analysis method, and a beat tracking method might be used for obtaining the beats in the music signal. In some embodiments, part of the information required by the beat tracking step might originate from outside the audio signal analysis system. An example would be a method where the BPM estimate of the signal would be provided externally.
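A dynamic-programming beat tracker of the kind cited above (in the spirit of [7]) can be sketched as follows; the function name, the frame-based period input and the penalty weight are illustrative assumptions rather than the patent's exact formulation:

```python
import numpy as np

def track_beats(accent, period, alpha=100.0):
    """Dynamic-programming beat tracking sketch.

    accent : 1-D onset/accent signal (one value per analysis frame)
    period : estimated beat period in frames (from the BPM estimate)
    alpha  : weight penalising deviations from the ideal beat spacing
    """
    n = len(accent)
    score = np.asarray(accent, dtype=float).copy()
    backlink = np.full(n, -1, dtype=int)
    for t in range(n):
        # Consider predecessors roughly half a period to two periods back.
        lo = max(0, t - int(2 * period))
        hi = t - int(round(period / 2))
        if hi <= lo:
            continue
        prev = np.arange(lo, hi)
        # Penalty grows with the log-squared deviation from the ideal period.
        penalty = -alpha * (np.log((t - prev) / period) ** 2)
        candidates = score[prev] + penalty
        best = int(np.argmax(candidates))
        score[t] += candidates[best]
        backlink[t] = prev[best]
    # Backtrace from the best-scoring frame to recover the beat sequence.
    beats = [int(np.argmax(score))]
    while backlink[beats[-1]] >= 0:
        beats.append(int(backlink[beats[-1]]))
    return beats[::-1]
```

The log-squared penalty keeps inter-beat intervals close to the externally supplied tempo estimate while letting the accent signal pull individual beats onto strong accents.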
(38) The resulting beat times t.sub.i are used as input for the downbeat determination stage to be described later on and for synchronised processing of data in all three branches of the
(39) Chroma Difference Calculation & Chord Change Possibility
(40) The left-hand path (steps 6.5 and 6.6) calculates what the average pitch chroma is at the aforementioned beat locations and infers a chord change possibility which, if high, is considered indicative of a downbeat. Each step will now be described.
(41) Beat Synchronous Chroma Calculation
(42) In step 6.5, the method described in [2] is employed to obtain the chroma vectors and the average chroma vector is calculated for each beat location. Alternatively, any suitable method for obtaining the chroma vectors might be employed. For example, a computationally simple method would use the Fast Fourier Transform (FFT) to calculate the short-time spectrum of the signal in one or more frames corresponding to the music signal between two beats. The chroma vector could then be obtained by summing the magnitude bins of the FFT belonging to the same pitch class. Such a simple method may not provide the most reliable chroma and/or chord change estimates but may be a viable solution if the computational cost of the system needs to be kept very low.
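The computationally simple FFT-based fallback described above might look as follows; the function name, frame and hop sizes, windowing and the A0 cutoff are illustrative choices, and no f.sub.0 salience analysis is involved:

```python
import numpy as np

def beat_chroma(x, sr, t0, t1, n_fft=4096):
    """Average 12-bin chroma for the audio between two beat times (seconds).

    Magnitude FFT bins are folded onto pitch classes, as in the simple
    fallback described in the text.
    """
    seg = x[int(t0 * sr):int(t1 * sr)]
    chroma = np.zeros(12)
    for start in range(0, max(1, len(seg) - n_fft + 1), n_fft // 2):
        frame = seg[start:start + n_fft]
        if len(frame) < n_fft:
            frame = np.pad(frame, (0, n_fft - len(frame)))
        mag = np.abs(np.fft.rfft(frame * np.hanning(n_fft)))
        freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
        # Map each bin above ~27.5 Hz (A0) to a pitch class 0..11.
        valid = freqs > 27.5
        midi = 69 + 12 * np.log2(freqs[valid] / 440.0)
        pc = np.mod(np.round(midi), 12).astype(int)
        np.add.at(chroma, pc, mag[valid])
    return chroma / (chroma.sum() + 1e-9)
```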
(43) Instead of calculating the chroma at each beat location, a sub-beat resolution could be used. For example, two chroma vectors per each beat could be calculated.
(44) Chroma Difference Calculation
(45) Next, in step 6.6, a chord change possibility is estimated by differentiating the previously determined average chroma vectors for each beat location.
(46) Trying to detect chord changes is motivated by the musicological knowledge that chord changes often occur at downbeats. The following function is used to estimate the chord change possibility:
(47) Chord_change(t.sub.i)=.SIGMA..sub.k=1.sup.3.SIGMA..sub.j=1.sup.12|c.sub.j(t.sub.i)−c.sub.j(t.sub.i−k)|−.SIGMA..sub.k=1.sup.3.SIGMA..sub.j=1.sup.12|c.sub.j(t.sub.i)−c.sub.j(t.sub.i+k)|, where c.sub.j(t.sub.i) is the j:th element of the average chroma vector at beat time instant t.sub.i.
(48) The first sum term in Chord_change(t.sub.i) represents the sum of absolute differences between the current beat chroma vector and the three previous chroma vectors. The second sum term represents the sum of absolute differences between the current beat chroma vector and the next three chroma vectors. When a chord change occurs at beat t.sub.i, the difference between the current beat chroma vector and the preceding chroma vectors is large while the difference to the succeeding chroma vectors remains small, so the function peaks at chord change positions.
(49) Similar principles have been used in [1] and [6], but the actual computations differ.
(50) Alternatives and variations for the Chord_change function include, for example, using more than 12 pitch classes in the summation over j. In some embodiments, the number of pitch classes might be, e.g., 36, corresponding to a 1/3 semitone resolution with 36 bins per octave. In addition, the function can be implemented for various time signatures. For example, in the case of a 3/4 time signature the values of k could range from 1 to 2. In some other embodiments, the number of preceding and following beat time instants used in the chord change possibility estimation might differ. Various other distance or distortion measures could be used, such as the Euclidean, cosine, Manhattan or Mahalanobis distance. Statistical measures could also be applied, such as divergences, including, for example, the Kullback-Leibler divergence. Alternatively, similarities could be used instead of differences. The benefit of the Chord_change function above is that it is computationally very simple.
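Since the Chord_change formula itself is not reproduced in this text, the sketch below implements one plausible reading of the description: the sum of absolute differences to the preceding beats minus the same sum over the succeeding beats, so the value peaks when the harmony changes at beat i and then remains stable.

```python
import numpy as np

def chord_change(chroma, i, k_max=3):
    """Chord-change possibility at beat i from beat-synchronous chroma.

    chroma : array of shape (num_beats, 12), one average chroma per beat
    k_max  : number of preceding/succeeding beats (3 for 4/4, 2 for 3/4)
    """
    # Differences to the preceding k_max beat chroma vectors.
    prev = sum(np.abs(chroma[i] - chroma[i - k]).sum()
               for k in range(1, k_max + 1))
    # Differences to the succeeding k_max beat chroma vectors.
    nxt = sum(np.abs(chroma[i] - chroma[i + k]).sum()
              for k in range(1, k_max + 1))
    return prev - nxt
```

Replacing the absolute difference with any of the other distance or similarity measures mentioned above only changes the inner expression.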
(51) Chroma Accent and Multirate Accent Calculation
(52) Regarding the central path (steps 6.2, 6.3), the process of generating the salience-based chroma accent signal has already been described above in relation to beat tracking. The chroma accent signal is applied at the determined beat instants to a linear discriminant analysis (LDA) transform in step 6.3, described below.
(53) Regarding the right-hand path (steps 6.8, 6.9), another accent signal is calculated using the accent signal analysis method described in [3]. This accent signal is calculated using a computationally efficient multi-rate filter bank decomposition of the signal.
(54) When compared with the previously described f.sub.0 salience-based accent signal, this multi-rate accent signal relates more to drum or percussion content in the signal and does not emphasise harmonic information. Since both drum patterns and harmonic changes are known to be important for downbeat determination, it is attractive to combine both types of accent signals.
(55) LDA Transform of Accent Signals
(56) The next step performs separate LDA transforms at beat time instants on the accent signals generated at steps 6.2 and 6.8 to obtain from each processing path a downbeat likelihood for each beat instance.
(57) The LDA transform method can be considered as an alternative for the measure templates presented in [5]. The idea of the measure templates in [5] was to model typical accentuation patterns in music during one measure. For example, a typical pattern could be low, loud, -, loud, meaning an accent with lots of low frequency energy at the first beat, an accent with lots of energy across the frequency spectrum on the second beat, no accent on the third beat, and again an accent with lots of energy across the frequency spectrum on the fourth beat. This corresponds, for example, to the drum pattern bass, snare, -, snare.
(58) The benefit of using LDA templates compared to manually designed rhythmic templates is that they can be trained from a set of manually annotated training data, whereas the rhythmic templates were manually obtained. This increases the downbeat determination accuracy based on our simulations.
(59) Using LDA for beat determination was suggested in [1]. Thus, the main difference between [1] and the present embodiment is that here we use LDA trained templates for discriminating between downbeat and beat, whereas in [1] the discrimination was done between beat and non-beat.
(60) Referring to [1] it will be appreciated that LDA analysis involves a training phase and an evaluation phase.
(61) In the training phase, LDA analysis is performed twice, separately for the salience-based chroma accent signal (from step 6.2) and the multirate accent signal (from step 6.8).
(62) The chroma accent signal from step 6.2 is a one dimensional vector.
(63) The training method for both LDA transform stages (steps 6.3, 6.9) is as follows:
(64) 1) sample the accent signal at beat positions;
(65) 2) go through the sampled accent signal at one beat steps, taking a window of four beats in turn;
(66) 3) if the first beat in the window of four beats is a downbeat, add the sampled values of the accent signal corresponding to the four beats to a set of positive examples;
(67) 4) if the first beat in the window of four beats is not a downbeat, add the sampled values of the accent signal corresponding to the four beats to a set of negative examples;
(68) 5) store all positive and negative examples. In the case of the chroma accent signal from step 6.2, each example is a vector of length four;
(69) 6) after all the data has been collected (from a catalogue of songs with annotated beat and downbeat times), perform LDA analysis to obtain the transform matrices.
(70) When training the LDA transform, it is advantageous to take as many positive examples (of downbeats) as there are negative examples (not downbeats). This can be done by randomly picking a subset of negative examples and making the subset size match the size of the set of positive examples.
(71) 7) collect the positive and negative examples in an M by d matrix [X]. M is the number of samples and d is the data dimension. In the case of the chroma accent signal from step 6.2, d=4.
(72) 8) Normalize the matrix [X] by subtracting the mean across the rows and dividing by the standard deviation.
(73) 9) Perform LDA analysis as is known in the art to obtain the linear coefficients W. Store also the mean and standard deviation of the training data.
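The training recipe above might be sketched as follows for the one-dimensional chroma accent case (d=4); the two-class Fisher criterion is applied directly, and all names and the regularisation constant are illustrative:

```python
import numpy as np

def train_lda(accent_at_beats, downbeat_flags, win=4, seed=0):
    """Sketch of the LDA training steps for a 1-D accent signal.

    accent_at_beats : accent signal sampled at beat positions (1-D)
    downbeat_flags  : boolean array, True where a beat is a downbeat
    Returns (W, mean, std) for the two-class Fisher discriminant.
    """
    pos, neg = [], []
    # Slide a window of `win` beats one beat at a time.
    for i in range(len(accent_at_beats) - win + 1):
        example = accent_at_beats[i:i + win]
        (pos if downbeat_flags[i] else neg).append(example)
    # Balance the classes by randomly subsampling the negatives.
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(neg), size=min(len(pos), len(neg)), replace=False)
    neg = [neg[j] for j in keep]
    X = np.array(pos + neg, dtype=float)
    y = np.array([1] * len(pos) + [0] * len(neg))
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-9
    X = (X - mean) / std
    # Two-class Fisher LDA: W is proportional to Sw^{-1} (mu1 - mu0).
    mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    Sw = (np.cov(X[y == 1].T) + np.cov(X[y == 0].T)
          + 1e-6 * np.eye(X.shape[1]))
    W = np.linalg.solve(Sw, mu1 - mu0)
    return W, mean, std
```

For the multirate accent signal, each example would instead be the unraveled 4-band window (d=16), but the procedure is otherwise identical.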
(74) In the online downbeat detection phase (i.e. the evaluation phases steps 6.3 and 6.9) the downbeat likelihood is obtained using the method:
(75) for each recognized beat time, construct a feature vector x of the accent signal value at the beat instant and three next beat time instants;
(76) normalise the input feature vector x by subtracting the mean of the training data and dividing by its standard deviation;
(77) calculate a score x*W for the beat time instant, where x is a 1 by d input feature vector and W is the linear coefficient vector of size d by 1.
(78) A high score may indicate a high downbeat likelihood and a low score may indicate a low downbeat likelihood.
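Given a trained coefficient vector W together with the training mean and standard deviation, the evaluation steps above amount to a windowed dot product; the function name and argument layout are illustrative:

```python
import numpy as np

def downbeat_likelihoods(accent_at_beats, W, mean, std, win=4):
    """Evaluation-phase sketch: score x*W for every beat that still has a
    full window of `win` beats ahead of it."""
    scores = np.full(len(accent_at_beats), -np.inf)
    for i in range(len(accent_at_beats) - win + 1):
        # Feature vector: accent at this beat and the next win-1 beats,
        # normalised with the training statistics.
        x = (accent_at_beats[i:i + win] - mean) / std
        scores[i] = float(x @ W)
    return scores
```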
(79) In the case of the chroma accent signal from step 6.2, the dimension d of the feature vector is 4, corresponding to one accent signal sample per beat. In the case of the multirate accent signal from step 6.8, the accent has four frequency bands and the dimension of the feature vector is 16.
(80) The feature vector is constructed by unraveling the matrix of bandwise feature values into a vector.
(81) In the case of time signatures other than 4/4, the above processing is modified accordingly. For example, when training a LDA transform matrix for a 3/4 time signature, the accent signal is travelled in windows of three beats. Several such transform matrices may be trained, for example, one corresponding to each time signature the system needs to be able to operate under.
(82) Various alternatives to the LDA transform are possible. These include, for example, training any classifier, predictor, or regression model which is able to model the dependency between accent signal values and downbeat likelihood. Examples include, for example, support vector machines with various kernels, Gaussian or other probabilistic distributions, mixtures of probability distributions, k-nearest neighbour regression, neural networks, fuzzy logic systems, decision trees, and so on. The benefit of the LDA is that it is straightforward to implement and computationally simple.
(83) Downbeat Candidate Scoring and Downbeat Determination
(84) When the audio has been processed using the above-described steps, an estimate for the downbeat is generated by applying the chord change likelihood and the first and second accent-based likelihood values in a non-causal manner to a score-based algorithm. Before computing the final score, the chord change possibility and the two downbeat likelihood signals are normalized by dividing by their maximum absolute value (see steps 6.4, 6.7 and 6.10).
(85) The possible first downbeats are t.sub.1, t.sub.2, t.sub.3, t.sub.4, and the one that is selected is the one maximizing:
(86) score(t.sub.n)=(1/|S(t.sub.n)|).SIGMA..sub.t.sub.i.di-elect cons.S(t.sub.n)[w.sub.c·Chord_change(t.sub.i)+w.sub.a·a.sub.c(t.sub.i)+w.sub.m·a.sub.m(t.sub.i)], where a.sub.c(t.sub.i) and a.sub.m(t.sub.i) are the normalized chroma accent based and multirate accent based downbeat likelihoods at beat t.sub.i.
S(t.sub.n) is the set of beat times t.sub.n, t.sub.n+4, t.sub.n+8, . . . .
(87) w.sub.c, w.sub.a, and w.sub.m are the weights for the chord change possibility, chroma accent based downbeat likelihood, and multirate accent based downbeat likelihood, respectively. Step 6.11 represents the above summation and step 6.12 the determination based on the highest score for the window of possible downbeats.
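The normalisation and scoring of steps 6.4, 6.7, 6.10 and 6.11-6.12 might be sketched as follows for a 4/4 time signature; the weights shown are illustrative, not values from the patent:

```python
import numpy as np

def pick_first_downbeat(chord, acc_chroma, acc_multi,
                        w=(0.5, 0.3, 0.2), meter=4):
    """Normalise each likelihood signal by its maximum absolute value,
    average the weighted sum over every meter-th beat, and return the
    offset n (0..meter-1) of the first downbeat candidate with the
    highest score."""
    norm = lambda v: np.asarray(v, float) / (np.max(np.abs(v)) + 1e-9)
    total = (w[0] * norm(chord) + w[1] * norm(acc_chroma)
             + w[2] * norm(acc_multi))
    # score(t_n): average of the combined likelihood over S(t_n).
    scores = [total[n::meter].mean() for n in range(meter)]
    return int(np.argmax(scores))
```

Swapping `.mean()` for `np.median`, or the weighted sum for a product, gives the variations mentioned in the text.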
(88) Note that the above scoring function was presented in the case of a 4/4 time signature. In the case of a 3/4 time signature, for example, the summation could be done across every three beats. Various modifications are possible and apparent, such as using a product of the chord change possibilities based on the different accent signals instead of the sum, or using a median instead of the average. Moreover, more complex decision logic could be implemented, for example, one possibility could be to train a classifier which would input the score(t.sub.n) and output the decision for the downbeat. As another example, a classifier could be trained which would input chord change possibility, chroma accent based downbeat likelihood, and/or multirate accent based downbeat likelihood, and which would output the decision for the downbeat. For example, a neural network could be used to learn the mapping between the downbeat likelihood curves and the downbeat positions, including the weights w.sub.c, w.sub.a, and w.sub.m. In general, the determination of the downbeat could be done by any decision logic which is able to take the chord change possibility and downbeat likelihood curves as input and produce the downbeat location as output. In addition, in the case where we can assume that the music contains only full measures at a certain time signature, the above score may be calculated over all the beats in the signal. As another example, the above score could be calculated at sub-beat resolution, for example, at every half beat. In cases where not all measures are full, the above score may be calculated in windows of certain duration over the signal. The benefit of the above scoring method is that it is computationally very simple.
(89) Having identified downbeats within the audio track of the video, a set of meaningful edit points are available to the software application 212 in the analysis server for making musically meaningful cuts to videos.
(90) It will be appreciated that the above described embodiments are purely illustrative and are not limiting on the scope of the invention. Other variations and modifications will be apparent to persons skilled in the art upon reading the present application.
(91) Moreover, the disclosure of the present application should be understood to include any novel features or any novel combination of features either explicitly or implicitly disclosed herein or any generalization thereof and during the prosecution of the present application or of any application derived therefrom, new claims may be formulated to cover any such features and/or combination of such features.