G10H2210/051

Evaluating percussive performances
11790801 · 2023-10-17 ·

Measures (for example, methods, systems and computer programs) are provided to evaluate a percussive performance. Percussive performance data captured by one or more sensors is received. The percussive performance data represents one or more impact waveforms of one or more hits on a performance surface. The one or more impact waveforms are analysed. The analysing comprises: (i) identifying one or more characteristics of the one or more impact waveforms; (ii) classifying the one or more hits as one or more percussive hit-types based on the one or more characteristics; and (iii) evaluating the one or more percussive hit-types against performance target data. Performance evaluation data is output based on said evaluating.
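
The three analysis steps in this abstract (identify characteristics, classify hit-types, evaluate against targets) can be sketched as follows. The specific features, thresholds, and hit-type labels here are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the three-step percussive evaluation pipeline.
# Features, thresholds, and hit-type names are assumptions for illustration.

def extract_characteristics(waveform):
    """Step (i): identify simple characteristics of an impact waveform."""
    peak = max(abs(s) for s in waveform)
    # Crude duration proxy: count of samples above 10% of the peak.
    duration = sum(1 for s in waveform if abs(s) > 0.1 * peak)
    return {"peak": peak, "duration": duration}

def classify_hit(characteristics):
    """Step (ii): map characteristics to a percussive hit-type."""
    if characteristics["peak"] > 0.8:
        return "accent"
    if characteristics["duration"] > 5:
        return "buzz"
    return "tap"

def evaluate(hit_types, target_hit_types):
    """Step (iii): score classified hits against performance target data."""
    matches = sum(1 for h, t in zip(hit_types, target_hit_types) if h == t)
    return matches / len(target_hit_types)

waveforms = [[0.0, 0.9, 0.4, 0.1],
             [0.0, 0.3, 0.25, 0.2, 0.15, 0.12, 0.1]]
hits = [classify_hit(extract_characteristics(w)) for w in waveforms]
score = evaluate(hits, ["accent", "buzz"])
```

The output of `evaluate` would drive the performance evaluation data that the method outputs.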

Media content identification on mobile devices

A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
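
One of the audio feature signatures named above is audio frame frequency-domain entropy. A minimal sketch of such a feature, assuming Shannon entropy over the frame's magnitude spectrum (the patent's exact formulation may differ):

```python
import math
import random

def spectral_entropy(frame):
    """Shannon entropy of a frame's magnitude spectrum -- an assumed
    simplification of the 'frequency domain entropy' feature."""
    n = len(frame)
    mags = []
    # Direct DFT over the first n/2 bins (stdlib only, O(n^2); fine for a demo).
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    probs = [m / total for m in mags if m > 0]
    return -sum(p * math.log2(p) for p in probs)

# A pure tone concentrates energy in one bin (low entropy);
# noise spreads energy across bins (high entropy).
tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(64)]
```

A fingerprint would combine such frame-level features with onset positions and spectral-change measures.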

Systems and methods for generating a visual color display of audio-file data

Systems and methods for generating a visual color display of audio-file data are provided. The system includes a processor that performs a method including receiving audio-file data and generating filtered-audio data by processing the audio-file data through frequency-band filters, the frequency-band filters having different frequency bands. The method includes generating one or more waveforms corresponding to the filtered-audio data and displaying the waveforms superimposed in unique colors relative to one another. The method includes downsampling the waveforms, processing the waveforms through an envelope detector, and processing the waveforms through an expander and applying a gain factor. The waveforms have transparency levels at sections that are proportional or inversely proportional to amplitudes at the sections.
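
The envelope-detector stage mentioned above can be sketched with a simple one-pole peak follower. The attack/release coefficients are illustrative assumptions, not values from the patent.

```python
def envelope(samples, attack=0.5, release=0.9):
    """One-pole peak follower: rises quickly on attacks, decays slowly after.
    Coefficient values are assumptions for illustration."""
    env, out = 0.0, []
    for s in samples:
        rectified = abs(s)
        if rectified > env:
            # Attack phase: move quickly toward the rectified input.
            env = attack * env + (1 - attack) * rectified
        else:
            # Release phase: decay exponentially.
            env = release * env
        out.append(env)
    return out

burst = [0.0, 1.0, 0.0, 0.0, 0.0]
env = envelope(burst)  # rises at the impulse, then decays monotonically
```

Downsampling the envelope rather than the raw waveform preserves perceived amplitude shape at low display resolutions, which is why envelope detection typically precedes display in pipelines like this one.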

METHOD OF PERFORMING A PIECE OF MUSIC
20230343313 · 2023-10-26 ·

A method for performing a piece of music, comprising the steps of: receiving (310) an input signal from a musical instrument (220), the input signal encoding the notes played on the instrument (220); matching (320) a note or combination of notes in the input signal to a respective trigger in a predefined set of triggers stored in a memory (130), each trigger being associated with a respective fragment of music that makes up a part of the piece of music, each fragment having a predefined length and starting from a predefined position relative to the start of a bar, and at least one of the fragments being more complex than the associated trigger; wherein, when a note or combination of notes that matches a trigger is played by the user, the method outputs (330) at least part of the matched fragment, starting at the time that the note or combination of notes is played.
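
The core trigger-to-fragment matching step can be sketched as an order-insensitive lookup. The trigger chords and fragment contents below are made up for illustration.

```python
# Minimal sketch of trigger matching: a played note combination maps to a
# stored music fragment. Triggers and fragments are illustrative assumptions.

TRIGGERS = {
    frozenset({"C4", "E4", "G4"}): ["C4", "D4", "E4", "F4", "G4"],  # fragment A
    frozenset({"A3", "C4", "E4"}): ["A3", "B3", "C4"],              # fragment B
}

def match_trigger(played_notes):
    """Return the fragment for the played note combination, or None."""
    return TRIGGERS.get(frozenset(played_notes))

fragment = match_trigger(["E4", "C4", "G4"])  # matching is order-insensitive
```

Using a set as the key means the user can voice the trigger chord in any order; playback of the matched fragment would then be scheduled from the moment the trigger is played.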

Systems and methods for generating a playback-information display during time compression or expansion of an audio signal

Systems and methods for generating a playback-information display during time compression or expansion of an audio signal are provided. The system includes a processor that performs a method including displaying a first remaining playback-time associated with an audio file; adjusting the playback speed of the audio file during playback of the audio file; and, in response to the playback speed being adjusted, automatically displaying a second remaining playback-time associated with the audio file during playback of the audio file.
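
The remaining-time update described above reduces to simple arithmetic: remaining wall-clock time is the unplayed duration divided by the playback speed. This formulation is an assumption; the patent does not spell out the formula.

```python
def remaining_playback_time(duration_s, position_s, speed):
    """Remaining wall-clock playback time at the current speed
    (an assumed formulation of the display update)."""
    return (duration_s - position_s) / speed

# 60 s of audio left in the file: at 1x it reads 60 s, at 2x it reads 30 s.
at_1x = remaining_playback_time(180.0, 120.0, 1.0)
at_2x = remaining_playback_time(180.0, 120.0, 2.0)
```

Recomputing this value whenever the speed changes is what lets the display update automatically during playback.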

METHOD AND APPARATUS FOR DISPLAYING MUSIC POINTS, AND ELECTRONIC DEVICE AND MEDIUM
20220293136 · 2022-09-15 ·

Disclosed are a method and apparatus for displaying music points, an electronic device, and a medium. One specific embodiment of the method includes: acquiring audio material; analyzing initial music points in the audio material, wherein the initial music points include beat points and/or note starting points in the audio material; and, on an operation interface of video clipping, displaying identifiers of target music points on the clip timeline according to the position of the audio material on the clip timeline and the positions of the target music points in the audio material, wherein the target music points are some or all of the initial music points. According to the embodiment, the time a user spends processing audio material and marking music points is reduced, while the flexibility of the tool is maintained.
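
The mapping from music points inside the audio material to identifier positions on the clip timeline can be sketched as an offset calculation. The function name and the offset model (clip start plus an optional in-point) are illustrative assumptions.

```python
def timeline_markers(clip_start_s, music_points_s, clip_in_s=0.0):
    """Map music points (seconds into the audio material) to positions on
    the clip timeline, given where the material sits on the timeline.
    Names and the offset model are assumptions for illustration."""
    return [clip_start_s + (p - clip_in_s) for p in music_points_s]

# Material placed at t = 5.0 s on the timeline; beat points at 0.5, 1.0, 1.5 s.
markers = timeline_markers(5.0, [0.5, 1.0, 1.5])
```

Each resulting position is where an identifier would be drawn on the clip timeline.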

AUTOMATIC MUSICAL PERFORMANCE DEVICE, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND AUTOMATIC MUSICAL PERFORMANCE METHOD
20220301527 · 2022-09-22 ·

An automatic musical performance device includes: a storage part, storing musical performance patterns; a musical performance part, performing a musical performance on the basis of the musical performance patterns stored in the storage part; an input part, to which musical performance information is input; a setting part, setting a mode as to whether to switch the musical performance; a selection part, selecting the musical performance pattern estimated to have a maximum likelihood among the musical performance patterns stored in the storage part, on the basis of the musical performance information input to the input part, when a mode of switching the musical performance by the musical performance part is set by the setting part; and a switching part, switching at least one musical expression of the musical performance pattern played by the musical performance part to a musical expression of the musical performance pattern selected by the selection part.
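
The selection part's job, picking the stored pattern with maximum likelihood given the incoming performance, can be sketched as an argmax over pattern scores. The stored patterns and the overlap-count scoring function below are assumptions for illustration.

```python
# Illustrative maximum-likelihood pattern selection: score each stored
# performance pattern against the incoming notes and pick the best match.
# Pattern contents and the scoring rule are assumptions.

PATTERNS = {
    "rock_beat":  ["kick", "hat", "snare", "hat"],
    "swing_beat": ["ride", "ride", "snare", "ride"],
}

def select_pattern(performed_notes):
    """Return the stored pattern name with the highest overlap score."""
    def score(pattern):
        return sum(1 for a, b in zip(pattern, performed_notes) if a == b)
    return max(PATTERNS, key=lambda name: score(PATTERNS[name]))

best = select_pattern(["kick", "hat", "snare", "kick"])
```

The switching part would then swap a musical expression of the currently playing pattern for the corresponding expression of the selected pattern.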

Systems and methods for capturing and interpreting audio

A device is provided for capturing vibrations produced by an object such as a musical instrument, for example a cymbal of a drum kit. The device comprises a detectable element, such as a ferromagnetic element (for example, a metal shim), and a sensor spaced apart from and located relative to the musical instrument. The detectable element is located between the sensor and the musical instrument. When the musical instrument vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the musical instrument.