G10H2250/235

Haptic feedback method
11430307 · 2022-08-30

Provided is a haptic feedback method, including: step S1, training an algorithm model on audio clips containing known audio event types; and step S2, obtaining an audio signal, identifying the different audio event types in the audio signal with the algorithm model, matching, according to a preset rule, the audio event types with different vibration effects as haptic feedback, and outputting the haptic feedback. Compared with the related art, the present haptic feedback method provides users with real-time haptic feedback when applied to a mobile electronic product, thereby improving the use experience of the mobile electronic product.
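A minimal sketch of step S2's matching stage, assuming hypothetical event labels and vibration parameters (the abstract does not specify either): each classified audio event type is looked up in a preset rule table that maps it to a vibration effect.

```python
# Hypothetical sketch of step S2: mapping classified audio event types to
# vibration effects via a preset rule. Event labels and effect parameters
# are illustrative, not taken from the publication.

# Preset rule: event type -> vibration effect (amplitude 0..1, duration in ms)
VIBRATION_RULES = {
    "explosion": {"amplitude": 1.0, "duration_ms": 300},
    "gunshot":   {"amplitude": 0.8, "duration_ms": 80},
    "footstep":  {"amplitude": 0.3, "duration_ms": 40},
}

# Unrecognised event types produce no vibration
DEFAULT_EFFECT = {"amplitude": 0.0, "duration_ms": 0}

def haptic_feedback(event_types):
    """Match each detected audio event type to its preset vibration effect."""
    return [VIBRATION_RULES.get(e, DEFAULT_EFFECT) for e in event_types]

effects = haptic_feedback(["gunshot", "footstep", "speech"])
```

In a real mobile product the returned effect parameters would drive the platform's vibrator API rather than being returned as dictionaries.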

SYSTEMS AND METHODS FOR ANALYZING COMPONENTS OF AUDIO TRACKS
20170236504 · 2017-08-17

A method is described comprising receiving a stem signal and an audio mix signal, wherein the audio mix signal comprises information of the stem signal. The method includes applying a first transform to the stem signal to provide a first stem spectrum, applying a second transform to the stem signal to provide a second stem spectrum, generating a plurality of mix signals using the audio mix signal, applying a first transform to each mix signal of the plurality of mix signals to provide a corresponding first mix signal spectrum, applying a second transform to each mix signal of the plurality of mix signals to provide a corresponding second mix signal spectrum, and using information of the first stem spectrum, the second stem spectrum, a first mix signal spectrum, or a second mix signal spectrum to detect the information of the stem signal in the audio mix signal.
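A rough illustration of the detection idea, under assumptions the abstract does not make: here the "first" and "second" transforms are stood in for by FFTs at two sizes, and presence of the stem in the mix is decided by averaging the cosine similarity of the stem and mix magnitude spectra at both resolutions.

```python
import numpy as np

# Illustrative sketch only: two FFT sizes stand in for the patent's "first"
# and "second" transforms, and a cosine-similarity threshold stands in for
# its detection logic. All parameter values are assumptions.

def spectrum(signal, n_fft):
    """Magnitude spectrum of the signal under one transform size."""
    return np.abs(np.fft.rfft(signal, n=n_fft))

def stem_in_mix(stem, mix, n_fft_1=256, n_fft_2=512, threshold=0.8):
    """Detect the stem's spectral content in the mix at two resolutions."""
    sims = []
    for n in (n_fft_1, n_fft_2):
        s, m = spectrum(stem, n), spectrum(mix, n)
        sims.append(np.dot(s, m) / (np.linalg.norm(s) * np.linalg.norm(m) + 1e-12))
    return float(np.mean(sims)) >= threshold

t = np.linspace(0, 1, 8000, endpoint=False)
stem = np.sin(2 * np.pi * 440 * t)                # 440 Hz stem
mix = stem + 0.5 * np.sin(2 * np.pi * 660 * t)    # mix contains the stem
other = np.sin(2 * np.pi * 1000 * t)              # unrelated signal
```

Using two transform resolutions, as the abstract describes, makes the comparison less sensitive to any single analysis window's trade-off between time and frequency resolution.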

Memory device, waveform data editing method

Provided are a memory device, a waveform data editing method, and an editing program thereof. Waveform data obtained by sampling a musical sound is acquired, and the difference between the frequency of the n-th harmonic of the waveform data and the frequency of the n-th harmonic sound of a resonance sound generation circuit is calculated. If the difference is 1 Hz or more, a 20 Hz-wide frequency component centered on the frequency of the n-th harmonic is clipped from the frequency spectrum. The clipped waveform is adjusted so that the calculated difference is reduced, and the adjusted waveform is combined with the original waveform to edit the waveform data. Thus, in the waveform data, the difference between the harmonic frequencies and the resonance characteristic is eliminated, resonance is facilitated, and beating of the sound is prevented.
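A compact sketch of the editing rule just described, with assumed helper names and a simple FFT-bin shift standing in for however the actual implementation retunes the clipped band: when the waveform's n-th harmonic differs from the resonance circuit's by 1 Hz or more, a 20 Hz band around the harmonic is extracted, shifted toward the resonance frequency, and recombined with the rest of the spectrum.

```python
import numpy as np

# Sketch of the described editing rule, assuming the harmonic frequencies are
# already known. The bin-shift retuning is an illustrative simplification.

def edit_harmonic(signal, fs, wave_harmonic_hz, resonance_hz):
    diff = wave_harmonic_hz - resonance_hz
    if abs(diff) < 1.0:                 # under 1 Hz: leave the waveform as-is
        return signal
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = np.abs(freqs - wave_harmonic_hz) <= 10.0   # 20 Hz band on the harmonic
    clipped = np.zeros_like(spec)
    clipped[band] = spec[band]          # clip the band out of the spectrum
    rest = spec.copy()
    rest[band] = 0.0
    # Shift the clipped band so its centre lands on the resonance frequency.
    shift_bins = int(round(-diff * len(signal) / fs))
    shifted = np.roll(clipped, shift_bins)
    return np.fft.irfft(rest + shifted, n=len(signal))

fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 105 * t)               # harmonic at 105 Hz
edited = edit_harmonic(sig, fs, 105.0, 100.0)   # resonance expects 100 Hz
```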

Media Content Identification on Mobile Devices

A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
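One of the audio features named above, frame frequency-domain entropy, can be sketched as follows; the frame length and the mean-threshold binarisation are assumptions for illustration, not the publication's signature scheme.

```python
import numpy as np

# Illustrative sketch of a per-frame frequency-domain entropy signature.
# Frame size and thresholding are assumptions, not values from the publication.

def frame_entropy(frame):
    """Shannon entropy of the frame's normalised magnitude spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    p = mag / (mag.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_signature(signal, frame_len=256):
    """One bit per frame: 1 if the frame's spectral entropy is above average."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    ent = np.array([frame_entropy(f) for f in frames])
    return (ent > ent.mean()).astype(int)

rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)      # pure tone: low-entropy frames
noise = rng.standard_normal(2048)       # white noise: high-entropy frames
bits = entropy_signature(np.concatenate([tone, noise]))
```

Tonal content concentrates spectral energy in a few bins (low entropy) while noisy content spreads it (high entropy), so the bit pattern distinguishes the two regions of the signal.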

Transducer apparatus for an edge-blown aerophone and an edge-blown aerophone having the transducer apparatus
11200872 · 2021-12-14

This disclosure provides a transducer apparatus for an edge-blown aerophone, the edge-blown aerophone having an aerophone embouchure hole. An aerophone speaker delivers sound to a resonant chamber of the aerophone via the aerophone embouchure hole. An aerophone microphone receives, via the aerophone embouchure hole, sound in the resonant chamber. A housing provides a lip plate with a housing embouchure hole independent and separate from the aerophone embouchure hole. Breath sensors sense breath applied across the housing embouchure hole. An electronic processor, connected to the speaker, receives signals from the microphone and the breath sensors. The breath sensors provide signals indicative of breath strength. The electronic processor generates an excitation signal which is delivered as an acoustic excitation signal to the resonant chamber by the aerophone speaker. The electronic processor uses the signals it receives to determine a desired musical note which a player of the aerophone wishes to play.
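A toy sketch of the processor's decision step, under assumptions the abstract does not state: each breath sensor position is taken to map to one note, the strongest sensor selects the desired note, and the excitation signal is modelled as a sine at that note's frequency scaled by breath strength.

```python
import math

# Hypothetical sketch: sensor layout, thresholds, and the note table are
# illustrative assumptions, not the disclosed apparatus's mapping.

NOTE_FREQS = {"C5": 523.25, "D5": 587.33, "E5": 659.26}
SENSOR_NOTES = ["C5", "D5", "E5"]   # assumed: one note per sensor position

def desired_note(sensor_strengths):
    """Pick the note associated with the strongest breath sensor."""
    idx = max(range(len(sensor_strengths)), key=lambda i: sensor_strengths[i])
    return SENSOR_NOTES[idx]

def excitation_sample(note, t, strength):
    """One sample of a sine excitation at the note's frequency,
    scaled by breath strength, for delivery to the aerophone speaker."""
    return strength * math.sin(2 * math.pi * NOTE_FREQS[note] * t)

note = desired_note([0.1, 0.7, 0.3])
```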

SOUND SIGNAL GENERATION METHOD, GENERATIVE MODEL TRAINING METHOD, SOUND SIGNAL GENERATION SYSTEM, AND RECORDING MEDIUM
20210383816 · 2021-12-09

A computer-implemented sound signal generation method includes: obtaining a first sound source spectrum of a sound signal to be generated; obtaining a first spectral envelope of the sound signal; and estimating fragment data representative of samples of the sound signal based on the obtained first sound source spectrum and the obtained first spectral envelope.
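The patent estimates the samples with a trained model; as a non-neural illustration of what the two inputs jointly determine, the sketch below simply multiplies the sound source spectrum by the spectral envelope and inverts the result to time-domain samples.

```python
import numpy as np

# Illustrative, non-neural stand-in for the estimation step: the spectral
# envelope shapes the sound source spectrum, and an inverse transform yields
# samples. The publication's method uses a generative model instead.

def synthesize(source_spectrum, spectral_envelope, n_samples):
    shaped = source_spectrum * spectral_envelope   # envelope shapes the source
    return np.fft.irfft(shaped, n=n_samples)

n = 1024
freqs = np.fft.rfftfreq(n, d=1.0 / 8000)
source = np.zeros(len(freqs), dtype=complex)
source[np.argmin(np.abs(freqs - 200.0))] = 1.0     # single 200 Hz partial
envelope = np.exp(-freqs / 1000.0)                 # decaying spectral envelope
samples = synthesize(source, envelope, n)
```

Separating the source spectrum (fine harmonic structure) from the spectral envelope (broad resonance shape) is the classic source-filter decomposition; the patent's contribution is estimating the output samples from the two with a trained model.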

SOUND SIGNAL SYNTHESIS METHOD, GENERATIVE MODEL TRAINING METHOD, SOUND SIGNAL SYNTHESIS SYSTEM, AND RECORDING MEDIUM
20210375248 · 2021-12-02

A computer-implemented sound signal synthesis method includes: generating, based on first control data representative of a plurality of conditions of a sound signal to be generated, (i) first data representative of a sound source spectrum of the sound signal, and (ii) second data representative of a spectral envelope of the sound signal; and synthesizing the sound signal based on the sound source spectrum indicated by the first data and the spectral envelope indicated by the second data.

SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO
20220199059 · 2022-06-23

A device is provided for capturing vibrations produced by an object such as a musical instrument, for example a drum head of a drum kit. The device comprises a detectable element, such as a ferromagnetic element (e.g., a metal shim), and a sensor spaced apart from and located relative to the musical instrument. The detectable element is located between the sensor and the musical instrument. When the musical instrument vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the musical instrument.

AUTOMATIC CONVERSION OF SPEECH INTO SONG, RAP OR OTHER AUDIBLE EXPRESSION HAVING TARGET METER OR RHYTHM

Captured vocals may be automatically transformed using advanced digital signal processing techniques that enable captivating applications, and even purpose-built devices, in which even novice user-musicians may generate, audibly render, and share musical performances. In some cases, the automated transformations allow spoken vocals to be segmented, arranged, temporally aligned with a target rhythm, meter, or accompanying backing tracks, and pitch-corrected in accord with a score or note sequence. Speech-to-song music applications are one such example. In some cases, spoken vocals may be transformed in accord with musical genres such as rap using automated segmentation and temporal alignment techniques, often without pitch correction. Such applications, which may employ different signal processing and different automated transformations, may nonetheless be understood as speech-to-rap variations on the theme.
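The temporal-alignment step can be sketched minimally as snapping segment durations to a beat grid; the segment boundaries, tempo, and rounding rule below are illustrative assumptions, not the disclosed algorithm.

```python
# Hypothetical sketch of temporal alignment: each spoken-vocal segment
# (given as a duration in seconds) is stretched or compressed to the
# nearest whole number of beats in a target meter.

def align_to_grid(segment_durations, bpm):
    """Snap each segment duration to a whole number of beats at the given tempo."""
    beat = 60.0 / bpm
    aligned = []
    for d in segment_durations:
        beats = max(1, round(d / beat))   # at least one beat per segment
        aligned.append(beats * beat)
    return aligned

# Spoken segments of 0.45 s, 0.9 s and 1.3 s on a 120 BPM grid (0.5 s beats)
aligned = align_to_grid([0.45, 0.9, 1.3], 120)
```

A real speech-to-song pipeline would then time-stretch the audio of each segment to its aligned duration (and, for song rather than rap, pitch-correct it against the score).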

Method and system for accelerated decomposing of audio data using intermediate data

A method for processing audio data comprises providing song identification data identifying a particular song from among a plurality of songs, or a particular position within a particular song, and loading intermediate data associated with the song identification data from a storage medium or from a remote device. The method also comprises obtaining input audio data representing audio signals of the song identified by the song identification data. The audio signals comprise a mixture of different musical timbres, including at least a first musical timbre and a second musical timbre different from the first musical timbre. The method further comprises combining the input audio data and the intermediate data with one another to obtain output audio data, which represent audio signals of the first musical timbre separated from the second musical timbre.
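One way to picture the acceleration is to model the precomputed intermediate data as a cached spectral mask for the first timbre: combining then reduces to multiplying the mix spectrum by the mask, with no need to rerun the expensive decomposition. This representation is an assumption for illustration; the patent does not specify the form of the intermediate data.

```python
import numpy as np

# Illustrative sketch: "intermediate data" modelled as a precomputed spectral
# mask for the first timbre. Combining it with the input audio separates that
# timbre without recomputing the decomposition.

def apply_intermediate(mix, mask, n_samples):
    """Combine input audio with cached intermediate data (a spectral mask)."""
    spec = np.fft.rfft(mix)
    return np.fft.irfft(spec * mask, n=n_samples)

n = 1000
fs = 1000.0
t = np.arange(n) / fs
timbre_a = np.sin(2 * np.pi * 50 * t)    # first musical timbre
timbre_b = np.sin(2 * np.pi * 200 * t)   # second musical timbre
mix = timbre_a + timbre_b

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
mask = (freqs < 100.0).astype(float)     # cached mask keeping timbre A's band
separated = apply_intermediate(mix, mask, n)
```

Caching the mask per song (keyed by the song identification data) is what lets the method skip the slow source-separation step at playback time.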