Patent classifications
G10H1/383
METHOD FOR CHORUS MIXING, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure provides a method for chorus mixing, an apparatus, an electronic device, and a storage medium. The method includes converting a main vocal audio signal and a chorus audio signal into frequency-domain signals, respectively, wherein the chorus audio signal comprises main vocal audio played by a speaker; determining a delay between the main vocal audio signal and the chorus audio signal based on the frequency-domain signal of the main vocal audio signal and the frequency-domain signal of the main vocal audio played by the speaker included in the frequency-domain signal of the chorus audio signal; aligning the chorus audio signal with the main vocal audio signal based on the determined delay; performing echo cancellation on the aligned chorus audio signal; and mixing the main vocal audio signal with the echo-canceled chorus audio signal.
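The delay-determination step above can be illustrated with frequency-domain cross-correlation; this is an assumed estimator (the abstract does not name one), and the signals and delay value are synthetic stand-ins.

```python
import numpy as np

def estimate_delay(main, chorus):
    """Estimate how many samples later `main` appears inside `chorus`,
    using FFT-based cross-correlation (an assumed technique; the patent
    abstract does not specify the exact estimator)."""
    n = len(main) + len(chorus) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two, avoids wraparound
    M = np.fft.rfft(main, nfft)
    C = np.fft.rfft(chorus, nfft)
    corr = np.fft.irfft(C * np.conj(M), nfft)   # cross-correlation via the FFT
    lag = int(np.argmax(corr))                  # peak position = delay estimate
    if lag > nfft // 2:                         # fold large indices to negative lags
        lag -= nfft
    return lag

fs = 8000
rng = np.random.default_rng(0)
main = rng.standard_normal(fs)                  # stand-in "main vocal" signal
delay = 500                                     # simulated speaker-playback delay
chorus = np.concatenate([np.zeros(delay), main])[:fs]

lag = estimate_delay(main, chorus)              # recovers the 500-sample delay
aligned = chorus[lag:]                          # align chorus to the main vocal
```

After alignment, the aligned chorus can be passed to echo cancellation and then summed with the main vocal signal for the final mix.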
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus according to the present disclosure includes: an acquisition unit that acquires music information; an extraction unit that extracts a plurality of types of feature amounts from the music information acquired by the acquisition unit; and a generation unit that generates information in which the plurality of types of feature amounts extracted by the extraction unit are associated with predetermined identification information, as music feature information to be used as learning data in composition processing using machine learning.
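A minimal sketch of such a "music feature information" record: several feature types are computed from raw samples and bundled with an identifier. The specific features and field names here are illustrative assumptions, not the patent's own.

```python
import numpy as np

def extract_feature_record(track_id, samples, fs):
    """Associate several feature types with identification information,
    as one hypothetical form of the learning-data record described above."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return {
        "id": track_id,                          # predetermined identification information
        "rms_energy": float(np.sqrt(np.mean(samples ** 2))),
        "spectral_centroid_hz": centroid,        # brightness-style feature
        "zero_crossing_rate": float(np.mean(np.abs(np.diff(np.sign(samples))) > 0)),
    }

fs = 8000
t = np.arange(fs) / fs
record = extract_feature_record("track-001", np.sin(2 * np.pi * 440 * t), fs)
```

Records of this shape could then be accumulated into a training set for a composition model.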
ELECTRONIC DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND METHOD THEREFOR
In an electronic device for an electronic musical instrument, a determination grace period, during which a plurality of user operations on the electronic musical instrument are determined to be performed simultaneously, is set for a first section of a song having a plurality of sections, based on data included in that section. During playback of the first section of the accompaniment, the automatic accompaniment advances from the first section to the next, second section when a user operation is detected outside the determination grace period for the first section, and does not advance from the first section to the second section when the user operation is detected within the determination grace period for the first section.
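The gating rule reduces to a simple timing test. The function name, time units, and the fixed grace-period value below are illustrative; per the abstract, the device would derive the grace period from data in the section itself.

```python
def should_advance(op_time, section_start, grace_period):
    """Return True when a user operation should advance the automatic
    accompaniment to the next section: operations inside the grace
    period are treated as simultaneous and do not advance it."""
    in_grace_period = section_start <= op_time < section_start + grace_period
    return not in_grace_period

# An operation 0.4 s into a section with a 0.25 s grace period advances the
# accompaniment; one 0.1 s in is treated as "simultaneous" and does not.
advance = should_advance(10.4, section_start=10.0, grace_period=0.25)
hold = should_advance(10.1, section_start=10.0, grace_period=0.25)
```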
METHOD AND INSTALLATION FOR PROCESSING A SEQUENCE OF SIGNALS FOR POLYPHONIC NOTE RECOGNITION
This is a method and installation in which a time-domain digital audio signal is split into a plurality of narrow-band time-domain digital audio signals confined to specific frequency bands, short-term segments of which are temporarily stored in memory. The method comprises the use of signal processing algorithms for extracting multiple signal features from said short-term segments, either in a fixed sequence or upon request from a decision-making algorithm. Said decision-making algorithm makes tentative or final decisions about the type of occupancy of frequency bands based on the extracted features. Said decision-making algorithm may request from said signal processing algorithms further specific feature extractions from specific short-term segments and make further tentative or final decisions about the type of occupancy of frequency bands based on the requested features. Next, said decision-making algorithm stores its tentative decisions and makes final decisions about band occupancy for processing together with results from later short-term segments. Eventually, said decision-making algorithm outputs final decisions derived from current and past short-term segments in the form of a set of notes having been played over some recent time interval, together with information as to the timing of each note in the set.
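The band-occupancy idea can be sketched roughly as follows. For simplicity this uses FFT-bin energies per band rather than a true time-domain filter bank, and reduces the decision-making algorithm to a single energy threshold; both simplifications are assumptions.

```python
import numpy as np

def band_energies(segment, fs, edges):
    """Report per-band energy of a short-term segment, one example of a
    'signal feature' a decision-making algorithm could consume."""
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(edges[:-1], edges[1:])]

def occupied_bands(energies, threshold):
    """Tentative decision rule: a band is 'occupied' when its energy
    clears a threshold (an illustrative stand-in for the real algorithm)."""
    return [i for i, e in enumerate(energies) if e > threshold]

fs = 8000
t = np.arange(fs) / fs
segment = np.sin(2 * np.pi * 440 * t)          # a single A4-like tone
edges = [0, 300, 600, 1200, 2400, 4000]        # band edges in Hz
energies = band_energies(segment, fs, edges)
bands = occupied_bands(energies, threshold=1.0)  # only band 1 (300-600 Hz)
```

In the described installation, such tentative per-segment decisions would be stored and reconciled with later segments before notes and their timings are finally emitted.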
Audiovisual capture and sharing framework with coordinated, user-selectable audio and video effects filters
Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
INTELLIGENT ACCOMPANIMENT GENERATING SYSTEM AND METHOD OF ASSISTING A USER TO PLAY AN INSTRUMENT IN A SYSTEM
The intelligent accompaniment generating system includes an input module, an analysis module, a generation module, and musical equipment. The input module is configured to receive a musical pattern signal derived from a raw signal. The analysis module is configured to analyze the musical pattern signal to extract a set of audio features, wherein the input module is configured to transmit the musical pattern signal to the analysis module. The generation module is configured to obtain playing assistance information having an accompaniment pattern from the analysis module, wherein the accompaniment pattern has at least two parts with different onsets between them, and each onset of the at least two parts is generated by an algorithm according to the set of audio features. The musical equipment includes a digital amplifier configured to output an accompaniment signal according to the accompaniment pattern.
Dynamically adapted pitch correction based on audio input
Systems and methods for adjusting pitch of an audio signal include detecting input notes in the audio signal, mapping the input notes to corresponding output notes, each output note having an associated upper note boundary and lower note boundary, and modifying at least one of the upper note boundary and the lower note boundary of at least one output note in response to previously received input notes. Pitch of the input notes may be shifted to match an associated pitch of corresponding output notes. Delay of the pitch shifting process may be dynamically adjusted based on detected stability of the input notes.
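The note-mapping step can be sketched with fractional MIDI numbers; the boundary-adjustment logic from the abstract is reduced here to a fixed half-step window per output note, which is an assumption (the claimed system modifies boundaries dynamically based on previously received notes).

```python
def nearest_scale_note(midi_pitch, scale_notes):
    """Map a detected input note (fractional MIDI number) to the closest
    allowed output note."""
    return min(scale_notes, key=lambda n: abs(n - midi_pitch))

def correct_pitch(midi_pitch, scale_notes, lower=0.5, upper=0.5):
    """Snap the input pitch to its mapped output note only while it lies
    within that note's lower/upper boundaries; otherwise leave it alone."""
    target = nearest_scale_note(midi_pitch, scale_notes)
    if target - lower <= midi_pitch <= target + upper:
        return target
    return midi_pitch

c_major = [60, 62, 64, 65, 67, 69, 71, 72]     # C4..C5 in MIDI numbers
snapped = correct_pitch(64.3, c_major)          # slightly sharp E4 snaps to 64
between = correct_pitch(63.0, c_major)          # outside both boundaries: unchanged
```

Widening or narrowing `lower`/`upper` per output note, as the abstract describes, changes how aggressively ambiguous pitches like 63.0 get captured by a neighbor.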
INFORMATION PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND PROGRAM
[Object] To propose an image processing method, an image processing apparatus, and a program capable of exciting the emotions of a viewer more effectively. [Solution] An information processing method including: analyzing a beat of input music; extracting a plurality of unit images from an input image; and generating, by a processor, editing information for switching the extracted unit images depending on the analyzed beat.
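The editing information amounts to a schedule pairing each analyzed beat with a unit image. A minimal sketch, assuming a simple round-robin clip-selection policy (the abstract does not specify how unit images are chosen per beat):

```python
def beat_edit_list(beat_times, clip_ids):
    """Generate editing information that switches to a unit image on each
    analyzed beat, cycling through the extracted clips in order."""
    return [(t, clip_ids[i % len(clip_ids)]) for i, t in enumerate(beat_times)]

beats = [0.0, 0.5, 1.0, 1.5, 2.0]              # beat times at 120 BPM
edits = beat_edit_list(beats, ["clipA", "clipB", "clipC"])
# each pair gives the time at which the named unit image starts showing
```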
METHOD, DEVICE AND SOFTWARE FOR APPLYING AN AUDIO EFFECT
The present invention provides a method for processing music audio data, comprising the steps of providing input audio data representing a first piece of music containing a mixture of predetermined musical timbres, decomposing the input audio data to generate at least a first audio track representing a first musical timbre selected from the predetermined musical timbres, and a second audio track representing a second musical timbre selected from the predetermined musical timbres, applying a predetermined first audio effect to the first audio track, applying no audio effect or a predetermined second audio effect, which is different from the first audio effect, to the second audio track, and obtaining recombined audio data by recombining the first audio track with the second audio track.
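The decompose/effect/recombine pipeline can be sketched as below. The decomposition itself (e.g. a source-separation model isolating timbres) is out of scope here, so the stems are taken as given; the stem signals and the gain effect are illustrative stand-ins.

```python
import numpy as np

def apply_effect_per_stem(stems, effects):
    """Apply a (possibly different) effect to each decomposed track,
    passing a track through unchanged where its effect is None, then
    recombine the tracks by summation."""
    processed = [fx(s) if fx is not None else s for s, fx in zip(stems, effects)]
    return np.sum(processed, axis=0)

fs = 8000
t = np.arange(fs) / fs
vocals = np.sin(2 * np.pi * 440 * t)               # stand-in "first timbre" track
drums = 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))   # stand-in "second timbre" track

gain = lambda x: 0.8 * x                           # first audio effect: simple gain
mix = apply_effect_per_stem([vocals, drums], [gain, None])  # no effect on drums
```

Real effect chains (reverb, distortion, filtering) would slot in as the per-stem callables without changing the recombination step.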