Patent classifications
G10H1/38
Method for offsetting pitch data in an audio file
A method is provided of aligning pitch data with audio data in a computing device, the method comprising the computer implemented steps of compiling a plurality of pitch data related to an audio file, each pitch data including information about at least one distinct pitch which is capable of being used by an electronic device to emulate said pitch, said plurality of pitch data compiled in a chronological order relating to said audio file, and arranging the compiled pitch data with the corresponding audio file containing audio data having at least one chord change, wherein the pitch data is offset from the audio data by a predetermined time margin. Further, an audio file is provided, stored on a non-transitory computer readable medium, having pitch data corresponding to and offset from chord changes in audio data by a predetermined time margin advance, and a non-transitory computer readable medium is provided, having stored thereon a set of computer executable instructions.
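The claimed arrangement — pitch data compiled chronologically and shifted ahead of the audio's chord changes by a predetermined margin — can be illustrated with a minimal sketch. All names and the 50 ms margin are illustrative assumptions, not values from the patent:

```python
def offset_pitch_data(pitch_events, margin_s=0.05):
    """Shift each (time, pitch) event earlier by a fixed time margin,
    so the pitch data leads the corresponding chord change in the
    audio. Events are sorted chronologically; times clamp at zero."""
    return [(max(0.0, t - margin_s), p) for t, p in sorted(pitch_events)]

# chronologically compiled pitch data: (time in seconds, MIDI pitch)
events = [(1.00, 60), (2.50, 64), (4.00, 67)]
advanced = offset_pitch_data(events, margin_s=0.05)
```

The advance gives a consuming electronic device time to prepare to emulate the pitch before the chord change is heard.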
Audiovisual capture and sharing framework with coordinated, user-selectable audio and video effects filters
Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
Real-time speech to singing conversion
A method of converting a frame of a voice sample to a singing frame includes obtaining a pitch value of the frame; obtaining formant information of the frame using the pitch value; obtaining aperiodicity information of the frame using the pitch value; obtaining a tonic pitch and chord pitches; using the formant information, the aperiodicity information, the tonic pitch, and the chord pitches to obtain the singing frame; and outputting or saving the singing frame.
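One step of such a conversion — choosing a singing target pitch for a frame from the tonic pitch and chord pitches — can be sketched as follows. The function name and the just-intonation chord ratios are illustrative assumptions, not the patent's method:

```python
def nearest_chord_pitch(frame_pitch_hz, tonic_hz, chord_ratios=(1.0, 5/4, 3/2)):
    """Snap a frame's detected pitch to the nearest chord pitch
    (tonic plus chord tones given as frequency ratios), searching
    a few octaves above and below the tonic."""
    candidates = [tonic_hz * r * 2 ** k
                  for r in chord_ratios for k in range(-2, 3)]
    return min(candidates, key=lambda f: abs(f - frame_pitch_hz))
```

The selected pitch would then drive resynthesis of the frame together with its formant and aperiodicity information.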
Motor noise masking
A sound synthesis system is provided with a loudspeaker to project sound indicative of synthesized motor sound in response to receiving a synthesized sound (SS) signal, and a processor. The processor is programmed to: estimate motor sound based on a sensor signal indicative of sound present within a passenger compartment; identify a dominant motor harmonic of the motor sound with an amplitude and a frequency; determine an enrichment value of the motor sound; determine if the motor sound is unenriched based on a comparison of the enrichment value to an enrichment threshold value; generate at least one additional motor harmonic with a first frequency that is different than the frequency of the dominant motor harmonic in response to the motor sound being unenriched; and provide the SS signal to the loudspeaker, wherein the SS signal is indicative of the at least one additional motor harmonic.
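The enrichment test and harmonic generation can be sketched in a few lines. The enrichment metric (non-dominant energy relative to the dominant harmonic's amplitude) and the choice of twice the dominant frequency are illustrative assumptions, not the claimed formulas:

```python
def synthesize_additional_harmonic(dominant_freq_hz, dominant_amp,
                                   other_harmonic_amps,
                                   enrichment_threshold=0.3):
    """If the motor sound is unenriched (enrichment value below the
    threshold), return an additional harmonic as (frequency, amplitude)
    at a frequency different from the dominant harmonic; else None."""
    enrichment = (sum(other_harmonic_amps) / dominant_amp
                  if dominant_amp else 0.0)
    if enrichment < enrichment_threshold:
        return (2.0 * dominant_freq_hz, 0.5 * dominant_amp)
    return None
```

The returned harmonic would be encoded in the SS signal and projected by the loudspeaker to mask or enrich the motor sound.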
Enhanced visualization of areas of interest in image data
A method for generating visual enhancement of areas of interest in images includes receiving data representing a plurality of images in a sequence of images; analyzing the plurality of images to identify respective three dimensional (3D) locations of one or more areas of interest in the plurality of images; visually enhancing one or more of the identified areas of interest in the plurality of images in the sequence of images; and communicating the visually enhanced image data to a display device to be displayed.
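The enhancement step can be sketched for a single grayscale frame; the rectangular boxes here are illustrative stand-ins for the identified 3D locations projected into the image plane, and the intensity-gain scheme is an assumption, not the patent's technique:

```python
def enhance_areas(image, areas, gain=1.5):
    """Visually enhance areas of interest by scaling pixel intensity
    inside each (row0, col0, row1, col1) box, clamped to 255.
    'image' is a list of rows of grayscale values; the input image
    is left unmodified."""
    out = [row[:] for row in image]
    for r0, c0, r1, c1 in areas:
        for r in range(r0, r1):
            for c in range(c0, c1):
                out[r][c] = min(255, int(out[r][c] * gain))
    return out
```

Applied per frame across the sequence, the enhanced frames would then be communicated to the display device.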
Communicating data with audible harmonies
In some implementations, a process for communicating data over audio is performed. In one aspect, one or more ordered sequences of audio attribute values that are selected based on a musical relationship between the audio attribute values and associated with data values may be played by a first device and received by a second device. This technique may allow for sound-based communications to take place between devices that listeners may find pleasant.
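A minimal sketch of such an encoding maps each data value to a degree of a major scale, so any transmitted sequence keeps a consonant musical relationship; the receiver inverts the mapping. The scale-degree scheme is an illustrative assumption, not the disclosed protocol:

```python
MAJOR_SCALE_SEMITONES = [0, 2, 4, 5, 7, 9, 11]

def data_to_pitch_sequence(data_values, base_midi=60):
    """Encode each data value as (octave, scale degree) of a major
    scale rooted at base_midi, yielding an ordered MIDI pitch sequence."""
    seq = []
    for v in data_values:
        degree, octave = v % 7, v // 7
        seq.append(base_midi + 12 * octave + MAJOR_SCALE_SEMITONES[degree])
    return seq

def pitch_sequence_to_data(seq, base_midi=60):
    """Decode a received pitch sequence by inverting the mapping."""
    inv = {s: d for d, s in enumerate(MAJOR_SCALE_SEMITONES)}
    return [(p - base_midi) // 12 * 7 + inv[(p - base_midi) % 12]
            for p in seq]
```

Restricting tones to one scale is one simple way to make device-to-device audio signaling pleasant for listeners.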
Intelligent accompaniment generating system and method of assisting a user to play an instrument in a system
The intelligent accompaniment generating system includes an input module, an analysis module, a generation module and a musical equipment. The input module is configured to receive a musical pattern signal derived from a raw signal. The analysis module is configured to analyze the musical pattern signal to extract a set of audio features, wherein the input module is configured to transmit the musical pattern signal to the analysis module. The generation module is configured to obtain playing assistance information having an accompaniment pattern from the analysis module, wherein the accompaniment pattern has at least two parts having different onsets therebetween, and each onset of the at least two parts is generated by an algorithm according to the set of audio features. The musical equipment includes a digital amplifier configured to output an accompaniment signal according to the accompaniment pattern.
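The requirement that the parts have different onsets can be sketched from a single extracted feature, tempo; the downbeat/off-beat split below is an illustrative algorithm, not the one claimed:

```python
def generate_part_onsets(tempo_bpm, bars=1, beats_per_bar=4):
    """Generate onset times (seconds) for two accompaniment parts
    from an extracted tempo feature: part A on the downbeats,
    part B on the off-beats, so the parts never share an onset."""
    beat = 60.0 / tempo_bpm
    total = bars * beats_per_bar
    part_a = [i * beat for i in range(total)]
    part_b = [(i + 0.5) * beat for i in range(total)]
    return part_a, part_b
```

The resulting accompaniment pattern would be rendered through the digital amplifier as the accompaniment signal.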
Dynamically adapted pitch correction based on audio input
Systems and methods for adjusting pitch of an audio signal include detecting input notes in the audio signal, mapping the input notes to corresponding output notes, each output note having an associated upper note boundary and lower note boundary, and modifying at least one of the upper note boundary and the lower note boundary of at least one output note in response to previously received input notes. Pitch of the input notes may be shifted to match an associated pitch of corresponding output notes. Delay of the pitch shifting process may be dynamically adjusted based on detected stability of the input notes.
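The boundary-modification idea can be sketched with fractional MIDI pitches: each output note captures inputs within a boundary window, and the window widens toward a note the singer has recently held. The widening rule and window width are illustrative assumptions:

```python
def correct_pitch(input_midi, history, width=0.3):
    """Map an input pitch (fractional MIDI) to the nearest semitone if
    it falls inside that note's boundaries; widen the boundaries when
    recently received input centered on the same note, so a held note
    tolerates more drift. Out-of-boundary input passes through."""
    target = round(input_midi)
    lower, upper = target - width, target + width
    if history and round(history[-1]) == target:
        lower -= 0.25  # adapt boundaries based on previous input notes
        upper += 0.25
    if lower <= input_midi <= upper:
        return float(target)
    return input_midi
```

With an empty history, 60.4 lies outside the 60 +/- 0.3 window and is left alone; after a frame near 60 it falls inside the widened window and snaps to 60.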