Patent classifications
G10H2210/051
SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO
A device is provided for capturing vibrations produced by an object such as a musical instrument, for example a cymbal of a drum kit. The device comprises a detectable element, such as a ferromagnetic element (e.g. a metal shim), and a sensor spaced apart from and located relative to the musical instrument. The detectable element is located between the sensor and the musical instrument. When the musical instrument vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the instrument.
AUTOMATIC CONVERSION OF SPEECH INTO SONG, RAP OR OTHER AUDIBLE EXPRESSION HAVING TARGET METER OR RHYTHM
Captured vocals may be automatically transformed using advanced digital signal processing techniques that provide captivating applications, and even purpose-built devices, in which mere novice user-musicians may generate, audibly render and share musical performances. In some cases, the automated transformations allow spoken vocals to be segmented, arranged, temporally aligned with a target rhythm, meter or accompanying backing tracks and pitch corrected in accord with a score or note sequence. Speech-to-song music applications are one such example. In some cases, spoken vocals may be transformed in accord with musical genres such as rap using automated segmentation and temporal alignment techniques, often without pitch correction. Such applications, which may employ different signal processing and different automated transformations, may nonetheless be understood as speech-to-rap variations on the theme.
CONTROLLER FOR REAL-TIME VISUAL DISPLAY OF MUSIC
A controller for real-time visual display of music includes a music analysis module and a display control module. The music analysis module receives an audio input, determines human perceived musical structures, human felt affect and emotion as a function of the audio input, and outputs a signal corresponding to the determined structure, affect and emotion. The display control module is operatively coupled to the music analysis module and receives the signal and controls a visual display as a function thereof to express the determined musical structure, affect and emotion in a visual manner.
COMPUTATIONALLY-ASSISTED MUSICAL SEQUENCING AND/OR COMPOSITION TECHNIQUES FOR SOCIAL MUSIC CHALLENGE OR COMPETITION
In an application that manipulates audio (or audiovisual) content, automated music creation technologies may be employed to generate new musical content, using digital signal processing software hosted on handheld and/or server (or cloud-based) compute platforms to intelligently process and combine a set of audio content captured and submitted by users of modern mobile phones or other handheld compute platforms. The user-submitted recordings may contain speech, singing, musical instruments, or a wide variety of other sound sources, and the recordings may optionally be preprocessed by the handheld devices prior to submission.
MUSIC CONTEXT SYSTEM AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT HAVING REGARD TO MUSICAL TIMING
To address discrepancies in musical timing signatures, the invention assesses whether a recorded displacement, expressed in terms of beats and fractions, between exit and entry points for a potential musical splice or cut permits a seamless splicing of different musical sections. Assessment is achieved by establishing a third time base of pulses having a length dependent upon a lowest common multiple of the note fractions within the respective bars of the different sections, with the bars of the respective sections then partitioned into an equal number of fixed-length pulses. A coefficient, defined as the ratio between the pulse counts of the different sections, aligns the differing time signatures. The coefficient identifies corresponding locations of a cut point, related to a suitable anacrusis, in terms of an aligned bar, beat, quaver and fraction in each of the differing time signatures, ensuring that an anacrusis in one time signature is interchangeable with the others.
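The pulse-grid arithmetic the abstract describes can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: a shared pulse resolution is taken as the least common multiple of the two time signatures' note-value denominators, each bar is partitioned into an integer number of equal pulses, and the coefficient is the ratio of the two pulse counts.

```python
from math import lcm

def pulses_per_bar(beats: int, unit: int, resolution: int) -> int:
    """Number of fixed-length pulses in one bar of a beats/unit
    time signature, where `resolution` is pulses per whole note."""
    assert resolution % unit == 0  # guaranteed when resolution is an LCM of units
    return beats * (resolution // unit)

def splice_coefficient(sig_a: tuple, sig_b: tuple) -> float:
    """Ratio aligning bar positions across two time signatures.

    The shared pulse length derives from the lowest common multiple
    of the note-value denominators (the 'third time base'), so both
    bars divide into an equal number of fixed-length pulses.
    """
    resolution = lcm(sig_a[1], sig_b[1])
    pa = pulses_per_bar(sig_a[0], sig_a[1], resolution)
    pb = pulses_per_bar(sig_b[0], sig_b[1], resolution)
    return pa / pb
```

For 4/4 against 6/8, the shared resolution is 8 pulses per whole note, giving 8 and 6 pulses per bar respectively and a coefficient of 8/6, which maps a cut point's bar/beat/fraction position in one signature onto the other.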
AUTO-GENERATED ACCOMPANIMENT FROM SINGING A MELODY
A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the steps of generating a music accompaniment (23) as a function of the estimated musical key (15b) and tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) and the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
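The tuning and timing controls described above can be sketched in miniature. This is a hypothetical illustration under simplifying assumptions (a major key, pitches as MIDI numbers, a fixed beat subdivision), not the patented method: the tuning control snaps a sung pitch to the nearest scale tone of the estimated key, and the timing control quantizes a note onset to the grid implied by the estimated tempo.

```python
def snap_to_scale(midi_pitch: float, key_root: int) -> int:
    """Tuning control (sketch): move a sung pitch to the nearest
    note of a major scale rooted at key_root (both MIDI numbers)."""
    MAJOR = [0, 2, 4, 5, 7, 9, 11]  # major-scale degrees in semitones
    return min(
        (key_root + deg + 12 * octave
         for octave in range(-2, 9)
         for deg in MAJOR),
        key=lambda note: abs(note - midi_pitch),
    )

def snap_to_grid(onset_s: float, tempo_bpm: float, subdivision: int = 4) -> float:
    """Timing control (sketch): quantize an onset time to the
    nearest beat subdivision implied by the estimated tempo."""
    step = 60.0 / tempo_bpm / subdivision
    return round(onset_s / step) * step
```

For example, a pitch sung 1.3 semitones sharp of C4 in C major snaps to D4, and an onset at 0.26 s against 120 BPM snaps to the sixteenth-note grid point at 0.25 s.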
SYSTEMS AND METHODS FOR RESPONDING TO ELECTRICAL-POWER LOSS IN A DJ MEDIA PLAYER
A DJ media player is provided. The DJ media player includes a display to show audio playback information; a platter used to control audio playback; one or more energy-storing devices; and a processor for responding to DJ media-player electrical-power loss. Responding to electrical-power loss includes storing electrical charge on the energy-storing devices; sensing a disruption in electrical current supplied by a power source to the DJ media player; powering the DJ media player using the electrical charge on the energy-storing devices; and displaying a notification on the DJ media-player display that there was a disruption of power. Responding to electrical-power loss further comprises performing a safe shutdown procedure to avoid file and system corruption.
SYSTEMS AND METHODS FOR SELECTING AN AUDIO TRACK BY PERFORMING A GESTURE ON A TRACK-LIST IMAGE
Systems and methods for selecting an audio track by performing a gesture on a track-list image are provided. The system includes a processor that performs a method including displaying the audio-track list, detecting a contact with the touchscreen display at a location corresponding to the audio track, detecting a continuous movement of the contact in a direction, detecting a length of the continuous movement, and selecting the audio track if the continuous movement has a length longer than a threshold length. The method includes shifting text associated with the audio track based on the length and direction of the continuous movement. The method includes determining that the selection is a command to queue the audio track for playback or add it to a preparation track list. This determination may be based on the direction of the continuous movement.
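The length-and-direction logic described above can be sketched as a small decision function. This is a hypothetical illustration, not the patented system: the names, the pixel threshold, and the mapping of swipe direction to "queue for playback" versus "add to preparation list" are assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Swipe:
    start_x: float  # contact location, pixels
    end_x: float    # location after continuous movement, pixels

def interpret_swipe(swipe: Swipe, threshold: float = 80.0) -> Optional[str]:
    """Map a horizontal swipe on a track row to a command.

    A selection registers only if the continuous movement is longer
    than the threshold; its direction then determines whether the
    track is queued for playback or added to a preparation list.
    """
    dx = swipe.end_x - swipe.start_x
    if abs(dx) <= threshold:
        return None  # movement too short: no selection
    return "queue_for_playback" if dx > 0 else "add_to_prep_list"
```

In use, a 100-pixel rightward swipe would queue the track, a 50-pixel swipe would do nothing, and a long leftward swipe would add the track to the preparation list.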
SYSTEMS AND METHODS FOR MUSICAL TEMPO DETECTION
Systems and methods for musical tempo detection are provided. The method includes detecting peaks and their locations in a waveform of a digital audio track, and dividing the track into first measures with a first-measure length based on a first estimated tempo. The method includes determining distances between a beginning of the first measures and each peak location, and determining a first number of peaks having the same distance from the beginning of the first measures. The method includes dividing the track into second measures with a second-measure length based on a second estimated tempo; determining distances between a beginning of the second measures and each peak location; and determining a second number of peaks having the same distance from the beginning of each of the second measures. The method includes estimating an accurate tempo by comparing the first number of peaks and the second number of peaks.
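The comparison of candidate tempos can be sketched as follows. This is a hypothetical illustration, not the patented method: for each candidate tempo, every peak location is folded into a single measure, the offsets from the measure start are binned, and the candidate whose grid collects the most peaks at one shared offset wins. The sample rate, bin count, and beats-per-measure values are assumptions.

```python
from collections import Counter

def tempo_alignment_score(peak_locs, tempo_bpm, sample_rate=44100,
                          beats_per_measure=4, bins=64):
    """Score a candidate tempo: fold each peak location (in samples)
    into one measure and count how many peaks share the same binned
    distance from the measure start."""
    measure_len = beats_per_measure * 60.0 / tempo_bpm * sample_rate
    offsets = Counter(
        int((loc % measure_len) / measure_len * bins) for loc in peak_locs
    )
    return max(offsets.values())

def pick_tempo(peak_locs, candidates, **kw):
    """Choose the candidate whose measure grid aligns the most peaks."""
    return max(candidates,
               key=lambda bpm: tempo_alignment_score(peak_locs, bpm, **kw))
```

With peaks spaced exactly one beat apart at 120 BPM, the 120 BPM grid stacks four peaks onto each beat offset, while mismatched candidates scatter the offsets across many bins, so 120 BPM scores highest.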
BEAT DECOMPOSITION TO FACILITATE AUTOMATIC VIDEO EDITING
The disclosed technology relates to a process for detecting musical artifacts within a musical composition. The detection of musical artifacts is based on analyzing the energy and frequency of the digital signal of the musical composition. The identified musical artifacts may then be used in connection with audio-video editing.
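The energy side of such an analysis can be sketched with a crude frame-energy onset detector. This is a hypothetical illustration, not the patented process: the frame size, threshold ratio, and the decision rule (flag a frame whose energy jumps past a multiple of the previous frame's) are assumptions for the sketch.

```python
def detect_artifacts(signal, frame_size=1024, threshold=1.5):
    """Return sample indices of frames whose short-time energy
    exceeds `threshold` times the previous frame's energy --
    a simple energy-based onset/artifact detector."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, frame_size)]
    energies = [sum(x * x for x in f) for f in frames]
    hits = []
    for i in range(1, len(energies)):
        # small constant avoids flagging silence-to-silence transitions
        if energies[i] > threshold * (energies[i - 1] + 1e-12):
            hits.append(i * frame_size)  # sample index where the jump begins
    return hits
```

A burst of amplitude after silence is flagged at the frame where the energy jump occurs; the decay back to silence is not flagged, since the rule only fires on energy increases.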