Patent classifications
G10H2210/051
CONTEXT-DEPENDENT PIANO MUSIC TRANSCRIPTION WITH CONVOLUTIONAL SPARSE CODING
The present disclosure presents a novel approach to automatic transcription of piano music in a context-dependent setting. Embodiments described herein may employ an efficient algorithm for convolutional sparse coding to approximate a music waveform as a summation of piano note waveforms convolved with associated temporal activations. The piano note waveforms may be pre-recorded for the particular piano to be transcribed and may optionally be pre-recorded in the specific environment where the performance will take place. During transcription, the note waveforms may be fixed, and the associated temporal activations may be estimated and post-processed to obtain the pitch and onset transcription. Experiments have shown that embodiments of the disclosure significantly outperform state-of-the-art music transcription methods trained in the same context-dependent setting, in both transcription accuracy and time precision, across various scenarios including synthetic, anechoic, noisy, and reverberant environments.
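The signal model described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the model (signal ≈ sum of note templates convolved with sparse activations), not the patented algorithm; the thresholding post-process and all values are assumptions.

```python
import numpy as np

def reconstruct(templates, activations):
    """Sum of each note template convolved with its sparse activation.

    templates:   list of 1-D arrays (pre-recorded note waveforms, held fixed).
    activations: list of 1-D arrays, mostly zero; nonzero entries mark
                 note onsets and their amplitudes.
    """
    length = len(activations[0]) + max(len(t) for t in templates) - 1
    signal = np.zeros(length)
    for d, x in zip(templates, activations):
        y = np.convolve(x, d)          # 'full' convolution of activation with template
        signal[:len(y)] += y
    return signal

def onsets_from_activations(activations, threshold=0.1):
    """Post-process activations into (note_index, onset_sample) pairs."""
    return [(k, int(i))
            for k, x in enumerate(activations)
            for i in np.flatnonzero(np.abs(x) > threshold)]
```

With two toy "notes" and one activation spike each, `reconstruct` rebuilds the mixture and `onsets_from_activations` recovers which note fired when.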
Rhythm Point Detection Method and Apparatus and Electronic Device
The present disclosure provides a rhythm point detection method and apparatus and an electronic device, and relates to the technical field of music analysis. The method includes: acquiring an audio signal to be detected and generating an audio feature curve from the audio signal; determining a music style of the audio signal; determining a detection peak threshold and a detection frame width threshold according to the music style; and determining a rhythm point of the audio feature curve according to the detection peak threshold and the detection frame width threshold.
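One plausible reading of the two thresholds is peak height plus minimum frame spacing between rhythm points. The sketch below is a hypothetical interpretation of the abstract; the style table and all numeric values are illustrative only.

```python
def detect_rhythm_points(feature_curve, peak_threshold, frame_width):
    """Return indices of local maxima on the feature curve that exceed
    peak_threshold and lie at least frame_width frames apart."""
    points = []
    for i in range(1, len(feature_curve) - 1):
        v = feature_curve[i]
        if v <= peak_threshold:
            continue                                  # below peak threshold
        if not (feature_curve[i - 1] < v >= feature_curve[i + 1]):
            continue                                  # not a local maximum
        if points and i - points[-1] < frame_width:
            continue                                  # too close to previous point
        points.append(i)
    return points

# Style-dependent threshold lookup (illustrative values, not from the patent).
STYLE_THRESHOLDS = {
    "rock":      {"peak_threshold": 0.6, "frame_width": 4},
    "classical": {"peak_threshold": 0.3, "frame_width": 8},
}
```

Tying the thresholds to a detected style lets a fast, percussive track use a higher peak bar and tighter spacing than a slow, legato one.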
METHOD, APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE MEDIUM FOR DISPLAYING SPECIAL EFFECTS
The disclosure provides a method, an apparatus, an electronic device, and a computer-readable medium for displaying special effects, and relates to the technical field of special-effect display. The method includes: obtaining musical features of music played in a special effect display interface; and displaying special effects on the special effect display interface based on the musical features. In embodiments of the present disclosure, by obtaining musical features of the music and displaying special effects generated from those features on the special effect display interface, changes in the displayed special effects are associated with the musical features, the display of the special effects becomes more varied, and the special effects are combined with the musical features, thereby increasing the user's immersive experience.
METHODS, INFORMATION PROCESSING DEVICE, PERFORMANCE DATA DISPLAY SYSTEM, AND STORAGE MEDIA FOR ELECTRONIC MUSICAL INSTRUMENT
A method performed by one or more processors in an information processing device for an electronic musical instrument includes, via the one or more processors: receiving performance data generated by a user performance of the electronic musical instrument; extracting time-series characteristics of a sequence of notes from the performance data; detecting a performance technique from the extracted characteristics; and generating an image data reflecting the detected performance technique and outputting the generated image data.
CONTENT CONTROL DEVICE AND STORAGE MEDIUM
A content control device includes: a plurality of controls to which a plurality of parameters for controlling properties of content containing at least one of sound and video are respectively assigned, each of the plurality of controls outputting a first indicated value in accordance with an operation amount of the control; and a processor configured to: previously create setting information used to determine respective values of the plurality of parameters in accordance with a second indicated value; determine the values of the plurality of parameters in accordance with the second indicated value and the setting information; and revise each of the determined parameter values in accordance with the first indicated value output for the control assigned to that parameter.
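The determine-then-revise flow resembles a "macro control" pattern: one shared value sets every parameter via stored setting information, and each individual control then adjusts its own parameter. The sketch below is a hypothetical illustration; the linear-range encoding and scaling revision are assumptions, not the claimed implementation.

```python
def apply_setting(setting_info, second_value):
    """Determine each parameter's value from the second indicated value,
    using previously created setting information (here encoded as a
    per-parameter (low, high) linear range, an illustrative choice)."""
    return {name: lo + (hi - lo) * second_value
            for name, (lo, hi) in setting_info.items()}

def revise(values, first_values):
    """Revise each determined value by the first indicated value output
    from the control assigned to that parameter (here: simple scaling)."""
    return {name: v * first_values.get(name, 1.0)
            for name, v in values.items()}
```

A single second indicated value of 0.5 thus sets both parameters at once, after which moving only the "volume" control revises just that parameter.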
Systems and methods for selecting an audio track by performing a gesture on a track-list image
Systems and methods for selecting an audio track by performing a gesture on a track-list image are provided. The system includes a processor that performs a method including displaying the audio-track list, detecting a contact with the touchscreen display at a location corresponding to the audio track, detecting a continuous movement of the contact in a direction, detecting a length of the continuous movement, and selecting the audio track if the continuous movement has a length longer than a threshold length. The method includes shifting text associated with the audio track based on the length and direction of the continuous movement. The method includes determining that the selection is a command to queue the audio track for playback or add it to a preparation track list. This determination may be based on the direction of the continuous movement.
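The gesture logic reduces to a length test plus a direction test. The following is a minimal sketch under assumed conventions (horizontal drag in pixels, right = queue for playback, left = add to the preparation track list); the mapping and threshold are illustrative, not taken from the claims.

```python
def classify_gesture(start_x, end_x, threshold=80):
    """Select the track only if the continuous movement is longer than
    the threshold length; the movement's direction chooses the command."""
    length = end_x - start_x
    if abs(length) <= threshold:
        return None                       # too short: no selection made
    return "queue" if length > 0 else "preparation"
```

Shifting the track's text by the current drag length (as the abstract describes) would give the user live feedback before the threshold is crossed.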
Audio effect utilizing series of waveform reversals
The invention is a process for creating an audio effect in the context of audio editing software. The effect is created by applying a series or sequence of reversal instances across a sample or waveform in time.
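A series of reversal instances can be read as segment-wise reversal of the waveform. This sketch assumes fixed-length segments, which is one simple realization of "a series of reversals across a sample in time", not necessarily the patented one.

```python
def reversal_effect(samples, segment_len):
    """Apply a sequence of reversal instances across the waveform:
    split it into fixed-length segments and reverse each segment."""
    out = []
    for i in range(0, len(samples), segment_len):
        out.extend(reversed(samples[i:i + segment_len]))
    return out
```

The final, shorter segment is reversed as-is, so the output always has the same length as the input.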
MUSIC CONTEXT SYSTEM AUDIO TRACK STRUCTURE AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT
A system is described that permits identified musical phrases or themes to be synchronized with and linked to changing real-world events. The achieved synchronization includes a seamless musical transition—achieved using a timing offset, such as relative advancement of a significant musical “onset,” that is inserted to align with a pre-existing but identified music signature, beat or timebase—between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The system operates to augment an overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment, e.g. during different phases of a cardio workout in a step class the music rate and intensity increase during sprint periods and decrease during recovery periods.
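The timing offset that aligns a transition with the identified beat grid can be sketched as finding the gap to the next beat. This is a simplified reading of the abstract's "timing offset" idea, with illustrative beat times; the actual system's alignment logic is not specified here.

```python
def onset_advance(transition_time, beat_times):
    """Offset (in seconds) that delays a musical onset so it lands on the
    next beat of the pre-identified timebase at or after transition_time."""
    for b in beat_times:
        if b >= transition_time:
            return b - transition_time
    return 0.0  # no later beat known: start immediately
```

For a 120 BPM grid (beats every 0.5 s), a transition requested at 1.3 s would be held 0.2 s so the new theme enters on the beat.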
Media Content Identification on Mobile Devices
A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
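Global onset detection for frame alignment can be approximated by looking for the first large jump in frame energy. The sketch below is a stand-in for the abstract's onset detector, not its actual feature pipeline; the energy-ratio criterion and its value are assumptions.

```python
import numpy as np

def global_onset(samples, frame_len, ratio=2.0):
    """Index of the first frame whose energy exceeds `ratio` times the
    previous frame's energy; usable to align input audio frames to a
    common reference point before fingerprinting."""
    n = len(samples) // frame_len
    energies = [float(np.sum(np.square(samples[i * frame_len:(i + 1) * frame_len])))
                for i in range(n)]
    for i in range(1, n):
        if energies[i] > ratio * max(energies[i - 1], 1e-12):
            return i
    return 0  # no clear onset: fall back to the first frame
```

Aligning both the query and reference fingerprints to such an onset makes the frame-level signatures comparable despite arbitrary capture start times.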
SYSTEMS AND METHODS FOR PROVIDING AUDIO-FILE LOOP-PLAYBACK FUNCTIONALITY
Systems and methods for providing audio-file loop-playback functionality are provided. The system includes a processor that performs a method including setting a playback loop start-point based on a first selection of a button; setting a loop end-point, associating a loop with an audio file, and entering into the loop based on a second selection of the button; and exiting the loop based on a third selection of the button. Associating the loop with the audio file includes adding metadata to the audio file. The metadata associates the loop with a button. The method includes reentering the loop based on a fourth selection of the button and exiting the loop based on a fifth selection of the button.
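The claimed press sequence is a small state machine: the first press sets the start-point, the second sets the end-point and enters the loop, and later presses alternately exit and re-enter. The class below is a sketch of that sequence; the metadata handling described in the abstract is omitted, and the playhead argument is an illustrative stand-in for the playback position.

```python
class LoopButton:
    """State machine for a single loop button."""
    def __init__(self):
        self.presses = 0
        self.start = None
        self.end = None
        self.looping = False

    def press(self, playhead):
        """Handle one button press at the current playback position."""
        self.presses += 1
        if self.presses == 1:
            self.start = playhead            # 1st press: set loop start-point
        elif self.presses == 2:
            self.end = playhead              # 2nd press: set end-point, enter loop
            self.looping = True
        else:
            self.looping = not self.looping  # later presses: exit / re-enter
        return self.looping
```

Storing `start` and `end` alongside a button identifier in the file's metadata (as the abstract describes) would let the same loop be re-entered in a later session.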