Patent classifications
G10H2220/355
Dynamic beat optimization
Aspects of the present invention provide an approach for dynamically optimizing a beat. In an embodiment, a current movement rate and biometric data for each user in a group performing a physical activity are collected. An upcoming movement rate for each user is predicted based on the collected current movement rates and biometric data. Music having an optimized beat is then generated based on a lowest upcoming movement rate among the predicted upcoming movement rates.
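As a rough illustration of that flow, the sketch below collects per-user rates and biometrics, predicts each user's upcoming rate, and targets the slowest prediction. The helper names (predict_rate, optimized_beat_for_group) and the fatigue heuristic are illustrative assumptions, not from the patent.

```python
# Minimal sketch of the claimed flow: collect rates, predict, and target
# the slowest predicted mover. All names and formulas are illustrative.

def predict_rate(current_rate_spm, heart_rate_bpm):
    """Toy predictor: assume a high heart rate signals imminent slowdown."""
    fatigue = max(0.0, (heart_rate_bpm - 160) / 100.0)
    return current_rate_spm * (1.0 - fatigue)

def optimized_beat_for_group(group):
    """group: list of (current_rate_spm, heart_rate_bpm) per user."""
    predictions = [predict_rate(rate, hr) for rate, hr in group]
    # Generate music whose beat matches the lowest predicted movement rate,
    # so no member of the group is pushed beyond their predicted pace.
    return min(predictions)

if __name__ == "__main__":
    runners = [(170, 150), (165, 185), (175, 140)]  # steps/min, bpm
    print(f"Target beat: {optimized_beat_for_group(runners):.0f} BPM")
```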
METHOD FOR EMBEDDING AND EXECUTING AUDIO SEMANTICS
Aspects of the subject disclosure may include, for example, a device that includes a processing system having a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations. The operations include determining parameters for adapting audio in content that the device renders, wherein the parameters are based on semantic metadata embedded in the content; adapting the audio in the content based on the parameters; and rendering the content, as adapted by the parameters, to represent a semantic in the semantic metadata. Other embodiments are disclosed.
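The abstract does not specify a metadata format, so the following sketch assumes a JSON-style semantic tag and a hypothetical mapping from semantics to device-specific audio parameters:

```python
# Illustrative sketch only: the patent does not specify a metadata format,
# so a JSON side-channel and these parameter names are assumed here.
import json

def parameters_for_device(semantic_metadata: dict, device_profile: dict) -> dict:
    """Map a semantic tag in the content to audio parameters for this device."""
    semantic = semantic_metadata.get("semantic", "neutral")
    # Hypothetical mapping: a 'dialogue' semantic boosts speech intelligibility
    # on single-speaker devices; 'ambience' widens the mix on stereo setups.
    if semantic == "dialogue" and device_profile["speakers"] == 1:
        return {"center_boost_db": 6, "dynamic_range": "compressed"}
    if semantic == "ambience" and device_profile["speakers"] >= 2:
        return {"center_boost_db": 0, "dynamic_range": "full", "width": 1.2}
    return {"center_boost_db": 0, "dynamic_range": "full"}

metadata = json.loads('{"semantic": "dialogue"}')
print(parameters_for_device(metadata, {"speakers": 1}))
```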
Computationally efficient language based user interface event sound selection
A computer user interface (UI) is capable of generating a sound when a predetermined event occurs. That sound may possess at least some characteristics of a predominant natural language associated with the user and/or the location of the computer implementing the UI, which enables the user to assimilate the sound quickly. Because the user quickly assimilates the sound, the user is able to respond rapidly to the predetermined event, at times using the computer UI, which reduces the undesirable memory use, processor use, and/or battery drain associated with a computing device that implements the UI.
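A minimal sketch of locale-driven event-sound selection, assuming a hypothetical per-language sound inventory (the patent does not disclose one):

```python
# Sketch of locale-driven event-sound selection; the sound inventory and
# locale detection shown here are assumptions, not the patent's method.
import locale

EVENT_SOUNDS = {
    # Hypothetical per-language sound sets sharing phonetic traits of the
    # user's predominant natural language.
    "en": {"error": "error_en.wav", "notify": "notify_en.wav"},
    "ja": {"error": "error_ja.wav", "notify": "notify_ja.wav"},
}

def sound_for_event(event: str) -> str:
    lang = (locale.getlocale()[0] or "en_US").split("_")[0]
    sounds = EVENT_SOUNDS.get(lang, EVENT_SOUNDS["en"])
    return sounds[event]

print(sound_for_event("notify"))
```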
Cloud-based media synchronization system for generating a synchronization interface and performing media synchronization
A computer-implemented media synchronization platform has a receiver that receives, from a user-operated computing device located at a live performance, a device media file, and, from a soundboard positioned at the live performance, a soundboard audio file. The device media file has a device video track and a device audio track; the soundboard audio file has the soundboard audio track. The platform also receives, from the user-operated computing device via a synchronization interface, a realignment of the soundboard audio track. The platform has a processor that synchronizes the soundboard audio track with the device video track, generates dual audio track A/V data based on the synchronization, generates the synchronization interface, and generates a single audio track A/V file based on the realignment. The processor is remotely positioned from the live performance.
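The patent does not state how the tracks are aligned; one plausible approach, sketched below, estimates the offset between the device and soundboard audio by cross-correlation:

```python
# A plausible alignment approach (not stated in the patent): estimate the
# offset between device audio and soundboard audio by cross-correlation,
# then shift the soundboard track to line up with the device video.
import numpy as np

def estimate_offset(device_audio, soundboard_audio, sample_rate):
    """Return how far device_audio lags soundboard_audio, in seconds."""
    corr = np.correlate(device_audio, soundboard_audio, mode="full")
    lag = int(np.argmax(corr)) - (len(soundboard_audio) - 1)
    return lag / sample_rate

sr = 8000
rng = np.random.default_rng(0)
board = rng.standard_normal(2 * sr)                            # soundboard track
device = np.concatenate([np.zeros(sr // 4), board])[: 2 * sr]  # delayed copy
print(f"estimated offset: {estimate_offset(device, board, sr):.3f} s")  # ~0.250
```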
MOBILE SYSTEM ALLOWING ADAPTATION OF THE RUNNER'S CADENCE
A mobile music-listening device that synchronizes music and movement in a personalized way, dedicated to improving the runner's kinematics. Using inertial units connected to a smartphone, the mobile application detects the runner's steps in real time. A dedicated algorithm adapts the pulse of the musical excerpts so as to bring the runner to a suitable cadence, capable of preventing injuries.
A method for synchronizing rhythmic stimulation with biological variability using a Kuramoto model, characterized by a phase oscillator with a coupling term derived from the movement dynamics, with parameters for: the coupling strength; the maximum and minimum frequencies, as fractions of the unmodified song frequency; the maximum difference between the tempo and the target frequency; and the target frequency.
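Reading the claim literally, a minimal numerical sketch of the coupled phase oscillator might look like the following. The parameter names follow the claim; the Euler integration, concrete values, and clamping order are assumptions.

```python
# Kuramoto-style coupling between music tempo and the runner's step phase:
# dphi/dt = 2*pi*omega + K*sin(phi_step - phi_music). Parameter names follow
# the claim; the integration scheme and values are assumed.
import math

def step_music_phase(phi_music, phi_step, song_freq_hz, p, dt=0.01):
    """One Euler step of the coupled phase oscillator."""
    f_min = p["min_fraction"] * song_freq_hz
    f_max = p["max_fraction"] * song_freq_hz
    # Intrinsic tempo: aim for the target frequency, clamped to the allowed
    # band around the unmodified song frequency and to within max_delta_hz
    # of the target.
    omega = min(max(p["target_freq_hz"], f_min), f_max)
    omega = min(max(omega, p["target_freq_hz"] - p["max_delta_hz"]),
                p["target_freq_hz"] + p["max_delta_hz"])
    dphi = 2 * math.pi * omega + p["coupling_strength"] * math.sin(phi_step - phi_music)
    return phi_music + dphi * dt

p = {"coupling_strength": 1.5, "min_fraction": 0.9, "max_fraction": 1.1,
     "target_freq_hz": 2.8, "max_delta_hz": 0.3}
phi_music, phi_step = 0.0, 0.5
for _ in range(300):                      # 3 simulated seconds
    phi_music = step_music_phase(phi_music, phi_step, song_freq_hz=2.9, p=p)
    phi_step += 2 * math.pi * 2.8 * 0.01  # runner stepping at 2.8 Hz
print(f"phase error: {(phi_step - phi_music) % (2 * math.pi):.2f} rad")
```

With these values the music phase locks to the step phase within a few seconds, which is the qualitative behavior the claim describes.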
AUTOMATED GENERATION OF COORDINATED AUDIOVISUAL WORK BASED ON CONTENT CAPTURED FROM GEOGRAPHICALLY DISTRIBUTED PERFORMERS
Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
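A toy sketch of such a visual progression, with illustrative section names and layouts (none are disclosed in the abstract):

```python
# Sketch of a visual progression that maps coded score sections to video
# layouts along the performance timeline; all names are illustrative.

SCORE_SECTIONS = [  # (start_sec, section, vocal_part)
    (0, "verse", "soloist_a"),
    (20, "chorus", "duet"),
    (40, "bridge", "glee_club"),
]

LAYOUTS = {"soloist_a": "fullscreen:a", "duet": "split:a|b",
           "glee_club": "window_pane:all"}

def layout_at(t_sec: float) -> str:
    """Select performer video layout in accord with the coded progression."""
    current = SCORE_SECTIONS[0]
    for entry in SCORE_SECTIONS:
        if entry[0] <= t_sec:
            current = entry
    return LAYOUTS[current[2]]

print(layout_at(25.0))  # -> split:a|b
```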
SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user determined sound database sources.
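The abstract discloses no specific mapping from images to sounds; the toy sketch below assumes one (average brightness selects an entry from a user-chosen sound database) purely for illustration:

```python
# Purely illustrative mapping from image features to sounds drawn from a
# user-determined sound database; the brightness-to-sample rule is assumed.

def average_brightness(image):
    """image: 2-D list of 0-255 grayscale pixels (stand-in for a real image)."""
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def compose(images, sound_database):
    """Pick one sound per image: brighter images map to later database entries."""
    sequence = []
    for image in images:
        idx = int(average_brightness(image) / 256 * len(sound_database))
        sequence.append(sound_database[min(idx, len(sound_database) - 1)])
    return sequence

user_sounds = ["low_drone.wav", "mid_pad.wav", "bright_bell.wav"]
dark = [[20, 30], [25, 35]]
bright = [[220, 230], [225, 235]]
print(compose([dark, bright], user_sounds))  # ['low_drone.wav', 'bright_bell.wav']
```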
Music detection and identification
A sensor processing unit comprises a sensor processor configured to communicatively couple with a microphone. The sensor processor is configured to acquire, from the microphone, an audio sample captured from the environment in which the microphone is disposed, and to perform music activity detection on the audio sample to detect music within it. Responsive to detection of music within the audio sample, the sensor processor is configured to send a music detection signal to an external processor located external to the sensor processing unit, the music detection signal indicating that music has been detected in the environment.
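The detection algorithm is not disclosed; the sketch below substitutes a common spectral-flatness heuristic (tonal frames flagged as music) to show the sample-in, signal-out shape of the unit:

```python
# Sketch of a simple music detector; the patent does not disclose its
# detection algorithm, so this spectral-flatness heuristic is an assumption.
import numpy as np

def detect_music(samples: np.ndarray) -> bool:
    """Flag frames whose spectrum is tonal (low spectral flatness) as music."""
    spectrum = np.abs(np.fft.rfft(samples)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    return flatness < 0.2  # tonal content -> likely music

def on_sample(samples, send_to_external_processor):
    """Send a music detection signal when music is found in the sample."""
    if detect_music(samples):
        send_to_external_processor({"event": "music_detected"})

sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)               # music-like
noise = np.random.default_rng(0).standard_normal(sr)   # noise-like
print(detect_music(tone), detect_music(noise))          # True False
```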
Pace-Aware Music Player
An electronic device may comprise audio processing circuitry, pace tracking circuitry, and positioning circuitry. The pace tracking circuitry may be operable to select a tempo of songs for playback by the audio processing circuitry based on position data generated by the positioning circuitry, a desired tempo, and whether the songs are stored locally or are network-accessible. The position data may indicate the pace of a runner during a preceding, determined time interval. The pace tracking circuitry may control the song selection and/or time stretching based on runner profile data stored in memory of the music device. The profile data may include the runner's distance-per-stride data. The electronic device may include sensors operable to function as a pedometer, and the pace tracking circuitry may update the distance-per-stride data based on the position data and on data output by the one or more sensors.
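A sketch of the pace-to-tempo pipeline described above; the position format, song metadata, and local-versus-network penalty are all assumptions:

```python
# Sketch of pace-aware song selection; GPS handling, song metadata, and
# the network penalty are illustrative assumptions.

def pace_spm(positions, stride_m, interval_s):
    """Estimate steps/min from distance covered and the runner's stride length."""
    distance_m = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    )
    steps = distance_m / stride_m
    return steps / interval_s * 60.0

def pick_song(songs, target_tempo, prefer_local=True):
    """Favor locally stored songs near the target tempo over network ones."""
    def cost(song):
        penalty = 0 if (song["local"] or not prefer_local) else 10
        return abs(song["bpm"] - target_tempo) + penalty
    return min(songs, key=cost)

library = [{"title": "A", "bpm": 168, "local": True},
           {"title": "B", "bpm": 172, "local": False}]
tempo = pace_spm([(0, 0), (50, 0), (100, 0)], stride_m=1.1, interval_s=32)
print(f"pace {tempo:.0f} spm ->", pick_song(library, tempo)["title"])
```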
Cadence-Based Selection, Playback, and Transition Between Song Versions
A system and methods for acquiring cadence and selecting a song version based on the acquired cadence are disclosed. If the system detects a new cadence, a new song version that corresponds to the new cadence can be played, with playback starting at a position corresponding to the playback location in the currently playing song version. Related song versions share one or more characteristics, such as melody, but differ in at least one characteristic, such as tempo.
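A sketch of position-preserving version switching; the cadence bands and tempo metadata are illustrative, and the beat-preserving position mapping is an assumption consistent with the abstract:

```python
# Sketch of position-preserving version switching: on a cadence change, jump
# to the musically corresponding point in the new version.

VERSIONS = {  # cadence band (steps/min) -> (file, tempo_bpm); illustrative
    (150, 165): ("song_slow.ogg", 120),
    (165, 180): ("song_fast.ogg", 140),
}

def version_for_cadence(cadence_spm):
    for (lo, hi), version in VERSIONS.items():
        if lo <= cadence_spm < hi:
            return version
    return None

def switch_position(pos_s, old_bpm, new_bpm):
    """Map playback position so the same beat count is preserved."""
    return pos_s * old_bpm / new_bpm

old_file, old_bpm = version_for_cadence(160)
new_file, new_bpm = version_for_cadence(172)
print(new_file, f"resume at {switch_position(30.0, old_bpm, new_bpm):.1f}s")
```

Mapping by beat count (30 s at 120 BPM resumes at about 25.7 s at 140 BPM) keeps the shared melody aligned across versions, which is what lets the transition land at "a corresponding position".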