Patent classifications
G10H2210/076
SYSTEMS AND METHODS FOR PROVIDING AUDIO-FILE LOOP-PLAYBACK FUNCTIONALITY
Systems and methods for providing audio-file loop-playback functionality are provided. The system includes a processor that performs a method including setting a playback-loop start-point based on a first selection of a button; setting a loop end-point, associating a loop with an audio file, and entering the loop based on a second selection of the button; and exiting the loop based on a third selection of the button. Associating the loop with the audio file includes adding metadata to the audio file, and the metadata associates the loop with the button. The method also includes reentering the loop based on a fourth selection of the button and exiting the loop based on a fifth selection of the button.
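The button-driven behavior this abstract describes can be sketched as a small state machine: the first press sets the loop start, the second sets the end and enters the loop, and later presses toggle exit and re-entry. This is an illustrative sketch, not the patent's implementation; all names are invented.

```python
class LoopController:
    """Hypothetical sketch of the single-button loop workflow."""

    def __init__(self):
        self.start = None
        self.end = None
        self.looping = False

    def press(self, position):
        """Handle one button press at the given playback position (seconds)."""
        if self.start is None:          # 1st press: set loop start-point
            self.start = position
        elif self.end is None:          # 2nd press: set end-point, enter loop
            self.end = position
            self.looping = True
        else:                           # 3rd/4th/5th press: exit / reenter / exit
            self.looping = not self.looping

    def metadata(self):
        """Loop metadata to attach to the audio file."""
        return {"loop_start": self.start, "loop_end": self.end}
```

The toggle on the third and later presses reproduces the abstract's exit/reenter/exit sequence.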
APPARATUS, METHOD, AND COMPUTER-READABLE MEDIUM FOR CUE POINT GENERATION
An apparatus, method, and computer-readable storage medium generate at least one cue point in a musical piece. The method includes generating a beat grid representing the musical piece; determining values for the beat grid, each value corresponding to an audio feature of the musical piece and representing the entire duration of one beat in the beat grid; calculating a score for the audio feature at each of a plurality of positions in the beat grid, using some or all of the determined values; and generating the cue point at a particular one of those positions, based on the calculated scores.
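The per-beat scoring idea can be sketched as follows: one feature value per beat, a score computed at each candidate position from the following beats, and the cue point placed at the best-scoring position. The window size and mean-based score are assumptions for illustration, not the patent's exact formula.

```python
def generate_cue_point(beat_values, window=4):
    """Return the beat index whose next `window` beats have the largest
    mean feature value (e.g. per-beat onset energy).

    `beat_values` holds one value per beat of the beat grid.
    """
    best_idx, best_score = 0, float("-inf")
    for i in range(len(beat_values) - window + 1):
        score = sum(beat_values[i:i + window]) / window   # score at position i
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

With energy values that jump at beat 3, the cue point lands there, mirroring the "generate the cue point at the best-scoring position" step.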
Estimating a tempo metric from an audio bit-stream
The invention relates to estimating tempo information directly from a bitstream encoding audio information, preferably music. The tempo information is derived from at least one periodicity, which in turn is derived from the detection of at least two onsets in the audio information. Such onsets are detected via long-to-short block transitions in the bitstream and/or via changes in the bit allocation (change of cost) for encoding/transmitting the exponents of the transform coefficients encoded in the bitstream.
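Once onsets have been detected (from block-switch flags or bit-allocation changes, per the abstract), the tempo follows from their periodicity. A minimal sketch, assuming the median inter-onset interval is used as the periodicity estimate:

```python
def estimate_bpm(onset_times):
    """Estimate tempo (BPM) from a sorted list of onset times in seconds.

    The median inter-onset interval serves as a robust periodicity
    estimate; 60 / interval converts it to beats per minute.
    """
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    intervals.sort()
    median = intervals[len(intervals) // 2]
    return 60.0 / median
```

Onsets spaced 0.5 s apart yield 120 BPM; the median makes the estimate tolerant of a few spurious or missed onsets.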
Beat decomposition to facilitate automatic video editing
The disclosed technology relates to a process for detecting musical artifacts within a musical composition. The detection is based on analyzing the energy and frequency content of the composition's digital signal. The identified musical artifacts can then be used in connection with audio-video editing.
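The energy-analysis part can be sketched with a simple frame-energy detector: frame the signal, compute per-frame energy, and flag frames whose energy jumps well above the previous frame's. The frame size and 1.5x threshold are assumptions for illustration.

```python
def detect_energy_onsets(samples, frame_size=4, ratio=1.5):
    """Return indices of frames whose energy exceeds `ratio` times the
    previous frame's energy (a crude energy-based artifact detector)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    energies = [sum(x * x for x in f) for f in frames]
    return [i for i in range(1, len(energies))
            if energies[i] > ratio * energies[i - 1] > 0]
```

The flagged frame indices give the edit anchors that an automatic video editor could cut on.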
Improvised guitar simulation
The present disclosure is directed at methods, apparatus, and systems for implementing an improvised guitar playing feature in a rhythm-action game. The improvised guitar playing feature allows players to manipulate a guitar controller to produce pleasing, musical-sounding improvised play even if the players have little experience or skill at improvising music. This feature uses quantized 8th- and 16th-note musical phrases, or “licks”, strung together to form authentic, melodic, and rhythmically impressive guitar lines, regardless of the player's ability. The improvised guitar playing feature can also display cues directing the player to improvise in a certain manner, while still giving players a degree of musical freedom in selecting how to play. In some embodiments, the present disclosure is also directed at scoring mechanisms for evaluating improvised guitar play.
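The lick-stringing mechanic can be sketched as a lookup from controller input into pools of pre-authored, quantized phrases, so that any input sequence still produces a musical line. The phrase contents and input mapping below are invented for illustration.

```python
# Hypothetical pools of pre-authored licks, quantized to 8th and 16th notes.
LICKS_8TH = [["E4", "G4", "A4", "G4"], ["A4", "C5", "B4", "A4"]]
LICKS_16TH = [["E4", "F#4", "G4", "A4", "B4", "A4", "G4", "F#4"]]

def improvise(button_presses):
    """Map each controller press to a quantized lick and string the
    licks together into one improvised line."""
    line = []
    for press in button_presses:
        pool = LICKS_16TH if press % 2 else LICKS_8TH   # crude input mapping
        line.extend(pool[press % len(pool)])
    return line
```

Because every press resolves to a pre-authored phrase, the output stays melodic regardless of the player's skill, which is the core of the described feature.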
Re-timing a video sequence to an audio sequence based on motion and audio beat detection
Embodiments are disclosed for re-timing a video sequence to an audio sequence based on the detection of motion beats in the video sequence and audio beats in the audio sequence. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input including a video sequence; detecting motion beats in the video sequence; receiving a second input including an audio sequence; detecting audio beats in the audio sequence; modifying the video sequence by matching the detected motion beats to the detected audio beats; and outputting the modified video sequence.
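The matching step can be sketched as pairing each motion-beat time with the nearest audio-beat time; the resulting (source, target) anchors then drive a per-segment time warp of the video. The nearest-neighbor pairing is an assumption for illustration.

```python
def retime_map(motion_beats, audio_beats):
    """Pair each motion beat with the closest audio beat, returning
    (source_time, target_time) anchors for time-warping the video."""
    anchors = []
    for m in motion_beats:
        target = min(audio_beats, key=lambda a: abs(a - m))
        anchors.append((m, target))
    return anchors
```

Stretching each video segment so its anchor's source time lands on the target time makes the motion beats coincide with the audio beats.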
SYSTEMS AND METHODS FOR EMBEDDING DATA IN MEDIA CONTENT
A method is provided for modifying a first media content item by superimposing a first set of data over a first audio event having an amplitude that satisfies a first threshold. The first audio event has a first audio profile, the first set of data has a second audio profile, playback of the second audio profile is configured to be masked by the first audio profile during playback of the first media content item, and the first set of data includes playlist information. The method includes transmitting, to a second electronic device, the modified first media content item.
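The masking idea can be sketched as: find a run of samples loud enough to satisfy the threshold, then superimpose the data payload at a low level there so the loud event masks it during playback. The threshold, scaling, and run-based search are assumptions for illustration.

```python
def embed(samples, payload, threshold=0.8, level=0.01):
    """Superimpose `payload` (scaled by `level`) over the first run of
    samples whose amplitude satisfies `threshold`; returns a new list."""
    out = list(samples)
    for i in range(len(out) - len(payload) + 1):
        if all(abs(s) >= threshold for s in out[i:i + len(payload)]):
            for j, p in enumerate(payload):
                out[i + j] += level * p   # masked by the loud audio event
            break
    return out
```

A real system would encode structured data (the abstract mentions playlist information) rather than raw sample offsets, but the amplitude-gated superposition is the same.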
BEAT SOUND GENERATION TIMING GENERATING DEVICE, BEAT SOUND GENERATION TIMING GENERATING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
An information processing device executes: a process of generating, from inputted data of a musical piece, a plurality of intensity data in a predetermined time interval, each of the plurality of intensity data indicating a timing governing a beat of the musical piece and a power at the timing; a process of calculating a cycle and a phase of the beat of the musical piece by using the plurality of intensity data for each of the time intervals; a process of detecting a generation timing of a beat sound based on the calculated cycle and the calculated phase of the beat of the musical piece; and a process of setting one of a wide range and a narrow range narrower than the wide range as a BPM range to be used for calculation of the cycle and the phase of the beat for each of the time intervals.
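The cycle-and-phase calculation restricted to a BPM range can be sketched as a grid search: score each candidate cycle in the range by how closely the intensity peaks fall to its beat grid, and keep the best. The linear BPM sweep and distance-sum score are simplified assumptions, not the patent's method.

```python
def estimate_cycle(peak_times, bpm_range=(60, 180)):
    """Pick the beat cycle (seconds) within `bpm_range` whose grid best
    matches the observed intensity-peak times."""
    best_cycle, best_err = None, float("inf")
    for bpm in range(bpm_range[0], bpm_range[1] + 1):
        cycle = 60.0 / bpm
        # phase-agnostic error: distance of each peak to the nearest grid line
        err = sum(min(t % cycle, cycle - (t % cycle)) for t in peak_times)
        if err < best_err:
            best_cycle, best_err = cycle, err
    return best_cycle
```

Narrowing `bpm_range` once the tempo is roughly known corresponds to the abstract's switch from the wide to the narrow BPM range.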
SCALABLE SIMILARITY-BASED GENERATION OF COMPATIBLE MUSIC MIXES
Music clips are projected into a pitch interval space for computing musical compatibility between clips as distances or similarities in that space. The distance or similarity between clips reflects the degree to which they are harmonically compatible. The distance or similarity in the pitch interval space between a candidate music clip and a partial mix can be used to determine whether the candidate clip is harmonically compatible with the partial mix. An indexable feature space may be both beats-per-minute (BPM)-agnostic and musical-key-agnostic, so that harmonic compatibility can be determined quickly among potentially millions of music clips. A graphical user interface-based user application allows users to easily discover combinations of clips from a library that result in a perceptually high-quality mix that is highly consonant, pleasant-sounding, and reflects the principles of musical harmony.
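The distance-as-compatibility idea can be sketched with a simple stand-in feature: represent each clip by a 12-bin pitch-class profile and treat a smaller Euclidean distance to the partial mix's profile as higher harmonic compatibility. The chroma representation is an assumption; the patent's pitch interval space is a different (key- and BPM-agnostic) embedding.

```python
def compatibility_distance(chroma_a, chroma_b):
    """Euclidean distance between two 12-bin pitch-class profiles;
    smaller distance is read as higher harmonic compatibility."""
    return sum((a - b) ** 2 for a, b in zip(chroma_a, chroma_b)) ** 0.5

def most_compatible(candidate_chromas, mix_chroma):
    """Index of the candidate clip closest to the partial mix."""
    return min(range(len(candidate_chromas)),
               key=lambda i: compatibility_distance(candidate_chromas[i],
                                                    mix_chroma))
```

An indexable feature space matters because nearest-neighbor search over such vectors scales to the "millions of clips" the abstract mentions.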
VEHICLE SYSTEMS AND RELATED METHODS
Vehicle machine learning methods include providing one or more computer processors communicatively coupled with a vehicle. Using data gathered from biometric sensors and/or vehicle sensors, a machine learning model is trained to determine a mental state of a driver and/or a driving state corresponding with a portion of a trip. In some implementations, the mental or driving state may be determined without a machine learning model. Based at least in part on the determined mental state and the determined driving state, one or more interventions are automatically initiated to alter the mental state of the driver. The interventions may include preparing (or modifying) and initiating a music playlist; altering a lighting, audio, or temperature condition within the vehicle; and initiating, altering, or withholding conversation from a conversational agent. Vehicle machine learning systems perform the vehicle machine learning methods.
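The intervention step can be sketched as a rule table from (mental state, driving state) to an action, consistent with the abstract's note that the states may also be determined without a machine learning model. The state names and interventions below are invented for illustration.

```python
def choose_intervention(mental_state, driving_state):
    """Pick one intervention intended to alter the driver's mental state
    (illustrative rules; a deployed system would learn or tune these)."""
    if mental_state == "drowsy":
        return "brighten_cabin_lighting"
    if mental_state == "stressed" and driving_state == "heavy_traffic":
        return "play_calming_playlist"
    if mental_state == "distracted":
        return "withhold_agent_conversation"
    return "no_intervention"
```

In the patented methods a trained model would supply the states; the mapping itself could equally be learned rather than hand-written.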