Patent classifications
G10H2210/571
INTERACTIVE MOVEMENT AUDIO ENGINE
A method for generating an audio output is described. Image inputs of interactive movements by a user, captured by an image sensor, are received. The interactive movements are mapped to a sequence of audio element identifiers. The sequence of audio element identifiers is processed to generate a musical sequence by performing music theory rule enforcement on the sequence of audio element identifiers. An audio output that represents the musical sequence is generated.
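The claimed pipeline (movements → audio element identifiers → rule-enforced musical sequence) can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the movement codes, MIDI note numbers, and the C-major snap rule are all assumptions standing in for the unspecified "music theory rule enforcement".

```python
# Hypothetical sketch: movement identifiers are mapped to audio element
# identifiers (MIDI note numbers here), then a music-theory rule pass snaps
# each note to the nearest tone of an assumed C-major scale.

C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}  # C D E F G A B

def map_movements_to_notes(movements, base_note=60):
    """Map coarse movement codes (0..11) to MIDI note numbers."""
    return [base_note + m for m in movements]

def enforce_scale(notes, scale=C_MAJOR_PITCH_CLASSES):
    """Snap each note down to the nearest pitch class in the scale."""
    out = []
    for n in notes:
        while n % 12 not in scale:
            n -= 1
        out.append(n)
    return out

movements = [0, 1, 3, 6, 10]       # e.g. quantized gesture directions
sequence = map_movements_to_notes(movements)
musical = enforce_scale(sequence)  # -> [60, 60, 62, 65, 69]
```

Any real system would use a richer rule set (key detection, voice-leading constraints); the snap-to-scale pass is the simplest example of the rule-enforcement stage.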
AUTOMATIC MUSIC PLAYING CONTROL DEVICE, ELECTRONIC MUSICAL INSTRUMENT, METHOD OF PLAYING AUTOMATIC MUSIC PLAYING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
Provided is an automatic music playing control device that gives instructions for playing music and implements natural playing capable of expressing the timing and voicing of live performance of a musical instrument by a player. The automatic music playing control device includes at least one processor. The at least one processor selects, from among a plurality of voicing patterns based on a scale decided according to the tune and chords of a music piece, a voicing pattern corresponding to a combination of a probabilistically selected number of sounds to be emitted and a voicing type decided according to a range, and instructs a sound source to emit a chord voiced based on the selected voicing pattern.
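The selection step can be sketched as below. Everything concrete here is an assumption for illustration: the pattern table, the 60/40 probability weights, and the register boundary deciding "open" versus "close" voicing are invented; the abstract only claims that a (count, type) combination indexes into a pattern set.

```python
# Illustrative sketch: the number of chord tones is drawn probabilistically,
# a voicing type is decided from the register of the root, and the
# (count, type) pair indexes a table of voicing patterns.
import random

VOICING_PATTERNS = {
    (3, "close"): [0, 4, 7],        # root-position triad
    (3, "open"):  [0, 7, 16],       # spread triad
    (4, "close"): [0, 4, 7, 11],    # close seventh chord
    (4, "open"):  [0, 7, 11, 16],   # open seventh chord
}

def select_voicing(root, rng=random):
    num_notes = rng.choices([3, 4], weights=[0.6, 0.4])[0]  # probabilistic count
    voicing_type = "open" if root < 55 else "close"         # decided from range
    pattern = VOICING_PATTERNS[(num_notes, voicing_type)]
    return [root + interval for interval in pattern]        # chord to emit
```

Passing a seeded `random.Random` as `rng` makes the probabilistic choice reproducible for testing.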
METHOD AND APPARATUS FOR MAKING MUSIC SELECTION BASED ON ACOUSTIC FEATURES
A method of selecting audio music and creating a mixtape comprises importing song files from a song repository; sorting and filtering the song files based on selection criteria; and creating the mixtape from the sorting and filtering results. The sorting and filtering of the song files comprise: spectrally analyzing each song file to extract its low-level acoustic feature parameters; determining the high-level acoustic feature parameters of the analyzed song file from the low-level acoustic feature parameter values; determining a similarity score for each analyzed song file by comparing its acoustic feature parameter values against desired acoustic feature parameter values determined from the selection criteria; sorting the analyzed song files according to their similarity scores; and filtering out the analyzed song files whose similarity scores are lower than a filter threshold.
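The score-sort-filter stage can be sketched as follows. The similarity metric (inverse Euclidean distance) and the feature vectors are assumptions; the abstract does not specify which acoustic features or which distance measure are used.

```python
# Minimal sketch: each song's acoustic feature vector is compared against
# desired values derived from the selection criteria; songs below a
# similarity threshold are filtered out and the rest are sorted best-first.
import math

def similarity(features, desired):
    """Similarity in (0, 1]: inverse of Euclidean distance in feature space."""
    dist = math.dist(features, desired)
    return 1.0 / (1.0 + dist)

def build_mixtape(songs, desired, threshold=0.5):
    """songs: list of (title, feature_vector). Returns sorted, filtered titles."""
    scored = [(similarity(f, desired), title) for title, f in songs]
    scored.sort(reverse=True)                      # best match first
    return [title for score, title in scored if score >= threshold]
```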
TRANSITIONS BETWEEN MEDIA CONTENT ITEMS
A system for playing media content items determines transitions between pairs of media content items by finding desirable locations at which transitions across the pairs occur. The system uses a plurality of track features of media content items and determines those track features at each transition point candidate, such as a beat position, of each media content item. The system determines the similarity of the track features between the transition point candidates of a first media content item and the transition point candidates of a second media content item to be played after the first. The transition points or portions of the first and second media content items are selected from their transition point candidates based on the similarity results.
Motor noise masking
A sound synthesis system is provided with a loudspeaker to project sound indicative of synthesized motor sound in response to receiving a synthesized sound (SS) signal, and a processor. The processor is programmed to: estimate motor sound based on a sensor signal indicative of sound present within a passenger compartment; identify a dominant motor harmonic of the motor sound with an amplitude and a frequency; determine an enrichment value of the motor sound; determine whether the motor sound is unenriched based on a comparison of the enrichment value to an enrichment threshold value; generate at least one additional motor harmonic with a first frequency that differs from the frequency of the dominant motor harmonic in response to the motor sound being unenriched; and provide the SS signal to the loudspeaker, wherein the SS signal is indicative of the at least one additional motor harmonic.
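The enrichment decision can be sketched as below. The enrichment metric (energy of non-dominant harmonics relative to the dominant one), the threshold, and the choice of added frequency are all assumptions; the abstract only requires that the added harmonic's frequency differ from the dominant one.

```python
# Sketch: find the dominant motor harmonic, compute an assumed enrichment
# value, and synthesize an extra harmonic when the sound is unenriched.

def dominant_harmonic(harmonics):
    """harmonics: list of (freq_hz, amplitude). Returns the loudest one."""
    return max(harmonics, key=lambda h: h[1])

def enrichment(harmonics):
    """Assumed metric: non-dominant amplitude relative to the dominant one."""
    freq, amp = dominant_harmonic(harmonics)
    others = sum(a for f, a in harmonics if f != freq)
    return others / amp if amp else 0.0

def synthesize_if_unenriched(harmonics, threshold=0.5):
    """Return an added (freq, amp) harmonic, or None if already enriched."""
    if enrichment(harmonics) >= threshold:
        return None
    freq, amp = dominant_harmonic(harmonics)
    return (freq * 1.5, amp * 0.4)   # added harmonic at a different frequency
```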
Input Support Apparatus and Method Therefor
An input support method is provided for use in an input support apparatus that supports input of a music note. The method includes: controlling a display unit to display a pitch-time plane that includes a pitch-axis and a time-axis, a chord sequence that is associated with the time-axis of the pitch-time plane, and a pointer that indicates a position on the time-axis along the chord sequence; identifying constituent music notes that form a chord corresponding to a display position of the pointer along the chord sequence; and controlling the display unit to display areas on the pitch-time plane, each displayed area indicating a corresponding one of the identified constituent music notes, differently from other areas on the pitch-time plane.
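The note-identification step can be sketched as follows. The chord-tone table and the time grid are illustrative assumptions; the method only requires mapping the pointer's position to a chord and that chord to its constituent notes for highlighting on the pitch-time plane.

```python
# Sketch: the pointer's time position selects the active chord from the
# chord sequence, and the chord name is expanded into its constituent pitch
# classes so the matching areas of the pitch-time plane can be highlighted.

CHORD_TONES = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}}

def chord_at(chord_seq, pointer_time):
    """chord_seq: list of (start_time, name), sorted by start. Active chord."""
    active = chord_seq[0][1]
    for start, name in chord_seq:
        if start <= pointer_time:
            active = name
    return active

def constituent_pitch_classes(chord_seq, pointer_time):
    """Pitch classes to display differently from other areas of the plane."""
    return CHORD_TONES[chord_at(chord_seq, pointer_time)]
```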
ELECTRONIC MUSICAL INSTRUMENT, SOUND PRODUCTION METHOD FOR ELECTRONIC MUSICAL INSTRUMENT, AND STORAGE MEDIUM
An electronic musical instrument includes a plurality of performance elements that specify pitch data; a sound source that produces musical sounds; and a processor configured to perform the following: when a user performance of the plurality of performance elements satisfies a prescribed condition, instructing the sound source to produce automatic arpeggio playing sounds corresponding to pitch data specified by the user performance; and when the user performance of the plurality of performance elements does not satisfy the prescribed condition, instructing the sound source to produce a sound of the pitch data specified by the user performance without producing the automatic arpeggio playing sounds.
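The claimed branching can be sketched as below. The "prescribed condition" is not specified in the abstract; holding three or more notes at once and the up-down arpeggio pattern are assumptions chosen for illustration.

```python
# Sketch: when the performance satisfies an assumed prescribed condition
# (three or more keys held at once), an arpeggio over the held pitches is
# produced; otherwise only the pressed pitches sound directly.

def arpeggio_condition(held_notes, min_notes=3):
    return len(held_notes) >= min_notes   # the "prescribed condition" (assumed)

def sounds_to_produce(held_notes):
    """Return the note sequence the sound source is instructed to emit."""
    if arpeggio_condition(held_notes):
        up = sorted(held_notes)
        return up + up[-2:0:-1]           # up-down arpeggio pattern
    return sorted(held_notes)             # plain notes, no arpeggio
```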
ELECTRONIC MUSICAL INSTRUMENT, SOUND PRODUCTION METHOD FOR ELECTRONIC MUSICAL INSTRUMENT, AND STORAGE MEDIUM
An electronic musical instrument includes: a plurality of performance elements that specify pitch data; a sound source that produces musical sounds; and a processor configured to perform the following: when a user performance of the plurality of performance elements satisfies a prescribed condition, instructing the sound source to produce a sound of a first timbre and a sound of a second timbre, both corresponding to pitch data specified by the user performance; and when the user performance of the plurality of performance elements does not satisfy the prescribed condition, instructing the sound source to produce the sound of the first timbre corresponding to the pitch data specified by the user performance and not instructing the sound source to produce the sound of the second timbre.
NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORED WITH AUTOMATIC MUSIC ARRANGEMENT PROGRAM, AND AUTOMATIC MUSIC ARRANGEMENT DEVICE
A non-transitory computer-readable storage medium stored with an automatic music arrangement program, and an automatic music arrangement device, are provided. An outer voice note, having the highest pitch among notes whose sound production start times are approximately the same, is identified in a melody part acquired from musical piece data. A melody part is generated by deleting inner voice notes whose sound production starts within the sound production period of the outer voice note and whose pitches are lower. Candidate accompaniment parts are generated in which the root notes of the chords in the chord data of the musical piece data are arranged to be produced at their sound production timings, one candidate for each pitch range acquired by shifting a range of pitches corresponding to one octave by one semitone at a time, and an accompaniment part is selected from among them.
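The melody-thinning step can be sketched as follows. The note representation as a `(start, duration, pitch)` tuple is an assumption, and "approximately the same" start times are simplified to exactly equal starts for illustration.

```python
# Sketch: notes whose onsets fall within the sounding period of a
# higher-pitched "outer voice" note are treated as inner voices and deleted.

def thin_to_outer_voice(notes):
    """notes: list of (start, duration, pitch). Keep only outer-voice notes."""
    # Outer voice: highest pitch among notes with the same start time
    # (exact-equal starts here, standing in for "approximately the same").
    by_start = {}
    for n in notes:
        s = n[0]
        if s not in by_start or n[2] > by_start[s][2]:
            by_start[s] = n
    kept = []
    for n in sorted(by_start.values()):
        # Drop a lower note whose onset lies inside the previous kept
        # note's sound production period.
        if kept and kept[-1][0] < n[0] < kept[-1][0] + kept[-1][1] and n[2] < kept[-1][2]:
            continue
        kept.append(n)
    return kept
```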
COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING MUSICAL COMPOSITIONS THAT ARE SYNCHRONIZED TO VIDEO
Computer-based systems, devices, and methods for generating musical compositions that are purposefully synchronized with video are described. A video timeline is defined with various time-markers that demarcate specific events in the video. A music timeline is generated based on the video timeline. The music timeline preserves the various time-markers from the video timeline. A computer-based musical composition system generates a musical composition based on the music timeline. The musical composition includes various musical events that align, synchronize, or coincide with the time-markers such that when the video and musical composition are played together the musical events align, synchronize, or coincide with the demarcated events in the video.
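The timeline-preservation idea can be sketched as below. The segment structure, the 4-beat bar, and the ~2-seconds-per-bar target are assumptions; the abstract only requires that the music timeline carry over the video's time-markers so musical events land on them.

```python
# Sketch: the video timeline's markers are copied into a music timeline, and
# a per-segment tempo is chosen so a whole number of bars fits exactly
# between consecutive markers -- i.e. a bar boundary coincides with each
# demarcated event when video and music play together.

def music_timeline(video_markers, beats_per_bar=4):
    """video_markers: sorted times in seconds. Returns per-segment layout."""
    segments = []
    for t0, t1 in zip(video_markers, video_markers[1:]):
        span = t1 - t0
        bars = max(1, round(span / 2.0))          # aim for ~2 s per bar (assumed)
        tempo_bpm = 60.0 * beats_per_bar * bars / span
        segments.append({"start": t0, "end": t1, "bars": bars,
                         "tempo_bpm": tempo_bpm})
    return segments
```

Because each segment contains an integer number of bars, every marker falls exactly on a bar boundary, which is one simple way to make musical events coincide with the demarcated video events.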