Patent classifications
G10H7/008
Systems and methods for automatic mixing of media
A first device includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for receiving, from a second device, audio mix information for a first audio item and receiving, from the second device, an indication that the first audio item is to be mixed with a second audio item distinct from the first audio item. In response to the indication, the one or more programs include instructions for transmitting to the second device an audio stream including the first audio item and the second audio item mixed in accordance with the audio mix information.
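The abstract describes a server-style device that mixes two audio items per mix information received from a client device. A minimal sketch of the mixing step, assuming the mix information carries simple per-item gain levels (the function name and gain model are illustrative, not from the patent):

```python
# Hypothetical sketch: the second device sends mix information (here,
# per-item gains) and the first device returns one mixed stream.

def mix_audio_items(first_item, second_item, mix_info):
    """Mix two equal-length sample sequences using per-item gains."""
    gain_a = mix_info.get("first_gain", 1.0)   # assumed key names
    gain_b = mix_info.get("second_gain", 1.0)
    return [gain_a * a + gain_b * b for a, b in zip(first_item, second_item)]
```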
Sound Signal Generation Method, Estimation Model Training Method, and Sound Signal Generation System
A method generates a sound signal in accordance with score data representative of respective durations of a plurality of notes and a shortening indication to shorten a duration of a specific note. The method includes generating a shortening rate, generating a series of control data, and generating a sound signal. The shortening rate is representative of an amount of shortening of the duration of the specific note, and is generated by inputting, to a first estimation model, condition data representative of a sounding condition specified by the score data for the specific note. Each of the series of control data is representative of a control condition of the sound signal corresponding to the score data, and the series of control data reflects a shortened duration of the specific note shortened in accordance with the generated shortening rate. The sound signal is generated in accordance with the series of control data.
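The three stages of the claimed method can be sketched as a pipeline: estimate a shortening rate for the target note, build control data with that note's duration shortened, then render a signal. This is a toy stand-in; the linear "model" and sine renderer are assumptions replacing the patent's trained estimation model:

```python
import math

def estimate_shortening_rate(condition):
    # placeholder for the first estimation model (condition -> rate)
    return min(0.5, 0.1 * condition["staccato_strength"])

def build_control_data(notes, target_index, rate):
    """notes: list of (midi_pitch, duration_sec); shorten the target note."""
    control = []
    for i, (pitch, duration) in enumerate(notes):
        if i == target_index:
            duration = duration * (1.0 - rate)
        control.append({"pitch": pitch, "duration": duration})
    return control

def render_signal(control, sample_rate=8000):
    # render each control event as a plain sine segment
    samples = []
    for event in control:
        freq = 440.0 * 2 ** ((event["pitch"] - 69) / 12)
        n = int(event["duration"] * sample_rate)
        samples.extend(math.sin(2 * math.pi * freq * t / sample_rate)
                       for t in range(n))
    return samples
```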
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
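The core step is combining model-output acoustic feature data with waveform data of a first tone color to obtain a second tone color. A hedged sketch, assuming the acoustic feature data is a simple per-frame envelope and replacing the trained singing-voice model with a lookup placeholder:

```python
def fake_trained_model(lyric):
    # placeholder for the trained model: lyric -> per-frame envelope
    return [0.2 * (i + 1) for i in range(len(lyric))]

def apply_features(first_tone_waveform, features):
    """Shape the first-tone-color waveform frame by frame to produce
    waveform data for the second tone color."""
    frames = len(features)
    frame_len = len(first_tone_waveform) // frames
    out = []
    for f, env in enumerate(features):
        segment = first_tone_waveform[f * frame_len:(f + 1) * frame_len]
        out.extend(env * s for s in segment)
    return out
```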
ELECTRONIC MUSICAL INSTRUMENTS, METHOD AND STORAGE MEDIA THEREFOR
An electronic musical instrument includes: a performance controller; and at least one processor, configured to perform the following: instructing sound generation of a first musical tone in response to a first operation on the performance controller; in response to a second operation on the performance controller during the sound generation of the first musical tone, obtaining a first amplitude value of the first musical tone at a time of the second operation, and obtaining a second amplitude value at which a second musical tone is to be sound-produced in response to the second operation on the performance controller; acquiring a parameter value for determining at least one of pitch, timbre, and volume of the second musical tone based on a ratio of the first amplitude value to the second amplitude value; and instructing sound generation of the second musical tone in accordance with the acquired parameter value.
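The parameter-acquisition step keys off the ratio of the first tone's amplitude (at the moment of the second operation) to the second tone's target amplitude. The mapping below, which softens the second tone while the first is still loud, is an invented example; the patent only states that pitch, timbre, and/or volume are determined from the ratio:

```python
def second_tone_parameter(first_amplitude, second_amplitude, base_volume=1.0):
    """Derive a volume parameter for the second tone from the amplitude
    ratio (hypothetical mapping for illustration)."""
    ratio = first_amplitude / second_amplitude
    return base_volume / (1.0 + ratio)
```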
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes an operation unit that receives a user performance; and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
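The polyphonic step synthesizes one waveform per chord pitch with the acoustic feature data and sums the results. A small sketch under those assumptions; the sine "waveforms" and constant envelope stand in for the per-pitch waveform data and the model output:

```python
import math

def pitch_waveform(pitch, n=64, sample_rate=8000):
    # toy per-pitch waveform data: a short sine at the MIDI pitch
    freq = 440.0 * 2 ** ((pitch - 69) / 12)
    return [math.sin(2 * math.pi * freq * t / sample_rate) for t in range(n)]

def synthesize_polyphonic(chord_pitches, envelope):
    voices = []
    for pitch in chord_pitches:
        wave = pitch_waveform(pitch)
        # combine this pitch's waveform with the feature envelope
        voices.append([e * s for e, s in zip(envelope, wave)])
    # sum the per-pitch synthesized waveforms into one polyphonic signal
    return [sum(frame) for frame in zip(*voices)]
```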
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: an operation unit; a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained on singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
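The per-timing branch reduces to a simple selection rule: the stored pitch is used unless a key is pressed, in which case the pressed pitch overrides it. A minimal sketch (the function and its signature are illustrative only):

```python
def select_model_input(lyric, stored_pitch, pressed_pitch=None):
    """Choose the (lyric, pitch) pair to query the trained model with
    at one timing: the key press, if any, overrides the stored pitch."""
    pitch = pressed_pitch if pressed_pitch is not None else stored_pitch
    return (lyric, pitch)
```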
Music generator
Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
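The select-and-layer step can be sketched as follows, assuming a single "energy level" attribute drives loop selection; the loop store, energy ratings, and threshold rule are invented for illustration:

```python
# Hypothetical stored loops, each tagged with an energy rating.
LOOP_STORE = {
    "pad":   {"energy": 0.2, "samples": [0.1, 0.1, 0.1, 0.1]},
    "bass":  {"energy": 0.5, "samples": [0.3, 0.0, 0.3, 0.0]},
    "drums": {"energy": 0.9, "samples": [0.5, 0.0, 0.0, 0.5]},
}

def generate_music(energy_level):
    # select loops whose energy rating does not exceed the requested level
    chosen = [v["samples"] for v in LOOP_STORE.values()
              if v["energy"] <= energy_level]
    if not chosen:
        return []
    # layer the selected loops sample by sample
    return [sum(frames) for frames in zip(*chosen)]
```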
MUSICAL SOUND SIGNAL GENERATION DEVICE, MUSICAL SOUND SIGNAL GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
In accordance with a change in a designated tone pitch, a musical sound signal generation device sets any one of a connected zeroth delay unit and a connected second delay unit as a new first delay unit, sets the delay unit in the stage preceding the new first delay unit as a new zeroth delay unit, and sets the delay unit in the stage following the new first delay unit as a new second delay unit. In response, the device keeps any one of the connected zeroth delay unit and the connected second delay unit continuously connected to a fractional delay block, and connects at least one of the new zeroth delay unit and the new second delay unit to at least one fractional delay block other than the fractional delay block connected to the new first delay unit.
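The abstract concerns retuning a chain of delay units plus a fractional delay block when the designated pitch changes. By analogy (not the patent's circuit), a waveguide-style model needs the loop delay for a pitch split into a whole number of unit delays plus a fractional remainder handled by the fractional delay block:

```python
def split_delay(sample_rate, frequency):
    """Split the total loop delay (in samples) for a given pitch into an
    integer delay-unit count and a fractional remainder for the
    fractional delay block."""
    total = sample_rate / frequency
    integer_part = int(total)
    return integer_part, total - integer_part
```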
Signal Generation Method, Signal Generation System, Electronic Musical Instrument, and Recording Medium
A signal generation method implemented by a computer system includes: generating an audio signal in response to a manipulation of each of a first key and a second key; and, in a case where the second key is manipulated during a manipulation of the first key, controlling generation of the audio signal based on a reference point that is a position of the first key at a time point of the manipulation of the second key.
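A sketch of the reference-point idea, assuming the first key's position is a depression depth in [0.0, 1.0] captured at the instant the second key is pressed, and assuming (purely for illustration) that it maps to a pitch-bend on the second key's tone:

```python
def second_key_control(first_key_position, second_key_pitch):
    """Derive a control parameter for the second key's audio signal from
    the first key's position at the time of the second manipulation."""
    reference_point = first_key_position      # captured at press time
    bend_semitones = reference_point * 2.0    # hypothetical mapping
    return second_key_pitch + bend_semitones
```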
Methods for audio signal transient detection and decorrelation control
Some audio processing methods may involve receiving audio data corresponding to a plurality of audio channels and determining audio characteristics of the audio data, which may include transient information. An amount of decorrelation for the audio data may be based, at least in part, on the audio characteristics. If a definite transient event is determined, a decorrelation process may be temporarily halted or slowed. Determining transient information may involve evaluating the likelihood and/or the severity of a transient event. In some implementations, determining transient information may involve evaluating a temporal power variation in the audio data. Explicit transient information may or may not be received with the audio data, depending on the implementation. Explicit transient information may include a transient control value corresponding to a definite transient event, a definite non-transient event or an intermediate transient control value.
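The control idea above can be sketched in two parts: estimate transient likelihood from the temporal power variation, then scale back (or, for a definite transient, halt) the amount of decorrelation. The rise-based measure and thresholds are illustrative assumptions, not the patent's method:

```python
def transient_likelihood(prev_power, cur_power):
    """Crude transient measure: relative power increase, clipped to [0, 1]."""
    if prev_power <= 0.0:
        return 1.0 if cur_power > 0.0 else 0.0
    rise = (cur_power - prev_power) / prev_power
    return max(0.0, min(1.0, rise))

def decorrelation_amount(base_amount, likelihood):
    """Slow decorrelation as likelihood grows; halt it for a definite event."""
    if likelihood >= 1.0:
        return 0.0  # definite transient: temporarily halt decorrelation
    return base_amount * (1.0 - likelihood)
```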