Patent classifications
G10H2250/615
Generating music with deep neural networks
The present disclosure provides systems and methods that include or otherwise leverage a machine-learned neural synthesizer model. Unlike a traditional synthesizer, which generates audio from hand-designed components such as oscillators and wavetables, the neural synthesizer model can use deep neural networks to generate sounds at the level of individual samples. Learning directly from data, the neural synthesizer model can provide intuitive control over timbre and dynamics and enable exploration of new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer. As one example, the neural synthesizer model can be a neural synthesis autoencoder that includes an encoder model that learns embeddings descriptive of musical characteristics and an autoregressive decoder model that, conditioned on the embedding, generates musical waveforms with those characteristics one audio sample at a time.
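The decoder loop described above can be sketched as follows. This is a minimal stand-in, not the disclosed model: a real decoder would be a deep network predicting a distribution over the next sample, whereas this toy uses a fixed linear rule; the function name, the `context` parameter, and the use of the embedding entries as weights are all illustrative assumptions.

```python
import math

def generate_waveform(embedding, n_samples, context=3):
    """Toy autoregressive generator: each new sample is computed from
    the previous `context` samples plus the conditioning embedding.
    A crude linear combination stands in for the deep decoder."""
    samples = [0.0] * context  # zero history to seed the recursion
    for t in range(n_samples):
        past = samples[-context:]
        # a deep decoder would map (past, embedding) -> next sample;
        # here the first embedding entries act as fixed AR weights
        pred = sum(p * w for p, w in zip(past, embedding[:context]))
        # the last embedding entry drives a slow conditioning term
        pred += 0.1 * math.sin(embedding[-1] * t)
        samples.append(max(-1.0, min(1.0, pred)))  # clip to audio range
    return samples[context:]
```

Because generation is sample-by-sample, each output depends on everything emitted before it, which is the defining property of the autoregressive decoder in the abstract.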
AUDIO FILE RE-RECORDING METHOD, DEVICE AND STORAGE MEDIUM
Provided are an audio file re-recording method and device, and a storage medium. The method includes: determining a first time, the first time being the start time of a recorded clip to be re-recorded in an audio file; playing a first recorded clip that has already been recorded, the first recorded clip ending at the first time in the audio file; upon arrival of the first time, collecting first voice data from a user to obtain a second recorded clip; and processing the first recorded clip and the second recorded clip to obtain a re-recorded audio file.
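The final processing step above amounts to splicing the kept prefix and the freshly captured clip at the first time. A minimal sketch, assuming samples are stored as plain lists and the "processing" is a simple concatenation (the abstract leaves the exact processing unspecified):

```python
def rerecord(original, first_time, new_clip, sample_rate):
    """Replace everything from `first_time` (seconds) onward in
    `original` with `new_clip`, keeping the already-recorded prefix.

    `original` and `new_clip` are sample lists; `sample_rate` is in Hz.
    """
    cut = int(first_time * sample_rate)
    first_clip = original[:cut]  # the clip whose end time is first_time
    return first_clip + new_clip
```

In the described method, `first_clip` is what gets played back to the user, and recording of `new_clip` begins exactly when playback reaches `first_time`, so the splice point is seamless.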
MUSICAL SOUND PROCESSING APPARATUS AND MUSICAL SOUND PROCESSING METHOD
A musical sound processing apparatus includes: a first generation part configured to generate a plurality of first signals in which at least one of a frequency characteristic and a time response of each of a plurality of processing unit signals obtained from a musical sound signal is modified; and a second generation part configured to generate a plurality of second signals in which at least one of a frequency characteristic and a time response of a noise signal associated with one or more of the plurality of processing unit signals is replaced with at least one of a frequency characteristic and a time response of the corresponding first signal.
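One concrete reading of "replacing a time response" is imposing the amplitude envelope of a processed signal onto the associated noise signal. The sketch below illustrates that one interpretation only; the envelope-following method (a running maximum) and both function names are assumptions, not the apparatus's actual processing.

```python
def envelope(signal, win=4):
    """Crude amplitude envelope: running max of |signal| over `win` samples."""
    return [max(abs(v) for v in signal[max(0, i - win + 1):i + 1])
            for i in range(len(signal))]

def replace_time_response(noise, first_signal):
    """Impose the time response (envelope) of `first_signal` onto `noise`:
    divide out the noise's own envelope, multiply in the target's."""
    target = envelope(first_signal)
    own = envelope(noise)
    return [n * (t / o) if o else 0.0
            for n, t, o in zip(noise, target, own)]
```

A frequency-characteristic replacement would proceed analogously in the spectral domain, substituting magnitude spectra instead of envelopes.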
Music and audio playback system
A music and audio playback system is implemented on a computer with a playback engine that enables the operator to apply a variety of effects. The system may store one or more snapshots, or a combination of settings for a plurality of controls that are applied by the playback engine. These snapshots allow for changes to settings for effects, mixing and playback to be made quickly, some of which would normally be difficult to perform. A sampler module permits a user to specify one or more samples that may be triggered for playback. The most frequently used samples may be designated as scratching files that may be quickly activated through the push of a button (or other control). Additionally, a waveform display represents a window of audio samples around a current playback time.
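The snapshot mechanism described above is essentially a saved combination of control settings that can be re-applied in one step. A minimal sketch, with all class, method, and control names hypothetical:

```python
class PlaybackEngine:
    """Sketch of the snapshot feature: controls hold the live effect,
    mixing, and playback settings; snapshots are saved combinations."""

    def __init__(self):
        self.controls = {}   # control id -> current value
        self.snapshots = {}  # snapshot name -> saved settings dict

    def set_control(self, name, value):
        self.controls[name] = value

    def store_snapshot(self, name):
        # capture the current combination of settings
        self.snapshots[name] = dict(self.controls)

    def apply_snapshot(self, name):
        # re-apply every saved setting at once, which is what lets
        # the operator make many changes with a single action
        self.controls.update(self.snapshots[name])
```

Applying a snapshot updates every stored control simultaneously, which is what makes otherwise fiddly multi-control changes quick to perform live.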
Wavetable Waveform Iterative Interpolation System for Digital Synthesizers
A wavetable waveform interpolation system for digital synthesizers utilizes a progressively iterative method, by which an initial anchor waveform continuously fades into a final anchor waveform. Multiple interpolation points are positioned in progressive succession between the initial and final anchor positions in the wavetable. Each interpolation point has a normalized final position increment between it and the final anchor position, as well as a normalized initial position decrement between it and the initial anchor position. These provide the basis for weighting factors that determine the relative contributions of the initial and final waveforms at each interpolation point.
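The weighting scheme above reduces to a progressive crossfade: at each interpolation point, the normalized initial-position decrement and final-position increment act as the mixing weights for the two anchor waveforms. A minimal sketch under that reading (function name and uniform spacing of points are assumptions):

```python
def interpolation_points(initial, final, n_points):
    """Generate n_points intermediate waveforms between the initial
    and final anchor waveforms.  Point k has a normalized
    initial-position decrement d = k / (n_points + 1); d weights the
    final anchor and (1 - d) weights the initial anchor, so the
    waveform fades progressively from initial to final."""
    points = []
    for k in range(1, n_points + 1):
        d = k / (n_points + 1)        # fraction traveled from initial anchor
        w_final, w_initial = d, 1.0 - d
        points.append([w_initial * a + w_final * b
                       for a, b in zip(initial, final)])
    return points
```

Points early in the succession are dominated by the initial waveform and later points by the final waveform, matching the continuous fade the abstract describes.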