Patent classifications
G10H1/0091
VOICE ASSISTANT SYSTEM WITH AUDIO EFFECTS RELATED TO VOICE COMMANDS
An entered voice command type is used as a basis for applying “audio effects” (see definition herein), “sound effects” (see definition herein), and/or audio edits (see definition herein) to a sound signal. This may be done so that the various types of instructed audio processing evoke a desired sentiment or mood in typical listeners. Artificial intelligence may be used to accomplish this objective.
Information processing method and apparatus
An information processing method according to the present invention includes providing, to a learner that has undergone learning relating to a specific performance tendency, first musical piece information representing the contents of a musical piece and performance information relating to a past performance prior to one unit period within the musical piece, and generating, with the learner, performance information for the one unit period that is based on the specific tendency.
Sampler for an Intelligent Cable or Cable Adapter
A specialized audio/instrument cable with built-in digital signal processing that adds digital audio sampling, allowing the user to trigger synthesized sounds or virtual musical instruments from within the cable itself to affect the sound generated by an instrument or microphone, such that the cable is the only connection needed between the instrument or microphone and an output device. Using voice recognition, the specialized cable can select an audio effects chain algorithm and/or sampled-sound algorithm extrapolated from a musical digital audio fingerprint (MDAF) created from a desired musician's instrument to alter the sound of the input instrument's audio.
Adaptive coefficients and samples elimination for circular convolution
Technologies are disclosed for improving the efficiency of real-time audio processing, and specifically for improving the efficiency of continuously modifying a real-time audio signal. Efficiency is improved by reducing memory bandwidth requirements and by reducing the amount of processing used to modify the real-time audio signal. In some configurations, memory bandwidth requirements are reduced by selectively transferring active samples in the frequency domain, e.g., avoiding the transfer of samples with amplitudes of zero or near-zero. This is particularly important when specialized hardware retrieves samples from main memory in real time. In some configurations, the amount of processing needed to modify the audio signal is reduced by omitting operations that do not meaningfully affect the output audio signal. For example, a multiplication of samples may be avoided when at least one of the samples has an amplitude of zero or near-zero.
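The skip-the-zeros idea above can be sketched as follows. This is a minimal illustration of masking inactive frequency-domain bins before the multiply, not the patent's actual hardware scheme; the function name, threshold `eps`, and test signals are all assumptions for demonstration.

```python
import numpy as np

def sparse_freq_multiply(x_spec, h_spec, eps=1e-9):
    """Multiply two frequency-domain sample arrays, skipping bins where
    either operand's magnitude is zero or near zero (illustrative sketch:
    only "active" bins would need to be transferred and multiplied)."""
    out = np.zeros_like(x_spec)
    active = (np.abs(x_spec) > eps) & (np.abs(h_spec) > eps)
    out[active] = x_spec[active] * h_spec[active]  # work done on active bins only
    return out

# Example: circular convolution of a block with an identity filter via the FFT.
x = np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
h = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # identity impulse
y = np.fft.ifft(sparse_freq_multiply(np.fft.fft(x), np.fft.fft(h))).real
```

Because the output for a skipped bin is exactly zero, the result matches the dense multiply whenever the skipped bins were (near-)zero anyway, which is what makes the omission lossless in practice.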
METHOD FOR CHORUS MIXING, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The present disclosure provides a method for chorus mixing, an apparatus, an electronic device, and a storage medium. The method includes converting a main vocal audio signal and a chorus audio signal into frequency-domain signals, respectively, wherein the chorus audio signal comprises main vocal audio played by a speaker; determining a delay between the main vocal audio signal and the chorus audio signal based on the frequency-domain signal of the main vocal audio signal and the frequency-domain signal of the main vocal audio played by the speaker included in the frequency-domain signal of the chorus audio signal; aligning the chorus audio signal with the main vocal audio signal based on the determined delay; performing echo cancellation on the aligned chorus audio signal; and mixing the main vocal audio signal with the echo-canceled chorus audio signal.
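One common way to realize the claimed frequency-domain delay determination is cross-correlation computed via the FFT, followed by alignment and mixing. The sketch below assumes exactly that; the function names, the non-negative-lag assumption, and the fixed mixing gain are illustrative choices, not the patented method itself, and the echo-cancellation step is omitted.

```python
import numpy as np

def estimate_delay(main, chorus):
    """Estimate how many samples `chorus` lags behind `main` using a
    frequency-domain cross-correlation (assumed realization)."""
    nfft = 1 << (len(main) + len(chorus)).bit_length()  # zero-pad to avoid wraparound
    corr = np.fft.irfft(np.conj(np.fft.rfft(main, nfft))
                        * np.fft.rfft(chorus, nfft), nfft)
    return int(np.argmax(corr))  # assumes a non-negative lag

def align_and_mix(main, chorus, gain=0.5):
    """Advance the chorus by the estimated delay, then mix the two tracks."""
    d = estimate_delay(main, chorus)
    aligned = chorus[d:d + len(main)]
    n = min(len(main), len(aligned))
    return main[:n] + gain * aligned[:n], d

# Example: the chorus track is the main vocal delayed by 5 samples.
rng = np.random.default_rng(0)
main = rng.standard_normal(256)
chorus = np.concatenate([np.zeros(5), main])
mixed, delay = align_and_mix(main, chorus)
```

Doing the correlation in the frequency domain is what makes this practical: it costs O(N log N) rather than the O(N²) of a direct time-domain correlation over all candidate lags.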
Effect addition device, effect addition method and storage medium
An effect addition device includes at least one processor that executes: a time domain convolution process of convolving a first time domain data part of impulse response data of a sound effect with time domain data of an original sound; a frequency domain convolution process of convolving a second time domain data part of the impulse response data with the time domain data of the original sound; a convolution extension process of extending the convolved state(s) of the output signal(s) resulting from the time domain convolution process and/or the frequency domain convolution process by arithmetic processing corresponding to an all-pass filter and/or a comb filter; and a synthesized sound effect addition process of adding, to the original sound, a sound effect synthesized by execution of the time domain convolution process, the frequency domain convolution process, and the convolution extension process.
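The split described above, early impulse-response taps convolved in the time domain and the long tail convolved in the frequency domain, can be sketched as below, with a feedback comb filter standing in for the convolution-extension step. The split point, FFT sizing, and comb parameters are illustrative assumptions, not the patent's exact partitioning.

```python
import numpy as np

def hybrid_convolve(x, ir, split=32):
    """Convolve `x` with impulse response `ir`: the first `split` taps by
    direct time-domain convolution (low latency), the remaining tail via
    one frequency-domain FFT multiply (illustrative sketch)."""
    head, tail = ir[:split], ir[split:]
    n = len(x) + len(ir) - 1
    y = np.zeros(n)
    # Time-domain part: early reflections.
    y[:len(x) + len(head) - 1] += np.convolve(x, head)
    if len(tail):
        # Frequency-domain part: long tail, delayed by `split` samples.
        nfft = 1 << (len(x) + len(tail)).bit_length()
        yt = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(tail, nfft), nfft)
        y[split:split + len(x) + len(tail) - 1] += yt[:len(x) + len(tail) - 1]
    return y

def comb_extend(y, delay=100, feedback=0.5):
    """Lengthen the decay with a feedback comb filter, analogous to the
    claimed convolution-extension step."""
    out = np.copy(y)
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]
    return out

# Sanity check: the hybrid result should match a plain full convolution.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
ir = rng.standard_normal(64)
y = hybrid_convolve(x, ir)
```

The appeal of the split is that the short time-domain head produces output immediately, while the expensive tail is handled with the efficiency of the FFT, and the cheap comb/all-pass extension can simulate a longer reverb tail than the stored impulse response provides.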
Modular Pedalboard Arrangement
A pedalboard arrangement (600), including: a right support end (102); a left support end (104); a pair of support members (106) configured to support a foot pedal for musical effects, each secured in position between the right support end and the left support end, and each including a clip feature (110) that extends along a long axis (112) of a respective support member; and an attachment device (300) including: a clip (304) configured to clip onto the clip feature and remain retained thereon via a resilience of the clip at a variety of locations along the long axis; and a clip connector (306) including a clip hole (308) through a first end (310) configured to receive a fastener associated with the foot pedal and a second end (322) connected to the clip.
Broad spectrum audio device designed to accelerate the maturation of stringed instruments
The present invention comprises a device and process designed to accelerate the maturation of stringed musical instruments, comprising, but not limited to, a broad spectrum audio generator coupled with one or more fasteners via one or more armatures dimensioned to allow easy installation, secure attachment, and easy uninstallation from the stringed musical instrument.
Neural modeler of audio systems
A neural network is trained to digitally model a reference audio system. Training is carried out by repeatedly performing a set of operations. The set of operations includes predicting, by the neural network, a model output based upon an input, where the output approximates an expected output of the reference audio system, and the prediction is carried out in the time domain. The set of operations also includes applying a perceptual loss function to the neural network based upon a determined psychoacoustic property, wherein the perceptual loss function is applied in the frequency domain. Moreover, the set of operations includes adjusting the neural network responsive to the output of the perceptual loss function. A neural model file is output that can be loaded to generate a virtualization of the reference audio system.
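A frequency-domain perceptual-style loss of the kind described, comparing time-domain predictions against the reference in the spectral domain, can be sketched as a log-magnitude STFT loss. This stand-in and its parameters (frame size, hop, `eps`) are assumptions; the patent's actual psychoacoustic weighting is not specified here.

```python
import numpy as np

def log_spectral_loss(pred, target, nfft=256, hop=128, eps=1e-7):
    """Mean absolute difference of log-magnitude spectra between a
    time-domain prediction and a time-domain target (illustrative
    stand-in for a psychoacoustically weighted loss)."""
    def stft_mag(x):
        frames = [x[i:i + nfft] * np.hanning(nfft)
                  for i in range(0, len(x) - nfft + 1, hop)]
        return np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    p, t = stft_mag(pred), stft_mag(target)
    return float(np.mean(np.abs(np.log(p + eps) - np.log(t + eps))))

# The loss is zero for identical signals and grows with spectral error.
rng = np.random.default_rng(2)
clean = rng.standard_normal(1024)
noisy = clean + 0.3 * rng.standard_normal(1024)
```

Comparing log magnitudes rather than raw waveforms is what makes such a loss "perceptual" in spirit: it emphasizes relative spectral errors, which roughly tracks loudness perception, while remaining indifferent to inaudible phase differences.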