Patent classifications
G10H2210/315
Neural modeler of audio systems
A neural network is trained to digitally model a reference audio system. Training is carried out by repeatedly performing a set of operations. The set of operations includes predicting, by the neural network, a model output based upon an input, where the output approximates an expected output of the reference audio system and the prediction is carried out in the time domain. The set of operations also includes applying a perceptual loss function to the neural network based upon a determined psychoacoustic property, wherein the perceptual loss function is applied in the frequency domain. Moreover, the set of operations includes adjusting the neural network responsive to the output of the perceptual loss function. A neural model file is output that can be loaded to generate a virtualization of the reference audio system.
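As a concrete, heavily simplified illustration of the claimed training loop, the sketch below fits a tiny FIR stand-in for the network: prediction happens in the time domain, while the loss compares magnitude spectra in the frequency domain as a crude proxy for a psychoacoustic weighting. The FIR model, finite-difference gradients, and plain spectral MSE are all illustrative assumptions, not the patent's method.

```python
import numpy as np

def perceptual_loss(pred, target, n_fft=256):
    # Frequency-domain loss: MSE between magnitude spectra, a simple
    # stand-in for a psychoacoustically weighted perceptual loss.
    P = np.abs(np.fft.rfft(pred, n=n_fft))
    T = np.abs(np.fft.rfft(target, n=n_fft))
    return float(np.mean((P - T) ** 2))

def train(x, y, taps=4, lr=1e-3, steps=200):
    # Time-domain "network": a small FIR filter, adjusted by numerical
    # gradients of the frequency-domain loss.
    w = 0.01 * np.random.default_rng(1).standard_normal(taps)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(taps):
            d = np.zeros_like(w)
            d[i] = 1e-4
            up = perceptual_loss(np.convolve(x, w + d)[: len(x)], y)
            dn = perceptual_loss(np.convolve(x, w - d)[: len(x)], y)
            grad[i] = (up - dn) / 2e-4
        w -= lr * grad
    return w

# Reference "audio system": a known FIR filter the model should imitate.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
y = np.convolve(x, np.array([0.8, 0.2, -0.1, 0.05]))[: len(x)]
w = train(x, y)
```

After training, the frequency-domain loss of the fitted model is well below that of predicting silence, which is the convergence behavior the abstract describes.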
Patient tailored system and process for treating ASD and ADHD conditions using music therapy and mindfulness with Indian Classical Music
A method, system, and processes to develop a patient-tailored music therapy based on Indian Classical Music compositions to treat ASD (Autistic Spectrum Disorders) and ADHD (Attention Deficit Hyperactivity Disorder) are described. According to the present invention, there is provided a method to develop a tailored music therapy for treating patients suffering from ASD and ADHD based on the patient's response, together with a system to measure the patient's response to music therapy and mindfulness inputs. The invention comprises a process to determine a suitable playlist of Indian Classical Music compositions for use in treating the patient (FIG. 1), followed by further tuning of the selections' allowable note levels, ramp-up and ramp-down times to and from those levels, melody hold times, and rhythm pattern selection to develop an optimum waveform (FIG. 2), all based on measuring the patient's response using a multiple-input measurement system covering physical movement, audio, and brain-wave response (FIG. 3) or through visual observations. The invention also provides a process to determine daily therapy and mindfulness time and a process for monthly music therapy and mindfulness tailoring, as well as a system (FIG. 3) to measure the patient's response to the music therapy and mindfulness, which can be used in conjunction with or in place of visual observations. In this invention, the patient starts with a therapy and mindfulness tailoring session in which a playlist of Indian Classical Music Raga compositions is first developed, selected based on the patient's response as measured by the system of FIG. 3 or through visual observations. Patient-specific optimum note level, beat rhythm pattern and rhythm pattern frequency, and ramp-up and ramp-down times to and from the optimum note levels are then determined based on the patient's response to create a waveform (FIG. 2).
The playlist selections are then modified, manually or by a computer program, using the waveform parameters so that, when played to the patient, they elicit a Calm Range Response pattern: a state of stimulated mindfulness short of falling asleep, characterized by a range of motion, audio, or brain-wave response unique to the patient. The specific pieces of the waveform are derived by varying waveform parameters and measuring the patient's response (FIGS. 4A-4D) using the response measuring system (FIG. 3) or through visual observations. The invention also describes a process to develop the daily listening period duration (FIG. 5). The invention describes a process used
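The note-level shaping described above (ramp up to an allowable level, hold for the melody hold time, ramp back down) can be sketched as an amplitude envelope generator. The function name, the fixed control rate, and the linear ramps are illustrative assumptions, not the patented tuning procedure.

```python
import numpy as np

def note_level_waveform(level, ramp_up_s, hold_s, ramp_down_s, rate=100):
    # Amplitude envelope: linear ramp up to the allowable note level,
    # hold for the melody hold time, then linear ramp back down.
    up = np.linspace(0.0, level, int(ramp_up_s * rate), endpoint=False)
    hold = np.full(int(hold_s * rate), level)
    down = np.linspace(level, 0.0, int(ramp_down_s * rate))
    return np.concatenate([up, hold, down])

env = note_level_waveform(level=0.8, ramp_up_s=1.0, hold_s=2.0, ramp_down_s=1.0)
```

Varying `level`, the ramp times, and `hold_s` per patient corresponds to the parameter search the abstract describes for deriving the optimum waveform.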
NEURAL MODELER OF AUDIO SYSTEMS
A process is provided for training a neural network that digitally models an audio system. A sound source is utilized to electrically couple a test signal into an input of a reference audio system. The output of the reference audio system is captured by an audio interface coupled to a computer. A neural network is then trained using the test signal and the captured output to derive a set of weight vectors with values such that the overall output of the neural network converges towards an output representative of the reference audio system, and a signal in the time domain from a musical instrument is processed through the trained neural network with a latency under 20 milliseconds. A graphical user interface then outputs a graphical representation of the trained neural network, where the graphical representation visually displays at least one virtual control for interaction by a user.
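For block-based processing, the dominant latency floor is the buffer duration (block size divided by sample rate); the sketch below checks that a small processing block stays well under the 20-millisecond budget claimed above. The 256-sample block, the 48 kHz rate, and the FIR stand-in for the trained network are illustrative assumptions.

```python
import time
import numpy as np

SAMPLE_RATE = 48_000
BLOCK = 256  # samples per processing block

def buffer_latency_ms(block, rate):
    # Time spanned by one audio buffer: the floor on processing latency.
    return 1000.0 * block / rate

def process_block(weights, block_in):
    # Stand-in for the trained network: per-block FIR convolution.
    return np.convolve(block_in, weights)[: len(block_in)]

w = np.array([0.9, 0.1, -0.05])
t0 = time.perf_counter()
y = process_block(w, np.zeros(BLOCK))
compute_ms = 1000.0 * (time.perf_counter() - t0)
total_ms = buffer_latency_ms(BLOCK, SAMPLE_RATE) + compute_ms
```

At 48 kHz a 256-sample buffer contributes about 5.3 ms, leaving most of the 20 ms budget for the network's compute time.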
DEEP ENCODER FOR PERFORMING AUDIO PROCESSING
Embodiments are disclosed for performing audio signal processing with a deep encoder. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an unprocessed audio sequence and a request to perform an audio signal processing effect on the unprocessed audio sequence. The one or more embodiments further include analyzing, by a deep encoder, the unprocessed audio sequence to determine parameters for processing it. The one or more embodiments further include sending the unprocessed audio sequence and the parameters to one or more audio signal processing effect plugins, which perform the requested effect using the parameters, and outputting a processed audio sequence.
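A minimal sketch of the encoder-plus-plugin split, with hand-rolled stand-ins: a toy "encoder" maps the unprocessed audio to compressor parameters (here via crest factor), and a toy compressor "plugin" applies them. The parameter names and heuristics are assumptions for illustration, not the disclosed deep encoder.

```python
import numpy as np

def encoder(audio):
    # Toy "encoder": derive compressor parameters from signal statistics.
    peak = float(np.max(np.abs(audio)))
    rms = float(np.sqrt(np.mean(audio ** 2)))
    crest = peak / max(rms, 1e-9)  # crest factor drives the ratio
    return {"threshold": 0.5 * peak, "ratio": min(crest, 8.0)}

def compressor_plugin(audio, threshold, ratio):
    # Toy "plugin": reduce the level of samples above the threshold.
    out = audio.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (
        threshold + (np.abs(out[over]) - threshold) / ratio
    )
    return out

audio = np.array([0.1, -0.2, 1.0, 0.3])
params = encoder(audio)  # parameters determined from the raw audio
processed = compressor_plugin(audio, **params)
```

The key architectural point survives the simplification: the raw audio and the derived parameters travel together to the effect plugin, which does the actual processing.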
Musical instrument effects processor
A method in accord with certain implementations involves, at a data interface of a musical instrument effects processor, receiving an extracted characteristic of an audible sound captured at a microphone; transferring the extracted characteristic to a digital signal processor residing in the musical instrument effects processor; receiving input signals at an input to the musical instrument effects processor; at the digital signal processor, modifying the received input signals using the extracted characteristic to create an electronic audio effect; and outputting the modified input signals as an output signal from the musical instrument effects processor. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
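One hedged reading of "modifying the input signals using the extracted characteristic": treat the characteristic as a per-frame RMS envelope of the microphone capture and use it to modulate the instrument signal. The hop size and the amplitude-modulation effect are illustrative choices, not the claimed processor.

```python
import numpy as np

def extract_envelope(mic, hop=64):
    # Extracted characteristic: per-hop RMS envelope of the captured sound.
    n = len(mic) // hop
    frames = mic[: n * hop].reshape(n, hop)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def apply_effect(inst, env, hop=64):
    # Modify the instrument input: modulate its amplitude by the envelope.
    gain = np.repeat(env, hop)
    m = min(len(inst), len(gain))
    return inst[:m] * gain[:m]

mic = np.full(256, 0.5)  # stand-in for the captured audible sound
out = apply_effect(np.ones(256), extract_envelope(mic))
```

Here the envelope extraction models the work done before the data interface, and `apply_effect` models the DSP stage inside the effects processor.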
CONTENT CONTROL DEVICE AND STORAGE MEDIUM
A content control device includes: a plurality of controls to which a plurality of parameters for controlling properties of a content containing at least one of sound and video are respectively assigned, each of the plurality of controls outputting a first indicated value in accordance with an operation amount of the control; and a processor configured to create, in advance, setting information used to determine respective values of the plurality of parameters in accordance with a second indicated value; determine the values of the plurality of parameters in accordance with the second indicated value and the setting information; and revise each of the determined parameter values in accordance with the first indicated value outputted for the control assigned to that parameter.
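The determine-then-revise flow can be sketched with breakpoint tables as the setting information: one master (second) indicated value sets every parameter by interpolation, and any individual control's (first) indicated value overrides its own parameter. The parameter names, ranges, and breakpoints are made-up examples.

```python
import numpy as np

# Hypothetical setting information: breakpoints mapping a master value
# in [0, 1] to each parameter's value.
SETTINGS = {
    "volume": ([0.0, 1.0], [0.0, 100.0]),
    "reverb": ([0.0, 0.5, 1.0], [0.0, 80.0, 20.0]),
}

def determine(master):
    # Determine every parameter value from the single master value.
    return {name: float(np.interp(master, xs, ys))
            for name, (xs, ys) in SETTINGS.items()}

def revise(values, name, indicated):
    # Revise one parameter using its own control's indicated value.
    out = dict(values)
    out[name] = indicated
    return out

values = determine(0.5)
revised = revise(values, "volume", 10.0)
```

This captures the division of labor in the abstract: bulk changes come from the single master value via the setting information, while individual controls retain per-parameter authority.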
METHOD FOR SONG MULTIMEDIA SYNTHESIS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The disclosure provides a method for synthesizing song multimedia, an electronic device, and a storage medium. Material obtaining modes are provided based on a song multimedia synthesis request. User audio provided by a user is obtained based on a selected material obtaining mode. A user timbre output by a timbre extraction model is obtained by inputting the user audio into the timbre extraction model. Lyrics to be synthesized and a tune to be synthesized, provided by the user, are obtained based on the selected material obtaining mode, and synthesized song multimedia is obtained by inputting the user timbre, the lyrics to be synthesized, and the tune to be synthesized into a song synthesis model.
HARMONIC-BASED INTENSITY REGULATION SYSTEM, METHOD AND DEVICE FOR SOUND ASSETS
A harmonic-based intensity regulation system, method and device, disclosed herein, is applicable to sound assets. The system, in an embodiment, includes one or more data storage devices or tangible mediums that store or include a plurality of computer readable instructions configured to direct one or more processors to receive one or more inputs. The one or more inputs correspond to: a selection of one of a plurality of different harmonic classes, wherein each of the harmonic classes is associated with a variable harmonic frequency; a frequency range that is dimensioned or otherwise great enough to bound the variable harmonic frequency of the selected harmonic class and a plurality of other frequencies; an intensity threshold; and an amount of an intensity change. The instructions are also configured to direct the one or more processors to detect whether one or more frequencies of a frequency spectrum satisfy a regulation condition. The regulation condition includes a first requirement for the one or more frequencies to be within the frequency range, and the regulation condition also includes a second requirement for the one or more frequencies to have a designated relationship with the intensity threshold. The instructions are configured to direct the one or more processors to change an intensity of the detected one or more frequencies by at least part of the amount. The change can include an intensity decrease, an intensity elimination or an intensity increase.
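The regulation condition (frequency inside the range AND intensity beyond the threshold) can be sketched with a single FFT pass. The 20 dB cut, the magnitude threshold in raw FFT units, and the single-frame analysis are illustrative assumptions, not the claimed system.

```python
import numpy as np

def regulate(signal, rate, f_lo, f_hi, threshold, change_db):
    # Attenuate spectral bins that satisfy both parts of the regulation
    # condition: frequency within [f_lo, f_hi] and magnitude > threshold.
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    cond = (freqs >= f_lo) & (freqs <= f_hi) & (np.abs(spec) > threshold)
    spec[cond] *= 10.0 ** (-change_db / 20.0)  # intensity decrease
    return np.fft.irfft(spec, n=len(signal))

rate = 48_000
t = np.arange(4800) / rate
sig = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2000 * t)
out = regulate(sig, rate, f_lo=300.0, f_hi=600.0,
               threshold=100.0, change_db=20.0)
```

A 440 Hz harmonic falls inside the 300-600 Hz range and exceeds the threshold, so it is attenuated; the 2000 Hz component fails the first requirement and passes through unchanged.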
Audio file envelope based on RMS power in sequences of sub-windows
A method comprising determining an envelope of an audio file based on a double-windowing analysis of the audio file.
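One hedged reading of the double-windowing analysis: split the file into outer windows, compute RMS power over the sub-windows of each, and keep the peak sub-window RMS as that outer window's envelope value. The window sizes and the max reduction are assumptions for illustration.

```python
import numpy as np

def double_window_envelope(audio, outer=1024, sub=128):
    # Outer windows are split into sub-windows; each envelope point is
    # the peak RMS among that outer window's sub-windows.
    env = []
    for start in range(0, len(audio) - outer + 1, outer):
        subs = audio[start : start + outer].reshape(-1, sub)
        rms = np.sqrt(np.mean(subs ** 2, axis=1))
        env.append(float(rms.max()))
    return np.array(env)
```

The two window scales give the envelope both coarse time resolution (one point per outer window) and sensitivity to short bursts (via the sub-window RMS).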
Systems and methods for generating haptic output for enhanced user experience
Systems and methods for generating a haptic output from an audio signal having a continuous stream of sampled digital audio data are provided. A haptic processing system receives the digital audio data, analyses it for processing, and extracts haptic signals for generating a haptic effect through an actuator. The method includes passing the digital audio signal through one or more dynamic processors, adjusting the dynamic range of the digital audio signal, extracting the signal envelope of the audio data, synthesising low-frequency signals from the extracted signal envelope, and enhancing the low-frequency content using a resonator. The haptic output is generated by mixing the digital audio signal with outputs from the different modules of the haptic processing system. An analytics module monitors, controls, and adjusts the processing of the digital audio signal at the noise gate module, the compressor module, and the envelope module to enhance the haptic output.
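The envelope-extraction and low-frequency synthesis stages can be sketched as below: a one-pole smoother over the rectified signal stands in for the envelope module, and a fixed low-frequency carrier driven by that envelope stands in for the synthesis stage. The 60 Hz carrier and smoothing constant are illustrative; the noise gate, compressor, and resonator stages are omitted.

```python
import numpy as np

def smooth_envelope(audio, alpha=0.99):
    # Envelope extraction: one-pole smoothing of the rectified signal.
    env = np.empty(len(audio))
    e = 0.0
    for i, s in enumerate(np.abs(audio)):
        e = alpha * e + (1.0 - alpha) * s
        env[i] = e
    return env

def synthesize_haptic(audio, rate, f=60.0):
    # Low-frequency synthesis: a carrier the actuator can reproduce,
    # amplitude-modulated by the extracted envelope.
    t = np.arange(len(audio)) / rate
    return smooth_envelope(audio) * np.sin(2 * np.pi * f * t)
```

Mixing this synthesized low-frequency signal back with the original audio, as the abstract describes, would then yield the combined haptic drive signal.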