Patent classifications
G10H2210/301
Equalizer controller and controlling method
An equalizer controller and a controlling method are disclosed. In one embodiment, an equalizer controller includes an audio classifier for identifying the audio type of an audio signal in real time; and an adjusting unit for adjusting an equalizer in a continuous manner based on the confidence value of the identified audio type.
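The confidence-weighted continuous adjustment described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the preset names, the five-band layout, and the smoothing rate are all assumptions made for the example.

```python
# Hypothetical per-band EQ presets (gains in dB) for three audio types.
PRESETS = {
    "music":  [3.0, 1.0, 0.0, 1.0, 2.0],
    "speech": [-2.0, 1.0, 3.0, 2.0, -1.0],
    "noise":  [0.0, 0.0, 0.0, 0.0, 0.0],
}

def adjust_equalizer(current_gains, audio_type, confidence, rate=0.1):
    """Move each band gain toward the preset for the identified type,
    scaled by the classifier's confidence value, so low-confidence
    frames barely change the curve (continuous, not abrupt, switching)."""
    target = PRESETS[audio_type]
    step = rate * confidence
    return [g + step * (t - g) for g, t in zip(current_gains, target)]
```

Calling this once per analysis frame makes the EQ curve glide toward the preset of the dominant class, with the glide speed proportional to how certain the classifier is.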
Vehicle, control method of vehicle, and vehicle driving sound control apparatus and method
A user may freely generate a vehicle driving sound using a mounted mobile terminal, producing various types of driving sounds through a speaker. The vehicle includes a setting unit configured to set response characteristics of the vehicle driving sound for at least one of a plurality of pieces of vehicle state information, and to set a section-specific volume of the driving sound for at least one of the set response characteristics, including revolutions per minute (RPM); and a controller configured to control output of the driving sound generated according to the settings and to cause sound transfer characteristics unique to the vehicle to be reflected when the driving sound is generated and output.
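The section-specific volume setting can be pictured as a lookup from the current RPM into user-configured RPM sections. The section boundaries and volume values below are illustrative assumptions, not values from the patent.

```python
# Hypothetical RPM sections: (exclusive upper bound in RPM, volume 0..1).
SECTIONS = [
    (1500, 0.2),
    (3000, 0.5),
    (4500, 0.8),
    (float("inf"), 1.0),
]

def section_volume(rpm):
    """Return the configured driving-sound volume for the RPM section
    that contains the current engine speed."""
    for upper, vol in SECTIONS:
        if rpm < upper:
            return vol
```

At playback time the controller would scale the generated driving sound by this volume before applying the vehicle-specific transfer characteristics.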
Instant-on One-button Aural Ambiance Modification And Enhancement
An instant-on, nothing-to-download soundscaping device is provided that produces natural, atonal sounds such as the rolling surf of the ocean, running streams, gurgling brooks, rain, thunder, wind, and crowd sounds.
Joint and coordinated visual-sonic metaphors for interactive multi-channel data sonification to accompany data visualization
Data sonification arrangements for use with data visualization so as to provide parallel perceptual channels for representing complex numerical data to a user seeking to identify correlations within the data are presented. In an implementation, several varying data quantities are represented by time-varying graphics while several other varying data quantities are represented by time-varying sound, both presented to a user to observe correlations between sonic and visual events or trends. Sonification can be used to offload some information-carrying capacity from a visualization system, while other information can be rendered via both sonification and visualization to provide affirming or orienting redundancy. In an implementation, joint and coordinated visual-sonic metaphors are used for this or other purposes. For example, data sonification can include multiple data-modulated sound timbre classes, each rendered within a stereo sound field according to a spatial metaphor that is shared with the visualization.
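A shared visual-sonic spatial metaphor can be sketched by letting a data point's x position drive both its horizontal plot location and its stereo pan, so that where a sound appears in the stereo field mirrors where the point appears on screen. The pan law and scaling below are assumptions for illustration only.

```python
import math

def pan_gains(x, x_min, x_max):
    """Map x in [x_min, x_max] to constant-power left/right channel gains,
    so a point on the left of the plot also sounds from the left."""
    t = (x - x_min) / (x_max - x_min)        # 0 (far left) .. 1 (far right)
    theta = t * math.pi / 2                  # constant-power pan law
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)
```

Each data-modulated timbre class would be rendered through its own pan position computed this way, keeping the sonic layout consistent with the visual one.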
Automatic multi-channel music mix from multiple audio stems
Automatic mixers and methods for creating a surround audio mix are disclosed. A set of rules may be stored in a rule base. A rule engine may select a subset of the set of rules based, at least in part, on metadata associated with a plurality of stems. A mixing matrix may mix the plurality of stems in accordance with the selected subset of rules to provide three or more output channels.
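The rule-base plus mixing-matrix idea can be sketched as metadata predicates that select per-channel gains, applied over a minimal left/center/right layout. The rule conditions, metadata keys, and gain values are illustrative assumptions, not the patented rule set.

```python
# Hypothetical rule base: (predicate over stem metadata, gains [L, C, R]).
RULES = [
    (lambda m: m.get("role") == "vocal", [0.2, 1.0, 0.2]),
    (lambda m: m.get("role") == "drums", [0.7, 0.3, 0.7]),
    (lambda m: True,                     [0.5, 0.5, 0.5]),  # fallback rule
]

def mix(stems):
    """stems: list of (metadata, samples). Returns three output channels,
    each the gain-weighted sum of all stems, with gains chosen by the
    first rule whose predicate matches the stem's metadata."""
    n = len(stems[0][1])
    out = [[0.0] * n for _ in range(3)]
    for meta, samples in stems:
        gains = next(g for pred, g in RULES if pred(meta))
        for ch in range(3):
            for i, s in enumerate(samples):
                out[ch][i] += gains[ch] * s
    return out
```

Here the rule engine is the `next(...)` lookup and the mixing matrix is the per-channel gain application; a real system would select a rule subset rather than a single rule per stem.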
Directional audio for virtual environments
Technology is described for providing audio for digital content. The digital content may be provided to a plurality of devices. Each device may be associated with a profile. At least one group of profiles may be identified from the plurality of devices that share an affiliation. The group of profiles may be represented as a group of environment objects in the digital content. A location may be identified within the digital content that corresponds to the group of environment objects. Audio may be received for the devices. The audio may be received while the digital content is being transmitted to the plurality of devices. The audio may be transmitted to the plurality of devices for directional audio playback. The audio may be directed to correspond with the virtual location of the at least one group of environment objects in the digital content.
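The profile-grouping step can be sketched as collecting device profiles by a shared affiliation field, so each group can then be represented as environment objects and assigned a virtual audio location. The field names are assumptions for the example.

```python
from collections import defaultdict

def group_by_affiliation(profiles):
    """profiles: list of dicts with 'device_id' and 'affiliation' keys.
    Returns a mapping from affiliation to the list of device ids that
    share it, i.e. the groups to represent in the digital content."""
    groups = defaultdict(list)
    for p in profiles:
        groups[p["affiliation"]].append(p["device_id"])
    return dict(groups)
```

Audio received from the devices in one group would then be played back directionally from that group's virtual location.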
Method of music instruction
A method of music instruction utilizing a system capable of producing two or more sounds perceived by the user as originating at specific locations in three-dimensional space relative to the user. The system comprises a user interface, a sound generator, a transmitter, and a monitoring device, and the method comprises the steps of obtaining the system; interacting with the user interface to provide instruction to the sound generator; interacting with the monitoring device; generating the sounds perceived to emanate from locations in three-dimensional space based on the instruction provided; transmitting output signals to the monitoring device; and perceiving the sounds by means of the monitoring device, the method being practiced while the user either plays or does not play a musical instrument.
Systems and methods for acoustic simulation
Systems and methods for acoustic simulation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for simulating acoustic responses, including obtaining a digital model of an object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.
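One way to picture the conflation step is grouping vibrational modes into "chords" whose frequencies are mutually well separated, so that a chord's combined sound field can later be deconflated back into per-mode fields (for example by narrowband filtering). The greedy grouping and the separation threshold below are illustrative assumptions, not the patented algorithm.

```python
def conflate_modes(mode_freqs, min_sep=100.0):
    """Greedily assign each mode frequency (Hz) to the first chord in
    which it lies at least min_sep away from every existing member,
    so modes within one chord remain spectrally distinguishable."""
    chords = []
    for f in sorted(mode_freqs):
        for chord in chords:
            if all(abs(f - g) >= min_sep for g in chord):
                chord.append(f)
                break
        else:
            chords.append([f])  # no compatible chord; start a new one
    return chords
```

Simulating one time-domain sound field per chord instead of per mode reduces the number of expensive acoustic solves, while the frequency separation keeps deconflation into modal sound fields feasible.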
Method and system for instrument separating and reproducing for mixture audio source
A method and a system for instrument separation and reproduction from a mixture audio source are provided. The method and/or system includes inputting selected music into an instrument separation model to extract features, determining multi-channel audio source signals for the separation of all instruments, with each channel containing the sound of one instrument, and transmitting the signals of the different channels to multiple speakers placed at designated positions for playback, which can reproduce or recreate an immersive sound-field listening experience for users.
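The final routing step can be sketched as a mapping from separated instrument channels to speakers at designated positions, treating the separation model itself as a black box. The instrument names and speaker layout are assumptions for the example.

```python
# Hypothetical speaker layout: instrument -> designated speaker position.
SPEAKER_FOR = {
    "vocals": "center",
    "drums": "rear",
    "bass": "front-left",
    "other": "front-right",
}

def route(separated):
    """separated: dict mapping instrument name to its audio samples
    (one channel per instrument, as output by the separation model).
    Returns a dict mapping speaker position to the samples to play."""
    return {SPEAKER_FOR[name]: samples for name, samples in separated.items()}
```

Playing each instrument from its own physical position is what recreates the spatial, immersive impression of the original sound field.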