Patent classifications
H04S1/005
SIGNAL PROCESSING METHODS AND SYSTEMS FOR RENDERING AUDIO ON VIRTUAL LOUDSPEAKER ARRAYS
Techniques of rendering audio involve applying a balanced-realization state space model to each head-related transfer function (HRTF) to reduce the order of an effective FIR or even an infinite impulse response (IIR) filter. Along these lines, each HRTF G(z) is derived from a head-related impulse response (HRIR) filter via, e.g., a z-transform. The data of the HRIR may be used to construct a first state space representation [A, B, C, D] of the HRTF via the relation G(z) = C(zI − A)^(−1)B + D. This first state space representation is not unique, and so for an FIR filter, A and B may be set to simple, binary-valued arrays, while C and D contain the HRIR data. This representation leads to a simple form of a Gramian Q whose eigenvectors provide system states that maximize the system gain as measured by a Hankel norm. Further, a factorization of Q provides a transformation into a balanced state space in which the Gramian is equal to a diagonal matrix of the eigenvalues of Q. By considering only those states associated with an eigenvalue greater than some threshold, the balanced state space representation of the HRTF may be truncated to provide an approximation that matches the original HRTF very well while reducing the amount of computation required by as much as 90%.
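As a rough illustration of the balanced-truncation idea (not the patented implementation), the sketch below builds the companion-form realization of a short, made-up HRIR, computes the observability Gramian and its eigendecomposition, and truncates the balanced realization by a singular-value threshold. The HRIR values and the threshold are illustrative only.

```python
import numpy as np

# Made-up short HRIR for illustration; measured HRIRs have hundreds of taps.
h = np.array([0.0, 1.0, 0.5, 0.25, 0.12, 0.06, 0.03])
n = len(h) - 1

# Companion-form realization of G(z) = C(zI - A)^(-1)B + D:
# A and B are simple binary-valued arrays; C and D carry the HRIR data.
A = np.eye(n, k=-1)                  # shift matrix
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = h[1:][None, :]
D = h[0]

# With this A and B the controllability Gramian is the identity, so the
# observability Gramian Q alone determines the Hankel singular values.
Q = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(n):                   # A is nilpotent, so the sum is finite
    Q += Ak.T @ (C.T @ C) @ Ak
    Ak = Ak @ A

lam, U = np.linalg.eigh(Q)
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]
sigma = np.sqrt(np.maximum(lam, 0.0))    # Hankel singular values

# Square-root balanced truncation: keep states whose singular value
# exceeds a (purely illustrative) threshold.
r = int(np.sum(sigma > 1e-3 * sigma[0]))
S1 = np.diag(sigma[:r] ** 0.5) @ U[:, :r].T   # r x n projection
T1 = U[:, :r] @ np.diag(sigma[:r] ** -0.5)    # n x r projection
Ar, Br, Cr = S1 @ A @ T1, S1 @ B, C @ T1

# Compare the reduced model's impulse response with the original HRIR.
g, x = [D], Br.copy()
for _ in range(n):
    g.append(float(Cr @ x))
    x = Ar @ x
err = float(np.max(np.abs(np.array(g) - h)))
print(f"kept {r}/{n} states, max impulse-response error {err:.2e}")
```

Because the Hankel singular values of typical HRIRs decay quickly, most states fall below the threshold, which is what allows the large computational savings described above.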
Audio system using individualized sound profiles
A system for presenting audio content to a user. The system comprises one or more microphones coupled to a frame of a headset. The one or more microphones capture sound from a local area. The system further comprises an audio controller integrated into the headset and communicatively coupled to an in-ear device worn by the user. The audio controller identifies one or more sound sources in the local area based on the captured sound. The audio controller further determines a target sound source of the one or more sound sources and determines one or more filters to apply to a sound signal associated with the target sound source in the captured sound. The audio controller further generates an augmented sound signal by applying the one or more filters to the sound signal and provides the augmented sound signal to the in-ear device for presentation to the user.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
An information processing apparatus includes a holding unit configured to hold a plurality of head related transfer functions for outputting directional sound in a plurality of directions, a setting unit configured to set a direction in which a first head related transfer function and a second head related transfer function are switched, based on characteristics of the first head related transfer function and the second head related transfer function, and a switching unit configured to switch a head related transfer function used to output the directional sound between the first head related transfer function and the second head related transfer function in the set direction.
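A minimal sketch of one way such a switching direction might be chosen, assuming the "characteristics" compared are the magnitude responses of the two HRTF sets; the least-difference criterion and the array names (`hrtf_a`, `hrtf_b`, one HRIR per direction) are my assumptions, not necessarily the patented criterion.

```python
import numpy as np

def switching_direction(hrtf_a, hrtf_b, azimuths):
    """Pick the azimuth at which to switch between two held HRTF sets:
    here, the direction where their magnitude responses differ least,
    so the handover is least audible."""
    diff = [np.linalg.norm(np.abs(np.fft.rfft(hrtf_a[i])) -
                           np.abs(np.fft.rfft(hrtf_b[i])))
            for i in range(len(azimuths))]
    return azimuths[int(np.argmin(diff))]

def select_hrtf(hrtf_a, hrtf_b, azimuth, switch_at):
    """Use the first HRTF below the switching direction, the second at
    or above it."""
    return hrtf_a if azimuth < switch_at else hrtf_b
```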
AUDIO ENHANCEMENT FOR HEAD-MOUNTED SPEAKERS
Embodiments herein are primarily described in the context of a system, a method, and a non-transitory computer-readable medium for producing sound with enhanced spatial detectability and a crosstalk simulation. The audio processing system receives left and right input channels of an audio input signal and performs audio processing to generate an output audio signal. The system generates left and right spatially enhanced signals by gain-adjusting side subband components and mid subband components of the left and right input channels. The audio processing system generates left and right crosstalk channels, e.g., by applying a filter and time delay to the left and right input channels, and mixes the spatially enhanced channels with the crosstalk channels. In some embodiments, the system includes high/low-frequency enhancement channels and passthrough channels derived from the input channels, which can be mixed with the output audio signal.
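A minimal sketch of the mid/side enhancement plus crosstalk mix described above; the gain values, the delay, and the absence of per-subband processing are simplifications for illustration, not the patented tuning.

```python
import numpy as np

def enhance_and_crosstalk(left, right, side_gain=1.5, mid_gain=0.8,
                          xtalk_gain=0.3, xtalk_delay=8):
    """Boost the side (L-R) component, attenuate the mid (L+M), then mix
    in a delayed, attenuated copy of the opposite channel as crosstalk.
    All gains and the delay (in samples) are made-up demo values."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    # Gain-adjust mid and side (a real system does this per subband).
    mid_e, side_e = mid_gain * mid, side_gain * side
    l_e, r_e = mid_e + side_e, mid_e - side_e

    def delay(x, d):
        return np.concatenate([np.zeros(d), x[:-d]]) if d else x

    # Crosstalk channels: delayed, attenuated copy of the opposite ear.
    l_x = xtalk_gain * delay(r_e, xtalk_delay)
    r_x = xtalk_gain * delay(l_e, xtalk_delay)
    return l_e + l_x, r_e + r_x
```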
AUDIO MIXER AND METHOD OF PROCESSING SOUND SIGNAL
An audio mixer includes a user interface, panners, a first adder, a localization device, a second adder, and an output circuit. The user interface supplies a first parameter and a second parameter for each channel based on a user operation. The first parameter indicates a position in a right-left direction. The second parameter specifies internalization or externalization. The panners respectively correspond to channels and, based on the first parameter, pan a sound signal corresponding to each channel to generate first stereo signals. The first adder generates a second stereo signal by mixing first stereo signals respectively corresponding to externalization channels. The localization device generates two third stereo signals. The second adder generates a fourth stereo signal by mixing the two third stereo signals and first stereo signals respectively corresponding to internalization channels. The output circuit outputs the fourth stereo signal.
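The adder structure above can be sketched as follows, with a constant-power panner and an identity placeholder standing in for the localization device; the function names and the pan law are illustrative assumptions.

```python
import numpy as np

def pan(x, pos):
    """Constant-power panner; pos in [-1 (full left), +1 (full right)]."""
    theta = (pos + 1.0) * np.pi / 4.0
    return np.cos(theta) * x, np.sin(theta) * x

def localize(l, r):
    """Stand-in for the localization device (e.g. a binaural stage);
    an identity placeholder here."""
    return l, r

def mix(channels):
    """channels: iterable of (mono signal, pan position, externalize flag).
    Externalization channels are summed (second stereo signal), passed
    through the localization stage (third stereo signals), then summed
    with the internalization channels (fourth stereo signal)."""
    n = len(channels[0][0])
    ext = [np.zeros(n), np.zeros(n)]
    intr = [np.zeros(n), np.zeros(n)]
    for x, pos, external in channels:
        l, r = pan(x, pos)                 # first stereo signals
        bus = ext if external else intr
        bus[0] += l
        bus[1] += r
    loc_l, loc_r = localize(*ext)
    return intr[0] + loc_l, intr[1] + loc_r
```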
SOUND IMAGE DIRECTION SENSE PROCESSING METHOD AND APPARATUS
According to a sound image direction sense processing method and apparatus, a left-ear channel signal, a right-ear channel signal, and a centered channel signal of a sound source are obtained; whether a direction of the sound source is a front direction is determined according to the left-ear channel signal, the right-ear channel signal, and the centered channel signal; and when the direction of the sound source is the front direction, at least one of front direction enhancing processing or rear direction weakening processing is performed separately on the left-ear channel signal and the right-ear channel signal. Therefore, a difference between the front direction sense and the rear direction sense of a sound image may be enlarged, so that the accuracy of determining the direction of a sound source may be improved.
DIGITAL AUDIO PROCESSING SYSTEMS AND METHODS
A system for processing audio data of the present disclosure has an audio processing device for receiving audio data from an audio source. Additionally, the system has logic that separates the received audio data into left channel audio data indicative of sound from a left audio source and right channel audio data indicative of sound from a right audio source. The logic further separates the left channel audio data into primary left ear audio data and opposing right ear audio data, and the right channel audio data into primary right ear audio data and opposing left ear audio data. It applies a first filter to the primary left ear audio data, a second filter to the opposing right ear audio data, a third filter to the opposing left ear audio data, and a fourth filter to the primary right ear audio data, wherein the second and third filters introduce a delay into the opposing right ear audio data and the opposing left ear audio data, respectively. Also, the logic sums the filtered primary left ear audio data with the filtered opposing left ear audio data to obtain processed left channel audio data and sums the filtered primary right ear audio data with the filtered opposing right ear audio data to obtain processed right channel audio data. The logic further combines the processed left channel audio data and the processed right channel audio data into processed audio data and outputs the processed audio data to a listening device for playback by a listener.
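A sketch of the four-filter topology, under the assumption that the opposing-ear filters are a simple attenuate-and-delay (the abstract does not specify the actual filter designs, and the tap values here are illustrative).

```python
import numpy as np

def process(left, right, fs=48000):
    """Four-filter crosstalk structure: primary-ear filters pass the
    direct signal; opposing-ear filters attenuate and delay it to mimic
    the path around the head to the far ear."""
    primary = np.array([1.0])                 # filters 1 and 4 (identity here)
    d = int(0.0003 * fs)                      # ~0.3 ms interaural delay
    opposing = np.zeros(d + 1)
    opposing[d] = 0.4                         # filters 2 and 3: attenuate + delay
    primary_l = np.convolve(left, primary)[:len(left)]
    opposing_r = np.convolve(left, opposing)[:len(left)]    # left -> right ear
    opposing_l = np.convolve(right, opposing)[:len(right)]  # right -> left ear
    primary_r = np.convolve(right, primary)[:len(right)]
    out_l = primary_l + opposing_l            # first summation
    out_r = primary_r + opposing_r            # second summation
    return out_l, out_r                       # combined processed audio
```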
AUGMENTED REALITY HEADPHONE ENVIRONMENT RENDERING
Accurate modeling of acoustic reverberation can be essential to generating and providing a realistic virtual reality or augmented reality experience for a participant. In an example, a reverberation signal for playback using headphones can be provided. The reverberation signal can correspond to a virtual sound source signal originating at a specified location in a local listener environment. Providing the reverberation signal can include, among other things, using information about a reference impulse response from a reference environment and using characteristic information about reverberation decay in a local environment of the participant. Providing the reverberation signal can further include using information about a relationship between a volume of the reference environment and a volume of the local environment of the participant.
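One plausible reading of the reference-response approach, sketched below: reshape the reference impulse response's exponential decay to the locally measured decay time, and scale the level using the room-volume relationship. The exponential decay model and the square-root volume gain are illustrative assumptions, not the patented formulas.

```python
import numpy as np

def adapt_reverb(ref_ir, fs, ref_t60, local_t60, ref_volume, local_volume):
    """Adapt a reference-room impulse response to a local room by
    re-exponentiating its decay from the reference T60 to the local T60
    and applying a simple volume-ratio level correction."""
    t = np.arange(len(ref_ir)) / fs
    # Amplitude decays by 60 dB (factor 1e-3) over T60 seconds.
    ref_rate = np.log(1e-3) / ref_t60
    local_rate = np.log(1e-3) / local_t60
    reshaped = ref_ir * np.exp((local_rate - ref_rate) * t)
    gain = np.sqrt(ref_volume / local_volume)   # illustrative level rule
    return gain * reshaped
```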
OUTCOME TRACKING IN SENSORY PROSTHESES
Presented herein are techniques for detecting sensory outcome issues through an analysis of data representing the direction of incidence/arrival of a sensory input and inertial data representing movement of the recipient's head following detection of the sensory input. By correlating recipient head movement (including lack of movement) with the arrival direction of the sensory input, a sensory prosthesis system can determine whether or not the recipient acted as expected and, if not, whether a sensory outcome problem is present.
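The correlation of head movement with arrival direction could be sketched as follows; the angular tolerance, the hit-rate threshold, and the yaw-only simplification are invented for illustration.

```python
def outcome_ok(arrival_azimuth_deg, head_yaw_deg, tol_deg=20.0):
    """Did the recipient orient toward the detected input? Compare the
    direction of arrival with the head rotation that followed (from
    inertial data), wrapping angles to [-180, 180)."""
    err = (head_yaw_deg - arrival_azimuth_deg + 180.0) % 360.0 - 180.0
    return abs(err) <= tol_deg

def flag_issue(events, min_hit_rate=0.5):
    """events: list of (arrival azimuth, resulting head yaw) pairs.
    Flag a possible sensory-outcome problem when the recipient rarely
    turns toward detected inputs."""
    hits = sum(outcome_ok(a, y) for a, y in events)
    return hits / len(events) < min_hit_rate
```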
Method for Processing Sound Signal and Terminal Device
A method includes: receiving, by using channels located in different positions of a terminal device, at least three signals emitted by a same sound source; determining, according to three signals in the at least three signals, a signal delay difference between every two of the three signals; determining, according to the signal delay differences, the position of the sound source relative to the terminal device; and when the sound source is located in front of the terminal device, performing orientation enhancement processing on a target signal in the at least three signals, and obtaining a first output signal and a second output signal of the terminal device according to a result of the orientation enhancement processing, where the orientation enhancement processing is used to increase a degree of discrimination between a front characteristic frequency band and a rear characteristic frequency band of the target signal.
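The per-pair signal delay difference can be estimated by cross-correlation, as in this generic sketch (not necessarily the patented estimator):

```python
import numpy as np

def delay_between(a, b, fs):
    """Estimate the delay (in seconds) between two channel recordings of
    the same source via cross-correlation; positive means b lags a."""
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)
    return lag / fs
```

Applying this to each pair of the three channels yields the delay differences from which the source position relative to the device can be triangulated.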