Patent classifications: H04S2420/01
AUDIO SIGNAL PROCESSING METHOD, ELECTRONIC APPARATUS, AND STORAGE MEDIUM
An audio signal processing method includes: acquiring first rotation information when a wearable device rotates and second rotation information when a mobile device connected to the wearable device rotates; determining relative position information between the wearable device and the mobile device according to the first rotation information and the second rotation information; and processing an audio signal based on the relative position information to obtain playback audio to be played by the wearable device.
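The relative-orientation step above can be sketched with plain quaternion algebra. This is an illustrative simplification, not the patented method: it assumes both devices report orientation quaternions in a shared world frame (the abstract does not specify the representation) and reduces "relative position information" to a relative yaw angle.

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def relative_yaw(q_wearable, q_mobile):
    # Wearable orientation expressed in the mobile device's frame:
    # q_rel = conj(q_mobile) * q_wearable
    w, x, y, z = q_mul(q_conj(q_mobile), q_wearable)
    # Yaw (rotation about the vertical z axis) extracted from q_rel
    return math.atan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))

def yaw_quat(theta):
    # Quaternion for a rotation of theta radians about z
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

# Head turned 90 degrees, phone turned 30 degrees: relative azimuth is 60 degrees
rel = relative_yaw(yaw_quat(math.pi / 2), yaw_quat(math.pi / 6))
```

The resulting azimuth would then steer the binaural rendering, e.g. by selecting an HRTF pair for that direction.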
IMMERSIVE SOUND REPRODUCTION USING MULTIPLE TRANSDUCERS
One or more embodiments include techniques for generating immersive audio for an acoustic system. The techniques include determining an apparent location associated with a portion of audio; calculating, for each speaker included in a plurality of speakers of the acoustic system, a perceptual distance between the speaker and the apparent location; selecting a subset of speakers included in the plurality of speakers based on the perceptual distances between the plurality of speakers and the apparent location; generating a set of filters based on the subset of speakers and one or more target characteristics of the acoustic system; and generating, for each speaker included in the subset of speakers, a speaker signal using one or more filters included in the set of filters.
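The selection and weighting steps can be sketched as follows. This is a minimal stand-in, not the claimed technique: plain Euclidean distance replaces the patent's "perceptual distance", and inverse-distance gains with unit-power normalization replace the filter-generation stage.

```python
import math

def select_speakers(speaker_positions, apparent_location, k=2):
    """Pick the k speakers closest to the apparent source location.
    Euclidean distance stands in for the perceptual distance metric."""
    ranked = sorted(range(len(speaker_positions)),
                    key=lambda i: math.dist(speaker_positions[i], apparent_location))
    return ranked[:k]

def power_normalized_gains(speaker_positions, apparent_location, subset):
    # Inverse-distance weights, normalized so the gains have unit total power
    w = [1.0 / max(math.dist(speaker_positions[i], apparent_location), 1e-6)
         for i in subset]
    norm = math.sqrt(sum(x * x for x in w))
    return [x / norm for x in w]

speakers = [(-1, 0), (1, 0), (0, 2), (0, -2)]          # illustrative layout
subset = select_speakers(speakers, (0.5, 0.1), k=2)    # two nearest speakers
gains = power_normalized_gains(speakers, (0.5, 0.1), subset)
```

Each selected speaker's signal would then be scaled by its gain (and, in the patented system, filtered toward the target characteristics).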
Arrangement for producing head related transfer function filters
When three-dimensional audio is produced by using headphones, particular HRTF-filters are used to modify sound for the left and right channels of the headphone. As the morphology of every ear is different, it is beneficial to have HRTF-filters particularly designed for the user of headphones. Such filters may be produced by deriving ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features from the images and fitting said features to a model that has been produced from accurately scanned ears comprising representative values for different sizes and shapes. The captured images are sent to a server (52) that performs the necessary computations and submits the data further or produces the requested filter.
SYSTEM AND METHOD FOR VIRTUAL SOUND EFFECT WITH INVISIBLE LOUDSPEAKER(S)
In at least one embodiment, an apparatus for providing a virtual sound effect in a listening environment is provided. The apparatus includes at least one controller and an audio playback device. The audio playback device includes the at least one controller that is programmed to receive an audio input signal from an audio input source and to apply a head related transfer function (HRTF) to the audio input signal. The at least one controller is further programmed to apply crosstalk cancellation to the audio input signal and to generate an audio output signal after applying the HRTF and the crosstalk cancellation to the audio input signal for playback by at least one loudspeaker that is invisible to a listener in the listening environment.
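The crosstalk-cancellation stage can be illustrated with the textbook symmetric two-speaker case. This is a toy model, not the patented processing: it assumes each ear receives the opposite loudspeaker attenuated by a single frequency-independent coefficient g with no delay, so the 2x2 acoustic matrix [[1, g], [g, 1]] can be inverted exactly.

```python
def crosstalk_cancel(left, right, g=0.6):
    """Symmetric single-coefficient crosstalk canceller.
    Inverts the acoustic mixing matrix [[1, g], [g, 1]]:
    its inverse is 1/(1 - g^2) * [[1, -g], [-g, 1]]."""
    k = 1.0 / (1.0 - g * g)
    out_l = [k * (l - g * r) for l, r in zip(left, right)]
    out_r = [k * (r - g * l) for l, r in zip(left, right)]
    return out_l, out_r

# Verify: re-applying the acoustic crosstalk recovers the original signals
l, r = crosstalk_cancel([1.0, 0.5], [0.0, 0.25], g=0.6)
ear_l = [a + 0.6 * b for a, b in zip(l, r)]
ear_r = [b + 0.6 * a for a, b in zip(l, r)]
```

A practical canceller works per frequency band with measured, delay-bearing acoustic paths, but the matrix-inversion idea is the same.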
SIGNAL GENERATING APPARATUS, VEHICLE, AND COMPUTER-IMPLEMENTED METHOD OF GENERATING SIGNALS
A signal generating apparatus includes: a memory configured to store instructions; and a processor communicatively connected to the memory and configured to execute the stored instructions to function as: a first generator configured to generate a processed signal by adjusting frequency characteristics of an audio signal representative of a sound from a virtual sound source based on a Head-Related Transfer Function (HRTF) corresponding to a target position of the virtual sound source; and a second generator configured to: generate, based on the processed signal generated by the first generator, a plurality of output signals in one-to-one correspondence with a plurality of loudspeakers; and perform panning processing to adjust a level of each output signal of the plurality of output signals based on the target position.
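The second generator's level-adjustment step resembles a constant-power pan law. The sketch below is a two-loudspeaker simplification of the patent's multi-speaker panner; the mapping from azimuth to pan angle is an assumption, not taken from the abstract.

```python
import math

def equal_power_pan(signal, azimuth_deg):
    """Constant-power stereo pan: azimuth in [-90, 90], negative = left.
    The left/right gains satisfy gl^2 + gr^2 = 1 at every position."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)  # map to [0, pi/2]
    gl, gr = math.cos(theta), math.sin(theta)
    left = [gl * s for s in signal]
    right = [gr * s for s in signal]
    return left, right

left, right = equal_power_pan([1.0, 0.5, -0.5], 0.0)  # centered virtual source
```

In the patented apparatus this panning is applied after the HRTF-based frequency shaping, across as many output signals as there are loudspeakers.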
VIRTUAL SOUND LOCALIZATION FOR VIDEO TELECONFERENCING
This disclosure provides methods, devices, and systems for videoconferencing. The present implementations more specifically relate to audio signal processing techniques that can be used to identify speakers in a videoconference. In some aspects, an audio signal processor may map each speaker in a videoconference to a respective spatial direction and transform the audio signals received from each speaker using one or more transfer functions associated with the spatial direction to which the speaker is mapped. The audio signal processor may further transmit the transformed audio signals to an audio output device that emits sound waves having a directionality associated with the transformation. For example, the audio signal processor may apply one or more head-related transfer functions to the audio signals received from a particular speaker so that the sound waves emitted by the audio output device are perceived as originating from the spatial direction to which the speaker is mapped.
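The mapping-and-transform idea can be sketched with a crude binaural cue. This is not the patented processing: a Woodworth-style interaural time difference replaces the HRTF convolution, and the even spread of talkers across the frontal field is an illustrative choice.

```python
import math

def itd_samples(azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    # Woodworth interaural time difference model, rounded to whole samples
    az = math.radians(azimuth_deg)
    itd = (head_radius / c) * (az + math.sin(az))
    return round(itd * fs)

def spatialize(mono, azimuth_deg, fs=48000):
    """Binaural rendering with an interaural delay only; a real system
    would convolve with an HRTF pair for the mapped direction."""
    d = itd_samples(azimuth_deg, fs)
    left, right = list(mono), list(mono)
    if d > 0:                      # source on the right: left ear lags
        left = [0.0] * d + left
        right = right + [0.0] * d
    elif d < 0:                    # source on the left: right ear lags
        right = [0.0] * (-d) + right
        left = left + [0.0] * (-d)
    return left, right

# Spread three talkers evenly across the frontal field
azimuths = {name: -60 + 60 * i for i, name in enumerate(["alice", "bob", "carol"])}
```

Each talker's stream would be rendered with its assigned direction and the results mixed into the output device's two channels.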
PROCESSING DEVICE AND PROCESSING METHOD
A processing device according to this embodiment includes: a frequency characteristics acquisition unit configured to acquire frequency characteristics of an input signal; an extreme value extraction unit configured to extract an extreme value of spectral data; a kurtosis calculation unit configured to: calculate an evaluation value from spectral data; and calculate a kurtosis of a peak or a dip based on a plurality of evaluation values calculated by changing a calculation width, the evaluation value being used for evaluating the peak or the dip corresponding to the extreme value; a determination unit configured to determine whether to suppress the peak or the dip according to a comparison result between the kurtosis and a threshold value; and a suppression unit configured to suppress the peak or the dip with the extreme value that is determined to be suppressed.
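The extract-evaluate-suppress pipeline can be sketched as follows. This is a simplified stand-in for the claimed device: average prominence over several calculation widths replaces the patent's kurtosis measure, and the suppression simply interpolates across a flagged narrow peak.

```python
def local_maxima(spec):
    # Indices of samples strictly greater than both neighbours
    return [i for i in range(1, len(spec) - 1) if spec[i-1] < spec[i] > spec[i+1]]

def sharpness(spec, i, widths=(1, 2, 4)):
    """Evaluate the peak at bin i: mean prominence over several calculation
    widths; a narrow spike scores high at every width, a broad bump does not."""
    vals = []
    for w in widths:
        lo, hi = max(i - w, 0), min(i + w, len(spec) - 1)
        vals.append(spec[i] - 0.5 * (spec[lo] + spec[hi]))
    return sum(vals) / len(vals)

def suppress_sharp_peaks(spec, threshold=3.0):
    out = list(spec)
    for i in local_maxima(spec):
        if sharpness(spec, i) > threshold:
            out[i] = 0.5 * (spec[i-1] + spec[i+1])  # flatten the narrow peak
    return out

spec = [0, 0, 10, 0, 0, 2, 3, 2, 0, 0]  # one sharp spike, one broad bump
```

The sharp spike at bin 2 is suppressed while the broad bump around bin 6 survives, mirroring the patent's peak/dip discrimination by sharpness.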
USING BLUETOOTH / WIRELESS HEARING AIDS FOR PERSONALIZED HRTF CREATION
A hearing aid that includes a microphone, a signal processor, and a speaker transmits a signal to a computer. The signal transmitted to the computer can be the input to the microphone (before processing) or the output to the speaker (after processing). This enables the capture of an HRTF that either excludes or includes the enhancements of the hearing aids.
Personalized headphone EQ based on headphone properties and user geometry
Audio processing for a headworn device can include obtaining ear geometry of a user. A frequency response or transfer function can be determined, based on the ear geometry of the user and a model of the headworn device, where the frequency response or transfer function characterizes an effect of a path between a speaker of the headworn device and an ear canal entrance of the user on sound. An equalization filter profile can be generated based on the frequency response or transfer function. The equalization filter profile can be applied to an audio signal, and the audio signal can be used to drive the speaker of the headworn device.
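The equalization step itself reduces to simple dB arithmetic once the speaker-to-ear-canal response is known. The sketch below assumes a per-band magnitude representation and a flat target; the measured values are hypothetical, not from the disclosure.

```python
def eq_profile(measured_db, target_db=None):
    """Per-band correction gains in dB: target minus the measured
    speaker-to-ear-canal response. A flat target is assumed if none given."""
    if target_db is None:
        target_db = [0.0] * len(measured_db)
    return [t - m for t, m in zip(target_db, measured_db)]

def apply_eq_db(band_levels_db, profile_db):
    # Applying a dB-domain EQ is a per-band addition
    return [b + p for b, p in zip(band_levels_db, profile_db)]

measured = [2.0, -3.0, 5.0, 0.0]    # hypothetical response from the geometry model
profile = eq_profile(measured)      # [-2.0, 3.0, -5.0, 0.0]
corrected = apply_eq_db(measured, profile)
```

In practice the profile would be realized as a filter (e.g. a cascade of parametric biquads) applied to the audio signal before it drives the speaker.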
Discrete binaural spatialization of sound sources on two audio channels
Embodiments relate to binaural spatialization of more than two sound sources on two audio channels of an audio system. Sound signals each emitted from a corresponding sound source are collected, and a respective virtual position within an angular range of a sound scene is assigned to each sound source. Multi-source audio signals are generated by panning each sound signal according to the respective virtual position. A first multi-source audio signal is spatialized to a first direction to generate a first left signal and a first right signal. A second multi-source audio signal is spatialized to a second direction to generate a second left signal and a second right signal. A binaural signal is generated using the first left signal, the second left signal, the first right signal, and the second right signal. The binaural signal is such that each sound source appears to originate from its respective virtual position.
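The two-bus structure described above can be sketched with gain-only panning and a toy spatializer. This is illustrative, not the claimed system: constant-power gains stand in for both the panning and the binaural spatialization (which would use HRTF pairs for the two directions).

```python
import math

def pan_gains(pos):
    # pos in [0, 1] across the sound scene; constant-power gains for two buses
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)

def mix_sources(sources):
    """sources: list of (samples, scene_position). Returns the two
    multi-source bus signals, one per spatialization direction."""
    n = max(len(s) for s, _ in sources)
    bus_a, bus_b = [0.0] * n, [0.0] * n
    for samples, pos in sources:
        ga, gb = pan_gains(pos)
        for i, x in enumerate(samples):
            bus_a[i] += ga * x
            bus_b[i] += gb * x
    return bus_a, bus_b

def spatialize_ild(bus, direction):
    # Toy spatializer using a level difference only; real systems use HRTFs
    gl, gr = pan_gains(direction)
    return [gl * x for x in bus], [gr * x for x in bus]

sources = [([1.0, 0.0], 0.0), ([0.0, 1.0], 1.0)]  # hard-left and hard-right
bus_a, bus_b = mix_sources(sources)
a_l, a_r = spatialize_ild(bus_a, 0.25)            # bus A toward the left
b_l, b_r = spatialize_ild(bus_b, 0.75)            # bus B toward the right
left = [x + y for x, y in zip(a_l, b_l)]
right = [x + y for x, y in zip(a_r, b_r)]
```

Because each source is panned between the two spatialized buses, its perceived position interpolates between the two rendering directions, which is how more than two sources fit on two channels.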