Patent classifications
H04S7/304
Emphasis for audio spatialization
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a first input audio signal is received. The first input audio signal is processed to generate a first output audio signal. The first output audio signal is presented via one or more speakers associated with the wearable head device. Processing the first input audio signal comprises applying a pre-emphasis filter to the first input audio signal; adjusting a gain of the first input audio signal; and applying a de-emphasis filter to the first input audio signal. Applying the pre-emphasis filter comprises attenuating a low frequency component of the first input audio signal. Applying the de-emphasis filter comprises attenuating a high frequency component of the first input audio signal.
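The pre-emphasis / gain / de-emphasis chain described above can be sketched in a few lines. This is a minimal illustrative example, not the patent's actual implementation: the first-order filter coefficient `alpha` and the fixed `gain` are assumptions, chosen so that the de-emphasis filter is the exact inverse of the pre-emphasis filter.

```python
# Illustrative sketch of the pre-emphasis -> gain -> de-emphasis chain.
# The alpha coefficient and gain value are assumptions, not from the patent.

def pre_emphasis(samples, alpha=0.95):
    """First-order high-pass: attenuates low-frequency content."""
    out, prev = [], 0.0
    for x in samples:
        out.append(x - alpha * prev)
        prev = x
    return out

def de_emphasis(samples, alpha=0.95):
    """First-order low-pass (exact inverse of pre_emphasis): attenuates highs."""
    out, prev = [], 0.0
    for x in samples:
        prev = x + alpha * prev
        out.append(prev)
    return out

def process(samples, gain=0.5):
    """Pre-emphasize, adjust gain, then de-emphasize."""
    emphasized = pre_emphasis(samples)
    adjusted = [gain * x for x in emphasized]
    return de_emphasis(adjusted)
```

Because the two filters are inverses here, the chain reduces to a pure gain; in practice the gain stage would be level-dependent (e.g. a compressor), which is where the emphasis filtering changes the result.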
Method and apparatus to generate a six dimensional audio dataset
This patent teaches a method and apparatus for an enhanced reading experience. Books are brought to life by delivering sounds and visual effects at precise moments, timed using eye-tracking technology while the user reads.
Methods for obtaining and reproducing a binaural recording
In one aspect, a method for providing a binaural recording to a listener in a hearing system, wherein the binaural recording is listened to using a hearing device and consists of a left binaural ear signal intended for the left ear of the listener and a right binaural ear signal intended for the right ear of the listener, comprises: determining a head orientation; determining a source direction of the binaural recording with respect to the head orientation; detecting a change of the head orientation to a new head orientation; and adapting the binaural recording considering the source direction of the binaural recording and the new head orientation.
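The adaptation step above amounts to recomputing the source direction relative to the new head orientation so the binaural image stays world-fixed. A minimal sketch, assuming azimuth angles in degrees (function names are illustrative, not from the patent):

```python
# Illustrative sketch: when the head turns, recompute the source direction
# relative to the new head orientation. Angles in degrees; names assumed.

def relative_source_direction(source_azimuth, head_azimuth):
    """Source direction relative to the head, wrapped to (-180, 180]."""
    rel = (source_azimuth - head_azimuth) % 360.0
    if rel > 180.0:
        rel -= 360.0
    return rel

def on_head_moved(source_azimuth, old_head_azimuth, new_head_azimuth):
    """Adapted rendering direction after a detected orientation change."""
    return relative_source_direction(source_azimuth, new_head_azimuth)
```

A renderer would then re-pan (e.g. via HRTF selection) to the returned relative direction each time a new head orientation is detected.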
Audio renderer based on audiovisual information
An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
Head-tracked spatial audio
A plurality of spatial filters is generated that maps the response of an audio capture device to head related transfer functions (HRTFs) for different positions of the audio capture device relative to the HRTFs. A current set of spatial filters is determined based on the plurality of spatial filters and a head position of a user. Microphone signals from the audio capture device are convolved with the current set of spatial filters, resulting in a left audio channel and a right audio channel that form output binaural audio channels. The binaural audio channels can be used to drive speakers of a headphone set to generate sound that is perceived to have a spatial quality. Other aspects are described and claimed.
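The convolve-and-sum rendering step described above can be sketched as follows. This is an illustrative outline only: filter selection and interpolation from the head position are elided, and the direct-form convolution stands in for whatever fast convolution a real renderer would use.

```python
# Illustrative sketch: convolve each microphone signal with its left/right
# spatial filter and sum into binaural output channels.

def convolve(signal, kernel):
    """Direct-form linear convolution."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def render_binaural(mic_signals, filters_left, filters_right):
    """Sum per-microphone convolutions into a (left, right) channel pair."""
    length = len(mic_signals[0]) + len(filters_left[0]) - 1
    left = [0.0] * length
    right = [0.0] * length
    for mic, fl, fr in zip(mic_signals, filters_left, filters_right):
        for i, v in enumerate(convolve(mic, fl)):
            left[i] += v
        for i, v in enumerate(convolve(mic, fr)):
            right[i] += v
    return left, right
```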
HEADSET AND APPLICATION CONTROL BASED ON LOCATION DATA
Disclosed is a headset for audio communication, a software application for an electronic device associated with a headset, and a method for controlling a headset feature. The headset is configured to be worn by a user and comprises a speaker for sound transmission into the user's ear, a transceiver or radio communication unit for communication with an external device, a connection to location-based service software configured to control at least one headset feature based on location data of the headset, and a processing unit. The processing unit is configured to enable the location-based service software to detect whether the current location data of the headset indicates a change in location corresponding to a certain change criterion, and to change the at least one headset feature if the change criterion associated with the change in location is satisfied.
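One plausible form of the change criterion above is a geofence: the feature changes when the headset crosses a distance threshold around a reference location. The following sketch assumes that interpretation; the haversine distance, threshold, and function names are illustrative, not from the patent.

```python
# Illustrative geofence-style change criterion: trigger a feature change when
# the headset crosses a distance threshold around a reference point.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_change_criterion(prev, curr, reference, threshold_m=100.0):
    """True when the headset crossed the threshold distance from the reference."""
    was_inside = distance_m(*prev, *reference) <= threshold_m
    is_inside = distance_m(*curr, *reference) <= threshold_m
    return was_inside != is_inside
```

When `check_change_criterion` returns True, the application would toggle the associated feature (e.g. noise cancellation on entering a given area).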
SOUND REPRODUCTION METHOD, NON-TRANSITORY MEDIUM, AND SOUND REPRODUCTION DEVICE
A sound reproduction method includes: obtaining a first audio signal indicating a first sound which arrives at a listener from a first range and a second audio signal indicating a second sound which arrives at the listener from a predetermined direction; when the first range and the predetermined direction are determined to be included in a second range, which is a back range relative to a front range in the direction the listener's head faces, performing a correction process on at least one of the first audio signal or the second audio signal so that the intensity of the second audio signal becomes higher than the intensity of the first audio signal; and mixing the at least one of the first audio signal or the second audio signal and outputting the first and second audio signals to an output channel.
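The correction-and-mix logic above can be sketched as follows. The back-range bounds (|azimuth| beyond 90°) and the attenuation factor are illustrative assumptions; the patent leaves the exact correction process open.

```python
# Illustrative sketch: when both the first range and the second direction fall
# in the back range, attenuate the first signal so the second ends up louder,
# then mix. Back-range bounds and attenuation factor are assumptions.

def in_back_range(azimuth_deg):
    """Back range relative to the facing direction, here 90 < azimuth < 270."""
    a = azimuth_deg % 360.0
    return 90.0 < a < 270.0

def correct_and_mix(first, second, first_azimuth, second_azimuth, atten=0.5):
    if in_back_range(first_azimuth) and in_back_range(second_azimuth):
        first = [atten * x for x in first]  # correction process on first signal
    return [a + b for a, b in zip(first, second)]
```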
ACOUSTIC REPRODUCTION METHOD, RECORDING MEDIUM, AND ACOUSTIC REPRODUCTION SYSTEM
An acoustic reproduction method is an acoustic reproduction method for causing a user to perceive a first sound as a sound arriving from a first position in a three-dimensional sound field and a second sound as a sound arriving from a second position different from the first position in the three-dimensional sound field. The acoustic reproduction method includes: obtaining a movement speed of a head of the user; and generating an output sound signal for causing the user to perceive sounds that arrive from predetermined positions in the three-dimensional sound field. In the generating, when the movement speed obtained is greater than a first threshold, the output sound signal for causing the user to perceive the first sound and the second sound as a sound arriving from a third position between the first position and the second position is generated.
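The speed-dependent merging described above can be sketched simply: below the threshold the two sources keep their own positions; above it, both are rendered from an intermediate position. The threshold value and the choice of the midpoint as the "third position" are illustrative assumptions.

```python
# Illustrative sketch: above a head-movement-speed threshold, render both
# sounds from one intermediate position. Threshold and midpoint are assumed.

def perceived_positions(pos1, pos2, head_speed, threshold=1.0):
    """Return the 3-D position(s) from which to render the two sounds."""
    if head_speed > threshold:
        mid = tuple((a + b) / 2.0 for a, b in zip(pos1, pos2))
        return [mid]  # both sounds arrive from the third, in-between position
    return [pos1, pos2]
```

Collapsing the two sources during fast head movement reduces the rendering work precisely when the listener is least able to localize them separately.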
Method and device for processing audio signal, using metadata
Disclosed is a device for processing an audio signal, which renders the audio signal. The device includes a processor. The processor receives an audio signal and metadata including first element reference distance information, and renders a first element signal on the basis of the first element reference distance information, wherein the first element reference distance information indicates the reference distance of an element signal. The audio signal may include a second element signal which may be rendered simultaneously with the first element signal, and the metadata may include second element distance information indicating the distance of the second element signal. The number of bits required to represent the first element reference distance information is smaller than the number of bits required to represent the second element distance information.
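One way to realize the bit-count asymmetry above is to quantize the reference distance on a coarser grid than an element distance. The bit widths and quantization grid below are illustrative assumptions, not the patent's actual metadata syntax.

```python
# Illustrative sketch: the reference distance uses a coarse few-bit index,
# while an element distance uses a finer index with more bits. Bit widths
# and the linear quantisation grid are assumptions.

def quantize_distance(distance_m, bits, max_m=16.0):
    """Linear quantisation of a distance onto a (2**bits - 1)-level grid."""
    levels = (1 << bits) - 1
    clamped = min(max(distance_m, 0.0), max_m)
    return round(clamped / max_m * levels)

def encode_reference_distance(distance_m):
    return quantize_distance(distance_m, bits=4)    # few bits: coarse

def encode_element_distance(distance_m):
    return quantize_distance(distance_m, bits=10)   # more bits: fine
```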
METHOD AND ELECTRONIC DEVICE FOR PROVIDING AMBIENT SOUND WHEN USER IS IN DANGER
An electronic device is disclosed which may collect inertia information and ambient sound, determine whether a user is in danger by monitoring impact sound and a mismatch between the user's head orientation and moving direction, and provide the collected ambient sound when it is determined that the user is in danger.
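The danger determination above combines two cues: a loud impact sound and an orientation/heading mismatch. A minimal sketch, in which the decibel and angle thresholds and the OR-combination of the cues are illustrative assumptions:

```python
# Illustrative danger check: flag an impact from a loud sound peak, or a
# mismatch between head orientation and moving direction. Thresholds assumed.

def angle_diff(a, b):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return d if d <= 180.0 else 360.0 - d

def is_in_danger(peak_sound_db, head_azimuth, moving_azimuth,
                 impact_db=90.0, mismatch_deg=60.0):
    impact = peak_sound_db > impact_db
    mismatch = angle_diff(head_azimuth, moving_azimuth) > mismatch_deg
    return impact or mismatch
```

When `is_in_danger` returns True, the device would pass the collected ambient sound through to the user instead of suppressing it.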