H04S7/40

Enabling rendering, for consumption by a user, of spatial audio content

An apparatus comprising:
• means for causing selection of spatial audio content in dependence upon a position of a user in a virtual space;
• means for causing rendering, for consumption by the user, of the selected spatial audio content, including a first spatial audio content;
• means for causing, after user consumption of the first spatial audio content, recording of data relating to the first spatial audio content;
• means for using, at a later time, the recorded data to detect a new event relating to the first spatial audio content, the new event comprising that the first spatial audio content has been adapted and that new spatial audio content has been created from it, for example in the form of a limited preview; and
• means for providing a user-selectable option to enable rendering, for consumption by the user, of the first spatial audio content by rendering a representative simplified sound object, which can be a downmix or clustered audio objects.
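The position-dependent selection in the first means above can be sketched minimally as a proximity query in the virtual space. The content items, their anchor positions, and the trigger radius below are illustrative assumptions, not taken from the abstract.

```python
import math

# Hypothetical sketch: select the spatial audio content whose anchor
# position in the virtual space lies within a radius of the user.
def select_content(user_pos, contents, radius=5.0):
    """Return ids of content items within `radius` of the user, nearest first."""
    selected = []
    for cid, pos in contents.items():
        dist = math.dist(user_pos, pos)
        if dist <= radius:
            selected.append((dist, cid))
    # Nearest content first, so the renderer can prioritise it.
    return [cid for _, cid in sorted(selected)]

contents = {"fountain": (0.0, 0.0), "band": (3.0, 4.0), "crowd": (20.0, 0.0)}
print(select_content((1.0, 0.0), contents))  # → ['fountain', 'band']
```

A real implementation would also carry orientation and occlusion, but distance alone is enough to illustrate the selection step.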

Adjustment of acoustic map and presented sound in artificial reality systems

A headset generates an acoustic map: a visual representation of the sound pressure of sound within a local area. The acoustic map is displayed as a virtual object overlaid onto visual content being presented by the headset. The sound is adjusted in accordance with a command, the acoustic map is updated based on the adjusted sound, and the adjusted sound and the updated acoustic map are then presented.
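One way to picture the acoustic map is as a grid of sound pressure levels over the local area. The sketch below is an assumption-laden toy: point sources with 1/r pressure falloff summed incoherently per cell, whereas a real headset would derive the map from its microphone array.

```python
import math

# Illustrative sketch of an acoustic map: a coarse grid of sound
# pressure levels (dB SPL re 20 µPa) from point sources with 1/r falloff.
def acoustic_map(sources, size=4, cell=1.0):
    """sources: list of (x, y, pressure_at_1m_Pa). Returns size x size grid of dB values."""
    grid = []
    for gy in range(size):
        row = []
        for gx in range(size):
            cx, cy = (gx + 0.5) * cell, (gy + 0.5) * cell
            # Sum squared pressures (incoherent sources) at this cell.
            p2 = 0.0
            for sx, sy, p_ref in sources:
                r = max(math.dist((cx, cy), (sx, sy)), 0.1)  # clamp near-field
                p2 += (p_ref / r) ** 2
            row.append(round(10 * math.log10(p2 / (2e-5) ** 2), 1))
        grid.append(row)
    return grid
```

Each cell's dB value could then drive the colour of the overlaid virtual object.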

DETERMINING A VIRTUAL LISTENING ENVIRONMENT

One or more acoustic parameters of a current acoustic environment of a user may be determined based on sensor signals captured by one or more sensors of the device. One or more preset acoustic parameters may be determined based on the one or more acoustic parameters of the current acoustic environment of the user and an acoustic environment of an audio file, which is determined based on the audio signals of the audio file or on its metadata. The audio signals may be spatially rendered by applying spatial filters that include the one or more preset acoustic parameters, resulting in binaural audio signals. The binaural audio signals may be used to drive speakers of a headset. Other aspects are described and claimed.
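The preset-selection step can be sketched as matching a target acoustic parameter derived from both environments. Everything concrete below — using RT60 as the parameter, the preset table, and the blend weight — is an assumption for illustration, not from the abstract.

```python
# Hypothetical preset table mapping preset names to an RT60 value (seconds).
PRESETS = {"dry_studio": 0.3, "living_room": 0.6, "concert_hall": 1.8}

def choose_preset(room_rt60, file_rt60, weight=0.5):
    """Blend the room's RT60 with the audio file's implied RT60,
    then return the preset whose RT60 is closest to the blend."""
    target = weight * room_rt60 + (1 - weight) * file_rt60
    return min(PRESETS, key=lambda name: abs(PRESETS[name] - target))

print(choose_preset(room_rt60=0.5, file_rt60=2.5))  # → 'concert_hall'
```

The chosen preset's parameters would then be folded into the spatial filters that produce the binaural signals.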

System and method for loudspeaker position estimation
11622220 · 2023-04-04

Embodiments of systems and methods are described for estimating a position of a loudspeaker and notifying a listener if an abnormal condition is detected, such as an incorrect loudspeaker orientation or an obstruction in a path between the loudspeaker and a microphone array. For example, a front component of a multi-channel surround sound system may include the microphone array and a position estimation engine. The position estimation engine may estimate the distance between the loudspeaker and the microphone array. In addition, the position estimation engine may estimate an angle of the loudspeaker using a first technique. The position estimation engine may also estimate an angle of the loudspeaker using a second technique. The two angles can be processed to determine whether the abnormal condition exists. If the abnormal condition exists, a listener can be notified and be provided with suggestions for resolving the issue in a graphical user interface.
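The consistency check at the heart of this abstract — two independent angle estimates, compared to detect an abnormal condition — can be sketched as follows. The 10-degree tolerance is an illustrative assumption.

```python
# Minimal sketch: flag an abnormal condition (e.g. wrong loudspeaker
# orientation or an obstruction) when two independent angle estimates
# disagree by more than a tolerance.
def check_loudspeaker(angle_deg_a, angle_deg_b, tolerance_deg=10.0):
    """Return (abnormal, disagreement_deg) for two angle estimates."""
    diff = abs(angle_deg_a - angle_deg_b) % 360.0
    diff = min(diff, 360.0 - diff)  # handle wrap-around: 359° vs 1° is 2°
    return diff > tolerance_deg, diff

print(check_loudspeaker(30.0, 33.5))  # → (False, 3.5)
print(check_loudspeaker(30.0, 75.0))  # → (True, 45.0)
```

When the first value is `True`, the system would notify the listener and suggest fixes in the graphical user interface.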

HEAD-RELATED TRANSFER FUNCTION DETERMINATION USING REFLECTED ULTRASONIC SIGNAL

An audio system includes a plurality of transducers, one or more acoustic sensors, and a controller. The plurality of transducers transmits an ultrasonic beam towards an ear of a user. The one or more acoustic sensors detect a reflected signal generated by an interaction of the ultrasonic beam with the ear. The controller updates a three-dimensional geometry of the ear based on the reflected signal. The controller determines a head-related transfer function (HRTF) for the user based in part on the three-dimensional geometry of the ear.
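One step of this pipeline — turning the reflected signal into geometry — can be illustrated by converting the round-trip delay of the ultrasonic echo into a transducer-to-ear distance, which updates one point of the ear's 3-D model. The sample rate and fixed speed of sound are assumptions.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed constant)

def echo_distance(delay_samples, sample_rate=192_000):
    """Round-trip echo delay in samples -> one-way distance in metres."""
    delay_s = delay_samples / sample_rate
    return SPEED_OF_SOUND * delay_s / 2.0  # halve: out and back

# 112 samples at 192 kHz ≈ 0.58 ms round trip ≈ 10 cm one way.
print(round(echo_distance(112), 3))  # → 0.1
```

Repeating this per beam direction yields the point samples from which the controller could refine the ear geometry used for the HRTF.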

AUGMENTED REALITY DEVICE PERFORMING AUDIO RECOGNITION AND CONTROL METHOD THEREFOR

Proposed are an augmented reality device capable of performing audio identification and a control method therefor. The augmented reality device comprises: a see-through display, which is formed so that a user's eyes can see through it and which outputs a virtual object; an audio input unit, which receives an audio signal generated within a preset distance from the display; and a control unit, which controls operations of the see-through display to identify event information corresponding to the audio signal and to output image information of the virtual object corresponding to the identified event information.

DISPLAY APPARATUS AND OPERATING METHOD THEREOF
20230147334 · 2023-05-11

A method of operating a display apparatus includes: transmitting stereo data corresponding to first audio data included in content being reproduced, to an external audio apparatus using a first audio transmission profile; changing an audio transmission profile from the first audio transmission profile to a second audio transmission profile, based on an audio-related event occurring while the stereo data is transmitted using the first audio transmission profile; and obtaining first mono audio data by selecting any one of a plurality of pieces of sound data included in the stereo data, and transmitting the first mono audio data and second mono audio data generated based on second audio data corresponding to the audio-related event, to the external audio apparatus using the second audio transmission profile.
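The mono fallback described here — keep one channel of the content stereo as the first mono data, pair it with mono data from the event audio — can be sketched as below. The frame layout and the channel-selection rule are illustrative assumptions; the abstract does not specify them.

```python
# Hedged sketch of the stereo-to-mono step taken when an audio-related
# event forces the switch to the second audio transmission profile.
def to_mono_pair(stereo_frames, event_frames, pick="left"):
    """stereo_frames: list of (L, R) samples.
    Returns (content_mono, event_mono) for transmission."""
    idx = 0 if pick == "left" else 1
    content_mono = [frame[idx] for frame in stereo_frames]
    # Event audio (e.g. a call voice) is assumed already single-channel.
    return content_mono, list(event_frames)

content, event = to_mono_pair([(0.1, 0.2), (0.3, 0.4)], [0.9, 0.8])
print(content)  # → [0.1, 0.3]
```

Both mono streams would then be sent to the external audio apparatus over the second profile.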

ELECTRONIC DEVICE AND CONTROL METHOD
20230209255 · 2023-06-29

Disclosed are an electronic device and a control method. The electronic device comprises a communication interface for communicating with an external apparatus in a UWB manner, a microphone, a camera, a sensor, a display, and a processor. The processor obtains distance information and angle information with respect to the external apparatus on the basis of data received from the external apparatus, obtains photographing direction information on the basis of a detected direction of the camera, identifies the external apparatus on the basis of the obtained photographing direction information, distance information, and angle information, controls the microphone or the communication interface to acquire an audio signal on the basis of a distance from the external apparatus, and controls the display to display a UI indicating the amplitude of the acquired audio signal together with a displayed image.
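The distance-based choice between the microphone and the communication interface can be sketched as a simple threshold rule. The 3 m threshold and the source labels are illustrative assumptions.

```python
# Hypothetical sketch: beyond a threshold distance, acquire audio over
# the communication interface from the external apparatus instead of
# capturing it with the local microphone.
def pick_audio_source(distance_m, threshold_m=3.0):
    """Return which capture path to use for the identified apparatus."""
    return "local_microphone" if distance_m <= threshold_m else "remote_link"

print(pick_audio_source(1.2))  # → 'local_microphone'
print(pick_audio_source(8.0))  # → 'remote_link'
```

The amplitude of whichever signal is acquired would then drive the UI overlaid on the displayed image.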

Information processing to indicate a position outside a display region

The present technology relates to an information processing device, an information processing method, and a program that can more accurately indicate a position outside a display region. An outside-display-region-position designation unit designates a position outside a display region of an image display unit, and a drawing/sound control unit controls output of a sound of an AR object from a sound output unit while moving the AR object toward the designated position. The present technology can be applied to a wearable computer, for example, a glasses-type device having a pair of image display units for a left eye and a right eye.
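The guidance behaviour — stepping the AR object's sound position toward the designated off-screen position so panning and volume cues lead the user's attention — can be sketched as a linear interpolation. The step count is an illustrative assumption.

```python
# Minimal sketch: interpolate the AR object's sound position from its
# current location toward the designated position outside the display.
def sound_path(start, target, steps=4):
    """Linear interpolation of (x, y) sound positions, endpoints inclusive."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append((start[0] + t * (target[0] - start[0]),
                     start[1] + t * (target[1] - start[1])))
    return path

print(sound_path((0.0, 0.0), (2.0, -1.0), steps=2))
# → [(0.0, 0.0), (1.0, -0.5), (2.0, -1.0)]
```

Each position along the path would be fed to the spatial renderer so the sound appears to travel toward the off-screen target.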

Information processing system, apparatus and method for measuring a head-related transfer function
09854371 · 2017-12-26 · ·

A method and apparatus for measuring a Head-Related Transfer Function (HRTF) may include determining the position and orientation of an object relative to an audio signal generating device having a first known position and orientation, based on tracking information indicating the position and orientation of the object. Movement data indicating the direction of movement to position the object at a target position and orientation, in relation to a predetermined position and orientation of the audio signal generating device, may be generated according to the relative position and orientation of the object. When the object is determined to be at the target position and orientation, the HRTF may be determined based on detection at the object of an audio signal from the audio signal generating device at the first known position and orientation while the object is at the target position and orientation.
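The movement-data step above amounts to computing which way the tracked object must move to reach the target pose. The sketch below covers the position part only, in 2-D, with an assumed 1 cm arrival tolerance; orientation guidance would follow the same pattern.

```python
# Hedged sketch: produce a movement cue from the object's current
# position toward the target position relative to the audio source.
def movement_cue(current, target, tol=0.01):
    """Return (at_target, unit_direction) toward the target position."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= tol:
        return True, (0.0, 0.0)  # at target: the HRTF capture can proceed
    return False, (dx / dist, dy / dist)

print(movement_cue((0.0, 0.0), (0.0, 2.0)))  # → (False, (0.0, 1.0))
```

Once `at_target` is `True`, the system would play the audio signal from the known source position and derive the HRTF from what the object detects.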