H04S7/304

SOUND OUTPUT CONTROL DEVICE, SOUND OUTPUT SYSTEM, SOUND OUTPUT CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

A sound output control device includes: an orientation detecting unit configured to detect an orientation of a user's face; an ambient sound obtaining unit configured to obtain ambient sound; an ambient sound reducing processing unit configured to perform, based on the obtained ambient sound, processing of reducing the ambient sound; and a sound output control unit. When the detected orientation of the user's face is in a first state, the sound output control unit causes sound to be output with the ambient sound reduced by the ambient sound reducing processing unit. When the detected orientation is in a second state changed from the first state, the sound output control unit makes the ambient sound more audible than in the state where the ambient sound has been reduced.
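The control logic described above can be sketched as a small state machine; the names (FaceState, SoundOutputController) and gain values below are illustrative assumptions, not taken from the abstract.

```python
# Hypothetical sketch of orientation-driven ambient sound control:
# FaceState, SoundOutputController, and the gain values are assumptions.
from enum import Enum

class FaceState(Enum):
    FIRST = 1   # e.g. face oriented forward
    SECOND = 2  # orientation changed from the first state

class SoundOutputController:
    def __init__(self):
        self.anc_enabled = True   # ambient sound reduction active
        self.ambient_gain = 0.0   # 0.0 = ambient sound fully reduced

    def update(self, face_state: FaceState) -> None:
        if face_state is FaceState.FIRST:
            # First state: output sound with ambient noise reduced.
            self.anc_enabled = True
            self.ambient_gain = 0.0
        else:
            # Second state: raise ambient sound audibility above the
            # reduced state, e.g. via microphone pass-through.
            self.anc_enabled = False
            self.ambient_gain = 1.0

ctl = SoundOutputController()
ctl.update(FaceState.SECOND)
print(ctl.anc_enabled, ctl.ambient_gain)  # False 1.0
```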

Acoustic output apparatus

The present disclosure provides an acoustic output apparatus including one or more status sensors, at least one low-frequency acoustic driver, at least one high-frequency acoustic driver, at least two first sound guiding holes, and at least two second sound guiding holes. The status sensors may detect status information of a user. The low-frequency acoustic driver may generate at least one first sound, a frequency of which is within a first frequency range. The high-frequency acoustic driver may generate at least one second sound, a frequency of which is within a second frequency range including at least one frequency exceeding the first frequency range. The first and second sound guiding holes may output the first and second sounds, respectively. The first and second sounds may be generated based on the status information, and may simulate a target sound coming from at least one virtual direction with respect to the user.
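The low/high frequency split between the two drivers can be illustrated with a minimal complementary crossover; the one-pole filter, its coefficient, and the function name are assumptions for illustration only.

```python
# Minimal sketch of a low/high band split feeding separate drivers;
# the one-pole crossover and alpha value are illustrative assumptions.
def crossover(samples, alpha=0.1):
    """Split samples into (low_band, high_band) with a one-pole low-pass;
    the high band is the complement, so low + high reconstructs the input."""
    low, high, prev = [], [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)  # one-pole low-pass state
        low.append(prev)                  # routed to the low-frequency driver
        high.append(x - prev)             # routed to the high-frequency driver
    return low, high

low, high = crossover([1.0, 0.0, 0.0, 0.0])
# low[n] + high[n] reconstructs the original impulse sample-by-sample
```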

DELAYED AUDIO FOLLOWING
20230020792 · 2023-01-19 ·

Disclosed herein are systems and methods for presenting mixed reality audio. In an example method, audio is presented to a user of a wearable head device. A first position of the user's head at a first time is determined based on one or more sensors of the wearable head device. A second position of the user's head at a second time later than the first time is determined based on the one or more sensors. An audio signal is determined based on a difference between the first position and the second position. The audio signal is presented to the user via a speaker of the wearable head device. Determining the audio signal comprises determining an origin of the audio signal in a virtual environment. Presenting the audio signal to the user comprises presenting the audio signal as if originating from the determined origin. Determining the origin of the audio signal comprises applying an offset to a position of the user's head.
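A hedged sketch of the origin computation described above: the audio origin is offset from the current head position based on the difference between the two sampled positions, so the sound appears to lag the head. The lag gain and function name are assumptions.

```python
# Illustrative sketch (not the patent's implementation): place the audio
# origin behind the current head position by a fraction of the recent
# head displacement. lag_gain is an assumed tuning parameter.
def audio_origin(p1, p2, lag_gain=0.5):
    """p1: head position at the earlier time; p2: head position at the
    later time. Returns the virtual origin for the audio signal."""
    diff = tuple(b - a for a, b in zip(p1, p2))
    # Offset applied to the current head position, opposing the movement.
    return tuple(b - lag_gain * d for b, d in zip(p2, diff))

origin = audio_origin((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(origin)  # (0.5, 0.0, 0.0)
```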

ELECTRONIC DEVICE AND OPERATION METHOD OF ELECTRONIC DEVICE FOR CONTROLLING EXTERNAL ELECTRONIC DEVICE
20230018784 · 2023-01-19 ·

An electronic device is provided. The electronic device includes a communication module, a first ultra-wideband (UWB) module, a second UWB module, and a processor operatively connected to the communication module, the first UWB module, and the second UWB module, wherein the processor is configured to determine a direction of a user's gaze, based on data acquired from the first UWB module and data acquired from the second UWB module, select at least one external electronic device positioned in the gaze direction, and send, through the communication module, a request to the selected external electronic device to output media.
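The selection step can be sketched as matching a gaze bearing against device bearings derived from the two UWB modules; the angular threshold and all names below are illustrative assumptions.

```python
# Hypothetical sketch of gaze-based device selection: pick the external
# device whose UWB-derived bearing lies closest to the gaze direction.
# threshold_deg and the data layout are assumptions, not from the patent.
def select_device(gaze_angle_deg, devices, threshold_deg=15.0):
    """devices: list of (name, bearing_deg) pairs computed from the data
    of the two UWB modules; returns the device nearest the gaze direction,
    or None if none falls within the angular threshold."""
    best, best_err = None, threshold_deg
    for name, bearing in devices:
        # Smallest signed angular difference, wrapped to [-180, 180).
        err = abs((bearing - gaze_angle_deg + 180) % 360 - 180)
        if err <= best_err:
            best, best_err = name, err
    return best

print(select_device(10.0, [("tv", 12.0), ("speaker", 95.0)]))  # tv
```

A request to output media would then be sent to the returned device over the communication module.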

Sound Localization for an Electronic Call
20230224658 · 2023-07-13 ·

During an electronic call between two individuals, a sound localization point simulates a location in empty space from which the voice of one individual appears to originate for the other individual.

Supplementing Content

An apparatus, method and computer program product for: providing spatial audio content for output via at least one loudspeaker; determining a position of at least one audio device operatively connected to the at least one loudspeaker; providing, in response to determining that the position of the at least one audio device corresponds to an audio zone associated with additional spatial audio content, the additional spatial audio content for output via the at least one audio device; receiving an instruction to include the additional spatial audio content in the spatial audio content; and supplementing, in response to receiving the instruction, the spatial audio content with the additional spatial audio content such that the additional spatial audio content is provided for output independent of the audio zone.
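The zone-conditional playback and the effect of the supplementing instruction can be sketched as follows; the rectangular zone, stream names, and function names are assumptions for illustration.

```python
# Minimal sketch, assuming a rectangular audio zone; names and the
# stream model are illustrative, not taken from the application.
def in_zone(pos, zone):
    (x, y), ((x0, y0), (x1, y1)) = pos, zone
    return x0 <= x <= x1 and y0 <= y <= y1

def playback_targets(device_pos, zone, supplemented):
    """Return which streams the audio device outputs: the additional
    content plays while the device is inside its zone, or unconditionally
    once it has been supplemented into the main spatial audio content."""
    streams = ["spatial"]
    if supplemented or in_zone(device_pos, zone):
        streams.append("additional")
    return streams

zone = ((0.0, 0.0), (2.0, 2.0))
print(playback_targets((1.0, 1.0), zone, supplemented=False))  # ['spatial', 'additional']
print(playback_targets((5.0, 5.0), zone, supplemented=True))   # ['spatial', 'additional']
```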

Multi-viewpoint multi-user audio user experience

An apparatus including circuitry configured for receiving a spatial media content file including a plurality of viewpoints; circuitry configured for determining a first viewpoint from the plurality of viewpoints for a first user consuming the spatial media content file; circuitry configured for receiving an indication that affects an audio rendering of the first viewpoint for the first user, wherein the indication is associated with one or more actions of at least one second user consuming the spatial media content file; and circuitry configured for controlling the audio rendering of the first viewpoint for the first user in response to the receiving of the indication based on at least one of: a position and/or orientation of the first user, and the one or more actions of the at least one second user.

Sound field adjustment

A device includes one or more processors configured to receive, via wireless transmission from a streaming device, encoded ambisonics audio data representing a sound field. The one or more processors are also configured to perform decoding of the ambisonics audio data to generate decoded ambisonics audio data. The decoding of the ambisonics audio data includes base layer decoding of a base layer of the encoded ambisonics audio data and selectively includes enhancement layer decoding in response to an amount of movement of the device. The one or more processors are further configured to adjust the decoded ambisonics audio data to alter the sound field based on data associated with at least one of a translation or an orientation associated with the movement of the device. The one or more processors are also configured to output the adjusted decoded ambisonics audio data to two or more loudspeakers for playback.
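The selective layered decoding and the orientation-based adjustment described above can be sketched as below; the movement threshold, the first-order layer layout, and the yaw-rotation convention are all assumptions for illustration.

```python
# Hedged sketch of layered ambisonics decoding with a movement-gated
# enhancement layer; the threshold and layer shapes are assumptions.
import math

def decode_layers(base_layer, enhancement_layer, movement):
    """Decode the base layer always; include the enhancement layer only
    when device movement is below an assumed threshold."""
    coeffs = list(base_layer)             # e.g. first-order (W, X, Y, Z)
    if movement < 0.5:                    # assumed movement threshold
        coeffs.extend(enhancement_layer)  # higher-order coefficients
    return coeffs

def rotate_yaw(coeffs, yaw):
    """Rotate the first-order sound field about the vertical axis so the
    rendered scene counters the device's orientation change."""
    w, x, y, z = coeffs[:4]
    c, s = math.cos(yaw), math.sin(yaw)
    return [w, c * x - s * y, s * x + c * y, z] + coeffs[4:]

full = decode_layers([1.0, 0.0, 1.0, 0.0], [0.2, 0.1], movement=0.1)
print(len(full))  # 6
```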

VIRTUAL AND MIXED REALITY AUDIO SYSTEM ENVIRONMENT CORRECTION
20230224667 · 2023-07-13 ·

A virtual reality (VR), augmented reality (AR) and/or mixed reality (MR) system in a physical environment with a plurality of loudspeakers includes a user-worn head-mounted display (HMD), a VR/AR/MR processor, and a VR/AR/MR user tracking processor. The HMD includes a microphone and a user tracking device configured to track a user orientation and position. The VR/AR/MR processor delivers a digital video signal to the head-mounted display, and a digital control signal and a digital audio signal to a receiver/preamplifier. The VR/AR/MR user tracking processor receives user tracking data from the HMD user tracking device and provides a digital user tracking data signal to the receiver/preamplifier. The receiver/preamplifier receives the digital user tracking data signal, the digital control signal, the digitized microphone signal, and the digital audio signal, and provides a processed audio signal to an amplifier. The amplifier receives the processed audio signal and provides amplified audio signals.

APPARATUS FOR IMMERSIVE SPATIAL AUDIO MODELING AND RENDERING

Disclosed is an apparatus for immersive spatial audio modeling and rendering that effectively transmits and plays immersive spatial audio content. The apparatus disclosed herein may model a spatial audio scene, generate and transmit parameters necessary for spatial audio rendering, and generate various spatial audio effects using the spatial audio parameters, to provide an immersive three-dimensional (3D) audio source consistent with the visual experience in a virtual reality space as a remote user freely changes position and orientation within the space.