H04S7/308

Systems, Methods, and Graphical User Interfaces for Selecting Audio Output Modes of Wearable Audio Output Devices
20220374197 · 2022-11-24

A computer system displays an audio settings user interface that includes a first user interface element that is activatable to change a current audio output mode of a set of one or more audio output devices. The computer system receives a second set of one or more inputs including an input directed to the first user interface element. In response, the computer system transitions the set of one or more audio output devices from a first audio output mode to a different second audio output mode. In the first audio output mode, audio is output based on a first frame of reference that is a three-dimensional physical environment surrounding the set of one or more audio output devices. In the second audio output mode, audio is output based on a different second frame of reference that is fixed relative to the set of one or more audio output devices.
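The two modes differ only in which frame of reference anchors the audio. A minimal sketch of the toggle, assuming illustrative mode names ("spatial_fixed" for the environment-referenced mode, "head_locked" for the device-referenced mode) that are not the patent's terminology:

```python
from enum import Enum

class FrameOfReference(Enum):
    ENVIRONMENT = "environment"  # fixed to the 3D physical space around the devices
    DEVICE = "device"            # fixed relative to the output devices themselves

# Hypothetical mapping of the two audio output modes described in the abstract.
MODES = {
    "spatial_fixed": FrameOfReference.ENVIRONMENT,
    "head_locked": FrameOfReference.DEVICE,
}

def toggle_output_mode(current):
    """Activating the settings element switches between the two modes."""
    return "head_locked" if current == "spatial_fixed" else "spatial_fixed"
```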

Bidirectional propagation of sound

The description relates to rendering directional sound. One implementation includes receiving directional impulse responses corresponding to a scene. The directional impulse responses can correspond to multiple sound source locations and a listener location in the scene. The implementation can also include encoding the directional impulse responses to obtain encoded departure direction parameters for individual sound source locations. The implementation can also include outputting the encoded departure direction parameters, the encoded departure direction parameters providing sound departure directions from the individual sound source locations for rendering of sound.
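One plausible way to encode a departure direction parameter from a directional impulse response is an energy-weighted average of the arrival directions. This sketch assumes a simplified (direction, energy) representation of the impulse response, which is an illustrative assumption rather than the patent's format:

```python
import math

def encode_departure_direction(directional_ir):
    """directional_ir: list of ((x, y, z), energy) arrivals making up the
    directional impulse response for one source location. Returns a unit
    vector: the energy-weighted average departure direction."""
    sx = sy = sz = 0.0
    for (x, y, z), energy in directional_ir:
        sx += energy * x
        sy += energy * y
        sz += energy * z
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    if norm == 0.0:
        return (0.0, 0.0, 0.0)  # no dominant direction recoverable
    return (sx / norm, sy / norm, sz / norm)
```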

AUDIO ENCODING/DECODING WITH TRANSFORM PARAMETERS

Encoding/decoding techniques where multiple transform parameter sets are encoded together with a rendered playback presentation of an input audio content. The transform parameter sets are used on the decoder side to transform the playback presentation into a personalized binaural playback presentation optimized for an individual listener's hearing profile. This may be achieved by selection or combination of the data present in the metadata streams.
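"Selection or combination" of transmitted parameter sets could look like the sketch below, which blends the two sets bracketing the listener's profile. The scalar hearing-profile value and per-band gain lists are illustrative assumptions, not the patent's metadata format:

```python
def personalize(parameter_sets, listener_loss_db):
    """parameter_sets: list of (profile_loss_db, per_band_gains) pairs carried
    in the metadata streams. Selects the nearest set outside the covered range,
    otherwise linearly combines the two bracketing sets."""
    sets = sorted(parameter_sets)
    if listener_loss_db <= sets[0][0]:
        return sets[0][1]
    if listener_loss_db >= sets[-1][0]:
        return sets[-1][1]
    for (lo, g_lo), (hi, g_hi) in zip(sets, sets[1:]):
        if lo <= listener_loss_db <= hi:
            w = (listener_loss_db - lo) / (hi - lo)
            return [(1 - w) * a + w * b for a, b in zip(g_lo, g_hi)]
```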

Audio processing apparatus and method therefor

An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703). The render controller (709) can select the rendering mode for a first audio transducer of the set of audio transducers (703) in response to a position of the first audio transducer relative to a predetermined position for that audio transducer. The approach may provide improved adaptation, e.g. to scenarios where most speakers are at their desired positions while a subset deviates from the desired position(s).
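Per-transducer mode selection based on deviation from a predetermined position might be sketched as follows. The mode names and the 0.25 m threshold are illustrative assumptions; the patent does not specify them:

```python
import math

def select_rendering_modes(actual, predetermined, threshold=0.25):
    """actual / predetermined: {transducer_id: (x, y, z)} positions in metres.
    Independently selects a rendering mode for each transducer based on its
    deviation from the desired position."""
    modes = {}
    for tid, pos in actual.items():
        deviation = math.dist(pos, predetermined[tid])
        # Close to the intended layout: standard amplitude panning; otherwise
        # fall back to a position-compensating rendering mode.
        modes[tid] = "panning" if deviation <= threshold else "position_compensated"
    return modes
```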

SYSTEM AND METHOD FOR ADAPTIVE AUDIO SIGNAL GENERATION, CODING AND RENDERING

Embodiments are described for an adaptive audio system that processes audio data comprising a number of independent monophonic audio streams. One or more of the streams has associated with it metadata that specifies whether the stream is a channel-based or object-based stream. Channel-based streams have rendering information encoded by means of channel name; and the object-based streams have location information encoded through location expressions encoded in the associated metadata. A codec packages the independent audio streams into a single serial bitstream that contains all of the audio data. This configuration allows for the sound to be rendered according to an allocentric frame of reference, in which the rendering location of a sound is based on the characteristics of the playback environment (e.g., room size, shape, etc.) to correspond to the mixer's intent. The object position metadata contains the appropriate allocentric frame of reference information required to play the sound correctly using the available speaker positions in a room that is set up to play the adaptive audio content.
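The channel/object distinction in the metadata could be routed roughly as below: channel-based streams resolve by channel name, while object-based streams with allocentric positions snap to the nearest available speaker. The dict-based stream format and snapping policy are illustrative assumptions (real renderers typically pan across several speakers):

```python
def route_stream(stream, speaker_positions):
    """stream: dict with 'type' ('channel' or 'object') plus either a channel
    'name' or an allocentric 'position' expressed relative to the room.
    speaker_positions: {speaker_name: (x, y, z)} in the same allocentric space.
    Returns the speaker that should carry the stream."""
    if stream["type"] == "channel":
        return stream["name"]  # rendering information encoded by channel name
    px, py, pz = stream["position"]
    # Object-based: use the positional metadata against the available speakers.
    return min(
        speaker_positions,
        key=lambda s: sum((a - b) ** 2
                          for a, b in zip(speaker_positions[s], (px, py, pz))),
    )
```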

Jackpot machine with surround audio
11574518 · 2023-02-07

A jackpot machine with surround audio is provided. The jackpot machine includes a processor that processes at least one audio signal to generate at least one 3D surround audio signal, a sending means that sends the generated 3D surround audio signal to a headphone worn by a player, and a display showing at least one graphical representation of a prize. The graphical representation of the prize is associated with the audio signal, and the prize is perceived by the player to audibly and visually move around the player in accordance with the generated 3D surround audio signal and the graphical representation shown on the display, respectively.

Dynamic positional audio

System and methods for providing dynamic positional audio are disclosed. Methods can comprise determining availability of one or more devices to output audio and determining a location of the one or more available devices. Audio information can be received and at least a portion of the audio information can be configured to generate assigned audio information based on the determined location of the available devices. The assigned audio information can be transmitted to the available devices.
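Assigning audio based on determined device locations might reduce to a nearest-device mapping, as in this sketch. The 2D coordinates and one-device-per-channel policy are illustrative assumptions:

```python
def assign_audio(channels, device_locations):
    """channels: {channel_name: (x, y)} intended playback positions.
    device_locations: {device_id: (x, y)} for devices that reported
    availability. Each channel's audio is assigned to the nearest device."""
    assignments = {}
    for ch, (cx, cy) in channels.items():
        assignments[ch] = min(
            device_locations,
            key=lambda d: (device_locations[d][0] - cx) ** 2
                          + (device_locations[d][1] - cy) ** 2,
        )
    return assignments
```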

Mobile electronic device and audio server for coordinated playout of audio media content

A mobile electronic device (device) operates with other devices to form a group for wireless coordinated playout of audio media content. The device's processor performs operations that determine a common time sync shared with the other devices, and determine a timing of occurrence of a sound transient, sensed by the device's microphone, relative to the common time sync. The operations receive timing reports from the other devices, where each timing report indicates the timing of occurrence of the sound transient sensed at a respective one of the other devices relative to the common time sync. The sound transient sensed by the devices is generated at a preferred listening location. The operations coordinate the timing of playout of audio media content by the group of devices responsive to a comparison of the timing of occurrence of the sound transient sensed by the microphone of the device and the timings of occurrence indicated by the timing reports.
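The comparison of transient timings admits a simple alignment rule: a device that sensed the transient later is farther from the listening location, so it must start playback earlier by the same amount for all audio to arrive at the listener simultaneously. A sketch under that assumption (the exact coordination algorithm is not specified in the abstract):

```python
def playout_offsets(transient_times):
    """transient_times: {device_id: time, in the common time sync, at which
    the device sensed the transient generated at the listening location}.
    Returns nonnegative start-time offsets so that, for every device,
    offset + propagation delay back to the listener is the same constant."""
    t_max = max(transient_times.values())
    # Farthest device (latest sensing) starts first, at offset 0.
    return {dev: t_max - t for dev, t in transient_times.items()}
```

With these offsets, arrival time at the listener is `offset + t = t_max` for every device, so the group plays out in alignment at the preferred listening location.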

Signalling Change Events at an Audio Output Device

An apparatus, method and computer program for signaling change events at an audio output device are disclosed. The apparatus may include circuitry for detecting, during an audio communications session between a first user device and a second user device, a change event at a first earphones device associated with the first user device, the first earphones device comprising first and second earphones. The apparatus may also include circuitry for signaling the detected change event to the second user device and/or a second audio output device associated with the second user device.

DISABLING SPATIAL AUDIO PROCESSING

In one example in accordance with the present disclosure, a system is described. The system includes a processor to perform spatial audio processing on a received audio signal and an audio interface to connect an audio output device to a computing device. The system also includes a controller. The controller determines a spatial audio processing capability of the audio output device and disables spatial audio processing on one of the audio output device and the processor based on a determination of the spatial audio processing capability of the audio output device.
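The controller's decision disables spatial processing on exactly one of the two sides, so the signal is neither spatialized twice nor left unprocessed. A minimal sketch, with hypothetical key names:

```python
def configure_spatial_processing(device_capability):
    """device_capability: True if the connected audio output device can
    perform spatial audio processing itself. Returns which side should run
    spatial processing; exactly one side is enabled in either case."""
    if device_capability:
        # Capable device: disable the host processor's spatial pass.
        return {"processor_spatial": False, "device_spatial": True}
    # Incapable device: keep spatial processing on the host processor.
    return {"processor_spatial": True, "device_spatial": False}
```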