Patent classifications
H04S2400/11
AUDIO FILTER EFFECTS VIA SPATIAL TRANSFORMATIONS
An audio system of a client device applies transformations to audio received over a computer network. The transformations (e.g., HRTFs) effect changes in apparent source positions of the received audio, or of segments thereof. Such transformations may be used to achieve “animation” of audio, in which the source positions of the audio or audio segments appear to change over time (e.g., circling around the listener). Additionally, segmentation of audio into distinct semantic audio segments, and application of separate transformations for each audio segment, can be used to intuitively differentiate the different audio segments by causing them to sound as if they emanated from different positions around the listener.
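As a rough illustration of the "animation" idea, the sketch below rotates a mono source's apparent position around the listener over time. It uses equal-power stereo panning as a simple stand-in for full HRTF convolution; the function name and parameters are illustrative, not taken from the patent:

```python
import math

def animate_circling_pan(mono, sample_rate, revolutions_per_sec=0.25):
    """Rotate the apparent source position of a mono signal around the
    listener over time, using equal-power stereo panning as a stand-in
    for HRTF-based spatialization."""
    left, right = [], []
    for n, sample in enumerate(mono):
        azimuth = 2 * math.pi * revolutions_per_sec * n / sample_rate
        # Map the azimuth to a pan position in [0, 1]; 0 = hard left, 1 = hard right.
        pan = (math.sin(azimuth) + 1) / 2
        # Equal-power pan law: left^2 + right^2 stays constant.
        left.append(sample * math.cos(pan * math.pi / 2))
        right.append(sample * math.sin(pan * math.pi / 2))
    return left, right
```

Applying a distinct trajectory of this kind to each semantic audio segment is one way the differentiation described above could sound in practice.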
Method of providing sound that matches displayed image and display device using the method
A method of providing sounds matching an image displayed on a display panel includes: identifying a first object in the image by analyzing digital video data corresponding to the image, calculating first gain values based on a location of the first object, and applying the first gain values to a plurality of sound data; displaying the image on the display panel based on the digital video data; and outputting a plurality of sounds, using a plurality of sound generating devices, by vibrating the display panel based on the plurality of sound data to which the first gain values are applied.
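A minimal sketch of the gain calculation, assuming the sound generating devices are spaced evenly across the panel and gains fall off linearly with distance from the object's horizontal position (the abstract does not specify the weighting; all names are hypothetical):

```python
def first_gain_values(object_x, panel_width, num_devices):
    """Compute one gain per sound generating device so output is
    strongest at the device nearest the object's horizontal position.
    Linear distance weighting, normalized to sum to 1 (an illustrative
    choice, not specified by the abstract)."""
    centers = [(i + 0.5) * panel_width / num_devices for i in range(num_devices)]
    weights = [max(0.0, 1.0 - abs(object_x - c) / panel_width) for c in centers]
    total = sum(weights)
    return [w / total for w in weights]
```

For an object at the left edge of a two-device panel, this yields higher gain on the left device, so the sound appears to come from the object's on-screen location.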
AUDIO INTEGRATION OF PORTABLE ELECTRONIC DEVICES FOR ENCLOSED ENVIRONMENTS
Implementations of the subject technology provide for audio integration of portable electronic devices into enclosed environments. A portable electronic device may be carried, by a user, into an enclosed environment, such as into an enclosure of a building, a room, or other apparatus. One or more remote speakers may be disposed in the enclosed environment. The remote speaker(s) may be operated in cooperation with the portable electronic device to spatially coordinate audio output from the remote speaker(s) with video content displayed by the portable electronic device.
Apparatus, method and computer program for providing notifications
An apparatus, method and computer program, the apparatus including means for determining that perspective mediated content is available within content provided to a rendering device; and means for adding a notification to the content indicative that perspective mediated content is available; wherein the notification includes spatial audio effects added to the content.
Auditory wearable device management system, auditory wearable device management method, and program thereof
An auditory wearable device management system is provided for supporting effective use of an auditory wearable device (hereinafter abbreviated as an auditory device) shared by a group of people. The system distributes audio information from one or more information sound sources to an auditory device worn by each wearer in a group of people acting collectively, and includes: wearing information holding means that associates each wearer, as wearing information, with the wearing status of the auditory device worn by that wearer; and audio information distribution control means that outputs distribution control information designating the wearer who is the distribution destination of the audio information from each of the one or more information sound sources.
Emphasis for audio spatialization
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a first input audio signal is received. The first input audio signal is processed to generate a first output audio signal. The first output audio signal is presented via one or more speakers associated with the wearable head device. Processing the first input audio signal comprises applying a pre-emphasis filter to the first input audio signal; adjusting a gain of the first input audio signal; and applying a de-emphasis filter to the first input audio signal. Applying the pre-emphasis filter to the first input audio signal comprises attenuating a low frequency component of the first input audio signal. Applying the de-emphasis filter to the first input audio signal comprises attenuating a high frequency component of the first input audio signal.
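The pre-emphasis/gain/de-emphasis chain can be sketched with a complementary first-order filter pair, where the de-emphasis stage exactly inverts the pre-emphasis stage; the coefficient and gain values below are illustrative, not taken from the disclosure:

```python
def pre_emphasis(x, a=0.95):
    # First-order difference filter: attenuates low-frequency content.
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

def de_emphasis(x, a=0.95):
    # One-pole recursive filter: attenuates high-frequency content,
    # exactly inverting pre_emphasis with the same coefficient.
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(x[n] + a * y[n - 1])
    return y

def process(x, gain=2.0, a=0.95):
    """Pre-emphasize, apply gain, then de-emphasize, in the order
    described by the abstract (coefficients are illustrative)."""
    return de_emphasis([gain * s for s in pre_emphasis(x, a)], a)
```

Because the two filters are exact inverses here, the chain reduces to a pure gain on the signal; in practice the gain stage would be something nonlinear (e.g., a compressor), which is why shaping the spectrum before and after it matters.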
Scalable extended reality video conferencing
Some embodiments of the present inventive concept provide dynamic scaling and/or assignment of computing resources for improved telepresence and other virtual sessions. An XR telepresence platform can allow for immersive multi-user video conferencing from within a web browser or other medium. The platform can support spatial audio and/or user video. The platform can scale to hundreds or thousands of users concurrently in a single virtual environment or multiple virtual environments. Disclosed herein are resource allocation techniques for dynamically allocating client connections across multiple servers.
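One simple way to dynamically allocate client connections across servers is a least-connections policy; this is a hedged sketch only, as the abstract does not describe the platform's actual allocation strategy:

```python
def assign_connection(servers):
    """Assign a new client connection to the server currently holding
    the fewest connections (illustrative least-connections policy)."""
    target = min(servers, key=lambda s: s["connections"])
    target["connections"] += 1
    return target["name"]
```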
LIVE DATA DISTRIBUTION METHOD, LIVE DATA DISTRIBUTION SYSTEM, AND LIVE DATA DISTRIBUTION APPARATUS
A live data distribution method obtains, as distribution data, first sound source information according to a sound of a first sound source generated at a first location of a first venue together with position information of the first sound source, and second sound source information according to a second sound source including an ambient sound generated at a second location of the first venue; distributes the distribution data to a second venue; and renders the distribution data at the second venue, providing first sound of the first sound source subjected to localization processing based on the position information of the first sound source, and second sound of the second sound source.
LAYERED DESCRIPTION OF SPACE OF INTEREST
Aspects of the disclosure provide methods and apparatuses for audio processing. In some examples, an apparatus for media processing includes processing circuitry. The processing circuitry receives audio inputs associated with a layered description for a space of interest in an audio scene. The space of interest includes a plurality of subspaces. The layered description includes a first layer and a second layer. The first layer has a common node with a first value that is a common attribute value of two or more subspaces in the plurality of subspaces. The second layer has individual nodes respectively associated with each of the plurality of subspaces. The processing circuitry determines the plurality of subspaces of the space of interest based on the layered description, and renders an audio output based on the audio inputs in response to a location of a subject of the audio scene being in the space of interest.
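The two-layer resolution can be sketched as follows, with the common layer supplying attribute values shared by the subspaces and each individual node extending or overriding them (the field names are illustrative, not from any specification):

```python
def resolve_subspaces(layered):
    """Expand a layered description into per-subspace attribute dicts:
    common-layer values apply to every listed subspace, then each
    individual node's values override or extend them."""
    common = layered["common"]["attributes"]
    resolved = {}
    for node in layered["individual"]:
        attrs = dict(common)          # start from the shared values
        attrs.update(node["attributes"])  # individual layer wins on conflict
        resolved[node["id"]] = attrs
    return resolved
```

The design benefit is compactness: a value shared by many subspaces is stated once in the common node rather than repeated per subspace.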
Use of local link to support transmission of spatial audio in a virtual environment
A method including enabling a first user having a first user device to communicate with one or more second users having one or more second user devices via a network, wherein each user has a spatial position within a virtual space, such that, for each user within the virtual space, all other users within the virtual space have a relative spatial position; providing spatial audio data from the first user device to the one or more second user devices and receiving spatial audio data at the first user device from the one or more second user devices, such that each user is provided with audio from the other users in the respective relative spatial positions of the other users; and enabling a third user having a third user device to communicate with the first user and the one or more second users via the first user device.
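Computing the relative spatial positions the method relies on can be sketched in 2D: translate each other user's position into the listener's frame, then rotate by the listener's heading (the names and the 2D simplification are illustrative):

```python
import math

def relative_positions(listener_pos, listener_yaw, others):
    """For a listener at listener_pos facing listener_yaw (radians),
    return each other user's position in the listener's local frame,
    so spatial audio can be rendered from the correct direction."""
    lx, ly = listener_pos
    cos_y, sin_y = math.cos(-listener_yaw), math.sin(-listener_yaw)
    rel = {}
    for name, (x, y) in others.items():
        dx, dy = x - lx, y - ly
        # Rotate the world-frame offset into the listener's frame.
        rel[name] = (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)
    return rel
```

Each device would feed these listener-relative coordinates into its spatial audio renderer, so every user hears the others from their respective directions in the virtual space.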