Patent classifications
H04S2400/11
VOICE OUTPUT CONTROL DEVICE, CONFERENCE SYSTEM DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A voice output control device includes a base control unit configured to set, based on information on relative positions between an own base of the base control unit and other bases, a direction of a position where voice to be output to each of the other bases is localized, and set, based on information on relative distances between the own base and the other bases, a height of the position where the voice is localized; and a sound source processor configured to localize voice from the other base to generate a voice signal to be output, based on the position set by the base control unit.
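The mechanism in the abstract above could be sketched roughly as follows: azimuth derived from the relative position of another base, and height (elevation) derived from the relative distance. The linear distance-to-elevation mapping, the 45-degree cap, and the `max_distance` parameter are illustrative assumptions, not details from the patent.

```python
import math

def localization_position(own_xy, other_xy, max_distance=100.0):
    """Direction (azimuth) from the relative position of the other base;
    height (elevation) from the relative distance.
    The linear mapping and 45-degree cap are assumed for illustration."""
    dx = other_xy[0] - own_xy[0]
    dy = other_xy[1] - own_xy[1]
    azimuth_deg = math.degrees(math.atan2(dy, dx))   # direction of the other base
    distance = math.hypot(dx, dy)
    # Farther bases are localized higher, capped at 45 degrees (assumed rule).
    elevation_deg = 45.0 * min(distance / max_distance, 1.0)
    return azimuth_deg, elevation_deg
```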
SPATIAL AUDIO CONTROLLER
A method performed by a local device that is communicatively coupled with several remote devices, the method including: receiving, from each remote device with which the local device is engaged in a communication session, an input audio stream; receiving, for each remote device, a set of parameters; determining, for each input audio stream, whether the input audio stream is to be 1) rendered individually or 2) rendered as a mix of input audio streams based on the set of parameters; for each input audio stream that is determined to be rendered individually, spatially rendering the input audio stream as an individual virtual sound source that contains only that input audio stream; and for input audio streams that are determined to be rendered as the mix of input audio streams, spatially rendering the mix of input audio streams as a single virtual sound source that contains the mix of input audio streams.
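The per-stream decision above can be sketched as a partition of incoming streams. The specific rule (a stream whose parameters mark it as high priority is rendered individually) is a hypothetical stand-in; the patent only says the decision is based on the received parameter set.

```python
def partition_streams(stream_params):
    """stream_params: dict stream_id -> parameter dict received for that
    remote device. Hypothetical rule: priority == "high" gets an individual
    virtual source; all other streams are mixed into one virtual source."""
    individual, mixed = [], []
    for sid, params in stream_params.items():
        (individual if params.get("priority") == "high" else mixed).append(sid)
    return individual, mixed
```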
Providing Digital Media with Spatial Audio to the Blockchain
Methods and apparatus provide digital media with spatial audio to a blockchain. The blockchain network executes a decentralized application (Dapp) with a user interface (UI) that enables a user to select audio for spatialization and uploading to the blockchain. The spatial audio is transmitted to the blockchain network, reducing the processing and transmission of network data.
Audio apparatus and method of operation therefor
An audio apparatus, e.g. for rendering audio for a virtual/augmented reality application, comprises a receiver (201) for receiving audio data for an audio scene including a first audio component representing a real-world audio source present in an audio environment of a user. A determinator (203) determines a first property of a real-world audio component from the real-world audio source and a target processor (205) determines a target property for a combined audio component being a combination of the real-world audio component received by the user and rendered audio of the first audio component received by the user. An adjuster determines a render property by modifying a property of the first audio component indicated by the audio data for the first audio component in response to the target property and the first property. A renderer (209) renders the first audio component in response to the render property.
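The adjuster's role above can be illustrated numerically: given a target level for the combined sound and the measured level of the real-world component, solve for the level at which the first audio component should be rendered. Incoherent power addition of the two components is an assumption here, not the patented rule.

```python
import math

def render_level_db(target_combined_db, real_world_db, floor_db=-80.0):
    """Solve for the rendered level so that rendered + real-world power
    reaches the target combined level (incoherent power sum assumed)."""
    target_p = 10 ** (target_combined_db / 10)
    real_p = 10 ** (real_world_db / 10)
    residual = target_p - real_p
    if residual <= 0:
        return floor_db  # real-world sound alone already meets the target
    return 10 * math.log10(residual)
```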
Spatial audio processing
According to an example embodiment, a method for processing a spatial audio signal that represents an audio scene, wherein the spatial audio signal is controllable and associated with at least two viewing directions is provided, the method including receiving a focus direction and a focus amount; processing the spatial audio signal by modifying the audio scene so as to control emphasis in, at least in part, a portion of the spatial audio signal in said focus direction according to said focus amount; and outputting the processed spatial audio signal, wherein the modified audio scene enables the emphasis in, at least in part, said portion of the spatial audio signal in said focus direction according to said focus amount.
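A minimal per-source sketch of the focus control described above: sources near the focus direction keep full gain, sources away from it are attenuated in proportion to the focus amount. The cosine weighting is an assumed emphasis curve, not taken from the patent.

```python
import math

def focus_gain(source_dir_deg, focus_dir_deg, focus_amount):
    """Gain for one source given a focus direction and a focus amount in
    0..1. Cosine alignment weighting is an illustrative assumption."""
    diff = math.radians((source_dir_deg - focus_dir_deg + 180) % 360 - 180)
    alignment = 0.5 * (1.0 + math.cos(diff))  # 1 on-focus, 0 opposite
    return (1.0 - focus_amount) + focus_amount * alignment
```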
Apparatus and method for processing volumetric audio
A method including receiving an audio scene including at least one source captured using at least one near field microphone and at least one far field microphone. The method includes determining at least one room impulse response (RIR) associated with the audio scene based on the at least one near field microphone and the at least one far field microphone, accessing a predetermined scene geometry corresponding to the audio scene, and identifying the best match to the predetermined scene geometry in a scene geometry database. The method also includes performing an RIR comparison based on the at least one RIR and at least one geometric RIR associated with the best matching geometry, and rendering a volumetric audio scene based on a result of the RIR comparison.
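The geometry-matching step could be sketched as a nearest-neighbor search over a database of geometric RIRs. Mean squared error as the comparison metric is an assumption; the patent does not specify how the RIRs are compared.

```python
def best_matching_geometry(measured_rir, geometry_db):
    """geometry_db: dict geometry_name -> simulated ("geometric") RIR of the
    same length as measured_rir. Returns the closest geometry by MSE."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(geometry_db, key=lambda name: mse(measured_rir, geometry_db[name]))
```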
Information processing apparatus and information processing method
Provided is an information processing apparatus that includes a determination unit and a display control unit. The determination unit obtains a position of a virtual object relative to a display region and determines whether or not a correction allowable region set in a region different from the virtual object overlaps at least a part of the display region when the virtual object is located outside the display region. The display control unit causes at least a part of a display object showing the virtual object in the display region to be displayed in a case where the determination unit determines that the correction allowable region overlaps at least the part of the display region.
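The determination unit's overlap test reduces to an axis-aligned rectangle intersection check, assuming both the display region and the correction allowable region are rectangles (a simplification; the patent does not restrict their shape).

```python
def regions_overlap(a, b):
    """Axis-aligned rectangle overlap; each rect is (x0, y0, x1, y1) with
    x0 < x1 and y0 < y1. True if the regions share any interior area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
```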
Extrapolation of acoustic parameters from mapping server
Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and acoustic properties of those spaces, wherein the location in the virtual model corresponds to a physical location of the headset. A location in the virtual model for the headset is determined based on information describing at least a portion of the local area received from the headset. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with the determined location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
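Server-side, the lookup above might be approximated as a nearest-location query against the virtual model. Returning the parameter set of the single nearest modeled location is a stand-in for the extrapolation the patent describes; the `rt60` key used in the test is a hypothetical parameter name.

```python
def acoustic_parameters(headset_loc, virtual_model):
    """virtual_model: dict (x, y) location -> acoustic parameter dict.
    Return the parameters stored at the modeled location nearest the
    headset (nearest-neighbor stand-in for extrapolation)."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = min(virtual_model, key=lambda loc: dist2(loc, headset_loc))
    return virtual_model[nearest]
```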
Stereophonic apparatus for blind and visually-impaired people
A method and a wearable system that includes distance sensors, cameras, and headsets, all of which gather data about a blind or visually impaired person's surroundings and are connected to a portable personal communication device. The device is configured to use scenario-based algorithms and an AI to process the data and transmit sound instructions to the blind or visually impaired person, enabling him/her to independently navigate and interact with his/her environment through identification of objects and reading of local text.
Electronic system for producing a coordinated output using wireless localization of multiple portable electronic devices
Device localization (e.g., ultra-wideband device localization) may be used to provide coordinated outputs and/or receive coordinated inputs using multiple devices. Providing coordinated outputs may include providing partial outputs using multiple devices, modifying an output of a device based on its position and/or orientation relative to another device, and the like. In some cases, each device of a set of multiple devices may provide a partial output, which combines with partial outputs of the remaining devices to produce a coordinated output.
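As a toy illustration of position-dependent partial outputs, devices could be assigned channel roles from their localized positions relative to the listener. The simple left/right split on a single coordinate is an assumption; real systems would use full UWB ranging and finer-grained roles.

```python
def assign_channels(device_x_positions):
    """device_x_positions: dict device_id -> x coordinate relative to the
    listener (negative = left). Devices left of the listener play the left
    partial output; the rest play the right (illustrative split)."""
    return {dev: ("left" if x < 0 else "right")
            for dev, x in device_x_positions.items()}
```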