Patent classifications
H04S7/304
ACOUSTIC OUTPUT APPARATUS
The present disclosure relates to an acoustic output apparatus. The acoustic output apparatus comprises: at least one low-frequency acoustic driver that outputs sound from at least two first sound guiding holes; at least one high-frequency acoustic driver that outputs sound from at least two second sound guiding holes; and a controller configured to cause the low-frequency acoustic driver to output sound in a first frequency range, and cause the high-frequency acoustic driver to output sound in a second frequency range, wherein the second frequency range includes frequencies higher than the first frequency range.
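The controller's band-splitting behavior can be illustrated with a minimal sketch; the one-pole filter, the `split_bands` name, and the `alpha` coefficient are assumptions for illustration, not the patented implementation:

```python
# Minimal sketch of a controller splitting a signal into a low band
# (for the low-frequency driver) and a complementary high band
# (for the high-frequency driver). Names and filter are illustrative.

def split_bands(samples, alpha=0.1):
    """One-pole low-pass feeds the low-frequency driver; the residual
    (input minus low band) feeds the high-frequency driver."""
    low, high = [], []
    state = 0.0
    for x in samples:
        state += alpha * (x - state)   # one-pole low-pass update
        low.append(state)
        high.append(x - state)         # complementary high band
    return low, high

low, high = split_bands([1.0, 0.0, -1.0, 0.0] * 4)
# low[i] + high[i] reconstructs the input sample-by-sample
```

Because the high band is formed as the residual, the two driver signals always sum back to the original input, a common property of complementary crossovers.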
Head-mounted display apparatus, sound image output system, and method of outputting sound image
An HMD includes a display unit mounted on a head of a user, and configured to display an image such that a real object located in a real space is visually recognizable, a right earphone and a left earphone configured to output a sound, a position specification unit configured to specify a position of the real object and a virtual object, and a sound output control unit configured to generate a synthesized sound with the position of the virtual object as a sound source position, and output the synthesized sound from the right earphone and the left earphone. The sound output control unit adjusts the synthesized sound so that the synthesized sound becomes an audible sound bypassing the real object when the position of the real object is located between the position of the virtual object and a position of the display unit.
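The occlusion adjustment described above might be sketched, in 2-D and greatly simplified, as a gain reduction whenever the real object lies near the straight path from the virtual source to the listener. All names and the 0.25 attenuation factor are hypothetical:

```python
# Hypothetical sketch: attenuate the synthesized sound when a real
# object sits between the virtual sound source and the listener.

def point_to_segment_dist(p, a, b):
    """Distance from point p to segment a-b (2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy   # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def occlusion_gain(listener, source, obstacle, radius, attenuation=0.25):
    """Gain applied to the synthesized sound; reduced when occluded."""
    occluded = point_to_segment_dist(obstacle, listener, source) < radius
    return attenuation if occluded else 1.0

g = occlusion_gain((0, 0), (10, 0), (5, 0.1), radius=0.5)  # obstacle on the path
```

A real system would likely also low-pass filter the occluded sound, since obstacles attenuate high frequencies more than low ones.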
Wearer identification based on personalized acoustic transfer functions
A wearable device includes an audio system. In one embodiment, the audio system includes a sensor array that includes a plurality of acoustic sensors. When a user wears the wearable device, the audio system determines an acoustic transfer function for the user based upon detected sounds within a local area surrounding the sensor array. Because the acoustic transfer function is based upon the size, shape, and density of the user's body (e.g., the user's head), different acoustic transfer functions will be determined for different users. The determined acoustic transfer functions are compared with stored acoustic transfer functions of known users in order to authenticate the user of the wearable device.
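A hedged sketch of the comparison step: matching a measured acoustic transfer function (represented here as a plain feature vector) against stored user profiles by Euclidean distance. The vector representation, names, and `threshold` value are illustrative assumptions, not the patented method:

```python
# Illustrative sketch of authenticating a wearer by comparing a measured
# acoustic transfer function against stored profiles of known users.

def authenticate(measured, known_users, threshold=0.1):
    """Return the user whose stored transfer function is closest to the
    measured one, or None if no stored profile is within the threshold."""
    best_user, best_dist = None, float("inf")
    for user, stored in known_users.items():
        d = sum((m - s) ** 2 for m, s in zip(measured, stored)) ** 0.5
        if d < best_dist:
            best_user, best_dist = user, d
    return best_user if best_dist <= threshold else None

users = {"alice": [1.0, 0.8, 0.5], "bob": [0.2, 0.9, 0.7]}
match = authenticate([1.02, 0.79, 0.51], users)  # close to alice's profile
```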
SPATIAL AUDIO CONTROLLER
A method is performed by a local device that is communicatively coupled with several remote devices, the method including: receiving, from each remote device with which the local device is engaged in a communication session, an input audio stream; receiving, for each remote device, a set of parameters; determining, for each input audio stream, whether the input audio stream is to be 1) rendered individually or 2) rendered as a mix of input audio streams, based on the set of parameters; for each input audio stream that is determined to be rendered individually, spatially rendering the input audio stream as an individual virtual sound source that contains only that input audio stream; and for input audio streams that are determined to be rendered as the mix of input audio streams, spatially rendering the mix of input audio streams as a single virtual sound source that contains the mix of input audio streams.
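The per-stream routing decision could be sketched as follows; the `prioritized` parameter is a hypothetical stand-in for the "set of parameters" the abstract leaves unspecified:

```python
# Illustrative sketch of deciding, per input audio stream, whether it
# gets its own virtual sound source or is folded into a shared mix.

def route_streams(streams):
    """streams: list of (stream_id, params) dicts. Streams whose
    (hypothetical) 'prioritized' parameter is set become individual
    virtual sources; the rest are mixed into a single virtual source."""
    individual, mix = [], []
    for stream_id, params in streams:
        if params.get("prioritized"):
            individual.append(stream_id)   # own virtual sound source
        else:
            mix.append(stream_id)          # folded into one shared source
    return individual, mix

solo, mixed = route_streams([
    ("s1", {"prioritized": True}),
    ("s2", {}),
    ("s3", {"prioritized": False}),
])
```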
Providing Digital Media with Spatial Audio to the Blockchain
Methods and apparatus provide digital media with spatial audio to a blockchain. The blockchain network executes a decentralized application (Dapp) with a user interface (UI) that enables a user to select audio for spatialization and uploading to the blockchain. The spatial audio is transmitted to the blockchain network to reduce processing and transmission of network data.
Ear-worn electronic device for conducting and monitoring mental exercises
An ear-worn electronic device includes a right ear device comprising a first processor and a left ear device comprising a second processor communicatively coupled to the first processor. A physiologic sensor module comprises one or more physiologic sensors configured to sense at least one physiologic parameter from a wearer. A motion sensor module comprises one or more sensors configured to sense movement of the wearer. The first and second processors are coupled to the physiologic and motion sensor modules. The first and second processors are configured to produce a three-dimensional virtual sound environment comprising relaxing sounds, generate verbal instructions within the three-dimensional virtual sound environment that guide the wearer through a predetermined mental exercise that promotes wearer relaxation, and generate verbal commentary that assesses wearer compliance with the predetermined mental exercise in response to one or both of the sensed movement and the at least one physiologic parameter.
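The compliance assessment might, in a greatly simplified form, combine the motion and physiologic readings like this; the thresholds and feedback messages are invented for illustration only:

```python
# Greatly simplified sketch of assessing wearer compliance with a
# relaxation exercise from motion and physiologic sensor readings.
# Thresholds and messages are illustrative assumptions.

def assess_compliance(motion_rms, heart_rate_bpm,
                      motion_limit=0.05, hr_limit=70):
    """The wearer is considered compliant (relaxed) when both movement
    and heart rate are low; otherwise return corrective commentary."""
    if motion_rms <= motion_limit and heart_rate_bpm <= hr_limit:
        return "compliant"
    if motion_rms > motion_limit:
        return "please remain still"
    return "try to breathe slowly"
```

In the device described, such commentary would be spoken inside the three-dimensional virtual sound environment rather than returned as a string.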
Audio apparatus and method of operation therefor
An audio apparatus, e.g. for rendering audio for a virtual/augmented reality application, comprises a receiver (201) for receiving audio data for an audio scene including a first audio component representing a real-world audio source present in an audio environment of a user. A determinator (203) determines a first property of a real-world audio component from the real-world audio source and a target processor (205) determines a target property for a combined audio component being a combination of the real-world audio component received by the user and rendered audio of the first audio component received by the user. An adjuster determines a render property by modifying a property of the first audio component indicated by the audio data for the first audio component in response to the target property and the first property. A renderer (209) renders the first audio component in response to the render property.
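One way to read the target-property adjustment is as a level-matching problem: choose a render gain so that the combined (incoherent) power of the real-world component and the rendered component meets a target level. This is an interpretive sketch under that assumption, not the patented algorithm:

```python
# Interpretive sketch: pick a gain for the rendered audio component so
# that real-world audio plus rendered audio reaches a target level,
# assuming incoherent (power) addition. Names are illustrative.

def render_gain(real_level, component_level, target_level):
    """Gain for the rendered component given the measured real-world
    level, the component's nominal level, and the target combined level."""
    needed = max(target_level ** 2 - real_level ** 2, 0.0)  # missing power
    if component_level == 0.0:
        return 0.0
    return needed ** 0.5 / component_level

g = render_gain(real_level=0.6, component_level=1.0, target_level=1.0)
# combined power: 0.6**2 + (g * 1.0)**2 == 1.0**2
```

When the real-world component already meets or exceeds the target, the gain drops to zero rather than going negative.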
Spatial audio processing
According to an example embodiment, a method is provided for processing a spatial audio signal that represents an audio scene, wherein the spatial audio signal is controllable and associated with at least two viewing directions. The method includes receiving a focus direction and a focus amount; processing the spatial audio signal by modifying the audio scene so as to control emphasis in, at least in part, a portion of the spatial audio signal in said focus direction according to said focus amount; and outputting the processed spatial audio signal, wherein the modified audio scene enables the emphasis in, at least in part, said portion of the spatial audio signal in said focus direction according to said focus amount.
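A minimal sketch of direction-dependent emphasis: per-source gains that rise toward the focus direction in proportion to the focus amount. The cosine weighting and function name are assumptions for illustration:

```python
import math

# Illustrative sketch: emphasize sources near a focus direction by an
# amount controlled by focus_amount (0 = no change, 1 = full emphasis).

def focus_gains(source_angles, focus_angle, focus_amount):
    """Per-source gains from angular distance to the focus direction."""
    gains = []
    for a in source_angles:
        diff = math.radians(abs((a - focus_angle + 180) % 360 - 180))
        alignment = 0.5 * (1.0 + math.cos(diff))  # 1 at focus, 0 opposite
        gains.append((1.0 - focus_amount) + focus_amount * alignment)
    return gains

g = focus_gains([0, 90, 180], focus_angle=0, focus_amount=1.0)
```

With `focus_amount` at zero the scene is untouched; raising it progressively attenuates sources away from the focus direction.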
Apparatus and method for processing volumetric audio
A method including receiving an audio scene including at least one source captured using at least one near-field microphone and at least one far-field microphone. The method includes determining at least one room impulse response (RIR) associated with the audio scene based on the at least one near-field microphone and the at least one far-field microphone, accessing a predetermined scene geometry corresponding to the audio scene, and identifying the best match to the predetermined scene geometry in a scene geometry database. The method also includes performing an RIR comparison based on the at least one RIR and at least one geometric RIR associated with the best-matching geometry, and rendering a volumetric audio scene based on a result of the RIR comparison.
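The geometry-matching step could be sketched as picking the database entry whose geometric RIR has the highest normalized correlation with the measured RIR. The similarity measure, names, and toy RIR vectors are assumed for illustration:

```python
# Illustrative sketch: find the scene geometry whose simulated RIR best
# matches the RIR measured from the near/far-field microphone pair.

def best_matching_geometry(measured_rir, geometry_rirs):
    """Return the geometry name with the highest normalized correlation."""
    def ncc(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    return max(geometry_rirs,
               key=lambda name: ncc(measured_rir, geometry_rirs[name]))

db = {"small_room": [1.0, 0.5, 0.25], "hall": [1.0, 0.9, 0.8]}
room = best_matching_geometry([1.0, 0.45, 0.3], db)
```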
Audio system for dynamic determination of personalized acoustic transfer functions
An eyewear device includes an audio system. In one embodiment, the audio system includes a microphone array that includes a plurality of acoustic sensors. Each acoustic sensor is configured to detect sounds within a local area surrounding the microphone array. For a plurality of the detected sounds, the audio system performs a direction of arrival (DoA) estimation. Based on parameters of the detected sound and/or the DoA estimation, the audio system may then generate or update one or more acoustic transfer functions unique to a user. The audio system may use the one or more acoustic transfer functions to generate audio content for the user.
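A two-sensor sketch of DoA estimation via time difference of arrival (TDOA): find the cross-correlation peak between two microphone signals, then convert the delay to an angle. The brute-force correlation and all parameter values are illustrative, not the system's actual estimator:

```python
import math

# Illustrative two-microphone DoA estimate: cross-correlate the signals
# to find the inter-microphone delay, then convert delay to an angle.

def tdoa(sig_a, sig_b, sample_rate):
    """Lag (seconds) maximizing cross-correlation between the signals."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        val = sum(sig_a[i] * sig_b[i - lag]
                  for i in range(max(0, lag), min(n, n + lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag / sample_rate

def doa_from_tdoa(delay_s, mic_spacing_m, speed_of_sound=343.0):
    """Direction of arrival (degrees from broadside) for a sensor pair."""
    s = max(-1.0, min(1.0, delay_s * speed_of_sound / mic_spacing_m))
    return math.degrees(math.asin(s))

# Toy impulse signals: sig_a lags sig_b by one sample.
delay = tdoa([0, 0, 1, 0, 0], [0, 1, 0, 0, 0], sample_rate=8000)
angle = doa_from_tdoa(delay, mic_spacing_m=0.1)
```

Production systems typically use frequency-domain methods (e.g. GCC-PHAT) over many sensor pairs, but the delay-to-angle geometry is the same.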