Patent classifications
H04S2400/15
Extrapolation of acoustic parameters from mapping server
Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces, and a location in the virtual model corresponds to a physical location of the headset. The location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
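The lookup described above can be sketched minimally, assuming (hypothetically) that the mapping server's virtual model is a table of modeled spaces, each with a centroid position and a set of acoustic parameters (the parameter names `rt60_s` and `drr_db` are illustrative, not from the patent), and that the headset's position is matched to the nearest modeled space:

```python
import math

# Hypothetical virtual model: each modeled space has a centroid (x, y, z)
# and a set of acoustic parameters (names are illustrative only).
VIRTUAL_MODEL = {
    "conference_room": {"centroid": (0.0, 0.0, 0.0),
                        "params": {"rt60_s": 0.45, "drr_db": 8.0}},
    "atrium":          {"centroid": (25.0, 0.0, 0.0),
                        "params": {"rt60_s": 1.80, "drr_db": 2.0}},
}

def lookup_acoustic_params(headset_position):
    """Return the modeled space whose centroid is closest to the
    headset's physical position, along with its acoustic parameters."""
    def dist(space):
        return math.dist(headset_position, VIRTUAL_MODEL[space]["centroid"])
    nearest = min(VIRTUAL_MODEL, key=dist)
    return nearest, VIRTUAL_MODEL[nearest]["params"]

# A headset reporting a position near the origin resolves to the room there.
space, params = lookup_acoustic_params((1.0, 0.5, 0.0))
```

A real system would localize the headset from sensor data rather than take a coordinate directly; this only illustrates the model-to-parameters mapping.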
Stereophonic apparatus for blind and visually-impaired people
A method and a wearable system that includes distance sensors, cameras, and headsets, all of which gather data about a blind or visually impaired person's surroundings and are connected to a portable personal communication device. The device is configured to use scenario-based algorithms and AI to process the data and to transmit sound instructions to the blind or visually impaired person, enabling him or her to independently navigate and deal with the environment through identification of objects and reading of local text.
Own voice reinforcement using extra-aural speakers
A system includes an audio source device having a first microphone and a first speaker for directing sound into an environment in which the audio source device is located, and a wireless audio receiver device having a second microphone and a second speaker for directing sound into a user's ear. The audio source device is configured to 1) capture, using the first microphone, speech of the user as a first audio signal, 2) reduce noise in the first audio signal to produce a speech signal, and 3) drive the first speaker with the speech signal. The wireless audio receiver device is configured to 1) capture, using the second microphone, a reproduction of the speech produced by the first speaker as a second audio signal and 2) drive the second speaker with the second audio signal to output the reproduction of the speech.
Electronic system for producing a coordinated output using wireless localization of multiple portable electronic devices
Device localization (e.g., ultra-wideband device localization) may be used to provide coordinated outputs and/or receive coordinated inputs using multiple devices. Providing coordinated outputs may include providing partial outputs using multiple devices, modifying an output of a device based on its position and/or orientation relative to another device, and the like. In some cases, each device of a set of multiple devices may provide a partial output, which combines with partial outputs of the remaining devices to produce a coordinated output.
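The partial-output coordination described above can be sketched as follows, under the assumption (not from the patent) that each device's position is already known from ranging, and that the coordinated output is a stereo stream split so the leftmost device renders the left channel and the rightmost renders the right:

```python
def assign_partial_outputs(devices):
    """Toy coordination: given devices with known x-positions (e.g., from
    ultra-wideband ranging), assign the left stereo channel to the
    leftmost device and the right channel to the rightmost. The field
    names 'id' and 'x' are illustrative."""
    ordered = sorted(devices, key=lambda d: d["x"])
    return {ordered[0]["id"]: "left", ordered[-1]["id"]: "right"}

# Two localized devices: the tablet sits to the user's left, the phone
# to the right, so each renders one half of the coordinated output.
roles = assign_partial_outputs([{"id": "phone", "x": 1.2},
                                {"id": "tablet", "x": -0.4}])
```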
TRAINING DATA EXTENSION APPARATUS, TRAINING DATA EXTENSION METHOD, AND PROGRAM
An input of a first observation signal corresponding to an incoming signal from a first direction is received; an angular rotation operation is performed on the first observation signal to obtain a second observation signal corresponding to an incoming signal from a second direction that is different from the first direction; and the second observation signal is added to a set of training data.
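One concrete way such an angular rotation can be realized, assuming (as an illustration, not the patent's stated method) a uniform circular microphone array, is that rotating the incoming direction by a multiple of the inter-microphone angle is equivalent to a circular shift of the channels. A minimal sketch of augmenting a direction-labelled training set this way:

```python
import numpy as np

def rotate_observation(obs, k):
    """For an M-channel uniform circular array, rotating the incoming
    direction by k * (360/M) degrees corresponds to a circular shift of
    the channels. obs has shape (channels, samples)."""
    return np.roll(obs, shift=k, axis=0)

def extend_training_set(training_set, obs, angle_deg, num_mics=8):
    """Append rotated copies of `obs`, each labelled with its rotated
    direction-of-arrival, to the training set."""
    step = 360.0 / num_mics
    for k in range(1, num_mics):
        new_angle = (angle_deg + k * step) % 360.0
        training_set.append((rotate_observation(obs, k), new_angle))
    return training_set

# One 8-channel observation from 0 degrees yields 7 additional examples.
rng = np.random.default_rng(0)
obs = rng.standard_normal((8, 160))
data = extend_training_set([(obs, 0.0)], obs, 0.0)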
IMMERSIVE AUDIO PLATFORM
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
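The spatialization step, which combines the decoded stream, the head-pose sensor input, and the virtual speaker position, can be sketched with a toy amplitude panner (a deliberate simplification; a real renderer would use HRTFs, and the function and parameter names here are assumptions):

```python
import math

def spatialize(decoded_frame, head_yaw_deg, speaker_azimuth_deg):
    """Toy spatializer: pan a mono frame to stereo based on the virtual
    speaker's azimuth relative to the wearer's head yaw. This only
    illustrates how the decoded audio, sensor input (head pose), and
    virtual speaker position combine into a spatialized stream."""
    rel = math.radians(speaker_azimuth_deg - head_yaw_deg)
    left = 0.5 * (1.0 - math.sin(rel))
    right = 0.5 * (1.0 + math.sin(rel))
    return [(s * left, s * right) for s in decoded_frame]

# A virtual speaker 90 degrees to the right of the head pans fully right.
stereo = spatialize([1.0, 0.5], head_yaw_deg=0.0, speaker_azimuth_deg=90.0)
```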
CONVEYING MOTION DATA VIA MEDIA PACKETS
A device includes a memory configured to store instructions and one or more processors configured to execute the instructions to receive a media packet and to determine, based on a field of the media packet, whether the media packet includes motion data. The one or more processors are also configured to execute the instructions to, based on the media packet including motion data, extract the motion data from the media packet.
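The flag-check-then-extract flow described above can be sketched for a hypothetical packet layout (the layout, flag bit, and motion fields below are assumptions for illustration, not the patent's actual format): a one-byte flags field whose low bit indicates that three little-endian floats of motion data precede the media payload.

```python
import struct

# Hypothetical layout: 1-byte flags field, optional motion data
# (three little-endian float32 values: yaw, pitch, roll), then payload.
MOTION_FLAG = 0x01

def extract_motion(packet: bytes):
    """Return (motion, payload); motion is None when the flag is clear."""
    flags = packet[0]
    if flags & MOTION_FLAG:
        motion = struct.unpack_from("<3f", packet, 1)
        return motion, packet[1 + 12:]
    return None, packet[1:]

# Build a packet carrying motion data, then parse it back out.
pkt = bytes([MOTION_FLAG]) + struct.pack("<3f", 10.0, -5.0, 0.0) + b"media"
motion, payload = extract_motion(pkt)
```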
SYSTEM AND METHOD FOR AUTOMATICALLY TUNING DIGITAL SIGNAL PROCESSING CONFIGURATIONS FOR AN AUDIO SYSTEM
Embodiments include a processing device communicatively coupled to a plurality of audio devices comprising at least one microphone and at least one speaker, and to a digital signal processing (DSP) component having a plurality of audio input channels for receiving audio signals captured by the at least one microphone. The processing device is configured to identify one or more of the audio devices based on a unique identifier associated with each of said one or more audio devices; obtain device information from each identified audio device; and adjust one or more settings of the DSP component based on the device information. A computer-implemented method of automatically configuring an audio conferencing system, comprising a digital signal processing (DSP) component and a plurality of audio devices including at least one speaker and at least one microphone, is also provided.
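The identify-then-adjust loop can be sketched as below, assuming (hypothetically) a registry that maps device identifiers to per-device DSP settings; the identifiers and setting names are illustrative, not from the patent:

```python
# Hypothetical registry of per-device DSP profiles, keyed by the
# device's unique identifier.
DEVICE_PROFILES = {
    "MIC-ARRAY-01": {"input_gain_db": 12, "aec": True},
    "SPEAKER-02":   {"output_gain_db": -3},
}

def auto_configure(dsp_settings, discovered_ids):
    """Apply the stored profile of every identified device to the DSP
    settings; devices without a known profile are left at defaults."""
    for dev_id in discovered_ids:
        profile = DEVICE_PROFILES.get(dev_id)
        if profile:
            dsp_settings[dev_id] = dict(profile)
    return dsp_settings

# Only the recognized device contributes settings; the unknown one is skipped.
settings = auto_configure({}, ["MIC-ARRAY-01", "UNKNOWN-99"])
```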
METHOD AND APPARATUS FOR ESTIMATING SPATIAL CONTENT OF SOUNDFIELD AT DESIRED LOCATION
In general, the present embodiments relate to a method and apparatus for estimating the spatial content of a soundfield at a desired location, including a location at which the actual sound content is obstructed or distorted. According to certain aspects, the present embodiments aim at presenting more natural, spatially accurate sound, for example to a user at the desired location who is wearing a helmet, mimicking the sound the user would experience if they were not wearing any headgear. Modes for enhanced spatial hearing may be applied, including situation-dependent processing for augmented hearing. According to other aspects, methods and apparatuses record or capture the sound experienced by a number of participants and devices on and near the field of play, analyze the captured sound for its various components and their associated spatial content, and make those components available to participants and spectators.
Audio processing
A method for rendering a spatial audio signal that represents a sound field in a selectable-viewpoint audio environment that includes one or more audio objects, each associated with respective audio content and a respective position in the audio environment. The method includes receiving an indication of a selected listening position and orientation in the audio environment; detecting an interaction concerning a first audio object on the basis of one or more predefined interaction criteria; modifying the first audio object and one or more further audio objects linked thereto; and deriving the spatial audio signal, which includes at least audio content associated with the modified first audio object in a first spatial position of the sound field that corresponds to its position in the audio environment in relation to said selected listening position and orientation, and audio content associated with the modified one or more further audio objects.