Patent classifications
H04S7/40
Virtual simulation of spatial audio characteristics
Embodiments of the present invention are directed to a system and method for demonstrating the spatial performance of a demonstration speaker model to consumers, enabling them to evaluate different speakers. The system and method comprise a microphone array for recording the output of the demonstration speaker model, acoustic input samples for processing into an acoustic output, and a processor for determining characteristics of each microphone recording and for processing an acoustic input sample together with the characteristics of each microphone recording corresponding to a selected demonstration speaker model. The system and method further comprise a reference speaker model for outputting an acoustic signal based on the result of the processing. The processing compensates for the performance characteristics of the reference speaker and of the selected demonstration speaker so as to mimic the spatial characteristics of the demonstration speaker while avoiding bias from the reference speaker.
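The compensation described above can be pictured as a per-band gain correction: boost the reference speaker where the demonstration model is louder, cut it where the model is quieter. A minimal sketch under that reading (the band layout, function names, and dB values are illustrative, not from the patent):

```python
def compensation_gains(demo_response_db, ref_response_db):
    """Per-band gains (dB) that make the reference speaker mimic the
    demonstration speaker: the difference between the two measured
    magnitude responses, band by band."""
    return [d - r for d, r in zip(demo_response_db, ref_response_db)]

def apply_gains(band_levels_db, gains_db):
    """Apply the compensation gains to an acoustic input sample's
    per-band levels (dB) before playback on the reference speaker."""
    return [lvl + g for lvl, g in zip(band_levels_db, gains_db)]

# Hypothetical measured magnitude responses (dB) in four octave bands.
demo = [0.0, -1.0, -3.0, -6.0]   # demonstration speaker model
ref  = [0.0,  0.0, -1.0, -2.0]   # reference speaker

gains = compensation_gains(demo, ref)   # [0.0, -1.0, -2.0, -4.0]
```

Subtracting the reference response removes its bias; adding the demonstration response imposes the selected model's character, which is the net effect the abstract describes.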
Augmented reality microphone pick-up pattern visualization
Augmented reality visual displays of microphone pick-up patterns are disclosed. An example method includes capturing, via a camera of a computing device, an image of a microphone, and displaying the image on a display of the computing device. The method also includes determining, by the computing device, a location and orientation of the microphone relative to the camera, determining one or more parameters of a pick-up pattern of the microphone, determining a visual representation of the pick-up pattern based on the one or more parameters, and displaying the visual representation of the pick-up pattern overlaid on the image of the microphone.
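One plausible parameterization of the pick-up pattern is the standard first-order polar equation, with a single parameter selecting omni, cardioid, or figure-8 behaviour; the overlay points could then be computed like this (a sketch only, the function name and sampling are assumptions):

```python
import math

def pickup_pattern(a, num_points=360):
    """First-order microphone polar pattern r(theta) = |a + (1 - a)*cos(theta)|.
    a = 1.0 -> omnidirectional, a = 0.5 -> cardioid, a = 0.0 -> figure-8.
    Returns (x, y) outline points for overlaying on the microphone image."""
    points = []
    for i in range(num_points):
        theta = 2 * math.pi * i / num_points
        r = abs(a + (1 - a) * math.cos(theta))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

cardioid = pickup_pattern(0.5)
# On-axis (theta = 0) the cardioid has full sensitivity; at theta = pi
# the radius collapses to zero, giving the characteristic rear null.
```

In an AR overlay these points would still need rotating and scaling by the microphone's estimated pose, which the abstract covers separately.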
RENDERING REVERBERATION
An apparatus comprising means configured to: obtain at least one impulse response; obtain at least one reflection filter based on the obtained at least one impulse response, wherein the at least one reflection filter is configured to determine at least one early reflection from an acoustic surface which is not overlapped in time by any other reflection, wherein a duration of the at least one early reflection is shorter than a duration of the obtained at least one impulse response. In addition, an apparatus comprising means configured to: obtain at least one impulse response, wherein the at least one impulse response is configured with a perceivable timbre during rendering; create a timbral modification filter; obtain at least one audio signal; and render at least one output audio signal based on the at least one audio signal, wherein the at least one output signal is based on an application of the timbral modification filter.
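Finding an early reflection "not overlapped in time by any other reflection" can be illustrated as isolating impulse-response peaks whose neighbours are all far enough away in time. A minimal sketch, with threshold and gap values chosen for illustration (the patent does not specify this procedure):

```python
def isolated_reflections(impulse_response, threshold, min_gap):
    """Indices of reflections (samples at or above threshold) that have no
    other reflection within min_gap samples, i.e. reflections not
    overlapped in time by any other reflection."""
    peaks = [i for i, x in enumerate(impulse_response) if abs(x) >= threshold]
    isolated = []
    for p in peaks:
        if all(abs(p - q) >= min_gap for q in peaks if q != p):
            isolated.append(p)
    return isolated

# Toy impulse response: direct sound at 0, a clean reflection at 4,
# and two overlapping reflections at 10 and 11.
ir = [1.0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0.3, 0.2, 0, 0]
isolated_reflections(ir, 0.1, 3)   # -> [0, 4]
```

Index 0 here is the direct sound; a real implementation would exclude it and consider only the reflective tail when building the reflection filter.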
METHODS AND SYSTEMS FOR MANIPULATING AUDIO PROPERTIES OF OBJECTS
In one implementation, a method of changing an audio property of an object is performed at a device including one or more processors coupled to non-transitory memory. The method includes displaying, using a display, a representation of a scene including a representation of an object associated with an audio property. The method includes displaying, using the display, in association with the representation of the object, a manipulator indicating a value of the audio property. The method includes receiving, using one or more input devices, a user input interacting with the manipulator. The method includes, in response to receiving the user input, changing the value of the audio property based on the user input and displaying, using the display, the manipulator indicating the changed value of the audio property.
ACOUSTIC NEURAL NETWORK SCENE DETECTION
An acoustic environment identification system is disclosed that can use neural networks to accurately identify environments. The acoustic environment identification system can use one or more convolutional neural networks to generate audio feature data. A recursive neural network can process the audio feature data to generate characterization data. The characterization data can be modified using a weighting system that weights signature data items. Classification neural networks can be used to generate a classification of an environment.
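With the neural networks abstracted away, the weighting and classification stages described above can be pictured as scaling characterization values by per-signature weights and then scoring candidate environments. This is only a schematic stand-in (all names, weights, and score vectors are invented for illustration; the patent uses learned networks, not dot products):

```python
def weight_signatures(characterization, signature_weights):
    """Scale each characterization value by the weight assigned to the
    signature data item it corresponds to."""
    return [c * w for c, w in zip(characterization, signature_weights)]

def classify(weighted, class_scores):
    """Pick the environment whose score vector best matches the weighted
    characterization data (highest dot product)."""
    return max(class_scores, key=lambda name: sum(
        a * b for a, b in zip(weighted, class_scores[name])))

features = [0.4, 0.1, 0.9]                 # characterization data (illustrative)
weights  = [1.0, 0.5, 2.0]                 # signature-item weights
scores   = {"street": [1, 0, 0], "hall": [0, 0, 1]}
classify(weight_signatures(features, weights), scores)   # -> "hall"
```

The effect the sketch captures is that the weighting system lets some signature items dominate the final environment classification.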
SYSTEMS AND METHODS FOR AN IMMERSIVE AUDIO EXPERIENCE
A computer-implemented method for creating an immersive audio experience. The method includes receiving a selection of an audio track via a user interface, and receiving audio track metadata for the audio track. The method includes querying an audio database based on the track metadata and determining that audio data for the audio track is not stored on the audio database. The method includes analyzing the audio track to determine one or more audio track characteristics. The method includes generating vibe data based on the one or more audio track characteristics, wherein the vibe data includes time-coded metadata. The method includes generating, based on the vibe data, visualization instructions for one or more A/V devices in communication with a user computing device, and transmitting the generated visualization instructions and the audio track to the user computing device.
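The "vibe data" step can be sketched as turning per-segment track characteristics into time-coded cues a lighting or A/V device could follow. The field names and the mapping below are assumptions for illustration, not the patent's schema:

```python
def generate_vibe_data(track_characteristics):
    """Turn per-segment audio characteristics into time-coded metadata
    that downstream A/V devices can interpret, e.g. as lighting cues."""
    vibe = []
    for segment in track_characteristics:
        vibe.append({
            "time": segment["start"],                  # time code (seconds)
            "intensity": min(1.0, segment["energy"]),  # clamp to [0, 1]
            "pulse_hz": segment["tempo_bpm"] / 60.0,   # beat-synchronous pulse
        })
    return vibe

characteristics = [
    {"start": 0.0,  "energy": 0.3, "tempo_bpm": 120},
    {"start": 30.0, "energy": 0.8, "tempo_bpm": 128},
]
generate_vibe_data(characteristics)
```

Keeping the metadata time-coded is what lets the visualization instructions stay synchronized with the transmitted audio track on the user computing device.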
AUDIO LEVEL METERING FOR LISTENER POSITION AND OBJECT POSITION
Playback of an audio signal is simulated from a playback position to a listening position. The simulation is performed with respect to a model of a listening area. The resulting loudness of the audio, perceived at the listening position, is rendered to a display. Other aspects are described and claimed.
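A simple free-field version of this simulation is distance attenuation between the playback position and the listening position (6 dB per doubling of distance); a real model of a listening area would add room effects, but the core level estimate might look like the following sketch (function name and reference distance are assumptions):

```python
import math

def perceived_level_db(source_level_db, source_pos, listener_pos, ref_dist=1.0):
    """Free-field estimate of the level perceived at the listening position:
    source level at ref_dist, minus 20*log10 of the distance ratio."""
    d = math.dist(source_pos, listener_pos)
    d = max(d, ref_dist)          # clamp inside the reference radius
    return source_level_db - 20.0 * math.log10(d / ref_dist)

# Speaker at the front of the room, listener 4 m away:
perceived_level_db(85.0, (0.0, 0.0), (0.0, 4.0))   # 85 - 20*log10(4), about 72.96 dB
```

The resulting value is what the metering display would render for the chosen listener and object positions.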
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
An information processing method receives settings of a plurality of pieces of information on a plurality of physical spaces that respectively correspond to a plurality of pieces of information on a plurality of logical spaces, receives a first group of a plurality of pieces of first acoustic image localization information that indicates a position of an acoustic image to be localized in each of the plurality of logical spaces using first coordinates in the plurality of logical spaces, receives a change in one piece of first acoustic image localization information, among the first group of the plurality of pieces of first acoustic image localization information, changes other pieces of first acoustic image localization information, among the first group of the plurality of pieces of first acoustic image localization information, in response to the received change in the one piece of first acoustic image localization information, and transforms the first group of the plurality of pieces of first acoustic image localization information respectively into a plurality of pieces of second acoustic image localization information using second coordinates in the plurality of physical spaces.
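The final transform, from first coordinates in a logical space to second coordinates in the corresponding physical space, can be sketched as a per-axis normalize-and-rescale (the axis-aligned mapping and all parameter names are illustrative assumptions):

```python
def to_physical(logical_point, logical_size, physical_origin, physical_size):
    """Map first coordinates in a logical space onto second coordinates in
    the corresponding physical space by normalizing each axis to [0, 1]
    and rescaling it into the physical space's extent."""
    return tuple(
        o + (p / ls) * ps
        for p, ls, o, ps in zip(logical_point, logical_size,
                                physical_origin, physical_size))

# Acoustic image at the centre of a 100 x 100 logical space, mapped into a
# 6 m x 4 m physical room whose origin is at (1, 1):
to_physical((50, 50), (100, 100), (1.0, 1.0), (6.0, 4.0))   # -> (4.0, 3.0)
```

Applying the same transform to every piece of first localization information in the group yields the second group, which is why a change propagated in logical coordinates carries over consistently to the physical spaces.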
Service for targeted crowd sourced audio for virtual interaction
An audio generation system is provided to enable coordinated control of multiple IoT devices for audio collection and distribution of one or more audio sources according to location and user preference. The audio generation system enables location-sensitive acoustic control of sound, both as a shaped envelope for a particular source and as an individualized experience. The audio generation system also facilitates an interactive visual system for visualization and manipulation of the audio environment, including via the use of augmented reality and/or virtual reality to depict soundscapes. The audio generation system can also facilitate achieving and improving a desired audio environment (sound influence zone) and provides an intuitive way to understand where sounds will be heard.
DIGITAL TWIN FOR MICROPHONE ARRAY SYSTEM
One example includes a digital twin of a microphone array. The digital twin acts as a digital copy of a physical microphone array, allowing the microphone array to be analyzed, simulated, and optimized. Further, the microphone array can be optimized for sound quality operations such as noise suppression and speech intelligibility enhancement.
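One thing such a digital twin could simulate is the array's far-field directivity, e.g. the delay-and-sum response of a uniform line array, which is the kind of quantity one would optimize before touching the physical hardware. A minimal sketch (geometry, frequency, and function name are illustrative assumptions):

```python
import math

def array_response(mic_positions, angle_deg, freq_hz, c=343.0):
    """Simulated far-field response magnitude of a line array summed without
    steering delays (broadside look direction), for a plane wave arriving
    from angle_deg off broadside. Positions in metres along the array axis."""
    theta = math.radians(angle_deg)
    k = 2 * math.pi * freq_hz / c          # wavenumber
    re = sum(math.cos(k * x * math.sin(theta)) for x in mic_positions)
    im = sum(math.sin(k * x * math.sin(theta)) for x in mic_positions)
    return math.hypot(re, im) / len(mic_positions)

mics = [i * 0.05 for i in range(4)]        # 4 microphones, 5 cm spacing
array_response(mics, 0.0, 1000.0)          # broadside: 1.0 (full gain)
```

Sweeping angle and frequency over such a model is how the twin could predict, and help optimize, the physical array's behaviour for noise suppression.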