Patent classifications
H04R29/008
DJ stem systems and methods
Systems and methods selectively mix a first and second song together during a live performance. The first song has a plurality of first stems, each having stereo audio, that combine to form the audio of the first song. The second song has a plurality of second stems, each having stereo audio, that combine to form the audio of the second song. A computer, with memory and a processor, executes machine readable instructions of a multiple channel audio mixing application stored within the memory. The multiple channel audio mixing application plays and mixes audio of at least one of the first stems with audio of at least one of the second stems. The multiple channel audio mixing application is controlled in real-time during the performance to select the at least one first stem and the at least one second stem for the mixing.
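As a hypothetical illustration of the stem-mixing idea (the names, data model, and gain choices below are assumptions, not taken from the patent), one stem selected from each song can be summed sample-by-sample with per-stem gains:

```python
def mix_stems(stem_a, stem_b, gain_a=0.5, gain_b=0.5):
    """Sum two stereo stems (lists of (left, right) sample pairs)
    sample-by-sample, applying a gain to each stem."""
    return [(gain_a * la + gain_b * lb, gain_a * ra + gain_b * rb)
            for (la, ra), (lb, rb) in zip(stem_a, stem_b)]

# A song is modeled as a dict of named stems; the performer selects
# one stem from each song in real time.
song1 = {"vocals": [(0.2, 0.2), (0.4, 0.4)], "drums": [(0.1, 0.0), (0.0, 0.1)]}
song2 = {"bass": [(0.6, 0.6), (0.2, 0.2)], "synth": [(0.3, 0.3), (0.3, 0.3)]}
mixed = mix_stems(song1["vocals"], song2["bass"])
```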
METHOD AND APPARATUS FOR OBJECTIVE ASSESSMENT OF IN-EAR DEVICE ACOUSTICAL PERFORMANCE
A method and apparatus for objectively assessing acoustical performance of an in-ear device having a passageway extending therethrough use a dual microphone probe that removably engages the passageway. The assessment of the in-ear device's acoustical performance is performed with the in-ear device inserted into the ear canal of the user and with a reference sound source. A clip holding the probe in an acoustic near field of the sound source permits real-time calibration thereof. The method and apparatus allow on-site and in-situ measurement of a predicted personal attenuation rating of the device, a subject-fit re-insertion test, an acoustic seal test, a rating test, a stability and reliability test, as well as a protection test of the device with an assessment of a filtered predicted exposure level at the ear for a specific noise exposure level. The apparatus may be simply housed along with the sound source for in-field evaluation tests.
Environmental acoustic dosimetry with water event detection
In-ear sound pressure level, SPL, is determined that is caused by output audio being converted into sound by a headset worn by a user. The in-ear SPL is converted into a sound sample having units suitable for evaluating noise exposure. These operations are repeated to produce a sequence of sound samples during playback. This sequence of sound samples is then written to a secure database. Access to the database is authorized by the user. Other aspects are also described and claimed.
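One conventional way to collapse such a sequence of SPL samples into a single exposure figure (the abstract does not specify this particular formula) is the equivalent continuous level, an energy average in decibels:

```python
import math

def equivalent_level(spl_samples_db):
    """Energy-average a sequence of SPL samples (dB) into one Leq value:
    Leq = 10 * log10(mean of 10^(L/10))."""
    mean_power = sum(10 ** (level / 10) for level in spl_samples_db) / len(spl_samples_db)
    return 10 * math.log10(mean_power)

# Louder samples dominate the energy average, so Leq sits above the
# arithmetic mean of the dB values.
leq = equivalent_level([78.0, 80.0, 82.0])
```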
Updating Playback Device Configuration Information Based on Calibration Data
A computing device may transmit playback device configuration information to a given playback device. The computing device may then receive calibration data corresponding to each playback device of a plurality of playback devices, and receive playback device characteristic data indicating at least one playback device characteristic for each playback device of the plurality of playback devices. Based on at least the received calibration data and the received playback device characteristic data, the computing device may determine updated playback device configuration information and transmit data indicating the updated playback device configuration information to the given playback device.
Systems and methods for detecting degradation of a microphone included in an auditory prosthesis system
An exemplary system includes a sound processor associated with a patient, a first microphone communicatively coupled to the sound processor and configured to detect an audio signal presented to the patient and output a first output signal representative of the audio signal, and a second microphone communicatively coupled to the sound processor and configured to detect the audio signal presented to the patient and output a second output signal representative of the audio signal. The sound processor is configured to 1) receive the first and second output signals, 2) determine that a difference between the first and second output signals meets a threshold condition, and 3) perform, in response to the determination that the difference between the first and second output signals meets the threshold condition, a predetermined action associated with the quality level of the first microphone. Corresponding systems and methods are also disclosed.
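A minimal sketch of the threshold comparison described above (the RMS-level metric and the 6 dB threshold are assumptions for illustration, not taken from the patent):

```python
import math

def rms(signal):
    """Root-mean-square level of a signal frame."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def microphone_mismatch(first_out, second_out, threshold_db=6.0):
    """Return True when the level difference between the two microphone
    output signals meets the threshold condition."""
    diff_db = abs(20 * math.log10(rms(first_out) / rms(second_out)))
    return diff_db >= threshold_db

# Both microphones hear the same audio: no mismatch.
healthy = microphone_mismatch([0.5, -0.5, 0.5, -0.5], [0.5, -0.5, 0.5, -0.5])
# Second microphone's output is 20 dB low: mismatch detected.
degraded = microphone_mismatch([0.5, -0.5, 0.5, -0.5], [0.05, -0.05, 0.05, -0.05])
```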
DISPLAY SYSTEM OF MOBILE AUDIO DEVICES
The object is to provide a display system for mobile audio devices that enables comparative display between the frequency range of an audio source read externally and stored in the main body of the mobile audio device, and the frequency range replayed by the main body of the mobile audio device.
The display system of audio devices comprises an audio source file 18 storing external audio source data, a replay unit 14 replaying audio source data of the audio source file 18, a device data memory 16 holding data on the replay devices, a controller 15 controlling the circuits, a sampling rate output 21 for the audio source data of the audio source file 18, a replay sampling rate output 23 outputting the replay sampling rates, an audio source output display part 12 displaying the sampling rate output 21, and a replay output display part 13 displaying the replay sampling rate output 23.
Playback Device Calibration User Interfaces
Examples described herein involve providing playback device calibration user interfaces to guide a calibration process for one or more playback devices in a playback environment. In one example, a network device receives audio samples continuously from a microphone of the network device for a predetermined duration of time, wherein the predetermined duration of time comprises a plurality of periodic time increments. At each time increment within the predetermined duration of time, the network device dynamically updates, on a graphical display of the network device, (i) a representation of a frequency response based on audio samples that have been received between the beginning of the predetermined duration of time and the respective time increment, and (ii) a representation of the respective time increment relative to the predetermined duration of time.
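The per-increment update loop might be structured as follows (a sketch under assumed names; the actual interface and spectrum computation are not specified in the abstract):

```python
def calibration_ticks(samples, duration_s, increment_s):
    """Yield, at each time increment, the audio samples received so far and
    the progress fraction -- the two quantities the display re-renders."""
    ticks = int(duration_s / increment_s)
    per_tick = len(samples) // ticks
    for tick in range(1, ticks + 1):
        received = samples[: tick * per_tick]
        yield received, tick * increment_s / duration_s

# 100 samples arriving over 10 s, redrawn every 2 s -> 5 display updates.
updates = list(calibration_ticks(list(range(100)), duration_s=10.0, increment_s=2.0))
```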
Audio calibration and adjustment
The subject disclosure is directed towards calibrating sound pressure levels of speakers to determine desired attenuation data for use in later playback. A user may be guided to a calibration location to place a microphone, and each speaker is calibrated to output a desired sound pressure level in its current acoustic environment based upon the attenuation data learned during calibration. During playback, the attenuation data is used. Also described is testing the setup of the speakers, and dynamically adjusting the attenuation data in real time based upon tracking the listener's current location.
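The per-speaker attenuation learned during such a calibration can be modeled as the difference between the measured level and the desired level (a simplified illustration, not the patent's stated method):

```python
def attenuation_db(measured_spl_db, target_spl_db):
    """Attenuation to apply so a speaker's output at the calibration
    location matches the desired sound pressure level. Clamped at zero,
    i.e. calibration only attenuates (a simplifying assumption)."""
    return max(0.0, measured_spl_db - target_spl_db)

# Louder speakers receive more attenuation so all match a 75 dB target.
per_speaker = {name: attenuation_db(spl, 75.0)
               for name, spl in {"left": 78.0, "right": 75.0, "sub": 81.0}.items()}
```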
Audio user interaction recognition and context refinement
A system that tracks a social interaction between a plurality of participants includes a fixed beamformer adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs.
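The similarity between the fixed beamformer's output and each steerable beamformer's output could, for example, be a normalized cross-correlation (the abstract does not name a specific metric; this one is an assumption):

```python
import math

def similarity(fixed_out, steered_out):
    """Cosine similarity between two spatially filtered output frames;
    values near 1.0 suggest the fixed beam and the steered beam are
    capturing the same participant."""
    dot = sum(a * b for a, b in zip(fixed_out, steered_out))
    norm = (math.sqrt(sum(a * a for a in fixed_out))
            * math.sqrt(sum(b * b for b in steered_out)))
    return dot / norm

same = similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])        # identical frames
different = similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # orthogonal frames
```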
Audio effectiveness heatmap
An audio system can be configured to generate an audio heatmap for the audio emission potential profiles of one or more speakers, in specific or arbitrary locations. The audio heatmap may be based on speaker location and orientation, speaker acoustic properties, and optionally environmental properties. The audio heatmap often shows areas of low sound density when there are few speakers, and areas of high sound density when there are many speakers. An audio system may be configured to normalize audio signals for a set of speakers that cooperatively emit sound to render an audio object in a defined audio object location. The audio signals for each speaker can be normalized to ensure accurate rendering of the audio object without volume spikes or dropout.
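The normalization step can be sketched as constant-power scaling of per-speaker gains (an assumed formulation; the abstract only states the goal of avoiding volume spikes or dropout):

```python
import math

def normalize_gains(gains):
    """Scale per-speaker gains so their squared sum is 1, keeping the total
    rendered power constant however many speakers share the audio object."""
    norm = math.sqrt(sum(g * g for g in gains))
    return [g / norm for g in gains] if norm > 0 else gains

# The same audio object rendered by 2 speakers or 8 speakers carries
# the same total power after normalization.
few = normalize_gains([1.0, 1.0])
many = normalize_gains([1.0] * 8)
```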