Patent classifications
H04R2227/007
SOUND DIFFUSION SYSTEM EMBEDDED IN A RAILWAY VEHICLE AND ASSOCIATED VEHICLE, METHOD AND COMPUTER PROGRAM
The invention relates to a sound diffusion system embedded in a railway vehicle and comprising: a plurality of groups of speakers distributed in the cars, each group of speakers being located in a respective diffusion zone of the railway vehicle; a speaker control device configured to broadcast generic sound signals via the different groups of speakers; and at least one reception device, each reception device being associated with a single group of speakers and able to receive a control signal from a control device outside the railway vehicle. The onboard control device is configured, upon reception of a control signal by one of the reception devices, to broadcast a specific sound signal solely via the associated group of speakers.
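The zone-selective routing described above can be sketched as follows; the function name, zone identifiers, and message values are illustrative, not taken from the patent:

```python
def route_announcements(zones, control_signals, generic_msg, specific_msg):
    """Route the generic sound signal to every diffusion zone, but the
    specific sound signal only to zones whose reception device has
    received a control signal."""
    return {
        zone: specific_msg if control_signals.get(zone) else generic_msg
        for zone in zones
    }
```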
Audio cancellation for voice recognition
An audio cancellation system includes a voice-enabled computing system that is connected to an audio output device over a wired or wireless communication network. The voice-enabled computing system can provide media content to a user and receive a voice command from the user. The connection between the voice-enabled computing system and the audio output device introduces a time delay between the media content being generated at the voice-enabled computing system and the media content being reproduced at the audio output device. The system determines a calibration value adapted to the voice-enabled computing system and the audio output device, and uses that calibration value to filter the user's voice command from a recording of ambient sound that includes the media content, without requiring significant memory or computing resources.
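A minimal sketch of the delay-calibration and cancellation idea, assuming a cross-correlation-based delay estimate (the function names and the correlation approach are illustrative choices; the patent does not specify this method):

```python
import numpy as np

def estimate_delay(reference, recording):
    """Estimate the output-path delay (in samples) between the media
    signal sent to the audio output device and the ambient recording,
    using the peak of the cross-correlation."""
    corr = np.correlate(recording, reference, mode="full")
    return max(int(np.argmax(corr)) - (len(reference) - 1), 0)

def cancel_media(recording, reference, delay):
    """Subtract the delay-aligned media reference from the recording,
    leaving (approximately) the user's voice command."""
    aligned = np.zeros_like(recording)
    n = min(len(reference), len(recording) - delay)
    aligned[delay:delay + n] = reference[:n]
    return recording - aligned
```

Once the delay has been calibrated, each subsequent command can be recovered with a single subtraction, which matches the abstract's point about modest memory and compute requirements.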
NETWORKED AUDIO AURALIZATION AND FEEDBACK CANCELLATION SYSTEM AND METHOD
The present embodiments generally relate to enabling participants in an online gathering with networked audio to use a cancelling auralizer at their respective locations, creating a common acoustic space or a set of acoustic spaces shared among subgroups of participants. For example, a set of network-connected nodes may each contain speakers, microphones, participants, and node mixing-processing blocks. The node mixing-processing blocks generate and manipulate signals for playback over the node loudspeakers and for distribution to and from the network. This processing can include cancellation of loudspeaker signals from the microphone signals and auralization of signals according to control parameters that are developed locally and from the network. A network block can contain network routing and processing functions, including auralization, synthesis, and cancellation of audio signals, synthesis and processing of control parameters, and audio signal and control parameter routing.
Audio effectiveness heatmap
An audio system can be configured to generate an audio heatmap for the audio emission potential profiles of one or more speakers, in specific or arbitrary locations. The audio heatmap may be based on speaker location and orientation, speaker acoustic properties, and optionally environmental properties; it typically shows areas of low sound density where speakers are sparse and areas of high sound density where speakers are concentrated. An audio system may be configured to normalize audio signals for a set of speakers that cooperatively emit sound to render an audio object in a defined audio object location. The audio signals for each speaker can be normalized to ensure accurate rendering of the audio object without volume spikes or dropouts.
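One way to realize the normalization step, sketched under the assumption of simple inverse-distance panning gains with unit-power normalization (the gain law and names are illustrative choices, not taken from the patent):

```python
import numpy as np

def normalized_gains(speaker_positions, object_position):
    """Per-speaker gains for rendering an audio object at a given location:
    closer speakers contribute more, and the gains are normalized to unit
    total power so the object renders without volume spikes or dropouts."""
    pos = np.asarray(speaker_positions, dtype=float)
    dists = np.linalg.norm(pos - np.asarray(object_position, dtype=float), axis=1)
    raw = 1.0 / np.maximum(dists, 1e-6)  # avoid division by zero at a speaker
    return raw / np.linalg.norm(raw)     # sum of squared gains == 1
```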
SOUND PROCESSING DEVICE, SOUND PROCESSING METHOD, AND PROGRAM
The present technology relates to a sound processing device, a sound processing method, and a program that enable a sound signal adapted to an intended use to be output.
A sound signal adapted to an intended use can be output by providing a sound processing device including a signal processing part that processes a sound signal picked up by a microphone and generates both a recording sound signal to be recorded in a recording device and an amplification sound signal, different from the recording sound signal, to be output from a speaker. The present technology can be applied to, for example, a sound amplification system that performs off-microphone sound amplification.
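A toy illustration of producing two different output signals from one pickup. The specific processing chosen here (pass-through for the recording path, a first-order high-pass for the amplification path to reduce feedback) is an assumption for illustration only:

```python
import numpy as np

def process_pickup(mic_signal):
    """Generate a recording sound signal and a different amplification
    sound signal from a single microphone pickup. The recording path
    keeps the signal as-is, while the amplification path attenuates low
    frequencies (a first-order high-pass FIR) as a crude guard against
    howling in off-microphone sound amplification."""
    recording = mic_signal.copy()
    hp = np.array([1.0, -0.95])  # first-order high-pass FIR taps
    amplification = np.convolve(mic_signal, hp)[:len(mic_signal)]
    return recording, amplification
```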
Self-equalizing loudspeaker system
An impulse response is computed between i) an audio signal that is being output as sound by a loudspeaker that is integrated in a loudspeaker enclosure, and ii) a microphone signal from a microphone that is recording the output of the loudspeaker and that is also integrated in the loudspeaker enclosure. A reverberation spectrum is extracted from the impulse response. The sound power spectrum at the listening distance is estimated based on the reverberation spectrum, and an equalization filter is determined based on i) the estimated sound power spectrum and ii) a desired frequency response at the listening distance. Other aspects are also described and claimed.
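The measurement chain can be sketched roughly as follows; the frequency-domain deconvolution and the square-root magnitude EQ are illustrative choices, not the patent's specific method:

```python
import numpy as np

def impulse_response(played, recorded, eps=1e-8):
    """Estimate the speaker-to-microphone impulse response by deconvolving
    the microphone signal with the played audio in the frequency domain."""
    n = len(played) + len(recorded) - 1
    H = np.fft.rfft(recorded, n) / (np.fft.rfft(played, n) + eps)
    return np.fft.irfft(H, n)

def eq_gains(measured_power, target_power, eps=1e-8):
    """Per-bin magnitude equalization: scale each frequency bin so the
    measured sound power spectrum approaches the desired response."""
    return np.sqrt(np.asarray(target_power) / (np.asarray(measured_power) + eps))
```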
SPATIAL AUDIO CORRECTION
Example techniques may involve performing aspects of a spatial calibration. An example implementation may include detecting a trigger condition that initiates calibration of a media playback system including multiple audio drivers that form multiple sound axes, each sound axis corresponding to a respective channel of multi-channel audio content. The implementation may also include causing the multiple audio drivers to emit calibration audio that is divided into constituent frames, the multiple sound axes emitting calibration audio during respective slots of each constituent frame. The implementation may further include recording the emitted calibration audio. The implementation may additionally include causing a delay to be determined for each sound axis, based on the slots of the recorded calibration audio corresponding to that sound axis, and causing the multiple sound axes to be calibrated.
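As a rough sketch of per-slot delay determination (the slot layout, onset threshold, and names are assumptions for illustration, not the patent's method):

```python
import numpy as np

def axis_delays(recording, slot_len, num_axes, sample_rate):
    """For calibration audio divided into one slot per sound axis, estimate
    each axis's delay as the onset position of its burst within its slot,
    converted to seconds."""
    delays = []
    for axis in range(num_axes):
        slot = recording[axis * slot_len:(axis + 1) * slot_len]
        threshold = 0.1 * np.max(np.abs(slot))
        onset = int(np.argmax(np.abs(slot) > threshold))  # first sample over threshold
        delays.append(onset / sample_rate)
    return delays
```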
Calibration of a Playback Device Based on an Estimated Frequency Response
An example playback device is configured to receive a first stream of audio comprising source audio content to be played back by the playback device and record, via one or more microphones of the playback device, an audio signal output by the playback device based on the playback device playing the source audio content. The playback device is also configured to determine a transfer function between a frequency-domain representation of the first stream of audio and a frequency-domain representation of the recorded audio signal, and then determine an estimated frequency response of the playback device based on a difference between (i) the transfer function and (ii) a self-response of the playback device, where the self-response of the playback device is stored in a memory of the playback device. Based on the estimated frequency response, the playback device is configured to determine an acoustic calibration adjustment and implement the acoustic calibration adjustment.
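In rough terms, the estimated-frequency-response computation might look like the following log-magnitude subtraction; the function name, dB-domain arithmetic, and FFT size are illustrative assumptions:

```python
import numpy as np

def estimated_frequency_response_db(source, recorded, self_response_db, n_fft=1024):
    """Transfer function (in dB) between the source audio stream and the
    microphone recording, minus the device's stored self-response, giving
    an estimate of the room's contribution to the response."""
    S = np.abs(np.fft.rfft(source, n_fft)) + 1e-12
    R = np.abs(np.fft.rfft(recorded, n_fft)) + 1e-12
    transfer_db = 20.0 * np.log10(R / S)
    return transfer_db - np.asarray(self_response_db, dtype=float)
```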
Playback device calibration based on representative spectral characteristics
An example computing device is configured to perform functions including receiving a plurality of spectral data associated with a respective plurality of playback environments corresponding to a respective plurality of playback devices. The functions also include, based on the plurality of spectral data, determining a plurality of representative spectral characteristics. The functions also include receiving particular spectral data associated with a particular playback environment corresponding to a particular playback device and identifying a given one of the representative spectral characteristics that is representative of the particular spectral data. The functions also include, based on the given one of the representative spectral characteristics, identifying calibration data for use by the particular playback device when playing back audio and transmitting, to the particular playback device, the calibration data.
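The "identify a representative spectral characteristic" step amounts to a nearest-match lookup; a sketch, treating the representatives as precomputed vectors (e.g. cluster centroids over many playback environments, which is an assumption):

```python
import numpy as np

def nearest_representative(representatives, measured):
    """Return the index of the representative spectral characteristic
    closest (in Euclidean distance) to the spectral data measured in a
    particular playback environment; the calibration data stored for it
    would then be transmitted to that playback device."""
    reps = np.asarray(representatives, dtype=float)
    d = np.linalg.norm(reps - np.asarray(measured, dtype=float), axis=1)
    return int(np.argmin(d))
```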
Voice controlled system
A distributed voice controlled system has a primary assistant and at least one secondary assistant. The primary assistant has a housing that holds one or more microphones, one or more speakers, and various computing components. The secondary assistant is similar in structure but has no speakers. The voice controlled assistants perform transactions and other functions primarily based on verbal interactions with a user. The assistants within the system are coordinated and synchronized to perform acoustic echo cancellation, selection of the best audio input from among the assistants, and distributed processing.
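The "best audio input" selection could be as simple as an energy comparison across the assistants' microphone captures; an illustrative proxy only, since the patent does not specify the selection criterion:

```python
import numpy as np

def select_best_input(mic_captures):
    """Pick the assistant whose microphone capture has the highest mean
    energy, as a simple stand-in for 'best audio input' selection."""
    energies = [float(np.mean(np.square(np.asarray(s, dtype=float))))
                for s in mic_captures]
    return int(np.argmax(energies))
```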