Patent classifications
H03G5/025
PLAYBACK OF GENERATIVE MEDIA CONTENT
Generative media content (e.g., generative audio) can be played back across multiple playback devices concurrently. A generative content group coordinator device can receive input parameters, which can include sensor data, media content, or other such input. The coordinator device can generate first and second generative media content streams, each of which can be transmitted to first and second playback devices, respectively. The first and second playback devices can play back the first and second streams of generative media content concurrently.
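The coordinator/group topology described above might be sketched as follows. All class and function names here are illustrative, and the generative engine itself is stubbed with a seeded pseudo-random generator rather than a real audio model:

```python
import random

def generate_stream(params, channel):
    """Stand-in for a generative-audio engine: derive a deterministic
    pseudo-random sample sequence from shared input parameters plus a
    per-channel offset, so each device gets a distinct stream."""
    rng = random.Random(params["seed"] * 1000 + channel)
    return [rng.uniform(-1.0, 1.0) for _ in range(params["length"])]

class GroupCoordinator:
    """Hypothetical coordinator: turns input parameters (e.g., sensor
    data) into one generative stream per member playback device."""
    def __init__(self, input_params):
        self.input_params = input_params

    def streams_for(self, device_ids):
        # One distinct stream per device; all derived from shared input.
        return {dev: generate_stream(self.input_params, i)
                for i, dev in enumerate(device_ids)}

params = {"seed": 42, "length": 4}
streams = GroupCoordinator(params).streams_for(["living_room", "kitchen"])
```

In a real system the two streams would be transmitted to the devices and played back concurrently; here they are simply returned for inspection.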
Adjusting a playback device
Certain embodiments provide methods and systems for managing a sound profile. An example playback device includes a processor, a network interface, and a non-transitory computer-readable storage medium having stored therein instructions executable by the processor. When executed by the processor, the instructions configure the playback device to: receive, via the network interface over a local area network (LAN) from a controller device, an instruction; obtain, based on the instruction, via the network interface from a location outside of the LAN, data comprising a sound profile; update one or more parameters at the playback device based on the sound profile; and play back an audio signal according to the sound profile.
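A minimal sketch of that update flow, assuming a dictionary-style profile and stubbing out the fetch from outside the LAN (all names and parameter keys are illustrative, not taken from the patent):

```python
def fetch_sound_profile(url):
    # Stub for fetching profile data from outside the LAN; a real
    # device might use urllib.request.urlopen(url) and parse JSON.
    return {"eq_bass_db": 3.0, "eq_treble_db": -1.5}

class PlaybackDevice:
    def __init__(self):
        self.params = {"eq_bass_db": 0.0, "eq_treble_db": 0.0, "volume": 40}

    def handle_instruction(self, instruction):
        # Instruction arrives over the LAN; profile comes from outside it.
        profile = fetch_sound_profile(instruction["profile_url"])
        self.params.update(profile)  # overwrite only profile-covered keys
        return self.params

dev = PlaybackDevice()
result = dev.handle_instruction({"profile_url": "https://example.com/profile.json"})
```

Note that parameters not named in the profile (here, `volume`) are left untouched.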
Method and system for optimizing the low-frequency sound rendition of an audio signal
A system and method for optimizing the low-frequency sound rendition of an audio signal. The system varies a plurality of parameters of the audio signal according to the volume level chosen by the user, in particular filtering or compression parameters, or parameters relating to the harmonics of the audio signal, while seeking to optimize the dynamics and the bandwidth of the signal according to the volume, in order to provide an optimal rendition to the user.
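One way such volume-dependent parameter variation could look, as a sketch only: the mapping curve, ranges, and parameter names below are assumptions for illustration, not values from the patent.

```python
def low_freq_params(volume):
    """Hypothetical mapping from user volume (0-100) to low-frequency
    processing parameters: at low volume, extend the bass (lower
    high-pass cutoff, gentler compression); at high volume, protect
    the driver (higher cutoff, stronger compression)."""
    v = max(0, min(100, volume)) / 100.0
    return {
        "highpass_hz": 30.0 + 70.0 * v,  # 30 Hz at min volume, 100 Hz at max
        "comp_ratio": 1.0 + 3.0 * v,     # 1:1 at min volume, 4:1 at max
    }
```

The key property is that both parameters vary monotonically with the chosen volume, trading bandwidth against driver headroom.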
Techniques for Enabling Interoperability between Media Playback Systems
A device is configured to (i) receive media content from a media source, (ii) generate a first series of frames including first portions of the media content and first playback timing information, (iii) generate a second series of frames including second portions of the media content and second playback timing information, (iv) transmit the first series of frames to a first playback device in a synchrony group for playback of the first portions of the media content in accordance with the first playback timing information, and (v) transmit the second series of frames to a second playback device in the synchrony group for playback of the second portions of the media content in accordance with the second playback timing information such that the media content is played back in synchrony by the synchrony group.
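The frame-plus-timing structure described above can be sketched as follows; the frame layout and field names are assumptions, and the shared clock is represented by simple millisecond timestamps:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    payload: bytes   # a portion of the media content
    play_at_ms: int  # playback timing relative to a shared group clock

def make_frames(content, start_ms, frame_bytes, frame_ms):
    """Split media content into frames, each stamped with the time at
    which the receiving group member should play it, so independently
    delivered series stay in synchrony (illustrative sketch)."""
    return [Frame(content[i:i + frame_bytes],
                  start_ms + (i // frame_bytes) * frame_ms)
            for i in range(0, len(content), frame_bytes)]

frames = make_frames(b"0123456789", start_ms=1000, frame_bytes=4, frame_ms=20)
```

Two playback devices receiving separately generated series of such frames can play in synchrony because each frame carries an absolute playback time rather than relying on arrival order.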
SOUND PROCESSING APPARATUS AND SOUND PROCESSING SYSTEM
The present technology relates to a sound processing apparatus and a sound processing system for enabling more stable localization of a sound image.
A virtual speaker is assumed to exist on the lower side of a tetragon whose corners are formed by four speakers surrounding a target sound image position on a spherical surface. Three-dimensional VBAP is performed with respect to the virtual speaker and the two speakers located at the upper right and the upper left, to calculate gains of those two speakers and the virtual speaker, the gains being used for fixing a sound image at the target sound image position. Further, two-dimensional VBAP is performed with respect to the lower-right and lower-left speakers, to calculate gains of those speakers for fixing a sound image at the position of the virtual speaker. The values obtained by multiplying these gains by the gain of the virtual speaker are set as the gains of the lower-right and lower-left speakers for fixing a sound image at the target sound image position. The present technology can be applied to sound processing apparatuses.
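The two-dimensional VBAP stage, and the final multiplication by the virtual speaker's gain, can be sketched as below. The speaker azimuths and the virtual-speaker gain from the 3-D stage are assumed values for illustration:

```python
import math

def vbap2(az_spk1, az_spk2, az_target):
    """Two-dimensional VBAP: solve g1*l1 + g2*l2 = p for the gains of
    two speakers at the given azimuths (degrees), where l1, l2, p are
    unit direction vectors, then power-normalize so g1^2 + g2^2 = 1."""
    def unit(az):
        r = math.radians(az)
        return (math.cos(r), math.sin(r))
    (x1, y1), (x2, y2), (px, py) = unit(az_spk1), unit(az_spk2), unit(az_target)
    det = x1 * y2 - x2 * y1          # invert the 2x2 speaker matrix
    g1 = (px * y2 - py * x2) / det
    g2 = (py * x1 - px * y1) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# Gains of the lower-left/lower-right speakers toward the virtual
# speaker, scaled by the virtual speaker's own gain from the 3-D stage
# (the value 0.5 here is an assumed placeholder).
g_virtual = 0.5
g_ll, g_lr = (g * g_virtual for g in vbap2(-30.0, 30.0, 0.0))
```

For a target centered between two symmetric speakers, the 2-D stage yields equal gains of 1/sqrt(2) each, which the final scaling reduces by the virtual speaker's gain.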
Smart Audio Settings
Embodiments described herein provide for smart configuration of audio settings for a playback device. According to an embodiment, while a playback device is a part of a first zone group that includes the playback device and at least one first playback device, the playback device applies a first audio setting. The embodiment also includes the playback device joining a second zone group that includes the playback device and at least one second playback device. The embodiment further includes the playback device applying a second audio setting based on an audio content profile corresponding to the second zone group.
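A toy sketch of the zone-group-driven setting switch; the profile names, setting contents, and lookup-table approach are all assumptions made for illustration:

```python
# Hypothetical mapping from a zone group's audio content profile to the
# audio setting the joining device should apply.
SETTINGS_BY_PROFILE = {
    "tv_surround": {"eq": "movie", "bass_db": 4},
    "music_party": {"eq": "loud", "bass_db": 2},
}

class SmartPlaybackDevice:
    def __init__(self):
        # First audio setting, applied while in the first zone group.
        self.audio_setting = {"eq": "flat", "bass_db": 0}

    def join_zone_group(self, content_profile):
        # On joining a second zone group, apply the setting matching
        # that group's content profile; keep the old one if unknown.
        self.audio_setting = SETTINGS_BY_PROFILE.get(
            content_profile, self.audio_setting)
        return self.audio_setting

dev = SmartPlaybackDevice()
setting = dev.join_zone_group("music_party")
```

The point is that the setting change is driven by the group's content profile rather than by manual reconfiguration.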
Methods and apparatus to adjust audio playback settings based on analysis of audio characteristics
Methods, apparatus, systems and articles of manufacture are disclosed to adjust audio playback settings based on analysis of audio characteristics. Example apparatus disclosed herein include an equalization (EQ) model query generator to generate a query to a neural network, the query including a representation of a sample of an audio signal; an EQ filter settings analyzer to: access a plurality of audio playback settings determined by the neural network based on the query; and determine a filter coefficient to apply to the audio signal based on the plurality of audio playback settings; and an EQ adjustment implementer to apply the filter coefficient to the audio signal in a first duration.
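The query/analyze/apply pipeline might be outlined as below. The neural network is replaced by a stub, and the reduction of settings to a single scalar coefficient is a deliberate simplification (a real implementation would compute full filter coefficients, e.g. biquad sections):

```python
def query_eq_model(sample_repr):
    # Stand-in for the neural-network query: given a representation of
    # an audio sample, return assumed per-band playback settings.
    return {"bass_gain": 1.2, "treble_gain": 0.8}

def settings_to_coefficient(settings):
    # Toy reduction of the settings to one scalar filter coefficient.
    return (settings["bass_gain"] + settings["treble_gain"]) / 2.0

def apply_coefficient(signal, coeff):
    # Apply the coefficient over the first duration (here: whole block).
    return [s * coeff for s in signal]

settings = query_eq_model(sample_repr=[0.1, 0.2, 0.3])
coeff = settings_to_coefficient(settings)
out = apply_coefficient([1.0, -1.0, 0.5], coeff)
```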
Genetic-algorithm-based equalization using IIR filters
Systems and methods utilize a modified genetic algorithm for adapting an off-the-shelf audio system, such as one in a high-end television, to a given room or other physical location presenting a specific auditory environment with its own set of acoustic properties. The audio system is adapted to the room by determining an IIR-based EQ solution via iterative techniques, including an iterative technique based upon a genetic algorithm adapted for audio frequency response equalization. In a variant, the audio system is adapted to a particular room by adjusting the EQ across a microphone's bandwidth while preserving the factory-calibrated EQ response across the remaining bandwidth.
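The genetic-algorithm loop at the core of such a system can be sketched in miniature. Here a single scalar EQ gain is evolved toward an assumed target, with truncation selection, blend crossover, and Gaussian mutation; a real optimizer would evolve full IIR filter coefficient sets against a measured room response:

```python
import random

def fitness(gain, target=2.0):
    # Toy fitness: negative squared error between the candidate EQ gain
    # and the (assumed) gain that flattens the room response.
    return -(gain - target) ** 2

def genetic_eq(generations=60, pop_size=20, seed=1):
    """Minimal genetic algorithm: keep the fitter half as parents
    (elitism via truncation), breed children by blend crossover, and
    apply Gaussian mutation. Evolves one scalar parameter."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 4.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0.0, 0.05)  # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_eq()
```

Because the fittest parents survive each generation, the best candidate converges toward the target gain while mutation keeps exploring nearby values.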
SYSTEM AND METHOD FOR DETERMINING OPERATIONAL STATUS OF POWER-OVER-ETHERNET POWERED LOUDSPEAKERS IN AN AUDIO DISTRIBUTION SYSTEM
A system and method are described herein for determining the operational status of one or more power-over-Ethernet powered loudspeakers, the system and method comprising: receiving audio data at an audio receiver; transmitting the received audio data from the audio receiver to one or more audio data interface devices over an Ethernet cable using an audio-over-Internet-Protocol (AoIP) encoding scheme; receiving the transmitted AoIP-encoded audio data at the audio data interface device, converting the encoded audio data to an analog audio signal, transmitting the analog audio signal to at least one loudspeaker, and broadcasting the same as an acoustic audio signal; substantially continuously receiving and storing status information by a status monitor located in the audio data interface device; and transmitting the received and stored status information to the audio receiver.