
Synchronizing playback by media playback devices

Example systems, apparatus, and methods receive audio information including a plurality of frames from a source device, wherein each frame of the plurality of frames includes one or more audio samples and a time stamp indicating when to play the one or more audio samples of the respective frame. In an example, the time stamp is updated for each of the plurality of frames using a time differential value determined between clock information received from the source device and clock information associated with the playback device. The updated time stamp is stored for each of the plurality of frames, and the audio information is output based on the plurality of frames and associated updated time stamps. A number of samples per frame to be output is adjusted based on a comparison between the updated time stamp for the frame and a predicted time value for playback of the frame.
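
The timestamp rebasing and per-frame sample adjustment described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the `Frame` type, sample rate, tolerance, and the cap on how many samples are corrected per frame are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    samples: list          # PCM samples for this frame
    timestamp: float       # time (seconds) at which to play the samples

SAMPLE_RATE = 48_000       # assumed sample rate

def time_differential(source_clock: float, local_clock: float) -> float:
    """Offset between the source device's clock and the playback device's clock."""
    return local_clock - source_clock

def update_timestamps(frames, source_clock, local_clock):
    """Rebase each frame's timestamp from the source clock domain into the
    local clock domain using the time differential value."""
    diff = time_differential(source_clock, local_clock)
    for f in frames:
        f.timestamp += diff
    return frames

def adjust_samples(frame, predicted_play_time, tolerance=0.001):
    """Drop or pad a few samples when the frame's updated timestamp drifts
    from the locally predicted play time, nudging playback back into sync."""
    drift = frame.timestamp - predicted_play_time
    if abs(drift) <= tolerance:
        return frame.samples
    delta = int(round(abs(drift) * SAMPLE_RATE))
    delta = min(delta, len(frame.samples) // 100 or 1)  # small correction per frame
    if drift < 0:
        # Frame is late relative to the prediction: drop samples to catch up.
        return frame.samples[delta:]
    # Frame is early: pad by repeating the last sample.
    return frame.samples + [frame.samples[-1]] * delta
```

Correcting only a small number of samples per frame keeps each adjustment inaudible while still converging on the predicted play time over many frames.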

Carrier aggregation and high order modulation in vehicle-to-vehicle (V2V) sidelink communication

Embodiments of a User Equipment (UE) and methods for communication are generally described herein. The UE may be configured for carrier aggregation using a primary component carrier (CC) and a secondary CC. The UE may attempt to detect a sidelink synchronization signal (SLSS) from another UE on the primary CC. The UE may, if the SLSS from the other UE is detected: determine, based on the detected SLSS, a common time synchronization for the primary CC and the secondary CC for vehicle-to-vehicle (V2V) sidelink transmissions in accordance with the carrier aggregation. The UE may, if the SLSS from the other UE is not detected: transmit an SLSS to enable determination of the common time synchronization for the primary CC and the secondary CC by the other UE. The SLSS may be transmitted on the primary CC.
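
The detect-or-transmit decision in this abstract can be sketched as a small procedure. This is an illustrative skeleton only; the callbacks `detect_slss`, `derive_sync`, and `transmit_slss` are hypothetical stand-ins for the UE's radio operations, and the returned timing structure is an assumption.

```python
def v2v_sync_procedure(detect_slss, derive_sync, transmit_slss):
    """If an SLSS from another UE is detected on the primary component
    carrier, derive a common time synchronization and apply it to both the
    primary and secondary CCs for V2V sidelink transmission under carrier
    aggregation. If no SLSS is detected, transmit our own SLSS on the
    primary CC so the other UE can determine the common timing."""
    slss = detect_slss(carrier="primary")
    if slss is not None:
        timing = derive_sync(slss)
        # Common timing shared by both CCs in the aggregation.
        return {"primary": timing, "secondary": timing}
    transmit_slss(carrier="primary")
    return None
```

Keeping SLSS detection and transmission on the primary CC alone means a single synchronization reference governs every aggregated carrier, avoiding per-CC timing drift.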

Metadata for loudness and dynamic range control

An audio normalization gain value is applied to an audio signal to produce a normalized signal. The normalized signal is processed to compute dynamic range control (DRC) gain values in accordance with a selected one of several pre-defined DRC characteristics. The audio signal is encoded, and the DRC gain values are provided as metadata associated with the encoded audio signal. Several other embodiments are also described and claimed.
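
A minimal sketch of the normalization-plus-DRC-metadata pipeline follows. The loudness estimate (simple RMS in dBFS), the −24 dB target, and the threshold/ratio compression curve standing in for a "pre-defined DRC characteristic" are all assumptions for illustration, not the claimed method.

```python
import math

def loudness_dbfs(samples):
    """Crude loudness estimate: RMS level in dBFS (a real encoder would use
    a standardized loudness measure)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))

def normalization_gain_db(samples, target_db=-24.0):
    """Audio normalization gain that brings program loudness to the target."""
    return target_db - loudness_dbfs(samples)

def drc_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """One pre-defined DRC characteristic, modeled as a static compression
    curve: unity gain below the threshold, attenuation above it."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

def drc_metadata(frames, threshold_db=-20.0, ratio=4.0):
    """Per-frame DRC gain values carried as metadata alongside the encoded
    audio, rather than baked into the signal itself."""
    return [drc_gain_db(loudness_dbfs(f), threshold_db, ratio) for f in frames]
```

Shipping the DRC gains as metadata lets the decoder choose whether, and how strongly, to apply dynamic range control at playback time, since the encoded signal itself remains uncompressed.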

Dynamic sound masking based on monitoring biosignals and environmental noises

Aspects of the present disclosure provide methods, apparatuses, and systems for closed-loop sleep protection and/or sleep regulation. Environmental noises in a subject's sleeping environment are detected, input, or predicted based on historical data of the sleeping environment collected over a period of time, and a biosignal parameter is measured to determine the subject's sleep physiology. Based on the environmental noises and the determined sleep physiology, each noise is predicted to be disturbing or non-disturbing. For predicted disturbing noises, one or more actions are taken to regulate sleep and avoid sleep disruption, such as applying sound masking with active attenuation prior to or concurrently with the occurrence of the predicted noise.
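
The closed-loop decision can be sketched as below. The per-stage tolerance thresholds, the history-based baseline, and the masking level are hypothetical values invented for illustration; the disclosure does not specify them.

```python
def classify_noise(noise_level_db, sleep_stage, history_db):
    """Predict whether a noise will disturb sleep, combining the measured
    noise level, the subject's sleep physiology (stage from a biosignal),
    and historical noise levels. Deeper sleep tolerates louder noises
    (hypothetical thresholds)."""
    tolerance = {"deep": 55.0, "rem": 45.0, "light": 40.0}[sleep_stage]
    baseline = sum(history_db) / len(history_db) if history_db else 30.0
    # A noise is disturbing if it exceeds both the stage tolerance and a
    # margin above the environment's usual baseline.
    return noise_level_db > max(tolerance, baseline + 10.0)

def masking_action(noise_level_db, sleep_stage, history_db):
    """Start sound masking before or during a predicted disturbing noise;
    leave the environment alone otherwise."""
    if classify_noise(noise_level_db, sleep_stage, history_db):
        return {"mask": True, "mask_level_db": noise_level_db - 5.0}
    return {"mask": False}
```

Because the classifier consults sleep physiology, the same noise can be masked during light sleep yet ignored during deep sleep, which keeps masking sound out of the bedroom when it is not needed.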

Configuration of device through microphone port

In one aspect, a device includes at least one processor, a touch-enabled display accessible to the at least one processor, and storage accessible to the at least one processor. The storage includes instructions executable by the at least one processor to detect a hover of a body part of a user or other physical object above the touch-enabled display, where the hover does not include the physical object physically touching the touch-enabled display. The instructions are also executable to identify a graphical object underneath the hover and to cache data associated with the graphical object prior to the graphical object being selected based on the physical object physically touching the touch-enabled display.
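
The hover-triggered prefetch described above can be sketched as a small cache. This is an illustrative model only: `load` is a hypothetical stand-in for whatever fetch the graphical object's data requires, and the string object IDs are assumptions.

```python
class HoverPrefetcher:
    """Cache data for the graphical object detected underneath a hover, so
    the data is already available when the object is later selected by a
    physical touch."""

    def __init__(self, load):
        self.load = load          # callback that fetches an object's data
        self.cache = {}

    def on_hover(self, object_id):
        # Hover detected above the display (no physical touch yet):
        # prefetch and cache the object's data ahead of selection.
        if object_id not in self.cache:
            self.cache[object_id] = self.load(object_id)

    def on_touch(self, object_id):
        # Selection by an actual touch: serve from cache if prefetched,
        # otherwise fall back to loading on demand.
        return self.cache.get(object_id) or self.load(object_id)
```

The payoff is latency hiding: the expensive load runs during the hover, so by the time the finger lands the data is a dictionary lookup away.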

Automatic volume control for combined game and chat audio

A system comprising audio processing circuitry is provided. The audio processing circuitry is operable to receive combined-game-and-chat audio signals. The audio processing circuitry is operable to process the combined-game-and-chat audio signals to detect strength of a chat component of the audio signals and strength of a game component of the audio signals. The audio processing circuitry is operable to automatically control a volume setting based on one or both of: the detected strength of the chat component, and the detected strength of the game component. The combined-game-and-chat audio signals may comprise a left channel signal and a right channel signal. The processing of the combined-game-and-chat audio signals may comprise measuring strength of a vocal-band signal component that is common to the left channel signal and the right channel signal.
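
One way to read the mid/side idea in this abstract: chat is typically mixed identically into both channels, while game audio is stereo, so the channel sum approximates the chat component and the channel difference approximates the game component. The sketch below uses that assumption; the ducking control law and volume values are hypothetical, and a real implementation would band-limit the common component to the vocal band first.

```python
import math

def rms(xs):
    """Root-mean-square strength of a signal block."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def chat_and_game_strength(left, right):
    """Estimate chat strength from the mid signal (common to both channels)
    and game strength from the side signal (channel difference)."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return rms(mid), rms(side)

def auto_volume(left, right, base_volume=0.5):
    """Automatically duck the volume setting when the chat component
    dominates, restoring it otherwise (hypothetical control law)."""
    chat, game = chat_and_game_strength(left, right)
    if chat > game:
        return max(0.1, base_volume * 0.5)  # chat dominates: lower the volume
    return base_volume
```

Measuring the common component rather than running speech detection keeps the circuitry cheap: two channel sums and two RMS estimates per block are enough to drive the volume decision.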
