Patent classifications
H04L65/80
Synchronizing independent media and data streams using media stream synchronization points
A messaging channel is embedded directly into a media stream. Messages delivered via the embedded messaging channel are extracted at a client media player. According to a variant embodiment, and in lieu of embedding all of the message data in the media stream, only a coordination index is injected, and the message data is sent separately and merged into the media stream downstream (at the client media player) based on the coordination index. In one example embodiment, multiple data streams (each potentially with different content intended for a particular “type” or class of user) are transmitted alongside the video stream in which the coordination index (e.g., a sequence number) has been injected into a video frame. Based on a user's service level, a particular one of the multiple data streams is released when the sequence number appears in the video frame, and the data in that stream is associated with the media.
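The coordination-index variant above can be sketched as follows: the client holds several per-class data streams keyed by sequence number and releases the entry for the viewer's service level only when that sequence number appears in a video frame. All names (`merge_stream`, the service-level labels, the frame dictionary layout) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of downstream merging keyed on a coordination index.
def merge_stream(frames, data_streams, service_level):
    """Yield (frame, payload) pairs, releasing the data-stream entry for the
    viewer's service level when its coordination index appears in a frame.

    frames: iterable of dicts like {"pixels": ..., "seq": 17}, where "seq"
      is the sequence number injected into the video frame upstream.
    data_streams: {service_level: {seq: payload}} -- one stream per user class.
    """
    stream = data_streams.get(service_level, {})
    for frame in frames:
        # The payload is merged only at the synchronization point, i.e. when
        # the frame carries a sequence number present in this user's stream.
        payload = stream.get(frame.get("seq"))
        yield frame, payload

frames = [{"pixels": b"...", "seq": 1}, {"pixels": b"...", "seq": 2}]
streams = {
    "premium": {2: {"msg": "full stats"}},
    "basic":   {2: {"msg": "score only"}},
}
merged = list(merge_stream(frames, streams, "premium"))
```

Because only the small index travels in the media stream, the bulkier message data can be delivered out of band per user class and still land on the right frame at the client.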
FRONTEND CAPTURE
Disclosed are systems and methods for a frontend capture module of a video conferencing application, which can modify an input signal received from a microphone device to match predetermined signal characteristics, such as voice signal level and expected noise floor. An input stage, a suppression module, and an output stage amplify the voice portion of the input signal and suppress the noise portion of the input signal to predetermined ranges. The input stage selectively applies gains defined by a gain table, based on the signal level of the input signal. The suppression module selectively applies a suppression gain to the input signal based on the presence or absence of a voice signal in the input signal. The output stage further amplifies the portions of the input signal having a voice signal and applies a gain table to maintain a consistent noise floor.
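A minimal sketch of the three-stage pipeline described above, working in decibels for simplicity. The gain-table values, thresholds, and function names are assumptions chosen for illustration, not the patent's parameters.

```python
def input_stage(level_db, gain_table):
    """Pick an input gain (dB) from a gain table keyed by signal level (dB)."""
    for threshold, gain in gain_table:       # table sorted by rising threshold
        if level_db <= threshold:
            return gain
    return gain_table[-1][1]

def suppression_stage(voice_present, suppression_gain_db=-20.0):
    """Attenuate non-voice segments; pass voice segments through unchanged."""
    return 0.0 if voice_present else suppression_gain_db

def output_stage(voice_present, boost_db=6.0):
    """Further amplify portions that contain voice to reach the target level."""
    return boost_db if voice_present else 0.0

# (input level dB threshold, gain dB) -- quieter input gets more gain.
GAIN_TABLE = [(-40.0, 18.0), (-25.0, 9.0), (0.0, 0.0)]

def total_gain_db(level_db, voice_present):
    """Combined gain of the three stages for one segment of the input."""
    return (input_stage(level_db, GAIN_TABLE)
            + suppression_stage(voice_present)
            + output_stage(voice_present))
```

The net effect is the one the abstract describes: voice segments are boosted toward a consistent level while non-voice segments are pushed down toward a consistent noise floor.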
Autonomous vehicle teleoperations system
A teleoperations system may be used to selectively override conditions detected by an autonomous vehicle to enable the autonomous vehicle to effectively ignore detected conditions that are identified as false positives by the teleoperations system. Furthermore, a teleoperations system may be used to generate commands that an autonomous vehicle validates prior to executing to confirm that the commands do not violate any vehicle constraints for the autonomous vehicle. Still further, an autonomous vehicle may be capable of dynamically varying the video quality of one or more camera feeds that are streamed to a teleoperations system over a bandwidth-constrained wireless network based upon a current context of the autonomous vehicle.
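The command-validation step above can be sketched as a vehicle-side check run before any teleoperator command executes. The constraint fields (`allowed_commands`, `max_speed_mps`) and function name are hypothetical, used only to illustrate the flow.

```python
def validate_command(command, constraints):
    """Return True only if a teleoperator command violates no vehicle constraint.

    command: e.g. {"type": "proceed", "target_speed_mps": 5.0}
    constraints: the vehicle's own limits, checked before execution.
    """
    if command["type"] not in constraints["allowed_commands"]:
        return False                         # unknown or forbidden command type
    if command.get("target_speed_mps", 0.0) > constraints["max_speed_mps"]:
        return False                         # would exceed the speed constraint
    return True

constraints = {"allowed_commands": {"proceed", "stop"}, "max_speed_mps": 10.0}
ok = validate_command({"type": "proceed", "target_speed_mps": 5.0}, constraints)
bad = validate_command({"type": "proceed", "target_speed_mps": 25.0}, constraints)
```

Keeping the check on the vehicle, rather than trusting the remote operator, means a command arriving over an unreliable link can never push the vehicle outside its own limits.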
Verifying media stream quality for multiparty video conferences
Embodiments are directed to verifying media stream quality for multiparty video conferences. A verification video may be generated based on verification goals for a video provided by a video service. A marker may be embedded in the verification video. A video conference may be established using video stations such that the video conference may be provided by a video service. The verification video may be streamed to a video input of each video station. The video may be streamed to a video output buffer of each video station such that the video provides a view of the video conference and such that the marker that corresponds to each video station may be included in the video. Video information may be captured from the video output buffer of the video stations. The video service may be classified based on the video information from each video station.
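One way to picture the marker flow above: each station's verification video carries a marker identifying that station, and the service is classified by whether every expected marker can be recovered from the captured output buffers. The function names, marker format, and pass/fail labels are illustrative assumptions.

```python
def embed_marker(frame, station_id):
    """Tag a verification-video frame with the marker for its source station."""
    tagged = dict(frame)
    tagged["marker"] = f"station-{station_id}"
    return tagged

def classify_service(captured_frames, expected_stations):
    """Classify the video service from captured output-buffer frames:
    'pass' only if every expected station's marker survived the conference."""
    seen = {f.get("marker") for f in captured_frames}
    expected = {f"station-{s}" for s in expected_stations}
    return "pass" if expected <= seen else "fail"

# Three stations feed marked verification video into the conference; the
# output buffers are then captured and checked for all three markers.
tagged = [embed_marker({"pixels": b"..."}, s) for s in (1, 2, 3)]
verdict = classify_service(tagged, [1, 2, 3])
```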
Enhanced management of ACs in multi-user EDCA transmission mode in wireless networks
To avoid blocking a node's AC queues in the degraded MU EDCA mode due to regular OFDMA transmission of data from another AC queue in resource units provided by an AP, the present invention proposes using a dedicated HEMUEDCATimer for each AC queue, so that each queue can exit the degraded MU EDCA mode independently of the others. In this respect, upon successfully transmitting data stored in two or more traffic queues, in each of one or more accessed resource units provided by the AP within one or more transmission opportunities, the node sets each traffic queue transmitting in the accessed resource unit into the degraded MU EDCA mode for a predetermined degrading duration, counted down by a respective timer associated with that traffic queue. Upon expiry of any timer, the node switches the associated traffic queue back to the conventional EDCA mode.
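A toy tick-driven model of the per-queue timers described above. The timer name HEMUEDCATimer comes from the abstract; the class layout, tick units, and queue names are assumptions for illustration only.

```python
class ACQueue:
    """One access-category queue with its own dedicated MU EDCA timer."""

    def __init__(self, name):
        self.name = name
        self.mu_edca_timer = 0   # HEMUEDCATimer remaining, in ticks
        self.degraded = False    # True while in degraded MU EDCA mode

    def on_mu_transmission(self, degrading_duration):
        """Enter degraded MU EDCA mode after transmitting in a resource unit."""
        self.degraded = True
        self.mu_edca_timer = degrading_duration

    def tick(self):
        """Count down this queue's own timer; exit independently on expiry."""
        if self.degraded:
            self.mu_edca_timer -= 1
            if self.mu_edca_timer <= 0:
                self.degraded = False   # back to conventional EDCA mode

# Two AC queues transmit in the same transmission opportunity but recover
# independently, since each counts down its own timer.
vo, be = ACQueue("AC_VO"), ACQueue("AC_BE")
vo.on_mu_transmission(2)
be.on_mu_transmission(5)
for _ in range(3):
    vo.tick()
    be.tick()
```

After three ticks, AC_VO has already returned to conventional EDCA while AC_BE is still counting down, which is exactly the independence a single shared timer could not provide.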
In-call feedback to far end device of near end device constraints
A near end device is in a call (voice or video) over a communication link with a far end device. The near end device monitors its own constraints, such as environmental noise at the near end device, latency between sequential data packets, signal strength or quality over the communication link, and energy level. The near end device detects when it is having, or is soon to have, communication difficulty with the call due to one or more of the constraints. In response, the near end device communicates audible feedback, visual feedback, or both to the far end device, notifying the far end device of the actual or imminent communication difficulty.
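The detect-and-notify flow above can be sketched in a few lines. The metric names, thresholds, and message wording are hypothetical assumptions used only to illustrate mapping monitored constraints to a far-end notification.

```python
def assess_constraints(metrics, limits):
    """Return the list of constraints indicating actual or imminent difficulty."""
    issues = []
    if metrics["noise_db"] > limits["max_noise_db"]:
        issues.append("environmental noise")
    if metrics["packet_latency_ms"] > limits["max_latency_ms"]:
        issues.append("packet latency")
    if metrics["signal_strength_dbm"] < limits["min_signal_dbm"]:
        issues.append("weak signal")
    if metrics["battery_pct"] < limits["min_battery_pct"]:
        issues.append("low energy")
    return issues

def feedback_message(issues):
    """Compose the notification sent to the far end, or None if all is well."""
    if not issues:
        return None
    return "Near end reports possible call difficulty: " + ", ".join(issues)

# A noisy environment trips one constraint; the other metrics are within limits.
issues = assess_constraints(
    {"noise_db": 70, "packet_latency_ms": 40,
     "signal_strength_dbm": -60, "battery_pct": 80},
    {"max_noise_db": 65, "max_latency_ms": 150,
     "min_signal_dbm": -90, "min_battery_pct": 15},
)
msg = feedback_message(issues)
```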