H04N19/166

Methods, mediums, and systems for dynamically selecting codecs
11178395 · 2021-11-16 ·

Exemplary embodiments relate to techniques for dynamically selecting codecs as video is transmitted in real time. In some embodiments, a first codec initially encodes the video data, and a second codec is evaluated as a replacement. The system switches to the second codec only if the increased power consumption of using the second codec is balanced by a sufficiently large increase in the quality of the video it encodes. In some embodiments, codecs are excluded from consideration if it is determined that the local device lacks sufficient processing resources to operate the codec, or if a mismatch is detected between the codec operating on a sending device and the codec operating on a receiving device.
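
The switching criterion and the two exclusion rules described in this abstract can be sketched as below. This is a minimal illustration, not the patented implementation; the `Codec` fields, the `quality_gain_per_watt` trade-off factor, and all numeric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Codec:
    name: str
    power_watts: float    # estimated encoder power draw (hypothetical metric)
    quality_score: float  # encoded-video quality score (hypothetical metric)
    cpu_load: float       # fraction of local CPU the codec needs

def should_switch_codec(current, candidate, device_capacity,
                        remote_codecs, quality_gain_per_watt=1.0):
    # Exclude the candidate if the local device lacks the processing
    # resources to operate it.
    if candidate.cpu_load > device_capacity:
        return False
    # Exclude the candidate on a sender/receiver codec mismatch.
    if candidate.name not in remote_codecs:
        return False
    extra_power = candidate.power_watts - current.power_watts
    quality_gain = candidate.quality_score - current.quality_score
    # Switch only if the quality increase balances the added power draw.
    return quality_gain >= extra_power * quality_gain_per_watt
```

The trade-off factor lets a deployment tune how much quality improvement is required per extra watt before a mid-stream switch is worthwhile.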

MULTI-SENSOR MOTION DETECTION
20210352300 · 2021-11-11 ·

Use of multiple sensors to determine whether motion of an object is occurring in an area is described. In one aspect, an infrared (IR) sensor can be supplemented with a radar sensor to confirm that detected motion of an object is not a false positive.
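
The IR-plus-radar confirmation step can be sketched as a simple fusion check; the sensor interface and the velocity threshold here are illustrative assumptions, not the patented design.

```python
def motion_confirmed(ir_triggered, radar_velocity_mps, threshold_mps=0.2):
    # An IR sensor alone can false-positive (e.g. heat drafts, sunlight).
    # Only report motion when the radar also measures object velocity
    # above a small threshold, confirming the IR detection.
    if not ir_triggered:
        return False
    return abs(radar_velocity_mps) > threshold_mps
```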

QoE-BASED ADAPTIVE ACQUISITION AND TRANSMISSION METHOD FOR VR VIDEO
20220006851 · 2022-01-06 ·

The present application discloses a QoE-based adaptive acquisition and transmission method for VR video, comprising the following steps: 1, capturing, by respective cameras in a VR video acquisition system, original videos with the same bit rate level, and compressing each original video with different bit rate levels; 2, selecting, by a server, a bit rate level for each original video for transmission, and synthesizing all of the transmitted original videos into a complete VR video; 3, performing, by the server, a segmentation process on the synthesized VR video, and compressing each video block into different quality levels; and 4, selecting, by the server, a quality level and an MCS scheme for each video block according to real-time viewing angle information of users and downlink channel bandwidth information in a feedback channel, and transmitting each video block to a client.
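
Step 4 above, selecting a quality level per video block under a downlink bandwidth budget while favoring blocks in the user's current viewing angle, can be sketched as a greedy allocation. The tile/bitrate model is a hypothetical simplification (MCS selection is omitted).

```python
def select_tile_qualities(tiles, in_view, bitrates, bandwidth):
    """tiles: iterable of block ids; in_view: set of block ids inside the
    user's current viewing angle; bitrates: per-quality-level bitrate,
    ascending; bandwidth: downlink budget from the feedback channel.

    Greedy sketch: start every block at the lowest quality level, then
    upgrade in-view blocks one level at a time while the budget allows."""
    levels = {t: 0 for t in tiles}
    used = sum(bitrates[0] for _ in tiles)
    for level in range(1, len(bitrates)):
        for t in tiles:
            if t in in_view:
                extra = bitrates[level] - bitrates[levels[t]]
                if used + extra <= bandwidth:
                    used += extra
                    levels[t] = level
    return levels
```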

Method and apparatus for streaming data

A terminal for receiving streaming data may receive information of a plurality of different quality versions of an image content; request, based on the information, a server for a version of the image content from among the plurality of different quality versions; when the requested version of the image content and artificial intelligence (AI) data corresponding to the requested version are received, determine whether to perform AI upscaling on the received version of the image content, based on the AI data; and, based on a result of the determining, perform AI upscaling on the received version of the image content through an upscaling deep neural network (DNN) that is trained jointly with a downscaling DNN of the server.
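
The client-side decision of whether to apply AI upscaling, based on the AI data accompanying the received version, might look like the following sketch. The metadata field names are hypothetical; the abstract does not specify the AI data format.

```python
def decide_ai_upscale(ai_data, has_upscaling_dnn):
    """ai_data: metadata received alongside the requested content version
    (field names hypothetical). Upscale only when the version was produced
    by the server's downscaling DNN and the terminal holds the jointly
    trained upscaling DNN parameters for it."""
    if not has_upscaling_dnn:
        return False
    return bool(ai_data.get("ai_downscaled")) and "dnn_params" in ai_data
```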

SMART PACKET PACING FOR VIDEO FRAME STREAMING
20230254500 · 2023-08-10 ·

In various examples, a frame may be encoded as multiple sub-frames. For example, data particularly relevant to conveying visual motion between frames may be encoded in a first sub-frame(s), with the remaining data encoded in a second sub-frame(s). Other information may be included in the first sub-frame(s), such as high entropy data. The high entropy data may be estimated using quantization and dequantization of macroblocks. Packet pacing may be applied at least between the encoded sub-frames. As the first sub-frame(s) may include the most important information for frame updates at the client device, if the second sub-frame(s) is not received and/or displayed, the first sub-frame may be displayed, still providing high-quality results. More error correction may be used for the first sub-frame than the second sub-frame to increase the likelihood that the first sub-frame is received at a client device.
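
The quantization-based entropy estimate for partitioning macroblocks into sub-frames can be sketched as below. The quantization step, the count threshold, and the flat-list macroblock representation are all illustrative assumptions.

```python
def split_subframes(macroblocks, quant_step=8, high_entropy_count=1):
    """Partition macroblocks into two sub-frames. Entropy is estimated by
    quantizing and dequantizing each block: blocks with many surviving
    (non-zero) dequantized values carry more detail/motion and go into
    the first sub-frame; the rest go into the second."""
    first, second = [], []
    for mb in macroblocks:  # mb: flat list of coefficient values
        dequantized = [round(v / quant_step) * quant_step for v in mb]
        nonzero = sum(1 for v in dequantized if v != 0)
        (first if nonzero > high_entropy_count else second).append(mb)
    return first, second
```

Packet pacing would then transmit the first sub-frame's packets ahead of the second's, with stronger error correction applied to the first.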

Low delay concept in multi-layered video coding

An interleaved multi-layered video data stream with interleaved decoding units of different layers is provided with further timing control information in addition to the timing control information reflecting the interleaved decoding-unit arrangement. The additional timing control information pertains either to a fallback position according to which all decoding units of an access unit are treated at the decoder's buffer access-unit-wise, or to a fallback position according to which an intermediate procedure is used: the interleaving of the DUs of different layers is reversed according to the additionally sent timing control information, thereby enabling a DU-wise treatment at the decoder's buffer, but with no interleaving of decoding units relating to different layers. Both fallback positions may be present concurrently. Various advantageous embodiments and alternatives are the subject of the claims attached herewith.
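
The intermediate fallback, reversing the cross-layer interleaving so each layer's decoding units become contiguous while DU-wise buffer treatment is preserved, can be sketched as a stable regrouping. The `(layer, payload)` tuple model is a hypothetical simplification of the bitstream.

```python
from collections import defaultdict

def deinterleave_dus(dus):
    """dus: list of (layer_id, du_payload) in interleaved arrival order.
    Regroup DUs so those of the same layer are contiguous (preserving
    per-layer order), enabling DU-wise treatment at the decoder's buffer
    without interleaving of decoding units of different layers."""
    by_layer = defaultdict(list)
    for layer, payload in dus:
        by_layer[layer].append(payload)
    out = []
    for layer in sorted(by_layer):
        out.extend((layer, p) for p in by_layer[layer])
    return out
```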

System and method for compressing video for streaming video game content to remote clients

Methods for hosting online video games are provided. The method includes generating a plurality of video frames and initiating a sending of each one of the plurality of video frames to a client, each video frame being compressed before it is sent. The compression and sending of video frames is stopped when one of the plurality of video frames takes longer than a frame time to compress and send, where a frame time is defined as one over the frame rate, and stopping the compression of video frames includes the encoder ignoring the video frames. The method includes continuing to compress and send audio data to the client when one or more of the plurality of video frames are not sent to the client. The client is configured to display a received video frame for more than one frame time when a video frame is not received.
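
The frame-skipping behavior, where the encoder ignores frames while a slow frame is still being compressed and sent, can be sketched with a simple backlog model. The simulation below is an illustrative assumption; audio continues on a separate path and is not modeled here.

```python
def stream_frames(frames, frame_rate, encode_and_send):
    """frames: sequence of frames; frame_rate: frames per second;
    encode_and_send(frame) -> seconds taken to compress and send it.
    If a frame takes longer than one frame time (1 / frame_rate), the
    encoder ignores subsequent frames until the backlog is worked off.
    Returns the indices of the frames actually sent."""
    frame_time = 1.0 / frame_rate
    sent = []
    backlog = 0.0  # how far behind real time the encoder is
    for i, frame in enumerate(frames):
        if backlog >= frame_time:
            backlog -= frame_time  # encoder ignores this frame
            continue
        elapsed = encode_and_send(frame)
        sent.append(i)
        backlog += max(0.0, elapsed - frame_time)
    return sent
```

A client that misses a frame under this scheme simply keeps displaying the last received frame for more than one frame time, as the abstract describes.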