Patent classifications
H04N21/440281
Systems and methods for bandwidth-limited video transport
Systems and methods for bandwidth-limited video transport are configured to receive (or otherwise discern) a selection of video parameter limits that correspond to a bandwidth limit and apply the video parameter limits to an input video stream to enforce the bandwidth limit while preserving video quality. Methods may include adjusting the video stream one parameter at a time until the adjusted video stream meets the bandwidth limit. Parameters to be adjusted may include image resolution, frame rate, image compression, color depth, bits per pixel, and/or color encoding. In some embodiments, the image resolution is reduced first, the frame rate is reduced next, and the image compression is increased last. The extent and/or order of the adjustments of the parameters may be selected by the user, based on the content of the video stream, and/or based on the bandwidth limit.
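The one-parameter-at-a-time adjustment described above can be sketched as follows. The bitrate model, parameter names, and halving/doubling steps are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: reduce resolution first, then frame rate, then raise
# compression, stopping as soon as the estimated bitrate meets the limit.

def estimate_bitrate(params):
    """Rough bitrate estimate (bits/s): pixels * fps * bits-per-pixel / compression ratio."""
    w, h = params["resolution"]
    return w * h * params["fps"] * params["bpp"] / params["compression"]

def enforce_bandwidth_limit(params, limit_bps):
    """Apply adjustments in the order given in the abstract, one at a time."""
    adjustments = [
        ("resolution",  lambda p: (p["resolution"][0] // 2, p["resolution"][1] // 2)),
        ("fps",         lambda p: p["fps"] / 2),
        ("compression", lambda p: p["compression"] * 2),
    ]
    for key, adjust in adjustments:
        if estimate_bitrate(params) <= limit_bps:
            break  # limit already met; leave remaining parameters untouched
        params = {**params, key: adjust(params)}
    return params
```

In this sketch a 1080p/30fps stream over an 8 Mbps limit is brought into budget by the resolution step alone, leaving frame rate and compression unchanged, which matches the preference order stated in the abstract.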
METHOD FOR VIDEO PROCESSING, AN ELECTRONIC DEVICE FOR VIDEO PLAYBACK AND A VIDEO PLAYBACK SYSTEM
The present disclosure relates to a video processing method, an electronic device for video playback, and a video playback system. The video processing method comprises: receiving an input video comprising a plurality of sections, and metadata associated with the input video, at least two sections of the plurality of sections having different frame rates; and performing real-time processing on the input video according to the metadata, so as to output, in real time, an output video having a constant target frame rate, wherein the metadata includes information indicating the frame rates of the various sections of the input video, or the metadata includes information indicating the frame rate of each section of the input video and at least one of the following: information indicating a target frame rate, and information indicating a processing operation to be used for performing the real-time processing on the input video.
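A minimal sketch of the metadata-driven resampling, assuming the metadata carries per-section frame rates and a target rate (the field names and the nearest-frame policy are invented here for illustration):

```python
# Illustrative sketch: resample each section of the input video to the target
# frame rate named in the metadata by repeating or dropping frames.

def resample_section(frames, section_fps, target_fps):
    """Map each output slot to the nearest earlier source frame by timestamp."""
    duration = len(frames) / section_fps
    n_out = round(duration * target_fps)
    return [frames[min(int(i * section_fps / target_fps), len(frames) - 1)]
            for i in range(n_out)]

def to_constant_rate(sections, metadata):
    """Concatenate all sections after resampling each to the target rate."""
    target_fps = metadata["target_fps"]
    out = []
    for frames, fps in zip(sections, metadata["section_fps"]):
        out.extend(resample_section(frames, fps, target_fps))
    return out
```

A 30 fps section feeding a 60 fps output simply repeats each frame twice, while a section already at 60 fps passes through unchanged.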
TRANSMISSION DEVICE, TRANSMISSION METHOD, RECEPTION DEVICE, AND RECEPTION METHOD
Image data of pictures constituting moving image data is encoded to generate an encoded video stream. In this case, the image data of the pictures constituting the moving image data is classified into a plurality of levels and encoded to generate a video stream having the image data of the pictures at the respective levels. The hierarchical composition is equalized between a low-level side and a high-level side, and corresponding pictures on the low-level side and the high-level side are combined into one set and encoded sequentially. This allows a reception side to decode the encoded image data of the pictures on the low-level side and the high-level side with a smaller buffer size and a reduced decoding delay.
Techniques for enabling ultra-high definition alliance specified reference mode (UHDA-SRM)
Techniques for enabling the display of video content in a specified display mode, such as the Ultra-High Definition Alliance Specified Reference Mode (UHDA-SRM). A video source device receives video content as a bitstream in one format that includes a specification of a display mode for the video content. The video source also receives information from a display device or other video sink on the display modes that the sink supports. If the display device supports the specified display mode, the video source provides the video content to the display in a second format, such as HDMI, as a series of frames with the specification of the display mode embedded in a blanking interval of each frame.
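The sink-capability check at the heart of this scheme can be sketched in a few lines; the function name, mode strings, and fallback policy are assumptions for illustration, not part of the UHDA-SRM specification:

```python
# Hypothetical sketch: output in the content's specified display mode only
# when the sink (display) advertises support for it; otherwise fall back.

def choose_output_mode(content_mode, sink_supported_modes, fallback="standard"):
    """Return the mode to embed in the output frames' blanking intervals."""
    if content_mode in sink_supported_modes:
        return content_mode
    return fallback
```
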
Automated graphical image modification scaling based on rules
Aspects of the present disclosure involve systems and methods for performing operations comprising: receiving, with a messaging application, user input to access a graphical image modification feature of the messaging application; in response to receiving the user input, causing display of a video; accessing a first configuration rule of a plurality of configuration rules that associates a first device property rule with the graphical image modification feature of the messaging application; determining that the first configuration rule is satisfied by a first property of the client device; and in response to determining that the first configuration rule is satisfied by the first property of the client device, causing display of a first plurality of graphical image modification options, each associated with performing a different modification to the video.
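One way to picture the rule evaluation is below; the rule schema, the `gpu_tier` property, and the option names are hypothetical stand-ins for whatever device properties and modification features an implementation would use:

```python
# Hypothetical sketch: each configuration rule pairs a device-property
# predicate with the image-modification options it unlocks; the first
# satisfied rule determines which options are displayed.

CONFIG_RULES = [
    {"property": "gpu_tier", "minimum": 2, "options": ["3d_mask", "face_morph"]},
    {"property": "gpu_tier", "minimum": 1, "options": ["color_filter", "sticker"]},
]

def options_for_device(device, rules=CONFIG_RULES):
    """Return the modification options from the first rule the device satisfies."""
    for rule in rules:
        if device.get(rule["property"], 0) >= rule["minimum"]:
            return rule["options"]
    return []  # no rule satisfied: no modification options shown
```
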
PIPELINED VIDEO INTERFACE FOR REMOTE CONTROLLED AERIAL VEHICLE WITH CAMERA
The present teachings provide a system and method that include: receiving images or video frames at a wireless receiver interface from a wireless transmitter; performing decoder nudging while decoding the images or video frames received from the wireless transmitter; overclocking a display of a controller to an overclocked frequency; and outputting decoded images or decoded video frames to the display of the controller at the overclocked frequency.
Audio and Video Data Processing Method, Live Streaming Apparatus, Electronic Device, and Storage Medium
Disclosed are an audio and video data processing method, a live streaming apparatus, an electronic device, and a storage medium. A media stream is acquired; the difference between the current media frame timestamp and the previous media frame timestamp in the media stream is computed, and an allowed upper and lower limit range for that difference is acquired. The current media frame timestamp is output unchanged when the difference is within the range; a standard media frame interval of the media stream is used instead if the difference is not within the range. This solves the problem of abnormal playback in a player caused by non-uniform audio and video frame timestamps. The accumulated error is also balanced through forward compensation and reverse compensation, preventing it from growing over time.
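The interval check plus bounded error compensation can be sketched as follows; the function, the per-frame nudge bound, and the millisecond values are illustrative assumptions, not the patented scheme:

```python
# Illustrative sketch: replace abnormal inter-frame intervals with the
# standard interval, track the accumulated error between input and output
# timelines, and bleed it off a bounded amount per frame (forward/reverse
# compensation) so it cannot grow without bound.

def smooth_timestamps(timestamps, standard_interval, lower, upper, max_nudge=2):
    out = [timestamps[0]]
    error = 0  # input timeline minus output timeline
    for prev, ts in zip(timestamps, timestamps[1:]):
        diff = ts - prev
        if not (lower <= diff <= upper):
            diff = standard_interval  # abnormal jump: use the standard interval
        # compensate a bounded slice of the accumulated error each frame
        nudge = max(-max_nudge, min(max_nudge, error))
        out.append(out[-1] + diff + nudge)
        error = ts - out[-1]
    return out
```

With 33 ms frames, a spurious 1000 ms jump followed by a backward jump is flattened to near-standard intervals while the output stays monotonic.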
TECHNIQUES FOR GENERATION OF A CONFORMANT OUTPUT SUB-BITSTREAM
Examples of video encoding methods and apparatus and video decoding methods and apparatus are described. An example method of video processing includes performing a conversion between a video including multiple layers and a bitstream of the video according to a rule, wherein the rule specifies that, in a first process of sub-bitstream extraction to output a first output sub-bitstream, the first output sub-bitstream is extracted without removing network abstraction layer (NAL) units of a particular type and having a particular NAL unit header identifier value, and wherein the particular type includes an access unit delimiter (AUD) NAL unit.
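The extraction rule reads as a filter over NAL units that exempts AUD units from removal. A minimal sketch, with NAL units modeled as simple tuples rather than real bitstream syntax:

```python
# Hypothetical sketch: drop NAL units whose layer identifier is outside the
# target layer set, but never remove AUD NAL units regardless of their
# NAL unit header identifier value, per the rule described above.

AUD = "AUD"  # stand-in for the access unit delimiter NAL unit type

def extract_sub_bitstream(nal_units, target_layers):
    """Each unit is (nal_type, layer_id, payload)."""
    return [unit for unit in nal_units
            if unit[0] == AUD or unit[1] in target_layers]
```
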
SYSTEM AND METHOD FOR OPTIMIZING VIDEO COMMUNICATIONS BASED ON DEVICE CAPABILITIES
A system and method for optimizing video for transmission on a device includes, in one example, capturing an original video frame and scaling the original video frame down to a lower resolution video frame. The lower resolution video frame is encoded using a first encoder to produce a first layer output, and the first layer output is decoded. The decoded first layer output is upscaled to match the resolution of the original video frame. A difference is obtained between the upscaled decoded first layer output and the original video frame. The difference is independently encoded using a second encoder to create a second layer output. The first and second layer outputs may be stored or sent to another device.
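The two-layer structure can be demonstrated with placeholder "codecs": naive 2x decimation stands in for the first encoder and nearest-neighbor upscaling for its decoder. Real encoders would be lossy; this sketch only shows the layering arithmetic:

```python
# Illustrative sketch of the scheme above: a downscaled base layer plus an
# independently encoded residual (difference) layer, recombined at the receiver.
import numpy as np

def downscale(frame):
    return frame[::2, ::2]  # naive 2x decimation (placeholder for an encoder)

def upscale(frame):
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)  # nearest-neighbor

def encode_two_layers(frame):
    base = downscale(frame)            # first-layer encode
    residual = frame - upscale(base)   # difference vs decoded, upscaled base
    return base, residual

def decode_two_layers(base, residual):
    return upscale(base) + residual    # receiver recombines the two layers
```

Because the placeholder codecs are lossless stand-ins, the round trip reconstructs the original frame exactly; with real encoders the residual layer carries the detail the base layer loses.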
Method, device and apparatus for adding video special effects and storage medium
Provided are a method, apparatus, and device for adding a video special effect, and a storage medium. The method includes: acquiring a source video sequence and at least one special effect video sequence; in the case where the frame rates of the two or more special effect video sequences are the same, inserting frames into the source video sequence and superimposing the two or more special effect video sequences on the source video sequence at the same time; and in the case where the frame rates of the two or more special effect video sequences are different, determining a target frame rate from the frame rates of the two or more special effect video sequences, inserting frames into the source video sequence, and then superimposing the two or more special effect video sequences on the source video sequence.
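The target-rate selection and frame insertion steps can be sketched as below. The choice of the highest effect rate as the target is one plausible policy, and the frame-repetition scheme is an assumption; the abstract does not specify either:

```python
# Illustrative sketch: pick a target frame rate when effect sequences have
# differing rates, then pad the source sequence by frame repetition so the
# effects can be superimposed on a matching timeline.

def target_frame_rate(effect_rates):
    """One possible policy: use the highest effect frame rate as the target."""
    return max(effect_rates)

def insert_frames(source_frames, source_fps, target_fps):
    """Repeat source frames so the source plays at the target frame rate."""
    factor = target_fps / source_fps
    n_out = round(len(source_frames) * factor)
    return [source_frames[int(i / factor)] for i in range(n_out)]
```
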