Patent classifications
H04N19/40
FRAGMENT-ALIGNED AUDIO CODING
Audio/video synchronization, and alignment of audio to some other external clock, are made more effective or easier by treating the fragment grid and the frame grid as independent, while nevertheless aligning the frame grid to each fragment's beginning. The loss in compression effectiveness may be kept low by appropriately selecting the fragment size. At the same time, the alignment of the frame grid with the fragments' beginnings allows the fragments to be handled in an easy, fragment-synchronized way in connection with, for example, parallel audio/video streaming, bitrate-adaptive streaming, or the like.
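The key idea — an independent fragment grid with the frame grid re-anchored at every fragment boundary — can be sketched as below. The function name, sample counts, and the trimming remark are illustrative assumptions, not details from the abstract.

```python
def frame_starts(fragment_len, frame_len):
    """Illustrative sketch: the frame grid restarts at each fragment
    boundary, so frame start offsets are computed per fragment rather
    than from a single global grid. The final frame of a fragment may
    extend past the fragment end and would be handled by the encoder.
    """
    return list(range(0, fragment_len, frame_len))

# Fragments of 4096 samples with 1024-sample frames: the same frame
# grid applies inside every fragment, regardless of global position.
print(frame_starts(4096, 1024))  # [0, 1024, 2048, 3072]
```

Because each fragment carries a self-aligned frame grid, a fragment can be decoded or swapped (e.g., for bitrate adaptation) without reference to the frame timing of neighbouring fragments.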
METHOD AND IMAGE PROCESSING DEVICE FOR ENCODING A VIDEO
A method and an image processing device for encoding a video comprising a sequence of image frames captured between a first and a second time are disclosed. The method comprises encoding a subset of the image frames, the image frames of the subset being distributed over the sequence, and storing the remaining image frames of the sequence. After the second time, the encoded subset is decoded, and the stored remaining image frames as well as the decoded subset are encoded to generate the encoded video. Alternatively, the stored remaining image frames are encoded and the encoded subset is added to generate the encoded video.
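The two-phase workflow can be sketched as follows. The stride, the toy `encode` callable, and the dictionary bookkeeping are illustrative assumptions; the sketch models the second alternative, in which stored frames are encoded afterward and merged with the already-encoded subset.

```python
def encode_video(frames, subset_stride=4, encode=lambda f: f"enc({f})"):
    """Illustrative two-phase encoding sketch.

    Phase 1 (between the first and second time): encode a subset of
    frames distributed over the sequence; store the rest unencoded.
    Phase 2 (after the second time): encode the stored frames and add
    the already-encoded subset to produce the final encoded video.
    """
    encoded_subset = {i: encode(f) for i, f in enumerate(frames)
                      if i % subset_stride == 0}
    stored = {i: f for i, f in enumerate(frames)
              if i % subset_stride != 0}
    # Phase 2: encode the stored remainder and merge in order.
    merged = {**encoded_subset,
              **{i: encode(f) for i, f in stored.items()}}
    return [merged[i] for i in sorted(merged)]
```

Spreading the phase-1 subset over the sequence keeps live-capture cost low while leaving enough anchor material to finish the encode after capture ends.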
METHOD FOR MANAGING ENCODING OF MULTIMEDIA CONTENT AND APPARATUS FOR IMPLEMENTING THE SAME
A method for managing encoding of multimedia content stored in a file is proposed, which comprises: determining, using a supervised learning algorithm, a prediction of processing resources required for encoding the multimedia content, based on one or more multimedia content characteristics of the multimedia content and on one or more multimedia content encoding parameters for encoding the multimedia content; and determining a processing configuration for encoding the multimedia content based on the prediction of processing resources.
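The abstract does not specify which supervised learning algorithm is used, so the sketch below stands in a nearest-neighbour predictor trained on past encodes; the feature layout, the CPU-seconds target, and the per-worker budget are all illustrative assumptions.

```python
import math

def predict_cpu_seconds(features, training_set):
    """Illustrative supervised prediction of processing resources.

    features: a tuple of content characteristics and encoding
    parameters, e.g. (duration_s, pixels_per_frame, bitrate_kbps).
    training_set: list of (features, observed_cpu_seconds) pairs
    from previously measured encodes.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Nearest-neighbour stand-in for the (unspecified) trained model.
    _, cpu_seconds = min(training_set,
                         key=lambda row: sq_dist(row[0], features))
    return cpu_seconds

def choose_configuration(cpu_seconds, budget_per_worker=600.0):
    # Derive a processing configuration from the prediction: here,
    # the number of parallel workers needed to meet a time budget.
    return max(1, math.ceil(cpu_seconds / budget_per_worker))
```

In use, the predicted resource figure feeds directly into the configuration decision, e.g. `choose_configuration(predict_cpu_seconds(f, history))`.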
DISTRIBUTING COMPRESSED VIDEO FRAMES IN A VIDEO CONFERENCE
One disclosed example method includes receiving, by a video conference provider, video frames from a plurality of existing participants in a video conference; receiving, by the video conference provider, a request from a new user to join the video conference, and in response: generating, by the video conference provider, an instantaneous decoder refresh (IDR) frame; determining, by the video conference provider, one or more prior video frames previously acknowledged by each existing participant of the plurality of existing participants; generating, by the video conference provider, a benchmark frame for each of the plurality of existing participants based on at least one of the determined one or more prior video frames and the IDR frame; transmitting, by the video conference provider, the IDR frame to the new user; and transmitting, by the video conference provider, a message comprising the benchmark frame to each of the plurality of existing participants.
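The per-participant fan-out in the method above can be sketched as a planning step: the new user receives the IDR frame, while each existing participant receives a benchmark message built from its own last-acknowledged frame and the IDR. The data shapes and key names below are illustrative assumptions.

```python
def plan_frame_distribution(acked_by_participant, idr_id):
    """Illustrative sketch of the distribution plan.

    acked_by_participant: {participant_id: last frame id that the
    participant has acknowledged} for the existing participants.
    idr_id: the freshly generated IDR frame.
    """
    plan = {"new_user": {"send": "idr", "idr": idr_id}}
    for participant, acked_frame in acked_by_participant.items():
        # Each benchmark frame is based on a frame the participant is
        # known to hold, plus the IDR, so no one needs a full refresh.
        plan[participant] = {"send": "benchmark",
                             "base_frame": acked_frame,
                             "idr": idr_id}
    return plan
```

Keying the benchmark on frames each participant has already acknowledged lets existing participants stay in sync without the bandwidth cost of resending a full IDR to everyone.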
TRANSCODING TECHNIQUES FOR ALTERNATE DISPLAYS
Video coding techniques are disclosed for resource-limited destination display devices. Input video data may be coded by converting a first representation of the input video to a resolution of a destination display and base layer coding the converted representation. Additionally, a region of interest (ROI) may be predicted from within the input video. The predicted ROI may be converted to a resolution of the destination display, and the converted ROI may be enhancement layer coded. The base layer coded data and the enhancement layer data may be transmitted to the destination display, where the coded base layer data is decoded and displayed until a zoom event occurs. When a zoom event occurs, both the coded base layer data and the coded enhancement layer data may be decoded and displayed. Thus, the switchover from a first field of view to an ROI view may be performed quickly.
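The display-side decode decision reduces to a small state switch, sketched below; the function and layer names are illustrative assumptions.

```python
def layers_to_decode(zoom_event_active):
    """Illustrative sketch of the destination display's decode plan.

    Base layer: the full field of view, already converted to the
    display's resolution. Enhancement layer: the predicted ROI at
    display resolution. Until a zoom event, only the base layer is
    decoded; after one, both layers are decoded so the ROI view can
    be shown immediately.
    """
    if zoom_event_active:
        return ("base", "enhancement")
    return ("base",)
```

Because the enhancement-layer ROI data is transmitted ahead of any zoom event, the switchover is a local decode-path change rather than a round trip back to the encoder.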
REDUCTION OF STARTUP TIME IN REMOTE HLS
A method is provided for streaming transcoded HLS video from a video asset with minimal startup delay. The method includes pre-transcoding a first number of the HLS chunks. Then, once a request for the HLS video asset is received from a remote HLS client, a number of the pre-transcoded chunks are transmitted to the remote HLS player. The pre-transcoded chunks are transmitted during a startup period until real-time transcoded chunks can be received and processed by the remote HLS player at a time position that allows a seamless transition from the pre-transcoded chunks.
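The handover from pre-transcoded to real-time transcoded chunks can be sketched as a playlist decision per chunk index. The function name and the "gap" marker for an under-provisioned startup window are illustrative assumptions.

```python
def chunk_sources(pre_count, live_start, total):
    """Illustrative sketch of chunk sourcing during startup.

    pre_count: number of chunks that were pre-transcoded.
    live_start: first chunk index the real-time transcoder delivers.
    Serve pre-transcoded chunks during startup, then hand over to the
    real-time transcoder at its time position; a "gap" marks a stall
    where too few chunks were pre-transcoded to bridge the startup.
    """
    sources = []
    for i in range(total):
        if i < min(pre_count, live_start):
            sources.append(("pre", i))
        elif i >= live_start:
            sources.append(("live", i))
        else:
            sources.append(("gap", i))
    return sources
```

Choosing `pre_count >= live_start` is the seamless case: every chunk up to the real-time transcoder's entry point is covered, so playback never stalls.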
Method and system for wireless video transmission via different interfaces
A method and system are provided for wireless transmission of audio/video (AV) information between AV devices using different wired AV interface formats. The method includes receiving AV information from a first AV module via a first wired AV interface in a first AV device, applying interface-dependent processing to the AV information, and transmitting the processed AV information from a wireless transceiver over a wireless channel to a wireless receiver of a second AV device. The second AV device includes a second wired AV interface, and the first AV interface is of a different type than the second AV interface.
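The interface-dependent processing step amounts to dispatching on the source interface type before handing the payload to the wireless transceiver. The interface names, the processing labels, and the handler table below are purely illustrative placeholders; the abstract does not name specific interfaces or processing operations.

```python
def process_for_wireless(av_payload, source_interface):
    """Illustrative dispatch for interface-dependent processing.

    Each wired AV interface type gets its own pre-transmission
    processing; the labels here are hypothetical stand-ins for the
    real format-specific operations.
    """
    handlers = {
        "interface-A": lambda p: ("processed-A", p),
        "interface-B": lambda p: ("processed-B", p),
    }
    try:
        return handlers[source_interface](av_payload)
    except KeyError:
        raise ValueError(f"unsupported wired AV interface: {source_interface}")
```

Because the processing is keyed to the source interface, the receiving device's (different) wired interface can be served by the complementary processing on its side of the wireless link.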