Patent classifications
H04N19/142
Game application providing scene change hint for encoding at a cloud gaming server
A method for encoding including executing game logic built on a game engine of a video game at a cloud gaming server to generate video frames. The method including executing scene change logic to predict a scene change in the video frames based on game state collected during execution of the game logic. The method including identifying a range of video frames that is predicted to include the scene change. The method including generating a scene change hint using the scene change logic, wherein the scene change hint identifies the range of video frames, wherein the range of video frames includes a first video frame. The method including delivering the first video frame to an encoder. The method including sending the scene change hint from the scene change logic to the encoder. The method including encoding the first video frame as an I-frame based on the scene change hint.
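A minimal sketch of the hint-driven decision the abstract describes: the encoder forces an intra (I) frame at the first frame of a hinted range and otherwise falls back to predicted (P) frames. The names `SceneChangeHint` and `pick_frame_type` are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SceneChangeHint:
    """Identifies a range of frames predicted to contain a scene change."""
    start_frame: int
    end_frame: int  # inclusive

    def covers(self, frame_index: int) -> bool:
        return self.start_frame <= frame_index <= self.end_frame

def pick_frame_type(frame_index: int, hints: list[SceneChangeHint]) -> str:
    """Encode the first frame of a hinted range as an I-frame;
    all other frames default to predicted (P) frames."""
    for hint in hints:
        if frame_index == hint.start_frame:
            return "I"  # scene change predicted: refresh with an intra frame
        if hint.covers(frame_index):
            return "P"
    return "P"
```

In a real pipeline the game-side scene change logic would publish these hints ahead of frame delivery, so the encoder never has to detect the cut from pixel statistics.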
ADAPTIVELY ENCODING VIDEO FRAMES USING CONTENT AND NETWORK ANALYSIS
An example apparatus for adaptively encoding video frames includes a network analyzer to predict an instant bitrate based on channel throughput feedback received from a network. The apparatus also includes a content analyzer to generate ladder info based on a received frame. The apparatus further includes an adaptive decision executer to determine a frame rate, a video resolution, and a target frame size based on the predicted instant bitrate and the ladder info. The apparatus further includes an encoder to encode the frame based on the frame rate, the video resolution, and the target frame size.
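A hedged sketch of the adaptive decision step: pick the highest ladder rung whose minimum bitrate fits the predicted instant bitrate, then derive the per-frame byte budget from bitrate and frame rate. The ladder entries and thresholds are invented for illustration.

```python
# Hypothetical bitrate ladder: (min_bitrate_kbps, width, height, fps)
LADDER = [
    (4500, 1920, 1080, 60),
    (2500, 1280, 720, 60),
    (1200, 1280, 720, 30),
    (600,   854, 480, 30),
]

def adaptive_decision(predicted_bitrate_kbps: float):
    """Return (frame_rate, resolution, target_frame_bytes) for the
    predicted instant bitrate, standing in for the adaptive decision
    executer in the abstract."""
    for min_bitrate, width, height, fps in LADDER:
        if predicted_bitrate_kbps >= min_bitrate:
            break
    else:  # below the lowest rung: use it anyway
        min_bitrate, width, height, fps = LADDER[-1]
    # target frame size in bytes = bitrate / frame rate
    target_frame_bytes = predicted_bitrate_kbps * 1000 / 8 / fps
    return fps, (width, height), target_frame_bytes
```

In the apparatus described above, the content analyzer would refresh the ladder per frame rather than use a static table, but the selection logic is of this shape.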
PROCESSING VIDEO USING MASKING WINDOWS
A first quantization value for encoding at least one frame of a content item may be determined based at least on a predetermined bitrate and a point in the content item associated with a scene change. A first duration associated with a first portion of the content item may be determined. The first portion of the content item may comprise the at least one frame and may be associated with the first quantization value. A second quantization value for encoding at least another frame of the content item may be determined based at least on the predetermined bitrate. A second duration associated with a second portion of the content item may be determined. The second portion of the content item may comprise the at least another frame and may be associated with the second quantization value.
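A hedged sketch of the two-window quantization scheme described above: frames inside a masking window around a scene change get a coarser quantizer (higher QP), since temporal masking hides the extra distortion, while frames outside the window get the finer baseline QP; both are chosen against the same predetermined bitrate. The QP offset and window length are invented for illustration.

```python
def assign_qp(frame_index: int, scene_change_frame: int,
              window_frames: int, base_qp: int):
    """Return (qp, window_label) for a frame given a scene-change point.
    Frames within the masking window after the scene change use a
    coarser quantizer; all others use the baseline value."""
    in_window = (scene_change_frame <= frame_index
                 < scene_change_frame + window_frames)
    if in_window:
        return base_qp + 6, "masking"   # temporal masking hides coarser coding
    return base_qp, "normal"
```

The bits saved inside the masking window can then be redistributed to the second portion of the content while holding the overall bitrate at its predetermined value.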
Machine learning for visual processing
A method for developing an enhancement model for low-quality visual data, the method comprising the steps of receiving one or more sections of higher-quality visual data; and training a hierarchical algorithm. The hierarchical algorithm is operable to increase the quality of one or more sections of lower-quality visual data so as to substantially reproduce the one or more sections of higher-quality visual data. The hierarchical algorithm is then outputted.
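A minimal sketch of the training setup the abstract implies: degrade the higher-quality sections to produce paired lower-quality inputs, then fit a model so its output substantially reproduces the higher-quality data. A 1-D linear "upscaler" trained by gradient descent stands in for the hierarchical algorithm; the degradation (2x decimation) is an assumption for illustration.

```python
import numpy as np

def degrade(section: np.ndarray) -> np.ndarray:
    """Hypothetical quality loss: 2x decimation."""
    return section[::2]

def train_enhancer(high_quality_sections, epochs=200, lr=0.1):
    """Learn a per-sample scale/offset mapping low-quality input back
    toward the high-quality target. Stands in for training the
    hierarchical algorithm in the abstract."""
    w, b = 1.0, 0.0
    for _ in range(epochs):
        for hi in high_quality_sections:
            lo = degrade(hi)
            up = np.repeat(lo, 2)[: len(hi)]      # naive upsample back to size
            err = (w * up + b) - hi               # reconstruction error
            w -= lr * np.mean(err * up)           # gradient step on scale
            b -= lr * np.mean(err)                # gradient step on offset
    return w, b
```

A real enhancement model would of course be a deep hierarchy over image patches, but the pairing of degraded input with the original section as the training target is the core of the method.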
Content adaptive encoding
The described technology is generally directed towards developing an adaptive bitrate stack (ladder) on a per-title basis. Variable bitrate encodings are used to obtain complexity information for a title and per-frame scores for the encodings; another encoding provides scene data. The complexity information is analyzed and processed based on the scene data to determine scene-based (e.g., objective and/or subjective quality) scores, which are used to determine scores for the encodings. The results are used to derive a candidate stack, comprising various resolutions and bitrates that provide desirable results. The candidate stack is evaluated by encoding the title using the candidate stack. These encodings are evaluated to select one resolution from any duplicate resolutions for a bitrate (e.g., based on relative quality), resulting in a pruned, final ladder that is associated with the title as the adaptive bitrate stack to be used for streaming that title's content.
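A hedged sketch of the final pruning step: among candidate encodings that share a bitrate, keep only the resolution with the best quality score, yielding one rung per bitrate. The candidate tuples and score values are invented for illustration.

```python
def prune_ladder(candidates):
    """candidates: list of (bitrate_kbps, resolution, quality_score).
    Returns the final per-title ladder sorted by bitrate, keeping
    the best-scoring resolution for each bitrate."""
    best = {}
    for bitrate, resolution, score in candidates:
        if bitrate not in best or score > best[bitrate][1]:
            best[bitrate] = (resolution, score)
    return [(bitrate, res) for bitrate, (res, _) in sorted(best.items())]
```

Upstream of this step, the scene-based quality scores derived from the complexity analysis would supply the `quality_score` values used to break ties between duplicate resolutions.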