Patent classifications
H04N21/2405
BUFFER DRAIN RATE TUNING TO A MEASURED MAXIMUM RECEIVE BANDWIDTH MEASURED FOR A CLIENT DEVICE WHEN STREAMING
A method for cloud gaming. The method includes generating a plurality of video frames when executing a video game at a cloud gaming server, and encoding the plurality of video frames at an encoder bit rate, wherein the compressed plurality of video frames is transmitted to a client from a streamer of the cloud gaming server. The method further includes measuring a maximum receive bandwidth of the client, monitoring the encoding of the plurality of video frames at the streamer, and dynamically tuning a parameter of the encoder based on the monitoring of the encoding.
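The tuning loop this abstract describes can be sketched as a simple feedback rule: back the encoder bit rate off when it exceeds the measured receive-bandwidth ceiling, and ramp it up otherwise. All names, the headroom factor, and the step size below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of encoder bit rate tuning toward a measured
# maximum receive bandwidth. HEADROOM and step are assumed values.
HEADROOM = 0.9  # leave 10% margin below the measured ceiling

def tune_encoder_bit_rate(current_bit_rate_bps, max_receive_bw_bps,
                          step=0.1):
    """Move the encoder bit rate toward the bandwidth ceiling."""
    target = max_receive_bw_bps * HEADROOM
    if current_bit_rate_bps > target:
        # Encoding faster than the client can receive: back off.
        return max(target, current_bit_rate_bps * (1 - step))
    # Below the ceiling: ramp up gradually toward the target.
    return min(target, current_bit_rate_bps * (1 + step))
```

A multiplicative step like this converges on the ceiling without oscillating past it, since both branches are clamped to the target.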
WORKLOAD-BASED DYNAMIC THROTTLING OF VIDEO PROCESSING FUNCTIONS USING MACHINE LEARNING
Embodiments of the present disclosure relate to workload-based dynamic throttling of video processing functions. Systems and methods are disclosed that dynamically throttle video processing and/or streaming based on a workload. Live video is captured from one or more sources (e.g., cameras) and stored. The video is then provided to a video processing engine and a video streaming engine. The video processing engine may perform one or more operations such as object detection, object tracking, and object classification to produce characterization data (e.g., bounding boxes, object trajectories, alerts, object labels, object counts, boundary crossings, intersection highlighting, etc.). System resource usage and performance of the video processing and streaming are monitored to produce workload data (e.g., metrics). Based on one or more policies and the workload data, the video streaming and/or processing is dynamically reconfigured by adjusting parameters provided to the video streaming and processing engines.
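The policy-driven reconfiguration step can be sketched as a table lookup: given measured workload metrics, select the first policy whose resource ceilings are still satisfied and apply its parameters. The metric names, thresholds, and parameter sets below are illustrative assumptions.

```python
# Minimal sketch of workload-based throttling: policies are ordered
# from least to most throttled; metric names are assumptions.
def throttle(policies, workload):
    """Return parameters of the first policy whose ceilings hold."""
    for policy in policies:
        if (workload["gpu_util"] <= policy["max_gpu_util"]
                and workload["latency_ms"] <= policy["max_latency_ms"]):
            return policy["params"]
    # All ceilings exceeded: fall back to the most conservative policy.
    return policies[-1]["params"]

policies = [
    {"max_gpu_util": 0.7, "max_latency_ms": 50,
     "params": {"fps": 30, "detect_every_n": 1}},
    {"max_gpu_util": 0.9, "max_latency_ms": 100,
     "params": {"fps": 15, "detect_every_n": 2}},
    {"max_gpu_util": 1.0, "max_latency_ms": float("inf"),
     "params": {"fps": 5, "detect_every_n": 8}},
]
```

Under light load the full frame rate and per-frame detection are kept; as GPU utilization or latency climbs, the frame rate drops and detection runs on every Nth frame only.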
Systems and methods for providing transcoded portions of a video
Multiple videos having individual time durations may be obtained, including a first video with a first time duration. The videos may include visual information defined by one or more electronic media files. An initial portion of the first time duration for which the one or more electronic media files are to be transcoded may be determined, including determining whether the first time duration is greater than a predefined threshold and, if the first time duration is greater than the predefined threshold, determining the initial portion to be an initial time duration that is less than the first time duration. One or more transcoded media files may be generated for the initial portion. A request for the first video may be received from a client computing platform. In response to receipt of the request, the one or more transcoded media files may be transmitted to the client computing platform for display.
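The thresholding step above reduces to a small rule: long videos have only an initial portion pre-transcoded, while short ones are transcoded in full. The specific threshold and portion length below are assumptions for illustration.

```python
# Sketch of the initial-portion decision; the constants are assumed,
# not taken from the patent.
THRESHOLD_S = 60.0        # videos longer than this are partially transcoded
INITIAL_PORTION_S = 30.0  # length of the pre-transcoded initial portion

def initial_transcode_portion(duration_s):
    """Return how many seconds to transcode ahead of any request."""
    if duration_s > THRESHOLD_S:
        return min(INITIAL_PORTION_S, duration_s)
    return duration_s  # short videos are transcoded in full
```

The payoff is that playback of the initial portion can start immediately on request while the remainder is transcoded on demand.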
RENDERING VIDEO FRAMES FOR A USER INTERFACE OPERATION PERFORMED AT A CLIENT DEVICE
In some implementations, a device includes one or more processors and a non-transitory memory. In some implementations, a method includes obtaining a request for a sequence of video frames that corresponds to a user interface operation being performed at a client device. In some implementations, the sequence of video frames is to be presented at the client device at a first frame rate. In some implementations, the method includes determining an availability of computing resources associated with providing the sequence of video frames to the client device. In some implementations, the method includes generating, based on the availability of computing resources, the sequence of video frames at a second frame rate that is greater than the first frame rate. In some implementations, the method includes triggering the client device to present the sequence of video frames at the first frame rate.
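The core decision here is to generate frames faster than the presentation rate when spare compute allows, so the client always has frames ready during the UI operation. A minimal sketch, assuming a linear relationship between spare capacity and the generation multiplier (the scaling rule is an assumption):

```python
# Hypothetical sketch: pick a generation frame rate at least as high
# as the presentation rate, scaled by spare compute capacity.
def choose_generation_rate(presentation_fps, available_capacity):
    """available_capacity in [0, 1]: fraction of spare compute.

    Returns a generation rate >= presentation_fps, up to 2x at
    full spare capacity (linear scaling is an assumed policy).
    """
    multiplier = 1.0 + available_capacity
    return presentation_fps * multiplier
```

The client then presents at the first (lower) frame rate, draining the surplus frames as a buffer against generation stalls.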
Coordinator for preloading time-based content selection graphs
The described technology is generally directed towards coordinating the generation, validation and enabling of content selection graphs in an in-memory content selection graph data store. When a set of content selection graphs is requested, a coordinator starts the generation of the relevant graphs. Upon successful generation, the coordinator starts a validation of the generated graphs against rules for the nodes/response data in the graphs. If the generated graphs pass validation, the coordinator enables the graph set for use in an in-memory cache, whereby when a request to return content selection data is received, an active graph that corresponds to the request and the current time is accessed to obtain and return the response data as the requested content selection data.
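The coordinator's generate-validate-enable sequence can be sketched as a three-step pipeline in which the cache is only touched when every generated graph passes validation. The function names and callback shapes below are assumptions for illustration.

```python
# Minimal sketch of the coordinator flow: generate all requested
# graphs, validate each against its rules, enable the set atomically.
def coordinate(graph_ids, generate, validate, enable):
    """Return True iff the whole graph set was validated and enabled."""
    graphs = [generate(gid) for gid in graph_ids]
    if all(validate(g) for g in graphs):
        enable(graphs)  # swap the set into the in-memory cache
        return True
    return False        # cache is left untouched on any failure
```

Enabling the set only after all graphs validate means a request arriving mid-generation still sees the previous consistent graph set.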
METHODS AND SYSTEMS CONFIGURED TO MANAGE VIDEO TRANSCODER LATENCIES
Systems and methods configured to detect and manage video transcoder latencies are described. A manifest is received and is used to request video segments included in a manifest playlist. A transcoder having an input and output is used to transcode video segments. A delta time for a first SCTE-35 marker between the transcoder input and the transcoder output is determined, where the delta time corresponds to a transcoder latency. A determination is made as to whether a corrective action needs to be taken with respect to the latency, and such corrective action is taken as needed. The corrective action may include a transcoder reset. The manifest may be a text file and may be in the form of an HLS or DASH manifest. Additionally, streaming latencies may be reduced by switching content distribution systems, increasing the number of edge systems distributing content to clients, and/or by increasing video cache memory.
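The latency check reduces to comparing the timestamp at which the same SCTE-35 marker is seen at the transcoder input and output, then triggering the corrective action when the delta exceeds a limit. Names and the callback shape are illustrative assumptions.

```python
# Sketch of the SCTE-35 delta-time check; reset_transcoder is an
# assumed callback standing in for the corrective action.
def check_transcoder_latency(marker_in_ts, marker_out_ts,
                             max_latency_s, reset_transcoder):
    """Return (delta, corrected): the measured latency and whether
    a corrective reset was issued."""
    delta = marker_out_ts - marker_in_ts
    if delta > max_latency_s:
        reset_transcoder()   # corrective action, e.g. transcoder reset
        return delta, True
    return delta, False
```

Using an in-band marker that survives transcoding (rather than wall-clock sampling) means the measurement tracks the exact content the transcoder handled.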
SOFTWARE DEFINED CONTENT DELIVERY NETWORK FOR FLEXIBLE, REAL-TIME MANAGEMENT OF LARGE-SCALE DATA TRANSFERS
A method and an associated SDCDN device for delivering data content in a communication network. A software defined content delivery network (SDCDN) monitors one or more performance indicators regarding an exchange of the data content between a first content delivery network (CDN) and at least one client device using a communication channel. The SDCDN determines that at least one performance indicator of the one or more performance indicators exceeds a threshold performance value. The SDCDN identifies a different CDN in operative communication with the at least one client device. The different CDN includes the data content. In response to determining that at least one performance indicator exceeds the threshold performance value, the SDCDN transmits a transfer command to the at least one client device to cause the at least one client device to switch to the different CDN and receive the data content from the different CDN.
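The SDCDN's decision can be sketched as: if any monitored indicator breaches its threshold, pick a candidate CDN that holds the content and issue the transfer command. Indicator names and the callback are assumptions for illustration.

```python
# Minimal sketch of the SDCDN switch decision; has_content stands in
# for the check that the candidate CDN already holds the data content.
def evaluate_switch(indicators, thresholds, candidate_cdns, has_content):
    """Return the CDN to transfer the client to, or None to stay."""
    breached = any(indicators[k] > thresholds[k] for k in thresholds)
    if not breached:
        return None
    for cdn in candidate_cdns:
        if has_content(cdn):
            return cdn  # target of the transfer command to the client
    return None
```

Commanding the client (rather than re-routing server-side) lets the switch happen mid-session without the first CDN's cooperation.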
Intelligent video streaming system
A system for intelligent video streaming includes a video controller having at least one processor and non-transitory computer readable media having a set of instructions executable by the at least one processor to receive a playback request from a user device for a live stream, determine, from the playback request, whether source streaming content for the live stream is being transcoded, and allocate an available transcoder to transcode the source streaming content. The system further includes a transcoding engine having at least one processor and non-transitory computer readable media having a set of instructions executable by the at least one processor to join a multicast stream carrying the source streaming content, retrieve the source streaming content, transcode the source streaming content, and provide transcoded streaming content for delivery to the user device.
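The controller's allocation logic can be sketched as a reuse-or-allocate rule: if the requested stream is already being transcoded, share that output; otherwise take a transcoder from the idle pool. The data structures below are illustrative assumptions.

```python
# Sketch of the controller's transcoder allocation; active_transcodes
# maps stream id -> transcoder, idle_transcoders is the free pool.
def handle_playback_request(stream_id, active_transcodes,
                            idle_transcoders):
    """Return the transcoder serving this stream, allocating one
    from the idle pool only when no transcode is already running."""
    if stream_id in active_transcodes:
        return active_transcodes[stream_id]  # reuse the existing output
    transcoder = idle_transcoders.pop()
    active_transcodes[stream_id] = transcoder
    return transcoder
```

Reuse is what makes this "intelligent": a thousand viewers of one live channel consume a single transcode rather than a thousand.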
DETECTING LATENCY ANOMALIES FROM PIPELINE COMPONENTS IN CLOUD-BASED SYSTEMS
A method, computer readable medium, and system are disclosed for monitoring a pipeline to detect anomalies such as unusual latency associated with a particular stage. Each stage of the pipeline is configured to update metadata associated with content being processed by inserting a time stamp into the metadata when processing of the content is completed by the stage. The server device can collect the metadata from the last stage of the pipeline and analyze the metadata in order to generate metrics for the pipeline, including a residual latency and/or a gain for each stage of the pipeline. In an embodiment, the content is a frame of video to be displayed on a client device after being rendered by a server device, such as through a streaming service (e.g., a video game streaming service). The server device can adjust the pipeline based on the metrics to improve performance.
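Since each stage appends a completion timestamp to the content's metadata, the server can recover per-stage latency from the metadata collected at the last stage by differencing consecutive timestamps. A minimal sketch (the metadata shape is an assumption):

```python
# Sketch of per-stage latency extraction from pipeline metadata:
# an ordered list of (stage_name, completion_timestamp) pairs.
def stage_latencies(timestamps):
    """Return {stage: time spent in that stage} by differencing
    consecutive completion timestamps."""
    latencies = {}
    prev = timestamps[0][1]
    for stage, ts in timestamps[1:]:
        latencies[stage] = ts - prev
        prev = ts
    return latencies
```

A stage whose latency suddenly dominates the others is the anomaly candidate, and the pipeline can be adjusted around it.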