Classification: H04L47/801

APPARATUS, SYSTEM, AND METHOD FOR MULTI-BITRATE CONTENT STREAMING

An apparatus for multi-bitrate content streaming includes a receiving module configured to capture media content, a streamlet module configured to segment the media content and generate a plurality of streamlets, and an encoding module configured to generate a set of streamlets. The system includes the apparatus, wherein the set of streamlets comprises a plurality of streamlets having identical time indices and durations, and each streamlet of the set of streamlets having a unique bitrate, and wherein the encoding module comprises a master module configured to assign an encoding job to one of a plurality of host computing modules in response to an encoding job completion bid. A method includes receiving media content, segmenting the media content and generating a plurality of streamlets, and generating a set of streamlets.
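The abstract's bid-based job assignment and fixed-duration streamlet sets can be sketched as follows. All names and the lowest-bid-wins policy are illustrative assumptions; the abstract only says a job is assigned in response to a completion bid.

```python
def assign_job(bids):
    """Pick the host whose completion bid is lowest.

    bids: dict mapping host id -> estimated completion time (seconds).
    Returns the winning host id (assumed policy: lowest bid wins).
    """
    return min(bids, key=bids.get)


def make_streamlet_set(streamlet, bitrates):
    """Encode one source streamlet at several bitrates; every copy shares
    the same time index and duration, differing only in bitrate."""
    return [
        {"index": streamlet["index"],
         "duration": streamlet["duration"],
         "bitrate": b}
        for b in bitrates
    ]
```

For example, `assign_job({"hostA": 4.2, "hostB": 3.1})` would hand the job to `"hostB"`, and `make_streamlet_set` yields one streamlet per target bitrate for the same time slice.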

COMMUNICATION METHOD AND APPARATUS
20230075078 · 2023-03-09 ·

A communication method and apparatus are provided for transmitting packets of a data stream between user equipments. A user plane function (UPF) receives a first packet from a first user equipment and forwards it to a second user equipment at a first moment; a first packet that arrives at the UPF before the first moment is held and not forwarded to the second user equipment until that moment. This supports deterministic sending and ensures that a time sensitive communication (TSC) packet is sent at a determined moment, providing deterministic delay assurance for applications such as industrial control and telemedicine.
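The hold-until-the-scheduled-moment rule reduces to a one-line sketch (an assumed simplification; the abstract only specifies behavior for early arrivals):

```python
def forwarding_time(arrival, scheduled_moment):
    """Instant at which the UPF forwards a packet: never earlier than its
    scheduled send moment, even if it arrived early. Late arrivals are
    forwarded on arrival (an assumption not stated in the abstract)."""
    return max(arrival, scheduled_moment)
```

A packet arriving at t=1.0 with a scheduled moment of t=2.0 is thus released at exactly t=2.0, which is what makes the delay deterministic.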

METHOD FOR SWITCHING WORKFLOW OR UPDATING WORKFLOW WITH CONTINUITY AND NO INTERRUPTION IN DATAFLOW
20220337530 · 2022-10-20

Systems and methods for managing a Network Based Media Processing (NBMP) workflow are provided. A method includes obtaining a first network based media processing (NBMP) workflow description document (WDD); creating a first workflow corresponding to the first NBMP WDD; managing at least one media processing entity (MPE) according to the first workflow; obtaining an update to the first NBMP WDD, the update comprising a second NBMP WDD, wherein the second NBMP WDD includes a continuity flag indicating that a second workflow corresponding to the second NBMP WDD is a continuation of the first workflow; creating the second workflow based on the second NBMP WDD; and in response to creating the second workflow, managing the at least one MPE according to the second workflow.
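A minimal sketch of the continuity-flag handling described above. The field names (`continuity_flag`, `continues`, etc.) are assumptions for illustration, not the NBMP WDD schema:

```python
class WorkflowManager:
    """Toy workflow manager: applies WDDs and tracks which workflow the
    media processing entity (MPE) is currently managed under."""

    def __init__(self):
        self.workflow = None        # currently active workflow
        self.mpe_assignment = None  # workflow id the MPE is managed under

    def apply_wdd(self, wdd):
        """Create a workflow from a WDD. If the WDD carries a continuity
        flag, record the new workflow as a continuation of the current
        one, then switch the MPE over to it."""
        new_wf = {"id": wdd["id"], "tasks": wdd["tasks"]}
        if wdd.get("continuity_flag") and self.workflow is not None:
            new_wf["continues"] = self.workflow["id"]
        self.workflow = new_wf
        self.mpe_assignment = new_wf["id"]
        return new_wf
```

Applying a second WDD with the continuity flag set thus links the new workflow back to the first and re-points the MPE without treating it as an unrelated dataflow.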

Distributed Data Transmission Method, Apparatus, and System
20230126759 · 2023-04-27

This application provides a distributed data transmission method, apparatus, and system. When sending RTP data to output devices, an input device determines a delay time for each of a first output device and a second output device based on a device delay control list, and sends the RTP data to the two devices according to those delay times, so that the times at which the first output device and the second output device output the data are the same. Because the output times are controlled in this way, the two output devices can output the data simultaneously.
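The alignment idea can be sketched as computing per-device send offsets from the delay control list so that every device's send offset plus its own delay lands on the same instant (the offset policy is an assumed reading of the abstract):

```python
def send_offsets(delays):
    """delays: dict device -> output delay from the device delay control
    list. Returns per-device send offsets chosen so that
    offset + delay is identical for every device."""
    latest = max(delays.values())
    return {dev: latest - d for dev, d in delays.items()}
```

For instance, with delays of 30 ms and 10 ms, the slower device is sent to immediately and the faster one 20 ms later, so both output at t = 30 ms.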

RESOURCE BUNDLE FOR TIME SENSITIVE NETWORKING BRIDGE
20230075864 · 2023-03-09

Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a first user equipment (UE) may transmit a request for a network device to establish a resource bundle, of resources to be used by nodes at hops in a time sensitive networking (TSN) bridge for TSN communications to a second UE, with a maximum latency for the TSN bridge. The first UE may transmit the TSN communications to the second UE via the TSN bridge. Numerous other aspects are described.
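One simplified reading of the resource-bundle constraint is a budget check over the per-hop resources of the TSN bridge: the latencies contributed by the nodes at each hop must fit within the requested maximum latency. This additive model is an assumption; the abstract does not specify how the maximum latency is apportioned.

```python
def bundle_within_budget(hop_latencies, max_latency):
    """True if the summed per-hop latencies through the TSN bridge stay
    within the maximum latency requested for the resource bundle."""
    return sum(hop_latencies) <= max_latency
```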

Admission control of a communication session

Aspects of the disclosure relate to admission control of a communication session in a network. The admission control can be implemented by a network node at the boundary of the network or a subsystem thereof. In one aspect, the admission control can be implemented during a predetermined period and can be based at least on an admission criterion, which can be specific to an end-point device, e.g., a target device or an origination device. The admission criterion can be configurable and, in certain implementations, can be obtained from historical performance associated with the establishment of communication sessions. Such historical performance can be assessed within a period of a configurable span.
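A history-driven criterion of this kind might look like the sketch below: admit only if the end-point's recent establishment success rate, assessed over a configurable window, meets a configurable threshold. The success-rate metric and the default-admit behavior for empty history are assumptions for illustration.

```python
def admit(history, window, threshold):
    """Decide admission from historical establishment performance.

    history: list of (timestamp, succeeded) establishment attempts,
             newest last.
    window:  configurable span; only attempts within `window` of the
             newest attempt are considered.
    threshold: configurable minimum success rate.
    """
    if not history:
        return True  # no history yet: default-admit (assumed policy)
    newest = history[-1][0]
    recent = [ok for t, ok in history if newest - t <= window]
    rate = sum(recent) / len(recent)
    return rate >= threshold
```

Raising the threshold makes the boundary node stricter toward end-points with a poor recent establishment record.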

System and method for dynamic physical resource block allocation across networks using a virtualization layer

Aspects herein provide a system that utilizes a virtualization layer at a distribution unit or central unit in a network. The virtualization layer includes a plurality of virtual resource blocks that are pooled together as a resource for both a public portion of the network and one or more private portions of the network. Based on loading monitoring of the different portions of the network, the plurality of virtual resource blocks in the pool can be dynamically reallocated between the public and private networks to accommodate and optimize loading. The plurality of virtual resource blocks are mapped to physical resource blocks for scheduling and utilization based on the reallocation.
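The load-driven reallocation could be sketched as a proportional split of the pooled virtual resource blocks; the proportional policy is an assumed example, since the abstract only states that reallocation follows load monitoring.

```python
def reallocate(pool_size, public_load, private_load):
    """Split a pool of virtual resource blocks between the public and
    private portions in proportion to their monitored loads (even split
    when both loads are zero; an assumed fallback)."""
    total = public_load + private_load
    if total == 0:
        public = pool_size // 2
    else:
        public = round(pool_size * public_load / total)
    return {"public": public, "private": pool_size - public}
```

The resulting virtual-block counts would then be mapped onto physical resource blocks for scheduling.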

METHOD AND SERVER FOR ADJUSTING ALLOCATION OF COMPUTING RESOURCES TO PLURALITY OF VIRTUALIZED NETWORK FUNCTIONS (VNFs)
20220329539 · 2022-10-13

A method, performed by a server, of adjusting allocation of computing resources to a plurality of virtualized network functions (VNFs), and the server are provided. The method includes: for processing at least one task related to user equipments (UEs) connected to the server, identifying a plurality of VNFs related to the task; obtaining predicted traffic expected to be generated in the server by processing the task via the plurality of VNFs; obtaining, from at least one associated server, status information of computing resources in the at least one associated server; and adjusting allocation of computing resources to the plurality of VNFs based on the status information of the computing resources in the at least one associated server and the predicted traffic, wherein the at least one associated server includes another server that processes the task.
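A minimal sketch of the adjustment step: share out the capacity available to the server (after accounting for what associated servers report) across the identified VNFs in proportion to their predicted traffic. The proportional rule is an illustration, not the claimed method.

```python
def adjust_allocation(predicted_traffic, capacity):
    """Allocate compute units to VNFs in proportion to predicted traffic.

    predicted_traffic: dict vnf name -> predicted load for the task.
    capacity: compute units available, e.g. after subtracting what the
              status information of associated servers already covers.
    """
    total = sum(predicted_traffic.values())
    if total == 0:
        return {vnf: 0.0 for vnf in predicted_traffic}
    return {vnf: capacity * load / total
            for vnf, load in predicted_traffic.items()}
```

By construction the per-VNF shares sum to the available capacity, so the adjustment never over-commits the server.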

Low delay networks for interactive applications including gaming

Aspects of the subject disclosure may include, for example, determining delay data for respective network delays through a communication network from respective gaming stations to a gaming provider database for implementing a multi-user online video game, determining available cloud nodes in a potential path in the communication network between a respective gaming station and the gaming provider database, determining potential network configurations for data communication between the respective gaming stations and the gaming provider database using the available cloud nodes and available communication links, identifying an optimum configuration for data communication between the respective gaming stations and the gaming provider database, wherein the optimum configuration provides a minimum fair delay for the respective gaming stations, and configuring the communication network according to the optimum configuration. Other embodiments are disclosed.
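One plausible reading of "minimum fair delay" is a min-max criterion: among the candidate network configurations, choose the one whose worst-case station delay is smallest. This interpretation is an assumption; the abstract does not pin down the fairness metric.

```python
def pick_configuration(configs):
    """configs: dict config name -> list of per-station delays for the
    paths between gaming stations and the gaming provider database.
    Returns the configuration minimizing the maximum station delay."""
    return min(configs, key=lambda name: max(configs[name]))
```

For example, a configuration with delays [25, 30] beats one with [10, 40]: its best station is slower, but no station is left far behind.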