Patent classifications
H04L12/875
HANDLING SOURCE ROUTED CONTENT
Methods for handling source-routed content are provided, together with apparatuses for performing the methods. A method at a receiving network node includes receiving a data transmission. The data transmission has control information and a content payload. The receiving node determines whether the control information includes an instruction to execute an action. This determination may involve determining that the instruction is directed to the receiving node. In response, the receiving node performs the action. The action may include caching the content at the receiving node. The receiving node also forwards the content payload in accordance with the control information.
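The receive–inspect–act–forward flow described in the abstract can be sketched in Python. All names here (the dict layout of the transmission, the "cache" action string, the route list) are illustrative assumptions, not taken from the patent:

```python
def handle_transmission(node_id, transmission, cache, send):
    """Process one source-routed transmission at a receiving node."""
    control = transmission["control"]
    payload = transmission["payload"]

    # Execute any action whose instruction is directed to this node.
    for instr in control.get("instructions", []):
        if instr["target"] == node_id and instr["action"] == "cache":
            cache[control["content_id"]] = payload

    # Forward the payload along the source route carried in the control info.
    route = control["route"]
    idx = route.index(node_id)
    if idx + 1 < len(route):
        send(route[idx + 1], transmission)
```

A node invokes `handle_transmission` once per received transmission, with `send` being whatever primitive delivers data to the next hop.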
Delaycast queue prioritization
Systems and methods are described for optimizing resource utilization in a communications network while also optimizing subscriber engagement with media content over the communications network. Requested content objects can be identified as delayable objects that can be queued for opportunistically delayed communication to both requesting and non-requesting subscribers. Queued delayed content objects are scored with an eye toward optimizing both subscriber engagement and utilization of opportunistically available communications link resources. For example, a storage manager calculates a likelihood that each subscriber will engage with the content if it is opportunistically delivered, and a scheduler calculates a priority order in which to queue each requested delayable content object. Content objects can then be multicast to the subscribers in priority order and with associated information that can be used by the subscribers to determine whether to locally store the content objects as they are opportunistically received.
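One plausible reading of the scoring step is that each delayable object's priority aggregates the per-subscriber engagement likelihoods computed by the storage manager. The sketch below assumes that simple sum-of-likelihoods scoring; the real scheduler may weight link utilization as well:

```python
import heapq

def queue_delayable(object_ids, engagement):
    """Order delayable content objects for opportunistic multicast.

    `engagement[obj_id]` maps subscriber -> likelihood that the subscriber
    will engage with the object if it is opportunistically delivered.
    Higher aggregate likelihood means earlier delivery.
    """
    heap = []
    for obj_id in object_ids:
        score = sum(engagement.get(obj_id, {}).values())
        heapq.heappush(heap, (-score, obj_id))  # negate for a max-heap
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

The returned order is the priority order in which objects would be multicast; subscribers then apply their own likelihood information to decide whether to store each object locally.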
Method for transmitting uplink information, terminal device and network device
A method for transmitting information, a terminal device and a network device are provided. The method comprises: a terminal device receives n groups of downlink channels/signals on a downlink resource in the channel occupancy time (COT), each group of downlink channels/signals in the n groups of downlink channels/signals comprising at least one downlink channel/signal; the terminal device transmits uplink information corresponding to an i-th group of downlink channels/signals in the n groups of downlink channels/signals on an uplink resource in the COT; the starting time for transmitting uplink information corresponding to the i-th group of downlink channels/signals is determined according to the end time T0 of the i-th group of downlink channels/signals, the end time T1 of the downlink resource, and a processing delay of the downlink channel/signal.
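The timing rule can be paraphrased arithmetically. The abstract only names the three inputs (T0, T1, and the processing delay); combining them with `max()` as below is an assumption about one plausible reading, not the patent's formula:

```python
def uplink_start_time(t0_end_dl_group, t1_end_dl_resource, processing_delay):
    """Earliest start for uplink feedback of the i-th downlink group.

    Assumption: the uplink transmission starts no earlier than (a) the end
    of the downlink resource and (b) the end of the i-th group of downlink
    channels/signals plus the processing delay, whichever is later.
    """
    return max(t1_end_dl_resource, t0_end_dl_group + processing_delay)
```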
CROSS-LAYER AND CROSS-ACCESS TECHNOLOGY TRAFFIC SPLITTING AND RETRANSMISSION MECHANISMS
The present disclosure is related to Multi-Access Management Services (MAMS), which is a programmable framework that provides mechanisms for the flexible selection of network paths in a multi-access (MX) communication environment, based on an application's needs. The present disclosure discusses dynamic traffic splitting mechanisms, cross-layer and cross access technology traffic splitting mechanisms and retransmission mechanisms, multi-link packet reordering mechanisms, and link-aware packet duplication mechanisms. Generic Multi-Access (GMA) data plane functions are also integrated into the MAMS framework.
Method and apparatus for managing transport of delay-sensitive packets
A method of managing transport of packets transmitted over a time division multiplexed, TDM, link in a network. The method, performed at a second network node, comprises: receiving (102) blocks of data from a first network node. Data from one packet is received in a plurality of blocks, and the first block of a packet carries a time-stamp indicating the arrival time of the packet at the first network node. The blocks are multiplexed for transmission over the TDM link. The method also comprises: queuing (106) the received blocks; if the block at the top of the queue (108, 110) has a time-stamp (110—yes) and the maximum allowed latency has been exceeded (112), the method discards (116) the blocks containing data from the same packet as the time-stamped block, provided there is at least one block containing data from another packet in the queue (114—yes). An apparatus is also disclosed.
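The discard rule lends itself to a short sketch: a timed-out packet is dropped only when another packet is waiting behind it, otherwise it is still transmitted. The block/dict layout below is an illustrative assumption:

```python
import collections

def drain_queue(queue, now, max_latency):
    """Transmit or discard queued blocks per the latency rule above.

    Each block is a dict with 'packet_id' and, for the first block of a
    packet, 'timestamp' (arrival time at the first node).  A timed-out
    packet is discarded only if at least one block of another packet
    remains in the queue; otherwise it is transmitted anyway.
    """
    out = []
    while queue:
        block = queue[0]
        ts = block.get("timestamp")
        if ts is not None and now - ts > max_latency:
            if any(b["packet_id"] != block["packet_id"] for b in queue):
                pid = block["packet_id"]
                queue = collections.deque(
                    b for b in queue if b["packet_id"] != pid)
                continue
        out.append(queue.popleft())
    return out
```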
Preventing failure processing delay
A method and device for preventing a failure processing delay are provided in the disclosure. In an example, when the number of queue elements in an equivalence class time-window queue reaches a set threshold (denoted as N) within a set time window, this indicates that N Bidirectional Forwarding Detection (BFD) sessions in the same equivalence class set have detected Down events. It can thus be inferred that the public network path carrying the N BFD sessions has broken down. To process the failure in time and reduce data stream loss on an upper layer, the present disclosure may allow reporting a corresponding Down event for each BFD session in the equivalence class set to which the N BFD sessions belong.
Communication apparatus, control method, and storage medium
If a communication apparatus is to transmit data to another communication apparatus, and communication via a communication unit included in the other communication apparatus is not performable, whether or not to transmit a frame for causing a transition to a state where communication via that communication unit becomes performable is selected based on the amount of data accumulated in the transmission queue in which the data is stored.
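The decision reduces to a threshold test on the transmission queue: only wake the peer's currently unusable communication unit when enough data has accumulated to justify the transition. The byte-count threshold is an assumption; the abstract does not specify the criterion's units:

```python
def should_send_wake_frame(queued_bytes, threshold_bytes):
    """Decide whether to transmit the transition-causing frame.

    Returns True when the data accumulated in the transmission queue
    reaches the (assumed) threshold, making the state transition of the
    other apparatus's communication unit worthwhile.
    """
    return queued_bytes >= threshold_bytes
```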
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; one or more edge servers configured to receive the task from the AP, the one or more edge servers including a task waiting queue, a processing pool, a task completion queue, and a scheduler, wherein the AP further includes a dispatcher utilizing Online Learning (OL) for determining a real-time state of network conditions and server loads; and the AP selects a target edge server from the one or more edge servers to which the task is to be dispatched; and wherein the scheduler utilizes Deep Reinforcement Learning (DRL) in generating a task scheduling policy for the one or more edge servers.
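The dispatcher's role can be illustrated with a greedy estimate of task completion time per edge server. In the patent the network-condition and load estimates come from an Online Learning model (and scheduling from DRL); here they are plain numbers, and all field names are assumptions:

```python
def dispatch_task(task_size, servers):
    """Pick the target edge server with the lowest estimated completion time.

    `servers` maps server_id -> {'net_delay': seconds to reach the server,
    'load': work units already queued, 'rate': work units per second}.
    """
    def eta(s):
        return s["net_delay"] + (s["load"] + task_size) / s["rate"]
    return min(servers, key=lambda sid: eta(servers[sid]))
```

A lightly loaded but more distant server can still win if its queueing saving outweighs the extra network delay, which is the trade-off the OL dispatcher learns online.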
Unified radio access network (RAN)/multi-access edge computing (MEC) platform
A device can receive, from a node in a core network, application identifiers associated with applications accessible by a first user device. The application identifiers can be associated with latency requirements. The device can obtain, from the first user device, a first packet associated with a first packet flow. The device can compare information regarding the first packet flow with the application identifiers to determine that the first packet is destined for a low-latency application having a specified latency range. The device can identify a first low-latency bearer that satisfies the specified latency range associated with the low-latency application. The device can map the first packet flow to the first low-latency bearer, and communicate packets, associated with the first packet flow, using the first low-latency bearer. The packets can include data packets communicated between an entity hosting the low-latency application and the first user device, while bypassing the core network.
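The bearer-selection step amounts to matching the flow's application latency requirement against the latency each available bearer can satisfy. The tuple layout below is an illustrative assumption:

```python
def select_bearer(required_latency_ms, bearers):
    """Map a low-latency flow to a bearer that satisfies its latency bound.

    `bearers` is a list of (bearer_id, max_latency_ms) options; the first
    bearer whose maximum latency fits within the application's required
    latency is chosen.  Returns None if no bearer qualifies.
    """
    for bearer_id, max_latency_ms in bearers:
        if max_latency_ms <= required_latency_ms:
            return bearer_id
    return None
```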
Encapsulation of data packets
Example embodiments describe a transmitter including data encapsulation circuitry configured to encapsulate data packets into Data Transport Units, DTUs, for further transmission over a communication medium. The data packets have respective Quality of Service, QoS, tolerances. The data encapsulation circuitry is configured to delay transmission of first data packets with a lower QoS tolerance and to group the first data packets in a subset of DTUs available for transportation of the first data packets.
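The grouping behaviour can be sketched as a partition followed by packing. Following the abstract's wording, packets with the lower QoS tolerance are the ones delayed and grouped into their own subset of DTUs; the cutoff value and DTU capacity are illustrative parameters:

```python
def build_dtus(packets, dtu_capacity, tolerance_cutoff=1):
    """Partition packets and pack the delayed ones into dedicated DTUs.

    Each packet is (payload, qos_tolerance).  Per the abstract, packets
    with a QoS tolerance below the cutoff are delayed and grouped together
    into a subset of DTUs; the rest are passed through for immediate
    transmission.
    """
    immediate = [p for p, q in packets if q >= tolerance_cutoff]
    delayed = [p for p, q in packets if q < tolerance_cutoff]
    # Pack the delayed packets into as few DTUs as capacity allows.
    dtus = [delayed[i:i + dtu_capacity]
            for i in range(0, len(delayed), dtu_capacity)]
    return immediate, dtus
```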