Patent classifications
H04L47/6225
Interconnect resource allocation
The present disclosure advantageously provides a method and system for allocating shared resources for an interconnect. A request is received at a home node from a request node over an interconnect, where the request represents a beginning of a transaction with a resource in communication with the home node, and the request has a traffic class defined by a user-configurable mapping based on one or more transaction attributes. The traffic class of the request is determined. A resource capability for the traffic class is determined based on user-configurable traffic-class-based resource capability data. Whether a home node transaction table has an available entry for the request is determined based on the resource capability for the traffic class.
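The gating this abstract describes can be sketched roughly as follows; the attribute mapping, class names, and per-class limits below are invented for illustration, not taken from the patent.

```python
# Hypothetical sketch of traffic-class-based entry allocation at a home node.
# Class names, the attribute->class mapping, and capability limits are
# assumptions for illustration only.

class HomeNodeTable:
    def __init__(self, total_entries, capability):
        # capability: user-configurable map of traffic class -> max entries
        self.total_entries = total_entries
        self.capability = capability
        self.in_use = {tc: 0 for tc in capability}

    @staticmethod
    def traffic_class(attrs):
        # Stand-in for the user-configurable mapping from transaction
        # attributes to a traffic class.
        if attrs.get("is_realtime"):
            return "RT"
        return "BE" if attrs.get("bulk") else "STD"

    def try_allocate(self, attrs):
        # Returns the class of the granted entry, or None if no entry is
        # available for this request's class.
        tc = self.traffic_class(attrs)
        if sum(self.in_use.values()) >= self.total_entries:
            return None  # transaction table is full
        if self.in_use[tc] >= self.capability[tc]:
            return None  # this class has reached its configured capability
        self.in_use[tc] += 1
        return tc

table = HomeNodeTable(total_entries=4, capability={"RT": 2, "STD": 1, "BE": 1})
```

The point of the per-class cap is that one class cannot consume every table entry, so another class's requests always find capacity up to their own configured share.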
Apparatus and method for forwarding handover data in wireless communication system
A technique forwards handover data in a wireless communication system. A base station apparatus includes a first buffer that stores downlink data for a terminal; a handover agent that, when the terminal performs a handover, schedules the data stored in the first buffer for at least one terminal, including the terminal performing the handover, so that the interruption time of the at least one terminal is reduced when the data is forwarded to a target base station; and a communication unit that transmits the data according to a scheduling result of the handover agent.
Seamless switching for multihop hybrid networks
Seamless path switching is made possible in a multi-hop network based upon stream marker packets and additional path distinguishing operations. A device receiving out-of-order packets on the same ingress interface is capable of determining a proper order for the incoming packets having different upstream paths. Packets may be reordered at a relay device or a destination device based upon where a path update is initiated. Reordering packets from the various upstream paths may be dependent upon a type of service associated with the packet.
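The reordering step this abstract relies on can be illustrated with sequence-numbered packets arriving out of order on a single ingress interface; the stream-marker and path-distinguishing machinery is abstracted away here, and only contiguous in-order release is shown (a simplification, not the patented method).

```python
import heapq

# Illustrative in-order release of packets that arrive out of order from
# different upstream paths. Each packet carries a (seq, payload) pair;
# payloads are released as soon as the sequence is contiguous.

def reorder(packets):
    """packets: iterable of (seq, payload), possibly out of order.
    Returns payloads in sequence order, releasing each as soon as all
    earlier sequence numbers have arrived."""
    heap, out, next_seq = [], [], 0
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        # Drain every packet that is now contiguous with what was released.
        while heap and heap[0][0] == next_seq:
            out.append(heapq.heappop(heap)[1])
            next_seq += 1
    return out
```

Whether this buffering happens at a relay or at the destination would depend, as the abstract notes, on where the path update is initiated.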
Method for prioritizing network packets at high bandwidth speeds
The embodiments are directed to methods and appliances for scheduling a packet transmission. The methods and appliances can assign received data packets or a representation of data packets to one or more connection nodes of a classification tree having a link node and first and second intermediary nodes associated with the link node via one or more semi-sorted queues, wherein the one or more connection nodes correspond with the first intermediary node. The methods and appliances can process the one or more connection nodes using a credit-based round robin queue. The methods and appliances can authorize the sending of the received data packets based on the processing.
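The credit-based round robin over connection nodes can be sketched in the style of a deficit round robin; the node names, packet sizes, and quantum below are assumptions, and the classification tree is reduced to a flat set of queues for brevity.

```python
from collections import deque

# Minimal credit-based (deficit) round robin over connection-node queues.
# Each round, a node earns a quantum of credit and may send packets whose
# sizes fit within its accumulated credit.

def credit_round_robin(nodes, quantum, rounds):
    """nodes: dict name -> deque of packet sizes. Returns the send order
    as a list of (node, size) tuples."""
    credits = {n: 0 for n in nodes}
    sent = []
    for _ in range(rounds):
        for name, q in nodes.items():
            if not q:
                credits[name] = 0  # empty queues forfeit saved credit
                continue
            credits[name] += quantum
            while q and q[0] <= credits[name]:
                size = q.popleft()
                credits[name] -= size
                sent.append((name, size))
    return sent
```

A node with a large packet simply accumulates credit over rounds until it can send, which keeps authorization fair across nodes regardless of packet size.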
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; and one or more edge servers configured to receive the task from the AP, each edge server including a task waiting queue, a processing pool, a task completion queue, and a scheduler. The AP further includes a dispatcher that utilizes Online Learning (OL) to determine the real-time state of network conditions and server loads, and the AP selects a target edge server from the one or more edge servers to which the task is to be dispatched. The scheduler utilizes Deep Reinforcement Learning (DRL) to generate a task scheduling policy for the one or more edge servers.
Method to route packets in a distributed direct interconnect network
The present invention provides a method and apparatus for routing data packets across a torus or higher-radix topology with low latency and increased throughput, distributing traffic so as to avoid the development of hot spots. Disclosed is a method of routing packets in a distributed direct interconnect network from a source node to a destination node comprising the steps of: discovering all nodes and their associated ports; updating the database to include the nodes and ports in the network topology; calculating the shortest path from every output port on each node to every other node in the topology; segmenting each packet into flits at the output port of the source node; as the flits are segmented, distributing them along the shortest path from each output port on the source node to the destination node using wormhole switching, whereby the packets are distributed along alternate maximally disjoint routes in the network topology; and re-assembling and re-ordering the packets at the destination node so that they accord with their original order and form.
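The segmentation, distribution, and reassembly steps can be sketched as below; path selection is reduced to round robin over precomputed routes, and route names are invented, so this is only a toy illustration of the data flow, not of wormhole switching itself.

```python
# Toy sketch: segment a packet into flits, spread them over precomputed
# disjoint routes, and re-order them at the destination. Route names and
# the round-robin spreading are assumptions for illustration.

def to_flits(packet, flit_size):
    # Slice the packet into fixed-size flits (the last may be shorter).
    return [packet[i:i + flit_size] for i in range(0, len(packet), flit_size)]

def distribute(flits, routes):
    # Tag each flit with its index so the destination can re-order,
    # alternating flits across the available disjoint routes.
    return [(routes[i % len(routes)], i, flit) for i, flit in enumerate(flits)]

def reassemble(tagged):
    # Sort by flit index and concatenate, restoring the original packet.
    return b"".join(f for _, _, f in sorted(tagged, key=lambda t: t[1]))
```

Because each flit carries its index, the destination can restore the original order even though different routes deliver flits at different times.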
Algorithm to predict optimal Wi-Fi contention window based on load
A novel method is disclosed that dynamically changes the contention window of access points based on system load to improve performance in dense Wi-Fi deployments. A key feature is that no MAC protocol changes or client-side modifications are needed to deploy the solution. Setting an optimal contention window can yield throughput and latency improvements of up to 155% and 50%, respectively. Furthermore, an online learning method is demonstrated that efficiently finds the optimal contention window with minimal training data and yields an average throughput improvement of 53-55% during congested periods for a real traffic-volume workload replayed in a Wi-Fi test-bed.
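One simple way to realize the online-learning search described above is an epsilon-greedy bandit over candidate contention windows; this is a generic stand-in, not the patented algorithm, and the reward function here is synthetic, where a real deployment would measure throughput per interval.

```python
import random

# Hedged sketch: epsilon-greedy search over candidate contention windows,
# picking the CW with the best observed mean reward (e.g. throughput).
# The reward_fn is a synthetic assumption; a deployment would measure it.

def pick_cw(cw_values, reward_fn, steps, eps=0.2, seed=0):
    rng = random.Random(seed)
    totals = {cw: 0.0 for cw in cw_values}
    counts = {cw: 0 for cw in cw_values}

    def mean(cw):
        return totals[cw] / counts[cw] if counts[cw] else 0.0

    for _ in range(steps):
        if rng.random() < eps or not any(counts.values()):
            cw = rng.choice(cw_values)      # explore a random candidate
        else:
            cw = max(cw_values, key=mean)   # exploit the best so far
        r = reward_fn(cw)
        totals[cw] += r
        counts[cw] += 1
    return max(cw_values, key=mean)
```

Because only the AP's contention window is tuned, this kind of search needs no MAC protocol change and no client cooperation, matching the abstract's deployment claim.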
Dynamic client-server arbiter
Electronic apparatus includes functional circuitry configured to respond to requests from a plurality of client devices, data storage circuitry configured as a plurality of client queues in which each respective client queue is configured to store pending requests from a respective client device, priority determination circuitry configured to assign a respective priority level to each respective client queue based at least in part on requests stored in the respective client queues, and arbiter circuitry configured to control access to the functional circuitry by the plurality of client devices. The arbiter circuitry is configured to monitor the priority level of each respective client queue, and control passage of requests from client queues to the functional circuitry based at least in part on a respective priority level assigned to each respective client queue. The priority determination circuitry includes fill level detector circuitry configured to determine a fill level of each client queue.
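The interplay of client queues, fill-level-based priority, and arbitration can be sketched as follows; using the raw fill level directly as the priority is an assumption here, where the patent allows priority to be based only "at least in part" on the stored requests.

```python
from collections import deque

# Minimal sketch of the arbiter idea: each client has its own queue of
# pending requests, priority is derived from the queue's fill level, and
# the arbiter grants the next request from the highest-priority queue.

class Arbiter:
    def __init__(self, clients):
        self.queues = {c: deque() for c in clients}

    def submit(self, client, request):
        self.queues[client].append(request)

    def priority(self, client):
        # Fill-level detector: fuller queues get higher priority, so no
        # client's pending requests back up indefinitely.
        return len(self.queues[client])

    def grant(self):
        # Pass one request from the highest-priority non-empty queue
        # to the functional circuitry.
        candidates = [c for c, q in self.queues.items() if q]
        if not candidates:
            return None
        best = max(candidates, key=self.priority)
        return best, self.queues[best].popleft()
```

Each grant shrinks the winning queue, lowering its priority, which naturally rotates service among busy clients.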
Virtual network device
A virtual network device increases the effective number of local physical ports by converting each local physical port into a plurality of virtual local physical ports, and increases the effective number of network physical ports by converting each network physical port into a plurality of virtual network physical ports.
Power spectral density aware uplink scheduling
According to an aspect, there is provided an apparatus for performing the following. The apparatus is configured to, first, allocate one or more of a plurality of available physical resource blocks to a plurality of terminal devices, wherein the allocating is performed so that a power spectral density for the plurality of terminal devices matches or exceeds a pre-defined limit or a plurality of respective pre-defined limits for power spectral density. In response to one or more physical resource blocks being still available following said allocating, the apparatus is configured to further allocate at least one of the one or more physical resource blocks still available to at least one of the plurality of terminal devices, wherein the further allocating is performed so that at least a pre-defined value for a modulation and coding scheme index is sustainable for said at least one of the plurality of terminal devices.
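The two-phase allocation can be sketched as below. The power model (fixed transmit power spread evenly across allocated PRBs, so PSD = power / n_prbs) and the per-terminal MCS function are simplified assumptions for illustration only.

```python
# Illustrative two-phase allocation: first give each terminal at most as
# many PRBs as keeps its per-PRB power (PSD) at or above the limit, then
# hand leftover PRBs to terminals that can still sustain a minimum MCS
# index. Both models here are simplifications, not the claimed method.

def allocate(prbs_total, terminals, psd_limit, min_mcs):
    """terminals: dict name -> {'tx_power': power budget,
    'mcs_at': fn(n_prbs) -> sustainable MCS index}."""
    alloc = {}
    remaining = prbs_total
    # Phase 1: PSD-aware allocation. With fixed tx power spread over the
    # PRBs, PSD = power / n_prbs >= psd_limit caps n_prbs at
    # floor(power / psd_limit).
    for name, t in terminals.items():
        n = min(remaining, int(t["tx_power"] // psd_limit))
        alloc[name] = n
        remaining -= n
    # Phase 2: distribute leftover PRBs while the terminal can still
    # sustain at least the pre-defined MCS index.
    for name, t in terminals.items():
        while remaining > 0 and t["mcs_at"](alloc[name] + 1) >= min_mcs:
            alloc[name] += 1
            remaining -= 1
    return alloc, remaining
```

Phase 1 protects the PSD floor; phase 2 only relaxes it where the link can still hold the target modulation and coding scheme, mirroring the two-step structure of the claim.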