Patent classifications
H04L47/522
Systems and methods for transport capacity scheduling
The present disclosure relates to systems and methods for transport capacity scheduling. The systems and methods may determine a target region, wherein a plurality of service requests that satisfy a preset condition initiate from the target region. The systems and methods may determine a non-busy region based on information of the target region. The non-busy region may include one or more available service providers that are free to accept a service request. The systems and methods may transmit, via a network, a scheduling instruction associated with the plurality of service requests to a user terminal associated with at least one of the one or more available service providers in the non-busy region. The scheduling instruction may include information inquiring whether the at least one of the one or more available service providers in the non-busy region agrees to go to the target region.
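The logic described above (count requests per region against a preset threshold, find idle providers outside the busy region, and issue relocation inquiries) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all names (`build_scheduling_instructions`, the dict fields, the threshold value) are assumptions.

```python
from collections import Counter

REQUEST_THRESHOLD = 5  # preset condition: minimum pending requests to mark a region "target"

def find_target_regions(requests, threshold=REQUEST_THRESHOLD):
    """Return regions from which at least `threshold` service requests initiate."""
    counts = Counter(r["origin"] for r in requests)
    return {region for region, n in counts.items() if n >= threshold}

def find_non_busy_providers(providers, target_regions):
    """Providers that are free to accept a request and sit outside any target region."""
    return [p for p in providers
            if p["available"] and p["region"] not in target_regions]

def build_scheduling_instructions(requests, providers):
    """Pair each target region with idle providers and ask whether they agree to go."""
    targets = find_target_regions(requests)
    idle = find_non_busy_providers(providers, targets)
    return [{"provider_id": p["id"],
             "target_region": region,
             "inquiry": f"Will you go to region {region}?"}
            for region in sorted(targets) for p in idle]
```

In practice the inquiry would be transmitted over a network to each provider's terminal; here it is simply returned as a list of instruction records.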
Dynamic allocation of network resources using external inputs
Systems and methods for managing network resources are disclosed. One method can comprise receiving first information relating to network traffic parameters and receiving second information relating to one or more contextual events having an effect on the network traffic parameters. The first information and the second information can be correlated, and one or more network resources can be allocated based on the correlation of the first information and the second information.
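One plausible reading of the correlation step is scaling predicted load wherever a contextual event is expected, then allocating capacity proportionally. The sketch below is a hypothetical interpretation; the field names and the proportional policy are assumptions, not details from the patent.

```python
def correlate(traffic, events):
    """Scale each region's measured load by the uplift of any contextual events there."""
    predicted = {}
    for region, load in traffic.items():
        boost = sum(e["expected_uplift"] for e in events if e["region"] == region)
        predicted[region] = load * (1.0 + boost)
    return predicted

def allocate(capacity, predicted_load):
    """Split a shared capacity pool proportionally to predicted load."""
    total = sum(predicted_load.values()) or 1.0
    return {region: capacity * load / total
            for region, load in predicted_load.items()}
```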
Dynamic routing of queued network-based communications using real-time information and machine learning
Methods for dynamic routing of queued network-based communications using real-time information and machine learning are performed by systems and devices. Requests associated with fulfillments are received over a network from requestor systems, and the requests are queued in a data structure of a queue. Information that includes geolocation information from a user device of a user that is associated with the fulfillment, temporal information from the user device, or related request information associated with another request is then received over the network, and a fulfiller and a fulfillment time for the fulfillment are determined from the information. The request is provided from the queue to the fulfiller at the fulfillment time over the network.
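The queue-and-release mechanism above (hold a request until its determined fulfillment time, then hand it to the chosen fulfiller) can be sketched with a heap-ordered queue. This is an illustrative sketch only: the nearest-fulfiller heuristic stands in for the patent's machine-learning determination, and all names are assumptions.

```python
import heapq

def choose_fulfiller(fulfillers, user_location):
    """Pick the fulfiller nearest the user's reported geolocation (squared distance)."""
    return min(fulfillers,
               key=lambda f: (f["loc"][0] - user_location[0]) ** 2
                           + (f["loc"][1] - user_location[1]) ** 2)

class RequestQueue:
    """Queue that releases each request to its fulfiller at the fulfillment time."""
    def __init__(self):
        self._heap = []  # (fulfillment_time, request_id, fulfiller_id)

    def enqueue(self, request_id, fulfillment_time, fulfiller_id):
        heapq.heappush(self._heap, (fulfillment_time, request_id, fulfiller_id))

    def release_due(self, now):
        """Pop every queued request whose fulfillment time has arrived."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap))
        return due
```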
Smart bandwidth allocation
A controller is provided for use with a CD, a WAN, and a service provider server. The HNC includes: a memory; and a processor configured to execute instructions stored in the memory to cause the HNC to: establish a priority time period; associate the priority time period with a first application; establish a first service flow queue having a first QoS during the priority time period; establish a second service flow queue having a second QoS; receive first upstream packets and second upstream packets; assign the first upstream packets to a first upstream queue during the priority time period; assign the second upstream packets to a second upstream queue; receive first downstream packets and second downstream packets; assign the first downstream packets to a first downstream queue during the priority time period; and assign the second downstream packets to a second downstream queue.
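The core classification rule here is time- and application-gated: packets of the priority application go to the high-QoS queue, but only inside the priority time period. A minimal sketch of that rule, with illustrative names not taken from the patent:

```python
def in_priority_period(now, start, end):
    """True when `now` falls inside the configured priority time period."""
    return start <= now < end

def assign_upstream(packet, now, period, priority_app):
    """Send packets of the priority application to the first (high-QoS) service
    flow queue during the priority time period; everything else goes best-effort."""
    start, end = period
    if packet["app"] == priority_app and in_priority_period(now, start, end):
        return "priority_queue"   # first service flow queue, first QoS
    return "best_effort_queue"    # second service flow queue, second QoS
```

The same rule would apply symmetrically on the downstream side.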
Flow table aging optimized for DRAM access
A flow table management system can include a hardware memory module communicatively coupled to a network interface card. The hardware memory module is configured to store a flow table including a plurality of network flow entries. The network interface card further includes a flow table age cache configured to store a set of recently active network flows and a flow table management module configured to manage a duration for which respective network flow entries in the flow table stored in the hardware memory module remain in the flow table using the flow table age cache. In some implementations, age information about each respective flow in the flow table is stored in the hardware memory module in an age state table that is separate from the flow table.
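The point of the age cache is to avoid touching DRAM age state on every packet: activity is recorded in a small cache, and a periodic sweep flushes it into the separate age state table and evicts stale flow entries. A hypothetical sketch of that pattern (all names and the sweep policy are assumptions):

```python
class FlowTableManager:
    """Age-cache-based flow aging: per-packet activity only marks a small cache;
    DRAM-resident age state is updated in bulk during periodic sweeps."""
    def __init__(self, max_idle):
        self.flow_table = {}   # flow key -> entry (stands in for the DRAM flow table)
        self.age_table = {}    # flow key -> last-seen sweep time (separate age state)
        self.age_cache = set() # flows active since the last sweep
        self.max_idle = max_idle

    def touch(self, key):
        """Record activity for a flow without a per-packet DRAM age-state write."""
        self.flow_table.setdefault(key, {"key": key})
        self.age_cache.add(key)

    def sweep(self, now):
        """Flush the age cache into the age state table, then evict stale flows."""
        for key in self.age_cache:
            self.age_table[key] = now
        self.age_cache.clear()
        stale = [k for k, t in self.age_table.items() if now - t > self.max_idle]
        for k in stale:
            self.flow_table.pop(k, None)
            self.age_table.pop(k, None)
```

Keeping the age state in a separate table, as the abstract describes, means the hot flow-table entries stay compact and the bulky timestamp writes are batched.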
Self-Protecting Computer Network Router with Queue Resource Manager
A self-protecting router limits the extent to which its queues can be filled with potentially malicious or otherwise harmful messages received from outside the router, thereby ensuring the queues have sufficient room to accept messages that are generated internally within the router and are necessary for management and operation of the router. Such routers are, therefore, immune to attack by floods of messages from malicious or malfunctioning network nodes, such as computers, switches and other routers.
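The protection scheme amounts to reserving headroom: external messages may only fill the queue up to a lower watermark, while internally generated control messages may use the full capacity. A minimal sketch under those assumptions (class and field names are illustrative):

```python
class ProtectedQueue:
    """Queue that reserves slots for internally generated control messages,
    so externally received traffic can never fill it completely."""
    def __init__(self, capacity, internal_reserve):
        self.capacity = capacity
        self.internal_reserve = internal_reserve  # slots only internal messages may use
        self.items = []

    def offer(self, message, internal):
        """Accept the message unless its class's limit is already reached."""
        limit = self.capacity if internal else self.capacity - self.internal_reserve
        if len(self.items) >= limit:
            return False  # dropped: an external flood cannot starve internal messages
        self.items.append(message)
        return True
```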
Packet Forwarding Method and Apparatus
Embodiments of the present invention disclose a packet forwarding method and apparatus. The method includes: receiving, by a first scheduler, a target packet carrying egress port information, a queue identifier, and a packet length; sending the target packet to a destination physical egress port corresponding to the egress port information, and increasing, according to the queue identifier, a queue length of a virtual queue corresponding to the queue identifier by the packet length; sending update information to a second scheduler, where the update information indicates that the queue length of the virtual queue has been increased by the packet length; and decreasing the queue length of the virtual queue by the packet length according to a bandwidth scheduling result that corresponds to the update information and is sent by the second scheduler. In this way, even if back pressure appears at the destination physical egress port corresponding to the target packet, the first scheduler's sending of the target packet is not affected.
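The virtual-queue bookkeeping between the two schedulers can be sketched as an increment on forwarding and a decrement on each bandwidth grant. This is an illustrative sketch; the class and method names are assumptions, and the second scheduler is reduced to a grant callback.

```python
class FirstScheduler:
    """Forwarding increases the virtual queue length by the packet length;
    the second scheduler's bandwidth grants later drain it."""
    def __init__(self):
        self.virtual_queues = {}  # queue identifier -> outstanding bytes

    def forward(self, queue_id, packet_len):
        # Send immediately (even under egress back pressure) and record the bytes.
        self.virtual_queues[queue_id] = self.virtual_queues.get(queue_id, 0) + packet_len
        # Update information reported to the second scheduler.
        return {"queue_id": queue_id, "delta": packet_len}

    def on_bandwidth_grant(self, queue_id, granted_len):
        # Bandwidth scheduling result arrived: decrease the virtual queue length.
        self.virtual_queues[queue_id] -= granted_len
```

Because the virtual queue, not the physical egress port, absorbs the accounting, forwarding proceeds even when the port is back-pressured.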
TECHNOLOGIES FOR NETWORK I/O ACCESS
Technologies for accelerating non-uniform network input/output accesses include a multi-home network interface controller (NIC) of a network computing device communicatively coupled to a plurality of non-uniform memory access (NUMA) nodes, each of which includes an allocated number of processor cores of a physical processor package and an allocated portion of a main memory directly linked to the physical processor package. The multi-home NIC includes a logical switch communicatively coupled to a plurality of logical NICs, each of which is communicatively coupled to a corresponding NUMA node. The multi-home NIC is configured to facilitate the ingress and egress of network packets by determining a logical path for each network packet received at the multi-home NIC based on a relationship between the network packet and one of the NUMA nodes, and/or the logical NIC coupled to that NUMA node (e.g., to forward the network packet from the multi-home NIC). Other embodiments are described herein.
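One way to picture the logical-path determination is mapping each packet to the NUMA node that owns its consuming core, then steering it through that node's logical NIC. The sketch below is a hypothetical illustration only; the flow-hash-to-core mapping and all names are assumptions, not details from the patent.

```python
def logical_path(flow_hash, numa_nodes):
    """Steer a packet to the logical NIC attached to the NUMA node owning the
    target core, so DMA lands in memory directly linked to that node."""
    # Hypothetical mapping: each NUMA node owns a contiguous range of cores.
    core = flow_hash % sum(n["cores"] for n in numa_nodes)
    base = 0
    for node in numa_nodes:
        if core < base + node["cores"]:
            return node["logical_nic"]
        base += node["cores"]
```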
APPARATUSES, METHODS, AND COMPUTER PROGRAMS FOR A REMOTE UNIT AND A CENTRAL UNIT OF AN OPTICAL LINE TERMINAL
Examples relate to apparatuses, methods, and computer programs for a remote unit and a central unit of an optical line terminal. In particular, a central unit apparatus for an optical line terminal comprises one or more interfaces configured to communicate with one or more remote unit apparatuses via one or more communication links. The apparatus further comprises a processor configured to receive information on one or more upstream reports from the remote unit apparatuses, wherein the upstream reports relate to one or more optical networks used by the remote unit apparatuses to communicate with a plurality of optical network users. The processor further determines information on bandwidth assignments for the plurality of optical network users based on the information on the one or more upstream reports and transmits the information on bandwidth assignments to the one or more remote unit apparatuses.
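A simple stand-in for the bandwidth-assignment determination is a report-proportional grant: if aggregate reported demand fits the available grant, everyone gets what they reported; otherwise grants are scaled down proportionally. This is a generic DBA-style sketch under stated assumptions, not the policy from the patent, and the names are illustrative.

```python
def assign_bandwidth(reports, total_grant):
    """Grant upstream bandwidth per user from reported demand, scaling
    proportionally when aggregate demand exceeds the available grant."""
    demand = sum(reports.values())
    if demand == 0:
        return {user: 0 for user in reports}
    if demand <= total_grant:
        return dict(reports)  # every user gets exactly what it reported
    return {user: total_grant * want // demand for user, want in reports.items()}
```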