Patent classifications
H04L45/742
Distributed flow processing and flow cache
The techniques disclosed herein improve the efficiency, reliability and scalability of flow processing systems by providing a multi-tier flow cache structure that can reduce the size of a flow table and also reduce replicated flow sets. In some configurations, a system can partition a flow space across workers and replicate the flows within a partition to a set of workers. In some configurations, a flow cache structure can include three tiers: (1) a scalable flow processing layer for executing the actions and transformations of a flow, (2) a flow state management layer for managing distributed flow state decisions, and (3) a flow decider layer for identifying the actions and transformations that need to be executed on each packet of a flow. Flow replication allows other workers to pick up the flows allocated to a particular worker that is taken offline by a crash or update.
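The partition-and-replicate idea can be illustrated with a minimal sketch (not the patented implementation): flow keys are hashed into a partition, each partition is mapped to an ordered replica set of workers, and a surviving replica takes over when the primary worker goes offline. Worker names and the replication factor below are assumptions for the example.

```python
# Sketch: partition a flow space across workers and replicate each partition,
# so another worker can pick up a flow if its primary worker is taken offline.
import hashlib

WORKERS = ["worker-0", "worker-1", "worker-2", "worker-3"]
REPLICATION_FACTOR = 2  # each flow partition lives on this many workers

def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Canonical 5-tuple key identifying a flow."""
    return f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}"

def replica_set(key, workers=WORKERS, k=REPLICATION_FACTOR):
    """Map a flow key to an ordered list of k workers (primary first)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = h % len(workers)
    return [workers[(start + i) % len(workers)] for i in range(k)]

def owner(key, offline=frozenset()):
    """Pick the first replica that is still online (failover on crash/update)."""
    for worker in replica_set(key):
        if worker not in offline:
            return worker
    raise RuntimeError("no live replica for flow " + key)

key = flow_key("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
print(replica_set(key))                               # e.g. ['worker-2', 'worker-3']
print(owner(key, offline={replica_set(key)[0]}))      # fails over to the next replica
```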
Systems for building data structures with highly scalable algorithms for a distributed LPM implementation
Described are programmable IO devices configured to perform operations. These operations comprise: determining a set of range-based elements for a network; sorting the set of range-based elements according to a global order among the range-based elements; generating an interval table from the sorted range-based elements; generating an interval binary search tree from the interval table; propagating data stored in subtrees of interior stages of the interval binary search tree to subtrees of a last stage of the interval binary search tree such that the interior stages do not comprise data; converting the interval binary search tree to a Pensando Tree; compressing multiple levels of the Pensando Tree into cache-lines; and assembling the cache-lines in the memory unit such that each stage can compute the address of the next cache-line to be fetched by the next stage.
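The first steps of that pipeline can be sketched in a few lines (this is an illustrative, software-only approximation, not the disclosed Pensando Tree or cache-line layout): range-based elements (IPv4 prefixes) are sorted into a global order, an interval table of boundary points is generated, and longest-prefix-match queries are answered by binary search over that table. The prefixes and next-hop names are assumptions.

```python
import bisect, ipaddress

def build_interval_table(prefixes):
    """prefixes: dict of CIDR string -> next hop. Returns (points, values)."""
    ranges = []
    for cidr, nh in prefixes.items():
        net = ipaddress.ip_network(cidr)
        ranges.append((int(net.network_address), int(net.broadcast_address),
                       net.prefixlen, nh))
    # Global order: by start address, with longer (more specific) prefixes last
    # so they override shorter ones that begin at the same point.
    ranges.sort(key=lambda r: (r[0], r[2]))

    points, values = [], []   # interval table: boundary address -> next hop
    stack = []                # currently enclosing prefixes, outermost first
    def emit(point, nh):
        if points and points[-1] == point:
            values[-1] = nh
        else:
            points.append(point); values.append(nh)

    for start, end, plen, nh in ranges:
        # Close any enclosing prefixes that end before this one starts.
        while stack and stack[-1][1] < start:
            _, e, _, _ = stack.pop()
            emit(e + 1, stack[-1][3] if stack else None)
        emit(start, nh)
        stack.append((start, end, plen, nh))
    while stack:
        _, e, _, _ = stack.pop()
        emit(e + 1, stack[-1][3] if stack else None)
    return points, values

def lpm(points, values, addr):
    """Binary search the interval table for the longest matching prefix."""
    i = bisect.bisect_right(points, int(ipaddress.ip_address(addr))) - 1
    return values[i] if i >= 0 else None

points, values = build_interval_table({
    "10.0.0.0/8": "nh-A", "10.1.0.0/16": "nh-B", "0.0.0.0/0": "nh-default"})
print(lpm(points, values, "10.1.2.3"))    # nh-B (most specific match)
print(lpm(points, values, "192.0.2.1"))   # nh-default
```

In the hardware design described above, the resulting search structure is further reshaped so that each pipeline stage fetches exactly one cache-line and computes the address of the next one.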
Logical router with multiple routing components
Some embodiments provide a method for implementing a logical router in a network. The method receives a definition of a logical router for implementation on a set of network elements. The method defines several routing components for the logical router. Each of the defined routing components includes a separate set of routes and a separate set of logical interfaces. The method implements the several routing components in the network. In some embodiments, the several routing components include one distributed routing component and several centralized routing components.
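A minimal sketch of that decomposition (illustrative of the structure described, not the patented implementation): one logical-router definition is split into a single distributed routing component plus one centralized routing component per uplink, each with its own routes and logical interfaces. The uplink, subnet, and transit-prefix names are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class RoutingComponent:
    name: str
    centralized: bool                                # False = distributed
    interfaces: list = field(default_factory=list)   # logical interfaces
    routes: dict = field(default_factory=dict)       # prefix -> next hops

def define_routing_components(router, uplinks, downlinks, transit):
    """Define one distributed component and one centralized component per uplink."""
    dr = RoutingComponent(f"{router}-dr", centralized=False,
                          interfaces=list(downlinks) + [transit])
    components = [dr]
    for i, uplink in enumerate(uplinks):
        sr = RoutingComponent(f"{router}-sr{i}", centralized=True,
                              interfaces=[uplink, transit],
                              routes={"0.0.0.0/0": [uplink]})
        # The distributed component reaches the outside via the centralized ones.
        dr.routes.setdefault("0.0.0.0/0", []).append(sr.name)
        components.append(sr)
    return components

for c in define_routing_components("lr0", uplinks=["uplink-0", "uplink-1"],
                                   downlinks=["web-subnet", "db-subnet"],
                                   transit="169.254.0.0/28"):
    print(c.name, "centralized" if c.centralized else "distributed",
          c.interfaces, c.routes)
```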
METHOD AND DEVICE FOR TRANSMITTING/RECEIVING DATA USING CACHE MEMORY IN WIRELESS COMMUNICATION SYSTEM SUPPORTING RELAY NODE
The present invention relates to a wireless communication system. More particularly, it relates to a method for transmitting content using a cache memory. A method by which a relay node transmits content using a cache memory according to the present invention may comprise the steps of: storing a first content, received from a serving cell, in the cache memory; storing a second content, received from an adjacent cell or the serving cell, in the cache memory; selecting the content to be transmitted to a user equipment (UE) from among the first content requested by the UE and the second content, which acts as interference to the first content; and transmitting the second content to the UE.
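The claimed steps can be sketched as a small cache routine (illustrative only; the content IDs, cell names, and fallback behaviour are assumptions, not part of the abstract): contents from the serving and adjacent cells are cached, and when the UE requests the first content, the relay selects among the two and transmits the cached second (interfering) content.

```python
cache = {}  # content_id -> (payload, source_cell)

def store(content_id, payload, source_cell):
    """Steps 1-2: keep contents received from the serving or adjacent cell."""
    cache[content_id] = (payload, source_cell)

def select_and_transmit(requested_id, interfering_id):
    """Steps 3-4: choose among the requested and interfering contents; per the
    abstract, the cached interfering (second) content is sent to the UE."""
    if interfering_id in cache:
        payload, _ = cache[interfering_id]
        return ("transmit", interfering_id, payload)
    if requested_id in cache:            # assumed fallback when nothing interferes
        payload, _ = cache[requested_id]
        return ("transmit", requested_id, payload)
    return ("fetch-from-serving-cell", requested_id, None)

store("content-1", b"...", "serving-cell")
store("content-2", b"...", "adjacent-cell")
print(select_and_transmit("content-1", "content-2"))  # sends the second content
```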
TECHNOLOGIES FOR NETWORK I/O ACCESS
Technologies for accelerating non-uniform network input/output accesses include a multi-home network interface controller (NIC) of a network computing device communicatively coupled to a plurality of non-uniform memory access (NUMA) nodes, each of which includes an allocated number of processor cores of a physical processor package and an allocated portion of a main memory directly linked to the physical processor package. The multi-home NIC includes a logical switch communicatively coupled to a plurality of logical NICs, each of which is communicatively coupled to a corresponding NUMA node. The multi-home NIC is configured to facilitate the ingress and egress of network packets by determining a logical path for each network packet received at the multi-home NIC based on a relationship with one of the NUMA nodes and/or the logical NIC coupled to that NUMA node (e.g., to forward the network packet from the multi-home NIC). Other embodiments are described herein.
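A minimal sketch of the path decision (illustrative, not the claimed hardware): the logical switch picks the logical NIC whose NUMA node owns the destination flow. The node layout, core ranges, and flow placement table below are assumptions.

```python
NUMA_NODES = {
    0: {"cores": range(0, 8),  "logical_nic": "lnic0"},
    1: {"cores": range(8, 16), "logical_nic": "lnic1"},
}
flow_to_numa = {("10.0.0.5", 443): 1}   # learned/allocated flow placement

def logical_path(dst_ip, dst_port):
    """Return the logical NIC whose NUMA node owns this flow (default: node 0)."""
    node = flow_to_numa.get((dst_ip, dst_port), 0)
    return NUMA_NODES[node]["logical_nic"]

print(logical_path("10.0.0.5", 443))   # lnic1 -> delivered to NUMA node 1
print(logical_path("10.0.0.9", 80))    # lnic0 (assumed default placement)
```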
Aliasing of named data objects and named graphs for named data networks
A method is provided for aliasing of named data objects and other entities (e.g., named graphs) in named data networks. In various examples, aliasing of named data objects may be implemented in one or more named data networks in the form of systems, methods and/or algorithms. In other examples, named graphs may be implemented in one or more named data networks in the form of systems, methods and/or algorithms.
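A minimal sketch of aliasing (illustrative only; the names and the loop-guarded resolution policy are assumptions): an alias table maps alternate names onto a canonical object name, so a request under any alias resolves to the same named data object.

```python
named_data_objects = {"/example/video/v1": b"<object bytes>"}
aliases = {
    "/example/video/latest": "/example/video/v1",
    "/mirror/video/v1":      "/example/video/v1",
}

def resolve(name, max_hops=8):
    """Follow alias links until a canonical named data object is found."""
    seen = set()
    while name in aliases and name not in seen and len(seen) < max_hops:
        seen.add(name)
        name = aliases[name]
    return named_data_objects.get(name)

print(resolve("/mirror/video/v1") == resolve("/example/video/latest"))  # True
```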
Communication of endpoint information among virtual switches
Techniques are described for utilizing virtual edge switches of cloud computing networks to send, receive, and store in respective virtual memories associations between virtual resources and virtual edge switches, for better convergence in virtual application-centric infrastructure networks. Each virtual memory acts as a virtual endpoint database and contains a number of records indicating associations between each virtual endpoint and the virtual edge switch attached to that endpoint. Each virtual edge switch is hosted by a physical server and is configured to forward communications received from separate physical servers in the cloud computing network to the virtual endpoints attached to that switch. Advertisement messages are configured to be sent when a new or migrated virtual resource spins up on a physical server, and may store additional network routing information associated with the virtual machine hosting the virtual endpoint.
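The endpoint database and advertisement flow can be sketched as follows (an illustrative model, not the disclosed protocol; switch names, endpoint IDs, and record fields are assumptions): when an endpoint spins up or migrates, its switch advertises the attachment, and peer switches record it so traffic converges on the new location.

```python
class VirtualEdgeSwitch:
    def __init__(self, name):
        self.name = name
        self.endpoint_db = {}          # endpoint id -> (attached switch, routing info)

    def advertise(self, endpoint, routing_info, peers):
        """Announce a local (new or migrated) endpoint to peer switches."""
        self.endpoint_db[endpoint] = (self.name, routing_info)
        for peer in peers:
            peer.on_advertisement(endpoint, self.name, routing_info)

    def on_advertisement(self, endpoint, switch_name, routing_info):
        """Record which switch the endpoint is attached to."""
        self.endpoint_db[endpoint] = (switch_name, routing_info)

    def forward(self, endpoint):
        """Decide where to send traffic destined to an endpoint."""
        switch, _ = self.endpoint_db.get(endpoint, (None, None))
        return "local delivery" if switch == self.name else f"send to {switch}"

s1, s2 = VirtualEdgeSwitch("ves-1"), VirtualEdgeSwitch("ves-2")
s1.advertise("vm-web-0", {"vlan": 10, "ip": "10.0.0.5"}, peers=[s2])
print(s2.forward("vm-web-0"))   # "send to ves-1" once the databases converge
```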
End-to-end cache for network elements
A method in a network element includes processing input packets using a set of two or more functions that are defined over parameters of the input packets. Each function in the set produces respective interim actions applied to the input packets and the entire set produces respective end-to-end actions applied to the input packets. An end-to-end mapping, which maps the parameters of at least some of the input packets directly to the corresponding end-to-end actions, is cached in the network element. The end-to-end mapping is queried with the parameters of a new input packet. Upon finding the parameters of the new input packet in the end-to-end mapping, an end-to-end action mapped to the found parameters is applied to the new input packet, without processing the new input packet using the set of functions.
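A minimal sketch of the end-to-end cache (illustrative, not the patented method; the two stage functions and the parameter key are assumptions): the element normally runs a chain of functions over packet parameters, but the composed end-to-end action is memoised per parameter tuple, so later packets with the same parameters skip the whole chain.

```python
def acl_stage(params):       # function 1 -> interim action
    return "drop" if params["dst_port"] == 23 else "permit"

def routing_stage(params):   # function 2 -> interim action
    return {"out_port": 1 if params["dst_ip"].startswith("10.") else 2}

PIPELINE = [acl_stage, routing_stage]
end_to_end_cache = {}         # parameter tuple -> end-to-end action

def process(params):
    key = (params["dst_ip"], params["dst_port"])
    if key in end_to_end_cache:                       # hit: apply cached action directly
        return end_to_end_cache[key]
    interim = [stage(params) for stage in PIPELINE]   # miss: run every function
    action = {"verdict": interim[0], **interim[1]}    # compose end-to-end action
    end_to_end_cache[key] = action
    return action

print(process({"dst_ip": "10.0.0.1", "dst_port": 80}))  # miss: runs the pipeline
print(process({"dst_ip": "10.0.0.1", "dst_port": 80}))  # hit: cache lookup only
```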
MULTI-DOMAIN CENTRALIZED CONTENT-CENTRIC NETWORKING
A multi-domain centralized content-centric networking (MCCN) architecture includes a management layer, a control layer, and a data layer. The management layer communicates with the data layer through the control layer. The management layer is configured to acquire application transmission requests, network resource allocation, and network running status, and to issue network operating commands to the control plane according to reconfigured management strategies. The control layer is configured to carry out routing establishment, maintain the network topology of the domains, inform the management layer of network status, and execute commands of the management layer. The data layer is configured to process data packets according to commands of the control layer; its tasks are carried out by the routers and links of the bottom layer.
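The layering constraint can be sketched as follows (illustrative only; all class and method names are assumptions): the management layer issues operating commands that reach the data layer only through the control layer, which also maintains topology and executes the commands.

```python
class DataLayer:
    def process_packet(self, packet, command):
        return f"forwarded {packet} per '{command}'"

class ControlLayer:
    def __init__(self, data_layer):
        self.data_layer = data_layer
        self.topology = {}                      # per-domain topology it maintains

    def execute(self, command, packet):
        # Routing establishment / command execution on behalf of management.
        return self.data_layer.process_packet(packet, command)

class ManagementLayer:
    def __init__(self, control_layer):
        self.control_layer = control_layer

    def operate(self, command, packet):
        # Management never touches the data layer directly.
        return self.control_layer.execute(command, packet)

mccn = ManagementLayer(ControlLayer(DataLayer()))
print(mccn.operate("prefer-domain-A-route", "interest:/video/seg1"))
```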
DATA STORAGE DEVICE
A memory system includes a plurality of volatile memory modules to temporarily store data in a distributed manner, a V storing place management unit included in each of the volatile memory modules, a plurality of nonvolatile memory modules to store the data stored in each of the volatile memory modules in a distributed manner, and a NV storing place management unit included in each of the nonvolatile memory modules. Each V storing place management unit and each NV storing place management unit communicate with each other to determine the destination nonvolatile memory module for each volatile memory module. The data is transmitted to the determined destination nonvolatile memory module and stored there.
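A minimal sketch of the handshake (illustrative, not the disclosed device; module names and the least-filled selection policy are assumptions): the management units agree on a destination nonvolatile module, and the buffered data is then transmitted to and stored in that module.

```python
class NonvolatileModule:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.stored = name, capacity, []

    def free_space(self):
        return self.capacity - len(self.stored)

class VolatileModule:
    def __init__(self, name):
        self.name, self.buffer = name, []

    def flush(self, nonvolatile_modules):
        """Determine a destination module, then transmit the buffered data to it."""
        dest = max(nonvolatile_modules, key=lambda m: m.free_space())
        dest.stored.extend(self.buffer)
        self.buffer.clear()
        return dest.name

nv = [NonvolatileModule("nv0", 4), NonvolatileModule("nv1", 8)]
v = VolatileModule("v0")
v.buffer = ["block-a", "block-b"]
print(v.flush(nv))        # "nv1" (assumed policy: the module with the most free space)
```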