
SYSTEM AND METHOD FOR DIRECT STORAGE ACCESS IN A CONTENT-CENTRIC NETWORK
20180011936 · 2018-01-11

One embodiment of the present invention provides a system for caching content data to a storage device attached to a node in a content-centric network (CCN). During operation, the system receives a content packet; forwards the content packet to an incoming port of an interest in the content packet; caches a copy of the content packet in the attached storage device; assembles a query corresponding to the content packet, which includes at least a network header and an address associated with the storage blocks at which the cached copy of the content packet is located; and stores the query in a cache table, thereby facilitating subsequent direct access to the storage blocks using the assembled query to retrieve the copy of the content packet.
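The cache-table idea described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: the class and field names (`BlockQuery`, `CacheTable`, `block_address`) are assumptions, standing in for a table that maps a content name to a pre-assembled query holding the network header and the storage-block location of the cached copy.

```python
# Hypothetical sketch of the described cache table: each entry pairs a
# content name with an assembled query (network header + storage-block
# address), so a later Interest can go straight to the storage blocks.
from dataclasses import dataclass

@dataclass
class BlockQuery:
    network_header: bytes   # header reused when serving the cached copy
    block_address: int      # first storage block holding the content
    block_count: int        # number of blocks occupied

class CacheTable:
    def __init__(self):
        self._table = {}    # content name -> BlockQuery

    def cache(self, name, network_header, block_address, block_count):
        """Record where a content packet was written on the attached storage."""
        self._table[name] = BlockQuery(network_header, block_address, block_count)

    def lookup(self, name):
        """Return the assembled query for cached content, or None on a miss."""
        return self._table.get(name)

# Usage: after writing the packet for "/videos/a" to blocks 128..131,
# store the query; a later request for the same name hits the table.
table = CacheTable()
table.cache("/videos/a", b"\x01\x02", block_address=128, block_count=4)
q = table.lookup("/videos/a")
```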

METHOD OF FORWARDING DATA PACKETS, METHOD OF CREATING MERGED FIB KEY ENTRY AND METHOD OF CREATING A SEARCH KEY
20170366457 · 2017-12-21 ·

The method of creating a key entry includes inserting a routing instance identifier (RII) after at least a portion of a key entry of a routing instance (RI) FIB, in accordance with an encoding scheme. In other words, at least a portion of the bits of the RI FIB key entry is located before the bit(s) of the RII in the resulting, merged FIB key entry. Depending on the encoding scheme, the RII can be inserted at the end of the RI FIB key entry, or at an intermediary location within the RI FIB key entry (after a given number of bits). To form the merged FIB, the method is repeated multiple times on corresponding key entries of the RI FIB. There is also provided a method of creating a search key to look up the merged FIB.
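The bit-splicing described above can be illustrated with a small sketch. This is a hedged model under assumed conventions, not the patent's encoding scheme: bit strings are represented as Python strings of `'0'`/`'1'`, and the function names are invented.

```python
# Illustrative sketch: splice the RII into an RI FIB key entry at a
# position chosen by the encoding scheme (at the end, or after a given
# number of bits). The same scheme is applied to build a search key, so
# search keys line up with merged entries.
def merge_key_entry(ri_key_bits: str, rii_bits: str, insert_after: int) -> str:
    """Insert RII bits after `insert_after` bits of the RI FIB key entry."""
    if not 0 <= insert_after <= len(ri_key_bits):
        raise ValueError("insertion point outside the key entry")
    return ri_key_bits[:insert_after] + rii_bits + ri_key_bits[insert_after:]

def make_search_key(packet_key_bits: str, rii_bits: str, insert_after: int) -> str:
    """Build a search key with the same encoding scheme as the merged FIB."""
    return merge_key_entry(packet_key_bits, rii_bits, insert_after)

# RII "11" inserted after 4 bits of an 8-bit RI key entry (intermediary
# location), and the matching search key for a packet with the same bits:
merged = merge_key_entry("10100110", "11", insert_after=4)
search = make_search_key("10100110", "11", insert_after=4)
```

With `insert_after` equal to the key length, the same function covers the end-of-entry variant of the encoding scheme.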

System and method for direct storage access in a content-centric network
09836540 · 2017-12-05

One embodiment of the present invention provides a system for caching content data to a storage device attached to a node in a content-centric network (CCN). During operation, the system receives a content packet; forwards the content packet to an incoming port of an interest in the content packet; caches a copy of the content packet in the attached storage device; assembles a query corresponding to the content packet, which includes at least a network header and an address associated with the storage blocks at which the cached copy of the content packet is located; and stores the query in a cache table, thereby facilitating subsequent direct access to the storage blocks using the assembled query to retrieve the copy of the content packet.

CHOREOGRAPHED CACHING
20170346916 · 2017-11-30

A routing device capable of performing application-layer data caching is described. Application data caching at a routing device can alleviate the bottleneck that an application data host may experience during periods of high demand for application data. Requests for the application data can also be fulfilled faster by eliminating the network delays incurred in communicating with the application data host. The techniques described can also be used to perform analysis of the underlying application data in the network traffic transiting through a routing device.
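The hit/miss behavior described above can be sketched briefly. This is a minimal illustration with invented names (`CachingRouter`, `origin_fetch`), not the patent's design: requests are answered from a local cache when possible and forwarded to the application data host only on a miss.

```python
# Hypothetical sketch of application-layer caching at a routing device:
# a hit is served locally (no origin round trip); a miss falls back to
# the application data host and populates the cache.
class CachingRouter:
    def __init__(self, origin_fetch):
        self._cache = {}
        self._origin_fetch = origin_fetch  # callable: key -> data
        self.hits = 0
        self.misses = 0

    def request(self, key):
        if key in self._cache:
            self.hits += 1                  # fulfilled at the router
            return self._cache[key]
        self.misses += 1
        data = self._origin_fetch(key)      # forwarded to the data host
        self._cache[key] = data
        return data

router = CachingRouter(origin_fetch=lambda k: f"payload:{k}")
first = router.request("item1")    # miss: fetched from the origin host
second = router.request("item1")   # hit: served from the router's cache
```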

Packet routing apparatus, interface circuit and packet routing method
09832120 · 2017-11-28

A packet routing apparatus includes a plurality of interface units, each of which includes: a first memory in which a plurality of routing information entries used in routing a packet are stored; a second memory in which routing information entries are stored, the number of entries stored in the second memory being smaller than the number stored in the first memory; and a retrieval unit configured to detect the packet length of a packet received at any of a plurality of ports, perform routing based on the entries stored in the second memory when the detected packet length is less than a predetermined value, and perform routing based on the entries stored in the first memory when the detected packet length is greater than or equal to the predetermined value.
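The two-memory selection rule described above can be sketched as follows. The threshold value and table contents are invented for the example; this is an illustration of the selection logic, not the apparatus itself.

```python
# Illustrative sketch: short packets are looked up in a small table (the
# second memory), long packets in the full table (the first memory).
from typing import Optional

THRESHOLD = 512  # predetermined packet-length value (bytes), assumed

full_table = {"10.0.0.0/8": "port1", "10.1.0.0/16": "port2", "0.0.0.0/0": "port0"}
small_table = {"10.0.0.0/8": "port1"}  # subset: fewer entries than full_table

def route(prefix: str, packet_length: int) -> Optional[str]:
    """Pick the lookup memory by packet length, then resolve the next hop."""
    table = small_table if packet_length < THRESHOLD else full_table
    return table.get(prefix)
```

The design intuition is that a burst of short packets demands the highest lookup rate, so those lookups are confined to the smaller, faster memory.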

Network cache accelerator

A network host such as a caching device is disclosed that greatly increases the speed with which a server reads and writes data for clients. The host may include a specialized network interface that not only processes TCP but also parses received network file system headers, identifying those headers within the TCP data stream, separating them from any corresponding network file system data, and providing them separately from that data to the network file system of the host for processing as a batch, all without any interrupt to the host. Similarly, the network file system of the host may communicate directly with the network interface by writing network file system headers directly to a register of the network interface to transmit data.
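The header/data separation described above can be sketched with a toy record format. The framing here (4-byte header and data lengths) is invented for illustration and is not the network file system's actual wire format; the point is only the separation of headers, delivered as a batch, from the bulk data.

```python
# Hypothetical sketch: records in a TCP byte stream carry a small header
# followed by file data. The parser identifies each header, separates it
# from its data, and collects the headers into one batch for the host.
import struct

def split_stream(stream: bytes):
    """Split [hdr_len][data_len][header][data] records into a batch of
    headers and a list of data payloads."""
    headers, payloads = [], []
    offset = 0
    while offset < len(stream):
        hdr_len, data_len = struct.unpack_from("!II", stream, offset)
        offset += 8
        headers.append(stream[offset:offset + hdr_len])
        offset += hdr_len
        payloads.append(stream[offset:offset + data_len])
        offset += data_len
    return headers, payloads

record1 = struct.pack("!II", 3, 5) + b"hd1" + b"data1"
record2 = struct.pack("!II", 3, 5) + b"hd2" + b"data2"
headers, payloads = split_stream(record1 + record2)
```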

METHOD AND DEVICE FOR TRANSMITTING/RECEIVING DATA USING CACHE MEMORY IN WIRELESS COMMUNICATION SYSTEM SUPPORTING RELAY NODE

The present invention relates to a wireless communication system. More particularly, the present invention relates to a method for transmitting a content using a cache memory, and a method for transmitting, by a relay node, a content using a cache memory according to the present invention may comprise the steps of: storing a first content, received from a serving cell, in the cache memory; storing a second content, received from an adjacent cell or the serving cell, in the cache memory; selecting a content to be transmitted to a user equipment (UE) from among the first content requested by the UE and the second content which acts as interference to the first content; and transmitting the second content to the UE.

TECHNOLOGIES FOR NETWORK I/O ACCESS
20170289036 · 2017-10-05 ·

Technologies for accelerating non-uniform network input/output accesses include a multi-home network interface controller (NIC) of a network computing device communicatively coupled to a plurality of non-uniform memory access (NUMA) nodes, each of which includes an allocated number of processor cores of a physical processor package and an allocated portion of a main memory directly linked to the physical processor package. The multi-home NIC includes a logical switch communicatively coupled to a plurality of logical NICs, each of which is communicatively coupled to a corresponding NUMA node. The multi-home NIC is configured to facilitate the ingress and egress of network packets by determining a logical path for each network packet received at the multi-home NIC based on the relationship between one of the NUMA nodes and the logical NIC coupled to that NUMA node (e.g., to forward the network packet from the multi-home NIC). Other embodiments are described herein.
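The path-selection idea above can be sketched as a pair of lookups. All names and the flow-affinity table are assumptions for illustration; the point is choosing the logical NIC coupled to the NUMA node a packet's flow belongs to, avoiding a cross-node memory access.

```python
# Hypothetical sketch: map a packet's flow to its NUMA node, then to the
# logical NIC coupled to that node; unknown flows take a default path.
flow_to_numa_node = {"flow-a": 0, "flow-b": 1}   # assumed affinity table
numa_node_to_logical_nic = {0: "lnic0", 1: "lnic1"}

def logical_path(flow_id: str, default_nic: str = "lnic0") -> str:
    """Return the logical NIC to use for a packet of the given flow."""
    node = flow_to_numa_node.get(flow_id)
    if node is None:
        return default_nic       # no affinity known: fall back
    return numa_node_to_logical_nic[node]
```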

Aliasing of named data objects and named graphs for named data networks

A method is provided for aliasing of named data objects and other entities (e.g., named graphs) in named data networks. In various examples, aliasing of named data objects may be implemented in one or more named data networks in the form of systems, methods and/or algorithms. In other examples, named graphs may be implemented in one or more named data networks in the form of systems, methods and/or algorithms.

End-to-end cache for network elements

A method in a network element includes processing input packets using a set of two or more functions that are defined over parameters of the input packets. Each function in the set produces respective interim actions applied to the input packets and the entire set produces respective end-to-end actions applied to the input packets. An end-to-end mapping, which maps the parameters of at least some of the input packets directly to the corresponding end-to-end actions, is cached in the network element. The end-to-end mapping is queried with the parameters of a new input packet. Upon finding the parameters of the new input packet in the end-to-end mapping, an end-to-end action mapped to the found parameters is applied to the new input packet, without processing the new input packet using the set of functions.
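The end-to-end memoization described above can be sketched with a two-stage function chain. The stages, actions, and key choice are invented for the example; the mechanism shown is the one described: on a cache hit, the mapped end-to-end action is applied without running the set of functions at all.

```python
# Illustrative sketch: a set of two functions maps packet parameters to
# interim actions; the composed end-to-end action is cached keyed on the
# parameters, so repeat packets bypass the whole chain.
calls = {"stage": 0}             # counts stage executions, to show bypass

def classify(params):            # function 1: produce an interim action
    calls["stage"] += 1
    return "video" if params["port"] == 443 else "bulk"

def decide(traffic_class):       # function 2: produce the final action
    calls["stage"] += 1
    return "priority-queue" if traffic_class == "video" else "best-effort"

end_to_end_cache = {}            # parameters -> end-to-end action

def process(params):
    key = (params["src"], params["port"])
    if key in end_to_end_cache:
        return end_to_end_cache[key]       # hit: skip both functions
    action = decide(classify(params))      # miss: run the full set
    end_to_end_cache[key] = action
    return action

a1 = process({"src": "10.0.0.1", "port": 443})  # runs both stages
a2 = process({"src": "10.0.0.1", "port": 443})  # served from the cache
```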