H04L12/861

EGRESS FLOW MIRRORING IN A NETWORK DEVICE
20170339074 · 2017-11-23

A packet is received at a network device. The packet is processed by the network device to determine at least one egress port via which to transmit the packet, and to perform egress classification of the packet based at least in part on information determined for the packet during processing. Egress classification includes determining whether the packet should be dropped rather than transmitted by the network device. When egress classification does not determine that the packet should be dropped, a copy of the packet is generated for mirroring of the packet to a destination other than the determined at least one egress port, and the packet is enqueued in an egress queue corresponding to the determined at least one egress port. The packet is subsequently transferred to the determined at least one egress port for transmission.
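The classify-mirror-enqueue order described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the class and field names (EgressPipeline, blocked flag) are invented for the sketch, and classification is reduced to a simple per-packet flag.

```python
from collections import deque

class EgressPipeline:
    """Toy model of egress processing: classify, mirror, then enqueue."""

    def __init__(self, num_ports, mirror_destination):
        self.egress_queues = [deque() for _ in range(num_ports)]
        self.mirror_destination = mirror_destination  # e.g. an analyzer queue

    def egress_classify(self, packet):
        # Decide whether the packet should be dropped, using information
        # determined during processing (here: a simple per-packet flag).
        return packet.get("blocked", False)

    def process(self, packet, egress_port):
        if self.egress_classify(packet):
            return False  # dropped: neither mirrored nor transmitted
        # Mirror a copy to a destination other than the egress port...
        self.mirror_destination.append(dict(packet))
        # ...then enqueue the original for transmission via that port.
        self.egress_queues[egress_port].append(packet)
        return True
```

Note that a dropped packet produces no mirror copy, matching the abstract: mirroring happens only when classification does not call for a drop.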

Network traffic controller (NTC)

A Network Device (ND) may be configured to enable secure digital video streaming for HD (high-definition) digital video systems over a standard network. The ND may operate in at least one of two modes, a co-processor mode and a stand-alone mode, and may provide at least three high-level functions: network interface control (NIC), video streaming offload (VSO), and stand-alone video streaming (SVS). To execute the VSO functionality seamlessly, the ND may be configured with two network stacks running synchronously on a single network interface having a single network address. The two network stacks may share the data traffic, with the Host network stack acting as master and configuring the ND network stack to accept only specifically designated traffic, thus offloading some of the data processing to the processor in the ND. The ND network system may appear to the Host as an ordinary network controller, through which the user may configure the ND network driver to obtain or set the network address, configure the physical-layer link speed and duplex mode, configure the multicast filter settings, and obtain and clear the network-level statistics.
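The traffic-sharing arrangement between the two stacks can be sketched as a simple demultiplexer in which the ND stack gets first refusal on designated traffic. All names here (NdNetworkStack, dst_port, the port-based filter) are illustrative assumptions; the actual designation criteria are not specified in the abstract.

```python
class NdNetworkStack:
    """ND-side stack that accepts only traffic the Host designates."""

    def __init__(self):
        self.accepted_ports = set()
        self.handled = []

    def configure_filter(self, ports):
        # The Host (master) stack designates which traffic to offload.
        self.accepted_ports = set(ports)

    def offer(self, packet):
        # True if the ND stack consumed (offloaded) the packet.
        if packet["dst_port"] in self.accepted_ports:
            self.handled.append(packet)
            return True
        return False

def demultiplex(packet, nd_stack, host_packets):
    # One interface, one address: the ND stack gets first refusal;
    # everything else falls through to the Host stack.
    if not nd_stack.offer(packet):
        host_packets.append(packet)
```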

Dynamic resource allocation for distributed cluster-storage network

An apparatus, method, and computer program in a distributed cluster-storage network comprise: storage control nodes to write data to storage on request from a host; a forwarding layer at a first node to forward data to a second node; a buffer controller at each node to allocate buffers for data to be written; and a communication link between the buffer controller and the forwarding layer at each node to communicate a constrained or unconstrained status indicator of the buffer resource to the forwarding layer. A mode selector selects either a constrained mode of operation, which requires allocation of buffer resource at the second node and communication of the allocation before the first node can allocate buffers and forward data, or an unconstrained mode of operation, which grants use of a predetermined resource credit provided by the second node to the first node and permits forwarding of a write request together with its data.
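The two modes can be contrasted in a small sketch, assuming buffer counts as the resource unit. The class names and the per-write buffer accounting are invented for illustration; the abstract does not specify the credit granularity.

```python
class BufferController:
    """Second-node buffer controller with a finite buffer pool."""

    def __init__(self, free_buffers):
        self.free_buffers = free_buffers
        self.written = []

    def allocate(self, n):
        if self.free_buffers >= n:
            self.free_buffers -= n
            return True
        return False

    def receive(self, data):
        self.written.append(data)

class ForwardingLayer:
    """First-node forwarding layer switching between the two modes."""

    def __init__(self, remote):
        self.remote = remote
        self.constrained = True   # status indicator from the controller
        self.credit = 0           # pre-granted resource credit (buffers)

    def set_status(self, constrained, credit=0):
        self.constrained, self.credit = constrained, credit

    def forward_write(self, data, buffers_needed=1):
        if self.constrained:
            # Constrained mode: the second node must allocate and confirm
            # before the first node forwards the data.
            if not self.remote.allocate(buffers_needed):
                return False
        else:
            # Unconstrained mode: spend pre-granted credit and forward the
            # write request together with its data, with no round trip.
            if self.credit < buffers_needed:
                return False
            self.credit -= buffers_needed
        self.remote.receive(data)
        return True
```

The design trade-off the abstract implies: the constrained mode costs a confirmation round trip per write but never overruns the second node's buffers, while the unconstrained mode removes that round trip by prepaying with credit.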

Content filtering of remote file-system access protocols
09825988 · 2017-11-21

Methods and systems for content filtering of remote file-system access protocols are provided. According to one embodiment, a proxy, implemented within a network gateway device of a private network, monitors remote file-system access protocol sessions involving client computer systems and a server computer system associated with the private network. For each file on a share of the server computer system being accessed by one or more of the client computer systems: (i) a shared holding buffer corresponding to the file is created within a shared memory of the network gateway device; (ii) data being read from or written to the file by the monitored remote file-system access protocol sessions is buffered into the shared holding buffer; and (iii) responsive to a predetermined event, content filtering is performed on the shared holding buffer to determine whether malicious, dangerous or unauthorized content is contained within the shared holding buffer.
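Steps (i)-(iii) can be sketched as a per-file holding buffer keyed by share path. This is a simplified model with invented names (ShareProxy, on_file_io, on_event); real remote file-system access protocols such as SMB carry offsets and lengths per read/write operation, which is what the offset parameter stands in for.

```python
class ShareProxy:
    """Sketch: one shared holding buffer per file accessed on a share."""

    def __init__(self, scanner):
        self.buffers = {}        # file path -> bytearray (holding buffer)
        self.scanner = scanner   # content-filtering callback

    def on_file_io(self, path, offset, data):
        # (i) create the holding buffer on first access,
        # (ii) buffer data read from / written to the file at its offset.
        buf = self.buffers.setdefault(path, bytearray())
        if len(buf) < offset + len(data):
            buf.extend(b"\x00" * (offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data

    def on_event(self, path):
        # (iii) on a predetermined event (e.g. file close), run content
        # filtering over the assembled buffer.
        return self.scanner(bytes(self.buffers.get(path, b"")))
```

Buffering by offset means the filter sees the file content in order even when the protocol session delivers it in out-of-order chunks.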

Application-controlled network packet classification
09794196 · 2017-10-17

Embodiments of the present invention provide a system, method, and computer program product that enable applications transferring data packets over a network to a multi-processing system to choose how the data packets are processed, e.g., by allowing the applications to pre-assign connections to a particular network thread and to migrate a connection from one network thread to another without putting the connection into an inconsistent state.
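The pre-assign/migrate idea can be sketched as a connection-to-thread map under application control. All names (NetworkThreadPool, assign, migrate, dispatch) are invented for the sketch; the real mechanism would additionally synchronize with in-flight packets during migration, which the comment notes.

```python
class NetworkThreadPool:
    """App-controlled mapping of connections to network threads."""

    def __init__(self, num_threads):
        self.inboxes = [[] for _ in range(num_threads)]
        self.assignment = {}   # connection id -> network-thread index

    def assign(self, conn_id, thread_idx):
        # The application pre-assigns the connection to one thread.
        self.assignment[conn_id] = thread_idx

    def migrate(self, conn_id, new_idx):
        # Here migration is a single reassignment; a real system would
        # also drain in-flight packets so connection state stays consistent.
        self.assignment[conn_id] = new_idx

    def dispatch(self, packet):
        # Classify an incoming packet to the thread its connection owns.
        idx = self.assignment[packet["conn"]]
        self.inboxes[idx].append(packet)
        return idx
```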

Interconnect flow control

A communication technique which includes determining, at least in part by comparing data associated with a packet that has been pulled from a received packet queue with the highest sequence number among packets that have been placed in the received packet queue, that the received packet queue has space available to receive a further packet. Based at least in part on the determination, a next packet is sent to a receiver with which the received packet queue is associated.
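The space check reduces to a sequence-number subtraction, assuming cumulative sequence numbers where "placed minus pulled" counts packets still occupying the queue. The function names and depth parameter are illustrative, not from the abstract.

```python
def queue_has_space(last_pulled_seq, highest_placed_seq, queue_depth):
    # Packets placed but not yet pulled still occupy the queue;
    # space remains while that in-flight count is below the depth.
    return (highest_placed_seq - last_pulled_seq) < queue_depth

def maybe_send_next(last_pulled_seq, highest_placed_seq, queue_depth):
    # Send a next packet only when the comparison shows space available.
    if queue_has_space(last_pulled_seq, highest_placed_seq, queue_depth):
        return highest_placed_seq + 1   # sequence number of the next packet
    return None
```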

Method and a media device for pre-buffering media content streamed to the media device from a server system

The present disclosure relates to a method and a media device for pre-buffering media content streamed to the media device from a server system. The media device is connected to a network and has a rechargeable battery. The media device determines, by means of a bandwidth logic, an available network bandwidth and, by means of a charging logic, a charging level of the rechargeable battery. Based on these determinations, the media device selects a pre-buffering policy by means of a pre-buffering logic, and pre-buffers media content from the server system in accordance with the selected policy.
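A policy selection of this shape might look like the following sketch. The thresholds, policy names, and pre-buffer durations are entirely hypothetical; the abstract only says that both bandwidth and charging level feed the selection.

```python
def select_prebuffer_policy(bandwidth_mbps, battery_pct):
    """Pick a pre-buffering policy from bandwidth and battery level."""
    if battery_pct < 20:
        return "minimal"      # conserve energy regardless of bandwidth
    if bandwidth_mbps >= 10 and battery_pct >= 50:
        return "aggressive"   # bandwidth and charge both plentiful
    return "moderate"

def prebuffer_seconds(policy):
    # Hypothetical mapping from policy to seconds of content fetched ahead.
    return {"minimal": 5, "moderate": 30, "aggressive": 120}[policy]
```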

LONGEST QUEUE IDENTIFICATION

The present disclosure generally discloses a longest queue identification mechanism. The longest queue identification mechanism, for a set of queues of a buffer, may be configured to identify the longest queue of the set of queues and determine a length of the longest queue of the set of queues. The longest queue identification mechanism may be configured to identify the longest queue of the set of queues using only two variables including a longest queue identifier (LQID) variable for the identity of the longest queue and a longest queue length (LQL) variable for the length of the longest queue. It is noted that the identity of the longest queue of the set of queues may be an estimate of the identity of the longest queue and, similarly, that the length of the longest queue of the set of queues may be an estimate of the length of the longest queue.
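The two-variable mechanism can be sketched directly from the abstract. Per-queue occupancy counters are assumed to exist already (the buffer maintains them anyway); the LQID/LQL pair is the only extra state. The sketch also shows why the result is an estimate: when the tracked queue drains, another queue can silently become longer until its own next enqueue triggers a comparison.

```python
class LongestQueueTracker:
    """Estimates the longest queue of a set using only LQID and LQL."""

    def __init__(self, num_queues):
        self.lengths = [0] * num_queues  # per-queue occupancy (in buffer)
        self.lqid = 0   # estimated identity of the longest queue
        self.lql = 0    # estimated length of the longest queue

    def enqueue(self, q):
        self.lengths[q] += 1
        # A queue that grows past the current estimate takes over.
        if self.lengths[q] > self.lql:
            self.lqid, self.lql = q, self.lengths[q]

    def dequeue(self, q):
        self.lengths[q] -= 1
        # Only the tracked queue's shrinkage updates the estimate.
        if q == self.lqid:
            self.lql -= 1
```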

PARALLEL PROCESSING APPARATUS AND METHOD FOR CONTROLLING COMMUNICATION
20170293589 · 2017-10-12

A packet transmitting unit transmits a packet to a node via RDMA communication, the packet having added to it a first identifier that represents a predetermined process and, as a destination, a second identifier that is a logical identifier representing a destination communication interface. A plurality of communication interfaces exist. A packet receiving unit receives a packet transmitted from the node via RDMA communication, selects, based on the first identifier and the second identifier added to the received packet, a communication interface that is the destination of the received packet and is used in the predetermined process, and transfers the received packet to the selected communication interface.
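The receive-side selection amounts to a lookup keyed by the two added identifiers. This is a toy model: the class names, the dictionary routing table, and the example identifiers are all invented, and real RDMA transfers would not pass packets as Python dictionaries.

```python
class CommInterface:
    """One of the node's several physical communication interfaces."""

    def __init__(self, name):
        self.name = name
        self.received = []

    def transfer(self, packet):
        self.received.append(packet)

class PacketReceivingUnit:
    """Selects the destination interface from the two added identifiers."""

    def __init__(self, routing):
        # routing: (first_id, second_id) -> CommInterface, where the first
        # identifier names the process and the second is the logical
        # identifier of the destination communication interface.
        self.routing = routing

    def receive(self, packet):
        iface = self.routing[(packet["first_id"], packet["second_id"])]
        iface.transfer(packet)
        return iface
```

Keeping the second identifier logical decouples what the sender addresses from which physical interface the receiver actually binds to the process.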

Bypass FIFO for multiple virtual channels
09824058 · 2017-11-21

A group of low-level FIFOs may be logically bound together to form a super-FIFO. The super-FIFO may treat each low-level FIFO as a storage location. The super-FIFO may enable a push to (or a pop from) every low-level FIFO, simultaneously. The super-FIFO may enable a virtual channel (VC) to use the super-FIFO, bypassing a VC FIFO for the VC, removing several cycles of latency otherwise needed for enqueuing and dequeuing messages in the VC FIFO. In addition, the super-FIFO may enable bypassing of an arbiter, further reducing latency by avoiding a penalty of the arbiter.
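A software sketch of the structure might look like the following. Cycle-accurate hardware behavior (and the arbiter bypass) is not modeled; the names SuperFifo, push_all, pop_all, and vc_send are invented, and the bypass condition is simplified to "VC FIFO empty" so ordering is preserved.

```python
from collections import deque

class SuperFifo:
    """A group of low-level FIFOs logically bound into one super-FIFO."""

    def __init__(self, lanes):
        self.fifos = [deque() for _ in range(lanes)]

    def push_all(self, items):
        # One push per low-level FIFO in the same "cycle" (None = no push).
        for fifo, item in zip(self.fifos, items):
            if item is not None:
                fifo.append(item)

    def pop_all(self):
        # One pop from every non-empty low-level FIFO, simultaneously.
        return [f.popleft() if f else None for f in self.fifos]

def vc_send(vc_fifo, super_lane, message):
    # Bypass path: when the VC FIFO is empty the message goes straight
    # into the super-FIFO lane, skipping VC-FIFO enqueue/dequeue cycles.
    if vc_fifo:
        vc_fifo.append(message)
        return "queued"
    super_lane.append(message)
    return "bypassed"
```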