H04L47/6225

System and method for enabling secure web access
20230122746 · 2023-04-20 ·

The present invention relates to networking technologies; specifically, the disclosed invention enables clients or customers to anonymously access the web using a privately held web device. Web client identity hiding has multiple commercial uses in the internet community, such as protecting computer privacy, facilitating web scraping activities, and allowing geo-blocking to be bypassed. The object of the present invention is to hide web identity, which in turn requires hiding the web client's IP address, since an IP address uniquely identifies a web client. The present invention addresses some of the problems of hiding internet (IP) identity by enabling identity hiding using an approach that is an alternative to the Virtual Private Network (VPN) method. Specifically, the disclosed invention provides a web service that hides the internet (IP) identity and geolocation of a web client or customer from the operator/owner of a website.

Fair Arbitration Between Multiple Sources Targeting a Destination
20230144797 · 2023-05-11 ·

A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
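The simple round-robin variant can be sketched as follows; the class name, the rotation list, and the tie-breaking logic are assumptions for illustration, not the patented hardware design. Each buffer's head packet carries a source identifier, and the arbiter serves whichever head packet belongs to the least recently served source:

```python
from collections import deque

class TwoBufferArbiter:
    """Minimal sketch of round-robin arbitration across two ingress
    buffers, where the second buffer aggregates packets from many
    sources and the head packet's source id drives the rotation."""

    def __init__(self):
        self.buf1 = deque()   # packets from the source(s) feeding buffer 1
        self.buf2 = deque()   # packets from many sources, as (src, payload)
        self.served = deque() # rotation of recently served source ids

    def enqueue(self, buffer, src, payload):
        (self.buf1 if buffer == 1 else self.buf2).append((src, payload))

    def select(self):
        # Candidates are the head packets of each non-empty buffer,
        # identified by the source id found in the packet itself.
        candidates = []
        if self.buf1:
            candidates.append((self.buf1[0][0], self.buf1))
        if self.buf2:
            candidates.append((self.buf2[0][0], self.buf2))
        if not candidates:
            return None

        # Prefer the source served least recently (-1 = never served).
        def age(c):
            src, _ = c
            return self.served.index(src) if src in self.served else -1

        src, buf = min(candidates, key=age)
        if src in self.served:
            self.served.remove(src)
        self.served.append(src)
        return buf.popleft()
```

With one source feeding buffer 1 and two sources sharing buffer 2, the arbiter interleaves all three sources rather than letting buffer 2's aggregate traffic dominate.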

Adaptive multi-service data framing
09847945 · 2017-12-19 ·

When the signal-to-noise ratio affecting radio communication becomes sufficiently low, the data transmission rate is responsively decreased in compensation, thereby increasing the effective signal-to-noise ratio of the communication link. Data for multiple different services is transmitted in data packets between two radios. By allocating one part, or time slot, of a data packet's payload to one service and another part, or time slot, to another service, communication sessions for multiple services can be maintained concurrently. Services are prioritized relative to each other. If the signal-to-noise ratio becomes too low, data packet portions related to lower-priority services can be omitted from some data packets before those data packets are transmitted. The data remaining in a packet can then be sent at a reduced transmission rate without causing the quality of service for the remaining services to fall below the minimum required level.
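The slot-dropping step can be sketched as a small frame builder; the slot fields, the SNR floor, and the halved capacity are assumptions of this sketch, since the abstract does not fix the thresholds:

```python
def build_frame(slots, snr_db, snr_floor_db=10.0, capacity=4):
    """Assemble one packet payload from per-service slots.

    slots: list of {"service": str, "priority": int, "data": bytes};
    a lower priority number means a more important service (assumed).
    Returns (kept_slots, rate_mode).
    """
    ordered = sorted(slots, key=lambda s: s["priority"])
    if snr_db < snr_floor_db:
        # Omit lower-priority slots so the remainder can be sent
        # at a reduced transmission rate.
        keep = max(1, capacity // 2)
        return ordered[:keep], "reduced"
    return ordered[:capacity], "nominal"
```

Under a healthy link every service keeps its slot; when the SNR falls below the floor, only the highest-priority slots survive into the transmitted frame.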

Messaging system thread pool
09847950 · 2017-12-19 ·

A thread pool of consumers polls the existing queues, and a thread manager controls the number of active threads. This approach caps the number of threads while still keeping up with the volume of traffic.

SENDING AND RECEIVING MESSAGES INCLUDING TRAINING DATA USING A MULTI-PATH PACKET SPRAYING PROTOCOL
20230198914 · 2023-06-22 ·

Systems and methods for sending and receiving messages, including training data, using a multi-path packet spraying protocol are described. A method includes segmenting a message into a set of data packets comprising training data. The method further includes initiating transmission of the set of data packets to a receiving node. The method further includes spraying the set of data packets across the switch fabric in accordance with the multi-path spraying protocol such that, depending upon a value of a fabric determination field associated with a respective data packet, the respective data packet can traverse any one of a plurality of paths offered by the switch fabric for a connection between the sending node and the receiving node. The method further includes initiating transmission of synchronization packets to the receiving node, where, unlike the set of data packets, the synchronization packets are not sprayed across the switch fabric.
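The segmentation and path-selection steps can be sketched as follows; treating the fabric determination field as a per-packet integer and pinning synchronization packets to path 0 are assumptions of this sketch:

```python
def segment(message, mtu):
    """Split a message into MTU-sized data packets."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def pick_path(fabric_field, num_paths, sprayed=True):
    """Choose a fabric path for one packet.

    Sprayed data packets may traverse any of the available paths,
    selected here by the fabric determination field; synchronization
    packets bypass spraying and always take a single fixed path.
    """
    if not sprayed:
        return 0  # sync packets are not sprayed
    return fabric_field % num_paths
```

Varying the fabric determination field across the packets of one message spreads them over all paths, while sync packets stay ordered on one path.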

Shared traffic manager

A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot.
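The score-based selection among read-instruction queues can be sketched as below; the abstract leaves the scoring mechanism open, so the scoring function and instruction fields here are assumptions:

```python
from collections import deque

def select_read(instruction_queues, score):
    """Pick the next read instruction for one buffer bank.

    instruction_queues: per-scheduler deques of read instructions.
    score: callable rating a head-of-queue instruction; the highest
    score wins (a stand-in for the patent's selection logic).
    """
    heads = [(q[0], q) for q in instruction_queues if q]
    if not heads:
        return None
    instr, q = max(heads, key=lambda h: score(h[0]))
    q.popleft()
    return instr
```

Scoring by instruction age, for example, makes the shared traffic manager drain the oldest pending read for the bank in each time slot.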

Message processing using dynamic load balancing queues in a messaging system

A system, method, and computer-readable medium are disclosed for dynamically managing message queues to balance processing loads in a message-oriented middleware environment. A first source message associated with a first target is received, followed by generating a first dynamic load balancing message queue when a first message queue associated with the first target is determined to not be optimal. The first dynamic load balancing message queue is then associated with the first target, followed by enqueueing the first source message to the first dynamic load balancing message queue for processing by the first target.
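A minimal sketch of the enqueue flow, assuming "not optimal" means the target's current queue exceeds a depth threshold (the threshold and class names are illustrative):

```python
class QueueManager:
    """When a target's current queue is over a depth threshold, create a
    dynamic load-balancing queue, associate it with the target, and
    enqueue the source message there instead."""

    def __init__(self, depth_threshold=100):
        self.depth_threshold = depth_threshold
        self.queues = {}  # target -> list of queues (lists of messages)

    def enqueue(self, target, message):
        qs = self.queues.setdefault(target, [[]])
        current = qs[-1]
        if len(current) >= self.depth_threshold:  # queue deemed not optimal
            current = []                          # dynamic LB queue
            qs.append(current)
        current.append(message)
        return len(qs) - 1                        # index of the queue used
```

Consumers for the target then drain all of its associated queues, so the new queue relieves the overloaded one transparently.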

Virtual network device
11245645 · 2022-02-08 ·

A virtual network device increases the effective number of local physical ports by converting each of the local physical ports into a plurality of virtual local physical ports, and the effective number of network physical ports by converting each of the network physical ports into a plurality of virtual network physical ports.
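The port multiplication can be illustrated with a trivial mapping; the naming scheme for virtual ports is an assumption of this sketch:

```python
def virtualize_ports(physical_ports, fan_out):
    """Convert each physical port into fan_out virtual ports,
    multiplying the device's effective port count."""
    return {p: [f"{p}.v{i}" for i in range(fan_out)]
            for p in physical_ports}
```

The same conversion applies on both sides, local and network, so a device with N physical ports presents N × fan_out usable ports.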

Predictable virtualized NIC

A method for controlling congestion in a datacenter network or server is described. The server includes a processor configured to host a plurality of virtual machines and an ingress engine configured to maintain a plurality of per-virtual-machine queues that store received packets. The processor is also configured to execute a CPU-fair fair queuing process to control the processing of the packets by the processor. The processor is further configured to selectively trigger temporary packet-per-second transmission limits, on top of a substantially continuously enforced bit-per-second transmission limit, upon detection of a per-virtual-machine queue overload.
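The layered limits can be sketched with per-interval counters; the one-second accounting window, the overload threshold, and the class name are assumptions of this sketch:

```python
class VmRateLimiter:
    """A bit-per-second budget is always enforced; a packet-per-second
    cap is switched on only while the VM's ingress queue is overloaded."""

    def __init__(self, bps_limit, pps_limit, overload_depth):
        self.bps_limit = bps_limit
        self.pps_limit = pps_limit
        self.overload_depth = overload_depth
        self.bits_used = 0
        self.pkts_used = 0

    def tick(self):
        """Reset counters at the start of each one-second interval."""
        self.bits_used = 0
        self.pkts_used = 0

    def admit(self, packet_bits, queue_depth):
        # The temporary pps cap engages while the per-VM queue is overloaded.
        pps_active = queue_depth > self.overload_depth
        if self.bits_used + packet_bits > self.bps_limit:
            return False
        if pps_active and self.pkts_used + 1 > self.pps_limit:
            return False
        self.bits_used += packet_bits
        self.pkts_used += 1
        return True
```

With the queue healthy, only the bit budget constrains admission; once the queue depth crosses the threshold, small packets can no longer evade the limiter by staying under the bit budget.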

Packet arbitration for buffered packets in a network device

Devices and techniques for packet arbitration for buffered packets in a network device are described herein. A packet can be received at an input of the network device. The packet can be placed in a buffer for the input and a characteristic of the packet can be obtained. A record for the packet, that includes the characteristic, is written into a data structure that is independent of the buffer. Arbitration, based on the characteristic of the packet in the record, can then be performed among multiple packets to select a next packet from the buffer for delivery to an output.
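The record-versus-buffer split can be sketched as follows; using a priority value as the packet characteristic and comparing only head-of-line records are assumptions of this sketch:

```python
from collections import deque

class RecordArbiter:
    """Packets sit untouched in per-input buffers while a compact record
    of each packet's characteristic is kept in a separate structure;
    arbitration reads only the records, never the buffered packets."""

    def __init__(self, num_inputs):
        self.buffers = [deque() for _ in range(num_inputs)]
        self.records = [deque() for _ in range(num_inputs)]

    def receive(self, input_id, packet, priority):
        self.buffers[input_id].append(packet)
        self.records[input_id].append(priority)

    def next_packet(self):
        # Compare only the head-of-line records across inputs;
        # the lowest priority value wins (an assumed policy).
        heads = [(recs[0], i) for i, recs in enumerate(self.records) if recs]
        if not heads:
            return None
        _, input_id = min(heads)
        self.records[input_id].popleft()
        return self.buffers[input_id].popleft()
```

Because the records mirror buffer order, the winning record always corresponds to the head packet of its input, so the buffer itself is touched only on delivery.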