Patent classifications
H04L49/9089
System and method of a high buffered high bandwidth network element
A method and apparatus of a network element that processes a packet are described. In an exemplary embodiment, the network element receives, with a packet switch unit, a packet that includes a destination address, wherein the packet was received by the network element on an ingress interface. The network element further determines whether the packet is to be stored in an external queue. In addition, the network element identifies the external queue for the packet based on one or more characteristics of the packet. The network element additionally forwards the packet to a packet storage unit, wherein the packet storage unit includes storage for the external queue. Furthermore, the network element receives the packet back from the packet storage unit and forwards it to an egress interface corresponding to the external queue.
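The ingress-side flow in this abstract can be sketched as a small model: classify a packet, spill it to an external queue selected by its characteristics, then drain that queue toward the egress interface. The class name, the size-based spill rule, and the `(egress_iface, traffic_class)` queue key are illustrative assumptions, not the patent's actual design.

```python
from collections import defaultdict, deque

class PacketSwitch:
    """Toy model of the claimed flow: classify, buffer externally, then egress."""

    def __init__(self, external_threshold):
        self.external_threshold = external_threshold  # bytes before spilling out
        self.internal = deque()
        self.external = defaultdict(deque)  # keyed by (egress_iface, traffic_class)

    def needs_external_queue(self, packet):
        # assumed rule: spill large packets to the packet storage unit
        return packet["size"] >= self.external_threshold

    def external_queue_for(self, packet):
        # identify the queue from one or more packet characteristics
        return (packet["egress_iface"], packet["traffic_class"])

    def ingress(self, packet):
        if self.needs_external_queue(packet):
            self.external[self.external_queue_for(packet)].append(packet)
        else:
            self.internal.append(packet)

    def drain_external(self, egress_iface, traffic_class):
        # packets come back from the storage unit toward the egress interface
        q = self.external[(egress_iface, traffic_class)]
        return [q.popleft() for _ in range(len(q))]
```

For example, a 1500-byte packet with a 1000-byte threshold lands in the external queue for its egress interface, while a 64-byte packet stays in the internal buffer.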
Method and apparatus for transmitting and receiving paging message in mobile communication system
The present invention relates to a method and an apparatus for transmitting and receiving a paging message in a mobile communication system and, more particularly, to a method and an apparatus for determining a priority among paging messages at a wireless access point and paging a terminal. A paging method of a wireless access point in a mobile communication system according to an embodiment of the present invention comprises the steps of: receiving a plurality of paging messages from a core network; determining a priority among the plurality of paging messages; and transmitting a paging message to a terminal on the basis of the determined priority, wherein information on a number of paging attempts included in the paging messages is used for determining the priorities. The present disclosure relates to a 5G or pre-5G communication system to be provided for supporting higher data rates beyond a 4G communication system such as LTE.
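The prioritization step can be sketched in a few lines: the abstract says the number of paging attempts carried in each message is used to rank them. The assumption here (that more prior attempts means higher urgency) and the field names are illustrative, not taken from the claims.

```python
def prioritize_paging(messages):
    """Order paging messages so that ones with more prior attempts page first.

    Each message is a dict with at least 'ue_id' and 'attempts'. Treating a
    higher attempt count as higher priority is an assumed policy; the abstract
    only states that the attempt count is used in the determination.
    """
    return sorted(messages, key=lambda m: m["attempts"], reverse=True)
```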
TECHNOLOGIES FOR PACKET FORWARDING ON INGRESS QUEUE OVERFLOW
Technologies for packet forwarding under ingress queue overflow conditions include a computing device configured to receive a network packet from another computing device, determine whether a global packet buffer of the network interface controller (NIC) is full, and determine, in response to a determination that the global packet buffer is full, whether to forward all the global packet buffer entries. The computing device is additionally configured to compare, in response to a determination not to forward all the global packet buffer entries, a selection filter to one or more characteristics of the received network packet, and to forward, in response to a determination that the selection filter matches the one or more characteristics of the received network packet, the received network packet to a predefined output. Other embodiments are described herein.
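The decision chain described above can be condensed into one function. The function name, the dict-based filter match, and the return values are hypothetical stand-ins; the abstract does not fix an API.

```python
def forward_on_overflow(packet, buffer_full, forward_all,
                        selection_filter, predefined_output, default_output):
    """Decide where a received packet goes under (possible) buffer overflow.

    packet: dict of packet characteristics (assumed representation).
    selection_filter: dict of characteristic -> required value.
    Returns an output name, or None to model dropping the packet.
    """
    if not buffer_full:
        return default_output       # normal path: buffer has room
    if forward_all:
        return predefined_output    # forward all buffer entries
    # selective path: forward only packets matching the selection filter
    if all(packet.get(k) == v for k, v in selection_filter.items()):
        return predefined_output
    return None
```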
SERVICE OVERLOAD ATTACK PROTECTION BASED ON SELECTIVE PACKET TRANSMISSION
A first computing system receives a user request that includes a first set of data. The first computing system determines that one or more resources have exceeded at least one resource utilization threshold. In response to determining that the one or more resources have exceeded the at least one utilization threshold, a first data transfer rate is reduced to a second data transfer rate by transmitting only a first subset of the first set of data to one or more host devices, wherein a second subset of the first set of data is not transmitted to the one or more host devices. The one or more host devices validate the user request against one or more security policies in order to complete or terminate the user request.
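The selective-transmission idea can be sketched as splitting the request data when utilization crosses a threshold. The `keep_fraction` knob standing in for the reduced transfer rate, and the leading-subset choice, are assumptions for illustration only.

```python
def select_subset(request_data, utilization, threshold, keep_fraction):
    """Split a request's data into (transmitted, withheld) parts.

    utilization: dict of resource name -> current utilization (0..1).
    If any resource exceeds the threshold, only a leading fraction of the
    data is transmitted to the host devices; the rest is withheld.
    """
    if max(utilization.values()) <= threshold:
        return request_data, []          # no overload: send everything
    cut = max(1, int(len(request_data) * keep_fraction))
    return request_data[:cut], request_data[cut:]
```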
INFORMATION PROCESSING APPARATUS AND DISTRIBUTED PROCESSING METHOD
An information processing apparatus includes a memory and a processor coupled to the memory. The processor is configured to identify a reducible message string by using state transition information. The reducible message string is used to reduce a message string held in a message queue. The state transition information indicates a relationship between a message for executing a service and transition of a state of the service. The processor is configured to detect the reducible message string included in the message string. The processor is configured to reduce the message string held in the message queue by using the reducible message string.
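One way to read the reduction step is as loop elimination over the service's state machine: a run of messages whose net state transition is the identity has no effect and can be removed. This interpretation, and the `(state, message) -> next_state` encoding, are assumptions of this sketch.

```python
def reduce_messages(queue, transitions, start_state):
    """Remove reducible message strings from a message queue.

    transitions: dict mapping (state, message) -> next state (the 'state
    transition information'). Whenever a run of messages returns the service
    to a state it was already in, that run is treated as reducible and dropped.
    """
    out, states = [], [start_state]
    for msg in queue:
        nxt = transitions[(states[-1], msg)]
        out.append(msg)
        states.append(nxt)
        if nxt in states[:-1]:
            # the messages since the earlier visit form a no-op loop
            i = states.index(nxt)
            del out[i:]
            del states[i + 1:]
    return out
```

For a toggle-style service, `["start", "stop", "start"]` reduces to just `["start"]`: the first start/stop pair returns the service to its initial state.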
Communication apparatus with multiple buffers and control thereof
A packet communication apparatus is configured to relay packets transmitted and received between information processing apparatuses. The packet communication apparatus includes: a network interface connectable to a network; a CPU that is a destination of at least one of a plurality of packets received through the network interface; a first buffer configured to hold the packets destined for the CPU in order to output them to the CPU; a second buffer having a plurality of planes and configured to hold, in one of the plurality of planes, copies of the CPU-destined packets held in the first buffer; and a reception history controller configured to store a copy of a packet to a specified plane of the second buffer, or to save copies of packets held in the second buffer to another storage area, based on usage of the first buffer.
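The reception-history mechanism can be modeled as a controller that mirrors each CPU-bound packet into the current plane of a second buffer and, when the first buffer's usage crosses a mark, saves that plane out and rotates to the next one. The capacity-triggered rotation and the `spill` callback standing in for "another storage area" are invented for illustration.

```python
class ReceptionHistory:
    """Toy model of a two-buffer design with a reception history controller."""

    def __init__(self, planes, first_buffer_capacity, spill):
        self.planes = [[] for _ in range(planes)]  # second buffer's planes
        self.current = 0
        self.capacity = first_buffer_capacity
        self.first_buffer = []
        self.spill = spill  # callable modeling the other storage area

    def receive(self, packet):
        self.first_buffer.append(packet)
        self.planes[self.current].append(dict(packet))  # keep a copy
        if len(self.first_buffer) >= self.capacity:
            # first buffer under pressure: save the copies elsewhere,
            # then switch to the next plane (assumed policy)
            self.spill(self.planes[self.current])
            self.planes[self.current] = []
            self.current = (self.current + 1) % len(self.planes)
            self.first_buffer.clear()
```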
INFORMATION PROCESSING APPARATUS, CONTROL SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
An information processing apparatus is provided. A control unit performs control to store, in a first operation mode, predetermined communication information used for communication with a device in a first memory, and to store, in a second operation mode different from the first operation mode, the predetermined communication information in a second memory whose capacity is smaller than that of the first memory. When shifting from the first operation mode to the second operation mode, the control unit changes a setting of the predetermined communication information such that the save area for packet data transferred to the second memory has a predetermined size.
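The mode-dependent placement can be sketched as a small planning function: pick the memory per mode and, in the second mode, clamp the packet-data save area to a fixed slot that fits the smaller memory. All names and the clamping rule here are assumptions, not the apparatus's actual behavior.

```python
def plan_storage(mode, first_mem, second_mem, predetermined_slot):
    """Return where the communication information lives and how large the
    packet-data save area is, for a given operation mode.

    first_mem / second_mem: memory capacities in bytes (second < first).
    predetermined_slot: fixed save-area size used in the second mode.
    """
    if mode == "first":
        # ample memory: the save area can use the first memory's capacity
        return {"memory": "first", "save_slot": first_mem}
    # second mode: smaller memory, so the save area has a predetermined size
    return {"memory": "second", "save_slot": min(predetermined_slot, second_mem)}
```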
Multi-destination traffic handling optimizations in a network device
When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease the enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused, and/or is likely to exacerbate, garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to handle multi-destination traffic more optimally, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, the data unit can no longer be dropped, and is linked to a replication queue for replication.
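The threshold-and-reverse behavior can be sketched as an admission probability that throttles multicast enqueues in proportion to the garbage-collection backlog and returns to normal once the backlog subsides. The probabilistic throttle and its scaling rule are illustrative assumptions; the abstract only says actions decrease the enqueue rate and are later reversed.

```python
def enqueue_decision(gc_backlog, threshold, is_multicast, base_admit_prob):
    """Return the admission probability for an arriving data unit.

    gc_backlog: current buffer space queued for garbage collection.
    Only multicast traffic is throttled, and only while the backlog
    exceeds the threshold; otherwise the action is (implicitly) reversed
    by returning the base probability.
    """
    if gc_backlog <= threshold or not is_multicast:
        return base_admit_prob
    # scale admission down as the backlog grows past the threshold
    return base_admit_prob * threshold / gc_backlog
```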