Patent classifications
H04L12/851
SYSTEM AND METHOD FOR MTU SIZE REDUCTION IN A PACKET NETWORK
Systems and methods are disclosed that reduce the maximum transmission unit (MTU) size associated with an output port when a jitter-sensitive packet flow utilizes the output port. This may reduce the amount of jitter introduced into the jitter-sensitive packet flow. In one embodiment, a method includes transmitting packets through an output port. The output port has an MTU size set to a first MTU value, and the packets have a size no greater than the first MTU value. The method further includes setting the MTU size to a second, smaller MTU value, and transmitting other packets through the output port. The other packets have a size no greater than the second MTU value and include packets from a jitter-sensitive packet flow. The method further includes subsequently setting the MTU size to an MTU value greater than the second MTU value.
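The MTU-switching behavior described above can be sketched in a few lines. This is an illustrative model, not the patent's implementation; the class and constant names (`OutputPort`, `DEFAULT_MTU`, `REDUCED_MTU`) and the specific MTU values are assumptions.

```python
# Hypothetical sketch of the MTU-reduction method; values are illustrative.
DEFAULT_MTU = 1500   # first MTU value
REDUCED_MTU = 576    # second, smaller MTU value

class OutputPort:
    def __init__(self):
        self.mtu = DEFAULT_MTU
        self.jitter_sensitive_flows = set()

    def flow_started(self, flow_id, jitter_sensitive):
        # Shrink the MTU while any jitter-sensitive flow uses this port,
        # bounding the serialization delay any single packet can add.
        if jitter_sensitive:
            self.jitter_sensitive_flows.add(flow_id)
            self.mtu = REDUCED_MTU

    def flow_ended(self, flow_id):
        self.jitter_sensitive_flows.discard(flow_id)
        if not self.jitter_sensitive_flows:
            # Subsequently restore a larger MTU value.
            self.mtu = DEFAULT_MTU

    def can_transmit(self, packet_size):
        # Packets must be no greater than the current MTU value.
        return packet_size <= self.mtu
```

The point of the smaller MTU is that a jitter-sensitive packet never waits behind a full-size frame being serialized on the port.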
ADAPTING CLASSIFIER PARAMETERS FOR IMPROVED NETWORK TRAFFIC CLASSIFICATION USING DISTINCT PRIVATE TRAINING DATA SETS
In one embodiment, a device in a first network receives traffic flow information regarding a plurality of traffic flows in the first network. The device labels the traffic flow information by associating classifier labels to the traffic flow information. The device receives a generic traffic classifier that was trained using a training data set that comprises labeled traffic flow information for a plurality of other networks and excludes the traffic flow information regarding the plurality of traffic flows in the first network. The device acclimates the generic traffic classifier to the first network using the labeled traffic flow information regarding the plurality of traffic flows in the first network.
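The "acclimation" step can be pictured as warm-started training: a classifier fitted on other networks' data is updated with the first network's locally labeled flows. The sketch below uses a simple perceptron-style linear model as a stand-in; the patent does not specify this model, so the update rule and all names are assumptions.

```python
# Illustrative sketch only: a generic linear classifier trained elsewhere is
# "acclimated" by a few perceptron-style passes over locally labeled flows.
def acclimate(weights, bias, local_flows, epochs=10, lr=0.1):
    """local_flows: list of (feature_vector, label) pairs, label in {0, 1}."""
    w = list(weights)
    b = bias
    for _ in range(epochs):
        for x, y in local_flows:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else 0
            err = y - pred                      # -1, 0, or +1
            if err:
                # Nudge the generic model toward the local labeling.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b
```

The key property mirrored here is that the generic model's training data excluded the first network's flows, so local fine-tuning is what adapts it to local traffic patterns.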
USING A MACHINE LEARNING CLASSIFIER TO ASSIGN A DATA RETENTION PRIORITY FOR NETWORK FORENSICS AND RETROSPECTIVE DETECTION
In one embodiment, a device in a network receives traffic data regarding one or more traffic flows in the network. The device applies a machine learning classifier to the traffic data. The device determines a priority for the traffic data based in part on an output of the machine learning classifier. The output of the machine learning classifier comprises a probability of the traffic data belonging to a particular class. The device stores the traffic data for a period of time that is a function of the determined priority for the traffic data.
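The mapping from classifier output to retention time can be sketched directly. The probability thresholds, priority names, and retention durations below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: map a classifier's class probability to a retention period.
RETENTION_DAYS = {"high": 365, "medium": 90, "low": 7}   # illustrative durations

def retention_priority(p_class):
    """Priority from the classifier's probability that the traffic
    belongs to a particular class (e.g. suspicious traffic)."""
    if p_class >= 0.8:
        return "high"
    if p_class >= 0.3:
        return "medium"
    return "low"

def retention_period_days(p_class):
    # Store the traffic data for a period of time that is a
    # function of the determined priority.
    return RETENTION_DAYS[retention_priority(p_class)]
```

High-probability flows are kept long enough for retrospective detection, while low-priority data ages out quickly to save storage.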
Source-based queue selection mechanism in the routing environment
The invention is directed to a method and system for selecting queues for source-based queuing in a packet router, requiring only one flow per destination route. The invention stores source interface information for each packet while it is being processed. The invention applies to packet routers including IP routers, Ethernet routers and Label Switched Routers (LSR).
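The mechanism can be sketched as follows: the router records each packet's ingress (source) interface while the packet is processed and uses that stored value to select the egress queue. The class and field names are illustrative assumptions.

```python
# Illustrative sketch of source-based queuing in a packet router.
class SourceBasedQueues:
    def __init__(self, num_queues):
        self.queues = [[] for _ in range(num_queues)]

    def enqueue(self, packet, source_interface):
        # The source-interface index stored with the packet during
        # processing selects the egress queue.
        self.queues[source_interface % len(self.queues)].append(packet)
```

Because the queue is chosen by source interface rather than by flow, the scheme works the same way for IP routers, Ethernet routers, and LSRs.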
System and method for wireless connected device prioritization in a vehicle
A method and system for managing electronic devices in a vehicle include establishing operable connections for computer communication between electronic devices and the vehicle in a vehicle network, and communicating device characteristics between each electronic device and the vehicle. The device characteristics are stored at a data storage device and include at least one or more data types capable of transmission between each electronic device and the vehicle. A priority is assigned to each electronic device based on the device characteristics and a priority rules set stored at the data storage device, and data transmission between each electronic device and the vehicle is controlled based on the priority.
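A minimal sketch of the priority assignment, assuming a rules set that scores devices by the data types they can transmit. The rule table, scores, and data-type names are hypothetical examples, not values from the patent.

```python
# Hypothetical priority rules set: data type -> score (illustrative values).
PRIORITY_RULES = {"safety_telemetry": 3, "navigation": 2, "media": 1}

def assign_priority(device_characteristics):
    """Priority = highest score among the data types the device can transmit."""
    return max(
        (PRIORITY_RULES.get(t, 0) for t in device_characteristics["data_types"]),
        default=0,
    )

def schedule(devices):
    # Control data transmission by serving devices in descending priority.
    return sorted(devices, key=assign_priority, reverse=True)
```

A safety-telemetry device would thus be served before a media-streaming phone when the vehicle network is contended.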
Partial information throttle based on compliance with an agreement
Systems and methods partially reduce performance or features of a user's electronic device if the user does not comply with an agreement. An agreement may specify tasks or activities to be performed, such as homework or chores, or required results such as grades. Partial throttling of the device when the user is not in compliance with the agreement may include, for example, disabling a subset of the apps or services on the device, slowing down the device or selected applications, denying access to selected information sources, limiting audio volume or display resolution, or limiting time on activities such as web browsing. The device may remain usable, but with reduced features or performance. Throttling actions may also be based on location, schedule, or environmental conditions. The system may reward compliance with the agreement by increasing performance, by re-enabling previously disabled applications, or by providing direct rewards such as money or credits.
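The compliance-to-throttle mapping can be sketched as a simple policy function. The specific actions, app names, and limit values below are illustrative assumptions.

```python
# Illustrative sketch: partial throttling based on agreement compliance.
def throttle_actions(compliant):
    if compliant:
        # Full performance; nothing disabled.
        return {"disabled_apps": set(), "cpu_limit": 1.0, "max_volume": 1.0}
    # Partial throttle: the device remains usable, with reduced features.
    return {
        "disabled_apps": {"games", "social"},   # disable a subset of apps
        "cpu_limit": 0.5,                       # slow down the device
        "max_volume": 0.3,                      # limit audio volume
    }
```

A fuller version would also factor in location, schedule, or environmental conditions when choosing the action set.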
TUNABLE LOW COST NETWORK
Aspects of the subject disclosure may include, for example, a method comprising providing services over a network to a device, and constructing device capability and usage profiles. A level of service quality for the device is adjusted by adjusting a latency criterion regarding connection of the device to the network; adjusting a speed of transmissions to or from the device; and altering a routing of transmissions to or from the device. The network can be partitioned so that the adjusted service quality level is provided by a network portion having a predetermined level of resources. The adjusted service quality level can comprise a first level while the device is active and a second level while the device is inactive; the first level is higher than the second level. The first and second levels are lower than a service quality level provided by another network portion. Other embodiments are disclosed.
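The active/inactive two-level scheme can be expressed as a small policy. The numeric quality levels are illustrative assumptions; the abstract only fixes their ordering (second level below first level, both below the other network portion's level).

```python
# Sketch of the two-level service quality for a tunable low-cost partition.
BASELINE_LEVEL = 10   # quality level of another (non-partitioned) network portion

def service_level(device_active, first_level=5, second_level=2):
    # First level applies while the device is active, the lower second
    # level while it is inactive; both are below the baseline.
    assert second_level < first_level < BASELINE_LEVEL
    return first_level if device_active else second_level
```

In practice the level would be realized by the three knobs the abstract names: the latency criterion, the transmission speed, and the routing of the device's traffic.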
TECHNIQUES FOR DECREASING MULTIPROTOCOL LABEL SWITCHING ENTROPY LABEL OVERHEAD
A method is provided in one embodiment and includes receiving at a network element an encapsulated packet and determining whether both an ECMP/LAG Existing (“ele”) flag and an Entropy Label Capability (“elc”) flag are set for an egress node of the packet in a Label Distribution Protocol (“LDP”) database of the network element. If both the ele and elc flags are set for the egress node of the packet in the LDP database, the method further includes determining whether the network element is an ingress node for the packet and, if the network element is the ingress node for the packet, pushing an Entropy Label (“EL”) and an Entropy Label Indicator (“ELI”) onto an MPLS stack of the packet.
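The flag-gated decision can be sketched directly. The dictionary-based LDP database and the stack representation are simplifications for illustration, not an MPLS implementation; label ordering on a real stack follows RFC 6790.

```python
# Hedged sketch of the entropy-label decision: push an EL and ELI only when
# the LDP database shows both "ele" and "elc" flags set for the egress node
# and this network element is the packet's ingress node.
def maybe_push_entropy_label(ldp_db, egress_node, is_ingress, mpls_stack, entropy_label):
    entry = ldp_db.get(egress_node, {})
    if entry.get("ele") and entry.get("elc") and is_ingress:
        mpls_stack.append("ELI")            # Entropy Label Indicator
        mpls_stack.append(entropy_label)    # Entropy Label
        return True
    return False
```

Gating on both flags is what reduces overhead: the labels are pushed only where an ECMP/LAG actually exists downstream and the egress can process them.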
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING DEVICE
An information processing method is executed by a processor included in a computer, the computer including a memory that stores a plurality of flow entries, in each of which a packet condition for choosing a packet, a processing content corresponding to the packet, and a type of the processing content are associated with one another. The information processing method includes: choosing, from the flow entries, one or more candidate flow entries each including a type different from the type included in a new flow entry, when storing the new flow entry; detecting, from among the one or more candidate flow entries, a competitive flow entry having a processing content different from that of the new flow entry, based on the packet condition; and notifying another information processing device coupled to the information processing device of a result of the detecting.
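The two-stage check can be sketched as a filter over the stored entries. The field names are assumptions, and matching packet conditions by equality is a simplification; a real implementation would test whether the conditions overlap.

```python
# Illustrative sketch of the conflict check performed when a new flow
# entry is stored. Field names ("type", "condition", "action") are assumed.
def detect_competitive(flow_entries, new_entry):
    # Stage 1: candidates are entries whose type differs from the new entry's.
    candidates = [e for e in flow_entries if e["type"] != new_entry["type"]]
    # Stage 2: competitive entries choose the same packets (here: identical
    # condition, as a simplification) but prescribe different processing.
    return [
        e for e in candidates
        if e["condition"] == new_entry["condition"]
        and e["action"] != new_entry["action"]
    ]
```

The returned list is what would be reported to the other information processing device as the result of the detecting.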
DATA TRAFFIC CONTROL
As an example, a method includes storing, in non-transitory memory, prioritization rules that establish a priority preference for egress of data traffic for a first location. The first location includes a first location apparatus to control egress of data traffic, and a second location apparatus at a second location, different from the first location, receives data traffic and cooperates with the first location apparatus to measure bandwidth with respect to the first location. The first location apparatus is coupled with the second location apparatus via at least one bidirectional network connection. The method also includes estimating capacity of the at least one network connection for the egress of data traffic with respect to the first location. The method also includes categorizing each packet in egress data traffic from the first location based on an evaluation of each packet with respect to the prioritization rules. The method also includes placing each packet in one of a plurality of egress queues associated with the at least one network connection at the first location apparatus according to the categorization of each respective packet and the estimated capacity. The method also includes sending the packets from the first location apparatus to the second location apparatus via a respective network connection according to a priority of the respective egress queue into which each packet is placed, such that the first location apparatus transmits at the estimated capacity for the egress of data traffic.
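The categorize-then-enqueue steps can be sketched compactly. The rule keys, queue count, and packet fields below are illustrative assumptions; capacity estimation is omitted for brevity.

```python
# Sketch of the prioritized-egress method: evaluate each packet against the
# prioritization rules, then place it in the matching egress queue.
PRIORITIZATION_RULES = {"voice": 0, "video": 1}   # category -> queue index
DEFAULT_QUEUE = 2                                  # best-effort traffic

def categorize(packet):
    return PRIORITIZATION_RULES.get(packet.get("traffic_class"), DEFAULT_QUEUE)

def enqueue_all(packets, num_queues=3):
    queues = [[] for _ in range(num_queues)]
    for p in packets:
        queues[categorize(p)].append(p)
    # Queues would then be drained in index (priority) order, at a rate
    # bounded by the estimated capacity of the network connection.
    return queues
```

Draining strictly by queue priority while pacing at the estimated capacity is what lets the first location apparatus fill the uplink without starving high-priority traffic.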