Patent classifications
H04L12/859
METHOD, SERVER, AND SYSTEM FOR DATA STREAM REDIRECTING
A method, server, and system for data stream redirection are provided. The method is applied to a service node and includes: redirecting a data stream based on a first global database, where the first global database includes at least one piece of destination address information and application identities corresponding to the at least one piece of destination address information; determining variance information of the first global database based on destination address information of data streams that traverse the service node within a first preset period and the application identities corresponding to that destination address information; uploading the variance information to a central node; updating, by the central node, the first global database based on variance information uploaded by at least one service node, to generate a second global database; and acquiring the second global database from the central node and redirecting the data stream based on the second global database.
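The delta-upload scheme in this abstract can be sketched as follows. This is a hypothetical illustration only (the function names, database shape, and merge policy are assumptions, not the patented implementation): each service node diffs its observed destination/application pairs against its copy of the global database, and the central node folds per-node deltas into a second database version.

```python
def compute_variance(global_db, observed):
    """Service node: entries seen in traffic that differ from the
    current global database ("variance information")."""
    return {dst: app for dst, app in observed.items()
            if global_db.get(dst) != app}

def merge_variance(global_db, *variances):
    """Central node: fold deltas from one or more service nodes
    into a second global database."""
    merged = dict(global_db)
    for variance in variances:
        merged.update(variance)
    return merged

# One service node's view of traffic during the first preset period:
first_db = {"10.0.0.1": "video", "10.0.0.2": "voip"}
observed = {"10.0.0.1": "video", "10.0.0.3": "gaming"}

variance = compute_variance(first_db, observed)
second_db = merge_variance(first_db, variance)
```

Uploading only the variance rather than the full observation keeps per-node traffic proportional to what actually changed in the period.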
PROVIDING PROCESS DATA TO A DATA RECORDER
A kernel driver on an endpoint uses a process cache to provide a stream of events associated with processes on the endpoint to a data recorder. The process cache can usefully provide related information about processes such as a name, type or path for the process to the data recorder through the kernel driver. Where a tamper protection cache or similarly secured repository is available, this secure information may also be provided to the data recorder for use in threat detection, forensic analysis and so forth.
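The enrichment step described above can be sketched as below. All names and the cache layout are invented for illustration; the patent describes a kernel-space cache, which a user-space dictionary only approximates in shape.

```python
# Hypothetical process cache keyed by PID, supplying the name/type/path
# metadata that enriches raw process events before recording.
process_cache = {
    1234: {"name": "sshd", "type": "daemon", "path": "/usr/sbin/sshd"},
}

recorded = []  # stands in for the data recorder's event stream

def record_event(pid, action):
    """Enrich a raw process event with cached metadata, then record it."""
    meta = process_cache.get(pid, {})
    event = {"pid": pid, "action": action, **meta}
    recorded.append(event)
    return event

event = record_event(1234, "exec")
```

The point of the cache is that the recorder receives events already carrying process context, rather than having to resolve PIDs after the fact, when the process may have exited.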
Communication control method
In a smartphone or a personal computer, when an application performs communication of which the user is unaware, the communication volume increases unintentionally, giving rise to problems in that a maximum communication volume is exceeded, the network bandwidth of a base station is congested, or communication being performed intentionally is disrupted. To solve these problems, provided is a communication control method used in a communication control device capable of communicating via a communication line using an application, the method including an application control step of switching a plurality of applications between a foreground state and a background state, and a communication control step of changing the network bandwidth allocation of an application in accordance with whether the switched application is in the foreground state or the background state.
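The control loop claimed here can be sketched as a simple allocation function. The fixed 75/25 split and the numbers are assumptions for illustration; the patent does not specify how the allocation amount is computed.

```python
FOREGROUND_SHARE = 0.75  # assumed fixed split, not from the patent

def allocate_bandwidth(total_kbps, apps):
    """apps: dict of app name -> 'foreground' | 'background'.
    Foreground apps split the larger share; background apps the rest."""
    fg = [a for a, s in apps.items() if s == "foreground"]
    bg = [a for a, s in apps.items() if s == "background"]
    alloc = {}
    for a in fg:
        alloc[a] = total_kbps * FOREGROUND_SHARE / max(len(fg), 1)
    for a in bg:
        alloc[a] = total_kbps * (1 - FOREGROUND_SHARE) / max(len(bg), 1)
    return alloc

alloc = allocate_bandwidth(10_000, {"browser": "foreground",
                                    "updater": "background"})
```

Re-running the function after an application switches state changes its allocation, which is the communication control step the abstract describes.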
Data flow processing method and device
This application provides a data flow processing method and a device. A host determines a priority corresponding to a first data flow to be sent to a switch, and adds the priority to the first data flow to generate a second data flow that includes the priority. The host sends the second data flow to the switch, so that the switch processes the second data flow according to its priority. Because the host assigns the priority, the switch does not need to determine whether the data flow is an elephant flow or a mouse flow, thereby saving hardware resources of the switch, and does not need to determine the priority of the data flow itself, allowing it to process the data flow in a timely manner.
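The division of labor above can be sketched as follows. The tagging format and queue structure are hypothetical; the point is only that the host stamps the priority so the switch can dequeue by it without classifying flows itself.

```python
import heapq

def tag_flow(payload, priority):
    """Host side: produce the 'second data flow' carrying the priority."""
    return {"priority": priority, "payload": payload}

class Switch:
    """Switch side: serve flows strictly by the priority they arrive with."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # preserves FIFO order within one priority level

    def receive(self, flow):
        # Lower number = higher priority.
        heapq.heappush(self._queue, (flow["priority"], self._seq, flow))
        self._seq += 1

    def process_next(self):
        return heapq.heappop(self._queue)[2]["payload"]

sw = Switch()
sw.receive(tag_flow("bulk-transfer", priority=5))
sw.receive(tag_flow("latency-sensitive", priority=1))
first = sw.process_next()
```

Even though the bulk transfer arrived first, the latency-sensitive flow is processed first, without the switch inspecting flow sizes.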
WIRELESS COMMUNICATION DEVICE AND METHOD
A wireless communication device is provided, which includes a transceiver circuit and a processor. The processor is configured to execute operations of: by an application layer, generating a packet, and generating a topic name of the packet and a priority order corresponding to the packet according to the packet; by the application layer, generating a port number according to the priority order and a first quality of service correspondence table, and establishing a first setting table to store the correspondence between the port number and the topic name; by a middleware layer, receiving the topic name from the application layer, and retrieving the corresponding port number by looking up the first setting table according to the topic name; and by the middleware layer, establishing a first communication connection from the wireless communication device to the base station according to the port number.
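The two table lookups can be sketched as below. The table contents and topic names are invented; the sketch only shows the claimed flow: priority resolves to a port via the QoS correspondence table, the topic/port pairing is recorded in the setting table, and the middleware layer resolves the topic back to a port when connecting.

```python
qos_table = {1: 7400, 2: 7401, 3: 7402}  # priority order -> port (hypothetical)
setting_table = {}                        # first setting table: topic -> port

def app_layer_publish(topic, priority):
    """Application layer: derive the port from the QoS correspondence
    table and record the topic-name/port pairing."""
    port = qos_table[priority]
    setting_table[topic] = port
    return port

def middleware_connect(topic):
    """Middleware layer: resolve the topic to its port via the setting
    table, then 'establish' the connection to the base station."""
    port = setting_table[topic]
    return ("base-station", port)

app_layer_publish("sensor/temperature", priority=2)
conn = middleware_connect("sensor/temperature")
```

Keeping the QoS mapping in the application layer lets the middleware stay agnostic of priorities; it only ever deals in topic names and ports.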
NETWORK FLOW CONTROL
Aspects of the present disclosure include a content delivery network (CDN) for delivering content associated with a plurality of different types of applications/devices. Using a CDN flow application, a plurality of network flow parameters are generated for content delivery unique to different types of applications or devices. The network flow parameters include customized data transmission rates. The network flow parameters include predetermined settings for transmission control protocol (TCP) connections between the CDN and devices using a TCP flow control mechanism. Upon receiving a content request, the CDN fulfills the content request based upon first network flow parameters. The network flow parameters may be adjusted for each of the plurality of different types of applications/devices. The network flow parameters may be generated based upon requests or based upon the performance of each of the plurality of applications/devices.
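The per-type parameter store described above might look like the following sketch. All parameter names and values are invented for illustration; a real CDN would hold far richer TCP settings per type.

```python
# Hypothetical flow parameters keyed by application/device type.
flow_parameters = {
    "mobile-video": {"rate_kbps": 2_000, "tcp_init_cwnd": 10},
    "iot-firmware": {"rate_kbps": 256,   "tcp_init_cwnd": 4},
}

def fulfill_request(device_type, content):
    """Serve a content request using the parameters for the requester's
    application/device type."""
    params = flow_parameters[device_type]
    return {"content": content, **params}

def adjust_rate(device_type, new_rate_kbps):
    """Tune one type's parameters, e.g. based on measured performance."""
    flow_parameters[device_type]["rate_kbps"] = new_rate_kbps

resp = fulfill_request("mobile-video", "clip.mp4")
adjust_rate("mobile-video", 1_500)
```

Because parameters are keyed by type rather than by request, an adjustment made for one type immediately applies to every later request of that type.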
Popularity-aware bitrate adaptation of linear programming for mobile communications
Embodiments provide popularity-based adaptive bitrate management of linear programming over constrained communications links. Embodiments can operate in context of a communications network communicating with multiple mobile client devices disposed in one or more transport craft. A number of channel offerings, including channels providing linear programming, can be made available via the communications network for consumption by the client devices. Embodiments can compute channel popularity scores for the channel offerings based on a predicted popularity, an estimated popularity, a measured popularity, etc. A bitrate can be determined for each (some or all) of the channel offerings based at least in part on its channel popularity score, so that more popular channel offerings can be communicated at higher bitrates. Determined-bitrate instances of the channel offerings can be obtained and/or generated, and delivered via the communications network, to the client devices for consumption.
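One minimal way to realize "more popular channels get higher bitrates" is a proportional split of the link budget by popularity score. This mapping is an assumption; the patent leaves the exact bitrate-determination function open.

```python
def assign_bitrates(popularity, total_kbps):
    """popularity: channel -> popularity score (predicted, estimated,
    or measured). Returns channel -> bitrate in kbps, proportional to
    each channel's share of the total score."""
    total_score = sum(popularity.values())
    return {ch: total_kbps * score / total_score
            for ch, score in popularity.items()}

rates = assign_bitrates({"news": 3, "sports": 6, "weather": 1}, 10_000)
```

The split conserves the constrained link's total capacity while letting the popularity scores drive the per-channel quality.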
Methods and apparatus for memory allocation and reallocation in networking stack infrastructures
Methods and apparatus for memory allocation and reallocation in networking stack infrastructures. Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel, and non-kernel). For example, unlike traditional “socket” based communication, disclosed embodiments can transfer data directly between the kernel and user space domains. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack). Due to this disclosed architecture, physical memory allocations (and deallocations) may be more flexibly implemented.
System for application aware rate-limiting using plug-in
A method, system, and computer-usable medium for web-application-aware rate-limiting. One embodiment of the system involves a computer-implemented method in which requests for a web application are received from a plurality of client entities. When the received requests are to be rate-limited, a rate-limiting identifier is requested from a plug-in respectively associated with the web application. The plug-in generates the rate-limiting identifier, which is unique to the web application, and sends it to the rate-limiting engine, which uses the rate-limiting identifier to rate-limit passing of the received requests to the web application. In some embodiments, the rate-limiting identifier is generated as a hash value that is independent of the IP address and header information of the client making the request.
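The identifier-plus-engine split can be sketched as follows. The hash inputs and the fixed-window counter are assumptions for illustration; the abstract specifies only that the identifier is a hash independent of client IP and headers.

```python
import hashlib

def plugin_identifier(app_name, request_path):
    """Plug-in side: an identifier unique to the web application,
    deliberately independent of the client's IP and headers."""
    return hashlib.sha256(f"{app_name}:{request_path}".encode()).hexdigest()

class RateLimiter:
    """Engine side: count requests per identifier against a fixed limit."""
    def __init__(self, limit):
        self.limit = limit
        self.counts = {}

    def allow(self, identifier):
        n = self.counts.get(identifier, 0) + 1
        self.counts[identifier] = n
        return n <= self.limit

engine = RateLimiter(limit=2)
ident = plugin_identifier("shop", "/checkout")
results = [engine.allow(ident) for _ in range(3)]
```

Because the identifier ignores client-specific data, all clients of one application share a single bucket, which is what makes the limiting application-aware rather than per-client.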
Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
Methods and apparatus for efficient data transfer within a user space network stack. Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel, and non-kernel). For example, unlike traditional “socket” based communication, disclosed embodiments can transfer data directly between the kernel and user space domains. Direct transfer reduces the per-byte and per-packet costs relative to socket based communication. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack).