Patent classifications
H04L47/28
Unique ID generation for sensors
Systems, methods, and computer-readable media are provided for generating a unique ID for a sensor in a network. Once the sensor is installed on a component of the network, the sensor can send attributes of the sensor to a control server of the network. The attributes of the sensor can include at least one unique identifier of the sensor or the host component of the sensor. The control server can determine a hash value using a one-way hash function and a secret key, send the hash value to the sensor, and designate the hash value as a sensor ID of the sensor. In response to receiving the sensor ID, the sensor can incorporate the sensor ID in subsequent communication messages. Other components of the network can verify the validity of the sensor using a hash of the at least one unique identifier of the sensor and the secret key.
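The keyed one-way hash described in this abstract can be illustrated with a standard HMAC construction. The following Python sketch is only one possible realization: the attribute names, the serialization scheme, and the choice of HMAC-SHA256 are assumptions, not details taken from the patent.

```python
import hashlib
import hmac

SECRET_KEY = b"control-server-secret"  # assumed shared secret held by the control server

def derive_sensor_id(attributes: dict) -> str:
    """Derive a sensor ID as a keyed one-way hash over the sensor's unique attributes."""
    # Canonicalize the attributes so both sides hash the same byte string.
    message = "&".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hmac.new(SECRET_KEY, message.encode(), hashlib.sha256).hexdigest()

def verify_sensor(attributes: dict, claimed_id: str) -> bool:
    """Any component holding the secret key can recompute the hash and compare."""
    return hmac.compare_digest(derive_sensor_id(attributes), claimed_id)
```

A component verifying a message would recompute the HMAC from the sensor's advertised unique identifiers and compare it against the sensor ID carried in the message.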
Multi-timescale packet marker
A network node (120), such as a packet marking node, efficiently measures the bitrates of incoming packets on a plurality of timescales (TSs). For each TS, a throughput-value function (TVF) indicates the relationship between throughput and packet value on that TS. Then, starting from the longest TS and moving towards the shortest TS, the packet marking node determines (88) a distance between the TVFs of different TSs at the measured bitrates. To determine the packet marking, the packet marking node selects a random throughput value between 0 and the bitrate measured on the shortest TS. Depending on how the random value relates to the measured bitrates, a TVF and the distances to add to the random value are then selected to determine (92) a packet value (PV) with which to mark the packet. The packet marking node then marks (94) the packet according to the determined PV.
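The marking procedure can be sketched in Python. The abstract leaves the exact distance computation open, so this is only one possible reading: timescales are ordered longest-first (longer averaging windows smooth bursts, so their measured bitrates are assumed ascending), and the distance between adjacent timescales is taken, as a simplification, to be the gap between their measured bitrates. All names are hypothetical.

```python
import random

def mark_packet(rates, tvfs, rng=random.random):
    """Pick a packet value (PV) for one packet.

    rates: measured bitrates, longest timescale first (assumed ascending).
    tvfs:  one throughput-value function per timescale, same order.
    """
    # Random throughput value in [0, bitrate measured on the shortest TS].
    r = rng() * rates[-1]
    # Starting from the longest TS, select the first TS whose measured
    # bitrate still covers the random value.
    k = next((i for i, rate in enumerate(rates) if r <= rate), len(rates) - 1)
    # Accumulate the distances for every timescale longer than the selected
    # one, and add them to the random value before applying the selected TVF.
    offset = sum(rates[i + 1] - rates[i] for i in range(k))
    return tvfs[k](r + offset)
```

With decreasing TVFs this maps throughput sustained over longer timescales to higher packet values, which is the qualitative behavior the abstract describes.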
Dynamically computing load balancer subset size in a distributed computing system
A distributed computing system dynamically calculates a subset size for each of a plurality of load balancers. Each load balancer logs requests from client devices for connections to back-end servers and periodically sends a request report to a traffic aggregator, which aggregates the request reports from the load balancers in the corresponding zone. Each traffic aggregator sends the aggregated request data to a traffic controller, which aggregates the request data to determine the total number of requests received at the system. The total request data is transmitted through each traffic aggregator to each load balancer instance, which calculates the percentage of the total number of requests produced by that load balancer and determines a subset size based on the calculated percentage.
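The final step, turning a load balancer's traffic share into a subset size, can be sketched as below. The proportional scaling rule and the minimum-size floor are illustrative assumptions; the abstract only says the subset size is based on the calculated percentage.

```python
def subset_size(local_requests: int, total_requests: int,
                total_backends: int, min_size: int = 2) -> int:
    """Scale this load balancer's back-end subset by its share of total traffic."""
    # Percentage of all requests in the system that this instance produced.
    share = local_requests / total_requests if total_requests else 0.0
    # Proportional subset, floored so even a quiet instance keeps a few backends.
    return max(min_size, round(share * total_backends))
```

For example, an instance producing 10% of system-wide requests against a pool of 50 back-end servers would connect to a subset of 5.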
Prepopulation of caches
A system, process, and computer-readable medium for updating an application cache using a stream listening service are described. A stream listening service may monitor one or more data streams for content relating to a user and forward that content, along with time-to-live values, to an application cache. A user may use an application to obtain information regarding the user's account, where the application obtains information from a data store and/or cached information from the application cache. By forwarding current account information from the monitored streams to the application cache, the stream listening service reduces traffic at the data store.
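A minimal sketch of the cache-prepopulation flow, assuming a simple in-process TTL cache and a hypothetical stream-event shape (`user_id`, `account`); the patent does not specify either.

```python
import time

class AppCache:
    """Toy application cache with per-entry time-to-live (TTL)."""

    def __init__(self):
        self._entries = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl_seconds):
        self._entries[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.time() >= expiry:
            # Expired: the application would fall back to the data store.
            del self._entries[key]
            return None
        return value

def on_stream_event(cache: AppCache, event: dict, ttl_seconds: int = 60):
    """Stream listener callback: forward current account data into the cache."""
    cache.put(event["user_id"], event["account"], ttl_seconds)
```

Because the listener writes each update into the cache as it appears on the stream, a subsequent application read is served from the cache instead of reaching the data store.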
Method and apparatus for providing a low latency transmission system using adjustable buffers
One aspect of the present invention discloses a network system capable of transmitting and processing audio/video (“A/V”) data with enhanced quality of service (“QoS”). The network system includes a transmitter, a transmission channel, an adjustable decoder buffer, and a decoder. The transmitter contains an encoder able to encode A/V data in accordance with encoding bit rate recommendations from SQoS and packet loss notifications. The transmission channel, in one example, transmits A/V data from the transmitter to the receiver. The adjustable decoder buffer, in one aspect, is able to change its storage capacity or buffering size in response to an adaptive latency estimate. Upon fetching at least a portion of the A/V data from the adjustable decoder buffer, SQoS updates the adaptive latency estimate based on the quality of the decoded A/V data.
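The adjustable decoder buffer can be sketched as follows. Sizing the buffer as "one adaptive-latency window worth of frames" and the 20 ms frame interval are assumptions for illustration; the abstract only states that capacity changes in response to the latency estimate.

```python
class AdjustableDecoderBuffer:
    """Decoder-side buffer whose capacity tracks an adaptive latency estimate."""

    def __init__(self, frame_interval_ms: int = 20, capacity_frames: int = 10):
        self.frame_interval_ms = frame_interval_ms
        self.capacity_frames = capacity_frames
        self.frames = []

    def resize(self, adaptive_latency_ms: float):
        # Hold roughly one adaptive-latency window worth of frames.
        self.capacity_frames = max(1, round(adaptive_latency_ms / self.frame_interval_ms))

    def push(self, frame) -> bool:
        if len(self.frames) >= self.capacity_frames:
            return False  # buffer full; caller must drop or retry
        self.frames.append(frame)
        return True

    def fetch(self):
        """Decoder fetches the oldest buffered frame (FIFO)."""
        return self.frames.pop(0) if self.frames else None
```

Shrinking the capacity when the latency estimate drops is what keeps end-to-end latency low: fewer frames sit in the buffer ahead of the decoder.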
Check code processing method, electronic device and storage medium
Disclosed in embodiments of this disclosure are a check code processing method, an electronic device, and a storage medium. The check code processing method comprises: performing operations on m bits of the nth byte of a code block to obtain the nth bit of a first sequence; and performing operations on the first sequences of code blocks within the same transmission period to obtain a check code.
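The two stages can be sketched in Python. The abstract leaves the "operations" unspecified; here both stages are assumed to be XOR (parity), which is only one plausible instantiation.

```python
def first_sequence(code_block: bytes, m: int = 8) -> list:
    """Stage 1: the nth bit of the first sequence is the XOR (parity)
    of the low m bits of the nth byte of the code block."""
    return [bin(byte & ((1 << m) - 1)).count("1") & 1 for byte in code_block]

def check_code(code_blocks: list, m: int = 8) -> list:
    """Stage 2: XOR the first sequences of all code blocks sharing
    one transmission period into a single check code."""
    sequences = [first_sequence(block, m) for block in code_blocks]
    return [sum(bits) & 1 for bits in zip(*sequences)]
```

Under this reading, the check code lets the receiver detect bit errors across all code blocks of a transmission period with a single short sequence.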