Patent classifications
H04L69/04
API OPTIMIZER USING CONTEXTUAL ANALYSIS OF INTERFACE DATA EXCHANGE
A computer-implemented process includes the following operations. Interface data for a first computer application having a first interface configured to exchange data with a second computer application is identified. The interface data is aggregated using a machine learning engine, and the machine learning engine performs contextual analysis on the aggregated interface data to identify a context. A fix pack for the first computer application is generated using the context from the contextual analysis. The fix pack is caused to be applied to the first computer application. The fix pack includes an installable for the first application to transform notations used by the second computer application when communicating with the first application.
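As an illustrative sketch (not from the patent), the installable delivered by the fix pack could be as simple as an adapter that rewrites the notations the second application uses into those the first application expects. The `notation_map` here is a hypothetical artifact of the contextual analysis; the patent does not specify its form.

```python
def make_notation_adapter(notation_map):
    """Hypothetical fix-pack installable: rewrites notations (here, field
    names) used by the second application into the first application's
    conventions. `notation_map` stands in for the output of the
    contextual analysis."""
    def transform(payload):
        # Rename any mapped key; pass unmapped keys through unchanged.
        return {notation_map.get(key, key): value for key, value in payload.items()}
    return transform
```

A mapping of date or quantity field names, inferred from observed exchanges, would be one plausible instance.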
DETERMINING COMPRESSION LEVELS TO APPLY FOR DIFFERENT LOGICAL CHUNKS OF COLLECTED SYSTEM STATE INFORMATION
An apparatus comprises a processing device configured to collect system state information from host devices, to split the collected system state information into logical chunks, and to determine, based at least in part on a plurality of factors, a compression level to be applied to each of the logical chunks. The plurality of factors comprise a first factor characterizing a time at which the collected system state information is needed at a destination device and at least a second factor characterizing resources available for at least one of performing compression of the collected system state information and transmitting the collected system state information over at least one network to the destination device. The processing device is further configured to apply the determined compression level to each of the logical chunks to generate compressed logical chunks, and to transmit the compressed logical chunks to the destination device.
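A minimal sketch of the per-chunk decision, assuming zlib as the compressor and normalized factor values (the thresholds and factor encoding are illustrative, not from the patent): a tight deadline or scarce CPU favors a fast, low compression level, while scarce network bandwidth favors a high level.

```python
import zlib

def choose_level(deadline_s, cpu_free, bandwidth_free):
    """Map the abstract's factors to a zlib level (1 = fastest, 9 = smallest).
    Thresholds are illustrative assumptions."""
    if deadline_s < 1.0 or cpu_free < 0.2:
        return 1  # data is needed soon, or compression resources are scarce
    if bandwidth_free < 0.2:
        return 9  # transmission resources are scarce: compress hard
    return 6      # balanced default

def compress_chunks(state, chunk_size, factors_per_chunk):
    """Split collected state into logical chunks and compress each at its own level."""
    chunks = [state[i:i + chunk_size] for i in range(0, len(state), chunk_size)]
    return [zlib.compress(chunk,
                          choose_level(*factors_per_chunk[i % len(factors_per_chunk)]))
            for i, chunk in enumerate(chunks)]
```

Each compressed chunk can then be transmitted independently to the destination device and decompressed there with `zlib.decompress`.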
Time reservations for ensuring consistent reads in a distributed database without logging
The subject matter described herein provides techniques to ensure that queries of a distributed database observe a consistent read of the database without locking or logging. In this regard, next-write timestamps uniquely identify a set of write transactions whose updates can be observed by reads. By publishing the next-write timestamps from within an extendable time lease and tracking a “safe timestamp,” the database queries can be executed without logging read operations or blocking future write transactions, and clients issuing the queries at the “safe timestamp” observe a consistent view of the database as it exists on or before that timestamp. Aspects of this disclosure also provide for extensions, done cheaply and without the need for logging, to the range of timestamps at which read transactions can be executed.
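The safe-timestamp bookkeeping can be sketched as follows (a simplification under assumed integer timestamps; class and method names are illustrative): each writer publishes its next-write timestamp from within its lease, and any read at or below the minimum published timestamp minus one cannot miss a future write.

```python
class SafeTimestampTracker:
    """Sketch of the 'safe timestamp' idea: writers publish next-write
    timestamps; reads at or below the safe timestamp observe a consistent
    snapshot without logging reads or blocking writers."""

    def __init__(self):
        self._next_write = {}  # writer id -> published next-write timestamp

    def publish_next_write(self, writer_id, timestamp):
        self._next_write[writer_id] = timestamp

    def safe_timestamp(self):
        # Strictly below every pending next-write timestamp.
        if not self._next_write:
            return None
        return min(self._next_write.values()) - 1
```

Extending a writer's lease simply republishes a later next-write timestamp, which advances the safe timestamp without any logging.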
METHOD AND APPARATUS FOR COMPRESSION PROFILE DISTRIBUTION
Header compression/decompression profiles are stored in a central registry, or database, and provided on demand, on initialisation of a new device, from time to time, or otherwise, to gateways communicating with one or more endpoints in accordance with the profile in question. The profile to be retrieved is selected on the basis of an identity value included in a message transmitted from the endpoint. The identity may be unique to a particular endpoint, or a type or class of endpoints using a particular profile, or correspond directly to the profile, or otherwise. Distributed registry structures, possibly including private and public registers, are proposed. Different classes of information may be associated with each profile, which may be subject to varying degrees of protection and/or varying access conditions.
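The registry-and-gateway interaction might be sketched like this (class names, the dictionary-shaped message, and the profile contents are illustrative assumptions, not part of the patent): the gateway resolves the identity value carried in a message against the registry on first contact and caches the result.

```python
class ProfileRegistry:
    """Central registry sketch: identity values (per endpoint, per endpoint
    class, or per profile) map to header compression/decompression profiles."""

    def __init__(self):
        self._by_identity = {}

    def register(self, identity, profile):
        self._by_identity[identity] = profile

    def lookup(self, identity):
        return self._by_identity.get(identity)

class Gateway:
    """Retrieves a profile on demand (first message from an identity) and
    caches it for subsequent messages from the same identity."""

    def __init__(self, registry):
        self._registry = registry
        self._cache = {}

    def profile_for(self, message):
        identity = message["identity"]  # identity value carried in the message
        if identity not in self._cache:
            self._cache[identity] = self._registry.lookup(identity)
        return self._cache[identity]
```

A distributed deployment would replace the in-memory dictionary with private and public registers, as the abstract proposes.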
VERTICAL FEDERATED LEARNING WITH COMPRESSED EMBEDDINGS
For a plurality of client computing devices of a federated learning system, obtain initial compressed embeddings, compressed by clustering, which include output of initial local models for a current minibatch, and initial cluster labels corresponding to the initial embeddings. Recreate an initial overall embedding based on the initial embeddings and the initial labels. At a server of the federated learning system, send a current version of a server model to each of the client computing devices; and obtain, from the client computing devices: updated compressed embeddings, compressed by clustering, and updated cluster labels corresponding to the updated embeddings. Based on local training by the plurality of clients with the overall embedding and the current server model, at the server, recreate an updated overall embedding based on the updated embeddings and the corresponding updated labels, and locally train the server model based on the updated overall embedding.
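The client-side compression and server-side recreation steps can be sketched with a plain k-means (the deterministic initialization and function names are assumptions; the patent only specifies compression by clustering): the client sends k centroids plus one label per row instead of the full embedding matrix.

```python
import numpy as np

def compress_embeddings(embeddings, k, iters=10):
    """Client side: compress a minibatch of local-model outputs by clustering
    (plain k-means with a simple deterministic initialization)."""
    # Evenly spaced rows as initial centroids (illustrative choice).
    init = np.linspace(0, len(embeddings) - 1, k).astype(int)
    centroids = embeddings[init].astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels  # the compressed representation

def recreate_embedding(centroids, labels):
    """Server side: rebuild an approximate overall embedding from the
    compressed representation."""
    return centroids[labels]
```

Transmitting k centroids and n integer labels in place of an n-by-d float matrix is what makes the exchange "compressed" in the vertical federated setting.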
Systems and methods for establishing asymmetric network communications
A method of establishing an asymmetric network between at least one node device and a gateway device is provided. The method may include transmitting a reduced data package from the node device, receiving the reduced data package in a data stream at the gateway device, validating bits of the data stream, and retrieving the reduced data package based on the validated bits.
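One way to realize "validating bits of the data stream" is a framed package with a checksum that the gateway scans for; this is a hedged sketch (the length-prefix framing and XOR checksum are assumptions, not the claimed encoding).

```python
def make_package(payload):
    """Node side: a reduced package = length byte + payload + 1-byte XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([len(payload)]) + payload + bytes([checksum])

def retrieve_package(stream):
    """Gateway side: scan the received data stream, validate candidate bits
    via the checksum, and return the first payload that validates."""
    for i in range(len(stream)):
        n = stream[i]
        end = i + 1 + n + 1
        if n == 0 or end > len(stream):
            continue  # cannot be a valid package starting here
        payload, checksum = stream[i + 1:end - 1], stream[end - 1]
        x = 0
        for b in payload:
            x ^= b
        if x == checksum:
            return payload
    return None
```

The asymmetry in the abstract maps to the node doing only the cheap framing while the gateway bears the cost of scanning and validation.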
Bandwidth compression for neural network systems
Techniques and systems are provided for compressing data in a neural network. For example, output data can be obtained from a node of the neural network. Re-arranged output data having a re-arranged scanning pattern can be generated. The re-arranged output data can be generated by re-arranging the output data into the re-arranged scanning pattern. One or more residual values can be determined for the re-arranged output data by applying a prediction mode to the re-arranged output data. The one or more residual values can then be compressed using a coding mode.
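The pipeline in the abstract (re-arranged scan, prediction mode, coding mode) can be sketched with a JPEG-style zigzag scan, previous-value (DPCM) prediction, and zlib standing in for the coding mode; all three concrete choices are illustrative assumptions.

```python
import zlib
import numpy as np

def zigzag_order(h, w):
    """A re-arranged scanning pattern: walk anti-diagonals, alternating
    direction (JPEG-style zigzag)."""
    return sorted(((r, c) for r in range(h) for c in range(w)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else -rc[1]))

def compress_node_output(out):
    """Re-arrange the node's output, predict each value from its predecessor
    (DPCM), and code the residuals with zlib as a stand-in coding mode."""
    scanned = np.array([out[r, c] for r, c in zigzag_order(*out.shape)], dtype=np.int16)
    residuals = scanned.copy()
    residuals[1:] -= scanned[:-1]  # one residual per value; the first is kept as-is
    return zlib.compress(residuals.tobytes())

def decompress_node_output(blob, shape):
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    scanned = residuals.cumsum(dtype=np.int16)  # invert the prediction
    out = np.zeros(shape, dtype=np.int16)
    for v, (r, c) in zip(scanned, zigzag_order(*shape)):
        out[r, c] = v
    return out
```

Smooth activation maps yield small residuals under this prediction, which is what makes the final coding step effective.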
Transparency Overlay Method for Virtual Set Top Box, Virtual Set Top Box, and Storage Medium
The embodiments of the present disclosure provide a transparency overlay method for a virtual set top box, a virtual set top box and a storage medium. Transparency layout features of a picture presented by an application scenario of the virtual set top box are acquired; whether compression processing of transparency data is allowed for each block on the picture is determined according to the transparency layout features; and compression processing of transparency data is performed on each allowed block, with transparency overlay performed using fewer than the full number of transparency data sampling points in each allowed block.
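A simplified per-block sketch (the 8x8 block size, the uniformity test as the "layout feature", and single-point sampling are all illustrative assumptions): blocks whose alpha values barely vary are overlaid from one sampling point instead of the full set.

```python
import numpy as np

def block_allows_compression(alpha_block, tol=2):
    """Layout-feature check sketch: a block whose transparency values vary by
    at most `tol` can be overlaid from fewer sampling points."""
    return int(alpha_block.max()) - int(alpha_block.min()) <= tol

def transparency_overlay(fg, alpha, bg, block=8):
    """Per block: if compression of transparency data is allowed, use a single
    alpha sampling point for the whole block; otherwise use every point."""
    out = np.empty_like(bg)
    h, w = alpha.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            a = alpha[r:r + block, c:c + block].astype(np.uint16)
            if block_allows_compression(a):
                a = np.full_like(a, a[0, 0])  # reduced sampling: one point
            f = fg[r:r + block, c:c + block].astype(np.uint16)
            b = bg[r:r + block, c:c + block].astype(np.uint16)
            # Standard alpha blend on 8-bit values.
            out[r:r + block, c:c + block] = ((f * a + b * (255 - a)) // 255).astype(np.uint8)
    return out
```

Blocks that fail the layout-feature test keep their full set of sampling points, so visually complex regions are not degraded.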