Patent classifications
H04L67/2871
CACHE MANAGEMENT IN CONTENT DELIVERY SYSTEMS
Examples described herein relate to apparatuses and methods for managing caching for a content delivery system, which may include receiving a content request indicating that a caching agent is requesting content data for a client, filling the content data into a first cache storage of a business logic agent, providing the cached content data to the caching agent, and, while a second cache storage of the caching agent is being filled with the content data, maintaining the cached content data in response to receiving additional content requests from the caching agent. The additional content requests may indicate that the caching agent is requesting the same content data for additional clients.
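The fill-and-maintain flow described above can be sketched roughly as follows; all names here (`BusinessLogicAgent`, `handle_content_request`, etc.) are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the described cache-fill flow: a business-logic
# agent fills its own (first) cache on a miss, serves the caching agent,
# and keeps the entry pinned while the caching agent's (second) cache is
# being filled, so repeat requests for the same content do not refetch.

class BusinessLogicAgent:
    def __init__(self, origin):
        self.origin = origin            # callable: content_id -> bytes
        self.first_cache = {}           # content_id -> data
        self.pinned = set()             # entries kept while agent-side fill is in flight

    def handle_content_request(self, content_id):
        """Serve a caching agent's request, filling the first cache on a miss."""
        if content_id not in self.first_cache:
            self.first_cache[content_id] = self.origin(content_id)
        self.pinned.add(content_id)     # maintain while the second cache fills
        return self.first_cache[content_id]

    def fill_complete(self, content_id):
        """Caching agent signals its second cache is filled; entry may be evicted."""
        self.pinned.discard(content_id)

agent = BusinessLogicAgent(origin=lambda cid: f"data-for-{cid}".encode())
payload = agent.handle_content_request("video-42")   # first client: fills cache
again = agent.handle_content_request("video-42")     # additional client: cache hit
agent.fill_complete("video-42")
```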
Data transfer between application and vehicle management system
A method and system for transferring data is disclosed. The method comprises initiating a handshake between an electronics unit onboard a vehicle and an application via a first communication protocol through an onboard gateway; transferring a data file to the gateway from the application via a second communication protocol; storing the data file in the gateway at a storage location; sending a message from the application to the electronics unit, via the first communication protocol, to inform the electronics unit that the data file is available and the storage location of the data file; and pulling the data file from the storage location, via the second communication protocol, to the electronics unit.
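The store-notify-pull sequence above can be sketched as follows; the class and method names (`Gateway`, `ElectronicsUnit`, `notify`, etc.) are illustrative assumptions standing in for the two communication protocols:

```python
# Illustrative sketch of the two-protocol transfer: the application pushes a
# file to an onboard gateway over one protocol, notifies the electronics unit
# over another with the storage location, and the unit pulls the file.

class Gateway:
    def __init__(self):
        self.storage = {}               # storage location -> data

    def store(self, path, data):        # second protocol: file transfer
        self.storage[path] = data
        return path                     # the storage location

    def read(self, path):
        return self.storage[path]

class ElectronicsUnit:
    def __init__(self, gateway):
        self.gateway = gateway
        self.files = {}

    def handshake(self):                # first protocol: control channel
        return "ack"

    def notify(self, path):             # message: file available at `path`
        # pull the file from the storage location via the second protocol
        self.files[path] = self.gateway.read(path)

gw = Gateway()
ecu = ElectronicsUnit(gw)
assert ecu.handshake() == "ack"                      # handshake via first protocol
location = gw.store("/updates/map.bin", b"\x01\x02") # transfer via second protocol
ecu.notify(location)                                 # inform unit of availability
```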
Edge communication locations
Methods, systems, and computer programs are presented for lowering network latency for cloud-based services. Service-delivery edge locations allow customers to leverage communication providers' public and private network connectivity for improved performance. One method includes operations for performing, by an edge server, a handshake to establish a communication session between a client and a main server, and for exchanging data between the client and the main server via the edge server. The handshake includes exchanging, by the edge server, communication initiation messages with the client, and validating, by the edge server, authentication credentials for the communication session based on the communication initiation messages. The exchanging of data comprises forwarding, by the edge server, data requests from the client to the main server through a private connection between the edge server and the main server, and forwarding, by the edge server, data responses from the main server to the client through the private connection.
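A minimal sketch of the edge-terminated handshake and forwarding described above, under assumed names (`EdgeServer`, `valid_tokens`, the callable `main_server`):

```python
# The edge server exchanges initiation messages with the client, validates
# credentials locally (avoiding a round trip to the main server), and then
# relays requests/responses over a private edge-to-main connection.

class EdgeServer:
    def __init__(self, main_server, valid_tokens):
        self.main_server = main_server      # private connection: request -> response
        self.valid_tokens = valid_tokens
        self.sessions = set()

    def handshake(self, client_id, token):
        """Exchange initiation messages and validate credentials at the edge."""
        if token not in self.valid_tokens:
            raise PermissionError("invalid credentials")
        self.sessions.add(client_id)
        return "session-established"

    def forward(self, client_id, request):
        """Relay a client request to the main server over the private connection."""
        if client_id not in self.sessions:
            raise RuntimeError("no established session")
        return self.main_server(request)    # response flows back the same way

edge = EdgeServer(main_server=lambda req: f"response:{req}", valid_tokens={"t0k"})
status = edge.handshake("client-1", "t0k")
reply = edge.forward("client-1", "GET /data")
```

Terminating the handshake at the nearby edge is what cuts latency: only the bulk data exchange traverses the longer private path to the main server.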
REMOTE PAIRING DEVICE AND METHOD
A remote pairing device is provided. The remote pairing device includes a processor configured to execute the following steps. A plurality of controlled elements is divided into a restart group and a force-shutdown group, each of the controlled elements being connected to an electronic device. A restart command is sent to the controlled elements in the restart group, and a force-shutdown command is sent to the controlled elements in the force-shutdown group. Pairing candidate lists, each corresponding to one of the controlled elements, are updated according to the connection statuses of the electronic devices received during a specific time period after the restart command or the force-shutdown command is sent. Controlled elements whose pairing candidate list contains only one electronic device are determined to be successfully paired.
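The candidate-narrowing step can be sketched as pairing by elimination; the function and variable names below are illustrative assumptions:

```python
# Elements in the restart group are restarted and those in the force-shutdown
# group are shut down; each element's candidate list is then narrowed to the
# devices whose observed connection status matches the command it received.

def update_candidates(candidates, restart_group, shutdown_group, observed_online):
    """Keep, per controlled element, only devices consistent with its command."""
    updated = {}
    for element, devices in candidates.items():
        if element in restart_group:
            # a restarted element's device should come back online
            updated[element] = [d for d in devices if d in observed_online]
        elif element in shutdown_group:
            # a force-shutdown element's device should stay offline
            updated[element] = [d for d in devices if d not in observed_online]
    return updated

candidates = {"elem-A": ["dev-1", "dev-2"], "elem-B": ["dev-1", "dev-2"]}
narrowed = update_candidates(
    candidates,
    restart_group={"elem-A"},
    shutdown_group={"elem-B"},
    observed_online={"dev-1"},          # statuses seen during the wait window
)
# elements whose candidate list has exactly one device are successfully paired
paired = {e for e, devs in narrowed.items() if len(devs) == 1}
```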
JOINING AND DIMENSIONAL ANNOTATION IN A STREAMING PIPELINE
Disclosed are embodiments for providing batch performance using a stream processor. In one embodiment, a method is disclosed comprising receiving, at a stream processor, an event, the stream processor including a plurality of processing stages; generating, by the stream processor, an augmented event based on the event, the augmented event including at least one additional field not appearing in the event, the additional field generated by an operation selected from the group consisting of a join operation and a dimensional annotation operation; and emitting, by the stream processor, the augmented event to a downstream consumer.
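A sketch of one form of the augmentation stage described above, using a dimension-table lookup; the table contents and field names are illustrative assumptions:

```python
# A processing stage that augments each event with fields joined from a
# dimension table (dimensional annotation), then emits the augmented event
# to downstream consumers.

DIMENSION_TABLE = {
    "us-east": {"region_name": "US East"},
    "eu-west": {"region_name": "EU West"},
}

def augment(event):
    """Return the event plus additional fields joined from the dimension table."""
    extra = DIMENSION_TABLE.get(event.get("region_id"), {})
    return {**event, **extra}           # original fields + at least one new field

def emit(event, consumers):
    """Deliver the augmented event to each downstream consumer."""
    for consumer in consumers:
        consumer(event)

received = []
emit(augment({"user": "u1", "region_id": "us-east"}), [received.append])
```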
SYSTEMS AND METHODS FOR IMPLEMENTING TRANSPARENT SaaS PROXY ON AND OFF NETWORK
The present disclosure is directed to a system and method for utilizing a SaaS proxy platform to provide a transparent proxy solution and to allow deployment of a hybrid network having a uniform proxy/Internet access environment for both on-network and off-network user traffic. A granular architecture for steering user Internet traffic is presented that utilizes a SaaS agent as the primary proxy control, based on modifications to PAC files that allow for surgical traffic control. The proposed approach utilizes the SaaS proxy agent beyond its intended capabilities and provides a solution that improves the uniformity of the user Internet experience while also improving security and network resiliency in a hybrid network.
METHOD AND SYSTEM FOR REGISTERING PLURALITY OF DEVICES
A computer-implemented method includes receiving, at a server, device information of a plurality of devices, the device information being collected by an intermediate device, receiving, at the server, administrator information, and registering the plurality of devices to the server based on the received administrator information and the received device information. A system for registering the plurality of devices may implement the computer-implemented method.
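The registration flow above can be sketched as follows; `RegistrationServer`, the dictionary shapes, and the identifiers are illustrative assumptions, not the claimed system:

```python
# An intermediate device collects device information; the server receives it
# together with administrator information and registers each device to that
# administrator.

class RegistrationServer:
    def __init__(self):
        self.registry = {}              # device_id -> admin_id

    def register_devices(self, admin_info, device_info_list):
        """Register every collected device to the given administrator."""
        for info in device_info_list:
            self.registry[info["device_id"]] = admin_info["admin_id"]
        return len(device_info_list)

# Device information as collected by an intermediate device (illustrative).
collected = [{"device_id": "printer-01"}, {"device_id": "scanner-02"}]
server = RegistrationServer()
count = server.register_devices({"admin_id": "admin-7"}, collected)
```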