Patent classifications
H04L47/741
Persistent integration platform for multi-channel resource transfers
Embodiments of the present invention provide a persistent integration platform for conducting a multi-channel resource transfer. In particular, the system may utilize a multi-step, multi-layered authentication process across multiple disparate computing systems to complete the resource transfer. In some embodiments, the system may utilize a persistent element, accessible to the user across multiple devices, which aids in the resource transfer. For instance, the resource transfer process may be started on a first computing system, which may be a stationary networked terminal. At this point, a record of the resource transfer may be created within the persistent element. The user may thereafter access the persistent element through a second computing system, such as a user device, to resume the resource transfer and complete the remaining steps as necessary.
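A minimal sketch of the persistent-element idea described above: a transfer record created on one device and resumed on another through a shared store. The class name `TransferRecord`, the step names, and the dictionary standing in for the persistent element are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRecord:
    """Persistent record of a partially completed resource transfer."""
    transfer_id: str
    steps_required: list
    steps_done: list = field(default_factory=list)

    def complete_step(self, step: str) -> None:
        # Record a step once, and only if it belongs to this transfer.
        if step in self.steps_required and step not in self.steps_done:
            self.steps_done.append(step)

    @property
    def finished(self) -> bool:
        return set(self.steps_done) == set(self.steps_required)

# A shared store stands in for the persistent element visible to all devices.
persistent_element = {}

# First computing system (e.g. a stationary terminal) starts the transfer.
record = TransferRecord("tx-1", ["authenticate", "confirm_amount", "approve"])
record.complete_step("authenticate")
persistent_element["tx-1"] = record

# Second computing system (e.g. the user's device) resumes the same record.
resumed = persistent_element["tx-1"]
resumed.complete_step("confirm_amount")
resumed.complete_step("approve")
```

In a real deployment the shared store would be a networked service reachable from both devices; the dictionary merely illustrates the cross-device handoff.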
Resource allocation and provisioning in a multi-tier edge-cloud virtualization environment
Techniques are provided for resource allocation and provisioning in a multi-tier edge-cloud virtualization environment. An exemplary method comprises: obtaining an application request for processing a given data type in a multi-tier environment; processing the application requests received within a decision window to allocate resources for virtual nodes that will process those requests, wherein the resources allocated to each virtual node are on a corresponding one of cloud resources and a given edge node; instantiating the allocated virtual nodes to process the application requests; and providing the application requests to the instantiated virtual nodes, wherein the instantiated virtual nodes obtain the data of the given data type from a data repository. Before processing a given application request, a virtual node waits for the output data of any predecessor requests, and it sends the output data of the given application request to any virtual nodes holding successor requests.
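The decision-window allocation and the predecessor/successor handoff can be sketched as follows. The placement rule used here (edge when the data type is cached at an edge node, otherwise cloud) and the request/output formats are assumptions for illustration only; the requests are assumed to form an acyclic dependency graph.

```python
def allocate(requests, edge_cached_types):
    """Assign each request in the decision window to an edge or cloud node."""
    return {
        r["id"]: "edge" if r["data_type"] in edge_cached_types else "cloud"
        for r in requests
    }

def process(requests, placement):
    """Process requests in dependency order; successors wait for predecessors.

    Assumes the dependency graph is acyclic, so every pass resolves at
    least one pending request.
    """
    outputs, done = {}, []
    pending = list(requests)
    while pending:
        for r in list(pending):
            preds = r.get("predecessors", [])
            if all(p in outputs for p in preds):  # predecessor outputs ready
                inputs = [outputs[p] for p in preds]
                outputs[r["id"]] = f"{r['id']}@{placement[r['id']]}({','.join(inputs)})"
                done.append(r["id"])
                pending.remove(r)
    return outputs, done

# Two requests arrive within one decision window; B depends on A's output.
window = [
    {"id": "B", "data_type": "video", "predecessors": ["A"]},
    {"id": "A", "data_type": "sensor"},
]
placement = allocate(window, edge_cached_types={"sensor"})
outputs, order = process(window, placement)
```

Note that B is held until A's output is available, even though B arrived first in the window, mirroring the wait-for-predecessor behavior in the abstract.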
Opportunistic packet retransmissions
In an opportunistic packet retransmission strategy, responsive to determining that a retransmission mode is set, a retransmission probability is calculated using minimum and maximum channel busy level retransmission thresholds. If the channel busy level of a communication channel is less than the minimum threshold, the retransmission probability is set to 100%; if the channel busy level is greater than the maximum threshold, the retransmission probability is set to 0%. Between the two thresholds, the retransmission probability decreases from 100% to 0% as the channel busy level rises from the minimum threshold to the maximum threshold. The message is then retransmitted responsive to randomly determining whether to retransmit according to the retransmission probability.
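The threshold scheme maps directly to a small function. The abstract only says the probability decreases between the thresholds; a linear ramp is assumed here for concreteness, and the function names are illustrative.

```python
import random

def retransmission_probability(busy, lo, hi):
    """Probability of retransmitting given the channel busy level.

    100% below the minimum threshold `lo`, 0% above the maximum
    threshold `hi`, and (assumed) linearly decreasing in between.
    """
    if busy < lo:
        return 1.0
    if busy > hi:
        return 0.0
    return (hi - busy) / (hi - lo)

def should_retransmit(busy, lo, hi, rng=random.random):
    """Randomly decide whether to retransmit according to the probability."""
    return rng() < retransmission_probability(busy, lo, hi)
```

For example, with thresholds 0.3 and 0.8, a busy level of 0.55 sits midway between them and yields a retransmission probability of 50%.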
STORE AND FORWARD LOGGING IN A CONTENT DELIVERY NETWORK
A computer-implemented method on a device having hardware including storage. The method includes obtaining log event data from at least one component or service on the device, to be delivered to a component or service on a distinct device. Each log event data item has a priority. If a connection to an external location is lost, at least some of the log event data items are selectively stored in the storage, the storing being based on the priority of the log event data items. Otherwise, if the connection is not lost, at least some of the log event data items are sent to the external location.
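One way to realize priority-based selective storage is a bounded min-heap that evicts the lowest-priority item when local storage fills. This is a sketch under assumed conventions (higher number means higher priority, fixed capacity); the abstract does not specify the selection policy.

```python
import heapq

def handle_log_events(events, connected, storage, capacity, send):
    """Route log events: send while connected, otherwise keep only the
    highest-priority items in bounded local storage.

    `events` is a list of (priority, item) pairs; `storage` is a list
    used as a min-heap keyed on priority.
    """
    for priority, item in events:
        if connected:
            send(item)
        else:
            heapq.heappush(storage, (priority, item))
            if len(storage) > capacity:
                heapq.heappop(storage)  # evict the lowest-priority item

sent, storage = [], []
# Connection lost: three events arrive but only two fit in storage.
handle_log_events([(1, "debug"), (3, "error"), (2, "warn")],
                  connected=False, storage=storage, capacity=2,
                  send=sent.append)
# Connection restored: new events are forwarded immediately.
handle_log_events([(5, "fatal")], connected=True, storage=storage,
                  capacity=2, send=sent.append)
```

On reconnection a real system would also drain `storage` toward the external location; that flush step is omitted to keep the sketch short.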
DISTRIBUTED POLICY-BASED PROVISIONING AND ENFORCEMENT FOR QUALITY OF SERVICE
Embodiments of the disclosure provide techniques for measuring congestion and controlling quality of service to a shared resource. A module that interfaces with the shared resource monitors the usage of the shared resource by accessing clients. Upon detecting that the rate of usage of the shared resource has exceeded a maximum rate supported by the shared resource, the module determines and transmits a congestion metric to clients that are currently attempting to access the shared resource. Clients, in turn, determine a delay period based on the congestion metric before attempting another access of the shared resource.
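The module-side metric and the client-side backoff can be sketched as a pair of functions. The abstract does not define either formula, so the overload ratio and proportional delay below are assumptions for illustration.

```python
def congestion_metric(observed_rate, max_rate):
    """Module side: fraction by which usage exceeds the supported rate
    (0.0 when the resource is not overloaded)."""
    return max(0.0, (observed_rate - max_rate) / max_rate)

def client_delay(metric, base_delay=0.1):
    """Client side: delay period (seconds) grows with reported congestion."""
    return base_delay * (1.0 + metric)

# The resource supports 100 ops/s but clients are driving 150 ops/s.
metric = congestion_metric(observed_rate=150, max_rate=100)
delay = client_delay(metric)
```

Because each client computes its own delay from the broadcast metric, the module never has to track per-client state, which fits the distributed enforcement described above.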
Method, apparatus and system for addressing resources
A method and an apparatus for addressing resources, the apparatus having: a first interface for communicating, using a binary web service, with end-points operationally connected to the apparatus, the end-points including one or more resources; a second interface for receiving requests regarding the resources and for responding to the requests; a component for storing information on sleeping end-points, which are non-continuously available, and for storing a request queue for each sleeping end-point; and a component for receiving through the second interface a request regarding a sleeping end-point, adding the request to the request queue of that end-point, communicating with the sleeping end-point regarding the queued requests after receiving through the first interface a queue request from the end-point, and sending through the first interface responses for the resolved requests.
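The queuing component can be sketched as a proxy that holds requests until a sleeping end-point wakes and asks for its queue. The class and method names are illustrative; the two interfaces and the binary web service encoding are abstracted away.

```python
from collections import defaultdict, deque

class ResourceProxy:
    """Queues requests for sleeping end-points; hands them over on wake-up."""

    def __init__(self):
        self.queues = defaultdict(deque)  # end-point id -> pending requests

    def request(self, endpoint, req):
        """Second interface: a request for a sleeping end-point is queued."""
        self.queues[endpoint].append(req)
        return "queued"

    def on_queue_request(self, endpoint):
        """First interface: the end-point woke up and sent a queue request,
        so deliver (and clear) everything pending for it."""
        pending = list(self.queues[endpoint])
        self.queues[endpoint].clear()
        return pending

proxy = ResourceProxy()
proxy.request("node-1", "GET /temperature")
proxy.request("node-1", "GET /humidity")
delivered = proxy.on_queue_request("node-1")
```

After delivery the end-point would resolve the requests and the proxy would relay the responses back through the second interface; that return path is omitted here.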
Network traffic routing in distributed computing systems
Distributed computing systems, devices, and associated methods of packet routing are disclosed herein. In one embodiment, a method includes receiving, from a computer network, a packet at a packet processor of a server. The method also includes matching the received packet with a flow in a flow table contained in the packet processor and determining whether the flow's action indicates that the received packet is to be forwarded to a NIC buffer in the outbound processing path of the packet processor rather than to the NIC itself. The method further includes, in response to determining that the action so indicates, forwarding the received packet to the NIC buffer and processing the packet in the NIC buffer to forward it to the computer network without exposing the packet to the main processor.
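The match-then-branch step can be sketched with a flow table keyed on packet headers. The key choice (source/destination pair), the action string, and the queue names are assumptions; in hardware the NIC buffer and flow table live in the packet processor, not in host memory.

```python
def route_packet(packet, flow_table, nic_buffer, main_processor_queue):
    """Match a packet against the flow table; flows marked for hardware
    forwarding go to the NIC buffer, bypassing the main processor."""
    key = (packet["src"], packet["dst"])
    action = flow_table.get(key)
    if action == "forward_to_nic_buffer":
        nic_buffer.append(packet)          # looped back out, CPU never sees it
        return "nic_buffer"
    main_processor_queue.append(packet)    # no matching fast-path flow
    return "main_processor"

flow_table = {("10.0.0.1", "10.0.0.2"): "forward_to_nic_buffer"}
nic_buffer, cpu_queue = [], []
fast = route_packet({"src": "10.0.0.1", "dst": "10.0.0.2"},
                    flow_table, nic_buffer, cpu_queue)
slow = route_packet({"src": "10.0.0.9", "dst": "10.0.0.2"},
                    flow_table, nic_buffer, cpu_queue)
```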
System and method for controlling access to resources in a multicomputer network
A network resource manager is configured to: store a first number of deferrable instances in a record for a first user; store a second number of deferrable instances in a record for a second user; and increase the first number and reduce the second number by an amount when a consideration is transferred from the first user to the second user. The network resource manager is further configured to read in from a deferrable instance a request to transfer program data and/or execution instructions to a computer-based resource of a cloud service provider for execution. If the load on the cloud service provider is high, the manager will transmit a query to the deferrable instance offering to assign an additional deferrable instance to the original deferrable instance if both the original deferrable instance and the additional deferrable instance accept a deferral period for their requests for resources.
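The per-user bookkeeping of deferrable instances can be sketched as a small ledger. The class name and the direction of the transfer method are illustrative; in the abstract, the first user's count increases when consideration flows from the first user to the second, i.e. instances move opposite to the consideration.

```python
class DeferrableLedger:
    """Tracks deferrable-instance counts per user and transfers between them."""

    def __init__(self):
        self.records = {}  # user -> number of deferrable instances

    def set_count(self, user, n):
        self.records[user] = n

    def transfer(self, from_user, to_user, amount):
        """Reduce one record and increase the other by the same amount,
        as when consideration is exchanged between the users."""
        if self.records.get(from_user, 0) < amount:
            raise ValueError("insufficient deferrable instances")
        self.records[from_user] -= amount
        self.records[to_user] = self.records.get(to_user, 0) + amount

ledger = DeferrableLedger()
ledger.set_count("first_user", 5)
ledger.set_count("second_user", 2)
# Consideration flows second -> first, so an instance moves first <- second... 
# here: the second user's count drops and the first user's count rises.
ledger.transfer("second_user", "first_user", 1)
```

The deferral-period negotiation under high load (offering an additional instance if both accept deferral) is a separate protocol step not shown here.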
SCEF entity, communication terminal, data processing method, data receiving method, and non-transitory computer readable medium
Provided is an SCEF entity capable of suppressing an increase in the processing load related to communication between an SCEF and an MME in Non-IP data communication. An SCEF entity (10) according to the present invention includes a storage unit (11) configured to buffer first Non-IP data not yet delivered to a communication terminal (40), and a control unit (12) configured to, when the first Non-IP data is buffered upon receipt of second Non-IP data addressed to the communication terminal (40) from a server device (30), suppress transmission of the second Non-IP data to a control device (20) in the mobile network and instead buffer the second Non-IP data in the storage unit (11).
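The suppression logic reduces to one check: if earlier data for a terminal is already buffered, later data joins the buffer instead of being sent toward the control device. The class below is a behavioral sketch with assumed names; the real SCEF/MME signaling (3GPP T6a, reachability notifications) is abstracted into a `deliverable` flag and an `on_reachable` callback.

```python
class ScefEntity:
    """Buffers Non-IP data per terminal; while earlier data for a terminal
    is still buffered, later data is also buffered rather than forwarded."""

    def __init__(self):
        self.buffer = {}      # terminal id -> pending Non-IP data
        self.forwarded = []   # data actually sent toward the control device

    def receive(self, terminal, data, deliverable):
        if self.buffer.get(terminal):
            # First data still buffered: suppress transmission of this data
            # to the control device and buffer it as well.
            self.buffer[terminal].append(data)
        elif deliverable:
            self.forwarded.append((terminal, data))
        else:
            self.buffer[terminal] = [data]

    def on_reachable(self, terminal):
        """Terminal became reachable: flush its buffered data in order."""
        pending = self.buffer.pop(terminal, [])
        self.forwarded.extend((terminal, d) for d in pending)
        return pending

scef = ScefEntity()
scef.receive(40, "first", deliverable=False)   # terminal unreachable: buffer
scef.receive(40, "second", deliverable=True)   # suppressed: first is buffered
scef.on_reachable(40)                          # both delivered in order
```

The point of the suppression is that the SCEF never attempts a delivery that the MME would have to reject while earlier data is still pending, which is the processing load the abstract targets.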
Providing streaming media data
A system for servicing streaming media requests. The system includes stream director nodes and intelligent stream engine nodes, such as permanent storage devices with network interfaces. A stream director node receives a streaming media request and enqueues it until all resources on a path from the stream engine node holding the requested media object to the user/client system have been reserved. Once the path is reserved, the enqueued request is serviced by requesting the stream object from the stream engine node, which then transfers the requested stream object to the user/client system over the prepared path without involving the stream director node. Upon completion, the prepared path is torn down. In one embodiment, the prepared path is a Label Switched Path. Provision is made for balancing the load among the stream engine nodes by duplicating stream objects on other stream engine nodes.
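The reserve-then-hand-off cycle can be sketched with a director that holds requests until every link on the path is free, then releases the links on teardown. Link names, path representation, and method names are illustrative assumptions; real paths would be Label Switched Paths set up via signaling.

```python
class StreamDirector:
    """Enqueues requests until a whole path from engine node to client is
    reserved, then hands the transfer off to the engine node."""

    def __init__(self, links):
        self.free = set(links)   # currently unreserved link resources
        self.queue = []          # (path, object) pairs awaiting a path

    def request(self, path, obj):
        self.queue.append((path, obj))
        return self.try_service()

    def try_service(self):
        """Service any queued request whose entire path can be reserved."""
        served = []
        for path, obj in list(self.queue):
            if all(link in self.free for link in path):
                self.free -= set(path)           # reserve the whole path
                self.queue.remove((path, obj))
                served.append(obj)               # engine node takes over
        return served

    def teardown(self, path):
        """Transfer finished: release the path's resources."""
        self.free |= set(path)

director = StreamDirector({"l1", "l2"})
first = director.request(("l1", "l2"), "movie")   # path free: served now
second = director.request(("l1",), "clip")        # l1 busy: stays queued
director.teardown(("l1", "l2"))                   # movie done
third = director.try_service()                    # clip now serviceable
```

Once `try_service` hands an object off, the director is out of the data path, matching the abstract's point that the transfer itself does not involve the stream director node.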