Patent classifications
H04L47/741
Deferred download based on network congestion
Methods of scheduling downloads for a user equipment (UE). The UE is configured to establish a connection to a network, receive an indication of a user-initiated download, determine whether the download is to be performed now or at a subsequent time, and, when the download is to be performed at the subsequent time, determine a time window during which the download is to be initiated and initiate the download during the time window.
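The deferral decision above can be sketched as a small function. This is a minimal illustration, not the patented method: the congestion check and the hour-based window bounds are assumptions for this example.

```python
import random

def schedule_download(congested: bool, window_start: int, window_end: int) -> int:
    """Return the hour at which to initiate a user-initiated download.

    If the network is not congested, download now (hour 0); otherwise
    defer and pick an initiation time inside the allowed time window.
    """
    if not congested:
        return 0
    # Deferred case: choose a start time within the window (inclusive).
    return random.randint(window_start, window_end)
```

For example, `schedule_download(True, 2, 6)` defers the download to some hour between 2 and 6, while an uncongested network downloads immediately.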
System and method for virtual parallel resource management
An improved system and method are disclosed for providing virtual parallel access to a shared resource. In one example, the method includes receiving a request from a device to take control of the shared resource. After determining that another device is currently in control of the shared resource, a timer is started. Control of the shared resource will automatically pass from the device currently in control to the requesting device when the timer expires. Input received from the device currently in control is executed. Input received from the device that has requested control is buffered and executed once control is transferred.
Method and apparatus for providing multimedia broadcast and multicast service (MBMS) in wireless communication system
A method for receiving a multimedia broadcast multicast service (MBMS) by a user equipment (UE) in a wireless communication system; the UE therefor; a method for transmitting an MBMS by a base station (BS) in a wireless communication system; and the BS therefor are discussed. The method for receiving an MBMS by a UE according to one embodiment includes receiving one or more system information blocks (SIBs); transmitting a first MBMS interest indication message indicating whether MBMS reception is prioritized over unicast reception, when a predetermined SIB related to MBMS service continuity is included in the one or more SIBs; and transmitting a second MBMS interest indication message according to a change of priority between the MBMS reception and the unicast reception.
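The UE-side condition above (send an interest indication only when the service-continuity SIB was broadcast) can be sketched as below. The SIB name and the message fields are assumptions for this example, not the 3GPP message definitions.

```python
def build_interest_indication(received_sibs, mbms_prioritized,
                              continuity_sib="SystemInformationBlockType15"):
    """Return an MBMS interest indication message, or None when the
    predetermined service-continuity SIB was not among the received SIBs."""
    if continuity_sib not in received_sibs:
        return None
    # Indicate whether MBMS reception is prioritized over unicast reception.
    return {"mbms_priority": mbms_prioritized}
```

A second indication would be built the same way whenever the UE's priority between MBMS and unicast reception changes.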
Speculative resource allocation for routing on interconnect fabrics
Methods and systems related to speculative resource allocation for routing on an interconnect fabric are disclosed herein. One disclosed method includes speculatively allocating a collection of resources to support a set of paths through an interconnect fabric. The method also includes aggregating a set of responses from the set of paths at a branch node on the set of paths. If a resource contention is detected, the set of responses will include an indicator of a resource contention. The method will then further include transmitting, from the branch node and in response to the indicator of the resource contention, a deallocate message downstream and the indicator of the resource contention upstream, and reallocating resources for the multicast after a hold period.
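The branch-node aggregation step described above can be sketched as a single decision function. The message names ("deallocate", "contention") are stand-ins for this illustration, not the patent's signaling format.

```python
def aggregate_responses(responses):
    """Combine per-path allocation responses at a branch node.

    If any downstream path reports contention, the branch node emits a
    deallocate message downstream and propagates the contention indicator
    upstream, so resources can be reallocated after a hold period.
    """
    if any(r == "contention" for r in responses):
        return {"downstream": "deallocate", "upstream": "contention"}
    # All paths succeeded: nothing to tear down, report success upstream.
    return {"downstream": None, "upstream": "ok"}
```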
Dynamic throttling systems and services
A lightweight throttling mechanism allows for dynamic control of access to resources in a distributed environment. Each request received by a server of a server group is parsed to determine tokens in the request, which are compared with designated rules to determine whether to process or reject the request based on usage data associated with an aspect of the request, the token values, and the rule(s) specified for the request. The receiving of each request can be broadcast to throttling components for each server such that the global state of the system is known to each server. The system then can monitor usage and dynamically throttle requests based on real time data in a distributed environment.
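The token-and-rule check described above can be sketched as follows. The rule format (a per-token usage limit) and the usage table are assumptions for this example; a real deployment would consult broadcast global state rather than a local dict.

```python
def should_throttle(request_tokens, rules, usage):
    """Return True if any rule matching a request token is over its limit.

    `request_tokens` are the tokens parsed from the request; `rules` maps
    a token to its designated rule; `usage` holds current usage counts.
    """
    for token in request_tokens:
        rule = rules.get(token)
        if rule is not None and usage.get(token, 0) >= rule["limit"]:
            return True  # reject: usage for this token exceeds its rule
    return False         # process: no matching rule is over its limit
```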
BACKGROUND DATA TRAFFIC DISTRIBUTION OF MEDIA DATA
An example device for retrieving media data includes a memory configured to store media data; and one or more processors implemented in circuitry and configured to: send a request to retrieve media data according to a background data transfer to a media streaming application function (AF); in response to the request, receive an indication of a background data transfer opportunity from the media streaming AF; in response to the indication of the background data transfer opportunity, retrieve the media data according to the background data transfer; and store the retrieved media data to the memory.
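The request/opportunity/retrieve flow above can be sketched from the client side. The `StubAF` class here is a stand-in assumption for this example, not the 3GPP media streaming AF interface.

```python
class StubAF:
    """Minimal stand-in for a media streaming application function (AF)."""

    def request_transfer(self, request):
        # Indication of a background data transfer opportunity.
        return {"start": 0, "end": 3600}

    def retrieve(self, request, window):
        return b"media-bytes"

def background_retrieve(af, request):
    """Request a background transfer, then retrieve media in the window."""
    opportunity = af.request_transfer(request)
    if opportunity is None:
        return None  # no opportunity granted; do not transfer
    # Retrieve according to the background data transfer opportunity;
    # the caller stores the returned media data to memory.
    return af.retrieve(request, window=opportunity)
```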
ON-DEMAND RESOURCE CAPACITY IN A SERVERLESS FUNCTION-AS-A-SERVICE INFRASTRUCTURE
Various aspects are disclosed for optimization of dependent systems for serverless frameworks that facilitate a function-as-a-service (FaaS). In some examples, an agent can be installed on a dependent system and collect resource consumption data that is reported to a management service. The management service can throttle requests submitted to the FaaS or scale up the infrastructure depending upon the resource consumption data.
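The management-service decision above (throttle versus scale, driven by reported consumption) can be sketched as below. The CPU-utilization metric and both thresholds are assumptions for this example.

```python
def manage_faas(cpu_utilization, throttle_above=0.9, scale_above=0.7):
    """Decide whether to throttle FaaS requests or scale up the infrastructure,
    based on resource consumption data reported by agents on dependent systems."""
    if cpu_utilization >= throttle_above:
        return "throttle"   # dependent system saturated: reject load
    if cpu_utilization >= scale_above:
        return "scale_up"   # approaching capacity: add infrastructure
    return "accept"
```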
Controller Command Scheduling in a Memory System to Increase Command Bus Utilization
A first command is scheduled on a command bus, where the first command requires use of a data bus resource at a first time period after scheduling the first command. Prior to the first time period, a second command is identified according to a scheduling policy. A determination is made whether scheduling the second command on the command bus will cause a conflict in usage of the data bus resource. In response to determining that scheduling the second command will cause the conflict in usage, a third, lower-priority command is identified for which scheduling on the command bus will not cause the conflict in usage. The third command is scheduled on the command bus prior to scheduling the second command, even though it has lower priority than the second command.
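The conflict-avoiding selection above can be sketched as follows. Commands are modeled as dicts with a required data-bus time slot; the slot model and field names are assumptions for this example.

```python
def pick_next_command(queue, busy_slots):
    """Pick the highest-priority command whose data-bus slot is free.

    `queue` is ordered by priority (highest first). A command whose
    required data-bus time slot conflicts with `busy_slots` is skipped
    in favor of a lower-priority command that does not conflict, which
    keeps the command bus utilized instead of idle.
    """
    for cmd in queue:
        if cmd["slot"] not in busy_slots:
            return cmd
    return None  # every queued command conflicts; issue nothing this cycle
```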
STORE AND FORWARD LOGGING IN A CONTENT DELIVERY NETWORK
A computer-implemented method on a device. The device has hardware including storage. The method includes obtaining log event data from at least one component or service on the device that is to be delivered to a component or service on a distinct device. Each log event data item has a priority. If a connection to an external location is lost, at least some of the log event data items are selectively stored in the storage, wherein the storing is based on the priority of the log event data items. Otherwise, if the connection is not lost, at least some of the log event data items are sent to the external location.
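The priority-based store-or-send decision above can be sketched as below. The priority threshold and the in-memory `storage` list are assumptions for this illustration; a real device would persist to its storage hardware.

```python
def handle_log_events(events, connected, storage, min_priority=5):
    """Send log events when connected; otherwise selectively store them.

    When the connection to the external location is lost, only events at
    or above `min_priority` are kept in `storage` for later forwarding.
    Returns the list of events sent to the external location.
    """
    sent = []
    for event in events:
        if connected:
            sent.append(event)
        elif event["priority"] >= min_priority:
            storage.append(event)  # selective storing based on priority
    return sent
```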