H04L47/822

Throttling queue for a request scheduling and processing system

Various methods and systems for implementing request scheduling and processing in a multi-tenant distributed computing environment are provided. Requests to utilize system resources in the distributed computing environment are stored in account queues corresponding to tenant accounts. If storing a request in an account queue would exceed a throttling threshold, such as a limit on the number of requests stored per account, the request is dropped to a throttling queue. A scheduler prioritizes processing requests stored in the throttling queue before processing requests stored in the account queues. The account queues can be drained using dominant resource scheduling. In some embodiments, a request is not picked up from an account queue if processing the request would exceed a predefined hard limit on system resource utilization for the corresponding tenant account. In some embodiments, the hard limit is defined as a percentage of the threads the system has available to process requests.
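The queueing behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the per-account limit of 4 and the class and method names are assumptions for the example.

```python
from collections import deque

class Scheduler:
    """Sketch of the throttling-queue scheduling described above."""

    def __init__(self, per_account_limit=4):
        self.per_account_limit = per_account_limit
        self.account_queues = {}          # tenant account -> deque of requests
        self.throttling_queue = deque()   # overflow requests, served first

    def submit(self, account, request):
        q = self.account_queues.setdefault(account, deque())
        if len(q) >= self.per_account_limit:
            # Storing would exceed the throttling threshold: drop the
            # request to the throttling queue instead of the account queue.
            self.throttling_queue.append((account, request))
        else:
            q.append(request)

    def next_request(self):
        # The throttling queue is drained before any account queue.
        if self.throttling_queue:
            return self.throttling_queue.popleft()
        for account, q in self.account_queues.items():
            if q:
                return (account, q.popleft())
        return None
```

A fuller sketch would also apply dominant resource scheduling when choosing among account queues and skip an account whose hard resource limit would be exceeded; those checks are omitted here for brevity.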

Edge device, control method, and program

An object of the present invention is to provide an edge device, a control method, and a program with which the effects of loop generation can be minimized on the network (NW) operator side while ensuring connection normality with the device of an NW user. An edge device according to the present invention physically closes an access port upon confirming that the access port has been connected to an external device while the access port is in a physically released state and its designated forwarder (DF) state is undefined. The edge device then notifies the other edge devices within an EVPN multihoming (MH) configuration that the host device has entered a state in which it may become the DF, and causes an EVPN function unit to calculate the DF state to be set. The edge device physically opens the access port when the calculation result indicates DF, and physically closes the access port when the calculation result indicates BDF.
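The port-control sequence above can be sketched as a small state machine. This is an illustrative sketch only; the helper names (`notify_peers`, `calculate_df_state`) and the two-state enum are assumptions, and the real DF election is performed by the EVPN control plane, not by the edge device itself.

```python
from enum import Enum

class DFState(Enum):
    DF = "designated forwarder"
    BDF = "backup designated forwarder"

class EdgeDevice:
    """Sketch of the access-port control flow described above."""

    def __init__(self, evpn_function_unit):
        self.evpn = evpn_function_unit
        self.port_open = True   # physically released, DF state undefined

    def on_external_device_connected(self):
        # 1. Physically close the port while the DF election runs.
        self.port_open = False
        # 2. Notify the other edge devices in the EVPN MH configuration
        #    that this device may become the DF.
        self.evpn.notify_peers()
        # 3. Have the EVPN function unit calculate the DF state to set.
        state = self.evpn.calculate_df_state()
        # 4. Open the port only if this device was elected DF.
        self.port_open = (state == DFState.DF)
        return state
```

Closing the port before the election and reopening it only on a DF result is what prevents a transient forwarding loop while still confirming that the user's device is reachable.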

Accelerated processing apparatus for transaction and method thereof

An accelerated transaction processing apparatus includes a memory for storing one or more instructions, a communication interface for communicating with a blockchain network, and a processor. The processor is configured to determine whether the blockchain network is in a congested state based on monitoring information about the blockchain network, adjust a batch size based on a result of the determination, and perform batch processing for one or more individual transactions using the adjusted batch size.
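The adjustment step above can be illustrated as follows. The halving/doubling policy and the size bounds are assumptions for the example; the abstract only states that the batch size is adjusted based on the congestion determination.

```python
def adjust_batch_size(current, congested, min_size=1, max_size=100):
    # Shrink the batch when the blockchain network is congested,
    # grow it otherwise (one plausible policy, bounded on both sides).
    if congested:
        return max(min_size, current // 2)
    return min(max_size, current * 2)

def process_in_batches(transactions, batch_size):
    # Group individual transactions into batches of the adjusted size.
    return [transactions[i:i + batch_size]
            for i in range(0, len(transactions), batch_size)]
```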

Systems and methods for dynamic adjustment of workspaces based on available local hardware

Systems and methods adjust workspaces based on available hardware resources of an IHS (Information Handling System) by which a user operates a workspace supported by a remote orchestration service. A security context and a productivity context of the IHS are determined based on reported context information. A workspace definition for providing access to a managed resource is selected based on the security context and the productivity context. A notification specifies a first hardware resource of the IHS that is not used by the workspace definition, such as a microphone or camera that has not been enabled for use by workspaces. A productivity improvement that would result from an updated productivity context that includes use of the first hardware resource is determined. Based on the productivity improvement, an updated workspace definition is selected that includes use of the first hardware resource in providing access to the managed resource via the IHS.

SYSTEMS AND METHODS FOR SERVER LOAD BALANCING
20230057832 · 2023-02-23

Methods and systems for balancing online stores across servers are provided. A level of customer activity associated with a particular online store in a plurality of online stores is monitored. Based on the level of customer activity, a demand-level condition is detected for the particular online store. Responsive to the detection of the demand-level condition for the particular online store, one or more of the plurality of online stores are moved from a first server of a plurality of servers to a second server of the plurality of servers.
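One possible rebalancing policy consistent with the abstract is sketched below: when a store crosses the demand threshold, the *other* stores sharing its server are moved away, freeing capacity for the busy store. The threshold, the data shapes, and the choice of which stores to move are all assumptions; the abstract leaves these open.

```python
def detect_demand(activity, threshold):
    # A demand-level condition holds when customer activity for a
    # store exceeds the threshold.
    return activity > threshold

def rebalance(placement, activity, threshold, target_server):
    """placement: store -> server; activity: store -> activity level.
    Returns a new placement with low-demand stores moved off any
    server hosting a high-demand store."""
    hot_servers = {placement[s] for s, a in activity.items()
                   if detect_demand(a, threshold)}
    moved = {}
    for store, server in placement.items():
        if server in hot_servers and not detect_demand(activity[store], threshold):
            moved[store] = target_server
    return {**placement, **moved}
```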

METHOD OF LOAD FORECASTING VIA ATTENTIVE KNOWLEDGE TRANSFER, AND AN APPARATUS FOR THE SAME

A method of forecasting a future load may include: obtaining source data sets and a target data set that have been collected from a plurality of source base stations and a target base station, respectively; among a plurality of source machine learning models, selecting at least one source machine learning model that has a traffic load prediction performance higher than that of a target machine learning model through a negative transfer analysis; obtaining model weights to be applied to the target machine learning model and the selected at least one source machine learning model via an attention neural network that is jointly trained with the target machine learning model and the selected source machine learning models; obtaining a load forecasting model for the target base station by combining the target machine learning model and the selected at least one source machine learning model according to the model weights; and predicting a future communication traffic load of the target base station based on the load forecasting model.
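The combination step can be sketched as an attention-weighted ensemble. In the method, the attention scores come from a jointly trained neural network; here they are plain inputs, and the softmax-plus-weighted-sum structure is the standard attention formulation assumed for illustration.

```python
import math

def attention_weights(scores):
    # Softmax over per-model attention scores, producing model weights
    # that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_forecast(predictions, scores):
    """Combine the target model's and the selected source models'
    load predictions according to the attention-derived model weights."""
    weights = attention_weights(scores)
    return sum(w * p for w, p in zip(weights, predictions))
```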

Distributed stream-based database triggers

Information describing changes to a collection of items maintained by a database may be stored in a log file. The information in the log file may be converted into a stream of records describing the changes. The records may be directed to a computing node selected for performing a trigger function in response to the change, based on applying a hash function to a portion of the record, identifying a hash space associated with a value output by the hash function, and mapping from the hash space to the selected computing node.
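The routing step above (hash a portion of the record, find the hash space containing the value, map that space to a node) can be sketched as follows. MD5 and the equal-sized partitioning are assumptions for the example; the abstract does not name a hash function or a partitioning scheme.

```python
import hashlib

def route_record(record_key, nodes):
    """Pick the computing node that should run the trigger function
    for a change record, based on a hash of part of the record."""
    # Hash a portion of the record (here, a key string) to a value
    # in a 128-bit hash space.
    digest = int(hashlib.md5(record_key.encode()).hexdigest(), 16)
    hash_space_size = 2 ** 128
    # Partition the hash space equally among the nodes and map the
    # partition containing the value to its owning node.
    partition_size = hash_space_size // len(nodes)
    index = min(digest // partition_size, len(nodes) - 1)
    return nodes[index]
```

Because the mapping is deterministic, all change records for the same key are directed to the same computing node, which keeps trigger execution for an item in one place.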

ENFORCEMENT OF TIME-BASED USER ACCESS LEVELS FOR COMPUTING ENVIRONMENTS
20220368651 · 2022-11-17

A system is provided for enforcing time-based user access levels in a computing infrastructure of an organization. The system includes a processor and a computer readable medium operably coupled thereto, to perform operations which include executing a synchronization of the time-based user access levels, obtaining a first login identifier (ID) of a plurality of login IDs for a group of employees of the organization, identifying a position ID and an employment status ID for the first login ID, determining a current time and a last login timestamp for the first login ID, determining a time-based access rule for the group of employees, determining whether a time period from the last login timestamp to the current time violates the time-based access rule, and setting, for the synchronization of the first login ID, at least a first access level of the first login ID to computing resources.
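The rule check at the core of the synchronization can be sketched as follows. The specific rule shown, revoking access after a maximum number of days without a login, is one plausible instance of a time-based access rule; the day limit and the level names are assumptions.

```python
from datetime import datetime, timedelta

def violates_time_rule(last_login, now, max_inactive_days):
    # The time period from the last login timestamp to the current
    # time violates the rule when it exceeds the allowed inactivity.
    return (now - last_login) > timedelta(days=max_inactive_days)

def sync_access_level(last_login, now, max_inactive_days,
                      normal_level="standard", restricted_level="revoked"):
    """Set the login ID's access level during synchronization based
    on whether the time-based access rule is violated."""
    if violates_time_rule(last_login, now, max_inactive_days):
        return restricted_level
    return normal_level
```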

Resource allocating and scheduling for a network

A device is configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine that location information for the network device is within a predetermined distance from the location information for the resource, to determine that a received resource identifier matches the resource identifier for the resource, and to determine that a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
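The allocation record and the three verification checks can be sketched as follows. The dictionary layout, the Euclidean distance over coordinates, and the token format are assumptions for illustration.

```python
import math
import secrets
import time

def make_allocation(user, resource_id, location, valid_seconds):
    """Build a resource allocation: user, resource location, resource
    identifier, token value, and a use-by deadline."""
    return {
        "user": user,
        "resource_id": resource_id,
        "location": location,          # (x, y) coordinates
        "token": secrets.token_hex(8),
        "deadline": time.time() + valid_seconds,
    }

def verify_reservation(allocation, device_location, resource_id,
                       now, max_distance):
    # Check 1: the network device is within the predetermined
    # distance from the resource's location.
    dx = device_location[0] - allocation["location"][0]
    dy = device_location[1] - allocation["location"][1]
    close_enough = math.hypot(dx, dy) <= max_distance
    # Check 2: the presented resource identifier matches.
    id_matches = resource_id == allocation["resource_id"]
    # Check 3: the current time is within the time interval.
    in_time = now <= allocation["deadline"]
    return close_enough and id_matches and in_time
```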

Capacity optimization in an automated resource-exchange system

The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities, based on attribute values associated with the needed resources, the resource providers, and the resource consumers. The resource-exchange system monitors and controls resource exchanges on behalf of participants in the resource-exchange system in order to optimize resource usage within participant data centers and computing facilities. By optimizing resource usage, the resource-exchange system drives participant data centers and computing facilities towards maximum operational efficiency.