G06F2209/502

System and method for determining a file for an interaction with a wearable device based on utility indicators

A system for query processing of a frequency of utility indicators comprises a processor operable to receive a transmission from a first wearable device comprising entity file information associated with a first entity. The processor is operable to generate a file vector comprising one or more files of a digital folder based on an association with one or more utility indicators and to determine that one of the files corresponds to a greater number of the one or more utility indicators than the remaining files based, at least in part, on the entity file information. The processor is operable to assign the determined file as a first file within the file vector and to send a transmission to the first wearable device comprising the file vector and an indication to utilize the first file in an interaction between a first user and the first entity.
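The selection step described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the file names, indicator sets, and scoring are all made up, and the abstract's "file vector" is modeled as a simple ordered list.

```python
# Hypothetical sketch: pick the file in a digital folder that matches the
# most utility indicators, then order the file vector so that file is first.

def build_file_vector(files, utility_indicators):
    """Return file names ordered so the best-matching file comes first.

    files: dict mapping file name -> set of indicators associated with it
    utility_indicators: set of indicators derived from the entity file info
    """
    def match_count(name):
        # Number of utility indicators this file is associated with.
        return len(files[name] & utility_indicators)

    # Sort descending by matching-indicator count; index 0 is the "first file".
    return sorted(files, key=match_count, reverse=True)

folder = {
    "report.pdf":  {"recent", "shared", "entity_a"},
    "notes.txt":   {"recent"},
    "budget.xlsx": {"entity_a"},
}
vector = build_file_vector(folder, {"recent", "entity_a"})
# vector[0] is "report.pdf" (two matching indicators)
```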

Task scheduling for machine-learning workloads
11544113 · 2023-01-03

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for scheduling tasks of ML workloads. A system receives requests to perform the workloads and determines, based on the requests, resource requirements to perform the workloads. The system includes multiple hosts and each host includes multiple accelerators. The system determines a quantity of hosts assigned to execute tasks of the workload based on the resource requirements and the number of accelerators for each host. For each host in the quantity of hosts, the system generates a task specification based on a memory access topology of the host. The specification specifies the task to be executed at the host using resources of the host that include the multiple accelerators. The system provides the task specifications to the hosts and performs the workloads when each host executes assigned tasks specified in the task specifications for the host.
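The host-count and task-specification steps can be sketched roughly as below. This is an assumption-laden illustration: the workload id, the single-number "accelerator requirement", and the spec fields are invented, and the memory-access-topology encoding the abstract mentions is omitted.

```python
import math

# Hypothetical sketch: given a workload's accelerator requirement and the
# accelerators available per host, compute how many hosts are needed and
# emit one task specification per assigned host.

def hosts_needed(required_accelerators, accelerators_per_host):
    return math.ceil(required_accelerators / accelerators_per_host)

def make_task_specs(workload_id, required_accelerators, accelerators_per_host):
    n = hosts_needed(required_accelerators, accelerators_per_host)
    specs = []
    remaining = required_accelerators
    for host_index in range(n):
        use = min(accelerators_per_host, remaining)
        # A real system would also encode the host's memory-access topology.
        specs.append({"workload": workload_id,
                      "host": host_index,
                      "accelerators": use})
        remaining -= use
    return specs

specs = make_task_specs("wl-1", required_accelerators=10, accelerators_per_host=4)
# 3 hosts are assigned, using 4 + 4 + 2 accelerators
```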

POSITIONING OF EDGE COMPUTING DEVICES

A processor may receive user data associated with one or more locations of a user in an environment. The processor may receive edge computing data associated with utilization of edge computing resources by the user. The processor may analyze the edge computing data to associate a context with an edge computing resource need. The processor may analyze the user data to associate a context with a location of the user within the environment. The processor may determine a first location of the user in the environment at a first time. The processor may predict a first edge computing need of the user in the first location. The processor may determine an arrangement of one or more edge computing devices configured to meet the first edge computing need of the user at the first time.
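A minimal sketch of the prediction-and-placement idea follows, under stated assumptions: the context-to-location and context-to-need associations are given as an already-learned table, and a single per-device capacity number stands in for the arrangement determination. All names and numbers are illustrative.

```python
# Hypothetical sketch: associate contexts learned from historical user and
# edge-computing data with a location and a resource need, then choose how
# many edge devices to place near the user's predicted location.

# Learned associations: context -> (typical location, resource need in units)
context_model = {
    "morning_standup": ("conference_room", 2),
    "lab_session":     ("lab",             5),
}

DEVICE_CAPACITY = 2  # resource units one edge device can serve (assumed)

def plan_devices(context):
    location, need = context_model[context]
    devices = -(-need // DEVICE_CAPACITY)  # ceiling division
    return {"location": location, "devices": devices}

plan = plan_devices("lab_session")
# {'location': 'lab', 'devices': 3}
```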

Thread associated memory allocation and memory architecture aware allocation
11520633 · 2022-12-06

A method and system for thread aware, class aware, and topology aware memory allocations. Embodiments include a compiler configured to generate compiled code (e.g., for a runtime) that when executed allocates memory on a per class per thread basis that is system topology (e.g., for non-uniform memory architecture (NUMA)) aware. Embodiments can further include an executable configured to allocate a respective memory pool during runtime for each instance of a class for each thread. The memory pools are local to a respective processor, core, etc., where each thread executes.
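The per-class, per-thread pooling idea can be sketched as below. This is a loose Python analogy, not the compiled-code mechanism the abstract describes: each (thread, class) pair gets its own free list so allocations never contend across threads, and the NUMA placement step, pinning each pool to the memory node local to the thread's CPU, is noted but omitted.

```python
import threading
from collections import defaultdict

# Hypothetical sketch of per-class, per-thread memory pooling.
# (thread id, class) -> free list of recycled objects
_pools = defaultdict(list)

def pool_alloc(cls):
    # Each thread draws from its own pool for this class, so there is no
    # cross-thread contention on the free list.
    free = _pools[(threading.get_ident(), cls)]
    return free.pop() if free else cls()

def pool_free(obj):
    _pools[(threading.get_ident(), type(obj))].append(obj)

class Node:
    pass

a = pool_alloc(Node)   # pool empty: a fresh Node is constructed
pool_free(a)           # returned to this thread's Node pool
b = pool_alloc(Node)   # recycled: same object comes back on this thread
```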

Serverless function colocation with storage pools
11513860 · 2022-11-29

Methods and systems are provided for assigning nodes to execute functions in a serverless computing environment. In one embodiment, a method is provided that includes receiving a function for execution in a serverless computing environment and identifying a storage pool needed during execution of the function. The serverless computing environment may include nodes for executing functions and a first set of nodes may be identified that implement the storage pool. Colocation measures may be determined between the first set of nodes and a second set of nodes. Available computing resources may be determined for the second set of nodes, such as available processing cores and available memory. The second set of nodes may be ranked according to the colocation measures and the available computing resources and a first node may be selected based on the ranking. The first node may be assigned to execute the function.
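The ranking step can be sketched as follows. The scoring weights, the rack-based colocation measure, and the node fields are all assumptions for illustration; the patent does not specify how colocation and available resources are combined.

```python
# Hypothetical sketch: score each candidate node by its colocation with the
# storage-pool nodes plus its spare CPU cores and memory, then assign the
# function to the top-ranked node.

def rank_nodes(candidates, storage_racks):
    """candidates: list of dicts with name, rack, free_cores, free_mem_gb.
    storage_racks: set of rack ids hosting the storage pool."""
    def score(node):
        colocation = 1.0 if node["rack"] in storage_racks else 0.0
        # Illustrative weighting: colocation dominates, then cores, then memory.
        return 10 * colocation + node["free_cores"] + 0.1 * node["free_mem_gb"]
    return sorted(candidates, key=score, reverse=True)

nodes = [
    {"name": "n1", "rack": "r1", "free_cores": 2, "free_mem_gb": 8},
    {"name": "n2", "rack": "r2", "free_cores": 8, "free_mem_gb": 32},
    {"name": "n3", "rack": "r1", "free_cores": 4, "free_mem_gb": 16},
]
chosen = rank_nodes(nodes, storage_racks={"r1"})[0]
# n3 wins: colocated with the pool and better resourced than n1
```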

User presence prediction driven device management

Pooling computing resources based on inferences about a plurality of hardware devices. The method includes identifying inference information about the plurality of devices. The method further includes, based on the inference information, optimizing resource usage of the plurality of hardware devices.

Method and system for generating latency aware workloads using resource devices in a resource device pool

A method for managing data includes obtaining, by a management module, a workload generation request, wherein the workload generation request specifies a plurality of resource devices, identifying available resource devices in a resource device pool based on the plurality of resource devices, performing a latency analysis on the available resource devices to obtain a plurality of resource device combinations and a total latency cost of each resource device combination, and selecting a resource device combination of the plurality of resource device combinations based on the total latency cost of each resource device combination, wherein the resource device combination comprises a second plurality of resource devices and wherein each of the second plurality of resource devices is one of the plurality of resource devices.
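The selection step can be sketched as a brute-force search, which is one plausible reading of the abstract rather than the claimed method: the pairwise latency table is invented, and the total latency cost of a combination is assumed to be the sum of its pairwise costs.

```python
from itertools import combinations

# Hypothetical sketch: enumerate combinations of available resource devices
# of the requested size, total their pairwise latency costs, and select the
# combination with the lowest total latency cost.

latency = {  # symmetric pairwise latency cost between devices (ms, made up)
    ("a", "b"): 1, ("a", "c"): 5, ("b", "c"): 2,
    ("a", "d"): 4, ("b", "d"): 7, ("c", "d"): 3,
}

def pair_cost(x, y):
    return latency.get((x, y), latency.get((y, x)))

def total_latency(combo):
    return sum(pair_cost(x, y) for x, y in combinations(combo, 2))

def select_combination(devices, size):
    return min(combinations(devices, size), key=total_latency)

best = select_combination(["a", "b", "c", "d"], 2)
# ('a', 'b'), with total latency cost 1
```

A real pool would prune this search, since the number of combinations grows combinatorially with pool size.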

SERVICES THREAD SCHEDULING BASED UPON THREAD TRACING

One embodiment provides a method, including: producing, for each of a plurality of containers, a resource profile for each thread in each of the plurality of containers; identifying, for each of the plurality of containers and from, at least in part, the resource profiles, container dependencies between threads on a single one of the plurality of containers; determining service dependencies between threads across different ones of the plurality of containers; scheduling, based upon the container dependencies and the service dependencies, threads to cores, wherein the scheduling is based upon minimizing thread processing times; and publishing the container dependencies and the service dependencies on a registry of the node clusters.
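The scheduling step can be sketched with a simple greedy placement, an illustrative stand-in for whatever optimization the claims cover: a thread joins the core of a thread it depends on (container or service dependency alike), which keeps dependent threads together and reduces cross-core communication. Thread names and dependencies are made up.

```python
# Hypothetical sketch: greedy dependency-aware thread-to-core placement.

def schedule(threads, dependencies, num_cores):
    """dependencies: list of (thread, thread) pairs from tracing."""
    placement = {}
    next_core = 0
    for t in threads:
        # Find an already-placed thread that t shares a dependency with.
        partner = next((p for a, b in dependencies if t in (a, b)
                        for p in (a, b) if p != t and p in placement), None)
        if partner is not None:
            placement[t] = placement[partner]   # co-locate dependent threads
        else:
            placement[t] = next_core % num_cores
            next_core += 1
    return placement

plan = schedule(["t1", "t2", "t3", "t4"],
                dependencies=[("t1", "t3"), ("t2", "t4")],
                num_cores=2)
# t1/t3 share one core; t2/t4 share the other
```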

SERVERLESS FUNCTION COLOCATION WITH STORAGE POOLS
20230100484 · 2023-03-30

Methods and systems are provided for assigning nodes to execute functions in a serverless computing environment. In one embodiment, a method is provided that includes receiving a function for execution in a serverless computing environment and identifying a storage pool needed during execution of the function. The serverless computing environment may include nodes for executing functions and a first set of nodes may be identified that implement the storage pool. Colocation measures may be determined between the first set of nodes and a second set of nodes. Available computing resources may be determined for the second set of nodes, such as available processing cores and available memory. The second set of nodes may be ranked according to the colocation measures and the available computing resources and a first node may be selected based on the ranking. The first node may be assigned to execute the function.

MAINTAINING SESSIONS INFORMATION IN MULTI-REGION CLOUD ENVIRONMENT

Techniques are described that enable, in a multi-region cloud environment, information regarding one or more tenancy sessions that a network access program (e.g., a browser) participates in to be efficiently stored in a centralized location. The centrally stored sessions information can then be used for various purposes such as for restricting the number of tenancy sessions using a network access program, sessions cleanup, and other sessions-related tasks. In certain implementations, the centrally stored sessions information is used to prevent the network access program from opening multiple sessions for the same tenancy. In such implementations, for a particular tenancy, the network access program is allowed to have only one active session for the particular tenancy at a time. The centrally stored sessions information facilitates efficient sessions management including session cleanup after a session is closed.
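The single-session-per-tenancy rule can be sketched with a centralized store, keeping in mind that the class, keys, and tokens here are illustrative stand-ins for the multi-region storage the techniques actually describe.

```python
# Hypothetical sketch: a central store keeps one session record per
# (browser, tenancy) key; opening a second session for the same tenancy is
# rejected until the first is closed and cleaned up.

class SessionStore:
    def __init__(self):
        self._sessions = {}  # (browser_id, tenancy_id) -> session token

    def open(self, browser_id, tenancy_id, token):
        key = (browser_id, tenancy_id)
        if key in self._sessions:
            return False  # one active session per tenancy already exists
        self._sessions[key] = token
        return True

    def close(self, browser_id, tenancy_id):
        # Session cleanup after the session is closed.
        self._sessions.pop((browser_id, tenancy_id), None)

store = SessionStore()
first = store.open("br-1", "tenant-a", "tok-1")    # allowed
second = store.open("br-1", "tenant-a", "tok-2")   # rejected: already active
store.close("br-1", "tenant-a")
third = store.open("br-1", "tenant-a", "tok-3")    # allowed again
```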