G06F9/545

MULTI-PASS PERFORMANCE PROFILING
20230222017 · 2023-07-13 ·

Apparatuses, systems, and techniques to collect compute performance information. In at least one embodiment, an API is performed to cause two or more portions of at least one software program to be concurrently performed a plurality of times in order to generate one or more performance metrics.
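The multi-pass idea above can be sketched in Python: run two or more program portions concurrently, repeat for several passes, and reduce the timings into a metric. This is an illustrative stand-in, not the patented API; the function names and the wall-clock metric are assumptions.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def _timed(fn):
    """Wall-clock duration of one execution of a program portion."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def profile_concurrent(portions, passes=3):
    """Concurrently perform each portion, repeating for several passes,
    and return a mean-runtime performance metric per portion."""
    metrics = {fn.__name__: [] for fn in portions}
    for _ in range(passes):
        with ThreadPoolExecutor(max_workers=len(portions)) as pool:
            futures = {fn.__name__: pool.submit(_timed, fn) for fn in portions}
            for name, fut in futures.items():
                metrics[name].append(fut.result())
    return {name: statistics.mean(times) for name, times in metrics.items()}
```

Running the same portions several times smooths out warm-up and scheduling noise, which is the usual reason for multi-pass profiling.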

SYSTEM AND METHOD FOR AUTOMATICALLY MONITORING PERFORMANCE OF SOFTWARE ROBOTS

Various methods, apparatuses/systems, and media for automatically monitoring the performance of multiple bots (software robots) are disclosed. A processor hosts a plurality of bots on a virtual machine, each bot having a unique process identifier on the virtual machine for processing tasks associated with a plurality of applications and each bot having its own configured instance; integrates the plurality of bots with a plurality of data sources via a communication interface; calls a corresponding application programming interface (API) to access data from each of the plurality of data sources; integrates all data accessed from each of the plurality of data sources into a single platform; automatically generates, in response to integrating all accessed data, performance metrics for each bot; and displays the performance metrics on a graphical user interface (GUI) for continuous monitoring of each bot's performance and automatic execution of remedial actions as necessary.
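A minimal Python sketch of the pipeline described above, assuming each data source is exposed as a callable API and the "single platform" is a flat event list; all names and the success-rate metric are illustrative, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Bot:
    pid: int                  # unique process identifier on the VM
    name: str
    completed: int = 0
    failed: int = 0

def integrate_sources(sources):
    """Call each data source's API and merge the task records it
    returns into a single platform (here, one flat list)."""
    events = []
    for fetch in sources:
        events.extend(fetch())
    return events

def performance_metrics(bots, events):
    """Generate a success-rate metric per bot from the integrated data."""
    for e in events:
        bot = bots[e["pid"]]
        if e["ok"]:
            bot.completed += 1
        else:
            bot.failed += 1
    return {b.name: b.completed / max(1, b.completed + b.failed)
            for b in bots.values()}
```

A GUI layer would poll these metrics and trigger remedial actions (e.g. restarting a bot) when a rate falls below a threshold.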

MACHINE LEARNING NOTEBOOK CELL OBFUSCATION

Embodiments securely share a machine learning (“ML”) notebook, comprising a plurality of cells, over a cloud network. Embodiments receive the ML notebook with one or more of the cells designated as masked cells. Embodiments encrypt each masked cell and hash the encrypted cell to produce a corresponding hash. Embodiments store the hashed masked cell together with one or more identities of users who can use the hash to execute the masked cell.
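The encrypt-hash-authorize flow can be sketched as follows. This is a toy: the XOR "cipher" merely stands in for a real cipher such as AES, and the storage schema and function names are assumptions, not the patented design.

```python
import hashlib

def mask_cell(source: str, key: bytes, allowed_users: list):
    """Encrypt a masked cell (toy XOR cipher as a stand-in), hash the
    ciphertext, and record which user identities may use the hash to
    execute the cell."""
    data = source.encode()
    ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    digest = hashlib.sha256(ciphertext).hexdigest()
    return {"cipher": ciphertext, "hash": digest, "users": set(allowed_users)}

def execute_masked(store, user: str, key: bytes):
    """Only listed identities may decrypt the cell; the hash verifies
    that the stored ciphertext was not tampered with."""
    if user not in store["users"]:
        raise PermissionError(user)
    if hashlib.sha256(store["cipher"]).hexdigest() != store["hash"]:
        raise ValueError("cell hash mismatch")
    return bytes(b ^ key[i % len(key)]
                 for i, b in enumerate(store["cipher"])).decode()
```

Hashing the ciphertext rather than the plaintext lets the cloud side verify integrity without ever seeing the cell's source.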

Application interface implementation method in a host platform layer, device, and medium

Provided are an application interface implementation method and apparatus in a host platform layer, a device, and a storage medium, which relate to the field of computer technologies. The implementation scheme includes: the host platform layer acquiring description data of a terminal capability interface; parsing the description data to acquire a communication mode of the terminal capability interface; and, according to the communication mode, configuring a corresponding processor, a corresponding concept mapping relationship, or both, to encapsulate a platform layer interface of the terminal capability interface, where the platform layer interface is configured to process data of communication interactions in a process in which a mini program calls the terminal capability interface through the host platform layer.
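A hedged Python sketch of the scheme: the communication modes, processor table, and the `register`/`call` API are hypothetical names chosen for illustration; only the pattern (parse a description, pick a processor by mode, apply a concept mapping) comes from the abstract.

```python
SYNC, ASYNC, EVENT = "sync", "async", "event"   # illustrative modes

class HostPlatformLayer:
    """Parses interface description data, reads its communication mode,
    and configures a matching processor plus a concept-mapping table
    encapsulating the terminal capability interface."""

    PROCESSORS = {
        SYNC:  lambda payload: {"result": payload},
        ASYNC: lambda payload: {"ticket": id(payload)},
        EVENT: lambda payload: {"queued": True},
    }

    def __init__(self):
        self.interfaces = {}

    def register(self, description: dict):
        mode = description["mode"]            # parsed from description data
        self.interfaces[description["name"]] = {
            "processor": self.PROCESSORS[mode],
            # concept mapping: mini-program field name -> host field name
            "mapping": description.get("mapping", {}),
        }

    def call(self, name: str, payload: dict):
        """What runs when a mini program calls the capability interface
        through the host platform layer."""
        iface = self.interfaces[name]
        mapped = {iface["mapping"].get(k, k): v for k, v in payload.items()}
        return iface["processor"](mapped)
```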

System and method for page table caching memory

A processing system includes a processor, a memory, and an operating system that are used to allocate a page table caching memory object (PTCM) for a user of the processing system. An allocation of the PTCM is requested from a PTCM allocation system. In order to allocate the PTCM, a plurality of physical memory pages from a memory are allocated to store a PTCM page table that is associated with the PTCM. A lockable region of a cache is designated to hold a copy of the PTCM page table, after which the lockable region of the cache is subsequently locked. The PTCM page table is populated with page table entries associated with the PTCM and copied to the locked region of the cache.
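The allocation sequence reads well as pseudocode, so here is a toy Python model of it: allocate physical pages for the PTCM page table, populate the table, then lock a cache region and copy the table into it. The `Cache` class and all names are illustrative, not the patented hardware interface.

```python
class Cache:
    """Minimal cache model with a lockable (non-evictable) region."""
    def __init__(self):
        self.lines = {}
        self.locked = set()

    def lock_region(self, tag):
        self.locked.add(tag)        # locked lines are never evicted

    def fill(self, tag, data):
        self.lines[tag] = dict(data)

def allocate_ptcm(num_pages, cache):
    # 1. allocate physical memory pages to store the PTCM page table
    physical_pages = list(range(num_pages))
    # 2. populate the page table with entries associated with the PTCM
    page_table = {vpn: ppn for vpn, ppn in enumerate(physical_pages)}
    # 3. designate and lock a cache region, then copy the table into it
    cache.lock_region("ptcm")
    cache.fill("ptcm", page_table)
    return page_table
```

Pinning the page table in the cache is what gives the PTCM its benefit: page-table walks for that region never miss to memory.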

System and method for controlling inter-application association through contextual policy control
11693954 · 2023-07-04 ·

A method for controlling the interoperation of a plurality of software applications and resources includes intercepting communications from a first application to a second application or resource, directing the communication to a context management system, generating a candidate list of contexts for the communication, evaluating the candidate list according to at least one policy defined for these contexts to identify the resultant action and namespace for the communication, and performing the action as defined by the policies within the identified namespace. The method further includes tracking one or more versions of the second application, as well as tracking an evolution of application and/or resource names. The method further includes identifying one or more operations associated with a context on the candidate list, and executing the identified operations prior to a further communication.
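The intercept-evaluate-act loop can be sketched as below; the context and policy record shapes are assumptions made for illustration, not the claimed data model.

```python
def generate_candidates(message, contexts):
    """Candidate list: every context whose filter matches the message."""
    return [c for c in contexts if c["match"](message)]

def evaluate(candidates, policies):
    """Evaluate the candidate list against per-context policies to
    identify the resultant action and namespace for the communication."""
    for ctx in candidates:
        policy = policies.get(ctx["name"])
        if policy is not None:
            return policy["action"], policy["namespace"]
    return "deny", None          # default when no policy applies

def intercept(message, contexts, policies):
    """Intercept a first-to-second-application communication and route
    it through the context management system."""
    action, namespace = evaluate(generate_candidates(message, contexts),
                                 policies)
    return {"action": action, "namespace": namespace, "message": message}
```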

Integration and transformation framework
11695848 · 2023-07-04 ·

A solution is provided that significantly reduces the complexity of existing approaches to creating interfaces in a computer system. More particularly, the middleware common to such solutions is removed and a low-level approach is taken in which customer-specific logic is processed on an Extensible Stylesheet Language Transformations (XSLT) processor instead. Additional transformations may also be imported from external XSLT editors, making it easy to generate a backend configuration for interfaces.

MEMORY ALLOCATION USING GRAPHS

Apparatuses, systems, and techniques to generate one or more graph code nodes to allocate memory. In at least one embodiment, one or more graph code nodes to allocate memory are generated, based on, for example, CUDA or other parallel computing platform code.

MEMORY DEALLOCATION USING GRAPHS

Apparatuses, systems, and techniques to generate one or more graph code nodes to deallocate memory. In at least one embodiment, one or more graph code nodes to deallocate memory are generated, based on, for example, CUDA or other parallel computing platform code.
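In CUDA these two abstracts correspond to adding memory-allocation and memory-free nodes to a task graph (the CUDA Graph API exposes `cudaGraphAddMemAllocNode` and `cudaGraphAddMemFreeNode`); the Python mock below only illustrates the idea that allocation and deallocation become nodes executed when the graph is launched, and is not the CUDA API itself.

```python
class Graph:
    """Toy task graph whose nodes allocate or free named buffers."""
    def __init__(self):
        self.nodes = []

    def add_alloc_node(self, name, size):
        """Graph code node that allocates `size` bytes at launch time."""
        self.nodes.append(("alloc", name, size))

    def add_free_node(self, name):
        """Graph code node that deallocates a previously allocated buffer."""
        self.nodes.append(("free", name, None))

    def launch(self):
        """Execute nodes in order (assumed already topologically sorted)
        and return the buffers still live after the launch."""
        heap = {}
        for kind, name, size in self.nodes:
            if kind == "alloc":
                heap[name] = bytearray(size)
            else:
                del heap[name]
        return heap
```

Encoding allocation and deallocation as graph nodes lets the runtime plan memory reuse for the whole graph before any kernel runs.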

Recommendations for scheduling jobs on distributed computing devices

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scheduling operations represented as a computational graph on a distributed computing network. A method includes: receiving data representing operations to be executed in order to perform a job on a plurality of hardware accelerators of a plurality of different accelerator types; generating, for the job and from at least the data representing the operations, features that represent a predicted performance for the job on hardware accelerators of the plurality of different accelerator types; generating, from the features, a respective predicted performance metric for the job for each of the plurality of different accelerator types according to a performance objective function; and providing, to a scheduling system, one or more recommendations for scheduling the job on one or more recommended types of hardware accelerators.
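A hedged sketch of the recommendation step: the feature (total FLOPs scaled by a per-type throughput) and the multiplicative runtime-times-cost objective are stand-ins for the learned features and objective function the abstract describes.

```python
def job_features(ops, accel_types):
    """Predicted-performance feature per accelerator type: estimated
    runtime from total FLOPs and an assumed per-type throughput."""
    flops = sum(op["flops"] for op in ops)
    return {t: flops / spec["tflops"] for t, spec in accel_types.items()}

def recommend(ops, accel_types,
              objective=lambda runtime, cost: runtime * cost):
    """Score each accelerator type with the performance objective
    (lower is better) and return types ranked best-first, as
    recommendations for the scheduling system."""
    feats = job_features(ops, accel_types)
    scores = {t: objective(feats[t], accel_types[t]["cost"])
              for t in feats}
    return sorted(scores, key=scores.get)
```

The scheduler would then place the job on the first recommended type with available capacity.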