Patent classifications
G06F9/50
AUTOMATED SYNTHESIS OF REFERENCE POLICIES FOR RUNTIME MICROSERVICE PROTECTION
A method, apparatus and computer program product for automated security policy synthesis and use in a container environment. In this approach, a binary analysis of a program associated with a container image is carried out within a binary analysis platform. During the binary analysis, the program is micro-executed directly inside the analysis platform to generate a graph that summarizes the program's expected interactions within the run-time container environment. The expected interactions are identified by analysis of one or more system calls and their arguments found during micro-executing the program. Once the graph is created, a security policy is then automatically synthesized from the graph and instantiated into the container environment. The policy embeds at least one system call argument. During run-time monitoring of an event sequence associated with the program executing in the container environment, an action is taken when the event sequence is determined to violate the security policy.
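The flow this abstract describes can be sketched in miniature: a graph of (system call, argument) interactions observed during micro-execution is flattened into an allow-list policy that embeds the arguments, and runtime events are checked against it. All names (`synthesize_policy`, `check_event`) and the set-based policy shape are illustrative assumptions, not the patent's actual implementation.

```python
def synthesize_policy(syscall_graph):
    """Flatten observed (syscall, args) interactions into an allow-list policy."""
    policy = set()
    for syscall, args in syscall_graph:
        policy.add((syscall, tuple(sorted(args.items()))))
    return policy

def check_event(policy, syscall, args):
    """Return True if the runtime event matches an expected interaction."""
    return (syscall, tuple(sorted(args.items()))) in policy

# Expected interactions identified during micro-execution of the program.
observed = [
    ("openat", {"path": "/etc/app.conf", "flags": "O_RDONLY"}),
    ("connect", {"port": 443}),
]
policy = synthesize_policy(observed)

assert check_event(policy, "connect", {"port": 443})       # expected interaction
assert not check_event(policy, "connect", {"port": 4444})  # violation: take action
```

A real system would express such a policy as, for example, a seccomp-style filter; the point here is only that the embedded arguments, not just the syscall names, participate in the match.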
APPARATUSES AND METHODS FOR SCHEDULING COMPUTING RESOURCES
Apparatus and methods for scheduling computing resources are disclosed that facilitate cooperation between resource managers in the resource layer and workload schedulers in the workload layer, so that resource managers can efficiently manage and schedule resources for horizontal and vertical scaling on physical hosts shared among workload schedulers to run workloads.
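A minimal illustration of the cooperation the abstract describes, with all class and method names assumed: a resource-layer manager grants capacity on a shared physical host to competing workload-layer schedulers, which scale out as far as the grant allows.

```python
class ResourceManager:
    """Resource layer: owns free capacity on a shared physical host."""
    def __init__(self, host_capacity):
        self.free = host_capacity  # CPU units free on the shared host

    def scale(self, requested):
        """Grant as much of the request as remaining capacity allows."""
        granted = min(requested, self.free)
        self.free -= granted
        return granted

class WorkloadScheduler:
    """Workload layer: requests capacity from the resource manager."""
    def __init__(self, manager):
        self.manager = manager
        self.allocated = 0

    def scale_out(self, cpus):  # horizontal scaling: capacity for more workloads
        self.allocated += self.manager.scale(cpus)

rm = ResourceManager(host_capacity=16)
a, b = WorkloadScheduler(rm), WorkloadScheduler(rm)
a.scale_out(10)
b.scale_out(10)  # only 6 units remain on the shared host
assert (a.allocated, b.allocated, rm.free) == (10, 6, 0)
```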
PREDICTIVE SCALING OF CONTAINER ORCHESTRATION PLATFORMS
Systems, methods, and computer programming products leveraging recurrent neural network architectures to proactively predict workload demand of container orchestration platforms. The platform continuously collects metric data from clusters of the platform and trains multiple parallel neural networks with different architectures to predict future platform workload demands. At periodic intervals, the registered neural networks in consideration for controlling the scaling operations of the platform are compared against one another to identify the neural network demonstrating the highest performance and/or most accurate workload prediction strategy for scaling the orchestration platform. The selected neural network is enforced as the controller for the platform to implement the workload prediction strategy. The neural network controller enforced by the platform predictively scales up or down the number of pods within nodes of the platform and/or the number of clusters providing computational resources to the platform, in anticipation of future increased or decreased end user demand.
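The selection step above can be sketched as follows, with simple stand-in predictors in place of the parallel RNN architectures (all names and the error metric are assumptions): candidate models are scored against recent observed demand, the most accurate one becomes the controller, and its forecast drives the pod count.

```python
def select_controller(predictors, history, actual_next):
    """Pick the predictor with the lowest absolute error on the last interval."""
    return min(predictors, key=lambda p: abs(p(history) - actual_next))

def scale_pods(controller, history, pods_per_unit=1):
    """Scale the pod count to the controller's demand forecast."""
    return max(1, round(controller(history) * pods_per_unit))

# Toy stand-ins for trained networks of different architectures.
mean_model = lambda h: sum(h) / len(h)
last_model = lambda h: h[-1]

history = [100, 120, 140, 160]
controller = select_controller([mean_model, last_model], history[:-1], history[-1])
assert controller is last_model           # 140 is closer to 160 than 120 is
assert scale_pods(controller, history) == 160
```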
DATABASE REPLICATION USING HETEROGENEOUS ENCODING
Embodiments of the invention are directed to database replication using heterogeneous encoding. Aspects include obtaining a database and analyzing a data pattern of data in the database. Aspects also include identifying a plurality of candidate encoding formats and evaluating a computing cost for encoding the database for each of the plurality of candidate encoding formats. Aspects further include selecting an encoding format from the plurality of candidate encoding formats based at least in part on the computing cost and storing a backup copy of the database using the encoding format.
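The selection logic reduces to: estimate a computing cost for each candidate encoding against the data's pattern and keep the cheapest for the backup copy. A hedged sketch, where the function names, the size-based cost model, and the toy encoders are all assumptions:

```python
def encoding_cost(rows, encoder):
    """Computing cost modeled here simply as total encoded size."""
    return sum(len(encoder(r)) for r in rows)

def pick_encoding(rows, candidates):
    """Select the candidate encoding format with the lowest cost."""
    return min(candidates, key=lambda name: encoding_cost(rows, candidates[name]))

rows = ["aaaa", "aaab", "aaac"]
candidates = {
    "raw": lambda r: r,
    # A run-length-style encoding wins on repetitive data patterns like these.
    "rle": lambda r: "".join(f"{c}{r.count(c)}" for c in dict.fromkeys(r)),
}
assert pick_encoding(rows, candidates) == "rle"
```

In practice the cost model could also weigh CPU time to encode and decode, not just the stored size.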
MULTILAYER PROCESSING ENGINE IN A DATA ANALYTICS SYSTEM
Methods, systems, and computer storage media for providing a multilayer processing engine of a multilayer processing system. The multilayer processing engine supports an event layer, a metadata layer, and a multi-tier processing layer. The metadata layer can refer to a functional layer that operates via a sequential hierarchy of functional layers (i.e., event layer and multi-tier processing layer) to analyze incoming event streams and configure a downstream processing configuration. The metadata layer provides for dynamic metadata-based configuration of downstream processing of data associated with the event layer and the multi-tier processing layer. The multilayer processing system can be a data analytics system—operating via a serverless distributed computing system. The data analytics system implements the multilayer processing engine as a serverless data analytics management engine for processing high frequency data at scale based on dynamically-generated processing code—generated based on a downstream processing configuration—that supports automatically processing the data.
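The metadata-driven pipeline the abstract describes can be illustrated in miniature (every name here is hypothetical): the metadata layer inspects incoming events to derive a downstream processing configuration, and processing code is then generated from that configuration.

```python
def configure_downstream(events):
    """Metadata layer: derive a processing configuration from event fields."""
    fields = set().union(*(e.keys() for e in events))
    numeric = [f for f in sorted(fields)
               if all(isinstance(e.get(f, 0), (int, float)) for e in events)]
    return {"numeric_fields": numeric}

def generate_processor(config):
    """Multi-tier layer: generate processing code from the configuration."""
    def process(event):
        # Toy downstream step: double every configured numeric field.
        return {f: event[f] * 2 for f in config["numeric_fields"] if f in event}
    return process

events = [{"temp": 21, "site": "a"}, {"temp": 23, "site": "b"}]
config = configure_downstream(events)
process = generate_processor(config)
assert config["numeric_fields"] == ["temp"]
assert process(events[0]) == {"temp": 42}
```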
CALL AND RETURN INSTRUCTIONS FOR CONFIGURABLE REGISTER CONTEXT SAVE AND RESTORE
Systems, devices, circuitries, and methods are disclosed for identifying, within a call instruction, context registers to be stored prior to a jump to another subroutine. In one example, a method includes receiving, while executing a first subroutine, a call instruction that includes a first opcode and identifies a first target address, wherein the first target address stores instructions for performing a second subroutine. A first set of context registers identified by the call instruction is determined, and the content of the first set of context registers is stored in first memory allocated for context storage for the first subroutine prior to executing the instructions stored at the first target address.
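The key idea, that the call instruction itself names which registers to save, can be simulated in a toy model (this is not real ISA semantics; the function and register names are invented): only the identified context set is spilled before the jump, and the matching return restores exactly that set.

```python
def call_with_context(regs, context_ids, stack, subroutine):
    """Save only the registers identified by the call, run the callee, restore them."""
    stack.append({r: regs[r] for r in context_ids})  # configurable save set
    subroutine(regs)                                 # callee may clobber anything
    regs.update(stack.pop())                         # return: restore the saved set

regs = {"r0": 1, "r1": 2, "r2": 3}
stack = []

def callee(r):
    r["r0"], r["r2"] = 99, 99  # clobbers r0 and r2

call_with_context(regs, context_ids=("r0", "r1"), stack=stack, subroutine=callee)
assert regs == {"r0": 1, "r1": 2, "r2": 99}  # r0 restored; r2 was not in the save set
```

The payoff the abstract implies: registers outside the named set need no save/restore traffic at all.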
Software Control Techniques for Graphics Hardware that Supports Logical Slots
Disclosed embodiments relate to software control of graphics hardware that supports logical slots. In some embodiments, a GPU includes circuitry that implements a plurality of logical slots and a set of graphics processor sub-units that each implement multiple distributed hardware slots. Control circuitry may determine mappings between logical slots and distributed hardware slots for different sets of graphics work. Various mapping aspects may be software-controlled. For example, software may specify one or more of the following: priority information for a set of graphics work, to retain the mapping after completion of the work, a distribution rule, a target group of sub-units, a sub-unit mask, a scheduling policy, to reclaim hardware slots from another logical slot, etc. Software may also query status of the work.
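One of the software-controlled mapping aspects listed above, the sub-unit mask, can be sketched as follows (the data layout, the greedy policy, and all names are assumptions, not the disclosed hardware's behavior): a logical slot may only claim distributed hardware slots on sub-units whose mask bit is set.

```python
def map_logical_slot(free_slots, sub_unit_mask, needed):
    """Greedily claim free hardware slots, but only on masked-in sub-units."""
    mapping = []
    for sub_unit, slot in free_slots:
        if sub_unit_mask & (1 << sub_unit) and len(mapping) < needed:
            mapping.append((sub_unit, slot))
    return mapping

# Free (sub_unit, hardware_slot) pairs across four graphics sub-units.
free = [(0, 0), (1, 0), (2, 0), (3, 0), (1, 1)]
# Mask 0b0010: software restricts this logical slot to sub-unit 1 only.
assert map_logical_slot(free, 0b0010, needed=2) == [(1, 0), (1, 1)]
```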
WORKLOAD PERFORMANCE PREDICTION AND REAL-TIME COMPUTE RESOURCE RECOMMENDATION FOR A WORKLOAD USING PLATFORM STATE SAMPLING
Embodiments described herein are generally directed to improving predictions regarding workload performance to facilitate dynamic auto device selection. In an example, based on telemetry samples collected from a computer system in real-time and indicative of a state of the computer system, one or more workload performance prediction models are built or updated for a heterogeneous set of computer resources of the computer system with reference to one or more optimization goals. At a time of execution of a workload, a particular computer resource of the heterogeneous set of computer resources on which to dispatch the workload is dynamically determined by: (i) generating multiple predicted performance scores each corresponding to one of the computer resources based on the state of the computer system and the one or more workload performance prediction models; and (ii) selecting the particular computer resource based on the predicted performance scores.
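The two-step dispatch decision, score every resource, then select the best scorer, can be sketched with toy models standing in for the trained performance predictors (the model behavior, state fields, and names are all invented for illustration):

```python
def dispatch(workload, resources, models, state):
    """Score each resource for this workload and platform state; pick the best."""
    scores = {r: models[r](workload, state) for r in resources}
    return max(scores, key=scores.get), scores

# Toy prediction models: the GPU scores well unless the sampled state says busy.
models = {
    "cpu": lambda w, s: 0.5,
    "gpu": lambda w, s: 0.1 if s["gpu_busy"] else 0.9,
}
best, _ = dispatch("matmul", ["cpu", "gpu"], models, {"gpu_busy": False})
assert best == "gpu"
best, _ = dispatch("matmul", ["cpu", "gpu"], models, {"gpu_busy": True})
assert best == "cpu"
```

Because the state is sampled in real time, the same workload can land on different resources at different moments, which is the point of the dynamic selection.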
APPLICATION USER JOURNEY MANAGEMENT
An application activation method includes enabling an activation of one or more applications, including an activation of a first application, on a computing device. A first plurality of interactions of a user with the one or more applications on the computing device are detected. A first offer to renew the activation of the first application is generated based on the first plurality of interactions of the user. The first offer is provided to the user via the computing device. An acceptance of the first offer is received from the user, and the activation of the first application is renewed responsive to receiving the acceptance of the first offer.