Patent classifications
G06F9/468
RULE-BASED APPLICATION ACCESS MANAGEMENT
A container that uses rules to intelligently manage access to protected resources includes an environment having a set of software and configurations to be managed. A rule engine, which executes the rules, may be called reactively when software accesses protected resources. The engine uses a combination of embedded and configurable rules. It may be desirable to assign and manage rules per process, per resource (e.g., file, registry, etc.), and per user. Access rules may be altitude-specific.
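The reactive rule-engine idea can be illustrated with a minimal sketch, not the patented implementation: rules are keyed per process, per resource, and per user, with configurable rules layered over embedded ones. All names and the default-allow policy are assumptions for illustration.

```python
class RuleEngine:
    """Consulted reactively whenever software touches a protected resource."""

    def __init__(self):
        self._rules = {}          # configurable: (process, resource, user) -> decision
        self._embedded = {("*", "/etc/passwd", "*"): "deny"}  # built-in rule

    def add_rule(self, process, resource, user, decision):
        self._rules[(process, resource, user)] = decision

    def check(self, process, resource, user):
        # Exact per-process/per-user rules are tried before wildcard rules;
        # configurable rules take precedence over embedded ones. Unmatched
        # accesses fall through to default-allow in this sketch.
        for key in ((process, resource, user), ("*", resource, "*")):
            if key in self._rules:
                return self._rules[key]
            if key in self._embedded:
                return self._embedded[key]
        return "allow"

engine = RuleEngine()
engine.add_rule("app.exe", "C:/secrets.txt", "alice", "deny")
```

A real engine would also carry the altitude-specific ordering the abstract mentions; here the two-tier lookup merely hints at that layering.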
Creation and execution of customized code for a data processing platform
A method of executing computer-readable code for interaction with one or more data resources on a data processing platform, the method performed using one or more processors, comprising: receiving a request message including an identifier identifying executable code stored in a data repository; determining, using the identifier, an execution environment of a plurality of stored execution environments mapped to the identified executable code, wherein determining the execution environment mapped to the identified executable code comprises: accessing mapping data identifying a mapping between the identifier and the execution environment of the plurality of stored execution environments, the mapping data including configuration data associated with the identifier, wherein the configuration data identifies one or more convention-based data libraries particular to the execution environment; configuring the determined execution environment to access the one or more convention-based data libraries during execution; executing the identified executable code using the determined execution environment; and passing requests made with the identified executable code to the one or more data resources via a proxy.
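The claimed flow can be sketched end to end under stated assumptions: an identifier in a request is looked up in mapping data, the mapped execution environment is configured with its convention-based libraries, and data-resource requests pass through a proxy. The mapping shape, identifier, and function names are all invented for illustration.

```python
MAPPING = {  # mapping data: identifier -> execution environment + configuration
    "job-42": {"env": "python3", "libraries": ["data_conventions_v1"]},
}

class Proxy:
    """Requests to data resources are routed through this proxy."""
    def __init__(self):
        self.calls = []
    def fetch(self, resource):
        self.calls.append(resource)        # record the proxied request
        return {"resource": resource, "rows": []}

def run(identifier, code, proxy):
    config = MAPPING[identifier]           # determine the mapped environment
    # Configure the environment to expose the convention-based libraries
    # and the proxied data access during execution.
    env = {"libraries": config["libraries"], "fetch": proxy.fetch}
    exec(code, env)                        # execute the identified code
    return env.get("result")

proxy = Proxy()
result = run("job-42", "result = fetch('sales_table')", proxy)
```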
Automatically determining flags for a command-line interface in a distributed computing environment
Flags for a command-line interface (CLI) can be automatically determined. In one example, a system can receive a user input through the CLI to manipulate an object in a computing environment. The user input can include a flag for setting a customizable parameter of the object to a particular value. The system can also receive definition data specifying one or more customizable parameters for the object. The system can then determine one or more available flags associated with the one or more customizable parameters specified in the definition data, where the available flag(s) are usable for configuring the one or more customizable parameters of the object. Based on the available flag(s), the system can determine if the flag in the user input is valid. If so, the system can manipulate the object in the computing environment such that the manipulated object has the particular value for the customizable parameter.
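A hedged sketch of the validation step: available flags are derived from the definition data's customizable parameters, a user-supplied flag is checked against them, and only a valid flag manipulates the object. The flag names, object shape, and `Deployment` example are hypothetical.

```python
def available_flags(definition):
    # Definition data lists customizable parameters; derive one flag each.
    return {"--" + param for param in definition["parameters"]}

def apply_flag(obj, definition, flag, value):
    # Reject flags that do not correspond to any customizable parameter.
    if flag not in available_flags(definition):
        raise ValueError(f"unknown flag: {flag}")
    obj[flag.lstrip("-")] = value          # manipulate the object
    return obj

definition = {"kind": "Deployment", "parameters": ["replicas", "image"]}
obj = apply_flag({}, definition, "--replicas", 3)
```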
SYSTEMS AND METHODS FOR QUEUE CONTROL BASED ON CLIENT-SPECIFIC PROTOCOLS
The present disclosure generally relates to controlling access to resources by selectively processing requests stored in a task queue to prioritize certain requests over others, thereby preventing automated scripts from accessing the resources. More specifically, the present disclosure relates to a normalization and prioritization system for controlling access to resources by queuing resource requests based on a client-defined normalization process that uses one or more data sources.
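The queuing behavior can be sketched with a priority queue and a client-defined normalization function; the scoring rules below (known accounts outrank anonymous traffic, automated requests are deprioritized) are invented stand-ins for the client-specific protocol.

```python
import heapq

def normalize(request, known_clients):
    # Client-defined normalization using an external data source
    # (here, a registry of known client accounts). Lower = higher priority.
    score = 0
    if request["client"] in known_clients:
        score -= 10
    if request.get("automated"):
        score += 100                       # deprioritize scripted requests
    return score

def drain(requests, known_clients):
    # Enqueue with an index tiebreaker, then serve in priority order.
    queue = [(normalize(r, known_clients), i, r) for i, r in enumerate(requests)]
    heapq.heapify(queue)
    order = []
    while queue:
        _, _, r = heapq.heappop(queue)
        order.append(r["client"])
    return order

reqs = [
    {"client": "bot-1", "automated": True},
    {"client": "acct-7"},
    {"client": "guest"},
]
served = drain(reqs, known_clients={"acct-7"})
```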
CONDITIONS-BASED CONTAINER ORCHESTRATION
A processor may identify one or more pieces of code in a container environment. The one or more pieces of code may adhere to respective agreements. The processor may generate respective digital twins associated with the respective agreements. The processor may analyze the digital twins for multifarious obligations. The processor may provide the one or more pieces of code to one or more specific containers. The providing of the one or more pieces of code may adhere to the multifarious obligations.
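A loose sketch of the described pipeline, with invented agreement fields and container capabilities: each piece of code gets a digital twin mirroring its agreement, and placement only succeeds into a container meeting every obligation.

```python
def make_twin(piece):
    # The digital twin here is simply a record mirroring the agreement.
    return {"name": piece["name"], "obligations": dict(piece["agreement"])}

def place(pieces, containers):
    placements = {}
    for piece in pieces:
        twin = make_twin(piece)
        for cname, caps in containers.items():
            # A container is eligible only if it satisfies every obligation.
            if all(caps.get(k) == v for k, v in twin["obligations"].items()):
                placements[piece["name"]] = cname
                break
    return placements

pieces = [{"name": "svc-a", "agreement": {"region": "eu", "gpu": False}}]
containers = {"c1": {"region": "us", "gpu": False},
              "c2": {"region": "eu", "gpu": False}}
assignment = place(pieces, containers)
```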
Matrix data broadcast architecture
Systems, apparatuses, and methods for efficient parallel execution of multiple work units in a processor by reducing a number of memory accesses are disclosed. A computing system includes a processor core with a parallel data architecture. The processor core executes a software application with matrix operations. The processor core supports the broadcast of shared data to multiple compute units of the processor core. A compiler or other code assigns thread groups to compute units based on detecting shared data among the compute units. Rather than send multiple read accesses to a memory subsystem for the shared data, the processor core generates a single access request. The single access request includes information to identify the multiple compute units for receiving the shared data when broadcast by the processor core.
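The coalescing step can be modeled as follows; the request format and the unit-to-address assignment are illustrative, not the hardware's actual encoding. Reads of the same matrix tile by multiple compute units collapse into one request that names every unit to broadcast to.

```python
def build_requests(assignments):
    # assignments: compute_unit -> address of the shared matrix tile it reads.
    by_address = {}
    for unit, address in assignments.items():
        by_address.setdefault(address, []).append(unit)
    # One access request per distinct address, identifying the multiple
    # compute units that receive the data when it is broadcast.
    return [{"address": addr, "broadcast_to": sorted(units)}
            for addr, units in sorted(by_address.items())]

# Units 0, 1, and 3 share one tile: four reads become two requests.
requests = build_requests({0: 0x1000, 1: 0x1000, 2: 0x2000, 3: 0x1000})
```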
METHOD AND SYSTEM FOR COLLECTIVELY-DETERMINING STACKABLE SYSTEM ROLES IN AN INFORMATION HANDLING SYSTEM ENVIRONMENT
A method for managing information handling systems includes initiating, by a stackable system role (SSR) manager of an information handling system of a set of information handling systems, a boot sequence, making a first determination that the boot sequence does not specify an SSR of the information handling system, and based on the first determination: performing a hardware evaluation to determine an SSR for the information handling system, broadcasting the SSR to the set of information handling systems, obtaining, in response to the broadcasting, SSR responses from each information handling system in the set of information handling systems, making a second determination, based on the SSR responses, that an SSR agreement between the set of information handling systems is obtained, based on the second determination, determining a final SSR, and continuing the boot sequence using the final SSR.
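The agreement step can be sketched as a toy protocol; the hardware-evaluation heuristic and message shapes below are assumptions for illustration only.

```python
def evaluate_hardware(hw):
    # Derive a proposed stackable system role from local hardware
    # (the disk-count heuristic is invented for this sketch).
    return "storage" if hw["disks"] >= 4 else "compute"

def boot(hw, peers, configured_role=None):
    if configured_role:                      # boot sequence specifies an SSR
        return configured_role
    proposal = evaluate_hardware(hw)         # first determination + evaluation
    responses = [peer(proposal) for peer in peers]   # broadcast, collect responses
    if all(r == "agree" for r in responses):         # SSR agreement obtained
        return proposal                              # final SSR
    return "pending"                         # no agreement: role stays undecided

role = boot({"disks": 6}, peers=[lambda p: "agree", lambda p: "agree"])
```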
METHOD AND SYSTEM FOR A SEMI-DEMOCRATIC DETERMINATION OF STACKABLE SYSTEM ROLES IN AN INFORMATION HANDLING SYSTEM ENVIRONMENT
A method for managing information handling systems includes obtaining, by a committee-leading information handling system of a set of information handling systems, a set of hardware resource information entries from the set of information handling systems in a first committee, performing a stackable system role (SSR) entry analysis based on the set of hardware resource information entries, and determining a set of SSRs, wherein each SSR in the set of SSRs corresponds to an information handling system in the set of information handling systems in the first committee, providing the set of SSRs to a leading information handling system, obtaining a response from the leading information handling system, and based on the response, executing an SSR instruction on the committee-leading information handling system.
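A hypothetical sketch of the committee-leader analysis: hardware entries from committee members are ranked, roles assigned accordingly, and the result submitted to the leading system, whose response gates execution. The RAM-based ranking and role names are invented.

```python
def assign_roles(entries):
    # entries: system name -> hardware resource information (here, RAM in GB).
    ranked = sorted(entries, key=lambda name: entries[name], reverse=True)
    # SSR entry analysis: best-provisioned member leads, others follow.
    return {name: ("leader" if i == 0 else "worker")
            for i, name in enumerate(ranked)}

def committee_round(entries, leading_system):
    roles = assign_roles(entries)            # determine the set of SSRs
    response = leading_system(roles)         # provide SSRs, obtain response
    # Execute the SSR instruction only if the leading system approves.
    return roles if response == "approved" else None

roles = committee_round({"ihs-1": 64, "ihs-2": 256}, lambda r: "approved")
```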
ACCESS CONTROL CONFIGURATIONS FOR INTER-PROCESSOR COMMUNICATIONS
Methods, systems, and devices for access control configurations for inter-processor communications are described to support reconfiguration of a dynamic access control configuration at a device. For example, additional configuration fields may be added to existing access control rules of the device, where these additional fields may be configured by a processor sending information to a receiving processor via a shared memory resource or region of the device. The additional fields may include a read-only value that specifies a processor with exclusive write permission for a memory region of the shared memory. This value may indicate the sending processor of the memory region, and the value may be set by access control hardware when the additional field is changed. Other processors of the device may be prevented from writing to the memory region.
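A simplified software model of the extra field, with illustrative names: each shared-memory region records which processor holds exclusive write permission, and the access check rejects writes from any other processor. (In the described device this field is read-only and set by access control hardware; a plain assignment stands in for that here.)

```python
class SharedMemory:
    def __init__(self):
        self.regions = {}   # region -> {"writer": processor_id, "data": ...}

    def configure(self, region, writer):
        # The "writer" field models the read-only value set by access
        # control hardware when the configuration changes.
        self.regions[region] = {"writer": writer, "data": None}

    def write(self, region, processor, value):
        entry = self.regions[region]
        # Only the processor with exclusive write permission may write.
        if processor != entry["writer"]:
            raise PermissionError(f"CPU {processor} cannot write {region}")
        entry["data"] = value

mem = SharedMemory()
mem.configure("mailbox-0", writer=1)
mem.write("mailbox-0", processor=1, value="hello")
```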
Technologies for providing secure utilization of tenant keys
Technologies for providing secure utilization of tenant keys include a compute device. The compute device includes circuitry configured to obtain a tenant key. The circuitry is also configured to receive encrypted data associated with a tenant. The encrypted data defines an encrypted image that is executable by the compute device to perform a workload on behalf of the tenant in a virtualized environment. Further, the circuitry is configured to utilize the tenant key to decrypt the encrypted data and execute the workload without exposing the tenant key to a memory that is accessible to another workload associated with another tenant.
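The key-handling idea can be sketched in miniature: the tenant key decrypts the workload image inside one narrow scope and is dropped before anything else runs, so it is never left in memory reachable by another tenant's workload. The XOR "cipher" below is a toy stand-in for real cryptography, purely for illustration.

```python
def xor_bytes(data, key):
    # Toy symmetric "cipher": XOR each byte with the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def run_tenant_workload(encrypted_image, tenant_key):
    image = xor_bytes(encrypted_image, tenant_key)   # decrypt the image
    del tenant_key                                   # drop the key before executing
    return image.decode()                            # stand-in for execution

key = b"\x2a"
ciphertext = xor_bytes(b"print('ok')", key)          # the encrypted image
plaintext = run_tenant_workload(ciphertext, key)
```

In the disclosed compute device this isolation is enforced by hardware and the virtualized environment, not by scope discipline as in this sketch.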