G06F9/46

A SYSTEM AND A METHOD FOR PERFORMING ATOMIC SWAP TRANSACTIONS OF DIGITAL RECORDS AMONG A PLURALITY OF DISTRIBUTED DATABASES
20230050160 · 2023-02-16

The present invention relates to a system and a method for performing exchanges of digital data records among a plurality of distributed databases. More specifically, the present invention relates to a technology agnostic atomic swap system platform configured to perform atomic swap transactions of digital data records among a plurality of distributed ledger technology platforms.
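
The abstract does not detail the platform's protocol, but cross-ledger atomic swaps are commonly built on a hash time-locked pattern. The following is a minimal Python sketch of that general pattern, not the patented system; all names, the record model, and the timeout values are hypothetical.

    import hashlib
    import time

    class HashTimeLock:
        """Escrows a record until the preimage of `digest` is revealed,
        or releases it back to the owner after `deadline`."""
        def __init__(self, record, digest, deadline):
            self.record = record
            self.digest = digest
            self.deadline = deadline
            self.claimed = False

        def claim(self, preimage):
            if hashlib.sha256(preimage).hexdigest() != self.digest:
                raise ValueError("wrong preimage")
            self.claimed = True
            return self.record

        def refund(self):
            if time.time() < self.deadline or self.claimed:
                raise RuntimeError("refund not yet available")
            return self.record

    # Alice locks a record on ledger A; Bob locks one on ledger B under the
    # same digest. Claiming on one ledger reveals the secret needed to claim
    # on the other, so the swap completes on both ledgers or on neither.
    secret = b"alice-secret"
    digest = hashlib.sha256(secret).hexdigest()
    lock_a = HashTimeLock("record-on-ledger-A", digest, time.time() + 3600)
    lock_b = HashTimeLock("record-on-ledger-B", digest, time.time() + 1800)
    print(lock_b.claim(secret))   # Alice claims on B, revealing the secret
    print(lock_a.claim(secret))   # Bob reuses the revealed secret on A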

CALL AND RETURN INSTRUCTIONS FOR CONFIGURABLE REGISTER CONTEXT SAVE AND RESTORE
20230051855 · 2023-02-16

Systems, devices, circuitries, and methods are disclosed for identifying, within a call instruction, context registers to be stored prior to a jump to another subroutine. In one example, a method includes receiving, while executing a first subroutine, a call instruction that includes a first opcode and identifies a first target address, wherein the first target address stores instructions for performing a second subroutine. A first set of context registers identified by the call instruction is determined, and the content of the first set of context registers is stored in first memory allocated for context storage for the first subroutine prior to executing the instructions stored at the first target address.
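
One way to picture a call instruction that names its own save set is a register mask carried in the encoding. The Python sketch below models that idea with a hypothetical 8-register file and save stack; the mask layout and memory model are illustrative assumptions, not the patent's encoding.

    REGS = 8

    def call(regfile, save_stack, target, reg_mask):
        """Save only the registers selected by reg_mask, then jump to target."""
        saved = {i: regfile[i] for i in range(REGS) if reg_mask & (1 << i)}
        save_stack.append(saved)          # memory allocated for context storage
        return target                     # new program counter

    def ret(regfile, save_stack, return_addr):
        """Restore the registers saved by the matching call, then return."""
        for i, value in save_stack.pop().items():
            regfile[i] = value
        return return_addr

    regfile = list(range(REGS))
    stack = []
    pc = call(regfile, stack, target=0x400, reg_mask=0b00000110)  # save r1, r2
    regfile[1] = regfile[2] = -1          # callee clobbers r1 and r2
    pc = ret(regfile, stack, return_addr=0x104)
    assert regfile[1] == 1 and regfile[2] == 2   # saved context restored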

Techniques for reconfiguring partitions in a parallel processing system

A parallel processing unit (PPU) can be divided into partitions. Each partition is configured to operate similarly to how the entire PPU operates. A given partition includes a subset of the computational and memory resources associated with the entire PPU. Software that executes on a CPU partitions the PPU for an admin user. A guest user is assigned to a partition and can perform processing tasks within that partition in isolation from any other guest users assigned to any other partitions. Because the PPU can be divided into isolated partitions, multiple CPU processes can efficiently utilize PPU resources.
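
The following Python sketch illustrates the general idea of carving a PPU's compute and memory resources into isolated per-guest partitions. The resource counts, class names, and API are hypothetical stand-ins, not the actual partitioning interface.

    class PPU:
        def __init__(self, num_sms=8, num_mem_slices=8):
            self.free_sms = list(range(num_sms))
            self.free_mem = list(range(num_mem_slices))
            self.partitions = {}

        def create_partition(self, guest, sms, mem_slices):
            """Admin-side call: carve out a subset of compute and memory."""
            if sms > len(self.free_sms) or mem_slices > len(self.free_mem):
                raise RuntimeError("insufficient free resources")
            self.partitions[guest] = {
                "sms": [self.free_sms.pop() for _ in range(sms)],
                "mem": [self.free_mem.pop() for _ in range(mem_slices)],
            }

        def submit(self, guest, task):
            """Guest work runs only on its own partition's resources."""
            part = self.partitions[guest]
            return f"{task} on SMs {part['sms']} / mem {part['mem']}"

    ppu = PPU()
    ppu.create_partition("guest-a", sms=4, mem_slices=4)
    ppu.create_partition("guest-b", sms=2, mem_slices=2)
    print(ppu.submit("guest-a", "matmul"))   # isolated from guest-b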

Securing an injection of a workload into a virtual network hosted by a cloud-based platform

The disclosed system implements techniques to secure communications for injecting a workload (e.g., a container) into a virtual network hosted by a cloud-based platform. Based on a delegation instruction received from a tenant, a virtual network of the tenant can connect to and execute a workload via a virtual machine that is part of a virtual network belonging to a resource provider. To secure calls and authorize access to the tenant's virtual network, authentication information provided with a call from the virtual network of the resource provider may need to match authorization information made available via a publication service of the cloud-based platform. Additionally or alternatively, an identifier of a network interface card (NIC) used to make a call may need to correspond to a registered name of the resource provider for the call to be authorized. These checks provide increased security by preventing unauthorized calls to the tenant's virtual network.
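
A minimal Python sketch of the two checks the abstract describes: the call's authentication information must match what the platform's publication service advertises, and the calling NIC must map to the registered resource provider. The token scheme, registry structures, and all names here are hypothetical.

    PUBLICATION_SERVICE = {"tenant-vnet-1": "token-abc123"}   # published auth info
    NIC_REGISTRY = {"nic-007": "resource-provider-x"}          # NIC -> provider

    def authorize_call(target_vnet, presented_token, nic_id, expected_provider):
        if PUBLICATION_SERVICE.get(target_vnet) != presented_token:
            return False   # authentication info does not match published info
        if NIC_REGISTRY.get(nic_id) != expected_provider:
            return False   # NIC is not registered to the resource provider
        return True

    assert authorize_call("tenant-vnet-1", "token-abc123", "nic-007",
                          "resource-provider-x")
    assert not authorize_call("tenant-vnet-1", "stolen-token", "nic-007",
                              "resource-provider-x")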

Unified operating system for distributed computing
11579848 · 2023-02-14

In some embodiments, a real-time event is detected and context is determined based on the real-time event. An application model is fetched based on the context and metadata associated with the real-time event, the application model referencing a micro-function and including pre-condition and post-condition descriptors. A graph is constructed based on the micro-function. The micro-function is transformed into micro-capabilities by determining a computing resource for execution of a micro-capability through matching of the micro-capability's pre-conditions and post-conditions, and by enabling execution and configuration of the micro-capability on the computing resource through access, provided in a target environment, to an API capable of calling the micro-capability to configure and execute it. A request is received from the target environment to execute and configure the micro-capability on the computing resource. The micro-capability is executed and configured on the computing resource, and an output of the micro-capability is provided to the target environment.
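
The core placement step, matching a micro-function's pre-conditions against candidate computing resources, can be sketched as a simple capability-set check. The model format, resource names, and matching rule below are hypothetical illustrations of that step.

    micro_function = {
        "name": "resize_image",
        "pre": {"gpu", "image_codec"},     # pre-condition descriptors
        "post": {"thumbnail"},             # post-condition descriptors
    }
    resources = {
        "edge-node-1": {"provides": {"cpu"}},
        "gpu-node-2": {"provides": {"gpu", "image_codec"}},
    }

    def place(mf, resources):
        """Pick a resource whose capabilities satisfy the pre-conditions."""
        for name, res in resources.items():
            if mf["pre"] <= res["provides"]:
                return name
        raise LookupError("no resource satisfies pre-conditions")

    node = place(micro_function, resources)
    print(f"execute {micro_function['name']} on {node}, "
          f"yielding {micro_function['post']}")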

Software defined automation system and architecture

Embodiments of a software defined automation (SDA) system provide a reference architecture for designing, managing, and maintaining a highly available, scalable, and flexible automation system. In some embodiments, an SDA system can include a localized subsystem including a system controller node and multiple compute nodes. The multiple compute nodes can be communicatively coupled to the system controller node via a first communication network. The system controller node can manage the multiple compute nodes and virtualization of a control system on a compute node via the first communication network. The virtualized control system includes virtualized control system elements connected to a virtual network that is connected to a second communication network to enable the virtualized control system elements to control a physical control system element via the second communication network connected to the virtual network.
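
The two-network layout in the abstract, a controller managing compute nodes over one network while virtualized control elements reach physical elements over a second, can be sketched as below. The class names and element names are hypothetical, chosen only to mirror the abstract's roles.

    class ComputeNode:
        def __init__(self, name):
            self.name = name
            self.virtual_elements = []

    class SystemController:
        def __init__(self):
            self.compute_nodes = {}        # reached via network 1 (management)

        def register(self, node):
            self.compute_nodes[node.name] = node

        def deploy_virtual_controller(self, node_name, element):
            # Virtualize a control system element on the chosen compute node.
            self.compute_nodes[node_name].virtual_elements.append(element)

    def actuate(virtual_element, physical_element):
        # Control traffic flows over network 2, from virtual to physical.
        return f"{virtual_element} -> {physical_element} (network 2)"

    ctrl = SystemController()
    node = ComputeNode("compute-1")
    ctrl.register(node)
    ctrl.deploy_virtual_controller("compute-1", "vPLC-1")
    print(actuate("vPLC-1", "conveyor-drive-3"))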

Tiered backup archival in multi-tenant cloud computing system

A system and method for backing up workloads for multiple tenants of a cloud computing system are disclosed. A method of backing up workloads for multiple tenants of a cloud computing system includes triggering an archival process according to an archival policy set by a tenant, and executing the archival process by reading backup data of the tenant stored in a backup storage device of the cloud computing system, transmitting the backup data to an archival store designated in the archival policy, and then deleting or invalidating the backup data stored in the backup storage device.
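
The archival flow reduces to three steps: read the tenant's backup data, ship it to the store named in the tenant's policy, then invalidate the original. A minimal Python sketch, with the stores modeled as dicts and all names hypothetical:

    backup_store = {"tenant-a": b"backup-bytes"}      # hot backup tier
    archival_stores = {"cold-tier": {}}               # designated archives

    def run_archival(tenant, policy):
        data = backup_store[tenant]                       # read backup data
        archival_stores[policy["store"]][tenant] = data   # transmit to archive
        del backup_store[tenant]                          # delete/invalidate

    run_archival("tenant-a", {"store": "cold-tier"})
    assert "tenant-a" not in backup_store
    assert archival_stores["cold-tier"]["tenant-a"] == b"backup-bytes"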

VGPU scheduling policy-aware migration
11579942 · 2023-02-14

Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current vGPU profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected whose vGPU scheduling policy is identified as the best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU.
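
A minimal Python sketch of the selection step: filter GPUs compatible with the VM's vGPU profile, then rank them by a matching order derived from the VM's current scheduling policy. The profile strings, policy names, and matching-order table are hypothetical.

    gpus = [
        {"id": "gpu-0", "profiles": {"grid_p40-4q"}, "policy": "best_effort"},
        {"id": "gpu-1", "profiles": {"grid_p40-4q"}, "policy": "fixed_share"},
        {"id": "gpu-2", "profiles": {"grid_p40-8q"}, "policy": "equal_share"},
    ]
    MATCHING_ORDER = {   # preferred destination policies per current policy
        "fixed_share": ["fixed_share", "equal_share", "best_effort"],
    }

    def select_destination(vm_profile, vm_policy):
        compatible = [g for g in gpus if vm_profile in g["profiles"]]
        order = MATCHING_ORDER[vm_policy]
        # Rank compatible GPUs by their policy's position in the order.
        return min(compatible, key=lambda g: order.index(g["policy"]))

    dest = select_destination("grid_p40-4q", "fixed_share")
    print(f"migrate VM to {dest['id']}")   # gpu-1: exact policy match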

Methods and apparatuses for generating redo records for cloud-based database

Methods and apparatuses in a cloud-based database management system are described. Data in a database are stored in a plurality of pages in a page store of the database. A plurality of redo log records are received to be applied to the database. The redo log records within a predefined boundary are parsed to determine, for each given redo log record, a corresponding page to which the given redo log record is to be applied. The redo log records are reordered by corresponding page. The reordered redo log records are stored to be applied to the page store of the database.
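
A minimal Python sketch of the reordering step: within one parse boundary, group redo records by target page while preserving log-sequence order inside each page. The record format is a hypothetical stand-in.

    from collections import defaultdict

    log_batch = [                     # redo records within one boundary
        {"lsn": 1, "page": 7, "op": "update"},
        {"lsn": 2, "page": 3, "op": "insert"},
        {"lsn": 3, "page": 7, "op": "update"},
    ]

    def reorder_by_page(records):
        """Group records by page, preserving LSN order within each page."""
        by_page = defaultdict(list)
        for rec in records:           # parse: find each record's target page
            by_page[rec["page"]].append(rec)
        # Emit page by page so each page's records can be applied together.
        return [rec for page in sorted(by_page) for rec in by_page[page]]

    for rec in reorder_by_page(log_batch):
        print(rec)                    # page 3 first, then page 7's two records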

Merging scaled-down container clusters using vitality metrics
11579935 · 2023-02-14

A system for container migration includes containers running instances of an application on a cluster, an orchestrator with a controller, a memory, and a processor in communication with the memory. The processor executes to monitor a vitality metric of the application. The vitality metric indicates whether the application is in a live state or a dead state. Additionally, horizontal scaling for the application is disabled and the application is scaled down until the vitality metric indicates that the application is in the dead state. Responsive to the vitality metric indicating that the application is in the dead state, the application is scaled up until the vitality metric indicates that the application is in the live state. Then, responsive to the vitality metric transitioning from the dead state to the live state, the application is migrated to a different cluster while horizontal scaling of the application remains disabled.
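
The scale-down/scale-up probe amounts to finding the application's smallest live footprint before migrating it. A minimal Python sketch, assuming a hypothetical replica-count threshold as the vitality metric and a stand-in cluster API:

    def is_live(replicas, min_live=2):
        return replicas >= min_live            # stand-in vitality metric

    def find_minimal_footprint(replicas):
        # Horizontal autoscaling is assumed disabled during the probe.
        while is_live(replicas):               # scale down until dead
            replicas -= 1
        while not is_live(replicas):           # scale up until live again
            replicas += 1
        return replicas                        # smallest live footprint

    def migrate(app, replicas, target_cluster):
        return f"moved {app} ({replicas} replicas) to {target_cluster}"

    footprint = find_minimal_footprint(replicas=10)
    print(migrate("web-app", footprint, "cluster-b"))   # 2 replicas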