G06F9/5022

Techniques for preventing concurrent execution of declarative infrastructure provisioners

Techniques for preventing concurrent execution of an infrastructure orchestration service are described. Worker nodes can receive instructions, or tasks, for deploying infrastructure resources and can provide heartbeat notifications to a scheduler node; these heartbeats can also be treated as a lease. A signing proxy can track the heartbeat notifications sent from the worker nodes to the scheduler node. The signing proxy can receive requests corresponding to performance of the tasks assigned to the worker nodes and can determine whether the lease between each worker node and the scheduler is valid. If the lease is valid, the signing proxy may make a call to services on behalf of the worker node; if the lease is not valid, the signing proxy may not make that call. Instead, the signing proxy may cut off all outgoing network traffic, blocking the worker node's access to services.
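The lease-gating behavior described above can be sketched as follows. This is an illustrative Python model, not the patented implementation: the class name `SigningProxy`, the lease TTL, and the `sign_and_send` callback are all assumptions introduced for the example.

```python
import time

LEASE_TTL_SECONDS = 30  # hypothetical lease duration renewed by each heartbeat


class SigningProxy:
    """Sketch of a signing proxy that gates outbound calls on lease validity."""

    def __init__(self, lease_ttl=LEASE_TTL_SECONDS, clock=time.monotonic):
        self._last_heartbeat = {}  # worker_id -> timestamp of last heartbeat
        self._lease_ttl = lease_ttl
        self._clock = clock  # injectable clock for testing

    def record_heartbeat(self, worker_id):
        # Each heartbeat a worker sends to the scheduler renews its lease.
        self._last_heartbeat[worker_id] = self._clock()

    def lease_is_valid(self, worker_id):
        last = self._last_heartbeat.get(worker_id)
        return last is not None and (self._clock() - last) <= self._lease_ttl

    def forward_request(self, worker_id, request, sign_and_send):
        # Only sign and forward the call if the worker still holds a valid
        # lease; otherwise drop it, cutting off the worker's outbound traffic.
        if self.lease_is_valid(worker_id):
            return sign_and_send(request)
        return None  # blocked: lease expired or never established
```

A stale worker thus loses access automatically: once heartbeats stop, every subsequent request falls through to the blocked branch without any explicit revocation message.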

MEMORY ALLOCATION USING GRAPHS

Apparatuses, systems, and techniques to generate one or more graph code nodes to allocate memory. In at least one embodiment, one or more graph code nodes to allocate memory are generated, based on, for example, CUDA or other parallel computing platform code.

MEMORY DEALLOCATION USING GRAPHS

Apparatuses, systems, and techniques to generate one or more graph code nodes to deallocate memory. In at least one embodiment, one or more graph code nodes to deallocate memory are generated, based on, for example, CUDA or other parallel computing platform code.
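The two one-sentence abstracts above describe graph code nodes that allocate and deallocate memory when the graph runs (in CUDA these correspond to the `cudaGraphAddMemAllocNode` and `cudaGraphAddMemFreeNode` runtime APIs). A minimal, platform-neutral sketch of the idea, with all names invented for illustration:

```python
class Graph:
    """Minimal executable graph; nodes run in insertion (dependency) order.
    A real CUDA graph tracks explicit dependency edges between nodes."""

    def __init__(self):
        self._nodes = []

    def add_node(self, fn):
        self._nodes.append(fn)
        return len(self._nodes) - 1  # node handle

    def launch(self, env):
        # Launching the graph executes every node; allocation and
        # deallocation happen at launch time, not at graph-build time.
        for fn in self._nodes:
            fn(env)


def add_mem_alloc_node(graph, name, nbytes):
    # Graph code node that allocates a buffer when the graph is launched
    # (analogous in spirit to cudaGraphAddMemAllocNode).
    def alloc(env):
        env[name] = bytearray(nbytes)
    return graph.add_node(alloc)


def add_mem_free_node(graph, name):
    # Graph code node that releases the buffer after downstream work
    # (analogous in spirit to cudaGraphAddMemFreeNode).
    def free(env):
        del env[name]
    return graph.add_node(free)
```

Embedding allocation and deallocation as graph nodes lets the buffer's lifetime be replayed with each graph launch instead of being managed by separate host-side calls.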

Display system using system level resources to calculate compensation parameters for a display module in a portable device
11545084 · 2023-01-03

A system including a display module and a system module. The display module is integrated in a portable device with a display communicatively coupled to one or more of a driver unit, a measurement unit, a timing controller, a compensation sub-module, and a display memory unit. The system module is communicatively coupled to the display module and has one or more interface modules, one or more processing units, and one or more system memory units. At least one of the processing units and the system memory units is programmable to calculate new compensation parameters for the display module during an offline operation.

Access control in an observe-notify network using callback
11546761 · 2023-01-03

Various systems and methods for implementing observe-notify callback context automation in a connected device framework are described herein. In an example, the techniques for context automation may include: expansion of RESTful permissions to include an OBSERVE command (e.g., as part of a CRUDON (Create, Retrieve, Update, Delete, Observe, Notify) command definition); configuration of a callback resource to implement the OBSERVE command; access control policies to implement the OBSERVE command; and OBSERVE registration events to be monitored within an access management service.
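The CRUDON expansion above can be sketched as a resource whose access-control list carries an OBSERVE bit and whose observers register callback resources. This is a toy Python model; the permission encoding, class names, and method names are assumptions, not the framework's actual API:

```python
# CRUDON permission bits (Create, Retrieve, Update, Delete, Observe, Notify)
CREATE, RETRIEVE, UPDATE, DELETE, OBSERVE, NOTIFY = (1 << i for i in range(6))


class ObservableResource:
    """Sketch of a RESTful resource with an OBSERVE permission and callbacks."""

    def __init__(self, acl):
        self._acl = acl          # client_id -> CRUDON permission bitmask
        self._callbacks = {}     # client_id -> callback resource (a callable)
        self.state = None

    def observe(self, client_id, callback):
        # Registration is itself an access-controlled event; an access
        # management service would monitor and audit this check.
        if not self._acl.get(client_id, 0) & OBSERVE:
            raise PermissionError("OBSERVE not granted for %s" % client_id)
        self._callbacks[client_id] = callback

    def update(self, client_id, value):
        if not self._acl.get(client_id, 0) & UPDATE:
            raise PermissionError("UPDATE not granted for %s" % client_id)
        self.state = value
        # Notify every registered observer through its callback resource.
        for cb in self._callbacks.values():
            cb(value)
```

The key point the abstract makes is that OBSERVE is a first-class permission alongside CRUD, so a client can be allowed to update without being allowed to watch, and vice versa.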

Controlling mark positions in documents

A document is represented as a node tree in a document processing system. Edits to a node are represented in a change record that has a one-way link to the node. A text mark has a one-way link to the change record, and the text mark deletes that link once the changes represented by the change record are reflected in the mark. A memory management system releases the memory allocated to the change record when no other object links to it.
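Because the links are strictly one-way, dropping the mark's link is enough for an ordinary reference-counting memory manager to reclaim the change record. A small Python sketch (class and field names are invented for illustration):

```python
class Node:
    """A node in the document's node tree."""
    def __init__(self, text):
        self.text = text


class ChangeRecord:
    """Records an edit via a one-way link to the node; nothing links back."""
    def __init__(self, node, inserted_chars):
        self.node = node
        self.inserted_chars = inserted_chars


class TextMark:
    """Holds a one-way link to a change record until the edit is applied."""
    def __init__(self, position):
        self.position = position
        self.pending = None  # one-way link to a ChangeRecord, if any

    def apply_pending(self):
        # Reflect the recorded change in the mark's position, then drop the
        # link so the memory manager can reclaim the unreferenced record.
        if self.pending is not None:
            self.position += self.pending.inserted_chars
            self.pending = None
```

Since neither the node nor the change record points at the mark, there are no reference cycles, and the record becomes collectible the moment the mark forgets it.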

INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, AND METHOD FOR PROCESSING INFORMATION
20220413890 · 2022-12-29

An apparatus includes: a storing device including regions allocated to virtual machines (VMs); a processing device executing the VMs; a relay device executing a relaying process; and a transfer processor transferring data between the regions. The processing device stores a first number, associated with a used entry among first entries allocated to the transfer processor, and a second number, associated with a used entry among second entries allocated to the relay device, the first and second numbers being among the numbers associated with entries of a reception buffer in a first region allocated to a first VM. The processing device sets the smaller of the first and second numbers as the number, set in the first region, that represents an entry of data read from the reception buffer.
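The final step reduces to a minimum: the reception buffer can only be read up to the progress of the slower of the two consumers. A trivial sketch, ignoring ring-buffer wraparound and the surrounding hardware (the function name is invented):

```python
def readable_entry_number(transfer_used, relay_used):
    # transfer_used: used-entry number among entries allocated to the
    #                transfer processor (the "first number")
    # relay_used:    used-entry number among entries allocated to the
    #                relay device (the "second number")
    # The number set in the first VM's region, representing the entry of
    # data read from the reception buffer, is the smaller of the two.
    return min(transfer_used, relay_used)
```

Taking the minimum guarantees that neither the transfer processor nor the relay device sees its in-flight entries overwritten.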

SYSTEM FOR IMPLEMENTING A TRANSACTIONAL TIMELOCK MECHANISM IN A DISTRIBUTED LEDGER

Systems, computer program products, and methods are described herein for implementing a transactional timelock mechanism in a distributed ledger. The present invention is configured to receive, from a computing device, a transaction to be registered in a ledger record associated with a distributed ledger at a future time; retrieve a required amount of resources for the one or more validation nodes to register the transaction in the ledger record; submit a validation request for the transaction to a memory pool associated with the one or more validation nodes with a proposed amount of resources less than the required amount of resources; continuously monitor the transaction in the memory pool until the future time; and at the future time, automatically re-submit the validation request for the transaction to the memory pool with the required amount of resources.
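The underfund-then-refund sequence above can be modeled with a toy memory pool whose validators only register transactions that carry the required resources. This is a simplified Python sketch; the pool semantics, fee arithmetic, and names are all assumptions:

```python
class MemoryPool:
    """Toy memory pool: validation nodes register a transaction only when
    the proposed resources meet the required amount (a sketch)."""

    def __init__(self, required_resources):
        self.required = required_resources
        self.pending = {}   # tx_id -> proposed resources
        self.ledger = []    # transactions registered in the ledger record

    def submit(self, tx_id, proposed_resources):
        # Re-submission with a new amount replaces the pending request.
        self.pending[tx_id] = proposed_resources

    def validate(self):
        # Validators sweep the pool, registering adequately funded requests.
        for tx_id, proposed in list(self.pending.items()):
            if proposed >= self.required:
                self.ledger.append(tx_id)
                del self.pending[tx_id]


def timelock_submit(pool, tx_id, required, now, future_time):
    """Before the future time, park the transaction by underfunding it;
    at the future time, re-submit with the required resources."""
    if now < future_time:
        pool.submit(tx_id, required - 1)  # underfunded: stays pending
    else:
        pool.submit(tx_id, required)      # fully funded: eligible to register
```

The timelock thus needs no protocol changes: the transaction sits visibly in the pool the whole time, and only the re-submission at the target time makes it eligible for registration.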

DYNAMIC CLUSTERING OF EDGE CLUSTER RESOURCES
20220413925 · 2022-12-29

Methods, computer program products, and/or systems are provided that perform the following operations: identifying, in an environment that includes a plurality of edge clusters of edge nodes, a first edge cluster having a resource gap; broadcasting a resource requirement of the first edge cluster to other edge clusters in the plurality; obtaining resource commitments from one or more of the other edge clusters; selecting edge cluster resources from the one or more of the other edge clusters based, at least in part, on the resource commitments; and creating a new cluster including the first edge cluster and the selected edge cluster resources.
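The broadcast/commit/select/create flow above can be condensed into one function. This Python sketch folds the broadcast and commitment exchange into a local loop and assumes a greedy largest-commitment-first selection, which the abstract does not specify:

```python
def fill_resource_gap(clusters, requester):
    """Sketch of the dynamic-clustering flow: compute the requester's
    resource gap, gather commitments from the other edge clusters, select
    enough committed resources, and form a new cluster."""
    gap = requester["required"] - requester["available"]
    if gap <= 0:
        return None  # no resource gap, nothing to do

    # "Broadcast" the requirement and collect resource commitments.
    commitments = [
        (name, min(info["spare"], gap))
        for name, info in clusters.items()
        if name != requester["name"] and info["spare"] > 0
    ]

    # Select committed resources until the gap is covered (greedy choice).
    selected, covered = [], 0
    for name, amount in sorted(commitments, key=lambda c: -c[1]):
        if covered >= gap:
            break
        take = min(amount, gap - covered)
        selected.append((name, take))
        covered += take

    if covered < gap:
        return None  # commitments were insufficient to close the gap

    # Create the new cluster from the requester plus the selected resources.
    return {"members": [requester["name"]] + [n for n, _ in selected],
            "borrowed": dict(selected)}
```

A real implementation would carry this out over the network and would likely weigh latency and locality, not just committed quantity, when selecting among clusters.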

DATA CURATION WITH CAPACITY SCALING

A method may include allocating, based on a first load requirement of a first tenant, a first bin having a fixed capacity for handling the first load requirement of the first tenant. In response to the first load requirement of the first tenant exceeding a first threshold of the fixed capacity of the first bin, the method may include packing a second bin allocated to handle a second load requirement of a second tenant. The second bin may be packed by transferring, to the second bin, the first load requirement of the first tenant based on the transfer not exceeding the first threshold of the fixed capacity of the second bin. In response to the transfer exceeding the first threshold of the fixed capacity of the second bin, the method may include allocating a third bin to handle the first load requirement of the first tenant.
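The threshold-driven packing rule can be sketched as a first-fit placement that falls back to allocating a fresh bin. The 80% threshold and all names below are assumptions for illustration:

```python
THRESHOLD = 0.8  # hypothetical fraction of fixed capacity that triggers packing


class Bin:
    """A fixed-capacity bin holding per-tenant loads."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.loads = {}  # tenant -> load

    def total(self):
        return sum(self.loads.values())

    def over_threshold(self, extra=0):
        # True if adding `extra` load would push the bin past the threshold.
        return self.total() + extra > THRESHOLD * self.capacity


def place_load(bins, tenant, load, capacity):
    """Sketch of threshold-driven packing: transfer the load into an
    existing bin when it stays under the threshold, otherwise allocate
    a new bin (capacity scaling)."""
    for b in bins:
        if not b.over_threshold(extra=load):
            b.loads[tenant] = b.loads.get(tenant, 0) + load
            return b
    new_bin = Bin(capacity)  # no bin can absorb the load: scale out
    new_bin.loads[tenant] = load
    bins.append(new_bin)
    return new_bin
```

Checking against the threshold rather than the full capacity leaves headroom in every bin, so a tenant's load can grow somewhat before another transfer or allocation is forced.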