
Systems and methods for policy execution processing

A system and method of processing instructions may comprise an application processing domain (APD) and a metadata processing domain (MTD). The APD may comprise an application processor executing instructions and providing related information to the MTD. The MTD may comprise a tag processing unit (TPU) having a cache of policy-based rules enforced by the MTD. The TPU may determine, based on policies being enforced and metadata tags and operands associated with the instructions, that the instructions are allowed to execute (i.e., are valid). The TPU may write, if the instructions are valid, the metadata tags to a queue. The queue may (i) receive operation output information from the application processing domain; (ii) receive, from the TPU, the metadata tags; (iii) output, responsive to receiving the metadata tags, resulting information indicative of the operation output information and the metadata tags; and (iv) permit the resulting information to be written to memory.
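Since the abstract walks through a concrete data flow (rule lookup, tag write, queued writeback), a minimal C sketch may help fix the moving parts. All names and types here (tpu_rule, tag_t, queue_entry) are illustrative assumptions, not taken from the patent:

```c
/* Hypothetical sketch of the TPU rule-cache check described above.
 * All names (tpu_rule, tag_t, queue_entry) are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t tag_t;

typedef struct {
    tag_t pc_tag;      /* tag on the program counter */
    tag_t instr_tag;   /* tag on the instruction */
    tag_t operand_tag; /* tag on the source operand */
    tag_t result_tag;  /* tag the policy assigns to the result */
    bool  allow;       /* policy verdict for this input combination */
} tpu_rule;

/* Queue entry pairing the APD's raw result with the MTD's tag. */
typedef struct {
    uint64_t result_value; /* operation output from the application domain */
    tag_t    result_tag;   /* metadata written by the TPU if the op is valid */
    bool     tagged;       /* set once the TPU has ruled the op valid */
} queue_entry;

/* Look up the (pc, instr, operand) tag tuple in a small rule cache. */
static const tpu_rule *tpu_lookup(const tpu_rule *cache, size_t n,
                                  tag_t pc, tag_t instr, tag_t operand) {
    for (size_t i = 0; i < n; i++) {
        if (cache[i].pc_tag == pc && cache[i].instr_tag == instr &&
            cache[i].operand_tag == operand)
            return &cache[i];
    }
    return NULL; /* miss: a real design would fall back to a policy handler */
}

int main(void) {
    tpu_rule cache[] = {
        /* pc, instr, operand, result, allow */
        { 1, 7, 3, 9, true },
    };
    queue_entry q = { .result_value = 42, .tagged = false };

    const tpu_rule *r = tpu_lookup(cache, 1, 1, 7, 3);
    if (r && r->allow) {
        q.result_tag = r->result_tag; /* TPU writes the tag to the queue */
        q.tagged = true;              /* result may now be written to memory */
    }
    printf("writeback %s\n", q.tagged ? "permitted" : "blocked");
    return 0;
}
```

The key design point is that the application result and the policy verdict travel separately and only meet at the queue, so the raw result never reaches memory unless the TPU has ruled on its tags.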

Apparatus, method and program for legacy boot processing
11150912 · 2021-10-19

An apparatus includes a plurality of root ports serving as roots of bus connection of a plurality of devices including boot devices from which legacy boot is executed to boot an operating system (OS). A processor included in the apparatus identifies a single boot device among the boot devices and a single root port connected to the single boot device, and allocates, as memory addresses to be used for memory mapped input and output, memory addresses with a bit width available during the legacy boot to devices connected to the identified single root port. The processor determines whether the memory addresses have been allocated to all devices connected to the single root port, and executes the legacy boot to boot the OS from the single boot device when the memory addresses have been allocated to all the devices connected to the single root port.
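A minimal sketch of the allocation step, under stated assumptions (all structures hypothetical): devices under the chosen root port are assigned size-aligned addresses inside a 32-bit MMIO window, the bit width available during legacy boot, and the boot proceeds only if every device was placed:

```c
/* Illustrative sketch: allocate 32-bit MMIO addresses, as available during
 * legacy (BIOS) boot, to every device under one root port, and only proceed
 * with legacy boot once all devices received an address. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MMIO32_BASE  0xC0000000u  /* example 32-bit MMIO window */
#define MMIO32_LIMIT 0xF0000000u

typedef struct {
    const char *name;
    uint32_t    size;      /* requested MMIO size (power of two) */
    uint32_t    assigned;  /* assigned base, 0 = unallocated */
} pci_device;

static bool allocate_mmio32(pci_device *devs, size_t n) {
    uint32_t cursor = MMIO32_BASE;
    for (size_t i = 0; i < n; i++) {
        uint32_t align = devs[i].size;             /* BARs are size-aligned */
        uint32_t base  = (cursor + align - 1) & ~(align - 1);
        if (base + devs[i].size > MMIO32_LIMIT)
            return false;                          /* 32-bit window exhausted */
        devs[i].assigned = base;
        cursor = base + devs[i].size;
    }
    return true; /* every device under the root port has an address */
}

int main(void) {
    pci_device under_boot_port[] = {
        { "boot-nvme", 0x4000, 0 },
        { "usb-ctrl",  0x1000, 0 },
    };
    size_t n = sizeof under_boot_port / sizeof under_boot_port[0];

    if (allocate_mmio32(under_boot_port, n))
        printf("all devices mapped; executing legacy boot from boot-nvme\n");
    else
        printf("allocation incomplete; legacy boot deferred\n");
    return 0;
}
```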

Methods and apparatus to implement multiple inference compute engines

Methods and apparatus to implement multiple inference compute engines are disclosed herein. A disclosed example apparatus includes a first inference compute engine, a second inference compute engine, and an accelerator on coherent fabric to couple the first inference compute engine and the second inference compute engine to a converged coherency fabric of a system-on-chip, the accelerator on coherent fabric to arbitrate requests from the first inference compute engine and the second inference compute engine to utilize a single in-die interconnect port.
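As a rough illustration of the arbitration idea (the patent describes hardware; this is a hypothetical software model), a round-robin arbiter can grant the shared in-die interconnect port to one engine per cycle:

```c
/* Hypothetical round-robin arbiter sketch: two inference compute engines
 * issue requests, and the accelerator-on-coherent-fabric grants one request
 * per cycle onto a single in-die interconnect (IDI) port. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool req[2];   /* pending request from engine 0 and engine 1 */
    int  last;     /* engine granted most recently (for fairness) */
} arbiter;

/* Returns the engine granted this cycle, or -1 if no requests are pending. */
static int arbitrate(arbiter *a) {
    for (int i = 1; i <= 2; i++) {
        int candidate = (a->last + i) % 2;  /* rotate priority */
        if (a->req[candidate]) {
            a->req[candidate] = false;      /* request consumed */
            a->last = candidate;
            return candidate;
        }
    }
    return -1;
}

int main(void) {
    arbiter a = { .req = { true, true }, .last = 1 };
    for (int cycle = 0; cycle < 3; cycle++) {
        int winner = arbitrate(&a);
        if (winner >= 0)
            printf("cycle %d: engine %d drives the IDI port\n", cycle, winner);
        else
            printf("cycle %d: port idle\n", cycle);
    }
    return 0;
}
```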

Methods and apparatus for reduced overhead data transfer with a shared ring buffer
11176064 · 2021-11-16

Methods and apparatus for reducing bus overhead with virtualized transfer rings. The Inter-Processor Communications (IPC) bus uses a ring buffer (e.g., a so-called Transfer Ring (TR)) to provide Direct Memory Access (DMA)-like memory access between processors. However, performing small transactions within the TR incurs disproportionate bus overhead. A Virtualized Transfer Ring (VTR) is a null data structure that does not require any backing memory allocation. A processor servicing a VTR data transfer includes the data payload as part of an optional header/footer data structure within a completion ring (CR).
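A hypothetical C sketch of the inline-payload idea: because the VTR has no backing buffers, a small payload is carried in an optional header region of the completion-ring entry rather than through a separately DMA'd buffer. The layout below is illustrative, not the actual IPC wire format:

```c
/* Sketch (layout hypothetical) of the Virtualized Transfer Ring idea:
 * with no backing buffers, small payloads travel inline in an optional
 * header of the completion-ring entry instead of via a DMA'd buffer. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define INLINE_MAX 32  /* small transactions fit in the completion entry */

typedef struct {
    uint16_t tag;                        /* matches the VTR transfer */
    uint16_t inline_len;                 /* >0: payload is carried inline */
    uint8_t  inline_payload[INLINE_MAX]; /* optional header/footer data */
} completion_entry;

/* Service a VTR transfer: rather than allocating and DMA-ing a buffer,
 * copy the small payload directly into the completion entry. */
static void complete_vtr(completion_entry *ce, uint16_t tag,
                         const void *data, uint16_t len) {
    ce->tag = tag;
    ce->inline_len = (uint16_t)(len <= INLINE_MAX ? len : 0);
    if (ce->inline_len)
        memcpy(ce->inline_payload, data, ce->inline_len);
}

int main(void) {
    completion_entry ce;
    complete_vtr(&ce, 7, "pong", 4);
    printf("tag %u carries %u inline bytes\n", ce.tag, ce.inline_len);
    return 0;
}
```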

MECHANISM TO DYNAMICALLY ALLOCATE PHYSICAL STORAGE DEVICE RESOURCES IN VIRTUALIZED ENVIRONMENTS
20210294507 · 2021-09-23

A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage.
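A small sketch of the command flow under stated assumptions (field names are illustrative, not the actual NVMe wire format): the host passes an LBA range attribute when creating the queue, and the device binds that range to physical block addresses:

```c
/* Hypothetical sketch of the described I/O queue creation command: the host
 * requests a queue tied to an LBA range, and the storage device maps that
 * range onto physical block addresses. Field names are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t queue_id;
    uint64_t lba_start;   /* LBA range attribute carried by the command */
    uint64_t lba_count;
} create_ioq_cmd;

typedef struct {
    uint16_t queue_id;
    uint64_t lba_start;
    uint64_t pba_start;   /* physical range the device reserved */
    uint64_t count;
} ioq_mapping;

static uint64_t next_free_pba = 0x100000; /* toy physical allocator */

/* Device-side handling: allocate PBAs and bind them to the VM's queue. */
static ioq_mapping create_io_queue(const create_ioq_cmd *cmd) {
    ioq_mapping m = {
        .queue_id  = cmd->queue_id,
        .lba_start = cmd->lba_start,
        .pba_start = next_free_pba,
        .count     = cmd->lba_count,
    };
    next_free_pba += cmd->lba_count;
    return m;
}

int main(void) {
    create_ioq_cmd cmd = { .queue_id = 1, .lba_start = 0, .lba_count = 4096 };
    ioq_mapping m = create_io_queue(&cmd);
    printf("queue %u: LBA %llu..%llu -> PBA %llu..%llu\n", m.queue_id,
           (unsigned long long)m.lba_start,
           (unsigned long long)(m.lba_start + m.count - 1),
           (unsigned long long)m.pba_start,
           (unsigned long long)(m.pba_start + m.count - 1));
    return 0;
}
```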

MECHANISM TO DYNAMICALLY ALLOCATE PHYSICAL STORAGE DEVICE RESOURCES IN VIRTUALIZED ENVIRONMENTS
20210294759 · 2021-09-23

A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage.

Dirty data tracking in persistent memory systems

An example method of managing persistent memory (PM) in a computing system includes: issuing, by an application executing in the computing system, store instructions to an address space of the application, the address space including a region mapped to the PM; recording, by a central processing unit (CPU) in the computing system, cache line addresses in a log, the cache line addresses corresponding to cache lines in the address space of the application targeted by the store instructions; and issuing, by the application, one or more instructions to flush cache lines from cache of the CPU identified by the cache line addresses in the log.
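The method has three cooperating steps (application stores, CPU-side logging, targeted flushes), which the following hypothetical model makes concrete. On real x86 hardware the flush would be CLWB or CLFLUSHOPT followed by SFENCE; here it is stubbed with a print so the sketch stays self-contained:

```c
/* Minimal model (all names illustrative) of the flow above: stores into the
 * PM-mapped region have their cache-line addresses recorded in a log, and the
 * application later walks the log to flush exactly those lines. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64
#define LOG_CAP    128

static uintptr_t dirty_log[LOG_CAP];
static int log_len = 0;

/* Modeled CPU-side logging: record the cache line touched by a store. */
static void store_u64(uint64_t *addr, uint64_t val) {
    *addr = val;
    uintptr_t line = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    if (log_len == 0 || dirty_log[log_len - 1] != line) /* cheap dedup */
        dirty_log[log_len++] = line;
}

/* Application-side flush of only the logged lines. */
static void flush_logged_lines(void) {
    for (int i = 0; i < log_len; i++) {
        /* on x86: _mm_clwb((void *)dirty_log[i]); then _mm_sfence() after */
        printf("flush cache line at %#lx\n", (unsigned long)dirty_log[i]);
    }
    log_len = 0;
}

int main(void) {
    static uint64_t pm_region[32]; /* stand-in for the PM-mapped range */
    store_u64(&pm_region[0], 1);
    store_u64(&pm_region[1], 2);  /* same cache line as above: deduplicated */
    store_u64(&pm_region[16], 3); /* next cache line */
    flush_logged_lines();
    return 0;
}
```

Flushing only the logged lines, rather than the whole mapped region, is what keeps the persistence barrier proportional to the amount of dirty data.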

Mechanism to dynamically allocate physical storage device resources in virtualized environments
11036533 · 2021-06-15

A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage.

MEMORY DEVICES AND METHODS WHICH MAY FACILITATE TENSOR MEMORY ACCESS

Methods, apparatuses, and systems for tensor memory access are described. Multiple data elements located at different physical addresses of memory may be concurrently read or written by, for example, employing various processing patterns of tensor or matrix related computations. A memory controller, which may comprise a data address generator, may be configured to generate a sequence of memory addresses for a memory access operation based on a starting address and a dimension of a tensor or matrix. At least one dimension of a tensor or matrix may correspond to a row, a column, a diagonal, a determinant, or an Nth dimension of the tensor or matrix. The memory controller may also comprise a buffer configured to read and write the data generated from or according to the sequence of memory addresses.
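A hypothetical address-generator sketch for a row-major matrix shows how a starting address plus a dimension choice yields the whole access sequence; all names and the stride formulas are illustrative assumptions:

```c
/* Illustrative address generator for the pattern the abstract describes:
 * given a starting address and a tensor dimension, emit the sequence of
 * memory addresses for that row, column, or diagonal of a row-major matrix
 * with `cols` columns and `elem` bytes per element. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { DIM_ROW, DIM_COL, DIM_DIAG } dim_kind;

/* Fill `out` with `n` addresses walking one dimension of the matrix. */
static void gen_addresses(uintptr_t base, size_t cols, size_t elem,
                          dim_kind dim, size_t n, uintptr_t *out) {
    size_t stride = elem;                            /* DIM_ROW default */
    switch (dim) {
    case DIM_ROW:  stride = elem;              break; /* adjacent elements */
    case DIM_COL:  stride = cols * elem;       break; /* skip a whole row  */
    case DIM_DIAG: stride = (cols + 1) * elem; break; /* row + column step */
    }
    for (size_t i = 0; i < n; i++)
        out[i] = base + i * stride;
}

int main(void) {
    uintptr_t addrs[4];
    /* addresses of the main diagonal of a 4x4 matrix of 8-byte elements */
    gen_addresses(0x1000, 4, 8, DIM_DIAG, 4, addrs);
    for (int i = 0; i < 4; i++)
        printf("%#lx\n", (unsigned long)addrs[i]);
    return 0;
}
```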

PURGEABLE MEMORY MAPPED FILES
20210110859 · 2021-04-15

A device implementing purgeable memory mapped files includes at least one processor configured to receive a first request to store a first data object in volatile memory in association with a copy of the first data object stored in non-volatile memory, the first request indicating to lock the copy in the non-volatile memory. The processor is further configured to provide for storing the first data object in the volatile memory, and lock the copy stored in the non-volatile memory. The processor is further configured to receive a second request associated with clearing a portion of the non-volatile memory, provide an indication that a second data object is available for deletion from the non-volatile memory when the first data object is locked, and provide an indication that the first data object is available for deletion from the non-volatile memory when the first data object has been unlocked.
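A minimal model of the lock-aware purge decision, with all names hypothetical: each non-volatile copy carries a lock flag set by the first request, and a purge scan reports only unlocked copies as available for deletion:

```c
/* Hypothetical sketch of the purge decision described above: each data object
 * cached in volatile memory has a copy in non-volatile storage with a lock
 * flag; when space must be cleared, only unlocked copies are reported as
 * available for deletion. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    bool locked;  /* set when the object is stored with a lock request */
} nv_copy;

/* Offer purge candidates: skip copies whose object is still locked. */
static void report_purgeable(const nv_copy *objs, int n) {
    for (int i = 0; i < n; i++)
        printf("%s: %s\n", objs[i].name,
               objs[i].locked ? "retained (locked)"
                              : "available for deletion");
}

int main(void) {
    nv_copy objs[] = {
        { "first_object",  true  },  /* stored with lock: survives the purge */
        { "second_object", false },
    };
    report_purgeable(objs, 2);

    objs[0].locked = false;          /* unlocked: now eligible for deletion */
    report_purgeable(objs, 2);
    return 0;
}
```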