
SECURE AND EFFICIENT MICROCODE(UCODE) HOT-UPGRADE FOR BARE METAL CLOUD
20210096848 · 2021-04-01 ·

A microcode (uCode) hot-upgrade method for bare metal cloud deployment and associated apparatus. Under the uCode hot-upgrade method, a uCode patch is received at an out-of-band controller (e.g., a baseboard management controller (BMC)) and buffered in a memory buffer in the out-of-band controller. The out-of-band controller exposes the memory buffer as a Memory-Mapped Input-Output (MMIO) range to a host CPU. A uCode upgrade interrupt service is triggered to upgrade uCode for one or more CPUs in a bare-metal cloud platform during runtime of a tenant host operating system (OS) using an out-of-band process. This enables cloud service providers to live-patch uCode on bare metal servers without touching the tenant operating system environment.
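The staging-and-trigger flow above can be sketched as a small simulation. All names here (`BmcController`, `MMIO_BASE`, `ucode_upgrade_isr`) are illustrative assumptions, not the patent's interfaces:

```python
# Hypothetical sketch of the out-of-band hot-upgrade flow: the BMC buffers a
# uCode patch, exposes it to the host as an MMIO range, and an interrupt
# service applies it without involving the tenant OS.

MMIO_BASE = 0xFED40000  # illustrative MMIO window advertised to the host CPU

class BmcController:
    """Simulated out-of-band controller (BMC) that stages a uCode patch."""

    def __init__(self):
        self.buffer = None          # staging buffer in BMC-local memory
        self.mmio_range = None      # (base, length) exposed to the host

    def receive_patch(self, patch: bytes):
        # 1. Buffer the uCode patch inside the out-of-band controller.
        self.buffer = bytearray(patch)
        # 2. Expose the buffer to the host CPU as an MMIO range.
        self.mmio_range = (MMIO_BASE, len(self.buffer))
        return self.mmio_range

def ucode_upgrade_isr(bmc: BmcController) -> bytes:
    """Simulated uCode upgrade interrupt service: the host reads the staged
    patch through the MMIO window and applies it (stand-in for the actual
    CPU microcode-load mechanism)."""
    base, length = bmc.mmio_range
    return bytes(bmc.buffer[:length])

bmc = BmcController()
bmc.receive_patch(b"\x01\x02\x03ucode")
applied = ucode_upgrade_isr(bmc)
```

The key design point the abstract claims is that every step runs out-of-band: the tenant OS never participates in staging or triggering the patch.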

TRANSLATION LOAD INSTRUCTION

A processor core processes a translation load instruction including a protection field specifying a desired access protection to be specified in a translation entry for a memory page. Processing the translation load instruction includes calculating an effective address within the memory page and ensuring that a translation entry containing the desired access protection is stored within at least one translation structure of the data processing system.
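The two steps of the instruction (compute an effective address, then guarantee a matching translation entry exists) can be modeled in a few lines. The dictionary-based TLB and the `"rw"` protection encoding are assumptions for illustration:

```python
# Sketch of translation-load semantics: calculate an effective address within
# a page, then ensure a translation entry carrying the requested access
# protection is present in a (simulated) translation structure.
PAGE = 4096
tlb = {}   # simulated translation structure: page base -> protection bits

def translation_load(base: int, offset: int, protection: str) -> int:
    ea = base + offset                 # effective address within the page
    page = ea & ~(PAGE - 1)
    if tlb.get(page) != protection:    # install/update the entry as needed
        tlb[page] = protection
    return page

p = translation_load(0x10000, 0x24, "rw")
```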

SYSTEMS AND METHODS FOR SIMULATING WORST-CASE CONTENTION TO DETERMINE WORST-CASE EXECUTION TIME OF APPLICATIONS EXECUTED ON A PROCESSOR

Techniques are disclosed for determining the worst-case execution time of at least one application under test using memory thrashing. Memory thrashing simulates shared-resource interference. The memory that is thrashed includes mapped memory and, optionally, shared cache memory.
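A minimal sketch of the thrashing idea, assuming a 64-byte cache line and a fixed buffer size (both illustrative): stride through a buffer one cache line at a time so each access touches a distinct line, creating contention while the workload under test is timed.

```python
# Illustrative memory "thrasher": touches one byte per cache line across a
# mapped buffer to simulate worst-case shared-cache interference.
import time

CACHE_LINE = 64          # assumed cache-line size in bytes
BUF_BYTES = 1 << 20      # 1 MiB thrash buffer (stand-in for shared cache size)

def thrash(buf: bytearray, passes: int = 1):
    """Touch one byte per cache line so every access hits a distinct line."""
    for _ in range(passes):
        for i in range(0, len(buf), CACHE_LINE):
            buf[i] ^= 1

def measure_with_contention(workload, passes: int = 4) -> float:
    """Time a workload while thrashing memory to approximate worst case."""
    buf = bytearray(BUF_BYTES)
    start = time.perf_counter()
    thrash(buf, passes)        # simulated interference around the workload
    workload()
    return time.perf_counter() - start

t = measure_with_contention(lambda: sum(range(10_000)))
```

A real measurement harness would run the thrasher on sibling cores concurrently with the application; this single-threaded version only illustrates the access pattern.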

METHOD AND SYSTEM FOR SOLID STATE DRIVE (SSD)-BASED REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)
20230409245 · 2023-12-21 ·

A method and redundant array of independent disks (RAID) system are provided. An operation is received from an application at a file system (FS) of the RAID system. A memory mapping module of the RAID system receives at least an FS logical block address (LBA) in accordance with the operation. The memory mapping module creates a mapping from a virtual memory of the application to a RAID array in a system memory of the RAID system using at least the FS LBA.
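The FS-LBA-to-array mapping step can be sketched with assumed geometry (stripe-unit size and data-disk count are illustrative, not from the patent):

```python
# Hypothetical sketch: translate a file-system logical block address (LBA)
# into a (stripe, disk, offset) location in a RAID array.
STRIPE_UNIT_BLOCKS = 8   # assumed blocks per stripe unit
NUM_DATA_DISKS = 4       # assumed number of RAID data disks

def map_fs_lba(fs_lba: int):
    stripe_unit = fs_lba // STRIPE_UNIT_BLOCKS   # which stripe unit overall
    stripe = stripe_unit // NUM_DATA_DISKS       # which stripe (row)
    disk = stripe_unit % NUM_DATA_DISKS          # which data disk (column)
    offset = fs_lba % STRIPE_UNIT_BLOCKS         # block within the unit
    return stripe, disk, offset
```

In the claimed system this mapping would further be wired into the application's virtual memory so loads and stores reach the RAID array in system memory directly; the arithmetic above covers only the LBA-to-array half.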

PROTECTION DOMAINS FOR FILES AT FILE-LEVEL OR PAGE-LEVEL
20210055869 · 2021-02-25 ·

Methods, systems and computer program products are provided for managing protection domains (PDs) for files at a file-level or a page-level. PDs may be allocated for multiple purposes, e.g., to protect processes, files, buffers, etc. Files stored in nonvolatile memory (NVM) subject to direct access (DAX) may be protected by file-level or page-level PDs. PDs may comprise protection keys (PKEYs) with user-configurable read and write access control registers (PKRUs). NVM files may be protected from corruption (e.g. by stray writes) by leaving write access disabled except for temporary windows of time for valid writes. File PDs may be managed by a file manager while buffer PDs may be managed by a buffer pool manager. File associations between PDs, files and file address space may be maintained in a file object. Buffer associations between PDs, buffers and buffer address space may be maintained in a buffer descriptor.
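The "write access disabled except for temporary windows" pattern can be sketched with a context manager. On Linux the real mechanism would be `pkey_alloc`/`pkey_mprotect` and the PKRU register; here a simple flag stands in for the PKEY write-disable bit, and all class and method names are illustrative:

```python
# Simulated protection-domain write window: writes to an NVM-backed file are
# rejected unless they occur inside an explicitly opened window, guarding the
# file against stray writes.
from contextlib import contextmanager

class ProtectedFile:
    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.write_enabled = False   # default: write access disabled

    @contextmanager
    def write_window(self):
        self.write_enabled = True    # PKRU analogue: clear write-disable
        try:
            yield self
        finally:
            self.write_enabled = False  # re-arm protection after the write

    def write(self, off: int, payload: bytes):
        if not self.write_enabled:
            raise PermissionError("write outside a valid write window")
        self.data[off:off + len(payload)] = payload

f = ProtectedFile(b"hello world")
with f.write_window():
    f.write(0, b"HELLO")             # valid write inside the window
```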

MEMORY TIERING USING PCIe CONNECTED FAR MEMORY
20210049101 · 2021-02-18 ·

A processing device in a host system monitors a data temperature of a plurality of memory pages stored in a host-addressable region of a cache memory component operatively coupled with the host system. The processing device determines that a first memory page of the plurality of memory pages satisfies a first threshold criterion pertaining to the data temperature of the first memory page and sends a first migration command indicating the first memory page to a direct memory access (DMA) engine executing on a memory-mapped storage component that is operatively coupled with the cache memory component via a peripheral component interconnect express (PCIe) bus. The first migration command causes the DMA engine to initiate a first DMA transfer of the first memory page from the cache memory component to a host-addressable region of the memory-mapped storage component.
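The threshold-and-migrate loop can be sketched as follows. The cold threshold, the `DmaEngine` interface, and charging per access epoch are assumptions for illustration, not the patent's interface:

```python
# Illustrative sketch: the host tracks per-page access counts ("data
# temperature") and, when a page falls below a threshold, sends a migration
# command to a DMA engine on the PCIe-attached far-memory device.
COLD_THRESHOLD = 2   # accesses per epoch below which a page is demoted

class DmaEngine:
    """Stand-in for the device-side DMA engine."""
    def __init__(self):
        self.transfers = []
    def migrate(self, page_id: int):
        # Stand-in for a DMA transfer: cache memory -> far memory region.
        self.transfers.append(page_id)

def demote_cold_pages(temps: dict, dma: DmaEngine):
    for page_id, accesses in temps.items():
        if accesses < COLD_THRESHOLD:   # threshold criterion satisfied
            dma.migrate(page_id)        # migration command to the device

dma = DmaEngine()
demote_cold_pages({0: 10, 1: 1, 2: 0}, dma)
```

Because the DMA engine lives on the storage component itself, the host only issues commands; the data movement happens device-side over PCIe.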

MMIO addressing using a translation table

A method for processing an instruction by a processor operationally connected to one or more buses comprises determining the instruction is to access an address of an address space. The address space maps a memory and comprises a range of MMIO addresses. The method determines the address being accessed is within the range of MMIO addresses and translates, based on determining that the address being accessed is within the range of MMIO addresses, the address being accessed using a translation table to a bus identifier identifying one of the buses and a bus address of a bus address space. The bus address space is assigned to the identified bus. The bus address resulting from the translation is assigned to a device accessible via the identified bus. Based on the instruction a request directed to the device is sent via the identified bus to the bus address resulting from the translation.
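A compact sketch of the claimed translation, with an assumed MMIO window, 4 KiB translation granularity, and made-up table contents:

```python
# Illustrative MMIO translation: addresses inside the MMIO range are looked
# up in a table that yields (bus identifier, bus-local base address); the
# request is then directed to that bus at the translated bus address.
MMIO_START, MMIO_END = 0xF000_0000, 0xF100_0000

# translation table: MMIO page base -> (bus identifier, bus address base)
TRANSLATION = {
    0xF000_0000: (1, 0x1000),
    0xF000_1000: (2, 0x8000),
}

def access(addr: int):
    if MMIO_START <= addr < MMIO_END:             # within the MMIO range?
        page = addr & ~0xFFF
        bus_id, bus_base = TRANSLATION[page]      # translate via the table
        return bus_id, bus_base + (addr & 0xFFF)  # send request on that bus
    return None  # ordinary memory access: no translation performed
```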

Shared memory usage tracking across multiple processes

An apparatus in one embodiment comprises a host device that includes at least one processor and an associated memory. The host device is configured to implement a plurality of processes each configured to access a shared region of the memory. The host device is further configured to establish a multi-process control group for the shared region, to maintain state information for the multi-process control group, and to track usage of the shared region by the processes based at least in part on the state information. At least a subset of the processes may comprise respective containers implemented utilizing operating system level virtualization of the processor of the host device. The multi-process control group established for the shared region illustratively comprises a coarse-grained control group having a granularity greater than a single page of the shared region.
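The coarse-grained multi-process control group can be sketched as below. The 16-page granule and the class shape are assumptions; the point is that usage is charged once per granule to the group, regardless of which member process touched it:

```python
# Sketch of a multi-process control group for a shared memory region:
# usage is tracked at a granularity coarser than a single page and is
# charged to the group, not to any one process.
PAGE = 4096
GRANULE = 16 * PAGE   # coarse-grained: account in 16-page chunks

class SharedRegionCgroup:
    def __init__(self):
        self.members = set()    # processes attached to the control group
        self.charged = set()    # granule indices already accounted for

    def attach(self, pid: int):
        self.members.add(pid)

    def touch(self, pid: int, offset: int):
        assert pid in self.members
        self.charged.add(offset // GRANULE)   # charge once per granule

    @property
    def usage_bytes(self) -> int:
        return len(self.charged) * GRANULE

cg = SharedRegionCgroup()
cg.attach(100); cg.attach(101)
cg.touch(100, 0)
cg.touch(101, PAGE)          # same granule as above: no double charge
cg.touch(100, GRANULE + 5)   # second granule
```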

Memory devices and methods which may facilitate tensor memory access

Methods, apparatuses, and systems for tensor memory access are described. Multiple data located in different physical addresses of memory may be concurrently read or written by, for example, employing various processing patterns of tensor or matrix related computations. A memory controller, which may comprise a data address generator, may be configured to generate a sequence of memory addresses for a memory access operation based on a starting address and a dimension of a tensor or matrix. At least one dimension of a tensor or matrix may correspond to a row, a column, a diagonal, a determinant, or an Nth dimension of the tensor or matrix. The memory controller may also comprise a buffer configured to read and write the data generated from or according to a sequence of memory addresses.
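For a 2-D row-major tensor, the address generator's row, column, and diagonal patterns reduce to simple strides. Element size and function names below are assumptions for illustration:

```python
# Sketch of a data address generator: given a starting address and which
# dimension of a row-major 2-D tensor to walk, emit the element addresses.
ELEM = 4  # assumed bytes per element

def gen_addresses(base: int, rows: int, cols: int, dim: str):
    if dim == "row":        # consecutive elements of one row: stride 1
        return [base + j * ELEM for j in range(cols)]
    if dim == "column":     # one full row of elements per step
        return [base + i * cols * ELEM for i in range(rows)]
    if dim == "diagonal":   # main diagonal: stride of (cols + 1) elements
        n = min(rows, cols)
        return [base + i * (cols + 1) * ELEM for i in range(n)]
    raise ValueError(dim)
```

Generating the whole sequence up front is what lets the controller's buffer stream the corresponding data without per-element address computation on the host side.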

Dynamically adjusting a number of memory copy and memory mapping windows to optimize I/O performance

A method to dynamically optimize utilization of data transfer techniques includes processing multiple I/O requests using one of several data transfer techniques depending on which data transfer technique is more efficient. The data transfer techniques include: a memory copy data transfer technique that copies cache segments associated with an I/O request from a cache memory to a permanently mapped memory; and a memory mapping data transfer technique that temporarily maps cache segments associated with an I/O request. In order to process the I/O requests, the method utilizes a first number of copy windows associated with the memory copy data transfer technique, and a second number of mapping windows associated with the memory mapping data transfer technique. The method dynamically adjusts one or more of the first number and the second number to optimize the processing of the I/O requests. A corresponding system and computer program product are also disclosed.
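One plausible shape for the dynamic adjustment is a fixed pool of windows rebalanced toward whichever technique serviced more I/O in the last interval. The pool size, counters, and one-window-per-interval policy are all assumptions for illustration:

```python
# Illustrative sketch: shift windows between the memory-copy and
# memory-mapping data transfer techniques based on observed I/O traffic.
TOTAL_WINDOWS = 16

class WindowBalancer:
    def __init__(self):
        self.copy_windows = TOTAL_WINDOWS // 2   # first number
        self.map_windows = TOTAL_WINDOWS - self.copy_windows  # second number
        self.copy_ios = 0
        self.map_ios = 0

    def record_io(self, technique: str):
        if technique == "copy":
            self.copy_ios += 1
        else:
            self.map_ios += 1

    def rebalance(self):
        # Move one window toward the busier technique, keeping at least
        # one window of each kind available.
        if self.copy_ios > self.map_ios and self.map_windows > 1:
            self.copy_windows += 1
            self.map_windows -= 1
        elif self.map_ios > self.copy_ios and self.copy_windows > 1:
            self.map_windows += 1
            self.copy_windows -= 1
        self.copy_ios = self.map_ios = 0   # start a new observation interval

b = WindowBalancer()
for _ in range(10):
    b.record_io("copy")
for _ in range(3):
    b.record_io("map")
b.rebalance()
```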