Patent classifications
G06F2212/683
LOCAL MEMORY TRANSLATION TABLE
Embodiments described herein provide techniques to facilitate access to local memory of a graphics processor by a guest software domain. The guest software domain can access the local memory via an address translation system that includes a local memory translation table.
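For a concrete picture of the mechanism, here is a minimal Python sketch of translating a guest address through a local memory translation table; every identifier (`LocalMemoryTranslationTable`, `map_page`, `translate`, `PAGE_SIZE`) is invented for illustration and does not come from the embodiment:

```python
# Minimal sketch: a guest page number is looked up in the local memory
# translation table (LMTT) to find the backing page in the graphics
# processor's local memory. Layout and names are assumptions.
PAGE_SIZE = 4096

class LocalMemoryTranslationTable:
    def __init__(self):
        self.entries: dict[int, int] = {}  # guest page number -> local page number

    def map_page(self, guest_pn: int, local_pn: int) -> None:
        self.entries[guest_pn] = local_pn

    def translate(self, guest_addr: int) -> int:
        guest_pn, offset = divmod(guest_addr, PAGE_SIZE)
        local_pn = self.entries[guest_pn]  # KeyError models an unmapped page
        return local_pn * PAGE_SIZE + offset

lmtt = LocalMemoryTranslationTable()
lmtt.map_page(guest_pn=0x10, local_pn=0x7F)
assert lmtt.translate(0x10_123) == 0x7F_123
```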
Page protection layer
In an embodiment, a computer system comprises a page protection layer. The page protection layer may be the component in the system that manages the page tables for virtual-to-physical page mappings. Transactions to the page protection layer are used to create and manage mappings in the page tables. The page protection layer may enforce dynamic security policies in the system (i.e., security policies that cannot be enforced using only a static hardware configuration). In an embodiment, the page protection layer may ensure that it is the only component able to modify the page tables. The page protection layer may ensure that no component in the system is able to modify a page that is marked executable in any process's address space. In an embodiment, the page protection layer may ensure that any page that is marked executable contains code with a verified code signature.
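The policies above can be made concrete with a short sketch; the class and checks below are hypothetical stand-ins for the layer's behavior, not the embodiment's actual interface:

```python
# Hypothetical sketch of the page protection layer's invariants: it is the
# single writer of permissions, no page is writable and executable at once,
# and executable pages must carry verified code. `verify_signature` is an
# assumed code-signing check supplied by the platform.
class PageProtectionLayer:
    def __init__(self, verify_signature):
        self.verify_signature = verify_signature
        self.permissions: dict[int, set[str]] = {}  # physical page -> perms

    def map(self, page: int, perms: set[str], contents: bytes) -> None:
        if "x" in perms:
            if "w" in perms:
                raise PermissionError("page may not be writable and executable")
            if not self.verify_signature(contents):
                raise PermissionError("unsigned code may not be marked executable")
        self.permissions[page] = perms

    def write(self, page: int, contents: bytes) -> None:
        # No component may modify a page that is executable in any address space.
        if "x" in self.permissions.get(page, set()):
            raise PermissionError("page is executable in some address space")
        # ... the actual write would go through this single trusted path ...

ppl = PageProtectionLayer(verify_signature=lambda code: code.startswith(b"SIGNED"))
ppl.map(page=0x40, perms={"r", "x"}, contents=b"SIGNED\x90\x90")
```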
A METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSABLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS
Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
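A rough sketch of that flow follows, with all interfaces (`vm`, `slow_mem`, `fast_mem`) and the threshold invented for illustration:

```python
# Sketch of access-count-driven migration from a slow byte-addressable
# memory to a faster one. The VM is paused for the move, as the abstract
# describes; HOT_THRESHOLD and the object interfaces are assumptions.
from collections import Counter

access_counts: Counter = Counter()  # guest page number -> PTE access count
HOT_THRESHOLD = 64                  # assumed tuning parameter

def record_pte_access(guest_pn: int) -> None:
    access_counts[guest_pn] += 1

def migrate_hot_pages(vm, slow_mem, fast_mem) -> None:
    vm.pause()  # execution of the VM and the application is paused
    try:
        for guest_pn, count in access_counts.items():
            if count >= HOT_THRESHOLD:
                data = slow_mem.read_page(guest_pn)
                target_pn = fast_mem.allocate_page()
                fast_mem.write_page(target_pn, data)
                vm.remap(guest_pn, fast_mem, target_pn)  # repoint the mapping
        access_counts.clear()
    finally:
        vm.resume()
```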
Process-dedicated in-memory translation lookaside buffers (TLBs) (mTLBs) for augmenting a memory management unit (MMU) TLB for translating virtual addresses (VAs) to physical addresses (PAs) in a processor-based system
Process-dedicated in-memory translation lookaside buffers (TLBs) (mTLBs) for augmenting a memory management unit (MMU) TLB for translating virtual addresses (VAs) to physical addresses (PAs) in a processor-based system are disclosed. In disclosed examples, a dedicated in-memory TLB is supported in system memory for each process so that one process's cached page table entries do not displace another process's cached page table entries. When a process is scheduled to execute on a central processing unit (CPU), the in-memory TLB address stored for that process can be used by a page table walker circuit in the CPU MMU to access the dedicated in-memory TLB for the executing process to perform VA-to-PA translations in the event of a TLB miss in the MMU TLB. If a TLB miss occurs in the in-memory TLB, the page table walker circuit can walk the page table in memory.
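The lookup order the abstract describes can be sketched as follows (single-level page tables for simplicity; every name here is invented):

```python
# Sketch of the three-step translation: hardware MMU TLB first, then the
# process's dedicated in-memory TLB (mTLB), then a page table walk.
class Process:
    def __init__(self, page_table: dict[int, int]):
        self.page_table = page_table             # VA page -> PA page
        self.in_memory_tlb: dict[int, int] = {}  # per-process mTLB

mmu_tlb: dict[tuple[int, int], int] = {}         # (pid, VA page) -> PA page

def translate(pid: int, proc: Process, va_page: int) -> int:
    if (pid, va_page) in mmu_tlb:                # 1. MMU TLB hit
        return mmu_tlb[(pid, va_page)]
    if va_page in proc.in_memory_tlb:            # 2. mTLB hit
        pa_page = proc.in_memory_tlb[va_page]
    else:                                        # 3. page table walk
        pa_page = proc.page_table[va_page]
        proc.in_memory_tlb[va_page] = pa_page    # fill only this process's mTLB
    mmu_tlb[(pid, va_page)] = pa_page
    return pa_page
```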
MEMORY SCANNING METHODS AND APPARATUS
An example apparatus includes a scan manager to add a portion of a page of physical memory from a first sequence of mappings to a second sequence of mappings in response to determining the second sequence includes an address corresponding to the portion of the page of physical memory, and a scanner to scan the first sequence and the second sequence to determine whether at least one of first data in the first sequence or second data in the second sequence includes a pattern indicative of malware.
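As a toy rendering of the scan manager and scanner (the pattern, types, and helper names are placeholders, not the apparatus's design):

```python
# Toy model: portions of physical memory are moved from the first sequence
# of mappings to the second when the second sequence covers their address,
# then both sequences are scanned for a signature-like byte pattern.
MALWARE_PATTERN = b"\xde\xad\xbe\xef"  # placeholder signature

def build_sequences(portions: list[tuple[int, bytes]],
                    second_addrs: set[int]) -> tuple[list, list]:
    first, second = [], []
    for addr, data in portions:
        (second if addr in second_addrs else first).append((addr, data))
    return first, second

def scan(*sequences) -> bool:
    return any(MALWARE_PATTERN in data
               for seq in sequences
               for _addr, data in seq)

first, second = build_sequences(
    [(0x1000, b"benign"), (0x2000, b"..\xde\xad\xbe\xef..")],
    second_addrs={0x2000})
assert scan(first, second)
```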
SYSTEM-ON-CHIP AND ACCELERATION METHOD FOR SYSTEM MEMORY ACCESSING
An acceleration technology for accessing system memory provides translation agent hardware that calculates the physical address of the system memory based on an access request issued from the device end. The translation agent hardware has a cache memory that stores information to speed up the calculation of the physical address. Each cache line corresponds to a least-recently-used (LRU) index value, and the cache line with the greatest LRU index value is preferentially released to be reassigned. A counter maintains a count value that reflects isochronous caching demand. LRU index values of cache lines assigned to non-isochronous caching are kept no lower than the count value, so that isochronous caching takes precedence over non-isochronous caching.
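One way to model the interplay of the LRU index values and the counter (a rough reconstruction; the hardware's exact update rules are not given in the abstract):

```python
# Rough reconstruction of the replacement policy: every cache line carries
# an LRU index, the line with the greatest index is released first, and
# non-isochronous lines are clamped to at least the counter's count value
# so that isochronous lines outlive them. Update rules are assumptions.
class Line:
    def __init__(self, tag: int, isochronous: bool):
        self.tag = tag
        self.isochronous = isochronous
        self.lru_index = 0

class TranslationCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines: list[Line] = []
        self.count_value = 0          # reflects isochronous caching demand

    def touch(self, line: Line) -> None:
        for other in self.lines:      # age every line, then reset the hit
            other.lru_index += 1
        line.lru_index = 0
        self._clamp()

    def insert(self, tag: int, isochronous: bool) -> None:
        if isochronous:
            self.count_value += 1     # demand for isochronous caching rises
        if len(self.lines) >= self.capacity:
            victim = max(self.lines, key=lambda ln: ln.lru_index)
            self.lines.remove(victim) # greatest LRU index is released first
            if victim.isochronous:
                self.count_value = max(0, self.count_value - 1)
        self.lines.append(Line(tag, isochronous))
        self._clamp()

    def _clamp(self) -> None:
        # Non-isochronous lines never fall below the count value, so they
        # are preferred victims while isochronous demand is high.
        for line in self.lines:
            if not line.isochronous:
                line.lru_index = max(line.lru_index, self.count_value)
```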
JUST-IN-TIME SYNONYM HANDLING FOR A VIRTUALLY-TAGGED CACHE
An apparatus configured to provide just-in-time synonym handling, and related systems, methods, and computer-readable media, are disclosed. The apparatus includes a first cache comprising a translation lookaside buffer (TLB) and a hit/miss block. The first cache is configured to form a miss request associated with an access to the first cache and provide the miss request to a second cache. The miss request comprises a physical address provided by the TLB and miss information provided by the hit/miss block. The first cache is further configured to receive, from the second cache, previously-stored metadata associated with an entry in the second cache. The entry in the second cache is associated with the miss request. The first cache may further include a synonym detection block, which is configured to identify a cache line in the first cache for invalidation based on the previously-stored metadata received from the second cache.
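A simplified picture of the idea, with all structures and field names assumed for illustration:

```python
# Simplified model: the second-level cache remembers which first-level
# (virtual) index last held each physical line; returned as metadata on a
# miss, it pinpoints a synonym line to invalidate just in time.
class L1VirtuallyTagged:
    def __init__(self):
        self.lines: dict[int, int] = {}     # virtual index -> physical address

class L2Physical:
    def __init__(self):
        self.metadata: dict[int, int] = {}  # physical address -> last L1 index

def handle_miss(l1: L1VirtuallyTagged, l2: L2Physical,
                virt_index: int, phys_addr: int) -> None:
    prev_index = l2.metadata.get(phys_addr)  # previously-stored metadata
    if prev_index is not None and prev_index != virt_index:
        # Synonym detected: the same physical line is cached under another
        # virtual index; invalidate it before the new fill.
        l1.lines.pop(prev_index, None)
    l1.lines[virt_index] = phys_addr
    l2.metadata[phys_addr] = virt_index
```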
Pause communication from I/O devices supporting page faults
A processing device includes a core to execute instructions and memory management circuitry coupled to memory, the core, and an I/O device that supports page faults. The memory management circuitry includes express invalidation circuitry and page translation permission circuitry. The memory management circuitry is to, while the core is executing the instructions, receive a command to pause communication between the I/O device and the memory. In response to receiving the command to pause the communication, the memory management circuitry modifies permissions of page translations via the page translation permission circuitry and transmits an invalidation request, via the express invalidation circuitry, to the I/O device to cause cached page translations in the I/O device to be invalidated.
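A compressed sketch of the pause sequence, with invented interfaces standing in for the circuitry:

```python
# Sketch of the pause flow: revoke translation permissions, then send an
# express invalidation so the device drops its cached translations; its
# next accesses fault and are held off until resume. Interfaces invented.
def pause_io(page_translations: dict[int, set], io_device) -> None:
    for page in page_translations:
        page_translations[page] = set()          # modify permissions: none
    io_device.invalidate_cached_translations()   # express invalidation request

def resume_io(page_translations: dict[int, set], perms: set) -> None:
    for page in page_translations:
        page_translations[page] = set(perms)     # restore access
```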
Data storage devices and data processing methods
A data storage device includes a memory device and a memory controller. The memory controller selects a predetermined memory block to receive data and accordingly records multiple logical addresses in a first mapping table. The first mapping table records which logical page the data stored in each physical page of the predetermined memory block corresponds to. When the predetermined memory block is full, the memory controller edits a second mapping table and a third mapping table according to the first mapping table. The second mapping table corresponds to multiple logical pages and records in which memory block and which physical page the data of each logical page is stored. The third mapping table corresponds to the physical pages of the predetermined memory block and indicates whether each physical page is a valid page or an invalid page.
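The relationship among the three tables can be sketched in a few lines (the block number, table shapes, and helper names are illustrative only):

```python
# Schematic of the three tables: the first is per-block physical-to-logical
# (recorded while writing), the second is global logical-to-physical, and
# the third flags which physical pages remain valid.
first_table: list[int] = []   # physical page in the block -> logical page
second_table: dict[int, tuple[int, int]] = {}  # logical page -> (block, physical page)
third_table: list[bool] = []  # physical page in the block -> valid?

BLOCK = 7  # hypothetical block number

def record_write(logical_page: int) -> None:
    first_table.append(logical_page)  # first table grows as pages are written

def on_block_full() -> None:
    global third_table
    latest = {lp: pp for pp, lp in enumerate(first_table)}  # newest copy wins
    third_table = [latest.get(lp) == pp for pp, lp in enumerate(first_table)]
    for lp, pp in latest.items():
        second_table[lp] = (BLOCK, pp)           # where each logical page lives
```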
Application processor, system-on chip and method of operating memory management unit
A memory management unit (MMU) in an application processor responds to an access request that includes a target context and a target virtual address, where the access request corresponds to an inspection request for translating the target virtual address to a first target physical address. The MMU includes a context cache, a translation cache, an invalidation queue, and an address translation manager (ATM). The context cache stores contexts and context identifiers of the stored contexts while avoiding duplicated contexts. The translation cache stores first addresses, first context identifiers, and second addresses; a first address corresponds to a virtual address, a first context identifier corresponds to a first context, and a second address corresponds to the first address and the first context. The invalidation queue stores at least one context identifier, of the context identifiers stored in the translation cache, to be invalidated. The ATM controls the context cache, the translation cache, and the invalidation queue.
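A condensed model of the three structures the ATM coordinates (container shapes and the `walk` callback are assumptions made for illustration):

```python
# Sketch: a deduplicating context cache, a translation cache keyed by
# (context identifier, virtual address), and an invalidation queue that is
# drained before lookups.
context_cache: dict[object, int] = {}   # context -> context identifier
translation_cache: dict[tuple[int, int], int] = {}  # (context id, VA) -> PA
invalidation_queue: list[int] = []      # context identifiers to invalidate

def context_id(context) -> int:
    # Each distinct context is stored once and given a stable identifier.
    if context not in context_cache:
        context_cache[context] = len(context_cache)
    return context_cache[context]

def translate(context, target_va: int, walk) -> int:
    drain_invalidations()
    key = (context_id(context), target_va)
    if key not in translation_cache:
        translation_cache[key] = walk(context, target_va)  # miss: walk tables
    return translation_cache[key]

def drain_invalidations() -> None:
    while invalidation_queue:
        cid = invalidation_queue.pop()
        for key in [k for k in translation_cache if k[0] == cid]:
            del translation_cache[key]
```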