Patent classifications
G06F2212/151
Real time input/output address translation for virtualized systems
In an example, a device includes a memory and a processor core coupled to the memory via a memory management unit (MMU). The device also includes a system MMU (SMMU) cross-referencing virtual addresses (VAs) with intermediate physical addresses (IPAs) and IPAs with physical addresses (PAs). The device further includes a physical address table (PAT) cross-referencing IPAs with each other and cross-referencing PAs with each other. The device also includes a peripheral virtualization unit (PVU) cross-referencing IPAs with PAs, and a routing circuit coupled to the memory, the SMMU, the PAT, and the PVU. The routing circuit is configured to receive a request comprising an address and an attribute and to route the request through at least one of the SMMU, the PAT, or the PVU based on the address and the attribute.
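The routing decision described above can be sketched in software. The attribute encoding, range checks, and function names below are assumptions for illustration only; the patent describes a hardware routing circuit, not this code.

```python
# Illustrative sketch: route a request to one of three translation paths
# (SMMU, PAT, PVU) based on its address and a transaction attribute.
SMMU, PAT, PVU = "SMMU", "PAT", "PVU"

def route_request(address, attribute, va_ranges):
    """Pick a translation path for a request.  'va_ranges' lists the
    (lo, hi) address windows that hold virtual addresses; the attribute
    encoding here ("virtualized" vs. anything else) is hypothetical."""
    if attribute == "virtualized":
        if any(lo <= address < hi for lo, hi in va_ranges):
            return SMMU   # VA: full VA -> IPA -> PA two-stage translation
        return PVU        # IPA-tagged peripheral traffic: IPA -> PA
    return PAT            # non-virtualized traffic: remapped via the PAT
```

A usage sketch: a virtualized request whose address falls in a known VA window goes through the SMMU, while the same address with a non-virtualized attribute would be remapped through the PAT.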
IMPROVING MEMORY ACCESS HANDLING FOR NESTED VIRTUAL MACHINES
Systems and methods for memory management for nested virtual machines. An example method may comprise running, by a host computer system, a Level 0 hypervisor managing a Level 1 virtual machine running a Level 1 hypervisor, wherein the Level 1 hypervisor manages a Level 2 virtual machine, wherein the Level 2 virtual machine is associated with a Peripheral Component Interconnect (PCI) device; generating, by the Level 0 hypervisor, a Level 1 page table by combining records from a guest page table maintained by the Level 1 hypervisor with records from a host page table maintained by the Level 0 hypervisor; generating a Level 2 page table comprising a plurality of Level 2 page table entries; and causing a device driver of the Level 2 virtual machine to use the Level 2 page table for second level address translation.
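The page-table combination step above is, at its core, a composition of two mappings: guest-virtual to guest-physical (maintained by the nested hypervisor) and guest-physical to host-physical (maintained by the Level 0 hypervisor). A minimal sketch, with hypothetical dict-based tables standing in for real page-table structures:

```python
def combine_page_tables(guest_pt, host_pt):
    """Compose a guest page table (guest VA -> guest PA) with the host
    page table (guest PA -> host PA) into one table translating guest
    VA directly to host PA.  Entries the host has not mapped are
    dropped; a real hypervisor would fault them in on demand."""
    combined = {}
    for gva, gpa in guest_pt.items():
        if gpa in host_pt:
            combined[gva] = host_pt[gpa]
    return combined
```

This is the same composition idea behind shadow page tables: the combined table lets hardware translate in one step what would otherwise take two nested walks.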
Throttling access to high latency hybrid memory DIMMs
A throttling engine throttles access to a high latency hybrid memory. A request is received for partition mapping of a virtual address for an R/W memory page. An entry is added to a partition page table that maps the virtual address to a physical address and comprises access information that is R/W. A throttled flag is set in a corresponding entry of a partition page extension table. The original access information is saved in an original access part of the partition page extension table, and the access information in the partition page table is replaced with an R value. When an application fault is received, a throttling test is performed on the faulting address. If the throttling test is false, the fault is passed through to an operating system fault handler and the throttling fault stage ends; otherwise, a delay is implemented to slow access to the memory.
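The flow above can be modeled as a small state machine: pages are mapped read-only with a throttled flag, the resulting write fault is intercepted, a delay is applied, and the original access bits are restored. The class and field names below are illustrative, not from the patent:

```python
import time

class ThrottlingEngine:
    """Toy model of the described throttling flow.  All structures are
    plain dicts standing in for the partition page table and its
    extension table."""
    def __init__(self, delay_s=0.0):
        self.page_table = {}   # va -> {"pa": ..., "access": ...}
        self.extension = {}    # va -> {"throttled": ..., "orig": ...}
        self.delay_s = delay_s

    def map_rw_page(self, va, pa):
        # Map the page, save the original R/W access, downgrade to R.
        self.page_table[va] = {"pa": pa, "access": "R"}
        self.extension[va] = {"throttled": True, "orig": "RW"}

    def on_fault(self, va):
        ext = self.extension.get(va)
        if not ext or not ext["throttled"]:
            return "pass_to_os"          # not a throttling fault
        time.sleep(self.delay_s)         # the throttling delay
        self.page_table[va]["access"] = ext["orig"]
        ext["throttled"] = False
        return "throttled"
```

After the first throttled fault the page regains R/W access, so subsequent writes proceed at full speed until the engine throttles it again.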
Memory protection circuit and memory protection method
To provide a memory protection circuit and a memory protection method suitable for quick data transfer between a plurality of virtual machines via a common memory, according to an embodiment, a memory protection circuit includes a first ID storing register that stores therein an ID of any of a plurality of virtual machines managed by a hypervisor, an access determination circuit that permits the virtual machine having the ID stored in the first ID storing register to access a memory, a second ID storing register that stores therein an ID of any of the virtual machines, and an ID update control circuit that permits the virtual machine having the ID stored in the second ID storing register to rewrite the ID stored in the first ID storing register.
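The two-register scheme above separates "who may access the shared memory" from "who may hand it over." A minimal sketch, with hypothetical names (the patent describes registers and circuits, not this API):

```python
class MemoryProtectionCircuit:
    """Sketch: reg1 holds the ID of the VM permitted to access the
    common memory; reg2 holds the ID of the VM permitted to rewrite
    reg1, i.e. to pass ownership of the buffer to another VM."""
    def __init__(self, owner_id, controller_id):
        self.reg1 = owner_id
        self.reg2 = controller_id

    def may_access(self, vm_id):
        return vm_id == self.reg1

    def update_owner(self, requester_id, new_owner_id):
        if requester_id != self.reg2:
            return False    # only the VM named in reg2 may rewrite reg1
        self.reg1 = new_owner_id
        return True
```

This lets a producer VM fill the shared buffer, then transfer access to a consumer VM without copying the data and without a hypervisor trap on every access.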
Secure address translation services using bundle access control
Embodiments are directed to providing a secure address translation service. An embodiment of a system includes a memory device to store memory data in a plurality of physical pages shared by a plurality of devices, a first table to map each page of memory to an associated bundle identifier (ID) that identifies one or more devices having access to that page, a second table to map each bundle ID to page access permissions that define access to one or more pages associated with the bundle ID, and a translation agent to receive requests from the plurality of devices to perform memory operations on the memory and to determine page access permissions for those requests using the first table and the second table.
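The translation agent's check is a two-table lookup: page to bundle ID, then bundle ID to membership and permissions. A sketch under assumed data layouts (the table and parameter names are hypothetical):

```python
def check_access(page, device, page_to_bundle, bundle_members, bundle_perms):
    """Return the permissions for a device's request against a page,
    or None if the page is unmapped or the device is not in the
    page's bundle."""
    bundle = page_to_bundle.get(page)
    if bundle is None:
        return None                      # page not under bundle control
    if device not in bundle_members.get(bundle, ()):
        return None                      # device not granted this bundle
    return bundle_perms.get(bundle)      # e.g. "R", "RW"
```

Grouping pages and devices into bundles keeps the per-page state small: permissions are stored once per bundle rather than once per page-device pair.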
STORAGE DEVICE AND OPERATING METHOD THEREOF
Disclosed is a method of operating a storage device which includes a non-volatile memory device. The method includes informing a host that a designation functionality for designating a data criticality and a priority for namespaces of the non-volatile memory device is supported, enabling the designation functionality in response to receiving an approval of the designation functionality, receiving, from the host, a first request for designating a data criticality and a priority for a first namespace of the namespaces, and generating a namespace mapping table in response to the first request.
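The namespace mapping table itself can be pictured as a per-namespace record of the designated attributes. A trivial sketch with hypothetical field names (the patent does not specify the table layout):

```python
def build_namespace_table(requests):
    """Build a namespace mapping table from (namespace, criticality,
    priority) designation requests received from the host after the
    designation functionality has been enabled."""
    table = {}
    for ns, criticality, priority in requests:
        table[ns] = {"criticality": criticality, "priority": priority}
    return table
```

The storage device could then consult this table when scheduling or placing data, favoring namespaces marked critical or high-priority.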
OPERATING SYSTEM DEACTIVATION OF STORAGE BLOCK WRITE PROTECTION ABSENT QUIESCING OF PROCESSORS
Operating system deactivation of write protection for a storage block is provided absent quiescing of processors in a multi-processor computing environment. The process includes receiving an address translation protection exception interrupt resulting from an attempted write access by a processor to a storage block, and determining by the operating system whether write protection for the storage block is active. Based on write protection for the storage block not being active, the operating system issues an instruction to clear or modify translation lookaside buffer entries of the processor associated with the storage block, absent waiting for an action by another processor of multiple processors of the computing environment, to facilitate write access to the storage block proceeding at the processor.
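The handler logic above can be sketched as follows. This is a toy model of one processor's fault path, with hypothetical structures standing in for the TLB and the protection state; the key point it illustrates is that only the local TLB is purged, with no cross-processor quiesce:

```python
def handle_protection_fault(block, write_protect, local_tlb):
    """On an address-translation protection exception for 'block':
    if write protection is genuinely active, deliver the fault; if it
    is stale, drop only this processor's TLB entries for the block and
    retry the write, without waiting on any other processor."""
    if write_protect.get(block, False):
        return "deliver_protection_fault"
    # Stale protection: purge local TLB entries for the block and retry.
    local_tlb[:] = [e for e in local_tlb if e["block"] != block]
    return "retry_write"
```

The stale-entry case arises because deactivating write protection does not broadcast TLB invalidations; each processor lazily cleans up its own entries when it faults.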
VIRTUALIZED SYSTEM AND METHOD OF PREVENTING MEMORY CRASH OF SAME
A virtualized system is provided. The virtualized system includes: a memory device; a processor configured to provide a virtualization environment; a direct memory access device configured to perform a function of direct memory access to the memory device; and a memory management circuit configured to manage a core access of the processor to the memory device and a direct access of the direct memory access device to the memory device. The processor is further configured to provide: a plurality of guest operating systems that run independently from each other on a plurality of virtual machines of the virtualization environment; and a hypervisor configured to control the plurality of virtual machines in the virtualization environment and control the memory management circuit to block the direct access when a target guest operating system controlling the direct memory access device, among the plurality of guest operating systems, is rebooted.
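The reboot-time blocking can be modeled as a gate on the DMA path that the hypervisor closes while the controlling guest restarts, so stale in-flight transfers cannot corrupt memory. The class and method names are illustrative only:

```python
class MemoryManagementCircuit:
    """Toy model of the DMA-blocking behavior: while the guest OS that
    controls the DMA device is rebooting, direct accesses are dropped."""
    def __init__(self):
        self.dma_blocked = False

    def on_guest_reboot_start(self):
        self.dma_blocked = True    # hypervisor closes the DMA path

    def on_guest_reboot_done(self):
        self.dma_blocked = False   # guest reinitialized; reopen the path

    def dma_write(self, memory, addr, value):
        if self.dma_blocked:
            return False           # access blocked during the reboot
        memory[addr] = value
        return True
```

Without this gate, a DMA device programmed by the old guest instance could write into memory the rebooted guest has already repurposed.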
INCREASING PAGE SHARING ON NON-UNIFORM MEMORY ACCESS (NUMA)-ENABLED HOST SYSTEMS
In one set of embodiments, a hypervisor of a host system can determine that a delta between local and remote memory access latencies for each of a subset of NUMA nodes of the host system is less than a threshold. In response, the hypervisor can enable page sharing across the subset of NUMA nodes, where enabling page sharing comprises associating the subset of NUMA nodes with a single page sharing table, and where the single page sharing table holds entries identifying host physical memory pages of the host system that are shared by virtual machines (VMs) placed on the subset of NUMA nodes.
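The node-selection test above reduces to comparing each node's remote-vs-local latency delta against the threshold. A one-function sketch with hypothetical latency units:

```python
def sharable_nodes(latencies, threshold):
    """Return the NUMA nodes whose (remote - local) memory access
    latency delta is under the threshold; the hypervisor could back
    all of them with a single page sharing table.  'latencies' maps
    node -> (local_ns, remote_ns); the numbers are hypothetical."""
    return [node for node, (local, remote) in latencies.items()
            if remote - local < threshold]
```

When the delta is small, the penalty for a VM on one node reading a shared page homed on another node is negligible, so merging the per-node sharing tables increases deduplication opportunities at little cost.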
LIVE-MIGRATION OF PINNED DIRECT MEMORY ACCESS PAGES TO SUPPORT MEMORY HOT-REMOVE
A system on chip (SoC) coupled to a memory can perform a hot-remove operation in a computer system. In a hot-remove operation, software (e.g., operating system) and hardware (e.g., memory controller and interconnect circuitry) components migrate memory content from one region to another target region in the memory. A peripheral device can have direct memory access (DMA) to a page in the region of memory that is being hot-removed. The interconnect circuitry can migrate the page to the target region while maintaining the peripheral device's direct access to the memory. Interconnect circuitry uses hardware mirroring in response to a write command to a memory address in the region being hot-removed. With hardware mirroring, the data is stored in two locations; the first location is the memory address in the region being moved, and the second location is a memory address in the target region.