Patent classifications
G06F9/45558
Policy enforcement and performance monitoring at sub-LUN granularity
Techniques are provided for enforcing policies at a sub-logical unit number (LUN) granularity, such as at a virtual disk or virtual machine granularity. A block range of a virtual disk of a virtual machine stored within a LUN is identified. A quality of service policy object is assigned to the block range to create a quality of service workload object. A target block range targeted by an operation is identified. A quality of service policy of the quality of service policy object is enforced upon the operation using the quality of service workload object based upon the target block range being within the block range of the virtual disk.
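As an illustration only (not the patented implementation), a minimal Python sketch of this range-based enforcement, with hypothetical QosPolicy and QosWorkload objects:

```python
# Illustrative sketch only: a QoS policy is bound to the block range a virtual disk
# occupies within a LUN, and operations are checked against that range before the
# policy is enforced.
from dataclasses import dataclass

@dataclass
class QosPolicy:                      # hypothetical policy object
    max_iops: int

@dataclass
class QosWorkload:                    # policy bound to a block range = workload object
    policy: QosPolicy
    start_block: int
    end_block: int                    # inclusive end of the virtual disk's block range

    def covers(self, op_start: int, op_end: int) -> bool:
        """True if the operation's target block range lies within the virtual disk's range."""
        return op_start >= self.start_block and op_end <= self.end_block

def admit(workload: QosWorkload, op_start: int, op_end: int, current_iops: int) -> bool:
    """Enforce the policy only when the operation targets the workload's block range."""
    if not workload.covers(op_start, op_end):
        return True                   # outside this virtual disk; policy does not apply
    return current_iops < workload.policy.max_iops

# Example: a virtual disk stored in blocks 1000-4999 of a LUN, capped at 500 IOPS.
vdisk = QosWorkload(QosPolicy(max_iops=500), start_block=1000, end_block=4999)
print(admit(vdisk, 1200, 1210, current_iops=480))  # True: admitted
print(admit(vdisk, 1200, 1210, current_iops=500))  # False: throttled
```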
Implementing deferred guest calls in a host-based virtual machine introspection system
Example methods are provided for virtual machine introspection in which a guest monitoring mode (GMM) module monitors the execution of guest calls by an agent that resides in a virtual machine (VM). The GMM module sets a bit in a bit mask that corresponds to a guest call that the agent needs to execute, and inserts an invisible breakpoint in the code of the guest call. If, despite the bit being set in the bit mask, the agent does not complete execution of the code (the invisible breakpoint is never triggered), the GMM module treats this condition as a potential hijack of the VM by malicious code.
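A simplified, hypothetical Python sketch of the bit-mask bookkeeping described above; GuestMonitor and its fields are illustrative names, and the breakpoint mechanics themselves are omitted:

```python
# Illustrative sketch only: a guest-monitoring-mode (GMM) module flags a potential VM
# hijack when a requested guest call's code never executes (its invisible breakpoint
# never fires) even though the corresponding bit was set in the bit mask.
class GuestMonitor:
    def __init__(self):
        self.pending_mask = 0              # bit i set => agent must execute guest call i

    def request_guest_call(self, call_id: int) -> None:
        # Set the bit and (conceptually) plant an invisible breakpoint in the call's code.
        self.pending_mask |= 1 << call_id

    def on_breakpoint(self, call_id: int) -> None:
        # Breakpoint triggered => the agent executed the guest call; clear the bit.
        self.pending_mask &= ~(1 << call_id)

    def check_for_hijack(self) -> list:
        """Guest calls that were requested but whose breakpoint never fired."""
        return [i for i in range(self.pending_mask.bit_length()) if self.pending_mask >> i & 1]

gmm = GuestMonitor()
gmm.request_guest_call(3)                  # GMM asks the in-guest agent to run call 3
# ... deadline passes without on_breakpoint(3) ever being invoked ...
print(gmm.check_for_hijack())              # [3] -> possible hijack by malicious code
```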
Systems and methods for virtual machine resource optimization using machine learning techniques
Systems described herein may allow for the intelligent configuration of containers onto virtualized resources. These systems may generate configurations, based on received parameters, for use in configuring (e.g., installing, instantiating, etc.) virtualized resources. Once generated, a configuration may be selected according to determined selection parameters and/or intelligent selection techniques.
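A heavily simplified, hypothetical sketch of the generate-then-select pattern the abstract describes; the "smallest fit" rule merely stands in for the selection parameters and intelligent selection techniques:

```python
# Hypothetical sketch: generate candidate container configurations from received
# parameters, then select one that satisfies the workload.
from itertools import product

def generate_configs(cpu_options, mem_options):
    """Generate candidate container configurations from received parameters."""
    return [{"cpus": c, "mem_gb": m} for c, m in product(cpu_options, mem_options)]

def select_config(configs, required_cpus, required_mem_gb):
    """Select a configuration that satisfies the workload, preferring the smallest fit."""
    feasible = [c for c in configs
                if c["cpus"] >= required_cpus and c["mem_gb"] >= required_mem_gb]
    return min(feasible, key=lambda c: (c["cpus"], c["mem_gb"])) if feasible else None

configs = generate_configs(cpu_options=[1, 2, 4], mem_options=[2, 4, 8])
print(select_config(configs, required_cpus=2, required_mem_gb=3))  # {'cpus': 2, 'mem_gb': 4}
```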
Computing node identifier-based request allocation
Computing node identifiers can be used to encode information regarding the distance between requesting and available computing nodes. Computing node identifiers can be computed based on proximity values for respective computing nodes. Requests can be directed from one computing node to an available computing node based on information encoded by the computing node identifiers of both the requesting node and the receiving node. Using these computing node identifiers to direct request traffic among VMs can leverage network resources more efficiently.
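A minimal sketch of identifier-based routing, assuming a simple (rack, host) packing that is not part of the patent:

```python
# Illustrative sketch only: node identifiers that encode proximity, used to direct a
# request to the nearest available node. The (rack_id, host_id) packing is an
# assumption made for this example, not the encoding claimed in the patent.

def node_id(rack_id: int, host_id: int) -> int:
    """Pack proximity information into one identifier: high bits = rack, low bits = host."""
    return (rack_id << 8) | host_id

def distance(id_a: int, id_b: int) -> int:
    """Smaller is closer: same host < same rack < different rack."""
    if id_a == id_b:
        return 0
    return 1 if (id_a >> 8) == (id_b >> 8) else 2

def route_request(requester_id: int, available_ids: list) -> int:
    """Direct the request to the available node whose identifier indicates the shortest distance."""
    return min(available_ids, key=lambda n: distance(requester_id, n))

requesting_vm = node_id(rack_id=1, host_id=4)
candidates = [node_id(1, 9), node_id(2, 4), node_id(3, 1)]
print(hex(route_request(requesting_vm, candidates)))  # 0x109: the same-rack node wins
```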
Integrity-preserving cold migration of virtual machines
A method includes identifying a source virtual machine to be migrated from a source domain to a target domain, extracting file-in-use metadata and shared asset metadata from virtual machine metadata of the source virtual machine, and copying one or more files identified in the file-in-use metadata to a target virtual machine in the target domain. For each of one or more shared assets identified in the shared asset metadata, the method further includes (a) determining whether or not the shared asset already exists in the target domain, (b) responsive to the shared asset already existing in the target domain, updating virtual machine metadata of the target virtual machine to specify the shared asset, and (c) responsive to the shared asset not already existing in the target domain, copying the shared asset to the target domain and updating virtual machine metadata of the target virtual machine to specify the shared asset.
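Steps (a)-(c) can be illustrated with a short, hypothetical Python sketch; the helper names and dictionary layout are assumptions, and the real method operates on virtual machine metadata objects:

```python
# Hypothetical sketch of the described migration: copy files in use, then reuse or
# copy shared assets, recording both in the target VM's metadata.

def migrate_vm(source_meta, target_domain_assets, copy_file, copy_asset):
    target_meta = {"files": [], "shared_assets": []}

    # Copy every file identified in the file-in-use metadata to the target VM.
    for path in source_meta["files_in_use"]:
        copy_file(path)
        target_meta["files"].append(path)

    # For each shared asset: reuse it if it already exists in the target domain,
    # otherwise copy it over; in both cases record it in the target VM's metadata.
    for asset in source_meta["shared_assets"]:
        if asset not in target_domain_assets:
            copy_asset(asset)
            target_domain_assets.add(asset)
        target_meta["shared_assets"].append(asset)

    return target_meta

source = {"files_in_use": ["disk0.vmdk"], "shared_assets": ["iso/tools.iso", "net/dvswitch-1"]}
existing = {"iso/tools.iso"}                 # already present in the target domain
print(migrate_vm(source, existing, copy_file=print, copy_asset=print))
```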
Policy driven latency control applied to a vehicular real time network apparatus
A system includes a real-time partitioning separation kernel installed on a multi-core processor. Guest operating systems are hosted within hardware virtualized machines in the cores. Another hardware virtualized machine implements a real-time USB-CAN interface communicatively coupled to distributed electronic control units that acquire data and command actuators. A plurality of hardware virtualized machines support processes of various criticality. A secure shared memory serves as the communication means between processes performing different levels of functionality at suitable latency ranges. A policy distinguishes, allocates, and distributes clock, memory, and input/output resources to meet focused latency ranges for the Observation, Decision, and Execution processes. Remaining resources, which have diffuse latency ranges, are made available to the Observation, Decision, and Execution processes on an as-available basis at a guarded minimum-and-maximum buffet. A latency policy ensures that each process receives its minimum tranche before queueing for up to the maximum at the resource buffet.
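A simplified sketch of the two-pass allocation the latency policy implies (minimum tranche first, then up to the maximum from remaining capacity); process names and quantities are illustrative:

```python
# Illustrative sketch only: grant each process its guaranteed minimum first, then
# serve remaining capacity, in order, up to each process's maximum.

def allocate(capacity: int, requests: list) -> dict:
    grants = {}
    # Pass 1: every process receives its minimum tranche.
    for r in requests:
        grants[r["name"]] = min(r["minimum"], capacity)
        capacity -= grants[r["name"]]
    # Pass 2: remaining capacity is served up to each maximum, as available.
    for r in requests:
        extra = max(min(r["maximum"] - grants[r["name"]], capacity), 0)
        grants[r["name"]] += extra
        capacity -= extra
    return grants

procs = [
    {"name": "Observation", "minimum": 20, "maximum": 40},
    {"name": "Decision",    "minimum": 30, "maximum": 60},
    {"name": "Execution",   "minimum": 25, "maximum": 50},
]
print(allocate(capacity=100, requests=procs))
# {'Observation': 40, 'Decision': 35, 'Execution': 25}
```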
Platform independent GPU profiles for more efficient utilization of GPU resources
Disclosed are various examples for platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A virtual machine configuration can be identified to include a platform independent graphics computing requirement. Hosts can be identified as available in a computing environment based on the platform independent graphics computing requirement. The virtual machine can be placed on a host based on a consideration of host priority.
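An illustrative sketch of requirement-based host filtering followed by priority-based placement; the field names are assumptions for this example, not the patent's data model:

```python
# Hypothetical sketch: filter hosts that satisfy a platform independent graphics
# requirement, then place the VM on the highest-priority match.

def place_vm(requirement: dict, hosts: list):
    candidates = [
        h for h in hosts
        if h["gpu_memory_gb"] >= requirement["gpu_memory_gb"]
        and h["gpu_units"] >= requirement["gpu_units"]
    ]
    if not candidates:
        return None
    # Lower priority value = preferred host.
    return min(candidates, key=lambda h: h["priority"])

req = {"gpu_memory_gb": 4, "gpu_units": 1}
hosts = [
    {"name": "host-a", "gpu_memory_gb": 8,  "gpu_units": 2, "priority": 2},
    {"name": "host-b", "gpu_memory_gb": 16, "gpu_units": 4, "priority": 1},
    {"name": "host-c", "gpu_memory_gb": 2,  "gpu_units": 1, "priority": 0},
]
print(place_vm(req, hosts)["name"])  # host-b
```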
METHOD AND APPARATUS FOR DYNAMICALLY ADJUSTING PIPELINE DEPTH TO IMPROVE EXECUTION LATENCY
Apparatus and method for managing pipeline depth of a data processing device. For example, one embodiment of an apparatus comprises: an interface to receive a plurality of work requests from a plurality of clients; and a plurality of engines to perform the plurality of work requests; wherein the work requests are to be dispatched to the plurality of engines from a plurality of work queues, the work queues to store a work descriptor per work request, each work descriptor to include information needed to perform a corresponding work request, wherein the plurality of work queues include a first work queue to store work descriptors associated with first latency characteristics and a second work queue to store work descriptors associated with second latency characteristics; and engine configuration circuitry to configure a first engine to have a first pipeline depth based on the first latency characteristics and to configure a second engine to have a second pipeline depth based on the second latency characteristics.
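A simplified software model of the dispatch scheme (not the hardware apparatus itself): two work queues with different latency characteristics feed engines whose pipeline depths were configured to match; depth values are illustrative:

```python
# Illustrative sketch only: shallow pipeline depth for latency-sensitive descriptors,
# deep pipeline depth for bulk descriptors.
from collections import deque

class Engine:
    def __init__(self, name: str, pipeline_depth: int):
        self.name = name
        self.pipeline_depth = pipeline_depth   # shallow depth => lower latency, less throughput
        self.in_flight = deque()

    def can_accept(self) -> bool:
        return len(self.in_flight) < self.pipeline_depth

    def dispatch(self, descriptor: dict) -> None:
        self.in_flight.append(descriptor)

# Two work queues: one for latency-sensitive descriptors, one for bulk descriptors.
low_latency_queue = deque([{"op": "decrypt", "latency_class": "low"}])
bulk_queue = deque({"op": "compress", "latency_class": "bulk"} for _ in range(4))

# Engine configuration: pipeline depth chosen from each queue's latency characteristics.
engines = {"low": Engine("eng0", pipeline_depth=2), "bulk": Engine("eng1", pipeline_depth=8)}

for queue, cls in ((low_latency_queue, "low"), (bulk_queue, "bulk")):
    while queue and engines[cls].can_accept():
        engines[cls].dispatch(queue.popleft())

print({e.name: len(e.in_flight) for e in engines.values()})  # {'eng0': 1, 'eng1': 4}
```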
SECURE MEMORY ISOLATION FOR SECURE ENDPOINTS
A single input/output (I/O) controller for both secure partitionable endpoints (PEs) and non-secure PEs is enabled in a trusted execution environment (TEE) where secure memory portions are isolated from non-secure PEs. Security attributes for certain endpoints indicate secure memory access privilege of owning entities of the certain endpoints. A security monitor has exclusive access to the address translation control tables (TCE) stored in secure memory associated with a secure endpoint. When owning entity reassignment occurs, the endpoints are reinitialized to support a change in ownership from an outgoing owning entity having secure memory access to an incoming owning entity not having secure memory access.
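A minimal, hypothetical sketch of the ownership-change handling; the real mechanism is enforced by a security monitor in hardware/firmware, and the class and field names here are purely illustrative:

```python
# Illustrative sketch only: on reassignment, the endpoint is reinitialized so state
# belonging to the outgoing owner cannot be reused by the incoming owner.

class Endpoint:
    def __init__(self, name: str, owner: str, owner_has_secure_access: bool):
        self.name = name
        self.owner = owner
        self.secure = owner_has_secure_access   # security attribute of the owning entity
        self.tce_initialized = True

    def reassign(self, new_owner: str, new_owner_has_secure_access: bool) -> None:
        # Tear down translation control tables set up for the outgoing owner,
        # then rebuild them for the incoming owner (done by the security monitor).
        self.tce_initialized = False
        self.owner = new_owner
        self.secure = new_owner_has_secure_access
        self.tce_initialized = True

ep = Endpoint("nvme0", owner="secure-vm-1", owner_has_secure_access=True)
ep.reassign("legacy-vm-7", new_owner_has_secure_access=False)
print(ep.owner, ep.secure)  # legacy-vm-7 False
```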
SECURE GUEST IMAGE AND METADATA UPDATE
A secure guest generates an updated image for the secure guest and computes one or more measurements for the updated image. The secure guest provides the one or more measurements to a trusted execution environment and obtains from the trusted execution environment metadata for the updated image. The metadata is generated based on metadata of the secure guest and the one or more measurements.
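A hypothetical sketch of the update flow, using SHA-256 digests to stand in for the measurements; the actual measurement and metadata formats are defined by the secure-guest architecture and are not shown here:

```python
# Illustrative sketch only: the guest measures its updated image, hands the
# measurements to the TEE, and receives metadata derived from its existing metadata
# plus those measurements.
import hashlib, json

def measure(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def tee_generate_metadata(current_metadata: dict, measurements: list) -> dict:
    # The TEE derives metadata for the updated image from the guest's existing
    # metadata and the supplied measurements.
    updated = dict(current_metadata)
    updated["image_measurements"] = measurements
    updated["generation"] = current_metadata.get("generation", 0) + 1
    return updated

# Secure guest side: build the updated image and measure it.
updated_image = b"...new secure guest image bytes..."
measurements = [measure(updated_image)]

# Guest hands the measurements to the TEE and receives metadata for the new image.
guest_metadata = {"guest_id": "sg-42", "generation": 3}
print(json.dumps(tee_generate_metadata(guest_metadata, measurements), indent=2))
```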