G06F2009/45579

ENVOY FOR MULTI-TENANT COMPUTE INFRASTRUCTURE

A data management and storage (DMS) cluster of peer DMS nodes manages data of a tenant of a multi-tenant compute infrastructure. The compute infrastructure includes an envoy connecting the DMS cluster to virtual machines of the tenant executing on the compute infrastructure. The envoy provides the DMS cluster with access to the tenant's virtual network and the virtual machines connected via that network for DMS services such as data fetch jobs that generate snapshots of the virtual machines. The envoy sends a snapshot from a virtual machine to a peer DMS node over this connection for storage within the DMS cluster. The envoy thereby provides the DMS cluster with secure access to authorized tenants of the compute infrastructure while maintaining data isolation between tenants within the compute infrastructure.

ISOLATING OPERATING SYSTEM ENVIRONMENTS IN EMBEDDED DEVICES

A unique embedded system is disclosed that locally operates an application virtual machine (VM) and a system VM in isolation from each other. The application VM executes application-specific code for a given purpose of the embedded system. The system VM executes a host operating system (OS) and various security, compatibility, and updating functions independent of the application VM. Each VM is connected to its own unique hardware on the embedded system to ensure that changes to the application code or the system code do not impact the other.

VIRTUAL CONTROLLER ARCHITECTURE AND SYSTEMS AND METHODS IMPLEMENTING SAME
20230052049 · 2023-02-16

In an approach to virtualizing communication channels between one or more hardware components and a controller, a system includes: a first controller implemented in a reconfigurable hardware device; and a virtual platform stratus (VPS) having a plurality of input/output (I/O) ports for electrically coupling with the one or more hardware components and receiving one or more electrical signals therefrom, where the VPS is configured to generate one or more data frames from the one or more electrical signals; and where the VPS is configured to send the data frames to the first controller and/or provide electrical signaling to the one or more hardware components based on data frames received from the first controller.

Virtualized file server smart data ingestion

In one embodiment, a system for managing a virtualization environment includes a set of host machines, each of which includes a hypervisor, virtual machines, and a virtual machine controller, and a data migration system configured to identify one or more existing storage items stored at one or more existing File Server Virtual Machines (FSVMs) of an existing virtualized file server (VFS). For each of the existing storage items, the data migration system is configured to identify a new FSVM of a new VFS based on the existing FSVM, send a representation of the storage item from the existing FSVM to the new FSVM, such that representations of storage items are sent between different pairs of FSVMs in parallel, and store a new storage item at the new FSVM, such that the new storage item is based on the representation of the existing storage item received by the new FSVM.
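The parallel pairwise transfer described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the FSVM names, the round-robin placement rule, and the use of a thread pool are all assumptions made for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_storage_items(items, new_fsvms):
    """items: list of (existing_fsvm_name, storage_item) pairs.
    Returns {new_fsvm_name: [item_representation, ...]} after sending
    representations between FSVM pairs in parallel."""
    def transfer(pair):
        existing_fsvm, item = pair
        # identify a new FSVM based on the existing FSVM
        # (here: a simple deterministic round-robin over its name)
        target = new_fsvms[sum(map(ord, existing_fsvm)) % len(new_fsvms)]
        # "send a representation" of the item -- modeled here as a copy
        return target, dict(item)

    placed = {vm: [] for vm in new_fsvms}
    # each pair transfer runs as an independent task, so different
    # FSVM pairs move their representations concurrently
    with ThreadPoolExecutor(max_workers=4) as pool:
        for target, representation in pool.map(transfer, items):
            placed[target].append(representation)
    return placed
```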

Providing enhanced security for object access in object-based datastores
11580078 · 2023-02-14

A method of enhancing security in object-based datastores is provided. The method mounts first and second datastores identified, respectively, by first and second datastore identifiers. The first and second datastores include, respectively, first and second namespace objects that are mapped to first and second subfolders in the first and second datastores. A first file within the first subfolder references a first object via a first object identifier, while a second file within the second subfolder references a second object via a second object identifier. The first and second objects are tagged with the first and second datastores' identifiers. The first and second datastores share an underlying storage and may be configured to have separate access permissions. The method receives a command to access the first object via a datastore identifier, compares the datastore identifier with the first datastore identifier, and if they match, allows access to the first object.
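The tag-and-compare access check at the core of this method can be sketched as below. The `ObjectStore` class and its method names are illustrative assumptions; the sketch shows only the mechanism of tagging each object with its owning datastore's identifier and comparing identifiers on access.

```python
class ObjectStore:
    """Shared underlying storage for objects from multiple datastores."""

    def __init__(self):
        self.objects = {}  # object_id -> (datastore_id_tag, payload)

    def put(self, object_id, datastore_id, payload):
        # tag the object with the identifier of the datastore that owns it
        self.objects[object_id] = (datastore_id, payload)

    def access(self, object_id, datastore_id):
        # compare the identifier in the access command with the
        # identifier the object was tagged with; allow only on a match
        tag, payload = self.objects[object_id]
        if tag != datastore_id:
            raise PermissionError("datastore identifier mismatch")
        return payload
```

Because both datastores share the same underlying store, the identifier comparison is what keeps one datastore's files from dereferencing the other's objects.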

VGPU scheduling policy-aware migration
11579942 · 2023-02-14

Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current vGPU profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected based on a vGPU scheduling policy of the destination GPU being identified as a best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU.
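The selection logic can be sketched as a two-stage filter: first keep only profile-compatible GPUs, then walk the matching order derived from the current policy. The policy names and the particular matching orders below are illustrative assumptions, not the orders defined in the patent.

```python
# hypothetical matching orders: current policy -> destination
# policies ranked best-first
MATCHING_ORDER = {
    "fixed-share": ["fixed-share", "equal-share", "best-effort"],
    "equal-share": ["equal-share", "best-effort", "fixed-share"],
    "best-effort": ["best-effort", "equal-share", "fixed-share"],
}

def select_destination_gpu(current_policy, current_profile, gpus):
    """gpus: list of dicts with 'policy' (str) and 'profiles' (set).
    Returns the GPU with the best available scheduling policy per the
    matching order, among GPUs compatible with the current profile."""
    compatible = [g for g in gpus if current_profile in g["profiles"]]
    for policy in MATCHING_ORDER[current_policy]:
        for gpu in compatible:
            if gpu["policy"] == policy:
                return gpu
    return None  # no compatible destination
```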

Dynamic allocation of compute resources at a recovery site

Examples of systems are described herein which may dynamically allocate compute resources to recovery clusters. Accordingly, a recovery site may utilize fewer compute resources in maintaining recovery clusters for multiple associated clusters, while ensuring that, during use, compute resources are allocated to a particular cluster. This may reduce and/or avoid vulnerabilities arising from the use of shared resources in a virtualized and/or cloud environment.

Policy enforcement and performance monitoring at sub-LUN granularity
11579910 · 2023-02-14

Techniques are provided for enforcing policies at a sub-logical unit number (LUN) granularity, such as at a virtual disk or virtual machine granularity. A block range of a virtual disk of a virtual machine stored within a LUN is identified. A quality of service policy object is assigned to the block range to create a quality of service workload object. A target block range targeted by an operation is identified. A quality of service policy of the quality of service policy object is enforced upon the operation using the quality of service workload object based upon the target block range being within the block range of the virtual disk.
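The range-to-policy mapping can be sketched as follows. The class names, the IOPS-style limit, and the pass-through behavior for blocks outside any range are assumptions for illustration; the point is that the policy object attached to a virtual disk's block range, not the whole LUN, decides whether an operation is admitted.

```python
class QosWorkload:
    """A QoS policy object assigned to a block range of a virtual disk."""

    def __init__(self, start, end, iops_limit):
        self.start, self.end = start, end  # block range within the LUN
        self.iops_limit = iops_limit       # policy: max ops per interval
        self.used = 0                      # ops admitted this interval

    def covers(self, block):
        return self.start <= block < self.end

    def admit(self):
        """Admit one operation if the policy's limit allows it."""
        if self.used >= self.iops_limit:
            return False
        self.used += 1
        return True

def enforce(workloads, op_block):
    """Identify the workload whose block range covers the operation's
    target block and enforce its policy; operations targeting blocks
    outside every range pass through unthrottled."""
    for w in workloads:
        if w.covers(op_block):
            return w.admit()
    return True
```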

Policy driven latency control applied to a vehicular real time network apparatus
11580060 · 2023-02-14

A system includes a real-time partitioning separation kernel installed on a multi-core processor. Guest operating systems are hosted within hardware virtualized machines in the cores. Another hardware virtualized machine implements a real-time USB-CAN interface communicatively coupled to distributed electronic control units which acquire data and command actuators. A plurality of hardware virtualized machines support processes of various criticality. A secure shared memory serves as the communication means between processes performing different levels of functionality at suitable latency ranges. A policy distinguishes, allocates, and distributes clock, memory, and input/output resources to meet focused latency ranges for the Observation, Decision, and Execution processes. Remaining resources with diffuse latency ranges are made available to the Observation, Decision, and Execution processes on an as-available basis within guarded minimum and maximum bounds. A latency policy ensures that each process receives its minimum tranche before queueing for up to the maximum at the resource buffet.
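The minimum-tranche-then-queue-to-maximum policy can be sketched numerically. The function below is an assumption-laden illustration: it guarantees every process its minimum share of a resource budget first, then distributes the remainder in order up to each process's maximum.

```python
def allocate(resource_total, processes):
    """processes: list of dicts with 'min' and 'max' resource demands.
    Each process first receives its guaranteed minimum tranche; the
    remainder is then handed out in order, capped at each maximum."""
    allocation = [p["min"] for p in processes]
    remaining = resource_total - sum(allocation)
    assert remaining >= 0, "guaranteed minimums exceed the budget"
    # processes queue for extra resources up to their maximum
    for i, p in enumerate(processes):
        extra = min(p["max"] - p["min"], remaining)
        allocation[i] += extra
        remaining -= extra
    return allocation
```

With a budget of 10 units and three processes demanding (min 2, max 5), (min 1, max 4), and (min 1, max 10), every process gets its minimum before anyone is topped up toward its maximum.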

COMPUTER-READABLE RECORDING MEDIUM STORING APPLICATION CONTROL PROGRAM AND APPLICATION CONTROL METHOD
20230043057 · 2023-02-09

A recording medium stores an application control program for causing a computer to execute processing including: when a specific application among a plurality of applications is executed in response to a processing request from a specific processing request source, referring to a storage unit that stores flow information indicating a past execution order of the applications for each processing request source, and calculating, for each of one or more applications likely to be executed after the specific application, an execution probability that the application is executed after the specific application executed in response to the processing request from the specific processing request source; specifying an application to be activated from the one or more applications based on the calculated execution probability; and activating the specified application.
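The probability calculation over the stored flow information can be sketched as follows. The log format, function names, and the activation threshold are assumptions for the sketch; the mechanism is counting, per request source, which applications historically followed the one just executed.

```python
from collections import Counter

def execution_probabilities(flow_log, source, app):
    """flow_log: list of (request_source, [apps in execution order]).
    Returns {next_app: probability that it follows `app`} for entries
    from the given processing request source."""
    followers = Counter()
    for src, order in flow_log:
        if src != source:
            continue  # flow information is kept per request source
        for a, b in zip(order, order[1:]):
            if a == app:
                followers[b] += 1
    total = sum(followers.values())
    return {b: n / total for b, n in followers.items()} if total else {}

def app_to_activate(flow_log, source, app, threshold=0.5):
    """Specify the application to pre-activate: the most probable
    successor, if its probability clears the threshold."""
    probs = execution_probabilities(flow_log, source, app)
    best = max(probs, key=probs.get, default=None)
    return best if best is not None and probs[best] >= threshold else None
```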