Patent classifications
G06F9/45541
Orchestrating allocation of shared resources in a datacenter
A cluster configuration request to form a hyperconverged computing infrastructure (HCI) cluster in a cloud computing environment is processed. Based on the cluster configuration request and any other cluster specifications, a plurality of bare metal computing nodes of the cloud computing environment are configured to operate as an HCI cluster. First, a tenant-specific secure network overlay is formed on a first set of tenant-specific networking hardware resources. Then, the tenant-specific secure network overlay is used by an orchestrator to provision a second set of tenant-specific networking hardware resources. The second set of tenant-specific networking hardware resources is configured to interconnect node-local storage devices into a shared storage pool having a contiguous address space. Top-of-rack switches are configured to form the network overlay on the first set of tenant-specific networking hardware resources, and then to form a layer-2 subnet on the second set of tenant-specific networking hardware resources.
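The two-phase flow in the abstract — form a tenant overlay first, then use it to provision the hardware that pools node-local storage into one contiguous address space — can be sketched as below. Every class, method, and field name here is a hypothetical illustration, not taken from the patent.

```python
# Minimal sketch of the two-phase HCI provisioning flow; all names are
# invented for illustration.

class Orchestrator:
    def __init__(self):
        self.overlays = {}   # tenant -> secure overlay descriptor
        self.pool = []       # shared storage pool extents

    def form_overlay(self, tenant, first_hw_set):
        # Phase 1: tenant-specific secure network overlay on the first
        # set of tenant-specific networking hardware.
        self.overlays[tenant] = {"hardware": list(first_hw_set)}

    def provision_storage(self, tenant, node_local_disks):
        # Phase 2: the overlay is used to provision the second hardware
        # set, which stitches node-local disks into a single contiguous
        # address space (modelled here as back-to-back extents).
        assert tenant in self.overlays, "overlay must be formed first"
        offset = 0
        for node, size in sorted(node_local_disks.items()):
            self.pool.append({"node": node, "start": offset,
                              "end": offset + size})
            offset += size
        return offset  # total size of the contiguous shared pool
```

The key property the sketch preserves is ordering: storage provisioning refuses to run until the tenant's overlay exists, mirroring the "first … then" structure of the claim.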
Automatically Deployed Information Technology (IT) System and Method
Disclosed herein are systems, methods, and apparatuses where a controller can automatically manage a physical infrastructure of a computer system based on a plurality of system rules, a system state for the computer system, and a plurality of templates. Techniques for automatically adding resources such as compute, storage, and/or networking resources to the computer system are described. Also described are techniques for automatically deploying applications and services on such resources. These techniques provide a scalable computer system that can serve as a turnkey scalable private cloud.
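The rules/state/templates triad above suggests a reconciliation-style controller loop. A minimal sketch, assuming invented rule and template shapes (nothing here comes from the patent itself):

```python
# Hedged sketch of a rule-driven controller: rules are matched against the
# current system state, and matching rules instantiate resources from the
# named template. Rule/template formats are assumptions for illustration.

def reconcile(state, rules, templates):
    """Return the deployment actions implied by the rules that match."""
    actions = []
    for rule in rules:
        if rule["condition"](state):
            actions.append({"resource": templates[rule["template"]],
                            "reason": rule["name"]})
    return actions
```

For example, a rule whose condition fires when free storage drops below a threshold would emit an action built from a storage-node template; when no rule matches, the controller does nothing.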
NUMA NODE VIRTUAL MACHINE PROVISIONING SYSTEM
A Non-Uniform Memory Access (NUMA) node virtual machine provisioning system includes a connection system and a physical NUMA node coupled to a NUMA node virtual machine provisioning subsystem that modifies NUMA node information in at least one database to create a first virtual NUMA node that is provided by a first subset of NUMA node resources in the physical NUMA node, modifies connection system information in the at least one database to dedicate a first subset of connection system resources in the connection system to the first virtual NUMA node, and deploys a first virtual machine on the first virtual NUMA node such that the first virtual machine performs operations using the first subset of NUMA node resources that provide the first virtual NUMA node, and using the first subset of connection system resources dedicated to the first virtual NUMA node.
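The provisioning steps in the abstract — carve a virtual NUMA node out of a subset of a physical node's resources, dedicate connection-system resources to it, then deploy a VM on it — can be modelled as database updates. All structures and field names below are assumptions for illustration:

```python
# Illustrative model of virtual NUMA node provisioning; the database
# schema and function names are invented, not from the patent.

def create_virtual_numa_node(db, phys_node, cpus, mem_gb):
    # Modify NUMA node information: reserve a subset of the physical
    # node's resources for the new virtual NUMA node.
    node = db["numa_nodes"][phys_node]
    assert set(cpus) <= set(node["free_cpus"]) and mem_gb <= node["free_mem_gb"]
    node["free_cpus"] = [c for c in node["free_cpus"] if c not in cpus]
    node["free_mem_gb"] -= mem_gb
    vnode = {"cpus": cpus, "mem_gb": mem_gb, "ports": [], "vm": None}
    db["virtual_nodes"].append(vnode)
    return vnode

def dedicate_ports(db, vnode, ports):
    # Modify connection system information: dedicate a subset of
    # connection resources to the virtual NUMA node.
    db["free_ports"] = [p for p in db["free_ports"] if p not in ports]
    vnode["ports"] = ports

def deploy_vm(vnode, vm_name):
    # The VM then operates using exactly the resources of its virtual node.
    vnode["vm"] = vm_name
```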
Systems and methods to update add-on cards firmware and collect hardware information on any servers with any OS installed or bare-metal servers
Systems and methods described herein are directed to upgrading one or more of add-on firmware and disk firmware for a server, which can involve connecting a port of the server to an isolated network, the isolated network dedicated to firmware upgrades for the server; caching onto cache memory of the server an operating system received through the isolated network; booting the operating system on the server from the cache memory; conducting a Network File System (NFS) mount on the server to determine hardware information associated with the upgrading of the one or more of the add-on firmware and the disk firmware; and upgrading the one or more of the add-on firmware and the disk firmware based on the hardware information.
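The described method is essentially an ordered pipeline: isolate, cache the OS, boot, discover hardware, then flash. A hedged sketch of that sequence, with placeholder step names and a stubbed hardware-info shape (none of which come from the method claims):

```python
# Sketch of the claimed upgrade sequence as an ordered pipeline; step
# names and the hardware-info format are placeholder assumptions.

def upgrade_sequence(server, isolated_network, os_image, firmware_catalog):
    log = []
    log.append(("connect", isolated_network))    # server port -> isolated net
    server["cache"] = os_image                   # OS cached over the isolated net
    log.append(("boot_from_cache", os_image))
    hw_info = server["hardware"]                 # stand-in for the NFS-mount probe
    log.append(("nfs_mount", sorted(hw_info)))
    # Only devices whose model appears in the catalog get new firmware.
    applied = {dev: firmware_catalog[model]
               for dev, model in hw_info.items() if model in firmware_catalog}
    log.append(("upgrade", applied))
    return log
```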
Data migration and replication
A system, method, and computer program product for implementing data replication generation is provided. The method includes utilizing hardware and software resources within a hybrid cloud environment. A non-volatile memory host system and an associated target system are enabled for operational functionality and the non-volatile memory host system is connected to an I/O queueing component. In response, a plurality of queue structures is generated with respect to a host driver component and a connection between the non-volatile memory host system and the associated target system is detected. In response, a special purpose cache structure is generated and the plurality of queue structures and the special purpose cache structure are enabled such that remote data mirroring functionality is enabled.
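The replication path above pairs queue structures on the host driver with a special-purpose cache so that each write is also mirrored remotely. A toy model, with invented structures (the abstract does not specify these shapes):

```python
# Toy model of the replication path: per-pair submission/completion queues
# on the host driver, plus a special-purpose cache so every write is also
# mirrored to the remote target. All structures are illustrative assumptions.

class HostDriver:
    def __init__(self, n_queue_pairs):
        # plurality of queue structures (submission + completion per pair)
        self.queues = [{"sq": [], "cq": []} for _ in range(n_queue_pairs)]
        self.cache = {}                     # special-purpose cache structure

    def write(self, qid, lba, data, target):
        self.queues[qid]["sq"].append(("write", lba))
        self.cache[lba] = data              # staged in the local cache
        target[lba] = data                  # remote data mirroring
        self.queues[qid]["cq"].append(("done", lba))
```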
Reboot migration between bare-metal servers
This disclosure describes systems, devices, and methods for performing and facilitating tenant migration between multiple bare-metal servers. An example method includes receiving an indication of an impending reboot of a first bare-metal server. The first bare-metal server may be hosting a tenant. The method further includes identifying a second bare-metal server in a pre-initialized state. The method also includes causing the first bare-metal server to migrate data associated with the tenant to the second bare-metal server in advance of the reboot.
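The example method reduces to three moves: notice the impending reboot, find a spare server already in the pre-initialized state, and migrate the tenant's data before the reboot lands. A minimal sketch, with hypothetical field names:

```python
# Sketch of reboot migration between bare-metal servers; the server record
# format and state names are assumptions for illustration.

def migrate_before_reboot(servers, rebooting_id):
    src = servers[rebooting_id]
    # Identify a second bare-metal server in a pre-initialized state.
    spare_id, spare = next((i, s) for i, s in servers.items()
                           if s["state"] == "pre-initialized")
    spare["tenant"] = src.pop("tenant")     # data moved ahead of the reboot
    spare["state"] = "active"
    src["state"] = "rebooting"
    return spare_id
```

Pre-initializing the spare is what makes the migration cheap: by the time the reboot indication arrives, only the tenant data still needs to move.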
FORWARDING ELEMENT WITH PHYSICAL AND VIRTUAL DATA PLANES
Some embodiments of the invention provide a novel method of performing network slice-based operations on a data message at a hardware forwarding element (HFE) in a network. For a received data message flow, the method has the HFE identify a network slice associated with the received data message flow. This network slice in some embodiments is associated with a set of operations to be performed on the data message by several network elements, including one or more machines executing on one or more computers in the network. Once the network slice is identified, the method has the HFE process the data message flow based on a rule that applies to data messages associated with the identified slice.
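The HFE's per-flow behaviour described above — classify the flow to a network slice, then apply the rule bound to that slice — can be sketched with a simple lookup. The flow key and rule format are invented for illustration:

```python
# Minimal sketch of slice-based processing at a hardware forwarding
# element: map the flow to a slice, then apply that slice's rule.
# The 3-tuple flow key and rule shape are assumptions.

def process_flow(msg, slice_table, slice_rules):
    key = (msg["src"], msg["dst"], msg["dport"])
    slice_id = slice_table.get(key)
    rule = slice_rules.get(slice_id, {"action": "forward-default"})
    return rule["action"]
```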
SIGNAL PROCESSING DEVICE AND VEHICLE DISPLAY APPARATUS INCLUDING THE SAME
The present disclosure relates to a signal processing device and a vehicle display apparatus including the same. The vehicle display apparatus according to an embodiment of the present disclosure includes: a display located in a vehicle; and a signal processing device configured to output an image signal to the display, wherein while outputting a first overlay based on a first operating system, the signal processing device is configured to display a second overlay based on a second operating system so as to overlap at least a partial area of the first overlay, thereby seamlessly displaying the overlays based on heterogeneous operating systems.
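The claimed display behaviour — a second overlay painted over a partial area of the first — is essentially compositing. A toy compositor over 2-D character grids, purely illustrative (the patent describes no such API):

```python
# Toy compositor: the second overlay (from a second OS) is painted over a
# partial area of the first overlay. Overlays are modelled as 2-D grids;
# the function signature is an assumption.

def composite(first, second, row0, col0):
    out = [row[:] for row in first]          # copy the first overlay
    for r, row in enumerate(second):
        for c, px in enumerate(row):
            out[row0 + r][col0 + c] = px     # second overlay drawn on top
    return out
```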
Virtual access hub
A multi-tenant application provides high speed data services to one or more subscriber devices. The multi-tenant application comprises one or more first servers that each perform packet switching and routing and one or more second servers that each perform FCAPS functions for the one or more subscriber devices. FCAPS functions comprise fault operations, configuration operations, accounting operations, performance operations, and security operations. Each of the one or more first and second servers are implemented entirely within an application-specific logical host composed of one or more application containers. The one or more second servers may optionally each further perform network functions and user plane functions for the one or more subscriber devices. The one or more second servers may optionally each further perform OLT control functions and OLT MAC/PHY functions for the one or more subscriber devices.
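The role split in the abstract — first servers for packet switching and routing, second servers for FCAPS — amounts to a dispatch table over operation types. A minimal sketch under that assumption (the server names and dispatch API are invented):

```python
# Sketch of the first/second server role split in the virtual access hub;
# the operation names and dispatch function are illustrative assumptions.

FCAPS = {"fault", "configuration", "accounting", "performance", "security"}

def dispatch(operation, first_servers, second_servers):
    if operation in ("switch", "route"):
        return first_servers[0]      # first servers: switching and routing
    if operation in FCAPS:
        return second_servers[0]     # second servers: FCAPS functions
    raise ValueError(f"unknown operation: {operation}")
```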