SYSTEMS AND METHODS FOR DATACENTER CAPACITY PLANNING
20220078086 · 2022-03-10
Assignee
Inventors
CPC classification
G06F30/18
PHYSICS
H04L41/34
ELECTRICITY
International classification
Abstract
Systems and methods for datacenter capacity planning are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive user input; and suggest a location for placement of a device in a selected rack of a datacenter based on the user input, where the suggested location takes into account at least one of: (a) device clustering, or (b) network port availability.
Claims
1. An Information Handling System (IHS), comprising: a processor; and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive user input; and suggest a location for placement of a plurality of devices in a selected rack of a datacenter based on the user input, wherein the plurality of devices comprise a cluster in which the plurality of devices comprise a group of chassis that function together and are identified as part of the cluster, and wherein the suggested location takes into account a device clustering of the plurality of devices.
2. The IHS of claim 1, wherein the user input comprises: datacenter information and a device list.
3. The IHS of claim 2, wherein a state of the datacenter information comprises: new or pre-existing information.
4. The IHS of claim 2, wherein the datacenter information comprises: rack identification, power capacity of each rack, and network port availability of each rack.
5. The IHS of claim 2, wherein the device list comprises at least one of: a device type, a device model, a number of devices, and a service tag.
6. The IHS of claim 5, wherein the device list includes the device type comprising at least one of a monolithic server, a modular server, a network device, a storage enclosure, and a cluster.
7. The IHS of claim 1, wherein the user input further comprises at least one of a greedy approach that considers device placement according to placement on a first selected rack before placement on a second selected rack, and a round-robin approach that suggests the location for the device based on maximum space availability to accommodate the device.
8. The IHS of claim 1, wherein the program instructions upon execution, further cause the IHS to: retrieve a device specification file based upon a service tag obtained from the user input, wherein the device specification file comprises a physical size and a power specification of the device associated with the service tag.
9. The IHS of claim 1, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to: sort a list of devices by weight or physical size, with the heaviest or largest device being at the bottom of the list, and the lightest or smallest device being at the top of the list; and select the rack based upon a comparison between the list of the devices and a slot availability of the rack.
10. The IHS of claim 9, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to sum the physical size or weight of two or more of the devices identified as part of the cluster and suggest the location for the cluster in a single rack.
11. The IHS of claim 9, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to: receive the network port availability from a Top-of-Rack (ToR) switch associated with the selected rack via a command-line interface (CLI) command; and verify that network port requirements of the device match the network port availability.
12. A memory storage device having program instructions stored thereon that, upon execution by a processor of an Information Handling System (IHS), cause the IHS to: receive user input; and suggest a location for placement of a plurality of devices in a selected rack of a datacenter based on the user input, wherein the plurality of devices comprise a cluster in which the plurality of devices comprise a group of chassis that function together and are identified as part of the cluster, and wherein the suggested location takes into account device clustering of the plurality of devices.
13. The memory storage device of claim 12, wherein the user input comprises: datacenter information and a device list, wherein the datacenter information comprises: rack identification, power capacity of each rack, and network port availability of each rack, and wherein the device list comprises at least one of: a device type, a device model, a number of devices, and a service tag.
14. The memory storage device of claim 12, further comprising retrieving a device specification file based upon a service tag obtained from the user input, wherein the device specification file comprises a physical size and a power specification of the device associated with the service tag.
15. The memory storage device of claim 12, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to: sort a list of devices by weight or physical size, with the heaviest or largest device being at the bottom of the list, and the lightest or smallest device being at the top of the list; and select the rack based upon a comparison between the list of the devices and a slot availability of the rack.
16. The memory storage device of claim 15, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to sum the physical size or weight of two or more of the devices identified as part of the cluster and suggest the location for the cluster in a single rack.
17. The memory storage device of claim 16, wherein to suggest the location, the program instructions, upon execution, further cause the IHS to: receive the network port availability from a Top-of-Rack (ToR) switch associated with the selected rack via a command-line interface (CLI) command; and verify that network port requirements of the device match the network port availability.
18. A method, comprising: receiving user input at an Information Handling System (IHS), wherein the user input comprises: rack identification, power capacity of each rack, a device type, a device model, a number of devices, and a service tag; retrieving, by the IHS, a device specification file based upon the service tag, wherein the device specification file comprises a physical size and a power specification of the device associated with the service tag; sorting, by the IHS, a list of devices by weight or physical size, with the heaviest or largest device at the bottom of the list, and the lightest or smallest device at the top of the list; selecting, by the IHS, the rack based upon a comparison between the list of the devices and a slot availability of the rack; and suggesting, by the IHS, a location for placement of a plurality of devices in a selected rack of a datacenter based on the user input and the device specification file, wherein the plurality of devices comprise a cluster in which the plurality of devices comprise a group of chassis that function together and are identified as part of the cluster, and wherein the suggested location takes into account device clustering of the plurality of devices.
19. The method of claim 18, wherein suggesting the location further comprises: receiving the network port availability from a Top-of-Rack (ToR) switch associated with the selected rack via a command-line interface (CLI) command; and verifying that network port requirements of the device match the network port availability.
20. The method of claim 18, wherein suggesting the location further comprises summing the physical size or weight of two or more of the devices identified as part of the cluster and suggesting the location for the cluster in a single rack.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The present invention(s) is/are illustrated by way of example and is/are not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity, and have not necessarily been drawn to scale.
DETAILED DESCRIPTION
[0018] For purposes of this disclosure, an IHS may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an IHS may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., Personal Digital Assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. An IHS may include Random Access Memory (RAM), one or more processing resources, such as a Central Processing Unit (CPU) or hardware or software control logic, Read-Only Memory (ROM), and/or other types of nonvolatile memory.
[0019] Additional components of an IHS may include one or more disk drives, one or more network ports for communicating with external devices as well as various I/O devices, such as a keyboard, a mouse, touchscreen, and/or a video display. An IHS may also include one or more buses operable to transmit communications between the various hardware components. An example of an IHS is described in more detail below. It should be appreciated that although certain IHSs described herein may be discussed in the context of enterprise computing servers, other embodiments may be utilized.
[0020] As described, in a data center environment, an IHS may be installed within a chassis, in some cases along with other similar IHSs. A rack may house multiple such chassis and a data center may house numerous such racks. As such, each rack may host a large number of IHSs that are installed as components of a chassis and multiple chassis may be stacked and installed within racks.
[0021] In various embodiments, systems and methods described herein may provide IHS placement suggestions or recommendations in a selected rack and/or in a selected location within the given rack irrespective of the current state of the data center; that is, whether the IHS is being deployed in a brand new data center or within an existing data center with other IHSs already placed.
[0022] Systems and methods described herein may use a decision tree-based approach that supports all IHS types, such as servers (e.g., monolithic and modular), chassis (e.g., M1000e, FX2/FX2s, VRTX, MX7000), storage enclosures (e.g., rack and modular storage devices), network devices (e.g., rack-level switches and chassis I/O modules), and/or clusters (e.g., a group of IHSs and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing).
[0023] In some cases, systems and methods described herein may consider rack space and network port availability along with user preferences (e.g., a placement approach, such as greedy or round robin) in the case of a new data center. For an existing datacenter that has been monitored for a while in the appliance, available power and temperature metrics may be considered in addition to rack space, network ports, and other user preferences. These systems and methods may also ensure that placement suggestions put the heaviest devices toward the bottom of the rack and the lighter ones toward the upper rack slots. Devices entered as part of a cluster are kept together.
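By way of a non-limiting illustration, the combined checks above can be sketched in Python. This is an outline only; every field name (free_u, power_w, and so on) is hypothetical rather than taken from the disclosure.

```python
# Illustrative sketch: a rack qualifies for a device only when rack space,
# power capacity, and network ports all suffice. Field names are hypothetical.
def rack_qualifies(rack, device):
    return (rack["free_u"] >= device["u_size"]
            and rack["free_power_w"] >= device["power_w"]
            and rack["free_ports"] >= device["num_ports"])

rack = {"free_u": 10, "free_power_w": 3000, "free_ports": 4}
server = {"u_size": 2, "power_w": 750, "num_ports": 2}
oversized = {"u_size": 12, "power_w": 750, "num_ports": 2}
print(rack_qualifies(rack, server))     # True
print(rack_qualifies(rack, oversized))  # False
```

In an existing, monitored datacenter, additional terms (e.g., peak temperature) would be conjoined to the same predicate.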
[0025] IHS 100 may include one or more processor(s) 105. In some embodiments, processor(s) 105 may include a main processor and a co-processor, each of which may include a plurality of processing cores. As illustrated, processor(s) 105 may include integrated memory controller 105a that may be implemented directly within the circuitry of processor(s) 105, or memory controller 105a may be a separate integrated circuit that is located on the same die as processor(s) 105. Memory controller 105a may be configured to manage the transfer of data to and from system memory 110 of IHS 100 via high-speed memory interface 105b.
[0026] System memory 110 may include memory components, such as static RAM (SRAM), dynamic RAM (DRAM), and NAND Flash memory, suitable for supporting high-speed memory operations by processor(s) 105. System memory 110 may combine both persistent, non-volatile memory and volatile memory.
[0027] In certain embodiments, system memory 110 may include multiple removable memory modules. System memory 110 includes removable memory modules 110a-n. Each of removable memory modules 110a-n may utilize a form factor corresponding to a motherboard expansion card socket that receives a type of removable memory module 110a-n, such as a DIMM (Dual In-line Memory Module). Other embodiments of system memory 110 may be configured with memory socket interfaces that correspond to different types of removable memory module form factors, such as a Dual In-line Package (DIP) memory, a Single In-line Pin Package (SIPP) memory, a Single In-line Memory Module (SIMM), and/or a Ball Grid Array (BGA) memory.
[0028] IHS 100 may operate using a chipset that may be implemented by integrated circuits that couple processor(s) 105 to various other components of the motherboard of IHS 100. In some embodiments, all or portions of the chipset may be implemented directly within the integrated circuitry of an individual one of processor(s) 105. The chipset may provide the processor(s) 105 with access to a variety of resources accessible via one or more buses 115. Various embodiments may utilize any number of buses to provide the pathways illustrated as single bus 115. In certain embodiments, bus 115 may include a PCIe (PCI Express) switch fabric that is accessed via a root complex and couples processor(s) 105 to a variety of internal and external PCIe devices.
[0029] In various embodiments, a variety of resources may be coupled to the processor(s) 105 of the IHS 100 via buses 115 managed by the processor chipset. In some cases, these resources may be components of the motherboard of IHS 100 or these resources may be resources coupled to IHS 100, such as via I/O ports 150. In some embodiments, IHS 100 may include one or more I/O ports 150, such as PCIe ports, that may be used to couple IHS 100 directly to other IHSs, storage resources or other peripheral components. In certain embodiments, I/O ports 150 may provide couplings to a backplane or midplane of the chassis in which the IHS 100 is installed. In some instances, I/O ports 150 may include rear-facing externally accessible connectors by which external systems and networks may be coupled to IHS 100.
[0030] As illustrated, IHS 100 may also include Power Supply Unit (PSU) 160 that provides the components of the chassis with appropriate levels of DC power. PSU 160 may receive power inputs from an AC power source or from a shared power system that is provided by a rack within which IHS 100 may be installed. In certain embodiments, PSU 160 may be implemented as a swappable component that may be used to provide IHS 100 with redundant, hot-swappable power supply capabilities.
[0031] Processor(s) 105 may also be coupled to network controller 125, such as provided by a Network Interface Controller (NIC) that is coupled to the IHS 100 and allows IHS 100 to communicate via an external network, such as the Internet or a LAN. Network controller 125 may include various microcontrollers, switches, adapters, and couplings used to connect IHS 100 to a network, where such connections may be established by IHS 100 directly or via shared networking components and connections provided by a rack in which IHS 100 is installed. In some embodiments, network controller 125 may allow IHS 100 to interface directly with network controllers from other nearby IHSs in support of clustered processing capabilities that utilize resources from multiple IHSs.
[0032] IHS 100 may include one or more storage controllers 130 that may be utilized to access storage drives 140a-n that are accessible via the chassis in which IHS 100 is installed. Storage controllers 130 may provide support for RAID (Redundant Array of Independent Disks) configurations of logical and physical storage drives 140a-n. In some embodiments, storage controller 130 may be an HBA (Host Bus Adapter) that provides limited capabilities in accessing physical storage drives 140a-n. In many embodiments, storage drives 140a-n may be replaceable, hot-swappable storage devices that are installed within bays provided by the chassis in which IHS 100 is installed. In some embodiments, storage drives 140a-n may also be accessed by other IHSs that are also installed within the same chassis as IHS 100. In various embodiments, storage drives 140a-n may include SAS (Serial Attached SCSI) magnetic disk drives, SATA (Serial Advanced Technology Attachment) magnetic disk drives, solid-state drives (SSDs), and other types of storage drives in various combinations.
[0033] As with processor(s) 105, storage controller 130 may also include integrated memory controller 130b that may be used to manage the transfer of data to and from one or more memory modules 135a-n via a high-speed memory interface. Through use of memory operations implemented by memory controller 130b and memory modules 135a-n, storage controller 130 may operate using cache memories in support of storage operations. Memory modules 135a-n may include memory components, such as static RAM (SRAM), dynamic RAM (DRAM), and NAND Flash memory, suitable for supporting high-speed memory operations, and may combine both persistent, non-volatile memory and volatile memory. As with system memory 110, memory modules 135a-n may utilize a form factor corresponding to a memory card socket, such as a DIMM (Dual In-line Memory Module).
[0034] IHS 100 includes a remote access controller (RAC) 155 that provides capabilities for remote monitoring and management of various aspects of the operation of IHS 100. In support of these monitoring and management functions, remote access controller 155 may utilize both in-band and sideband (i.e., out-of-band) communications with various internal components of IHS 100.
[0035] Remote access controller 155 may additionally implement a variety of management capabilities. In some instances, remote access controller 155 may operate from a different power plane from processor(s) 105, storage drives 140a-n, and other components of IHS 100, thus allowing remote access controller 155 to operate, and management tasks to proceed, while processor cores of IHS 100 are powered off. Various BIOS functions, including launching the operating system of IHS 100, may be implemented by remote access controller 155. In some embodiments, remote access controller 155 may perform various functions to verify the integrity of the IHS 100 and its hardware components prior to initialization of the IHS 100 (i.e., in a bare-metal state).
[0036] In various embodiments, an IHS may not include each of the components shown in
[0038] In operation, data center capacity planning engine 200 may execute one or more of the various methods shown in
[0039] (A) User Inputs
[0040] User inputs may include information about the current data center hierarchy and/or a list of IHSs and/or devices to be considered for placement suggestions. Examples of inputs are a “State of Data Center” (e.g., a new data center or an existing data center) and “Data Center Hierarchy Details” (e.g., data center, room, aisle, and/or rack details).
[0041] In the case of an existing datacenter, in addition to an IHS/device list and a placement approach, a user may add the existing schema of the data center from power manager software. If existing devices have been monitored by the power manager software, then metrics saved for power, temperature, and space utilization may be referenced while providing placement recommendations. Conversely, for a new datacenter, the user may provide the rack size and power capacity for all available racks while entering the datacenter hierarchy details.
[0042] An IHS/device list may include a device type, model, and number of devices to be placed for each model. Examples of device types include, but are not limited to, monolithic servers, modular servers (including C-Series and M-Series servers), network devices, storage enclosures, and clusters. The term “cluster” refers to a group of servers/chassis functioning together or a group of servers along with storage enclosures and network devices attached. Device details including type, model and number of devices may be entered by the user for individual components of the defined cluster.
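For illustration, the IHS/device list described above might be represented by a simple structure such as the following Python sketch; the class and field names are hypothetical, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceEntry:
    device_type: str   # e.g., "monolithic_server", "chassis", "cluster"
    model: str         # e.g., "MX7000"
    count: int         # number of devices of this model to place
    members: list = field(default_factory=list)  # populated only for clusters

# A cluster's members are entered individually but kept together.
device_list = [
    DeviceEntry("chassis", "MX7000", 1),
    DeviceEntry("cluster", "HA-cluster", 1,
                members=[DeviceEntry("monolithic_server", "R740", 2),
                         DeviceEntry("storage_enclosure", "ME4", 1)]),
]
```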
[0043] Additional inventory details may be referenced from stored IHS/device specification information. For example, an IHS/device specification file may be available (either online or offline) from which a number of device details such as device type, size (in Units U) and power specifications may be retrieved by querying a service tag.
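A specification lookup keyed by service tag could be sketched as below; the file contents and key names are hypothetical, since the disclosure does not specify a format.

```python
# Hypothetical offline specification file, keyed by service tag.
SPEC_FILE = {
    "ABC1234": {"device_type": "chassis", "u_size": 7, "power_w": 3000},
    "XYZ9876": {"device_type": "monolithic_server", "u_size": 2, "power_w": 750},
}

def lookup_spec(service_tag):
    # Retrieve device type, size (in U), and power specification by tag.
    spec = SPEC_FILE.get(service_tag)
    if spec is None:
        raise KeyError(f"no specification for service tag {service_tag!r}")
    return spec

print(lookup_spec("ABC1234")["u_size"])  # 7
```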
[0044] Another user input may be a "placement approach," which may be a "greedy" or "round robin" approach. The greedy approach prioritizes optimum utilization of available resources, completing placement on one rack entirely before moving to the next. The round robin approach suggests the best possible location for a device based on resource availability across different hierarchical levels.
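The two approaches differ only in how a candidate rack is chosen, which can be sketched as follows (field names are illustrative assumptions):

```python
def pick_rack(racks, approach):
    # Illustrative selection rule: greedy fills the least-empty rack so one
    # rack is completed before moving on; round robin prefers the rack with
    # maximum space availability.
    if approach == "greedy":
        return min(racks, key=lambda r: r["free_u"])
    return max(racks, key=lambda r: r["free_u"])

racks = [{"id": "R1", "free_u": 4}, {"id": "R2", "free_u": 20}]
print(pick_rack(racks, "greedy")["id"])       # R1
print(pick_rack(racks, "round_robin")["id"])  # R2
```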
[0045] (B) Processing
[0046] The processing operation may utilize a decision tree algorithm along with user inputs for suggesting locations for IHS/device placement. All related inventory details (such as Power and Thermal specifications, U size, etc.) for the IHS/device models entered may be initially retrieved from the IHS/device specification file(s).
[0047] The algorithm may follow a sequential order with respect to the list of devices entered and identify the number of devices. Whether a single device or multiple devices are entered, once the device type is retrieved, a sort operation may be applied to all devices so that the heaviest device (with maximum U size) is listed toward the bottom and the lightest device appears at the top of the list. In the case of clusters, a second, internal sorting operation may be applied so that the heaviest devices within the group are listed at the bottom of the cluster.
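The sort described above can be shown in a few lines of Python; the device names and U sizes are examples only. The same sort would be applied a second time inside each cluster.

```python
devices = [
    {"name": "ToR switch", "u_size": 1},
    {"name": "MX7000", "u_size": 7},
    {"name": "R740", "u_size": 2},
]

# Largest/heaviest first, so it is placed toward the bottom of the rack.
ordered = sorted(devices, key=lambda d: d["u_size"], reverse=True)
print([d["name"] for d in ordered])  # ['MX7000', 'R740', 'ToR switch']
```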
[0048] With respect to servers, the algorithm may identify the type of server and classify it either as modular or monolithic. Further details about the IHS/device required for providing placement suggestions (such as U size, device power capacity, etc.) may be retrieved from the IHS/device specification file(s). Modular servers may be mapped to their corresponding supported models of chassis and the server may be placed based on the placement approach and space availability. In addition to the above parameters, power and network port availability in the switches are taken into consideration.
[0049] Rack and chassis power capacity may be provided by the user as part of the data center hierarchy details. For an existing datacenter hierarchy that has been monitored for a while, temperature may also be considered as a metric for providing placement suggestions.
[0050] The algorithm may retrieve the network port availability for Top-of-Rack (ToR) switches or IOMs via command-line interface (CLI) commands or from the parent chassis inventory. In the case of monolithic servers, the placement approach and space availability may be considered first. Thereafter, the device is placed based on power, thermal, and network port availability in the selected rack. The internal sorting mechanism ensures that the heaviest device is placed at the lowest rack slots whereas the lightest device appears at the top.
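Because the disclosure does not name a specific CLI command or output format, the port-availability step can only be sketched against a made-up switch listing; the parser below and its input format are entirely hypothetical.

```python
def parse_free_ports(cli_output):
    # Hypothetical parser for a ToR switch port listing: keep the port name
    # from every line reporting an unused port.
    return [line.split()[0]
            for line in cli_output.splitlines()
            if line.endswith("unused")]

cli_output = "Eth1/1 unused\nEth1/2 connected\nEth1/3 unused"
print(parse_free_ports(cli_output))  # ['Eth1/1', 'Eth1/3']
```

The count of free ports returned this way would then be compared against the device's network port requirements.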
[0051] As to chassis, the algorithm identifies the type (model) of chassis, and a similar placement approach is followed for the PowerEdge MX7000 and M1000e models. Because the MX7000 and M1000e stand among the heaviest devices, the algorithm first checks whether there is available space (approximately 10U) starting from the lowest rack slots. If so, the placement suggestion for the MX7000 and M1000e is provided by considering the rack space capacity, power capacity, peak temperature values (for racks that have been monitored for a while in an existing datacenter), and network port availability. For other chassis models, such as the FX2, FX2s, and VRTX, the placement logic is similar to that of monolithic servers.
[0052] In the case of clusters, the devices entered as part of the cluster list may be sorted internally with respect to device size. The sum of all individual device sizes is taken as the cluster size. As mentioned in the decision tree, placement suggestions are provided for clusters based on individual IHS placement suggestions, space capacity, power capacity, thermal values, and network port availability in the rack. In some implementations, valid placement suggestions may result only if there is space available in the same rack for placing all devices that are part of the cluster. If not, the algorithm may exit with error messages providing corrective actions to the user.
[0053] When dealing with storage and network devices, the algorithm may identify if the storage/network type is rack-based or chassis-based. In case of rack devices, the algorithm may consider the placement approach selected by the user. Thereafter, placement suggestions may be provided with respect to the analysis done for space capacity, power capacity, thermal values, and network port availability.
[0054] For chassis storage and network devices, the corresponding chassis type may be mapped and the logic may verify if the supported chassis type is entered as a part of the device list. If so, the most apt chassis slot may be identified based on placement suggestions, power and thermal metrics, and space capacity.
[0055] When no error conditions are encountered by the algorithm, the recommendation may be provided for IHS/device placements in the hierarchical levels selected by the user. Also, there may be a one-click option that replicates the device placement suggestions in the physical group section of the power manager software. When an error is encountered by the algorithm, however, the hierarchy may be displayed excluding the devices that are in an error state. Meanwhile, appropriate errors may be displayed, and the one-click option to replicate physical groups may not be provided, as the IHS/devices in an error state need to be accommodated.
[0056] (C) Recommendations and Suggestions
[0057] IHS/device placement recommendations and suggestions may be displayed via GUI 201 based upon user preferences and the logic defined as per the decision tree algorithm.
[0059] At block 306, method 300 may select a level in the physical hierarchy where IHS/devices need to be placed. At block 307, method 300 provides an IHS/device list for all devices that need to be placed. Then, at block 308, method 300 specifies the placement approach. Block 309 uses device specification files to obtain specifications for all devices, and block 310 sorts all the devices. Block 311 uses a decision tree algorithm to compute placement suggestions, as described in
[0060] Block 312 displays suggestions to the user. At block 313, method 300 determines if there are any errors with placement. If not, block 314 allows the user to finish the placement process which creates physical groups and/or updates existing groups depending upon the state of the data center. If so, block 315 highlights errors and provides an option for the user to save or export the current structure. Method 300 ends at block 316.
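The flow of blocks 306 through 315 can be approximated in a short end-to-end sketch. All names are hypothetical, and the real decision tree also weighs power, thermal, and port data that are omitted here for brevity.

```python
def suggest_placements(devices, racks, approach):
    # Sort largest-first (block 310), pick a rack per the chosen approach
    # (block 311), and collect errors for devices that fit nowhere (block 315).
    ordered = sorted(devices, key=lambda d: d["u_size"], reverse=True)
    pick = min if approach == "greedy" else max
    suggestions, errors = [], []
    for dev in ordered:
        candidates = [r for r in racks if r["free_u"] >= dev["u_size"]]
        if not candidates:
            errors.append(dev["name"])
            continue
        rack = pick(candidates, key=lambda r: r["free_u"])
        rack["free_u"] -= dev["u_size"]
        suggestions.append((dev["name"], rack["id"]))
    return suggestions, errors

racks = [{"id": "R1", "free_u": 8}, {"id": "R2", "free_u": 8}]
devices = [{"name": "R740", "u_size": 2}, {"name": "MX7000", "u_size": 7}]
result, errs = suggest_placements(devices, racks, "greedy")
print(result)  # [('MX7000', 'R1'), ('R740', 'R2')]
```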
[0062] If block 401 determines that more than one device is being placed in the data center, block 406 sorts devices based on U size, and picks the heaviest and/or largest device. If one of the devices is a cluster, block 407 sorts devices inside the cluster based on U size. If one of the devices is a chassis, monolithic server, rack storage, or rack network device, the device is used for further processing at block 408. If one of the devices is a modular server, modular storage, or modular network IOM, block 409 uses that device for further processing.
[0066] If the placement approach is the greedy approach, block 703 determines if the server can be placed in the chassis with the least space availability. If not, block 704 determines that the device cannot be placed. If so, control passes to block A3. Conversely, if the placement approach is the round robin approach, block 705 determines if the server can be placed in the chassis with the most or maximum space availability. If not, block 706 determines that the device cannot be placed. If so, control passes to block A3.
[0069] Conversely, if the user selected the round robin approach at node A4, block 905 determines whether the rack with most or maximum space availability can accommodate the device. If so, control passes to node A5. If not, block 906 determines if this is the last rack available. If so, block 907 determines that the device cannot be placed. Otherwise, block 908 moves onto the next rack meeting the aforementioned criteria.
[0071] If block 1003 determines that there is no available slot at the bottom of the rack, block 1006 determines if all devices present below the slot are heavier than the device currently under consideration. If so, control passes to block A6. If not, block 1007 determines that the device cannot be placed in the rack.
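The heavier-below rule of block 1006 can be expressed as a single predicate; the weight field is an illustrative assumption, since the text equates weight with U size.

```python
def heavier_below(devices_below, device):
    # A non-bottom slot is acceptable only when every device already below
    # it is heavier than the device being placed.
    return all(d["weight_kg"] > device["weight_kg"] for d in devices_below)

below = [{"weight_kg": 90}, {"weight_kg": 60}]
print(heavier_below(below, {"weight_kg": 20}))  # True
print(heavier_below(below, {"weight_kg": 70}))  # False
```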
[0076] Conversely, if the approach selected by the user in block 504 of
[0078] If block 1503 determines that the available slot is not at the bottom of the rack, block 1506 determines if all devices present below the current slot are heavier than each device in the cluster. If not, block 1507 determines that the cluster cannot be placed. If so, control passes to node C2.
[0081] From node D1, if the greedy approach is selected, block 1703 determines if the rack with least space availability can accommodate the storage device. If so, control passes to node D2. If not, block 1704 determines if this is the last rack available. If so, block 1705 determines that the device cannot be placed. Otherwise, block 1706 moves onto the next rack meeting the aforementioned criteria.
[0082] Conversely, if the round robin approach is selected, block 1707 determines if the rack with most or maximum space availability can accommodate the storage device. If so, control passes to node D2. If not, block 1708 determines if this is the last rack available. If so, block 1709 determines that the device cannot be placed. Otherwise, block 1710 moves onto the next rack meeting the aforementioned criteria.
[0084] If block 1803 determines that the available slot is not at the bottom of the rack, block 1806 determines if all devices present below the current slot are heavier than the device to be placed. If not, block 1807 determines that the device cannot be placed. If so, control passes to node D3.
[0085] Node D3 passes control to block 1808, where method 300 determines if there is enough power to accommodate all the devices in the cluster. If not, block 1812 determines that the cluster cannot be placed in the rack. If so, block 1809 determines if there are enough ports on the TOR to accommodate the device to be placed. If not, block 1811 determines that the device cannot be placed. If so, block 1810 identifies the device location and the device is placed at the first available slot.
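The final checks of blocks 1808 through 1812 follow a fixed order, power first and then ToR port availability, which can be sketched as below (field names are hypothetical):

```python
def final_check(rack, device):
    # Ordering follows the text: verify power first, then ToR ports; only
    # then is the first available slot assigned.
    if rack["free_power_w"] < device["power_w"]:
        return "cannot place: insufficient power"
    if rack["free_tor_ports"] < device["num_ports"]:
        return "cannot place: insufficient ToR ports"
    return "place at first available slot"

rack = {"free_power_w": 1000, "free_tor_ports": 2}
print(final_check(rack, {"power_w": 500, "num_ports": 1}))
# place at first available slot
```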
[0086] In
[0087] In
[0088] If the placement approach is the greedy approach, block 2003 determines if the device can be placed in the chassis with the least space availability. If not, block 2004 determines that the device cannot be placed. If so, control passes to node D6. Conversely, if the placement approach is the round robin approach, block 2005 determines if the device can be placed in the chassis with the most or maximum space availability. If not, block 2006 determines that the device cannot be placed. If so, control passes to node D6.
[0089] In
[0090]
[0091] From node E1, if the greedy approach is selected, block 2203 determines if the rack with least space availability can accommodate the network device. If so, control passes to node E2. If not, block 2204 determines if this is the last rack available. If so, block 2205 determines that the device cannot be placed. Otherwise, block 2206 moves onto the next rack meeting the aforementioned criteria.
[0092] Conversely, if the round robin approach is selected, block 2207 determines if the rack with most or maximum space availability can accommodate the network device. If so, control passes to node E2. If not, block 2208 determines if this is the last rack available. If so, block 2209 determines that the device cannot be placed. Otherwise, block 2210 moves onto the next rack meeting the aforementioned criteria.
[0093] In
[0094] If block 2303 determines that the available slot is not at the bottom of the rack, block 2306 determines if all devices present below the current slot are heavier than the device to be placed. If not, block 2307 determines that the device cannot be placed. If so, control passes to node E6.
[0095] Node E6 passes control to block 2308, where method 300 determines if there is enough power to accommodate the device. If not, block 2311 determines that the device cannot be placed in the rack. If so, block 2309 determines if there are enough ports on the TOR to accommodate the device to be placed. If not, block 2312 determines that the device cannot be placed. If so, block 2310 identifies the device location and the device is placed at the first available slot.
[0096] In
[0097] In
[0098] If the placement approach is the greedy approach, block 2503 determines if the device can be placed in the chassis with the least space availability. If not, block 2504 determines that the device cannot be placed. If so, control passes to node E6.
[0099] Conversely, if the placement approach is the round robin approach, block 2505 determines if the device can be placed in the chassis with the most or maximum space availability. If not, block 2506 determines that the device cannot be placed. If so, control passes to node E6.
[0100] In
[0101] In the case of a new data center, method 300 assumes that a TOR switch is already configured in the rack, so device placements can proceed. In the case of a new rack without any TOR switches configured, method 300 may be modified as follows:
[0102] First, method 300 queries the user about the state of the data center, the placement approach, and the list of devices. Then, method 300 checks whether network switches are included in the list of purchased devices entered for placement in a rack. If network switches are in the list, method 300 can begin by placing the network switches across the racks based on the selected placement approach and then proceed with device placements (network port information may be retrieved from the device specification sheet, and the available network ports may be mapped to individual servers as needed). Method 300 is updated only for the placement of network switches; the device placement logic remains the same.
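The switch-first step described above can be sketched as follows. This is a hypothetical illustration: the `device_list`/`racks` dict structures, the field names, and the 48-port default are all assumptions, not details from the disclosure.

```python
def plan_new_rack(device_list, racks, ports_per_switch=48):
    """Place network switches across the racks before any other device,
    crediting each rack with the switch's ports for later port mapping.
    Returns the updated racks and the remaining devices to place."""
    switches = [d for d in device_list if d["type"] == "network_switch"]
    others = [d for d in device_list if d["type"] != "network_switch"]
    # Distribute switches round-robin across racks first.
    for i, sw in enumerate(switches):
        rack = racks[i % len(racks)]
        rack.setdefault("devices", []).append(sw["model"])
        rack["free_ports"] = rack.get("free_ports", 0) + ports_per_switch
    return racks, others
```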
[0103] In the case of an existing datacenter, method 300 may check whether the network ports of the already configured switches are sufficient. If not, the new switches in the input list may be distributed across the racks, and method 300 may proceed with the device placement logic. In various implementations, method 300 may be fully automated so that an administrator does not have to spend time analyzing where and how a device should be placed in the datacenter.
[0104]
[0105] Placement suggestion 2700 shows the result of the greedy approach, whereby method 300 fills up Rack 1 first and then moves on to Rack 2. Because all available devices were placed, Rack 3 remains empty. Conversely, placement suggestion 2800 shows the result of the round robin approach, where the devices are evenly distributed across all the available racks.
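The contrast between the two suggestions can be reproduced with a toy model. The function below is an illustrative sketch only (all names and the unit-based capacity model are assumptions); for this illustration, greedy simply fills racks in order, which yields the least-remaining-space behavior when racks start empty.

```python
def place_all(device_units, rack_capacity, n_racks, approach):
    """Toy reproduction of suggestions 2700/2800: greedy fills rack
    after rack; round robin spreads devices across racks."""
    racks = [0] * n_racks  # units used per rack
    for units in device_units:
        if approach == "greedy":
            # First rack (in order) that still has room.
            candidates = [i for i in range(n_racks)
                          if racks[i] + units <= rack_capacity]
        else:
            # Emptiest rack first (stable sort keeps index order on ties).
            candidates = [i for i in sorted(range(n_racks),
                                            key=lambda i: racks[i])
                          if racks[i] + units <= rack_capacity]
        if candidates:
            racks[candidates[0]] += units
    return racks

# place_all([2]*6, 8, 3, "greedy")      -> [8, 4, 0]: Rack 3 stays empty
# place_all([2]*6, 8, 3, "round_robin") -> [4, 4, 4]: even distribution
```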
[0106] In sum, systems and methods described herein may provide a solution for device placement suggestions in new and existing datacenters. In some cases, the placement suggestion may involve custom or user-selected physical groups—i.e., if a user selects only 2 racks from the whole set of monitored racks available in a data center, then only the 2 selected racks will be considered for providing placement suggestions.
[0107] Moreover, systems and methods described herein may consider network port availability as a parameter in placement suggestions, in addition to power, space, and thermal attributes, with considerations for clustered groups. Particularly, these systems and methods may place clustered devices together, may provide placement suggestions for rack- and chassis-based storage devices, may provide placement suggestions for rack- and chassis-based network devices, may allow existing rack schemas to be imported into the system for review of the current device placements in a data center, and/or may enable one-click physical group creation in power management software by replicating the output of placement suggestions.
[0108] As such, systems and methods described herein provide datacenter IHS placement suggestions with zero manual intervention. The automatic recommendation engine spares users from creating an explicit plan for where devices need to be placed. In some cases, method 300 only needs device model information and the number of units purchased, along with a few rack parameters, as inputs, and the final outcome may be a ready-made plan.
[0109] In case of existing data centers, power, energy, and other related metrics from device inventory or metric details may be gathered. Maintaining a device specification sheet or file helps to provide device placement suggestions even when the required power and energy metrics are not available from the device. Moreover, a device specification file may provide support for capacity planning with devices whose Unit Size is not available particularly in the case of storage devices and network switches, where the protocol does not typically provide Unit Size details.
[0110] Systems and methods described herein may provide an intelligent solution that takes into account data center management guidelines on how devices should be placed in a rack (e.g., heaviest devices at the bottom and lighter ones at the top). These systems and methods also provide support for clustered devices (e.g., in the case of a cluster of servers or a set of MX7000 chassis in Multi Chassis Domain mode, method 300 provides customized placement suggestions by grouping these devices together). Moreover, these systems and methods may provide rack slot recommendations based on network port availability in a TOR switch. Particularly, method 300 considers network availability as a parameter for slot allocation.
[0111] It should be understood that various operations described herein may be implemented in software executed by processing circuitry, hardware, or a combination thereof. The order in which each operation of a given method is performed may be changed, and various operations may be added, reordered, combined, omitted, modified, etc. It is intended that the invention(s) described herein embrace all such modifications and changes and, accordingly, the above description should be regarded in an illustrative rather than a restrictive sense.
[0112] The terms “tangible” and “non-transitory,” as used herein, are intended to describe a computer-readable storage medium (or “memory”) excluding propagating electromagnetic signals; but are not intended to otherwise limit the type of physical computer-readable storage device that is encompassed by the phrase computer-readable medium or memory. For instance, the terms “non-transitory computer readable medium” or “tangible memory” are intended to encompass types of storage devices that do not necessarily store information permanently, including, for example, RAM. Program instructions and data stored on a tangible computer-accessible storage medium in non-transitory form may afterwards be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link.
[0113] Although the invention(s) is/are described herein with reference to specific embodiments, various modifications and changes can be made without departing from the scope of the present invention(s), as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention(s). Any benefits, advantages, or solutions to problems that are described herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
[0114] Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The terms “coupled” or “operably coupled” are defined as connected, although not necessarily directly, and not necessarily mechanically. The terms “a” and “an” are defined as one or more unless stated otherwise. The terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a system, device, or apparatus that “comprises,” “has,” “includes” or “contains” one or more elements possesses those one or more elements but is not limited to possessing only those one or more elements. Similarly, a method or process that “comprises,” “has,” “includes” or “contains” one or more operations possesses those one or more operations but is not limited to possessing only those one or more operations.