
Cisco HyperFlex System: A Hyperconverged Virtual Server Infrastructure

Design and Deployment of the Cisco HyperFlex™ System for a Hyperconverged Virtual Server Infrastructure with HX Data Platform 1.7 and VMware vSphere 6.0 U2

Last Updated: March 7, 2017

About Cisco Validated Designs

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments.

For more information visit. ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, 'DESIGNS') IN THIS MANUAL ARE PRESENTED 'AS IS,' WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS.

THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The past decade has witnessed major shifts in the data center, the most significant being the widespread adoption of virtualization of servers as the primary computing platform for most businesses. The flexibility, speed of deployment, ease of management, portability, and improved resource utilization have led many enterprises to adopt a “virtual first” stance, where all environments are deployed virtually unless circumstances make it impossible. While the benefits of virtualization are clear, the proliferation of virtual environments has brought other technology stacks into the spotlight, highlighting where they do not offer the same levels of simplicity, flexibility, and rapid deployment as the virtualized compute platforms do. Networking and storage systems in particular have come under increasing scrutiny to be as agile as hypervisors and virtual servers.

Cisco offers powerful solutions for rapid deployment and easy management of virtualized computing platforms, including integrated networking capabilities, with the Cisco Unified Computing System (Cisco UCS) product line. Now, with the introduction of Cisco HyperFlex, we bring similar enhancements to the market for virtualized servers and hyperconverged storage. Cisco HyperFlex systems have been developed using the Cisco UCS platform, which combines Cisco HX-Series x86 servers and integrated networking technologies through the Cisco UCS Fabric Interconnects into a single management domain, along with industry leading virtualization hypervisor software from VMware and new software defined storage technology. The combination creates a virtualization platform that also provides the network connectivity for the guest virtual machine (VM) connections, and the distributed storage to house the VMs, using Cisco UCS x86 servers instead of specialized components. The unique storage features of the newly developed log-based filesystem enable rapid cloning of VMs, snapshots without the traditional performance penalties, and data deduplication and compression, all without having to purchase all-flash storage systems.

All configuration, deployment, management, and monitoring tasks of the solution can be done with the existing tools for Cisco UCS and VMware, such as Cisco UCS Manager and VMware vCenter. This powerful linking of advanced technology stacks into a single, simple, rapidly deployable solution makes Cisco HyperFlex a true second generation hyperconverged platform for the modern data center. The Cisco HyperFlex System provides an all-purpose virtualized server platform, with hypervisor hosts, network connectivity, and virtual server storage across a set of Cisco UCS HX-Series x86 rack-mount servers.



Legacy data center deployments relied on a disparate set of technologies, each performing a distinct and specialized function, such as network switches connecting endpoints and transferring Ethernet network traffic, and Fibre Channel (FC) storage arrays providing block based storage devices via a unique storage area network (SAN). Each of these systems had unique requirements for hardware, connectivity, management tools, operational knowledge, monitoring, and ongoing support. A legacy virtual server environment operated in silos, within which only a single technology operated, along with their correlated software tools and support staff. Silos were often divided between the x86 computing hardware, the networking connectivity of those x86 servers, SAN connectivity and storage device presentation, the hypervisors, virtual platform management, and the guest VMs themselves along with their operating systems and applications. This model proved to be inflexible, difficult to navigate, and susceptible to numerous operational inefficiencies. To cater to the needs of the modern and agile data center, a new model called converged architecture gained wide acceptance. A converged architecture attempts to collapse the traditional siloed architecture by combining various technologies into a single environment, which has been designed to operate together in pre-defined, tested, and validated designs.

A key component of the converged architecture was the revolutionary combination of x86 rack and blade servers, along with converged Ethernet and Fibre Channel networking, offered by the Cisco UCS platform. Converged architectures leverage Cisco UCS, plus new deployment tools, management software suites, automation processes, and orchestration tools to overcome the difficulties of deploying traditional environments, and do so in a much more rapid fashion. These new tools place the ongoing management and operation of the system into the hands of fewer staff, with faster deployment of workloads based on business needs, while still remaining at the forefront in providing flexibility to adapt to changing workload needs, and offering the highest possible performance. Cisco has proved to be incredibly successful in these areas with our partners, developing leading solutions such as the Cisco FlexPod, SmartStack, VersaStack, and vBlock architectures. Despite these advancements, because converged architectures incorporate legacy technology stacks, particularly in the storage subsystems, there often remained a division of responsibility amongst multiple teams of administrators. Alongside the tremendous advantages of the converged infrastructure approach, there is also a downside: these architectures use a complex combination of components where a simpler system would often suffice to serve the required workloads.

Significant changes in the storage marketplace have given rise to the software defined storage (SDS) system. Legacy FC storage arrays continued to utilize a specialized subset of hardware, such as Fibre Channel Arbitrated Loop (FC-AL) based controllers and disk shelves along with optimized Application Specific Integrated Circuits (ASIC), read/write data caching modules and cards, plus highly customized software to operate the arrays.

With the rise of Serial Attached SCSI (SAS) bus technology and its inherent benefits, storage array vendors began to transition their internal architectures to SAS, and with dramatic increases in processing power in recent x86 processor architectures, fewer or no custom ASICs are used. As physical disk sizes shrank, servers began to have the same density of storage per rack unit (RU) as the arrays themselves, and with the proliferation of NAND based flash memory solid state disks (SSD), they also gained access to input/output (IO) devices whose speed rivaled that of dedicated caching devices. As servers now contained storage devices and technology to rival many dedicated arrays in the market, the remaining major differentiator between them was the software providing allocation, presentation and management of the storage, plus the advanced features many vendors offered.

This led to the increased adoption of software defined storage, where x86 servers with storage devices ran software to effectively turn one or more of them into a storage array, much the same as the traditional arrays were. In a somewhat unexpected turn of events, some of the major storage array vendors themselves were pioneers in this field, recognizing the shift in the market and attempting to profit from their unique software features, versus the specialized hardware they had sold in the past. Some early uses of SDS systems simply replaced the traditional storage array in the converged architectures described earlier. This infrastructure approach still used a separate storage system from the virtual server hypervisor platform, and depending on the solution provider, also still used separate network devices. If the servers that hosted the virtual machines could also provide the SDS environment using the same model of servers, could they not simply do both things at once and collapse the two functions into one? This idea and combination of resources is what the industry has given the moniker of a hyperconverged infrastructure. Hyperconverged infrastructures combine the computing, memory, hypervisor, and storage devices of servers into a single monolithic platform for virtual servers.

There is no longer a separate storage system, as the servers running the hypervisors also provide the software defined storage resources to store the virtual servers, effectively storing the virtual machines on themselves. A hyperconverged infrastructure is far more self-contained, simpler to use, faster to deploy, easier to consume, yet flexible and with high performance. By combining the convergence of compute and network resources provided by Cisco UCS, along with the new hyperconverged storage software, the Cisco HyperFlex system uniquely provides the compute resources, network connectivity, storage, and hypervisor platform to run an entire virtual environment, all contained in a single system. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying the Cisco HyperFlex System.

External references are provided wherever applicable, but readers are expected to be familiar with VMware specific technologies, infrastructure concepts, networking connectivity, and security policies of the customer installation. This document describes the steps required to deploy, configure, and manage a Cisco HyperFlex system.

The document is based on all known best practices using the software, hardware and firmware revisions specified in the document. As such, recommendations and best practices can be amended with later versions. This document showcases the installation and configuration of a single 8 node Cisco HyperFlex cluster, and a 4+4 hybrid Cisco HyperFlex cluster, in a typical customer data center environment. While readers of this document are expected to have sufficient knowledge to install and configure the products used, configuration details that are important to the deployment of this solution are provided in this CVD.

The Cisco HyperFlex system provides a fully contained virtual server platform, with compute and memory resources, integrated networking connectivity, a distributed high performance log-based filesystem for VM storage, and the hypervisor software for running the virtualized servers, all within a single Cisco UCS management domain.

Figure 1 HyperFlex System Overview

The following are components of a Cisco HyperFlex system:
- Cisco UCS 6248UP Fabric Interconnects
- Cisco HyperFlex HX220c M4S or HX240c M4SX rack-mount servers
- Cisco HX Data Platform Software
- VMware vSphere ESXi Hypervisor
- VMware vCenter Server (end-user supplied)

Optional components for additional compute-only resources are:
- Cisco UCS 5108 Chassis
- Cisco UCS 2204XP Fabric Extender
- Cisco UCS B200 M4 blade servers

Figure 2 Cisco HyperFlex System

The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.

The main components of the Cisco Unified Computing System are:

- Computing: The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on Intel Xeon processors.
- Network: The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
- Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments.

Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

- Storage access: The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying storage access, the Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with choice for storage access and investment protection. In addition, server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity.

- Management: The system uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager (UCSM). Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.

The Cisco Unified Computing System is designed to deliver:

- A reduced Total Cost of Ownership and increased business agility.
- Increased IT staff productivity through just-in-time provisioning and mobility support.
- A cohesive, integrated system which unifies the technology in the data center.

- The system is managed, serviced, and tested as a whole.
- Scalability through a design for hundreds of discrete servers and thousands of virtual machines, and the capability to scale I/O bandwidth to match demand.
- Industry standards supported by a partner ecosystem of industry leaders.

The Cisco UCS 6200 Series Fabric Interconnect is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system.

The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) and Fibre Channel functions. The Cisco UCS 6200 Series provides the management and communication backbone for the Cisco UCS C-Series and HX-Series rack-mount servers, Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All servers and chassis, and therefore all blades, attached to the Cisco UCS 6200 Series Fabric Interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both the LAN and SAN connectivity for all blades within its domain. From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, with 1 Tb of switching capacity and 160 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product family supports Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks.

The Fabric Interconnect supports multiple traffic classes over a lossless Ethernet fabric from a server through an interconnect. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated. The Cisco UCS 6248UP 48-Port Fabric Interconnect is a one rack unit (1 RU) 10 Gigabit Ethernet, FCoE and Fibre Channel switch offering up to 960-Gbps throughput and up to 48 ports.

The switch has 32 fixed 1/10-Gbps Ethernet, FCoE and FC ports and one expansion slot.

Figure 3 Cisco UCS 6248UP Fabric Interconnect

A HyperFlex cluster requires a minimum of three HX-Series nodes.

Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single-node failure. The HX-Series nodes combine the CPU and RAM resources for hosting guest virtual machines with the physical storage resources used by the HyperFlex software. Each HX-Series node is equipped with one high-performance SSD drive for data caching and rapid acknowledgment of write requests, and is also equipped with up to the platform’s physical capacity of spinning disks for maximum data capacity. The Cisco HyperFlex HX220c M4S rack-mount server is one rack unit (1 RU) high and can mount in an industry-standard 19-inch rack. This small footprint configuration contains a minimum of three nodes, each with six 1.2 terabyte (TB) SAS drives that contribute to cluster storage capacity, a 120 GB SSD housekeeping drive, a 480 GB SSD caching drive, and two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as mirrored boot drives.

Figure 4 HX220c M4S Node

The Cisco HyperFlex HX240c M4SX rack-mount server is two rack units (2 RU) high and can mount in an industry-standard 19-inch rack. This capacity optimized configuration contains a minimum of three nodes, each with a minimum of six and up to twenty-three 1.2 TB SAS drives that contribute to cluster storage, a single 120 GB SSD housekeeping drive, a single 1.6 TB SSD caching drive, and two FlexFlash SD cards that act as mirrored boot drives.

Figure 5 HX240c M4SX Node

The Cisco UCS Virtual Interface Card (VIC) 1227 is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10-Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-motherboard (mLOM) adapter installed in the Cisco UCS HX-Series rack servers (Figure 6). The mLOM slot can be used to install a Cisco VIC without consuming a PCIe slot, which provides greater I/O expandability. It incorporates next-generation converged network adapter (CNA) technology from Cisco, enabling a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). The personality of the card is determined dynamically at boot time using the service profile associated with the server.

The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. For workloads that require additional computing and memory resources, but not additional storage capacity or IOPS, a compute-intensive hybrid cluster configuration is allowed. This configuration contains a minimum of three HX240c M4SX nodes with up to four Cisco UCS B200 M4 Blade Servers for additional computing capacity. The HX240c M4SX nodes are configured as described previously, and the Cisco UCS B200 M4 servers are equipped with FlexFlash SD cards as boot drives, or can be configured to boot from an external SAN. Use of the B200 M4 compute nodes also requires the Cisco UCS 5108 blade server chassis, and a pair of Cisco UCS 2204XP Fabric Extenders.

Figure 7 B200 M4 Node

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6 RU) high and can mount in an industry-standard 19-inch rack.

A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors. Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting 40 Gigabit Ethernet standards.

Figure 8 Cisco UCS 5108 Blade Chassis Front and Rear Views

The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the fabric interconnect. At the core of the Cisco UCS Fabric Extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic. The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.

Figure 9 Cisco UCS 2204XP Fabric Extender

The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance, distributed file system with a wide array of enterprise-class data management services. The data platform’s innovations redefine distributed storage technology, exceeding the boundaries of first-generation hyperconverged infrastructures. The data platform has all the features that you would expect of an enterprise shared storage system, eliminating the need to configure and maintain complex Fibre Channel storage networks and devices. The platform simplifies operations and helps ensure data availability. Enterprise-class storage features include the following:

- Replication replicates data across the cluster so that data availability is not affected if single or multiple components fail (depending on the replication factor configured).
- Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in client virtual machines result in large amounts of replicated data.
- Compression further reduces storage requirements, lowering costs, and the log-structured file system is designed to store variable-sized blocks, minimizing internal fragmentation.
- Thin provisioning allows large volumes to be created without requiring storage to support them until the need arises, simplifying data volume growth and making storage a “pay as you grow” proposition.
- Fast, space-efficient clones, called HyperFlex ReadyClones, rapidly replicate storage volumes so that virtual machines can be replicated simply through a few small metadata operations, with actual data copied only for new write operations.
- Snapshots help facilitate backup and remote-replication operations, needed in enterprises that require always-on data availability.

The Cisco HyperFlex HX Data Platform is administered through a VMware vSphere web client plug-in. Through this centralized point of control for the cluster, administrators can create volumes, monitor the data platform health, and manage resource use.

Administrators can also use this data to predict when the cluster will need to be scaled.

Figure 10 vCenter HyperFlex Web Client Plugin

A Cisco HyperFlex HX Data Platform controller resides on each node and implements the Cisco HyperFlex HX Distributed Filesystem. The controller runs in user space within a virtual machine and intercepts and handles all the I/O from guest virtual machines.

The platform controller VM uses the VMDirectPath I/O feature to provide PCI pass-through control of the physical server’s SAS disk controller. This method gives the controller VM full control of the physical disk resources, utilizing the SSD drives as a read/write caching layer, and the HDDs as a capacity layer for the distributed storage.

The controller integrates the data platform into VMware software through the use of two preinstalled VMware ESXi vSphere Installation Bundles (VIBs):

- IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.
- VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning.

The controller implements these operations through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new environments. The Cisco HyperFlex HX Data Platform controllers handle all read and write operation requests from the guest VMs to their virtual disks (VMDK) stored in the distributed datastores in the cluster. The data platform distributes the data across multiple nodes of the cluster, and also across multiple capacity disks of each node, according to the replication level policy selected during the cluster setup.

This method avoids storage hotspots on specific nodes, and on specific disks of the nodes, and thereby also avoids networking hotspots or congestion from accessing more data on some nodes versus others.

Replication Factor

The policy for the number of duplicate copies of each storage block is chosen during cluster setup, and is referred to as the replication factor (RF).

The default setting for the Cisco HyperFlex HX Data Platform is replication factor 3 (RF=3).

- Replication Factor 3: For every I/O write committed to the storage layer, 2 additional copies of the blocks written will be created and stored in separate locations, for a total of 3 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate simultaneous failures of 2 disks, or 2 entire nodes, without losing data and without resorting to restore from backup or other recovery processes.
- Replication Factor 2: For every I/O write committed to the storage layer, 1 additional copy of the blocks written will be created and stored in separate locations, for a total of 2 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate a failure of 1 disk, or 1 entire node, without losing data and without resorting to restore from backup or other recovery processes.

Data Write Operations

For each write operation, data is written to the local caching SSD on the node where the write originated, and replica copies of that write are written to the caching SSDs of the remote nodes in the cluster, according to the replication factor setting. For example, at RF=3 a write will be written locally where the VM originated the write, and two additional writes will be committed in parallel on two other nodes. The write operation will not be acknowledged until all three copies are written to the caching layer SSDs. Written data is also cached in a write log area resident in memory in the controller VM, along with the write log on the caching SSDs. This process speeds up read requests when reads are requested of data that has recently been written.
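To make the write path concrete, the following is a minimal Python sketch of how a write at a given replication factor fans out to the caching layer and is only acknowledged once every copy is committed. It is an illustration only, not HX Data Platform code; the node and write-log objects are hypothetical stand-ins.

```python
import random

class Node:
    """Hypothetical stand-in for a converged node's caching SSD write log."""
    def __init__(self, name):
        self.name = name
        self.write_log = []   # write log on the caching SSD

    def commit_to_cache(self, block):
        self.write_log.append(block)
        return True           # acknowledgment from this node's caching layer


def write_block(block, origin, cluster, replication_factor=3):
    """Write 'block' to the origin node plus (RF - 1) other nodes.

    The write is acknowledged only after every copy reaches a caching SSD,
    mirroring the RF=3 behavior described above.
    """
    remote_candidates = [n for n in cluster if n is not origin]
    replicas = [origin] + random.sample(remote_candidates, replication_factor - 1)
    acks = [node.commit_to_cache(block) for node in replicas]   # committed in parallel in the real system
    if not all(acks):
        raise IOError("write not acknowledged: a replica failed to commit")
    return [n.name for n in replicas]


cluster = [Node(f"hx-node-{i}") for i in range(1, 9)]   # an 8-node cluster
print(write_block(b"some 4K block", origin=cluster[0], cluster=cluster, replication_factor=3))
```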

Data Destaging, Deduplication, and Compression

The Cisco HyperFlex HX Data Platform constructs multiple write caching segments on the caching SSDs of each node in the distributed cluster. As write cache segments become full, and based on policies accounting for I/O load and access patterns, those write cache segments are locked and new writes roll over to a new write cache segment.

When the number of locked cache segments reaches a particular threshold, they are destaged to the HDD capacity layer of the filesystem. During the destaging process, data is deduplicated and compressed before being written to the HDD capacity layer.
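The following short Python sketch illustrates the destaging concept described above: blocks from a full, locked write cache segment are deduplicated by content fingerprint and compressed before being appended sequentially to a capacity-layer log. It is a conceptual illustration under simplifying assumptions, not the actual HX Data Platform implementation.

```python
import hashlib
import zlib

def destage(locked_segment, capacity_log, fingerprints):
    """Dedupe and compress cached blocks, then append them sequentially.

    locked_segment: list of raw blocks from a full write cache segment
    capacity_log:   list standing in for the sequential HDD capacity layer
    fingerprints:   set of content hashes already stored (dedupe index)
    """
    for block in locked_segment:
        digest = hashlib.sha256(block).hexdigest()    # content fingerprint
        if digest in fingerprints:
            continue                                   # duplicate block: only metadata would be updated
        fingerprints.add(digest)
        capacity_log.append(zlib.compress(block))      # compressed before the single sequential write

capacity_log, fingerprints = [], set()
segment = [b"A" * 4096, b"B" * 4096, b"A" * 4096]      # third block is a duplicate
destage(segment, capacity_log, fingerprints)
print(len(capacity_log), "unique compressed blocks written")   # 2
```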

The resulting data, after deduplication and compression, can now be written in a single sequential operation to the HDDs of the server, avoiding disk head seek thrashing and accomplishing the task in the minimal amount of time (Figure 11). Since the data is already deduplicated and compressed before being written, the platform avoids the additional I/O overhead often seen on competing systems, which must later do a read/dedupe/compress/write cycle. Deduplication, compression and destaging take place with no delays or I/O penalties to the guest VMs making requests to read or write data.

Data Read Operations

For data read operations, data may be read from multiple locations. For data that was very recently written, the data is likely to still exist in the write log of the local platform controller memory, or the write log of the local caching SSD. If the local write logs do not contain the data, the distributed filesystem metadata will be queried to see if the data is cached elsewhere, either in the write logs of remote nodes, or in the dedicated read cache area of the local and remote SSDs. Finally, if the data has not been accessed in a significant amount of time, the filesystem will retrieve the requested data from the HDD capacity layer.
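A minimal sketch of the read lookup order described above, checking the in-memory write log, then the caching SSD tiers, and finally the HDD capacity layer. The tier names and data structures are illustrative stand-ins, not HX internals.

```python
def read_block(key, memory_write_log, ssd_write_log, ssd_read_cache, hdd_capacity):
    """Return (data, tier) by walking the caching tiers in the order described above."""
    for tier_name, tier in (
        ("controller VM memory write log", memory_write_log),
        ("caching SSD write log", ssd_write_log),
        ("caching SSD read cache", ssd_read_cache),
        ("HDD capacity layer", hdd_capacity),
    ):
        if key in tier:
            data = tier[key]
            if tier_name == "HDD capacity layer":
                ssd_read_cache[key] = data   # populate the read cache for subsequent requests
            return data, tier_name
    raise KeyError(key)

# Example: a cold block is served from the capacity layer and then cached on SSD.
hdd = {"vm1-block7": b"..."}
ssd_read_cache = {}
print(read_block("vm1-block7", {}, {}, ssd_read_cache, hdd)[1])   # 'HDD capacity layer'
print(read_block("vm1-block7", {}, {}, ssd_read_cache, hdd)[1])   # 'caching SSD read cache'
```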

As requests for reads are made to the distributed filesystem and the data is retrieved from the HDD capacity layer, the caching SSDs populate their dedicated read cache area to speed up subsequent requests for the same data. This multi-tiered distributed system with several layers of caching techniques ensures that data is served at the highest possible speed, leveraging the caching SSDs of the nodes fully and equally.

The following table lists the hardware components required for the HyperFlex cluster configurations validated in this document:

Component: Hardware Required
- Fabric Interconnects: Two Cisco UCS 6248UP Fabric Interconnects (optionally with the 16-port UP expansion module)
- Servers: Eight Cisco HX-Series HX220c M4S servers, or eight Cisco HX-Series HX240c M4SX servers, or four Cisco HX-Series HX240c M4SX servers plus four Cisco UCS B200 M4 blade servers
- Chassis: Cisco UCS 5108 Blade Chassis (only if using the B200 M4 servers)
- Fabric Extenders: Cisco UCS 2204XP Fabric Extenders (required for the 5108 blade chassis and B200 M4 blades)

The following table lists the hardware component options for the HX220c M4S server model.

HX220c M4S options: Hardware Required
- Processors: Choose a matching pair of any model of Intel E5-26xx v3 or v4 processors.

HX240c M4SX options: Hardware Required
- Processors: Choose a matching pair of any model of Intel E5-26xx v3 or v4 processors.

B200 M4 options: Hardware Required
- Processors: Choose any Intel E5-26xx v3 or v4 processor model. Use of all v3 or all v4 processors within the same HyperFlex cluster is recommended.
- Memory: Any supported amount of total memory, using 16 GB, 32 GB or 64 GB DDR4 2133-MHz RDIMM/PC4-17000 or 2400-MHz RDIMM/PC4-19200 modules
- Disk Controller: none
- SSD: none
- HDD: none
- Network: Cisco UCS VIC 1340 mLOM
- Boot Devices: Two 64 GB SD cards for Cisco UCS servers, or configured to boot from an external SAN LUN via FC or iSCSI

Cisco Solution IDs

The following table lists the Cisco solution IDs that can be referenced by your Cisco Account Team, or your Cisco Partner sales team, to accelerate the order process using pre-defined HyperFlex solution groupings. Additional Configure to Order (CTO) and SmartPlay bundles are also available, which give complete flexibility of all the individual components which can be ordered.

Use of a published Cisco custom ESXi ISO installer file is required. VMware vSphere Enterprise or Enterprise Plus licensing is recommended. VMware vSphere Standard, Essentials Plus, and ROBO editions are also supported, but only for vSphere 6.0 versions. Currently, for these editions, upgrades of the HX Data Platform software would need to occur in an offline maintenance window.

Management Server: VMware vCenter Server for Windows or vCenter Server Appliance, 5.5 Update 3b or later.

Refer to for interoperability of your ESXi version and vCenter Server.

Cisco HyperFlex HX Data Platform: Cisco HyperFlex HX Data Platform Software 1.7.1

Cisco UCS Firmware: Cisco UCS Infrastructure software, B-Series and C-Series bundles, revision 2.2(6f). To support Intel E5-26xx v4 processors, the Cisco UCS Infrastructure software, B-Series and C-Series bundles must be upgraded to revision 2.2(7c).

Note that the software revisions listed in Table 6 are the only valid and supported configuration at the time of the publishing of this validated design. Special care must be taken not to alter the revision of the hypervisor, vCenter server, Cisco HX platform software, or the Cisco UCS firmware without first consulting the appropriate release notes and compatibility matrixes to ensure that the system is not being modified into an unsupported configuration.
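As a simple guard against the unsupported-configuration risk noted above, the validated revisions can be captured in a small script and compared against what is actually deployed. This is a hypothetical helper written for this guide, not a Cisco tool; the deployed-version values would have to be gathered manually from vCenter, Cisco UCS Manager, and the HX plugin.

```python
# Validated revisions for this design (HX Data Platform 1.7.1).
VALIDATED = {
    "hx_data_platform": "1.7.1",
    "ucs_infrastructure": ("2.2(6f)", "2.2(7c)"),   # 2.2(7c) is required for Intel E5-26xx v4 CPUs
}

def check_revisions(deployed):
    """Return a list of warnings for components that differ from the validated set."""
    warnings = []
    if deployed.get("hx_data_platform") != VALIDATED["hx_data_platform"]:
        warnings.append("HX Data Platform revision differs from the validated 1.7.1")
    if deployed.get("ucs_infrastructure") not in VALIDATED["ucs_infrastructure"]:
        warnings.append("Cisco UCS bundle is not 2.2(6f) or 2.2(7c); consult the release notes before proceeding")
    return warnings

print(check_revisions({"hx_data_platform": "1.7.1", "ucs_infrastructure": "2.2(5a)"}))
```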

This document does not cover the installation and configuration of VMware vCenter Server for Windows, or the vCenter Server Appliance. The vCenter Server must be installed and operational prior to the installation of the Cisco HyperFlex HX Data Platform software. The following best practice guidance applies to installations of HyperFlex 1.7.1:

- Do not modify the default TCP port settings of the vCenter installation. Using non-standard ports can lead to failures during the installation.
- Building the vCenter server as a virtual machine inside the HyperFlex cluster environment on an HX-Series node is not allowed.

There are no valid locations within the HX-Series servers that will contain enough usable space in a VMFS datastore to house a vCenter server, as once the software is installed, all of the available disks and space will be fully used by the HyperFlex cluster. Building any virtual machines on the HyperFlex servers prior to installing the HyperFlex HX Data Platform software is therefore not possible, as it will lead to installation failures. Please build the vCenter server on a physical server or in a virtual environment outside of the HyperFlex cluster. Cisco HyperFlex clusters currently scale up from a minimum of 3 to a maximum of 8 converged nodes per cluster, i.e. 8 nodes providing storage resources to the HX Distributed Filesystem.

For the compute intensive “hybrid” design, a configuration with 3-8 Cisco HX240c M4SX model servers can be combined with up to 4 Cisco B200 M4 blades, called compute-only nodes. The number of compute-only nodes cannot exceed the number of converged nodes. Once the maximum size of a cluster has been reached, the environment can be “scaled out” by adding additional servers to the Cisco UCS domain, installing an additional HyperFlex cluster on them, and controlling them via the same vCenter server. A maximum of 4 HyperFlex clusters can be managed by a single vCenter server, therefore the maximum size of a single HyperFlex environment is 32 converged nodes, plus up to 16 additional compute-only blades.
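The scaling rules above can be summarized in a short validation sketch (an assumed helper, not part of the installer): 3 to 8 converged nodes per cluster, compute-only nodes never exceeding the converged node count or the 4-blade limit, and no more than 4 clusters per vCenter server.

```python
def validate_cluster(converged_nodes, compute_only_nodes=0):
    """Check a proposed HyperFlex 1.7.1 cluster against the documented scaling limits."""
    if not 3 <= converged_nodes <= 8:
        raise ValueError("a cluster requires 3 to 8 converged HX-Series nodes")
    if compute_only_nodes > converged_nodes:
        raise ValueError("compute-only nodes cannot exceed the number of converged nodes")
    if compute_only_nodes > 4:
        raise ValueError("at most 4 B200 M4 compute-only nodes are supported in this release")
    return True

def validate_environment(clusters):
    """A single vCenter server manages at most 4 HyperFlex clusters."""
    if len(clusters) > 4:
        raise ValueError("a vCenter server can manage at most 4 HyperFlex clusters")
    return all(validate_cluster(*c) for c in clusters)

# Largest supported environment: 4 clusters of 8 converged nodes plus 4 compute-only blades each,
# giving 32 converged nodes and 16 compute-only blades in total.
print(validate_environment([(8, 4)] * 4))   # True
```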

Overall usable cluster capacity is based on a number of factors. The number of nodes in the cluster must be considered, plus the number and size of the capacity layer disks.

Caching disk sizes are not calculated as part of the cluster capacity. The replication factor of the HyperFlex HX Data Platform also affects the cluster capacity as it defines the number of copies of each block of data written.

Disk drive manufacturers have adopted a size reporting methodology using calculation by powers of 10, also known as decimal prefix. As an example, a 120 GB disk is listed with a minimum of 120 x 10^9 bytes of usable addressable capacity, or 120 billion bytes. However, many operating systems and filesystems report their space based on standard computer binary exponentiation, or calculation by powers of 2, also called binary prefix.

In this example, 2^10 or 1024 bytes make up a kilobyte, 2^10 kilobytes make up a megabyte, 2^10 megabytes make up a gigabyte, and 2^10 gigabytes make up a terabyte. As the values increase, the disparity between the two systems of measurement and notation gets worse: at the terabyte level, the deviation between a decimal prefix value and a binary prefix value is nearly 10%. The International System of Units (SI) defines the decimal prefix values by powers of 10, while the binary prefix values, defined by powers of 2, are as follows:

Value | Symbol | Name
1024 bytes | KiB | kibibyte
1024 KiB | MiB | mebibyte
1024 MiB | GiB | gibibyte
1024 GiB | TiB | tebibyte

For the purpose of this document, the decimal prefix numbers are used only for raw disk capacity as listed by the respective manufacturers.
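To tie the replication factor and prefix disparity together, here is a small worked Python example. It is a simplified sketch only: it ignores the housekeeping and caching drives, filesystem metadata, and space reserved by the platform, so the real usable figures shown by the HX plugin will be lower.

```python
def decimal_to_binary_units(size_bytes, unit="TiB"):
    """Convert a raw byte count into binary prefix units (KiB/MiB/GiB/TiB)."""
    divisor = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}[unit]
    return size_bytes / divisor

def estimated_usable_capacity(nodes, capacity_disks_per_node, disk_size_tb_decimal, replication_factor):
    """Very rough usable-capacity estimate: raw capacity divided by the replication factor."""
    raw_bytes = nodes * capacity_disks_per_node * disk_size_tb_decimal * 10**12
    return decimal_to_binary_units(raw_bytes / replication_factor, "TiB")

# A manufacturer-listed 1.2 TB disk is roughly 1.09 TiB as reported by the software.
print(round(decimal_to_binary_units(1.2 * 10**12, "TiB"), 2))

# Eight HX220c M4S nodes, six 1.2 TB capacity disks each, RF=3.
print(round(estimated_usable_capacity(8, 6, 1.2, 3), 1), "TiB (approximate, before overhead)")
```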

For all calculations where raw or usable capacities are shown from the perspective of the HyperFlex software, filesystems or operating systems, the binary prefix numbers are used. This is done primarily to show a consistent set of values as seen by the end user from within the HyperFlex vCenter Web Plugin when viewing cluster capacity, allocation and consumption, and also within most operating systems.

The following table lists a set of HyperFlex HX Data Platform cluster usable capacity values, using binary prefix, for an array of cluster configurations. These values are useful for determining the appropriate size of HX cluster to initially purchase, and how much capacity can be gained by adding capacity disks. The calculations for these values are listed in.

Installation of the HyperFlex system is primarily done via a deployable HyperFlex installer virtual machine, available for download at cisco.com as an OVA file. The installer VM does most of the Cisco UCS configuration work; it can be leveraged to simplify the installation of ESXi on the HyperFlex hosts, and it also performs significant portions of the ESXi configuration.

Finally, the installer VM is used to install the HyperFlex HX Data Platform software and create the HyperFlex cluster. Because this simplified installation method has been developed by Cisco, this CVD will not give detailed manual steps for the configuration of all the elements that are handled by the installer. Instead, the elements configured will be described and documented in this section, and the subsequent sections will guide you through the manual steps needed for installation, and how to utilize the HyperFlex Installer for the remaining configuration steps. Cisco UCS network uplinks connect “northbound” from the pair of Cisco UCS Fabric Interconnects to the LAN in the customer data center. All Cisco UCS uplinks operate as trunks, carrying multiple 802.1Q VLAN IDs across the uplinks.

The default Cisco UCS behavior is to assume that all VLAN IDs defined in the Cisco UCS configuration are eligible to be trunked across all available uplinks. Cisco UCS Fabric Interconnects appear on the network as a collection of endpoints versus another network switch. Internally, the Fabric Interconnects do not participate in spanning-tree protocol (STP) domains, and the Fabric Interconnects cannot form a network loop, as they are not connected to each other with a layer 2 Ethernet link. All link up/down decisions via STP will be made by the upstream root bridges. Uplinks need to be connected and active from both of the Fabric Interconnects.

For redundancy, multiple uplinks can be used on each FI, either as 802.3ad Link Aggregation Control Protocol (LACP) port-channels, or using individual links. For the best level of performance and redundancy, uplinks can be made as LACP port-channels to a pair of upstream Cisco switches using the virtual port channel (vPC) feature. Using vPC uplinks allows all uplinks to be active passing data, plus protects against any individual link failure, and the failure of an upstream switch. Other uplink configurations can be redundant, but spanning-tree protocol loop avoidance may disable links if vPC is not available. All uplink connectivity methods must allow for traffic to pass from one Fabric Interconnect to the other, or from fabric A to fabric B. There are scenarios where cable, port or link failures would require traffic that normally does not leave the Cisco UCS domain, to now be forced over the Cisco UCS uplinks. Additionally, this traffic flow pattern can be seen briefly during maintenance procedures, such as updating firmware on the Fabric Interconnects, which requires them to be rebooted.

The following sections and figures detail several uplink connectivity options.

Single Uplinks to Single Switch

This connection design is susceptible to failures at several points: single uplink failures on either Fabric Interconnect can lead to connectivity losses or functional failures, and the failure of the single uplink switch will cause a complete connectivity outage.

Figure 17 Connectivity with Single Uplink to Single Switch

Port Channels to Single Switch

This connection design is now redundant against the loss of a single link, but remains susceptible to the failure of the single switch.

Figure 18 Connectivity with Port-Channels to Single Switch

Single Uplinks or Port Channels to Multiple Switches

This connection design is redundant against the failure of an upstream switch, and redundant against a single link failure. In normal operation, STP is likely to block half of the links to avoid a loop across the two upstream switches. The side effect of this is to reduce bandwidth between the Cisco UCS domain and the LAN.

If any of the active links were to fail, STP would bring the previously blocked link online to provide access to that Fabric Interconnect via the other switch. It is not recommended to connect both links from a single FI to a single switch, as that configuration is susceptible to a single switch failure breaking connectivity from fabric A to fabric B. For enhanced redundancy, the single links in the figure below could also be port-channels.

Figure 19 Connectivity with Multiple Uplink Switches

vPC to Multiple Switches

This recommended connection design relies on using Cisco switches that have the virtual port channel feature, such as Catalyst 6000 series switches running VSS, and Cisco Nexus 5000, 7000 and 9000 series switches. Logically the two vPC enabled switches appear as one, and therefore spanning-tree protocol will not block any links.

This configuration allows for all links to be active, achieving maximum bandwidth potential, and multiple redundancies at each level.

Figure 20 Connectivity with vPC

A dedicated network or subnet for physical device management is often used in data centers. In this scenario, the mgmt0 interfaces of the Fabric Interconnects would be connected to that dedicated network or subnet.

This is a valid configuration for HyperFlex installations with the following caveat: wherever the HyperFlex installer is deployed, it must have IP connectivity to the subnet of the mgmt0 interfaces of the Fabric Interconnects, and also have IP connectivity to the subnets used by the hx-inband-mgmt and hx-storage-data VLANs listed above.

All HyperFlex storage traffic traversing the hx-storage-data VLAN and subnet is configured to use jumbo frames; to be precise, all communication is configured to send IP packets with a Maximum Transmission Unit (MTU) size of 9000 bytes. Using a larger MTU value means that each IP packet sent carries a larger payload, therefore transmitting more data per packet, and consequently sending and receiving data faster. This requirement also means that the Cisco UCS uplinks must be configured to pass jumbo frames. Failure to configure the Cisco UCS uplink switches to allow jumbo frames can lead to service interruptions during some failure scenarios, particularly when cable or port failures would cause storage traffic to traverse the northbound Cisco UCS uplink switches.

The HyperFlex Storage Platform Controller VMs configure a roaming management virtual IP address using UCARP, which relies on IPv4 multicast to function properly. The controller VMs advertise to one another using the standard VRRP IPv4 multicast address of 224.0.0.18, which is a link-local address, and is therefore not forwarded by routers or inspected by IGMP snooping.

This requirement means the Cisco UCS uplinks and uplink switches must allow IPv4 multicast traffic for the HyperFlex management VLAN. Failure to configure the Cisco UCS uplink switches to allow IPv4 multicast traffic can lead to service interruptions during some failure scenarios, particularly when cable or port failures would cause multicast management traffic to traverse the northbound Cisco UCS uplink switches.
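The UCARP/VRRP address mentioned above falls in the link-local multicast block, which is why routers never forward it and why the layer 2 path through the Cisco UCS uplinks must carry it. A short Python check, purely illustrative, confirms the classification:

```python
import ipaddress

addr = ipaddress.ip_address("224.0.0.18")                     # VRRP multicast group used by UCARP
local_control_block = ipaddress.ip_network("224.0.0.0/24")    # link-local multicast block, never routed

print(addr.is_multicast)             # True
print(addr in local_control_block)   # True: the traffic stays within the layer 2 domain
```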

Cisco Nexus switches allow multicast traffic and enable IGMP snooping by default.

This section on Cisco UCS design describes the elements within Cisco UCS Manager that are configured by the Cisco HyperFlex installer. Many of the configuration elements are fixed in nature, while the HyperFlex installer does allow some items to be specified at the time of creation, for example, VLAN names and IDs, IP pools and more. Where the elements can be manually set during the installation, those items are noted as user-defined values. During the HyperFlex installation, a Cisco UCS sub-organization is created named “hx-cluster”. The sub-organization is created underneath the root level of the Cisco UCS hierarchy, and is used to contain all policies, pools, templates and service profiles used by HyperFlex.

This arrangement allows for organizational control using Role-Based Access Control (RBAC) and administrative locales at a later time if desired. In this way, control can be granted to administrators of only the HyperFlex specific elements of the Cisco UCS domain, separate from control of root level elements or elements in other sub-organizations.

Figure 21 Cisco UCS HyperFlex Sub-Organization

QoS System Classes

Specific Cisco UCS Quality of Service (QoS) system classes are defined for a Cisco HyperFlex system. These classes define Class of Service (CoS) values that can be used by the uplink switches north of the Cisco UCS domain, plus which classes are active, along with whether packet drop is allowed, the relative weight of the different classes when there is contention, the maximum transmission unit (MTU) size, and whether there is multicast optimization applied.

QoS system classes are defined for the entire Cisco UCS domain, and the classes that are enabled can later be used in QoS policies, which are then assigned to Cisco UCS vNICs. The following table and figure detail the QoS System Class settings configured for HyperFlex. The Silver QoS system class has the multicast optimized setting enabled because the Silver class is used by the Silver QoS policy, which is in turn used by the management network vNICs.

They transmit IPv4 multicast packets to determine the master server for the roaming cluster management IP address, configured by UCARP.

QoS Policies

In order to apply the settings defined in the Cisco UCS QoS System Classes, specific QoS Policies must be created, and then assigned to the vNICs, or vNIC templates, used in Cisco UCS Service Profiles. The following table details the QoS Policies configured for HyperFlex, and their default assignment to the vNIC templates created:

Table 12 QoS Policies

The multicast policy configured for HyperFlex is as follows:

Name: HyperFlex | IGMP Snooping State: Enabled | IGMP Snooping Querier State: Disabled

Figure 23 Multicast Policy

VLANs

VLANs are created by the HyperFlex installer to support a base HyperFlex system, with a single VLAN defined for guest VM traffic, and a VLAN for vMotion. Names and IDs for the VLANs are defined in the Cisco UCS configuration page of the HyperFlex installer web interface.

The VLANs listed in Cisco UCS must already be present on the upstream network, and the Cisco UCS FIs do not participate in VLAN Trunk Protocol (VTP). The following table and figure detail the VLANs configured for HyperFlex:

Table 14 Cisco UCS VLANs

Each of the four HyperFlex VLANs (names and IDs are user-defined during installation) is created with Type: LAN, Transport: Ether, Native: No, and VLAN Sharing: None.

Figure 24 Cisco UCS VLANs

Management IP Address Pool

A Cisco UCS Management IP Address Pool must be populated with a block of IP addresses. These IP addresses are assigned to the Cisco Integrated Management Controller (CIMC) interface of the rack-mount and blade servers that are managed in the Cisco UCS domain. The IP addresses are the communication endpoints for various functions, such as remote KVM, virtual media, Serial over LAN (SoL), and Intelligent Platform Management Interface (IPMI), for each rack-mount or blade server. Therefore, a minimum of one IP address per physical server in the domain must be provided. The IP addresses are considered to be “out-of-band” addresses, meaning that the communication pathway uses the Fabric Interconnects’ mgmt0 ports, which answer ARP requests for the management addresses.

Because of this arrangement, the IP addresses in this pool must be in the same IP subnet as the IP addresses assigned to the Fabric Interconnects’ mgmt0 ports. The default pool, named “ext-mgmt”, is populated with a block of IP addresses, a subnet mask, and a default gateway by the HyperFlex installer.

Figure 25 Management IP Address Pool

Figure 26 IP Address Block

MAC Address Pools

One of the core benefits of the Cisco UCS and Virtual Interface Card (VIC) technology is the assignment of the personality of the card via Cisco UCS Service Profiles. The number of virtual NIC (vNIC) interfaces, their VLAN association, MAC addresses, QoS policies and more are all applied dynamically as part of the association process. Media Access Control (MAC) addresses use 6 bytes of data as a unique address to identify the interface on the layer 2 network. All devices are assigned a unique MAC address, which is ultimately used for all data transmission and reception. Cisco UCS Manager picks a MAC address from a pool of addresses for each vNIC defined in the service profile, when that service profile is created.

The MAC addresses are then configured on the Cisco VIC cards when the service profile is assigned to a physical server. Best practices mandate that MAC addresses used for Cisco UCS domains use 00:25:B5 as the first three bytes, which is one of the Organizationally Unique Identifiers (OUI) registered to Cisco Systems, Inc. The fourth byte (for example, 00:25:B5: xx) is specified during the HyperFlex installation. The fifth byte is set automatically by the HyperFlex installer, to correlate to the Cisco UCS fabric and the vNIC placement order. Finally, the last byte is incremented according to the number of MAC addresses created in the pool. To avoid overlaps, you must ensure that the first four bytes of the MAC address pools are unique for each HyperFlex system installed in the same layer 2 network, and also different from other Cisco UCS domains which may exist.
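The MAC address structure described above can be expressed as a short generator. This is an illustrative sketch only; the actual pool blocks are created by the HyperFlex installer inside Cisco UCS Manager, and the fourth- and fifth-byte values shown here are hypothetical placeholders rather than the installer's real encoding.

```python
def mac_pool(fourth_byte, fifth_byte, count):
    """Generate MAC addresses of the form 00:25:B5:<xx>:<yy>:<nn>.

    00:25:B5 is a Cisco OUI, <xx> is chosen at install time per HyperFlex system,
    <yy> distinguishes the fabric and vNIC placement, and <nn> simply increments.
    """
    return [f"00:25:B5:{fourth_byte:02X}:{fifth_byte:02X}:{n:02X}" for n in range(1, count + 1)]

# Example: a hypothetical pool for one vNIC template on fabric A of system 0xA8.
print(mac_pool(0xA8, 0x01, 4))
# ['00:25:B5:A8:01:01', '00:25:B5:A8:01:02', '00:25:B5:A8:01:03', '00:25:B5:A8:01:04']
```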

The following table details the MAC Address Pools configured for HyperFlex, and their default assignment to the vNIC templates created.

The Network Control Policies configured for HyperFlex, and their assignment to the vNIC templates, are as follows:

Name | CDP | MAC Register Mode | Action on Uplink Fail | MAC Security | Used by vNIC Templates
HyperFlex-infra | Enabled | Only Native VLAN | Link-down | Forged: Allow | hv-mgmt-a, hv-mgmt-b, hv-vmotion-a, hv-vmotion-b, storage-data-a, storage-data-b
HyperFlex-vm | Enabled | Only Native VLAN | Link-down | Forged: Allow | vm-network-a, vm-network-b

Figure 28 Network Control Policy

vNIC Templates

Cisco UCS Manager has a feature to configure vNIC templates, which can be used to simplify and speed up configuration efforts. vNIC templates are referenced in service profiles and LAN connectivity policies, versus configuring the same vNICs individually in each service profile, or service profile template. vNIC templates contain all the configuration elements that make up a vNIC, including VLAN assignment, MAC address pool selection, fabric A or B assignment, fabric failover, MTU, QoS policy, Network Control Policy, and more.

Templates are created as either initial templates or updating templates. Updating templates retain a link between the parent template and the child object; therefore, when changes are made to the template, the changes are propagated to all remaining linked child objects. The following tables detail the settings in each of the vNIC templates created by the HyperFlex installer.

vNIC Template Name: vm-network-b
- Fabric ID: B
- Fabric Failover: Disabled
- Target: Adapter
- Type: Updating Template
- MTU: 1500
- MAC Pool: vm-network-b
- QoS Policy: gold
- Network Control Policy: HyperFlex-vm
- VLANs: user-defined VM network VLAN; Native: no

LAN Connectivity Policies

Cisco UCS Manager has a feature for LAN Connectivity Policies, which aggregates all of the vNICs or vNIC templates desired for a service profile configuration into a single policy definition. This simplifies configuration efforts by defining a collection of vNICs or vNIC templates once, and using that policy in the service profiles or service profile templates.

The HyperFlex installer configures a LAN Connectivity Policy named HyperFlex, which contains all of the vNIC templates defined in the previous section, along with an Adapter Policy named HyperFlex, also configured by the HyperFlex installer. The following table details the LAN Connectivity Policy configured for HyperFlex.

Policy Name: HyperFlex
Adapter Policy: HyperFlex
Use vNIC Template: Yes
vNIC Name / vNIC Template Used: hv-mgmt-a / hv-mgmt-a, hv-mgmt-b / hv-mgmt-b, hv-vmotion-a / hv-vmotion-a, hv-vmotion-b / hv-vmotion-b, storage-data-a / storage-data-a, storage-data-b / storage-data-b, vm-network-a / vm-network-a, vm-network-b / vm-network-b

Adapter Policies

Cisco UCS Adapter Policies are used to configure various settings of the Converged Network Adapter (CNA) installed in the Cisco UCS blade or rack-mount servers. Various advanced hardware features can be enabled or disabled depending on the software or operating system being used.

The following figures detail the Adapter Policy configured for HyperFlex:

Figure 29 Cisco UCS Adapter Policy Resources

Figure 30 Cisco UCS Adapter Policy Options

BIOS Policies

Cisco HX-Series servers have a set of pre-defined BIOS setting defaults defined in Cisco UCS Manager. These settings have been optimized for the Cisco HX-Series servers running HyperFlex. The HyperFlex installer creates a BIOS policy named “HyperFlex”, with all settings set to the defaults, except for enabling Serial Port A for Serial over LAN (SoL) functionality. This policy allows for future flexibility in case situations arise where the settings need to be modified from the default configuration.

Boot Policies

Cisco UCS Boot Policies define the boot devices used by blade and rack-mount servers, and the order in which they are attempted. Cisco HX-Series rack-mount servers and compute-only B200 M4 blade servers have their VMware ESXi hypervisors installed to an internal pair of mirrored Cisco FlexFlash SD cards, therefore they require a boot policy defining that the servers should boot from that location.

The HyperFlex installer configures a boot policy named “HyperFlex” which specifies boot from SD card. The following figure details the HyperFlex Boot Policy configured to boot from SD card: Figure 31 Cisco UCS Boot Policy. A compute-only B200 M4 blade server can optionally be configured to boot from an FC based SAN LUN. A custom boot policy must be made to support this configuration, and used in the service profiles and templates assigned to the B200 M4 blades.

Host Firmware Packages Cisco UCS Host Firmware Packages represent one of the most powerful features of the Cisco UCS platform; the ability to control the firmware revision of all the managed blades and rack-mount servers via a policy specified in the service profile. Host Firmware Packages are defined and referenced in the service profiles and once a service profile is associated to a server, the firmware of all the components defined in the Host Firmware Package are automatically upgraded or downgraded to match the package. The HyperFlex installer creates a Host Firmware Package named “HyperFlex” which uses the simple package definition method, applying firmware revisions to all components that match a specific Cisco UCS firmware bundle, versus defining the firmware revisions part by part. The following figure details the Host Firmware Package configured by the HyperFlex installer: Figure 32 Cisco UCS Host Firmware Package Local Disk Configuration Policies Cisco UCS Local Disk Configuration Policies are used to define the configuration of disks installed locally within each blade or rack-mount server, most often to configure Redundant Array of Independent/Inexpensive Disks (RAID) levels when multiple disks are present for data protection. Since HX-Series converged nodes providing storage resources do not require RAID, the HyperFlex installer creates a Local Disk Configuration Policy named “HyperFlex” which allows any local disk configuration. The policy also enables the embedded FlexFlash SD cards used to boot the VMware ESXi hypervisor.

The following figure details the Local Disk Configuration Policy configured by the HyperFlex installer: Figure 33 Cisco UCS Local Disk Configuration Policy Maintenance Policies Cisco UCS Maintenance Policies define the behavior of the attached blades and rack-mount servers when changes are made to the associated service profiles. The default Cisco UCS Maintenance Policy setting is “Immediate” meaning that any change to a service profile that requires a reboot of the physical server will result in an immediate reboot of that server. The Cisco best practice is to use a Maintenance Policy set to “user-ack”, which requires a secondary acknowledgement by a user with the appropriate rights within Cisco UCS, before the server is rebooted to apply the changes.

The HyperFlex installer creates a Maintenance Policy named “HyperFlex” with the setting changed to “user-ack”. The following figure details the Maintenance Policy configured by the HyperFlex installer: Figure 34 Cisco UCS Maintenance Policy Power Control Policies Cisco UCS Power Control Policies allow administrators to set priority values for power application to servers in environments where power supply may be limited, during times when the servers demand more power than is available. The HyperFlex installer creates a Power Control Policy named “HyperFlex” with all power capping disabled, and fans allowed to run at full speed when necessary. The following figure details the Power Control Policy configured by the HyperFlex installer: Figure 35 Cisco UCS Power Control Policy Scrub Policies Cisco UCS Scrub Policies are used to scrub, or erase data from local disks, BIOS settings and FlexFlash SD cards. If the policy settings are enabled, the information is wiped when the service profile using the policy is disassociated from the server.

The HyperFlex installer creates a Scrub Policy named “HyperFlex” which has all settings disabled, therefore all data on local disks, SD cards and BIOS settings will be preserved if a service profile is disassociated. The following figure details the Scrub Policy configured by the HyperFlex installer: Figure 36 Cisco UCS Scrub Policy Serial over LAN Policies Cisco UCS Serial over LAN (SoL) Policies enable console output which is sent to the serial port of the server, to be accessible via the LAN. For many Linux based operating systems, such as VMware ESXi, the local serial port can be configured as a local console, where users can watch the system boot, and communicate with the system command prompt interactively.

Since many blade servers do not have physical serial ports, and often administrators are working remotely, the ability to send and receive that traffic via the LAN is very helpful. Connections to a SoL session can be initiated from Cisco UCS Manager. The HyperFlex installer creates a SoL named “HyperFlex” to enable SoL sessions. The following figure details the SoL Policy configured by the HyperFlex installer: Figure 37 Cisco UCS Serial over LAN Policy vMedia Policies Cisco UCS Virtual Media (vMedia) Policies automate the connection of virtual media files to the remote KVM session of the Cisco UCS blades and rack-mount servers. Using a vMedia policy can speed up installation time by automatically attaching an installation ISO file to the server, without having to manually launch the remote KVM console and connect them one-by-one. The HyperFlex installer creates a vMedia Policy named “HyperFlex” for future use, with no media locations defined. The following figure details the vMedia Policy configured by the HyperFlex installer: Figure 38 Cisco UCS vMedia Policy Cisco UCS Manager has a feature to configure service profile templates, which can be used to simplify and speed up configuration efforts when the same configuration needs to be applied to multiple servers.

Service profile templates are used to spawn multiple service profile copies to associate with the servers, versus configuring the same service profile manually each time it is needed. Service profile templates contain all the configuration elements that make up a service profile, including vNICs, vHBAs, local disk configurations, boot policies, host firmware packages, BIOS policies and more. Templates are created as either initial templates, or updating templates. Updating templates retain a link between the parent template and the child object, therefore when changes are made to the template, the changes are propagated to all remaining linked child objects.

The HyperFlex installer creates two service profile templates, named “hx-nodes” and “compute-nodes”, each with the same configuration. This simplifies future efforts, if the configuration of the compute only nodes needs to differ from the configuration of the HyperFlex converged storage nodes. The following table details the service profile templates configured by the HyperFlex installer. ESXi VMDirectPath relies on a fixed PCI address for the pass-through devices. If the vNIC configuration is changed (add/remove vNICs), then the order of the devices seen in the PCI tree will change. The administrator will have to reconfigure the ESXi VMDirectPath configuration to select the 12 Gbps SAS HBA card, and reconfigure the storage controller settings of the controller VM. The following sections detail the design of the elements within the VMware ESXi hypervisors, system requirements, virtual networking and the configuration of ESXi for the Cisco HyperFlex HX Distributed Data Platform.

The Cisco HyperFlex system has a pre-defined virtual network design at the ESXi hypervisor level. Four different virtual switches are created by the HyperFlex installer, each using two uplinks, which are each serviced by a vNIC defined in the Cisco UCS service profile. The vSwitches created are: vswitch-hx-inband-mgmt: This is the default vSwitch0 which is renamed by the ESXi kickstart file as part of the automated installation.

The default vmkernel port, vmk0, is configured in the standard Management Network port group. The switch has two uplinks, active on fabric A and standby on fabric B, without jumbo frames. A second port group is created for the Storage Platform Controller VMs to connect to with their individual management interfaces. By default, all traffic is untagged. vswitch-hx-storage-data: This vSwitch is created as part of the automated installation. A vmkernel port, vmk1, is configured in the Storage Hypervisor Data Network port group, which is the interface used for connectivity to the HX Datastores via NFS.

The switch has two uplinks, active on fabric B and standby on fabric A, with jumbo frames required. A second port group is created for the Storage Platform Controller VMs to connect to with their individual storage interfaces. By default, all traffic is untagged. vswitch-hx-vm-network: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on both fabrics A and B, and without jumbo frames. The HyperFlex installer does not configure any port groups by default, as it is expected to be done as part of a post-install step.

By default, all traffic to this vSwitch is tagged. vmotion: This vSwitch is created as part of the automated installation. A vmkernel port used for vMotion is not created, as it is expected to be done as part of a post-install step. The switch has two uplinks, active on fabric A and standby on fabric B, with jumbo frames required. By default, all traffic is untagged.
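After installation, the vSwitch layout described above can be spot-checked from the ESXi shell of any node; the following is a minimal verification sketch (command output formatting varies by ESXi release):

# esxcli network vswitch standard list
# esxcli network vswitch standard portgroup list
# esxcli network ip interface list

The first command should list the four vSwitches with two uplinks each, the second shows their port groups and VLAN IDs, and the third shows the vmk0 management and vmk1 storage vmkernel interfaces.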

The following table and figures help give more details into the ESXi virtual networking design as built by the HyperFlex installer. B200 M4 compute-only blades also place a lightweight storage controller VM on a 3.5 GB VMFS datastore, provisioned from the boot drive. CPU Resource Reservations Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure CPU resource reservations for the controller VMs.

This reservation guarantees that the controller VMs will have CPU resources at a minimum level, in situations where the physical CPU resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. The following table details the CPU resource reservation of the storage controller VMs.

Number of vCPU: 8
Shares: Low
Reservation: 10800 MHz
Limit: unlimited

Memory Resource Reservations Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure memory resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have memory resources at a minimum level, in situations where the physical memory resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. The following table details the memory resource reservation of the storage controller VMs. Cisco HyperFlex systems are normally ordered with a factory pre-installation process having been done prior to the hardware delivery.

This factory integration work will deliver the HyperFlex servers with the proper firmware revisions pre-set, a copy of the VMware ESXi hypervisor software pre-installed, and some components of the Cisco HyperFlex software already installed. Once on site, the final steps to be performed by Cisco Advanced Services or our Cisco Partner companies’ technical staff are reduced and simplified due to the previous factory work. For the purpose of this document, the entire setup process is described as though no factory pre-installation work was done, yet still leveraging the tools and processes developed by Cisco to simplify the installation and dramatically reduce the deployment time. Installation of the Cisco HyperFlex system is primarily done via a deployable HyperFlex installer virtual machine, available for download at cisco.com as an OVA file. The installer VM performs most of the Cisco UCS configuration work, and it is used to simplify the installation of ESXi on the HyperFlex hosts. The HyperFlex installer VM also installs the HyperFlex HX Data Platform software and creates the HyperFlex cluster, while concurrently performing many of the ESXi configuration tasks automatically.

Because this simplified installation method has been developed by Cisco, this CVD will not give detailed manual steps for the configuration of all the elements that are handled by the installer. The following sections will guide you through the prerequisites and manual steps needed prior to using the HyperFlex installer, how to utilize the HyperFlex Installer, and finally how to perform the remaining post-installation tasks. Prior to beginning the installation activities, it is important to gather the following information: IP addresses for the Cisco HyperFlex system need to be allocated from the appropriate subnets and VLANs to be used. IP addresses that are used by the system fall into the following groups: Cisco UCS Manager: These addresses are used and assigned by Cisco UCS manager. Three IP addresses are used by Cisco UCS Manager, one address is assigned to each Cisco UCS Fabric Interconnect, and the third IP address is a roaming address for managing the active FI of the Cisco UCS cluster.

In addition, at least one IP address per Cisco UCS blade or HX-series rack-mount server is required for the default ext-mgmt IP address pool, which is assigned to the CIMC interface of the physical servers. Since these management addresses are assigned from a pool, they need to be provided in a contiguous block of addresses. These addresses must all be in the same subnet.

HyperFlex and ESXi Management: These addresses are used to manage the ESXi hypervisor hosts, and the HyperFlex Storage Platform Controller VMs. Two IP addresses per node in the HyperFlex cluster are required, and a single additional IP address is needed as the roaming HyperFlex cluster management interface from the same subnet. These addresses can be assigned from the same subnet as the Cisco UCS Manager addresses, or they may be separate. HyperFlex Storage: These addresses are used by the HyperFlex Storage Platform Controller VMs, as vmkernel interfaces on the ESXi hypervisor hosts, for sending and receiving data to/from the HX Distributed Data Platform Filesystem.

Two IP addresses per node in the HyperFlex cluster are required, and a single additional IP address is needed as the roaming HyperFlex cluster storage interface from the same subnet. It is recommended to provision a subnet that is not used in the network for other purposes, and it is also possible to use non-routable IP address ranges for these interfaces. Finally, if the Cisco UCS domain is going to contain multiple HyperFlex clusters, it is possible to use a different subnet and VLAN ID for the HyperFlex storage traffic for each cluster. This is a safer method, guaranteeing that storage traffic from multiple clusters cannot intermix.

VMotion: These IP addresses are used by the ESXi hypervisor hosts as vmkernel interfaces to enable vMotion capabilities. One or more IP addresses per node in the HyperFlex cluster are required from the same subnet.

Multiple addresses and vmkernel interfaces can be used if you wish to enable multi-nic vMotion. The following tables will assist with gathering the required IP addresses for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the addresses required, and an example configuration that will be used during the setup steps in this CVD.

Each worksheet has one column per address group (UCS Management, HyperFlex and ESXi Management, HyperFlex Storage, and VMotion), with rows to record the VLAN ID, Subnet, Subnet Mask, and Gateway of each group. Below those are per-device rows to record the UCS Management Addresses, ESXi Management Interface, Storage Controller Management Interface, ESXi Hypervisor Storage vmkernel Interface, Storage Controller Storage Interface, and VMotion vmkernel Interface. For the 8 node standard cluster, the device rows are: Fabric Interconnect A, Fabric Interconnect B, UCS Manager, HyperFlex Cluster, and HyperFlex Node #1 through HyperFlex Node #8. For the 4+4 hybrid cluster, the device rows are: Fabric Interconnect A, Fabric Interconnect B, UCS Manager, HyperFlex Cluster, HyperFlex Node #1 through HyperFlex Node #4, and Blade #1 through Blade #4.

Table cells shaded in black do not require an IP address. The Cisco UCS Management, and HyperFlex and ESXi Management IP addresses can come from the same subnet, or be separate, as long as the HyperFlex installer can reach them both.

By default, the ESXi hypervisor installation is configured to use Dynamic Host Configuration Protocol (DHCP) for automatic IP address assignment to the default ESXi management interfaces. In many environments, these management interfaces are configured manually, and this CVD will document that procedure. However, if it is preferred to use DHCP, you may do so following these guidelines: Configure a DHCP server with the appropriate scope for the subnet containing the ESXi hypervisor management interfaces.

Configure a listening interface of the DHCP server in the same subnet as the ESXi hypervisor management interfaces, or configure a DHCP relay agent on the subnet’s router to forward the DHCP requests to the remote server. Configure a DHCP reservation for each ESXi hypervisor host, so that each host will always be offered the same IP address lease. The MAC addresses for the ESXi hypervisor management vNICs can be found in Cisco UCS Manager, and exported to a file to ease the task of creating the reservations. Figure 43 Export MAC Addresses.

An example PowerShell script for creating the DHCP reservations from the Cisco UCS Manager CSV export file on a Windows DHCP server is included in the appendix: DNS servers must be available to query in the HyperFlex and ESXi Management group. DNS records need to be created prior to beginning the installation. At a minimum, it is highly recommended to create records for the ESXi hypervisor hosts’ management interfaces. Additional records can be created for the Storage Controller Management interfaces, ESXi Hypervisor Storage interfaces, and the Storage Controller Storage interfaces if desired.
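Before running the installer, the forward and reverse DNS records can be verified from any workstation or from the installer VM shell; for example (the hostname and address below are placeholders for this illustration, not values defined elsewhere in this document):

# nslookup hx220-01.hx.lab.cisco.com
# nslookup 192.168.10.11

Both queries should succeed against the DNS servers that will be supplied to the HyperFlex installer.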

The following tables will assist with gathering the required DNS information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration that will be used during the setup steps in this CVD.

NTP Server #1: 192.168.0.4
NTP Server #2:
Timezone: (UTC-8:00) Pacific Time

Prior to the installation, the required VLAN IDs need to be documented, and created in the upstream network if necessary.

At a minimum there are 4 VLANs that need to be trunked to the Cisco UCS Fabric Interconnects that comprise the HyperFlex system; a VLAN for the HyperFlex and ESXi Management group, a VLAN for the HyperFlex Storage group, a VLAN for the VMotion group, and at least one VLAN for the guest VM traffic. The VLAN IDs must be supplied during the HyperFlex Cisco UCS configuration step, and the VLAN names can optionally be customized. The following tables will assist with gathering the required VLAN information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration that will be used during the setup steps in this CVD.

Name / ID:
hx-inband-mgmt: 10
hx-storage-data: 51
vm-prod-100: 100
hx-vmotion: 200

The Cisco UCS uplink connectivity design needs to be finalized prior to beginning the installation. One of the early manual tasks to be completed is to configure the Cisco UCS network uplinks and verify their operation, prior to beginning the HyperFlex installation steps. Refer to the network uplink design possibilities in the Network Design section. The following tables will assist with gathering the required network uplink information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration that will be used during the setup steps in this CVD: Several usernames and passwords need to be defined or known as part of the HyperFlex installation process. The following tables will assist with gathering the required username and password information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration that will be used during the setup steps in this CVD.
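If the upstream network uses Cisco Nexus switches, the VLANs listed above can be defined and trunked to the Fabric Interconnect uplinks with configuration similar to the following sketch (the port-channel number and member interfaces are examples only and must match the chosen uplink design; a matching trunk is needed toward each Fabric Interconnect):

vlan 10
  name hx-inband-mgmt
vlan 51
  name hx-storage-data
vlan 100
  name vm-prod-100
vlan 200
  name hx-vmotion
interface port-channel10
  description Uplink to HX Fabric Interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 10,51,100,200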

To support Intel E5-26xx v4 processors, the Cisco UCS Infrastructure software, B-Series and C-Series bundles must be upgraded to revision 2.2(7c). To synchronize the Cisco UCS environment time to the NTP server, complete the following steps: 1. In Cisco UCS Manager, click the Admin tab in the navigation pane. In the navigation pane, select All >Timezone Management.

In the Properties pane, select the appropriate time zone in the Timezone menu. Click Add NTP Server. Enter the NTP server IP address and click OK. Click Save Changes, and then click OK. Figure 45 NTP Settings The Ethernet ports of a Cisco UCS 6248UP Fabric Interconnect are all capable of performing several functions, such as network uplinks or server ports, and more. By default, all ports are unconfigured, and their function must be defined by the administrator. To define the specified ports to be used as network uplinks to the upstream network, complete the following steps: 1.

In Cisco UCS Manager, click the Equipment tab in the navigation pane. Select Fabric Interconnects > Fabric Interconnect A > Fixed Module or Expansion Module 2 as appropriate > Ethernet Ports. 3. Select the ports that are to be uplink ports, right-click them, and click Configure as Uplink Port.

Click Yes to confirm the configuration, and click OK. Select Fabric Interconnects > Fabric Interconnect B > Fixed Module or Expansion Module 2 as appropriate > Ethernet Ports. 6. Select the ports that are to be uplink ports, right-click them, and click Configure as Uplink Port. Click Yes to confirm the configuration, and click OK. Verify all the necessary ports are now configured as uplink ports; their role will be listed as network. Figure 46 Cisco UCS Uplink Ports If the Cisco UCS uplinks from one Fabric Interconnect are to be combined into a port channel or vPC, you must separately configure the port channels using the previously configured uplink ports. To configure the necessary port channels in the Cisco UCS environment, complete the following steps: 1.

In Cisco UCS Manager, click the LAN tab in the navigation pane. Under LAN >LAN Cloud, click to expand the Fabric A tree. Right-click Port Channels underneath Fabric A and click Create Port Channel. Enter the port channel ID number as the unique ID of the port channel. Enter the name of the port channel.

Click on each port from Fabric Interconnect A that will participate in the port channel, and click the >> button to add them to the port channel. Click Finish. Under LAN > LAN Cloud, click to expand the Fabric B tree. Right-click Port Channels underneath Fabric B and click Create Port Channel.

Enter the port channel ID number as the unique ID of the port channel. Enter the name of the port channel. Click on each port from Fabric Interconnect B that will participate in the port channel, and click the >> button to add them to the port channel. Click Finish. Verify the necessary port channels have been created. It can take a few minutes for the newly formed port channels to converge and come online.
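For reference, the NTP, uplink port, and port channel tasks above can also be performed from the Cisco UCS Manager CLI. The following is a hedged sketch only (the NTP server address, slot/port numbers, and port channel ID are examples; repeat the uplink and port channel steps on fabric B):

UCS-A# scope system
UCS-A /system # scope services
UCS-A /system/services # create ntp-server 192.168.0.4
UCS-A /system/services* # commit-buffer
UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create interface 1 25
UCS-A /eth-uplink/fabric/interface* # commit-buffer
UCS-A# scope eth-uplink
UCS-A /eth-uplink # scope fabric a
UCS-A /eth-uplink/fabric # create port-channel 10
UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 25
UCS-A /eth-uplink/fabric/port-channel* # commit-buffer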

If the Cisco HyperFlex system will use B200 M4 blades as compute-only nodes in a hybrid design, additional settings must be configured for connecting the Cisco UCS 5108 blade chassis. The Chassis Discovery policy defines the number of links between the Fabric Interconnect and the Cisco UCS 2204XP Fabric Extenders which must be connected and active, before the chassis will be discovered. This also effectively defines how many of those connected links will be used for communication. The Link Grouping Preference setting specifies if the links will operate independently, or if Cisco UCS Manager will automatically combine them into port-channels. Cisco best practices recommends using link grouping, and 4 links per side per Cisco UCS 5108 chassis. To configure the necessary policy and setting, complete the following steps: 1.

In Cisco UCS Manager, click the Equipment tab in the navigation pane, and click on Equipment in the top of the navigation tree on the left. In the properties pane, click Policies tab. Click the Global Policies sub-tab, set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled per side, between the chassis and the Fabric Interconnects. Set the Link Grouping Preference option to Port Channel. Click Save Changes. The Ethernet ports of a Cisco UCS Fabric Interconnect connected to the rack-mount servers, or to the blade chassis must be defined as server ports.

Once a server port is activated, the connected server or chassis will begin the discovery process shortly afterwards. Rack-mount servers and blade chassis are automatically numbered in the order which they are first discovered. For this reason, it is important to configure the server ports sequentially in the order you wish the physical servers and/or chassis to appear within Cisco UCS manager. For example, if you installed your servers in a cabinet or rack with server #1 on the bottom, counting up as you go higher in the cabinet or rack, then you need to enable the server ports to the bottom-most server first, and enable them one-by-one as you move upward. You must wait until the server appears in the Equipment tab of Cisco UCS Manager before configuring the ports for the next server. The same numbering procedure applies to blade server chassis. To define the specified ports to be used as server ports, complete the following steps: 1.

In Cisco UCS Manager, click the Equipment tab in the navigation pane. Select Fabric Interconnects >Fabric Interconnect A >Fixed Module or Expansion Module 2 as appropriate >Ethernet Ports. Select the first port that is to be a server port, right click it, and click Configure as Server Port. Click Yes to confirm the configuration, and click OK.

Select Fabric Interconnects >Fabric Interconnect B >Fixed Module or Expansion Module 2 as appropriate >Ethernet Ports. Select the matching port as chosen for Fabric Interconnect A that is to be a server port, right click it, and click Configure as Server Port.

Click Yes to confirm the configuration, and click OK. Wait for a brief period, until the rack-mount server appears in the Equipment tab underneath Equipment >Rack Mounts >Servers, or the chassis appears underneath Equipment >Chassis. Repeat Steps 2-8 for each server port, until all rack-mount servers and chassis appear in the order desired in the Equipment tab. Server Discovery As previously described, once the server ports of the Fabric Interconnects are configured and active, the servers connected to those ports will begin a discovery process.

During discovery the servers’ internal hardware inventories are collected, along with their current firmware revisions. Before continuing with the HyperFlex installation processes, which will create the service profiles and associate them with the servers, wait for all of the servers to finish their discovery process and to show as unassociated servers that are powered off, with no errors. To view the servers’ discovery status, complete the following steps: 1. In Cisco UCS Manager, click the Equipment tab in the navigation pane, and click on Equipment in the top of the navigation tree on the left. In the properties pane, click Servers tab. Click the Blade Servers or Rack-Mount Servers sub-tab as appropriate, and view the servers’ status in the Overall Status column.
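The discovery and association state of the servers can also be checked from the Cisco UCS Manager CLI; for example (a hedged sketch, and the output columns vary by Cisco UCS Manager release):

UCS-A# show server status
UCS-A# show server inventory

Wait until every HX-Series server shows a completed discovery and an unassociated, powered-off state with no errors before proceeding.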

HyperFlex Installer Deployment The Cisco HyperFlex software is distributed as a deployable virtual machine, contained in an Open Virtual Appliance (OVA) file format. The HyperFlex OVA file is available for download at cisco.com: This document is based on the Cisco HyperFlex 1.7.1 release filename: Cisco-HX-Data-Platform-Installer-v1.7.1-14786.ova. The HyperFlex installer OVA file can be deployed as a virtual machine in an existing VMware vSphere environment, VMware Workstation, VMware Fusion, or other virtualization environment which supports the import of OVA format files. For the purpose of this document, the process described uses an existing ESXi server to run the HyperFlex installer OVA, deploying it via the VMware vSphere (thick) client.
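If the vSphere (thick) client is not available, the installer OVA can also be deployed from a command line using the VMware OVF Tool; the following is a hedged sketch (the target host, credentials, datastore, and port group names are placeholders):

# ovftool --acceptAllEulas --name=HX-Installer --datastore=datastore1 --diskMode=thin --network="VM Network" --powerOn Cisco-HX-Data-Platform-Installer-v1.7.1-14786.ova vi://root@esxi-host.hx.lab.cisco.com/

When deploying to a vCenter server instead of a single ESXi host, the vi:// locator would also include the datacenter and cluster path.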

The Cisco HyperFlex Installer VM must be deployed in a location that has connectivity to the following network locations and services: Connectivity to the vCenter Server or vCenter Appliance which will manage the HyperFlex cluster(s) to be installed. Connectivity to the management interfaces of the Fabric Interconnects that contain the HyperFlex cluster(s) to be installed.

Connectivity to the management interface of the ESXi hypervisor hosts which will host the HyperFlex cluster(s) to be installed. Connectivity to the DNS server(s) which will resolve host names used by the HyperFlex cluster(s) to be installed. Connectivity to the NTP server(s) which will synchronize time for the HyperFlex cluster(s) to be installed.

Connectivity from the staff operating the installer to the webpage hosted by the installer, and to log in to the installer via SSH. If the network where the HyperFlex installer VM is deployed has DHCP services available to assign the proper IP address, subnet mask, default gateway, and DNS servers, the HyperFlex installer can be deployed using DHCP. If a static address must be defined, use the following table to document the settings to be used for the HyperFlex installer VM.

Setting Value Hostname IP Address Subnet Mask Default Gateway DNS Server #1 DNS Server #2 To deploy the HyperFlex installer OVA, complete the following steps: 1. Open the vSphere (thick) client, connect and log in to an ESXi host or vCenter server where the installer OVA will be deployed.

Click File >Deploy OVF Template. Click Browse and locate the Cisco-HX-Data-Platform-Installer-v1.7.1-14786.ova file, click on the file and click Open. Click Next twice. Modify the name of the virtual machine to be created if desired, and click on a folder location to place the virtual machine, and then click Next. Click on a specific host or cluster to locate the virtual machine and click Next.

Click on the Thin Provision option and click Next. Modify the network port group selection from the drop down list in the Destination Networks column, choosing the network the installer VM will communicate on, and click Next. If DHCP is to be used for the installer VM, leave the fields blank and click Next. If static address settings are to be used, fill in the fields for the installer VM hostname, default gateway, DNS server, IP address, and subnet mask, then click Next. ( Figure 51) 10. Check the box to power on after deployment, and click Finish.

(Figure 52) The installer VM will take a few minutes to deploy; once it has deployed and the virtual machine has started, proceed to the next step. HyperFlex Installer Web Page The HyperFlex installer is accessed via a webpage using your local computer and a web browser. If the HyperFlex installer was deployed with a static IP address, then the IP address of the website is already known. If DHCP was used, open the local console of the installer VM.

In the console you will see an interface similar to the example below, showing the IP address that was leased:

Data Platform Installer.
*******************************************
You can start the installation by visiting the following URL:
*******************************************
Cisco-HX-Data-Platform-Installer login:

To access the HyperFlex installer webpage, complete the following steps: 1. Open a web browser on the local computer and navigate to the IP address of the installer VM. For example, open 2. Click Accept or Continue to bypass any SSL certificate errors. At the security prompt, enter the username: admin 4.

At the security prompt, enter the password: Cisco123 5. Verify the version of the installer in the lower right-hand corner of the Welcome page is the correct version, and click Continue. Read and accept the End User Licensing Agreement, and click Continue. The first step of the HyperFlex installation is to use the HyperFlex installer webpage to perform the remaining Cisco UCS configuration tasks.

The HyperFlex installer will perform the following tasks within Cisco UCS: Create the “hx-cluster” sub-organization. Configure QoS system classes.

Create the VLANs. Create the out-of-band management IP address block. Set the HX-Series servers’ FlexFlash cards in a mirrored configuration. Create the required pools and policies used by HyperFlex.

Create the service profile templates and service profiles for the HX-Series servers. Associate the service profiles with the HX-Series servers. To configure the Cisco UCS settings, policies, and templates for HyperFlex, complete the following steps: 1. Open the HyperFlex installer webpage ().

Click Configure UCS Manager. (Figure 54) 3. Enter the Cisco UCS manager DNS hostname or IP address, the admin username, and the password, then click Continue. ( Figure 55) 4. On the following page, click the boxes next to the discovered and unassociated HX-Series servers which will be used to build the HyperFlex cluster(s). ( Figure 56) 5.

Enter a custom 4th byte for the MAC address pools. MAC addresses can only be entered in hexadecimal format (00 to FF).

( Figure 57) 6. Click to expand the LAN Configuration section. Enter the desired VLAN names and VLAN IDs for the four required VLANs.

( Figure 58) 7. Enter the IP address range, subnet mask and default gateway values for the default “ext-mgmt” management IP address pool. ( Figure 59) 8.

Click to expand the Advanced Configuration section. Customize the HyperFlex Cluster Name value as desired. This text is entered as the User Label in Cisco UCS manager for the service profiles. ( Figure 60) 9.

Click Configure. On the following page, monitor the configuration process until it completes. Click Show Details to see detailed information on the steps being performed. The configuration will take a few minutes. ( Figure 61) To verify the HyperFlex installer has properly configured Cisco UCS Manager, complete the following steps: 1.

Click Launch UCS Manager from the Cisco UCS Manager Configuration summary page of the HyperFlex installer webpage, or open Cisco UCS manager as documented in. In Cisco UCS Manager, click Servers tab in the navigation pane. Expand Servers >Service Profiles >root >Sub-Organizations >hx-cluster and verify the service profiles were configured for the servers you selected in the configuration page during the previous task. ( Figure 62) 4. In Cisco UCS Manager, click the LAN tab in the navigation pane.

Click VLANs in the navigation tree, and verify in the configuration pane the VLANs have been created with the names and IDs specified in the previous task. ( Figure 63) Tagged VLANs Alternate Configuration The default Cisco UCS configuration uses native, or untagged traffic for the HyperFlex and ESXi management group, the HyperFlex storage group, and the VMotion network group. In some cases, the standards set at the customer data center may require that all traffic be tagged with the appropriate 802.1Q VLAN ID. In order to accommodate that requirement, changes must first be made in the Cisco UCS configuration, before the VLAN IDs can be properly assigned and used in later configuration steps. To modify the Cisco UCS settings for tagged VLANs, complete the following steps: 1.

In Cisco UCS Manager, click the LAN tab in the navigation pane. Expand LAN >Policies >root >Sub-Organizations >hx-cluster >vNIC-Templates. Click on the vNIC template named “hv-mgmt-a”. In the configuration pane, click Modify VLANs. In the Modify VLANs window, click the radio button in the Native VLAN column which is already selected, to clear or unselect that button, and click OK. ( Figure 64) 6.

Repeat steps 3-6 for each vNIC Template that is required to carry tagged traffic. For example, to have all vNICs carry tagged traffic, the changes must be made to the following vNIC templates: a. hv-mgmt-a b. hv-mgmt-b c. hv-vmotion-a d. hv-vmotion-b e. storage-data-a f. storage-data-b

ESXi Hypervisor Installation The HyperFlex installer VM can be used to build a custom ESXi installation ISO using a kickstart file, which automates the installation process of ESXi to the HyperFlex nodes. The HyperFlex system requires a Cisco custom ESXi ISO file to be used, which has Cisco hardware specific drivers pre-installed to ease the installation process, as detailed in section. The Cisco custom ESXi ISO file is available for download at cisco.com: This document is based on the Cisco custom ESXi 6.0 Update 1B ISO release filename: Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso. The kickstart process will automatically perform the following tasks with no user interaction required: Accept the End User License Agreement. Configure the root password to: Cisco123. Install ESXi to the internal mirrored Cisco FlexFlash SD cards.

Set the default management network to use vmnic0, and obtain an IP address via DHCP. Enable SSH access to the ESXi host. Enable the ESXi shell. Enable serial port com1 console access to facilitate Serial over LAN access to the host. Configure ESXi to always use the current hardware MAC address of the network interfaces, even if they change. Rename the default vSwitch0 to vswitch-hx-inband-mgmt.
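The kickstart file itself is supplied by Cisco inside the installer VM; for illustration only, a few of the settings listed above correspond to ESXi shell commands similar to the following (a hedged sketch, not the contents of the actual kickstart file):

# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/enable_esx_shell
# esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

The first two commands enable SSH and the ESXi shell, and the third causes ESXi to follow the current hardware MAC addresses of its network interfaces.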

To create the custom kickstart ESXi installation ISO file, complete the following steps: 1. Copy the base ISO file, Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso, to the HyperFlex installer VM using SCP, SFTP or any available method. Place the file in the /var/hyperflex/source-images folder. Log in to the HyperFlex installer VM via SSH, using PuTTY or a similar tool. The username is root and the password is Cisco123. 3.

Enter the following commands to integrate the kickstart file with the base ISO file, and create a new ISO file from them:

# cd /opt/springpath/install-esxi
# ./makeiso.sh /opt/springpath/install-esxi/cisco-hx-esxi-6.0.ks.cfg /var/hyperflex/source-images/Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso /var/www/localhost/images/HX-Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso

4. You will see output from the command as below:

mount: block device /var/hyperflex/source-images/Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso is write-protected, mounting read-only
Warning: creating filesystem that does not conform to ISO-9660.

A sample script exists in the HyperFlex installer VM to configure the ESXi hosts’ management interfaces via SoL. To use the script, log in to the installer VM via SSH and run /bin/setstaticip_ucs.py. Figure 74 ESXi Network Test. After the ESXi management network has been configured on each server, there are a few remaining configuration steps necessary before beginning the HyperFlex installation. These final steps prepare the ESXi hypervisor hosts with the remaining prerequisite settings, using the vCenter Web Client. Create vCenter Datacenter and Cluster In the vCenter server Web Client, create the Datacenter object and vSphere cluster where the HyperFlex hosts will be placed. To configure the vCenter Datacenter and cluster, complete the following steps: 1. Open the vCenter Web Client webpage and log in. For example, 2.

In the home pane, click Hosts and Clusters ( Figure 75) 3. In the Navigator pane, right-click on the vCenter server name, and click New Datacenter. Enter the datacenter name and click OK.

( Figure 76) 5. In the navigator pane, right-click the datacenter that was just created, and click New Cluster. Enter the cluster name, leave all other settings disabled or turned off, and click OK.

(Figure 77) Add ESXi Hosts to vCenter The ESXi hosts must be added to the vCenter server prior to beginning the HyperFlex installation. To add the ESXi hosts to the vCenter cluster, complete the following steps: 1. In the vSphere Web Client, right-click the cluster object where the HyperFlex nodes will reside and click Add Host. Enter the name of the host to be added and click Next. (Figure 78) 3. Enter the root username and password, and click Next. (Figure 79) 4.

Click Yes to accept the security certificate. Click Next after reviewing the host summary. Click on a license key to apply to the server, if applicable, and click Next. Leave the lockdown mode option set to disabled, which is the default, and click Next. On the summary page, click Finish. Repeat steps 1-8 for each additional host in the HyperFlex cluster.

Configure NTP NTP is required for proper operation of the HyperFlex environment. To configure NTP settings on the ESXi hosts, complete the following steps: 1. In the vSphere Web Client, right-click the ESXi host to be configured, and click Settings. Under the System section, click Time Configuration. Select the Use Network Time Protocol option. ( Figure 80) 4.

Change the NTP Service Startup Policy to Start and Stop with the Host. Enter the NTP server address(es) in the NTP Servers field. Click on the Start button. 7. Repeat steps 1-7 for each additional host in the HyperFlex cluster. Maintenance Mode The ESXi hosts must be placed into maintenance mode prior to beginning the HyperFlex installation. To place the ESXi hosts into maintenance mode, complete the following steps: 1. In the vSphere Web Client, right-click the ESXi host to be configured, click Maintenance Mode, and click Enter Maintenance Mode.

Repeat steps 1-2 for each additional host in the HyperFlex cluster. Figure 81 vCenter Enter Maintenance Mode. An example script to configure NTP, place the ESXi hosts into maintenance mode, and add them to vCenter is provided in the appendix: The installation of the HyperFlex platform software is done via the HyperFlex installer webpage, installed as documented in section. To deploy a HyperFlex cluster, complete the following steps: 1. Open the HyperFlex installer webpage (). Click the Configure Cluster button.

Enter the IP address values or DNS names in the first section. ( Figure 82) 4. Click the Add button if more than 4 hosts are being installed, until there are enough lines to accommodate the number of nodes being installed.

If IP addresses are entered into the first line, which has a grey background, the lines underneath it will auto-populate with incremented IP addresses. It is necessary to enter the gateway addresses for both fields, even though they are marked as optional. Enter the values in the Cisco HX Cluster section.

( Figure 83) 8. Enter the values in the Hypervisor Credentials section. Enter the desired password in the Controller VM section. Enter the values in the vCenter Configuration section. Enter the values in the System Services section.

Enter the values in the Auto Support section. ( Figure 85) 13. Click the arrow to expand the Advanced Configuration section. Check the box to Clean-Up Disk Partitions.

(Optional) If the management and/or storage interfaces have been configured to carry VLAN tagged traffic, as documented in, check the box to override the default network settings, then continue to step 14. If traffic is untagged, continue to step 16.

Enter the VLAN IDs for the management and storage networks if you are using tagged traffic, and optionally change the names of the vSwitches that are created. Click Validate. If the validation checks return any errors, re-check the values entered and correct as necessary, otherwise click Deploy. ( Figure 86). The cluster configuration values entered can be saved in a JSON file format for re-use at a later time by clicking the link Save Configuration File.

The HyperFlex installer combines many steps, configuring the ESXi hosts, rebooting them, deploying storage controller VMs, installing software and creating the cluster. The installation can take over 30 minutes to complete. Figure 87 HyperFlex Installer Deployment Progress Figure 88 HyperFlex Installer Creating Cluster Figure 89 HyperFlex Installation Complete When the HyperFlex installation has completed, continue with the post-installation tasks as documented in. If a hybrid cluster is being built, continue with the tasks documented in. The process to expand a HyperFlex cluster can be used to grow an existing HyperFlex cluster with additional converged storage nodes, or to expand an existing cluster with additional compute-only nodes to create a hybrid cluster.
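When the installer reports that the cluster has been created, the cluster state can also be confirmed from the command line of any storage controller VM; for example (a hedged sketch, and the output format varies by HX Data Platform release):

# stcli cluster info

The command reports the cluster state, node membership, and replication configuration, and should show a healthy cluster before any post-installation or expansion work begins.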

Prior to beginning any expansion, the following requirements must be met: Verify the storage cluster state is healthy in the HyperFlex Web Client Plugin. The new node is a server that meets the compute node system requirements listed in, including network and disk requirements.

The software and firmware versions on the node match the Cisco HX Data Platform software version and the vSphere software version, and match the existing servers. The new node must be physically installed and discovered as described in. The new node uses the same network configuration as the other nodes in the storage cluster, including the VLAN IDs and VLAN tagging. This is accomplished via the use of Cisco UCS service profiles.

To prevent errors during cluster expansions, it is recommended to disable vSphere High Availability (HA) during the expansion, and then re-enable the feature after the expansion completes. Prior to expanding the cluster, the new nodes must be prepared and configured similar to a cluster installation. There are some specific differences in the processes which will be outlined below.

To expand an existing standard HyperFlex cluster, complete the following steps: Configure Cisco UCS 1. Open the HyperFlex installer webpage (). Click Configure UCS Manager. Enter the Cisco UCS manager DNS hostname or IP address, the admin username, and the password, and click Continue. On the following page, click the boxes next to the discovered and unassociated HX-Series servers which will be used to expand the HyperFlex cluster(s).

( Figure 90) 5. Click Configure. The appropriate service profile(s) will be created and associated with the new server that will expand the cluster. Install ESXi Install and configure the ESXi hypervisor as documented in the section. The node must meet the following requirements before continuing with the HyperFlex cluster expansion: The new node must be in maintenance mode while configuring the storage cluster. The new node must be a member in the same vCenter cluster and vCenter datacenter as the existing nodes.

The new node has SSH enabled. The new node has at least one valid DNS and NTP server configured. Expand the Cluster 1. Open the HyperFlex installer webpage.

Click Expand Cluster. Enter the IP address values or DNS names in the first section. Click the Add button if more than 1 new host is being added, until there are enough lines to accommodate the number of nodes being added. If IP addresses are entered into the first line, which has a grey background, the lines underneath it will auto-populate with incremented IP addresses. The first line with the grey background is not a node that is being configured.

Enter the existing HyperFlex cluster management IP address. Enter the values in the Hypervisor Credentials section. Enter the password of the existing cluster Controller VMs. Enter the values in the vCenter Configuration section for the existing vCenter server. Click the arrow to expand the Advanced Configuration section.

Check the box to Clean-up Disk Partitions. Click Validate. If the validation checks return any errors, re-check the values entered and correct as necessary, otherwise click Deploy. Figure 91 HyperFlex Installer Expand Cluster After the cluster expansion has completed, continue with the steps. Hybrid Cluster Expansion Hybrid HyperFlex clusters are built by first installing a standard cluster, then expanding the cluster with compute-only nodes. Prior to expanding the cluster, the new nodes must be prepared and configured similar to a cluster installation.

There are some specific differences in the processes which will be outlined below. To expand an existing HyperFlex cluster, creating a hybrid cluster, complete the following steps: Configure Cisco UCS 1. In Cisco UCS Manager, click the Servers tab in the navigation pane. Expand Servers >Service Profile Templates >root >Sub-Organizations >hx-cluster.

Right-click the Service Profile Template named “compute-nodes” and click Create Service Profiles from Template. Enter the naming prefix of the service profiles that will be created from the template. Enter the starting number of the service profiles being created and the number of service profiles to be created. ( Figure 92) 6. Click OK twice. Expand Servers >Service Profiles >root >Sub-Organizations >hx-cluster. Click on the first service profile created for the additional B200 M4 compute-only nodes, and right click.

Click Change Service Profile Association. In the Server Assignment dropdown list, change the selection to select Existing Server. Select Available Servers. In the list of available servers, choose the B200 M4 blade server to assign the service profile to and click OK. (Figure 93) 13. Click Yes to accept that the server will reboot to apply the service profile. Repeat steps 7-14 for each additional compute-only B200 M4 blade that will be added to the cluster.

Install ESXi Install and configure the ESXi hypervisor as documented in the section. The node must meet the following requirements before continuing with the HyperFlex cluster expansion: The new node must be in maintenance mode while configuring the storage cluster. The new node must be a member in the same vCenter cluster and vCenter datacenter as the existing nodes. The new node has SSH enabled. The new node has at least one valid DNS and NTP server configured. Expand the Cluster 1. Open the HyperFlex installer webpage.

Click Expand Cluster. Click the black X next to the first line of IP addresses to delete it.

Click the downward arrow on the side of the Add button, and then click Add Compute Node to add a line for a compute-only node to be added to the cluster. Click the downward arrow on the side of the Add button, and then click Add Compute Node if more than 1 new host is being added, until there are enough lines to accommodate the number of nodes being added. Enter the IP address values or DNS names for the new node(s). Enter the existing HyperFlex cluster management IP address. Enter the values in the Hypervisor Credentials section. Enter the password of the existing cluster Controller VMs. Enter the values in the vCenter Configuration section for the existing vCenter server.

Click Validate. If the validation checks return any errors, re-check the values entered and correct as necessary, otherwise click Deploy. Figure 94 HyperFlex Installer Expand Hybrid Cluster After the cluster expansion has completed, continue with the steps. HyperFlex Post Installation Configuration After the HyperFlex cluster has been installed, there are some post-installation configuration steps that need to be completed prior to installing or migrating any virtual machines to the cluster.

The new HyperFlex cluster has no default datastores configured for virtual machine storage; therefore the datastores must be created using the VMware vCenter Web Client plugin. A minimum of two datastores is recommended to satisfy vSphere High Availability datastore heartbeat requirements, although one of the two datastores can be very small. It is important to recognize that all HyperFlex datastores are thinly provisioned, meaning that their configured size can far exceed the actual space available in the HyperFlex cluster. Alerts will be raised by the HyperFlex system in the vCenter plugin when actual space consumption results in low amounts of free space, and alerts will be sent via auto support email alerts. Overall space consumption in the HyperFlex clustered filesystem is optimized by the default deduplication and compression features.

Figure 95 HyperFlex Datastore Thin Provisioning To create HyperFlex datastores, complete the following steps: 1. After the HyperFlex installation is complete, close the vCenter Web Client, and re-open it from the start. In the home pane, click Hosts and Clusters. In the Navigator pane, click on the vSphere cluster name.

In the center pane, a new section titled Cisco HyperFlex Systems now exists. Click on the link with the HyperFlex cluster name.

(Figure 96) 5. Alternatively, from the home screen click on vCenter Inventory Lists. In the Navigator pane, click on Cisco HX Data Platform.

( Figure 97) 7. In the Navigator pane, click the name of the HyperFlex cluster. In the center pane, click Manage. Click Datastores.

Click the button to Create Datastore. ( Figure 98) 11. Enter the datastore name and size desired, and click OK. Repeat steps 7-8 for each datastore required.
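Datastores can also be created from the command line of a storage controller VM with the stcli utility; the following is a hedged sketch only (the datastore name and size are placeholders, and the exact flags can differ between HX Data Platform releases, so check the stcli datastore create help output first):

# stcli datastore create --name HX-DS01 --size 10 --unit tb
# stcli datastore list

The list command should show the new datastore mounted on every ESXi host in the cluster.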

Auto support email settings for the email server address and sender email address are configured during the HyperFlex installation, however the recipient address needs to be configured for auto support emails to be sent and received successfully. To configure the auto support email settings, complete the following steps: 1. Log in to the management IP address of the first storage platform controller VM via SSH, or open the interactive CLI web interface of each controller VM.

Enter the commands as shown in the example below to set the recipient address, then confirm the configuration:

# stcli services asup recipients set --recipients support@cisco.com
# stcli services asup show
recipientList: support@cisco.com
enabled: True
# stcli services smtp show
smtpServer: 192.168.0.4
fromAddress: hx-cluster1@hx.lab.cisco.com
# sendasup -t

3. Repeat steps 1-2 for each remaining storage platform controller VM in the HyperFlex cluster. After the HyperFlex cluster has been installed, additional features available in vCenter should be enabled for the cluster, specifically vSphere High Availability (HA) and Distributed Resource Scheduler (DRS). To configure the HA and DRS features, complete the following steps: 1. Open the vCenter Web Client webpage and login.

In the home pane, click on Hosts and Clusters. Right-click on the cluster containing the HyperFlex nodes and click Settings. In the center pane, under Services, click on vSphere DRS. In the Edit Cluster Settings window, check the box to Turn on vSphere DRS, and leave the remaining settings at their defaults.

( Figure 99) 6. In the Edit Cluster Settings window, click on vSphere HA on the left.

Check the box to turn on vSphere HA. Click on the section Datastore for Heartbeating to expand the section and view the options. Click the option Automatically select datastores accessible from the host. ( Figure 100) 10. In many cases the configuration of the HyperFlex system requires configuration of multiple VLAN IDs for the guest virtual machines to communicate on. If additional VLANs are required, the VLANs must be defined in UCS Manager before they can be configured as part of the ESXi host networking port groups. To configure additional VLANs for guest VM traffic, complete the following steps: 1.

In Cisco UCS Manager, click the LAN tab in the navigation pane. Select LAN >LAN Cloud >VLANs.

Right-click VLANs and select Create VLANs. Enter the name of the VLAN. Select the Multicast Policy named “HyperFlex”. Leave the setting for Common/Global. Enter the VLAN ID number.

Leave the Sharing Type set to none. Click OK twice. Repeat steps 3-9 for each additional VLAN required. Figure 101 Cisco UCS Create VLAN The HyperFlex installer does not configure any ESXi port groups to connect the guest virtual machines. The configuration of virtual machine port groups used by the guest VMs must be completed after the HyperFlex installation, and configured according to the guest VM requirements. To configure guest VM networking, complete the following steps: 1. Open the vCenter Web Client webpage and login.

In the home pane, click on Hosts and Clusters. Right-click on the first ESXi host and click Settings. In the center pane, click Networking. Click the name of the virtual switch for guest VM traffic; by default this is named vswitch-hx-vm-network. Click on the Add Host Networking button.

( Figure 102) 7. In the Add Networking window, select Virtual Machine Port Group for a Standard Switch, and click Next. ( Figure 103) 8. Verify the correct virtual switch is selected and click Next. Enter the desired name for the port group, and enter the VLAN ID number that matches the VLAN ID for the guest VM traffic, then click Next. ( Figure 104) 10. Click Finish.

Repeat steps 3-10 for each additional host in the HyperFlex cluster. Repeat the steps in this section for each additional port group required for guest VM traffic, each of them participating in a different VLAN as defined in Cisco UCS Manager. The HyperFlex installer does not configure any ESXi port groups to enable vMotion of the guest virtual machines. The configuration of a vMotion port group and vmkernel port must be completed after the HyperFlex installation. To configure vMotion networking, complete the following steps: 1. Open the vCenter Web Client webpage and login. In the home pane, click on Hosts and Clusters.

The HyperFlex installer does not configure any ESXi port groups to enable vMotion of the guest virtual machines. The configuration of a vMotion port group and vmkernel port must be completed after the HyperFlex installation. To configure vMotion networking, complete the following steps:
1. Open the vCenter Web Client webpage and login.
2. In the home pane, click on Hosts and Clusters.
3. Right-click on the first ESXi host and click Settings.
4. In the center pane, click Networking.
5. Click the name of the virtual switch for vMotion traffic; by default this is named vMotion.
6. Click on the Add Host Networking button.
7. In the Add Networking window, select VMkernel Networking Adapter, and click Next. (Figure 105)

8. Verify the correct virtual switch is selected and click Next.
9. Enter the desired name of the port group. (Figure 106)
10. If the vMotion interfaces have been configured to carry VLAN tagged traffic, as documented earlier in this document, then modify the VLAN ID to match the value of the vMotion VLAN defined in Cisco UCS Manager.
11. Check the box for vMotion traffic, and click Next.
12. Select Use Static IPv4 Settings, enter the desired IP address and subnet mask, and click Next. (Figure 107)

13. Click Finish.
14. In the center pane, click to expand the VMkernel port that was just created, and click on the port. (Figure 108)
15. Click the Edit Settings button.
16. In the Edit Settings window, click NIC Settings.
17. Change the MTU value to 9000, and click OK. (Figure 109)

18. Repeat steps 3-17 for each additional host in the HyperFlex cluster.
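The same vMotion VMkernel port configuration can also be performed from the ESXi shell of each host. This is a minimal sketch only; the port group name, VMkernel interface number, VLAN ID, and IP addressing are examples that must be replaced with the values planned for each host, and the virtual switch name must match the vMotion virtual switch on the host:
# esxcli network vswitch standard portgroup add --portgroup-name=vmotion-pg --vswitch-name=vmotion
# esxcli network vswitch standard portgroup set --portgroup-name=vmotion-pg --vlan-id=200
# esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vmotion-pg --mtu=9000
# esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.200.61 --netmask=255.255.255.0 --type=static
# vim-cmd hostsvc/vmotion/vnic_set vmk2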

By default, ESXi will store system and debug logs locally on the Cisco HX-Series rack-mount servers. On the HX220c-M4S model, ESXi will generate an alert regarding this configuration because the logs are stored on the SD cards, and since the SD card controller is a USB device, that storage is considered removable. To clear this fault, it is necessary to configure the hosts to send their logs via syslog to the vCenter server. For the HX240c-M4SX this fault will be cleared after a reboot of ESXi, since the logs will be written to the 120 GB housekeeping disk; nevertheless, it is still recommended to store the logs remotely.
To configure syslog, complete the following steps:
1. Open the vCenter Web Client webpage and login.

2. In the home pane, click on Hosts and Clusters.
3. Right-click on the first ESXi host and click Settings.

4. In the center pane, under the System section, click Advanced System Settings.
5. From the list on the right, select Syslog.global.logHost, then click the Edit Settings button.
6. Enter the IP address of the vCenter server or vCenter appliance which will receive the syslog messages, followed by the TCP port, which is 514 by default. For example: 192.168.10.101:514
7. Repeat steps 3-6 for each additional host in the HyperFlex cluster.
Figure 110 ESXi Configure Syslog
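Alternatively, the syslog target can be set from the ESXi shell of each host, which is convenient to script across all hosts. This is a minimal sketch using the example vCenter address above; the firewall ruleset commands are included on the assumption that outbound syslog has not already been allowed:
# esxcli system syslog config set --loghost='tcp://192.168.10.101:514'
# esxcli system syslog reload
# esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
# esxcli network firewall refresh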

The Cisco HyperFlex vCenter Web Client Plugin is installed by the HyperFlex installer to the specified vCenter server or vCenter appliance. The plugin is accessed as part of the vCenter Web Client interface, and is the primary tool used to monitor and configure the HyperFlex cluster.

To manage the HyperFlex cluster using the plugin, complete the following steps:
1. Open the vCenter Web Client, and login.
2. In the home pane, click on Hosts and Clusters.
3. In the Navigator pane, click on the vSphere cluster name.

4. In the center pane, a section titled Cisco HyperFlex Systems exists. Click on the link with the HyperFlex cluster name.
5. Alternatively, from the home screen click on vCenter Inventory Lists.
6. In the Navigator pane, click on Cisco HX Data Platform.
7. In the Navigator pane, click the name of the HyperFlex cluster.
Figure 111 HyperFlex Web Client Plugin Summary
Summary
From the Web Client Plugin Summary screen, several elements are presented:
Overall cluster usable capacity, used capacity, free capacity, datastore capacity provisioned, and the amount of datastore capacity provisioned beyond the actual cluster capacity.

Deduplication and compression savings percentages calculated against the data stored in the cluster. The cluster health state, and the number of node failures that can occur before the cluster goes into read-only or offline mode.

A snapshot of performance over the previous hour, showing IOPS, throughput, and latencies.
From the Web Client Plugin Monitor tab, several elements are presented:
Clicking on the Performance button gives a larger view of the performance charts. If a full webpage screen view is desired, click the Preview Interactive Performance charts hyperlink.
Clicking on the Events button displays a HyperFlex event log, which can be used to diagnose errors and view system activity events.
Figure 112 HyperFlex Plugin Performance Monitor
Figure 113 HyperFlex Plugin Events
From the Web Client Plugin Manage tab, several elements are presented:
Clicking on the Cluster button gives an inventory of the HyperFlex cluster and the physical assets of the cluster hardware.
Clicking on the Datastores button allows datastores to be created, edited, deleted, mounted and unmounted, along with space summaries and performance snapshots of that datastore.
Figure 114 HyperFlex Plugin Manage Cluster
Figure 115 HyperFlex Plugin Manage Datastores
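Much of the same health, capacity, and datastore information shown by the plugin can also be queried from the command line of any storage platform controller VM. This is a minimal sketch, assuming these stcli subcommands are available in the installed HX Data Platform release:
# stcli cluster info
# stcli cluster storage-summary
# stcli datastore list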

In this section, various best practices and guidelines are given for management and ongoing use of the Cisco HyperFlex system. These guidelines and recommendations apply only to the software versions upon which this document is based, listed earlier in this document, and may be amended in future releases. For the best possible performance and functionality of the virtual machines that will be created using the HyperFlex ReadyClone feature, the following guidelines for preparation of the base VMs to be cloned should be followed:
Base VMs must be stored in a HyperFlex datastore.
All virtual disks of the base VM must be stored in the same HyperFlex datastore.
Base VMs can only have HyperFlex native snapshots; no VMware redo-log based snapshots can be present.
Create a copy of the base VM to be cloned on each of the nodes in the HyperFlex cluster. To follow this recommendation, temporarily disable the vSphere DRS feature; afterwards, the base VM can be copied into the HyperFlex system multiple times, targeting each node in turn.

Alternatively, the base VM can be created in the HyperFlex cluster on one node, and then cloned via the standard vSphere clone feature to the other nodes. Create ReadyClones from the multiple base VMs evenly. For example, on a 4 node cluster where 80 clones are required, create one base VM located on each of the 4 nodes, then create 20 ReadyClones of each base VM. Base VMs being cloned must be powered off. HyperFlex native snapshots are high performance snapshots that are space-efficient, crash-consistent, and application consistent, taken by the HyperFlex Distributed Filesystem, rather than using VMware native redo-log based snapshots. For the best possible performance and functionality of HyperFlex native snapshots, the following guidelines should be followed: Ensure that the first snapshot taken of a guest VM is a HyperFlex native snapshot, by using the “Cisco HX Data Platform” menu item in the vSphere Web Client, and choosing Snapshot Now or Schedule Snapshot. Failure to do so reverts to VMware redo-log based snapshots.

(Figure 116) Take the initial HyperFlex native snapshot with the virtual machine powered off. This creates what is called the Sentinel snapshot. The Sentinel snapshot becomes a base snapshot that all future snapshots are added to, and prevents the VM from reverting to VMware redo-log based snapshots. Failure to do so can cause performance degradation when taking snapshots later, while the VM is performing large amounts of storage IO. Additional snapshots can be taken via the "Cisco HX Data Platform" menu, or the standard vSphere client snapshot menu. As long as the initial snapshot was a HyperFlex native snapshot, each additional snapshot is also considered to be a HyperFlex native snapshot.

Do not delete the Sentinel snapshot unless you are deleting all the snapshots entirely. Do not revert the VM to the Sentinel snapshot. (Figure 117) If large numbers of scheduled snapshots need to be taken, distribute the time of the snapshots taken by placing the VMs into multiple folders or resource pools. For example, schedule two resource groups, each with several VMs, to take snapshots separated by 15-minute intervals in the scheduler window. Snapshots will be processed in batches of 8 at a time, until the scheduled task is completed. (Figure 118) The Cisco HyperFlex Distributed Filesystem can create multiple datastores for storage of virtual machines.

While there can be multiple datastores for logical separation, all of the files are located within a single distributed filesystem. As such, performing storage vMotions of virtual machine disk files has little value in the HyperFlex system. Furthermore, storage vMotions create additional filesystem consumption and generate additional unnecessary metadata within the filesystem, which must later be cleaned up via the filesystem’s internal cleaner process. It is recommended to not perform storage vMotions of the guest VMs between datastores within the same HyperFlex cluster. Storage vMotions between different HyperFlex clusters, or between HyperFlex and non-HyperFlex datastores are permitted.

HyperFlex clusters can create multiple datastores for logical separation of virtual machine storage, yet the files are all stored in the same underlying distributed filesystem. The only differences between one datastore and another are their names and their configured sizes.

Due to this, there is no compelling reason for a virtual machine's virtual disk files to be stored on a particular datastore versus another. All of the virtual disks that make up a single virtual machine must be placed in the same datastore. Spreading the virtual disks across multiple datastores provides no benefit, and can cause ReadyClone and snapshot errors.
Within the vCenter Web Client, a specific menu entry for "HX Maintenance Mode" has been installed by the HyperFlex plugin. This option directs the storage platform controller on the node to shut down gracefully, redistributing storage IO to the other nodes with minimal impact. The standard Maintenance Mode menu in the vSphere Web Client, or the vSphere (thick) Client, can also be used, but graceful failover of storage IO and shutdown of the controller VM is not guaranteed. In order to minimize the performance impact of placing a HyperFlex converged storage node into maintenance mode, it is recommended to use the HX Maintenance Mode menu selection to enter or exit maintenance mode whenever possible.
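Entering or exiting HX Maintenance Mode can also be requested from the command line of a storage platform controller VM. The following is a sketch only; the stcli node maintenanceMode subcommand and its --ip and --mode options are assumptions that should be verified against the HX Data Platform CLI reference for the installed release, and the ESXi management IP address shown is an example:
# stcli node maintenanceMode --ip 192.168.10.61 --mode enter
# stcli node maintenanceMode --ip 192.168.10.61 --mode exit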

This section provides a description of several failure scenarios within a HyperFlex system, the response behavior by the HyperFlex software, ESXi and the VMs, and the recovery behavior for each scenario. The following scenarios apply to the default HyperFlex cluster settings of replication factor set to 3, and access policy set to strict: HX-Series Server Link Failure One network link between an HX-Series server and the Fabric Interconnect fails, either due to a physical port failure, cable failure, or an incorrect configuration change within Cisco UCS Manager. Response Within ESXi, all vmnic interfaces connected to the failed link will go offline. Traffic that was previously transmitted and received on that link, based on the explicit vSwitch uplink failover order, will now traverse the surviving link.

All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen. Recovery Once the link failure is repaired, all vmnic interfaces will return to an online state, and all network traffic will resume their normal pathways. Cisco UCS Uplink Failure All Cisco UCS network uplinks from a single Fabric Interconnect fail, either due to physical port failures, cable failures, an incorrect configuration change in Cisco UCS Manager, or a failure or misconfiguration of the upstream switch.

Response Due to the Network Control Policy “Action on Uplink Fail” setting of link down, loss of all uplinks on one FI will cause all of that fabric’s vNICs to go offline, and the corresponding ESXi vmnic interfaces will show link down. Traffic that was previously transmitted and received on that fabric, based on the explicit vSwitch uplink failover order, will now traverse the opposite fabric for all nodes.

All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen. Recovery Once the uplink failure is repaired, all vNICs will return to an online state, and all network traffic will resume their normal pathways. Additional east-west traffic going from fabric A to fabric B, and vice-versa, across the Cisco UCS network uplinks will occur briefly, as the vNICs are not always brought online at the exact same times. Fabric Interconnect Failure A Fabric Interconnect fails, or goes offline due to planned or unplanned maintenance activity. Response All of the vNICs serviced by the failed FI fabric go offline and the corresponding ESXi vmnic interfaces will show link down. Traffic that was previously transmitted and received on that fabric, based on the explicit vSwitch uplink failover order, will now traverse the opposite fabric for all nodes. All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.

Recovery Once the FI is returned to service, after the network uplinks automatically reconnect and come online, all vNICs will return to an online state, and all network traffic will resume their normal pathways. Additional east-west traffic going from fabric A to fabric B, and vice-versa, across the Cisco UCS network uplinks will occur briefly, as the vNICs are not always brought online at the exact same times. Capacity HDD Failure A capacity HDD fails in an HX-Series converged node, or is removed. Response The HyperFlex cluster state changes to unhealthy. Alerts are raised in vCenter for the disk failure and cluster health. An immediate rebalance job is triggered to return the system to the specified replication factor, replicating the missing data on the disk from the remaining copies.

When the rebalance job finishes, the cluster state returns to healthy. All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen. Total cluster storage capacity will be reduced accordingly. Recovery Once the failed disk is replaced, the HX software will automatically begin using the replaced disk for new data storage. A default nightly automatic rebalance job will evenly distribute data across the cluster, including the new disk, ensuring that there are no hotspots or cold spots where space consumption is unnecessarily high or low. Total cluster storage capacity will return to previous levels. Caching SSD Failure A caching SSD fails in an HX-Series converged node, or is removed.

Response The HyperFlex cluster state changes to unhealthy. Alerts are raised in vCenter for the disk failure and cluster health. An immediate rebalance job is triggered. When the rebalance job finishes, the cluster state returns to healthy. All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.

Recovery Once the failed disk is replaced, the HX software will not automatically begin using the replaced disk for storage; a manual rebalance job must be started from the command line of one of the controller VMs:
# stcli rebalance start --force
Housekeeping SSD Failure A housekeeping SSD fails in an HX-Series converged node, or is removed. Response HX220c-M4S server: the storage controller VM remains online, although services within the VM will fail. All guest VMs remain online and fully functional, no migrations occur and no storage interruptions are seen. HX240c-M4SX server: the response is functionally identical to the HX220c-M4S case above. Recovery Once the failed disk is replaced, specific recovery steps must be taken to return the node to full service. Please contact Cisco TAC for assistance.
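When a manual rebalance such as the one shown above is running, its progress and the resulting cluster state can be checked from the command line of a controller VM. This is a minimal sketch, assuming these stcli subcommands are available in the installed release:
# stcli rebalance status
# stcli cluster storage-summary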

Maintenance Mode An HX-Series converged node must be taken offline for planned maintenance. Response With VMware DRS enabled: placing the node into HX Maintenance Mode will evacuate the guest VMs via vMotion, the controller VM will shut down, and the node is ready to shut down or reboot. Without VMware DRS enabled: placing the node into HX Maintenance Mode will not succeed until the guest VMs are manually moved via vMotion or powered off, then the controller VM will shut down, and the node is ready to shut down or reboot. The cluster state changes to unhealthy.

Alerts are raised in vCenter for cluster health and node removal. All VMs remain online and fully functional, and no storage interruptions are seen. If the node remains offline for more than 2 hours, a rebalance job will be triggered to redistribute data across the cluster and bring the number of data copies back in line with the configured replication factor. The cluster state would then return to healthy. Total cluster storage capacity will be reduced accordingly. Recovery Once the node is back online, exiting from maintenance mode will trigger the storage controller VM to start. After a brief period for services on the controller VM to start, the cluster state will return to healthy.

The node is available to host VMs, by manually moving them or allowing DRS to automatically balance the load. If the node was offline for more than 2 hours and the auto rebalance job ran, the node is considered to be empty when it re-enters the cluster; even if the disks still have their old data, the data will be discarded. To redistribute the data across the node, either wait for the default automatic rebalance job to run, or manually run a rebalance from the command line of one of the controller VMs:
# stcli rebalance start --force
Total cluster storage capacity will return to previous levels. Node Failure An HX-Series converged node goes offline due to an unexpected failure. Response With VMware HA enabled: the guest VMs will be restarted on the remaining nodes of the cluster after a brief period of time.

Without VMware HA enabled: the guest VMs will not be restarted on the remaining nodes of the cluster and will remain offline without manual intervention. The cluster state changes to unhealthy. Alerts are raised in vCenter for cluster health and node removal.

All VMs’ functionality will return to normal after they are restarted, and no storage interruptions are seen. If the node remains offline for more than 2 hours, a rebalance job will be triggered to redistribute data across the cluster and bring the number of data copies back in line with the configured replication factor. The cluster state would then return to healthy. Total cluster storage capacity will be reduced accordingly. Recovery Once the node is back online the storage controller VM will start.

After a brief period for services on the controller VM to start, the cluster state will return to healthy. The node is available to host VMs, by manually moving them or allowing DRS to automatically balance the load. If the node was offline for more than 2 hours and the auto rebalance job ran, the node is considered to be empty when it re-enters the cluster; even if the disks still have their old data, the data will be discarded. To redistribute the data across the node, either wait for the default automatic rebalance job to run, or manually run a rebalance from the command line of one of the controller VMs:
# stcli rebalance start --force
Total cluster storage capacity will return to previous levels. This section provides a list of items that should be reviewed after the HyperFlex system has been deployed and configured. The goal of this section is to verify the configuration and functionality of the solution, and ensure that the configuration supports core availability requirements. The following tests are critical to functionality of the solution, and should be verified before deploying for production: Verify the expected number of converged storage nodes and compute-only nodes are members of the HyperFlex cluster in the vSphere Web Client plugin manage cluster screen.

Verify the expected cluster capacity is seen in the vSphere Web Client plugin summary screen. Create a test virtual machine that accesses the HyperFlex datastore and is able to perform read/write operations. Perform a virtual machine migration (vMotion) of the test virtual machine to a different host in the cluster. During the vMotion of the virtual machine, run a continuous ping from the test virtual machine to its default gateway to confirm that network connectivity is maintained during and after the migration.
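Several of these checks, and the redundancy tests that follow, can also be confirmed from the ESXi shell of each host. This is a minimal sketch, where the VMkernel interface name and gateway address are examples that must be adjusted to the environment:
# esxcli storage nfs list
# esxcli network nic list
# vmkping -I vmk0 192.168.10.1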

The following redundancy checks can be performed to verify the robustness of the system. Network traffic, such as a continuous ping from VM to VM, or from vCenter to the ESXi hosts should not show significant failures (one or two ping drops might be observed at times).

Also, all of the HyperFlex datastores must remain mounted and accessible from all the hosts at all times. Administratively disable one of the server ports on Fabric Interconnect A which is connected to one of the HyperFlex converged storage hosts. The ESXi virtual switch uplinks for fabric A should now show as failed, and the standby uplinks on fabric B will be in use for the management and vMotion virtual switches.

Upon administratively re-enabling the port, the uplinks in use should return to normal. Administratively disable one of the server ports on Fabric Interconnect B which is connected to one of the HyperFlex converged storage hosts.

The ESXi virtual switch uplinks for fabric B should now show as failed, and the standby uplinks on fabric A will be in use for the storage virtual switch. Upon administratively re-enabling the port, the uplinks in use should return to normal. Place a representative load of guest virtual machines on the system. Put one of the ESXi hosts in maintenance mode, using the HyperFlex HX maintenance mode option.

All the VMs running on that host should be migrated via vMotion to other active hosts via vSphere DRS, except for the storage platform controller VM, which will be powered off. No guest VMs should lose any network or storage accessibility during or after the migration. This test assumes that enough RAM is available on the remaining ESXi hosts to accommodate VMs from the host put in maintenance mode. The HyperFlex cluster will show in an unhealthy state. Reboot the host that is in maintenance mode, and exit it from maintenance mode after the reboot.

The storage platform controller will automatically start when the host exits maintenance mode. The HyperFlex cluster will show as healthy after a brief time to restart the services on that node. vSphere DRS should rebalance the VM distribution across the cluster over time. vCenter alerts do not automatically clear, even when the fault has been resolved. Once the cluster health is verified, the alerts must be manually cleared. Reboot one of the Cisco UCS Fabric Interconnects while traffic is being sent and received on the storage datastores and the network. The reboot should not affect the proper operation of storage access and network traffic generated by the VMs.

Numerous faults and errors will be noted in Cisco UCS Manager, but all will be cleared after the FI comes back online.
Brian Everitt, Technical Marketing Engineer, Cisco Systems, Inc.
Brian Everitt is a Technical Marketing Engineer with the Cisco UCS Data Center Engineering Solutions group. He is an IT industry veteran with over 18 years of experience deploying server, network, and storage infrastructures for companies around the world. During his tenure at Cisco, he has previously been a lead Advanced Services Solutions Architect for Microsoft solutions, virtualization, and SAP HANA on UCS. Currently his focus is on Cisco's portfolio of Software Defined Storage (SDS) and Hyperconverged Infrastructure solutions.

Jeffrey Fultz, Technical Marketing Engineer, Cisco Systems, Inc.
API Application Programming Interface. A set of remote calls, verbs, queries or actions exposed to external users for automation of various application tasks. ASIC Application Specific Integrated Circuit. An integrated circuit specially designed for a specific use versus a general purpose processor. CDP Cisco Discovery Protocol.

A Cisco developed protocol to share information about directly connected networking devices. CLI Command Line Interface. A text based method of entering commands one-by-one in successive lines. DHCP Dynamic Host Configuration Protocol. A protocol for dynamically assigning and distributing network settings to clients on demand. DNS Domain Name System.

A hierarchical service primarily used for mapping of host and domain names to the IP addresses of the computers configured to use those names and addresses. FC Fibre Channel. A high speed, lossless networking protocol, using fixed packet sizes and a worldwide unique identification technique, primarily used to connect computing systems to storage devices.

FC-AL Fibre Channel Arbitrated Loop. A Fibre Channel protocol topology where the devices are connected via a high speed, one-way ring network. FCoE Fibre Channel over Ethernet. Fibre channel traffic encapsulated in Ethernet frames, carried over Ethernet networks. FI Fabric Interconnect.

The primary component of the Cisco UCS system, connecting all end devices and networks, and providing management services. GbE Gigabit Ethernet. One gigabit of network transmission rate is equal to one billion bits transferred per second. GUI Graphical User Interface. In contrast to a CLI, a user interface where the primary interaction between the user and the computer is via a series of graphical windows and buttons, typically using an on screen pointer controlled by a computer mouse. HBA Host Bus Adapter.

A hardware device that connects the computer host to an external device, such as a storage system or other network device. HDD Hard Disk Drive. A computer storage device which stores data on rotating magnetic discs or platters, read and written to by a moving magnetic head. IPMI Intelligent Platform Management Interface. A specification for an independent computer subsystem that provides management and monitoring capabilities separate from the host system's CPU and operating system.

IO Input/Output. IOM IO Module. A common name for Cisco Fabric Extender modules installed in the rear of a Cisco UCS 5108 blade chassis.

IOPS Input/Output operations per second. KVM Keyboard/Video/Mouse.

The three primary methods of human to computer interaction. LACP Link Aggregation Control Protocol. A specification for bundling multiple physical network interfaces into a single logical interface, providing additional bandwidth and failure tolerance. LAN Local Area Network.

Interconnection of computer devices within a limited area, in contrast to a WAN. MTU Maximum Transmission Unit. The maximum size of the data unit transmittable by the network protocol. NAND Negative-AND logic gate.

A Boolean function circuit design used in SSD devices, emulating the bit state of traditional magnetic media as used in hard disk drives. NAS Network Attached Storage. A storage system providing file-level access to resources via a network protocol, in contrast to a block-level storage system. NFS Network File System. A file system protocol for accessing files via a server across a network.

NTP Network Time Protocol. A protocol for synchronizing computer system time clocks over data networks. NIC Network Interface Controller. A hardware component connecting a computer to a data network. OUI Organizationally Unique Identifier.

A registry of 24-bit (3-byte) identifiers maintained by the Institute of Electrical and Electronics Engineers (IEEE) Registration Authority to uniquely identify vendors and manufacturers. OVA Open Virtual Appliance. A compressed open virtualization format (OVF) file containing all the files necessary to deploy and run a pre-created virtual machine. PCI Peripheral Component Interconnect. A computer bus interface for connecting internal hardware components. PCIe PCI Express.

A revised standard replacing PCI, that uses a serial architecture, higher speeds and fewer pins on the devices. QoS Quality of Service. The overall performance of the network as seen by the end users and computers. Various standards and methods have been developed to provide different levels of control and prioritization of network traffic used by specific devices and applications. RBAC Role-based Access Control. Restricting system access to authorized users, and granting them privileges in the system based on a defined functional or job role. RF Replication Factor.

The number of times a written block of data is replicated across independent nodes in a HyperFlex storage cluster. RU Rack Unit. The EIA-310 specification defines each unit of height in a 19-inch wide electronics/computer rack to be 1.75 inches. SAN Storage Area Network. A data network, typically using Fibre Channel protocol, to provide computer hosts access to centralized storage devices. SAS Serial Attached SCSI.

A serial computer bus for connecting storage devices using SCSI commands. SCSI Small Computer Systems Interface.

A set of standards for connections between computers and storage devices and peripherals, using a common command set. SD Secure Digital. A portable and non-volatile format for memory card storage devices. SDS Software Defined Storage. A method for providing access to and provisioning of computer storage resources that are independent of the underlying physical storage resources. SoL Serial over LAN. Transmission of serial port output via a LAN versus the physical serial port.

SSD Solid State Disk. A computer data storage device typically using NAND based flash memory for persistent data storage, versus spinning magnetic media as used in a hard disk. SSH Secure Shell.

An encrypted protocol allowing remote login across unsecured networks. STP Spanning-Tree Protocol. A network protocol that creates loop-free networks by establishing preferred network pathways in a LAN.

Cisco UCS Cisco Unified Computing System. The Cisco product line combining rack-mount servers, blade servers, and Fabric Interconnects into a single domain, with integrated networking and management. UCSM Cisco UCS Manager. The management GUI and CLI software for control of a Cisco UCS domain. VIC Virtual Interface Card.

A Cisco product offering the ability to create multiple virtual network or storage interfaces on a single physical hardware card. VLAN Virtual LAN. A partitioned and isolated LAN segment, breaking a flat network into multiple subdivided networks. VM Virtual Machine. A running operating system atop a set of virtualized hardware resources, abstracted from the real underlying physical components.

VMDK Virtual Machine Disk. A file format for storing the contents of a typical hard disk as a file, used by virtual machines as their hard disk contents. VMFS Virtual Machine File System. VMware’s proprietary filesystem used to store virtual machine definitions, VMDK files, snapshots and configuration files. VNIC Virtual NIC. The virtualized definition of a traditional NIC running in the Cisco UCS system.

The vNIC is defined in a service profile, and dynamically programmed to a VIC. VPC Virtual Port Channel. A Cisco technology for connecting network devices to multiple partner devices without creating loops and without using STP.

WAN Wide Area Network. Interconnection of computer devices across long distances and geographically large areas, in contrast to a LAN.
1. Cisco HyperFlex HX220c M4 Node Installation Guide:

2. Cisco HyperFlex HX240c M4 Node Installation Guide:
3. Cisco HyperFlex HX220c M4 Node Specifications:
4. Cisco HyperFlex HX240c M4 Node Specifications:

5. Cisco HyperFlex Systems Getting Started Guide:
6. Cisco HyperFlex Systems Pre-Install Checklist:
7. Cisco HyperFlex Systems Administration Guide:
8. Cisco HyperFlex Systems Command Line Interface Reference:

9. Cisco HyperFlex Hardware and Software Interoperability Matrix:
10. Cisco HyperFlex Documentation Roadmap:

Cisco HyperFlex for Virtual Server Infrastructure 2.0.1a with All Flash Storage
Deployment Guide for Cisco HyperFlex for Virtual Server Infrastructure 2.0.1a with All Flash Storage
Last Updated: May 22, 2017


The last decade has seen major shifts in the datacenter and arguably the most significant has been the widespread adoption of virtualization of servers as the primary computing platform for most businesses. The flexibility, speed of deployment, ease of management, portability, and improved resource utilization has led many enterprises to adopt a “virtual first” stance, where all environments are deployed virtually unless circumstances make that impossible. While the benefits of virtualization are clear, the proliferation of virtual environments has shone a light on other technology stacks where they do not offer the same levels of simplicity, flexibility, and rapid deployment as virtualized compute platforms do. Networking and storage systems in particular have come under increasing scrutiny to be as agile as hypervisors and virtual servers. Cisco offers strong solutions for rapid deployment and easy management of virtualized computing platforms, including integrated networking capabilities, with the Cisco Unified Computing System (UCS) product line.

Now with the introduction of Cisco HyperFlex, we bring similar dramatic enhancements to the virtualized servers and hyperconverged storage market. Cisco HyperFlex systems have been developed using the Cisco UCS platform, which combines Cisco HX-Series x86 servers and integrated networking technologies via the Cisco UCS Fabric Interconnects, into a single management domain, along with industry leading virtualization hypervisor software from VMware, and new software defined storage technology.

The combination creates a virtualization platform that also provides the network connectivity for the guest virtual machine (VM) connections, and the distributed storage to house the VMs, spread across all of the Cisco UCS x86 servers instead of using specialized components. The unique storage features of the newly developed log based filesystem enable rapid cloning of VMs, snapshots without the traditional performance penalties, and inline data deduplication and compression. All configuration, deployment, management, and monitoring of the solution can be done with existing tools for Cisco UCS and VMware, such as Cisco UCS Manager and VMware vCenter. This powerful linking of advanced technology stacks into a single, simple, rapidly deployed solution makes Cisco HyperFlex a true second generation hyperconverged platform for the modern datacenter. Now with the introduction of Cisco HyperFlex All-Flash systems in HXDP 2.0, customers have more choices to support different types of workloads without compromising the performance requirements.

Customers can choose to deploy SSD-only All-Flash HyperFlex clusters for improved performance, increased density, and reduced latency, or use HyperFlex hybrid clusters which combine high-performance SSDs and low-cost, high-capacity HDDs to optimize the cost of storing data. In addition, from HXDP 2.0 onward, adding Cisco HyperFlex nodes to an existing Cisco UCS-FI domain is supported. This helps customers protect their past investments by leveraging the existing unified fabric infrastructure for deployment of the new hyperconverged solutions. Another improvement simplifies the process for connecting a HyperFlex system to existing third-party storage devices. The Cisco HyperFlex System provides an all-purpose virtualized server platform, with hypervisor hosts, networking connectivity, and virtual server storage across a set of Cisco UCS C-Series x86 rack-mount servers. Legacy datacenter deployments relied on a disparate set of technologies, each performing a distinct and specialized function, such as network switches connecting endpoints and transferring Ethernet network traffic, and Fibre Channel (FC) storage arrays providing block based storage devices via a dedicated storage area network (SAN).

Each of these systems had unique requirements for hardware, connectivity, management tools, operational knowledge, monitoring, and ongoing support. A legacy virtual server environment was often divided up into areas commonly referred to as silos, within which only a single technology operated, along with their correlated software tools and support staff. Silos could often be divided between the x86 computing hardware, the networking connectivity of those x86 servers, SAN connectivity and storage device presentation, the hypervisors and virtual platform management, and finally the guest VMs themselves along with their OS and applications.

This model proves to be inflexible, difficult to navigate, and is susceptible to numerous operational inefficiencies. A more modern datacenter model was developed called a converged architecture.

Converged architectures attempt to collapse the traditional silos by combining these technologies into a more singular environment, which has been designed to operate together in pre-defined, tested, and validated designs. A key component of the converged architecture was the revolutionary combination of x86 rack and blade servers, along with converged Ethernet and Fibre Channel networking offered by the Cisco UCS platform. Converged architectures leverage Cisco UCS, plus new deployment tools, management software suites, automation processes, and orchestration tools to overcome the difficulties deploying traditional environments, and do so in a much more rapid fashion. These new tools place the ongoing management and operation of the system into the hands of fewer staff, with more rapid deployment of workloads based on business needs, while still remaining at the forefront of flexibility to adapt to workload needs, and offering the highest possible performance.

Cisco has had incredible success in these areas with our various partners, developing leading solutions such as Cisco FlexPod, SmartStack, VersaStack, and VBlock architectures. Despite these advances, since these converged architectures relied on some legacy technology stacks, particularly in the storage subsystems, there often remained a division of responsibility amongst multiple teams of administrators. At the same time, there is also a recognition that these converged architectures can still be a somewhat complex combination of components, where a simpler system would suffice to serve the workloads being requested.

Significant changes in the storage marketplace have given rise to the software defined storage (SDS) system. Legacy FC storage arrays often continued to utilize a specialized subset of hardware, such as Fibre Channel Arbitrated Loop (FC-AL) based controllers and disk shelves along with optimized Application Specific Integrated Circuits (ASIC), read/write data caching modules and cards, plus highly customized software to operate the arrays. With the rise of Serial Attached SCSI (SAS) bus technology and its inherent benefits, storage array vendors began to transition their internal architectures to SAS, and with dramatic increases in processing power from recent x86 processor architectures, they also used fewer or no custom ASICs at all. As disk physical sizes shrank, servers began to have the same density of storage per rack unit (RU) as the arrays themselves, and with the proliferation of NAND based flash memory solid state disks (SSD), they also now had access to input/output (IO) devices whose speed rivaled that of dedicated caching devices. If servers themselves now contained storage devices and technology to rival many dedicated arrays on the market, then the major differentiator between them was the software providing allocation, presentation and management of the storage, plus the advanced features many vendors offered. This led to the rise of software defined storage, where the x86 servers with the storage devices ran software to effectively turn one or more of them into a storage array much the same as the traditional arrays were.

In a somewhat unexpected turn of events, some of the major storage array vendors themselves were pioneers in this field, recognizing the shift in the market, and attempting to profit from the software features they offered versus specialized hardware as had been done in the past. Some early uses of SDS systems simply replaced the traditional storage array in the converged architectures as described earlier.

That configuration still had a separate storage system from the virtual server hypervisor platform, and depending on the solution provider, still remained separate from the network devices. If the servers that hosted the virtual machines and the servers that provided the SDS environment were in fact the same model of server, could they simply do both things at once and collapse the two functions into one? This combination of resources becomes what the industry has given the moniker of a hyperconverged infrastructure. Hyperconverged infrastructures combine the computing, memory, hypervisor, and storage devices of servers into a single monolithic platform for virtual servers.

There is no longer a separate storage system, as the servers running the hypervisors also provide the software defined storage resources to store the virtual servers, effectively storing the virtual machines on themselves. Now nearly all the silos are gone, and a hyperconverged infrastructure becomes something almost completely self-contained, simpler to use, faster to deploy, easier to consume, yet still flexible and with very high performance. And by combining the convergence of compute and network resources provided by Cisco UCS, along with the new hyperconverged storage software, the Cisco HyperFlex system uniquely provides the compute resources, network connectivity, storage, and hypervisor platform to run an entire virtual environment, all contained in a single system. Some key advantages of hyperconverged infrastructures are simplified deployment, simplified day-to-day management operations, and increased agility, thereby reducing operational costs. Since hyperconverged storage can be easily managed by an IT generalist, this can also reduce the technical debt going forward that is often accrued by implementing complex systems that need dedicated management and skillsets. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying the Cisco HyperFlex System. External references are provided wherever applicable, but readers are expected to be familiar with VMware specific technologies, infrastructure concepts, networking connectivity, and security policies of the customer installation.

This document describes the steps required to deploy, configure, and manage a Cisco HyperFlex system. The document is based on all known best practices using the software, hardware and firmware revisions specified in the document. As such, recommendations and best practices can be amended with later versions. This document showcases the installation, configuration and expansion of Cisco HyperFlex standard and also hybrid clusters, including both converged nodes and compute-only nodes, in a typical customer datacenter environment. While readers of this document are expected to have sufficient knowledge to install and configure the products used, configuration details that are important to the deployment of this solution are provided in this CVD.

The Cisco HyperFlex system has several new capabilities and enhancements in version 2.0:
Figure 1 Addition of HX All-Flash Nodes in 2.0
New All-Flash HX server models are added to the Cisco HyperFlex product family that offer all-flash storage using SSDs for persistent storage devices.
Cisco HyperFlex now supports the latest generation of Cisco UCS software, Cisco UCS Manager 3.1(2f) and beyond. For new All-Flash deployments, verify that Cisco UCS Manager 3.1(2f) or later is installed.
Support for adding external storage (iSCSI or Fibre Channel) adapters to HX nodes during HX Data Platform software installation, which simplifies the process to connect external storage arrays to the HX domain.
Support for adding HX nodes to an existing Cisco UCS-FI domain.
Support for Cisco HyperFlex Sizer, a new end-to-end sizing tool for compute, capacity and performance.

For the comprehensive documentation suite, refer to the Cisco UCS HX-Series Documentation Roadmap. A login is required to access the Documentation Roadmap.

Hyperconverged Infrastructure web link: The Cisco HyperFlex system provides a fully contained virtual server platform, with compute and memory resources, integrated networking connectivity, a distributed high performance log-based filesystem for VM storage, and the hypervisor software for running the virtualized servers, all within a single Cisco UCS management domain. The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi chassis platform in which all resources participate in a unified management domain. The main components of Cisco Unified Computing System are: Computing—The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on Intel Xeon Processors.

Network—The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today.

The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements. Virtualization—The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

Storage access—The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access the Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with choice for storage access and investment protection. In addition, the server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity, and management for increased productivity. Management—The system uniquely integrates all system components which enable the entire solution to be managed as a single entity by the Cisco UCS Manager (UCSM). The Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.

The Cisco Unified Computing System is designed to deliver: A reduced Total Cost of Ownership and increased business agility. Increased IT staff productivity through just-in-time provisioning and mobility support.

A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced and tested as a whole.

Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand. Industry standards supported by a partner ecosystem of industry leaders. The Cisco UCS 6200 Series Fabric Interconnect is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) and Fibre Channel functions.

The Cisco UCS 6200 Series provides the management and communication backbone for the Cisco UCS C-Series and HX-Series rack-mount servers, Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All servers and chassis, and therefore all blades, attached to the Cisco UCS 6200 Series Fabric Interconnects become part of a single, highly available management domain.

In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both the LAN and SAN connectivity for all blades within its domain. From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1Tb switching capacity, 160 Gbps bandwidth per chassis, independent of packet size and enabled services. The product family supports Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnect supports multiple traffic classes over a lossless Ethernet fabric from a server through an interconnect.

Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated. The Cisco UCS 6248UP 48-Port Fabric Interconnect is a one-rack-unit (1RU) 10 Gigabit Ethernet, FCoE and Fibre Channel switch offering up to 960-Gbps throughput and up to 48 ports. The switch has 32 1/10-Gbps fixed Ethernet, FCoE and FC ports and one expansion slot.
Figure 4 Cisco UCS 6248UP Fabric Interconnect
The Cisco UCS 6296UP 96-Port Fabric Interconnect is a two-rack-unit (2RU) 10 Gigabit Ethernet, FCoE, and native Fibre Channel switch offering up to 1920 Gbps of throughput and up to 96 ports.

The switch has forty-eight 1/10-Gbps fixed Ethernet, FCoE, and Fibre Channel ports and three expansion slots. Figure 5 Cisco UCS 6296UP Fabric Interconnect A HyperFlex cluster requires a minimum of three HX-Series nodes (with disk storage).

Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single-node failure. Each node that has disk storage is equipped with at least one high-performance SSD drive for data caching and rapid acknowledgment of write requests. Each node also is equipped with up to the platform’s physical capacity of spinning disks for maximum data capacity.

This small footprint configuration of Cisco HyperFlex all-flash nodes contains two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as the boot drives, a single 120-GB solid-state disk (SSD) data-logging drive, a single 800-GB SSD write-log drive, and up to six 3.8-terabyte (TB) or six 960-GB SATA SSD drives for storage capacity. A minimum of three nodes and a maximum of eight nodes can be configured in one HX cluster. Figure 6 HXAF220c-M4S All-Flash Node This capacity optimized configuration of Cisco HyperFlex all-flash node includes two FlexFlash SD cards that act as boot drives, a single 120-GB SSD data-logging drive, a single 800-GB SSD for write logging, and up to ten 3.8-TB or ten 960-GB SSDs for storage capacity. A minimum of three nodes and a maximum of eight nodes cluster can be configured in one HX cluster. This small footprint configuration contains a minimum of three nodes with six 1.2 terabyte (TB) SAS drives that contribute to cluster storage capacity, a 120 GB SSD housekeeping drive, a 480 GB SSD caching drive, and two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as boot drives. This capacity optimized configuration contains a minimum of three nodes, a minimum of fifteen and up to twenty-three 1.2 TB SAS drives that contribute to cluster storage, a single 120 GB SSD housekeeping drive, a single 1.6 TB SSD caching drive, and two FlexFlash SD cards that act as the boot drives. The Cisco UCS Virtual Interface Card (VIC) 1227 is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10-Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-motherboard (mLOM) adapter installed in the Cisco UCS HX-Series Rack Servers (Figure 10).

The mLOM slot can be used to install a Cisco VIC without consuming a PCIe slot, which provides greater I/O expandability. It incorporates next-generation converged network adapter (CNA) technology from Cisco, providing investment protection for future feature releases. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. Figure 10 Cisco VIC 1227 mLOM Card For workloads that require additional computing and memory resources, but not additional storage capacity, a compute-intensive hybrid cluster configuration is allowed.

This configuration requires a minimum of three (up to eight) HyperFlex converged nodes with one to eight Cisco UCS B200-M4 Blade Servers for additional computing capacity. The HX-series Nodes are configured as described previously, and the Cisco UCS B200-M4 servers are equipped with boot drives. Use of the Cisco UCS B200-M4 compute nodes also requires the Cisco UCS 5108 blade server chassis, and a pair of Cisco UCS 2204XP Fabric Extenders. The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis, is six rack units (6RU) high and can mount in an industry-standard 19-inch rack.

A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors. Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender.

The chassis is capable of supporting 40 Gigabit Ethernet standards.
Figure 12 Cisco UCS 5108 Blade Chassis Front and Rear Views
The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the fabric interconnect.

At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic. The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis.

Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis. The Cisco UCS C220 M4 Rack Server is an enterprise-class infrastructure server in a 1RU form factor.

It incorporates the Intel Xeon processor E5-2600 v4 and v3 product family, next-generation DDR4 memory, and 12-Gbps SAS throughput, delivering significant performance and efficiency gains. The Cisco UCS C220 M4 Rack Server can be used to build a compute-intensive hybrid HX cluster, for an environment where the workloads require additional computing and memory resources but not additional storage capacity, along with the HX-series converged nodes. This configuration contains a minimum of three (up to eight) HX-series converged nodes with one to eight Cisco UCS C220-M4 Rack Servers for additional computing capacity. The Cisco UCS C240 M4 Rack Server is an enterprise-class 2-socket, 2-rack-unit (2RU) rack server. It incorporates the Intel Xeon processor E5-2600 v4 and v3 product family, next-generation DDR4 memory, and 12-Gbps SAS throughput that offers outstanding performance and expandability for a wide range of storage and I/O-intensive infrastructure workloads. The Cisco UCS C240 M4 Rack Server can be used to add computing and memory resources to a compute-intensive hybrid HX cluster, along with the HX-series converged nodes.

This configuration contains a minimum of three (up to eight) HX-series converged nodes with one to eight Cisco UCS C240-M4 Rack Servers for additional computing capacity. The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance, distributed file system with a wide array of enterprise-class data management services. The data platform’s innovations redefine distributed storage technology, exceeding the boundaries of first-generation hyperconverged infrastructures.

The data platform has all the features that you would expect of an enterprise shared storage system, eliminating the need to configure and maintain complex Fibre Channel storage networks and devices. The platform simplifies operations and helps ensure data availability.

Enterprise-class storage features include the following:
Replication replicates data across the cluster so that data availability is not affected if single or multiple components fail (depending on the replication factor configured).
Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in client virtual machines result in large amounts of replicated data.
Compression further reduces storage requirements, reducing costs, and the log-structured file system is designed to store variable-sized blocks, reducing internal fragmentation.
Thin provisioning allows large volumes to be created without requiring storage to support them until the need arises, simplifying data volume growth and making storage a “pay as you grow” proposition.
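To make the deduplication and compression behavior concrete, the following Python sketch stores fixed-size blocks keyed by a content hash and compresses each unique block before it is stored. This is a conceptual illustration only; the 4 KB block size, SHA-256 hashing, and zlib compression are illustrative assumptions, not the HX Data Platform implementation.

# Conceptual sketch only: block-level deduplication plus compression.
# The block size, hash, and codec are illustrative, not HX internals.
import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed illustrative block size

def store(blocks, block_store):
    """Store blocks keyed by content hash; identical blocks are kept once."""
    logical = physical = 0
    for block in blocks:
        logical += len(block)
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:                    # deduplication
            block_store[digest] = zlib.compress(block)   # compression
            physical += len(block_store[digest])
    return logical, physical

if __name__ == "__main__":
    block_store = {}
    # Ten identical OS-image-like blocks plus two unique blocks
    blocks = [b"A" * BLOCK_SIZE] * 10 + [b"B" * BLOCK_SIZE, bytes(range(256)) * 16]
    logical, physical = store(blocks, block_store)
    print(f"logical bytes written: {logical}, physical bytes stored: {physical}")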

Fast, space-efficient clones rapidly replicate storage volumes so that virtual machines can be replicated simply through metadata operations, with actual data copied only for write operations. Snapshots help facilitate backup and remote-replication operations, which are needed in enterprises that require always-on data availability. The Cisco HyperFlex HX Data Platform is administered through a VMware vSphere web client plug-in. Through this centralized point of control for the cluster, administrators can create volumes, monitor the data platform health, and manage resource use. Administrators can also use this data to predict when the cluster will need to be scaled. For customers that prefer a lightweight web interface, there is a tech preview URL management interface available by opening a browser to the IP address of the HX cluster interface. Additionally, there is an interface to assist in running CLI commands via a web browser.

In addition, an all-HTML5-based Web UI is available for use as a technical preview. All of the functions found within the vCenter Web Client plug-in are also available in the HTML Web UI. To use the technical preview Web UI, connect to the HX controller cluster IP address with a web browser.
Figure 17 Tech Preview HX GUI
Command line interface (CLI) commands can also be run against the system via the Web UI. To run CLI commands over HTTP, connect to the HX controller cluster IP using a web browser and click Web CLI in the left column of the GUI. A Cisco HyperFlex HX Data Platform controller resides on each node and implements the distributed file system.

The controller runs in user space within a virtual machine and intercepts and handles all I/O from guest virtual machines. The platform controller VM uses the VMDirectPath I/O feature to provide PCI pass-through control of the physical server’s SAS disk controller.

This method gives the controller VM full control of the physical disk resources, utilizing the SSD drives as a read/write caching layer, and the HDDs or SSDs as a capacity layer for distributed storage. The controller integrates the data platform into VMware software through the use of two preinstalled VMware ESXi vSphere Installation Bundles (VIBs):
IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.
VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning.

The controller implements these operations through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new environments. The Cisco HyperFlex HX Data Platform controllers handle all read and write operation requests from the guest VMs to their virtual disks (VMDK) stored in the distributed datastores in the cluster. The data platform distributes the data across multiple nodes of the cluster, and also across multiple capacity disks of each node, according to the replication level policy selected during the cluster setup. This method avoids storage hotspots on specific nodes, and on specific disks of the nodes, and thereby also avoids networking hotspots or congestion from accessing more data on some nodes versus others.
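The distribution property described above can be illustrated with a short Python sketch that maps each block to a set of distinct nodes, and a disk on each node, sized by the replication factor. This is a conceptual placement example using assumed node names such as hx-node-1; it is not the actual HX Data Platform placement algorithm.

# Conceptual sketch: spread RF copies of a block across distinct nodes so no
# two copies share a node (and therefore no disk holds two copies).
import hashlib

def place_block(block_id, nodes, disks_per_node, rf):
    """Return (node, disk) targets for all RF copies of a block."""
    assert rf <= len(nodes), "replication factor cannot exceed node count"
    digest = int(hashlib.md5(block_id.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    targets = []
    for copy in range(rf):
        node = nodes[(start + copy) % len(nodes)]   # distinct node per copy
        disk = (digest // (copy + 1)) % disks_per_node
        targets.append((node, f"disk{disk}"))
    return targets

if __name__ == "__main__":
    cluster = ["hx-node-1", "hx-node-2", "hx-node-3", "hx-node-4"]  # assumed names
    for blk in ("vm1-vmdk-block-0001", "vm1-vmdk-block-0002"):
        print(blk, "->", place_block(blk, cluster, disks_per_node=6, rf=3))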

Replication Factor
The policy for the number of duplicate copies of each storage block is chosen during cluster setup, and is referred to as the replication factor (RF). The default setting for the Cisco HyperFlex HX Data Platform is replication factor 3 (RF=3).

Replication Factor 3: For every I/O write committed to the storage layer, 2 additional copies of the blocks written will be created and stored in separate locations, for a total of 3 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate the simultaneous failure of 2 entire nodes without losing data or resorting to restore from backup or other recovery processes.
Replication Factor 2: For every I/O write committed to the storage layer, 1 additional copy of the blocks written will be created and stored in separate locations, for a total of 2 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate the failure of 1 entire node without losing data or resorting to restore from backup or other recovery processes.
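The trade-off between the two settings can be written out as simple arithmetic: RF copies of every block mean that RF-1 simultaneous node failures can be tolerated, and that raw capacity is divided by RF before any other overhead is considered. The raw capacity used in this sketch is an assumed example value.

# Replication factor arithmetic for the two settings described above.
def rf_summary(rf, raw_tib):
    tolerated = rf - 1
    usable_before_overhead = raw_tib / rf
    return (f"RF={rf}: tolerates {tolerated} simultaneous node failure(s); "
            f"{raw_tib} TiB raw -> {usable_before_overhead:.1f} TiB before "
            f"deduplication, compression and filesystem overhead")

if __name__ == "__main__":
    for rf in (2, 3):
        print(rf_summary(rf, raw_tib=80.0))  # 80 TiB raw is an assumed example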

Data Write Operations
For each write operation, data is written to the caching SSD of the node designated as its primary, and replica copies of that write are written to the caching SSDs of the remote nodes in the cluster, according to the replication factor setting. For example, at RF=3 a write will be written locally where the VM originated the write, and two additional writes will be committed in parallel on two other nodes. The write operation will not be acknowledged until all three copies are written to the caching layer SSDs. Written data is also cached in a write log area resident in memory in the controller VM, along with the write log on the caching SSDs (Figure 19). This process speeds up read requests when reads are requested of data that has recently been written.
Data Destaging, Deduplication and Compression
The Cisco HyperFlex HX Data Platform constructs multiple write caching segments on the caching SSDs of each node in the distributed cluster.

As write cache segments become full, and based on policies accounting for I/O load and access patterns, those write cache segments are locked and new writes roll over to a new write cache segment. The data in the now locked cache segment is destaged to the HDD capacity layer of the nodes for the hybrid system, or to the SSD capacity layer of the nodes for the all-flash system. During the destaging process, data is deduplicated and compressed before being written to the capacity storage layer. The resulting data after deduplication and compression can now be written to the HDDs or SSDs of the server.

When the data is destaged to an HDD, it is written in a single sequential operation, avoiding disk head seek thrashing on the spinning disks and accomplishing the task in the minimal amount of time (Figure 19). Since the data is already deduplicated and compressed before being written, the platform avoids additional I/O overhead often seen on competing systems, which must later do a read/dedupe/compress/write cycle.

Deduplication, compression and destaging take place with no delays or I/O penalties to the guest VMs making requests to read or write data, which benefits both the HDD and SSD configurations.
Figure 19 HyperFlex HX Data Platform Data Movement
Data Read Operations
For data read operations, data may be read from multiple locations. For data that was very recently written, the data is likely to still exist in the write log of the local platform controller memory, or the write log of the local caching layer. If local write logs do not contain the data, the distributed filesystem metadata will be queried to see if the data is cached elsewhere, either in the write logs of remote nodes, or in the dedicated read cache area of the local and remote caching SSDs of hybrid nodes. Finally, if the data has not been accessed in a significant amount of time, the filesystem will retrieve the requested data from the distributed capacity layer.
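The tiered lookup order described above can be sketched as a simple loop over the caching layers, falling back to the capacity layer last. The tier names and dictionaries used here are illustrative stand-ins, not HX Data Platform internals.

# Conceptual sketch of the read path: write log, then read cache, then the
# distributed capacity layer, populating the read cache on a capacity read
# (as hybrid nodes do).
def read_block(block_id, tiers, capacity_layer, read_cache):
    """Return (data, tier name); cache the data when read from capacity."""
    for name, tier in tiers:
        if block_id in tier:
            return tier[block_id], name
    data = capacity_layer[block_id]     # slowest path
    read_cache[block_id] = data         # hybrid nodes cache the result
    return data, "capacity layer"

if __name__ == "__main__":
    write_log = {"blk-1": b"recently written"}
    read_cache = {}
    capacity = {"blk-1": b"recently written", "blk-2": b"cold data"}
    tiers = [("controller VM write log", write_log),
             ("caching SSD read cache", read_cache)]
    print(read_block("blk-2", tiers, capacity, read_cache))  # served from capacity
    print(read_block("blk-2", tiers, capacity, read_cache))  # now served from cache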

As requests for reads are made to the distributed filesystem and the data is retrieved from the capacity layer, the caching SSDs of hybrid nodes populate their dedicated read cache area to speed up subsequent requests for the same data. This multi-tiered distributed system, with several layers of caching techniques, ensures that data is served at the highest possible speed, leveraging the caching SSDs of the nodes fully and equally. All-flash configurations, however, do not use a read cache because data caching does not provide any performance benefit; the persistent data copy already resides on high-performance SSDs. In summary, the Cisco HyperFlex HX Data Platform implements a distributed, log-structured file system that performs data operations in two configurations: In a hybrid configuration, the data platform uses a caching layer in SSDs to accelerate read requests and write responses, and it implements the storage capacity layer in HDDs. In an all-flash configuration, the data platform uses a caching layer in SSDs to accelerate write responses, and it implements a capacity layer also in SSDs.

Read requests are fulfilled directly from the capacity SSDs. A dedicated read cache is not needed to accelerate read operations.
Fabric Interconnects: Two Cisco UCS 6248UP Fabric Interconnects, or two Cisco UCS 6296UP Fabric Interconnects, plus optional 16-port UP expansion modules.
Servers: Eight Cisco HyperFlex HXAF220c-M4S All-Flash rack servers, or eight Cisco HyperFlex HXAF240c-M4S All-Flash rack servers, or eight Cisco HyperFlex HX220c-M4S rack servers, or eight Cisco HyperFlex HX240c-M4SX rack servers, or eight Cisco HX-Series servers plus eight Cisco UCS B200-M4 blade servers. Note: you can also use Cisco UCS C220-M4 or C240-M4 series servers in place of the Cisco UCS B200-M4 for compute-only nodes.
Chassis: Cisco UCS 5108 Blade Chassis (only if using the B200-M4 servers).
Fabric Extenders: Cisco UCS 2204XP Fabric Extenders (required for the 5108 blade chassis and B200-M4 blades).
Table 2 lists some of the available processor models for the Cisco HX-Series servers.

For a complete list and more information please refer to the links below: Compare models: HXAF220c-M4S Spec Sheet: HXAF240c-M4S Spec Sheet: HX220c-M4S Spec Sheet: HX240c-M4SX Spec Sheet.
Model / Cores / Clock Speed / Cache / RAM Speed
E5-2699 v4: 22 cores, 2.2 GHz, 55 MB, 2400 MHz
E5-2698 v4: 20 cores, 2.2 GHz, 50 MB, 2400 MHz
E5-2697 v4: 18 cores, 2.3 GHz, 45 MB, 2400 MHz
E5-2690 v4: 14 cores, 2.6 GHz, 35 MB, 2400 MHz
E5-2683 v4: 16 cores, 2.1 GHz, 40 MB, 2400 MHz
E5-2680 v4: 14 cores, 2.4 GHz, 35 MB, 2400 MHz
E5-2667 v4: 8 cores, 3.2 GHz, 25 MB, 2400 MHz
E5-2660 v4: 14 cores, 2.0 GHz, 35 MB, 2400 MHz
E5-2650 v4: 12 cores, 2.2 GHz, 30 MB, 2400 MHz
E5-2640 v4: 10 cores, 2.4 GHz, 25 MB, 2133 MHz
E5-2630 v4: 10 cores, 2.2 GHz, 25 MB, 2133 MHz
E5-2620 v4: 8 cores, 2.1 GHz, 20 MB, 2133 MHz
Table 3 lists the hardware component options for the HXAF220c-M4S server model.

HXAF240c-M4S Options: Processors – choose a matching pair from the previous table of CPU options.
HX220c-M4S Options:
Processors: choose a matching pair from the previous table of CPU options.
Memory: 128 GB to 1.5 TB of total memory, using 16 GB to 64 GB DDR4 2133-2400 MHz 1.2v modules.
Disk Controller: Cisco 12Gbps Modular SAS HBA.
SSD: one 120 GB 2.5-inch Enterprise Value 6G SATA SSD, and one 480 GB 2.5-inch Enterprise Performance 6G SATA SSD (3X endurance).
HDD: six 1.2 TB SAS 12Gbps 10K rpm SFF HDDs.
Network: Cisco UCS VIC 1227 mLOM, dual 10 GbE ports.
Boot Devices: two 64 GB Cisco FlexFlash SD cards for Cisco UCS servers.
Table 6 lists the hardware component options for the HX240c-M4SX server model.

HX240c-M4SX Options: Processors – choose a matching pair from the previous table of CPU options.
The software revisions listed in Table 10 are the only valid and supported configuration at the time of the publishing of this validated design.

Special care must be taken not to alter the revision of the hypervisor, vCenter server, Cisco HX platform software, or the Cisco UCS firmware without first consulting the appropriate release notes and compatibility matrices to ensure that the system is not being modified into an unsupported configuration. This document does not cover the installation and configuration of VMware vCenter Server for Windows, or the vCenter Server Appliance. The vCenter Server must be installed and operational prior to the installation of the Cisco HyperFlex HX Data Platform software. The following best practice guidance applies to installations of HyperFlex 2.0.1a: Do not modify the default TCP port settings of the vCenter installation; using non-standard ports can lead to failures during the installation. It is recommended to build the vCenter server on a physical server or in a virtual environment outside of the HyperFlex cluster.

Building the vCenter server as a virtual machine inside the HyperFlex cluster environment is highly discouraged. There is a tech note for a method of deployment using a USB SSD as temporary storage if no external server is available. Cisco HyperFlex clusters currently scale up from a minimum of 3 to a maximum of 8 converged nodes per cluster, i.e. 8 nodes providing storage resources to the HX Distributed Filesystem.

For the compute-intensive “hybrid” cluster design, a configuration with 3 to 8 Cisco HX-series converged nodes can be combined with up to 8 compute-only nodes. Cisco UCS B200-M4 blades, C220-M4, or C240-M4 servers can be used for the compute-only nodes. The number of compute-only nodes must be less than or equal to the number of converged nodes. Once the maximum size of a cluster has been reached, the environment can be “scaled out” by adding additional HX model servers to the Cisco UCS domain, installing an additional HyperFlex cluster on them, and controlling them via the same vCenter server. A maximum of 8 HyperFlex clusters can be managed by a single vCenter server; therefore, the maximum size of a single HyperFlex environment is 64 converged nodes, plus up to 64 additional compute-only nodes.
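The scale-out arithmetic stated above, written out as a short calculation:

# Maximum HyperFlex environment size per vCenter server, per the limits above.
MAX_CLUSTERS_PER_VCENTER = 8
MAX_CONVERGED_PER_CLUSTER = 8
MAX_COMPUTE_PER_CLUSTER = 8  # compute-only count must not exceed converged count

max_converged = MAX_CLUSTERS_PER_VCENTER * MAX_CONVERGED_PER_CLUSTER
max_compute = MAX_CLUSTERS_PER_VCENTER * MAX_COMPUTE_PER_CLUSTER
print(f"Maximum environment: {max_converged} converged nodes "
      f"plus up to {max_compute} compute-only nodes per vCenter server")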

Overall usable cluster capacity is based on a number of factors. The number of nodes in the cluster must be considered, plus the number and size of the capacity layer disks.

Caching disk sizes are not calculated as part of the cluster capacity. The replication factor of the HyperFlex HX Data Platform also affects the cluster capacity as it defines the number of copies of each block of data written. Disk drive manufacturers have adopted a size reporting methodology using calculation by powers of 10, also known as decimal prefix. As an example, a 120 GB disk is listed with a minimum of 120 x 10^9 bytes of usable addressable capacity, or 120 billion bytes. However, many operating systems and filesystems report their space based on standard computer binary exponentiation, or calculation by powers of 2, also called binary prefix. In this example, 2^10 or 1024 bytes make up a kilobyte, 2^10 kilobytes make up a megabyte, 2^10 megabytes make up a gigabyte, and 2^10 gigabytes make up a terabyte.

As the values increase, the disparity between the two systems of measurement and notation gets worse; at the terabyte level, the deviation between a decimal prefix value and a binary prefix value is nearly 10%. The International System of Units (SI) defines values and decimal prefixes by powers of 10 as follows: Table 11 SI Unit Values (Decimal Prefix).
Value / Symbol / Name: 1000 bytes, KB, Kilobyte; 1000 KB, MB, Megabyte; 1000 MB, GB, Gigabyte; 1000 GB, TB, Terabyte.

The binary prefix values, as defined by the International Electrotechnical Commission (IEC), are as follows:
Value / Symbol / Name: 1024 bytes, KiB, Kibibyte; 1024 KiB, MiB, Mebibyte; 1024 MiB, GiB, Gibibyte; 1024 GiB, TiB, Tebibyte.
For the purpose of this document, the decimal prefix numbers are used only for raw disk capacity as listed by the respective manufacturers. For all calculations where raw or usable capacities are shown from the perspective of the HyperFlex software, filesystems or operating systems, the binary prefix numbers are used.
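The deviation described above can be checked with a few lines of Python; the 120 GB and 1.2 TB disk sizes used here are the manufacturer-rated decimal figures already cited in this document.

# Convert manufacturer-rated (decimal prefix) capacities into the binary
# prefix values reported by most filesystems and operating systems.
def decimal_to_gib(decimal_bytes):
    """Return the capacity in GiB for a decimal-rated size in bytes."""
    return decimal_bytes / (1024 ** 3)

disk_gb = 120        # 120 GB = 120 x 10^9 bytes
disk_tb = 1.2        # 1.2 TB = 1.2 x 10^12 bytes
print(f"{disk_gb} GB = {decimal_to_gib(disk_gb * 10**9):.1f} GiB")
print(f"{disk_tb} TB = {disk_tb * 10**12 / 1024**4:.3f} TiB "
      f"({(1 - 10**12 / 1024**4) * 100:.1f}% lower than the decimal figure)")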

This is done primarily to show a consistent set of values as seen by the end user from within the HyperFlex vCenter Web Plugin when viewing cluster capacity, allocation and consumption, and also within most operating systems. Table 13 lists a set of HyperFlex HX Data Platform cluster usable capacity values, using binary prefix, for an array of cluster configurations. These values are useful for determining the appropriate size of HX cluster to initially purchase, and how much capacity can be gained by adding capacity disks. The calculations for these values are listed in. The HyperFlex sizing tool is listed in.
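As a rough cross-check of such capacity tables, the sketch below converts the raw decimal-rated capacity of the disks to binary prefix and divides by the replication factor. It deliberately ignores the filesystem and metadata overhead that the published tables do account for, so treat the result as an upper-bound estimate only; the node and disk counts in the example are assumptions.

# Upper-bound usable capacity estimate: raw decimal capacity converted to
# binary prefix, divided by the replication factor (no overhead subtracted).
def usable_tib_estimate(nodes, disks_per_node, disk_size_tb_decimal, rf):
    raw_bytes = nodes * disks_per_node * disk_size_tb_decimal * 10**12
    raw_tib = raw_bytes / 1024**4
    return raw_tib / rf

if __name__ == "__main__":
    # Assumed example: 8 nodes with 23 x 1.2 TB capacity disks each, RF=3
    print(f"{usable_tib_estimate(8, 23, 1.2, 3):.1f} TiB (upper bound)")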

Installation of the HyperFlex system is primarily done through a deployable HyperFlex installer virtual machine, available for download at cisco.com as an OVA file. The installer VM does most of the Cisco UCS configuration work, can be leveraged to simplify the installation of ESXi on the HyperFlex hosts, and also performs significant portions of the ESXi configuration. Finally, the installer VM is used to install the HyperFlex HX Data Platform software and create the HyperFlex cluster. Because this simplified installation method has been developed by Cisco, this CVD will not give detailed manual steps for the configuration of all the elements that are handled by the installer. Instead, the elements configured will be described and documented in this section, and the subsequent sections will guide you through the manual steps needed for installation, and how to utilize the HyperFlex installer for the remaining configuration steps.
Cisco UCS network uplinks connect “northbound” from the pair of Cisco UCS Fabric Interconnects to the LAN in the customer datacenter.

All Cisco UCS uplinks operate as trunks, carrying multiple 802.1Q VLAN IDs across the uplinks. The default Cisco UCS behavior is to assume that all VLAN IDs defined in the Cisco UCS configuration are eligible to be trunked across all available uplinks.

Cisco UCS Fabric Interconnects appear on the network as a collection of endpoints, rather than as another network switch. Internally, the Fabric Interconnects do not participate in spanning-tree protocol (STP) domains, and the Fabric Interconnects cannot form a network loop, as they are not connected to each other with a layer 2 Ethernet link. All link up/down decisions via STP will be made by the upstream root bridges. Uplinks need to be connected and active from both Fabric Interconnects. For redundancy, multiple uplinks can be used on each FI, either as 802.3ad Link Aggregation Control Protocol (LACP) port-channels, or as individual links. For the best level of performance and redundancy, uplinks can be made as LACP port-channels to multiple upstream Cisco switches using the virtual port channel (vPC) feature.

Using vPC uplinks allows all uplinks to be active passing data, plus protects against any individual link failure, and the failure of an upstream switch. Other uplink configurations can be redundant, but spanning-tree protocol loop avoidance may disable links if vPC is not available.

All uplink connectivity methods must allow for traffic to pass from one Fabric Interconnect to the other, or from fabric A to fabric B. There are scenarios where cable, port or link failures would force traffic that normally does not leave the Cisco UCS domain to instead traverse the Cisco UCS uplinks. Additionally, this traffic flow pattern can be seen briefly during maintenance procedures, such as updating firmware on the Fabric Interconnects, which requires them to be rebooted. The following sections and figures detail several uplink connectivity options.
Single Uplinks to Single Switch
This connection design is susceptible to failures at several points; single uplink failures on either Fabric Interconnect can lead to connectivity losses or functional failures, and the failure of the single uplink switch will cause a complete connectivity outage.
Figure 26 Connectivity with Single Uplink to Single Switch
Port Channels to Single Switch
This connection design is now redundant against the loss of a single link, but remains susceptible to the failure of the single switch.

Single Uplinks or Port Channels to Multiple Switches
This connection design is redundant against the failure of an upstream switch, and redundant against a single link failure. In normal operation, STP is likely to block half of the links to avoid a loop across the two upstream switches.

The side effect of this is to reduce bandwidth between the Cisco UCS domain and the LAN. If any of the active links were to fail, STP would bring the previously blocked link online to provide access to that Fabric Interconnect via the other switch. It is not recommended to connect both links from a single FI to a single switch, as that configuration is susceptible to a single switch failure breaking connectivity from fabric A to fabric B. For enhanced redundancy, the single links in the figure below could also be port-channels.
Figure 28 Connectivity with Multiple Uplink Switches
vPC to Multiple Switches
This recommended connection design relies on using Cisco switches that have the virtual port channel feature, such as Catalyst 6000 series switches running VSS, Cisco Nexus 5000 series, and Cisco Nexus 9000 series switches.

Logically the two vPC enabled switches appear as one, and therefore spanning-tree protocol will not block any links. This configuration allows for all links to be active, achieving maximum bandwidth potential, and multiple redundancy at each level.

For the base HyperFlex system configuration, multiple VLANs need to be carried to the Cisco UCS domain from the upstream LAN, and these VLANs are also defined in the Cisco UCS configuration. The following table lists the VLANs created by the HyperFlex installer in Cisco UCS, and their functions: Table 14 VLANs.
A dedicated network or subnet for physical device management is often used in datacenters.

In this scenario, the mgmt0 interfaces of the two Fabric Interconnects would be connected to that dedicated network or subnet. This is a valid configuration for HyperFlex installations with the following caveat: wherever the HyperFlex installer is deployed, it must have IP connectivity to the subnet of the mgmt0 interfaces of the Fabric Interconnects, and also have IP connectivity to the subnets used by the hx-inband-mgmt VLANs listed above. All HyperFlex storage traffic traversing the hx-storage-data VLAN and subnet is configured to use jumbo frames; to be precise, all communication is configured to send IP packets with a Maximum Transmission Unit (MTU) size of 9000 bytes.
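A common way to validate jumbo frames end to end (a general practice noted here, not a step prescribed by this document) is to send do-not-fragment pings sized to exactly fill a 9000-byte MTU; the arithmetic behind the usual 8972-byte test payload is shown below.

# Largest ICMP payload that fits in a 9000-byte MTU without fragmentation.
MTU = 9000
IPV4_HEADER = 20
ICMP_HEADER = 8
test_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(f"MTU {MTU}: largest unfragmented ICMP payload = {test_payload} bytes")
# e.g. from an ESXi host: vmkping -I vmk1 -d -s 8972 <remote storage IP>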

Using a larger MTU value means that each IP packet sent carries a larger payload, therefore transmitting more data per packet, and consequently sending and receiving data faster. This requirement also means that the Cisco UCS uplinks must be configured to pass jumbo frames. Failure to configure the Cisco UCS uplink switches to allow jumbo frames can lead to service interruptions during some failure scenarios, particularly when cable or port failures would cause storage traffic to traverse the northbound Cisco UCS uplink switches.
This section on Cisco UCS design describes the elements within Cisco UCS Manager that are configured by the Cisco HyperFlex installer. Many of the configuration elements are fixed in nature, while the HyperFlex installer does allow some items to be specified at the time of creation, for example VLAN names and IDs, external management IP pools and more.

Where the elements can be manually set during the installation, those user-defined items are noted as such in the tables that follow. During the HyperFlex installation a Cisco UCS Sub-Organization is created. You can specify a unique Sub-Organization name, for example “hx-cluster”. The sub-organization is created underneath the root level of the Cisco UCS hierarchy, and is used to contain all policies, pools, templates and service profiles used by HyperFlex. This arrangement allows for organizational control using Role-Based Access Control (RBAC) and administrative locales at a later time if desired. In this way, control can be granted to administrators of only the HyperFlex-specific elements of the Cisco UCS domain, separate from control of root level elements or elements in other sub-organizations.

You can change the name of the HX Sub-Organization during the HyperFlex installation. It is also required that each cluster have a different Sub-Organization name if multiple HyperFlex clusters are connected to the same Cisco UCS domain.
QoS System Classes
Specific Cisco UCS Quality of Service (QoS) system classes are defined for a Cisco HyperFlex system. These classes define Class of Service (CoS) values that can be used by the uplink switches north of the Cisco UCS domain, plus which classes are active, along with whether packet drop is allowed, the relative weight of the different classes when there is contention, the maximum transmission unit (MTU) size, and if there is multicast optimization applied. QoS system classes are defined for the entire Cisco UCS domain, and the classes that are enabled can later be used in QoS policies, which are then assigned to Cisco UCS vNICs.

The following table and figure detail the QoS System Class settings configured for HyperFlex.
Platinum: Enabled, CoS 5, Packet Drop No, Weight 4, MTU 9216, Multicast Optimized No
Gold: Enabled, CoS 4, Packet Drop Yes, Weight 4, MTU Normal, Multicast Optimized No
Silver: Enabled, CoS 2, Packet Drop Yes, Weight Best-effort, MTU Normal, Multicast Optimized Yes
Bronze: Enabled, CoS 1, Packet Drop Yes, Weight Best-effort, MTU 9216, Multicast Optimized No
Best Effort: Enabled, CoS Any, Packet Drop Yes, Weight Best-effort, MTU Normal, Multicast Optimized No
Fibre Channel: Enabled, CoS 3, Packet Drop No, Weight 5, MTU FC, Multicast Optimized N/A
QoS Policies
In order to apply the settings defined in the Cisco UCS QoS System Classes, specific QoS Policies must be created, and then assigned to the vNICs or vNIC templates used in Cisco UCS Service Profiles. The following table details the QoS Policies configured for HyperFlex, and their default assignment to the vNIC templates created.
Platinum policy: Priority Platinum, Burst 10240, Rate Line-rate, Host Control None; used by vNIC templates storage-data-a and storage-data-b
Gold policy: Priority Gold, Burst 10240, Rate Line-rate, Host Control None; used by vNIC templates vm-network-a and vm-network-b
Silver policy: Priority Silver, Burst 10240, Rate Line-rate, Host Control None; used by vNIC templates hv-mgmt-a and hv-mgmt-b
Bronze policy: Priority Bronze, Burst 10240, Rate Line-rate, Host Control None; used by vNIC templates hv-vmotion-a and hv-vmotion-b
Best Effort policy: Priority Best Effort, Burst 10240, Rate Line-rate, Host Control None; not assigned to a vNIC template
Multicast Policy
A Cisco UCS Multicast Policy is configured by the HyperFlex installer, which is referenced by the VLANs that are created. The policy allows for future flexibility if a specific multicast policy needs to be created and applied to other VLANs that may be used by non-HyperFlex workloads in the Cisco UCS domain.

The following table and figure detail the Multicast Policy configured for HyperFlex: the policy named HyperFlex has IGMP Snooping State Enabled and IGMP Snooping Querier State Disabled.
VLANs
VLANs are created by the HyperFlex installer to support a base HyperFlex system, with a VLAN for vMotion, and a single or multiple VLANs defined for guest VM traffic. Names and IDs for the VLANs are defined in the Cisco UCS configuration page of the HyperFlex installer web interface.

The VLANs listed in Cisco UCS must already be present on the upstream network, and the Cisco UCS FIs do not participate in VLAN Trunk Protocol (VTP). The following table and figure detail the VLANs configured for HyperFlex; each of the four VLANs, with names and IDs defined during installation, is created with Type LAN, Transport Ether, Native VLAN No, VLAN Sharing None, and Multicast Policy HyperFlex.
Figure 33 Cisco UCS VLANs
Management IP Address Pool
A Cisco UCS Management IP Address Pool must be populated with a block of IP addresses.

These IP addresses are assigned to the Cisco Integrated Management Controller (CIMC) interface of the rack-mount and blade servers that are managed in the Cisco UCS domain. The IP addresses are the communication endpoints for various functions, such as remote KVM, virtual media, Serial over LAN (SoL), and Intelligent Platform Management Interface (IPMI), for each rack-mount or blade server. Therefore, a minimum of one IP address per physical server in the domain must be provided. The IP addresses are considered to be “out-of-band” addresses, meaning that the communication pathway uses the Fabric Interconnects’ mgmt0 ports, which answer ARP requests for the management addresses. Because of this arrangement, the IP addresses in this pool must be in the same IP subnet as the IP addresses assigned to the Fabric Interconnects’ mgmt0 ports. The default pool, named “ext-mgmt”, is populated with a block of IP addresses, a subnet mask, and a default gateway by the HyperFlex installer.

Figure 35 IP Address Block
MAC Address Pools
One of the core benefits of the Cisco UCS and Virtual Interface Card (VIC) technology is the assignment of the personality of the card via Cisco UCS Service Profiles. The number of virtual NIC (vNIC) interfaces, their VLAN association, MAC addresses, QoS policies and more are all applied dynamically as part of the association process. Media Access Control (MAC) addresses use 6 bytes of data as a unique address to identify the interface on the layer 2 network. All devices are assigned a unique MAC address, which is ultimately used for all data transmission and reception. The Cisco UCS and VIC technology picks a MAC address from a pool of addresses, and assigns it to each vNIC defined in the service profile when that service profile is created. Best practices mandate that MAC addresses used for Cisco UCS domains use 00:25:B5 as the first three bytes, which is one of the Organizationally Unique Identifiers (OUI) registered to Cisco Systems, Inc. The fourth byte (e.g.

00:25:B5:xx) is specified during the HyperFlex installation. The fifth byte is set automatically by the HyperFlex installer, to correlate to the Cisco UCS fabric and the vNIC placement order. Finally, the last byte is incremented according to the number of MAC addresses created in the pool. To avoid overlaps, you must ensure that the first four bytes of the MAC address pools are unique for each HyperFlex system installed in the same layer 2 network, and also different from other Cisco UCS domains which may exist, when you define the values in the HyperFlex installer (see the numbering sketch below). The following table details the MAC Address Pools configured for HyperFlex, and their default assignment to the vNIC templates created: Table 19 MAC Address Pools.
Network Control Policies
The settings listed below (Cisco Discovery Protocol, MAC register mode, action on uplink failure, and MAC security) detail the Network Control Policies configured for HyperFlex, and their default assignment to the vNIC templates created:
HyperFlex-infra: CDP Enabled, MAC Register Mode Only Native VLAN, Action on Uplink Fail Link-down, MAC Security Forged: Allow; used by vNIC templates hv-mgmt-a, hv-mgmt-b, hv-vmotion-a, hv-vmotion-b, storage-data-a, storage-data-b
HyperFlex-vm: CDP Enabled, MAC Register Mode Only Native VLAN, Action on Uplink Fail Link-down, MAC Security Forged: Allow; used by vNIC templates vm-network-a, vm-network-b
vNIC Templates
Cisco UCS Manager has a feature to configure vNIC templates, which can be used to simplify and speed up configuration efforts. vNIC templates are referenced in service profiles and LAN connectivity policies, rather than configuring the same vNICs individually in each service profile or service profile template.
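As a reference for the MAC address numbering scheme described above, the sketch below generates a pool from the Cisco OUI, a user-chosen fourth byte, a fifth byte correlating to fabric and vNIC placement, and an incrementing last byte. The specific fourth- and fifth-byte values shown are illustrative assumptions, not installer output.

# Illustrative MAC pool generator following the 00:25:B5 OUI scheme above.
def mac_pool(fourth_byte, fifth_byte, size):
    """Yield MAC addresses 00:25:B5:<fourth>:<fifth>:00 .. <size-1>."""
    for last in range(size):
        yield f"00:25:B5:{fourth_byte:02X}:{fifth_byte:02X}:{last:02X}"

if __name__ == "__main__":
    # Assumed example: fourth byte 0xA0 chosen during installation,
    # fifth byte 0xA1 representing a fabric A vNIC placement.
    for mac in mac_pool(0xA0, 0xA1, 4):
        print(mac)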

vNIC templates contain all the configuration elements that make up a vNIC, including VLAN assignment, MAC address pool selection, fabric A or B assignment, fabric failover, MTU, QoS policy, Network Control Policy, and more. Templates are created as either initial templates or updating templates. Updating templates retain a link between the parent template and the child object, therefore when changes are made to the template, the changes are propagated to all remaining linked child objects. The following tables detail the settings in each of the vNIC templates created by the HyperFlex installer. For example, the vm-network-b template is configured as follows:
vNIC Template Name: vm-network-b; Fabric ID: B; Fabric Failover: Disabled; Target: Adapter; Template Type: Updating Template; MTU: 1500; MAC Pool: vm-network-b; QoS Policy: gold; Network Control Policy: HyperFlex-vm; VLANs: the guest VM VLANs defined during installation, Native: no.
LAN Connectivity Policies
Cisco UCS Manager has a feature for LAN Connectivity Policies, which aggregates all of the vNICs or vNIC templates desired for a service profile configuration into a single policy definition.

This simplifies configuration efforts by defining a collection of vNICs or vNIC templates once, and using that policy in the service profiles or service profile templates. The HyperFlex installer configures a LAN Connectivity Policy named HyperFlex, which contains all of the vNIC templates defined in the previous section, along with an Adapter Policy named HyperFlex, also configured by the HyperFlex installer. The following table details the LAN Connectivity Policy configured for HyperFlex: the HyperFlex policy pairs the vNICs hv-mgmt-a, hv-mgmt-b, hv-vmotion-a, hv-vmotion-b, storage-data-a, storage-data-b, vm-network-a and vm-network-b with the vNIC templates of the same names, all using the HyperFlex Adapter Policy.
Adapter Policies
Cisco UCS Adapter Policies are used to configure various settings of the Converged Network Adapter (CNA) installed in the Cisco UCS blade or rack-mount servers.

Various advanced hardware features can be enabled or disabled depending on the software or operating system being used. The following figures detail the Adapter Policy configured for HyperFlex.
BIOS Policies
Cisco HX-Series servers have a set of pre-defined BIOS setting defaults defined in Cisco UCS Manager. These settings have been optimized for the Cisco HX-Series servers running HyperFlex.

The HyperFlex installer creates a BIOS policy named “HyperFlex”, with all settings set to the defaults, except for enabling the Serial Port A for Serial over LAN (SoL) functionality. This policy allows for future flexibility in case situations arise where the settings need to be modified from the default configuration.
Boot Policies
Cisco UCS Boot Policies define the boot devices used by blade and rack-mount servers, and the order that they are attempted to boot from. Cisco HX-Series rack-mount servers, the compute-only Cisco UCS B200-M4 blade servers, and the compute-only Cisco UCS C220-M4 or Cisco UCS C240-M4 rack-mount servers have their VMware ESXi hypervisors installed to an internal pair of mirrored Cisco FlexFlash SD cards, therefore they require a boot policy defining that the servers should boot from that location. The HyperFlex installer configures a boot policy named “HyperFlex” which specifies boot from SD card.

The following figure details the HyperFlex Boot Policy configured to boot from SD card.
Host Firmware Packages
Cisco UCS Host Firmware Packages represent one of the most powerful features of the Cisco UCS platform: the ability to control the firmware revision of all the managed blades and rack-mount servers via a policy specified in the service profile. Host Firmware Packages are defined and referenced in the service profiles. Once a service profile is associated to a server, the firmware of all the components defined in the Host Firmware Package are automatically upgraded or downgraded to match the package. The HyperFlex installer creates a Host Firmware Package named “HyperFlex” which uses the simple package definition method, applying firmware revisions to all components that match a specific Cisco UCS firmware bundle, rather than defining the firmware revisions part by part.

The following figure details the Host Firmware Package configured by the HyperFlex installer:
Figure 41 Cisco UCS Host Firmware Package
Local Disk Configuration Policies
Cisco UCS Local Disk Configuration Policies are used to define the configuration of disks installed locally within each blade or rack-mount server, most often to configure Redundant Array of Independent/Inexpensive Disks (RAID) levels when multiple disks are present for data protection. Since HX-Series converged nodes providing storage resources do not require RAID, the HyperFlex installer creates a Local Disk Configuration Policy named “HyperFlex” which allows any local disk configuration. The policy also defines settings for the embedded FlexFlash SD cards used to boot the VMware ESXi hypervisor.

The following figure details the Local Disk Configuration Policy configured by the HyperFlex installer.
Maintenance Policies
Cisco UCS Maintenance Policies define the behavior of the attached blades and rack-mount servers when changes are made to the associated service profiles. The default Cisco UCS Maintenance Policy setting is “Immediate”, meaning that any change to a service profile that requires a reboot of the physical server will result in an immediate reboot of that server. The Cisco best practice is to use a Maintenance Policy set to “user-ack”, which requires a secondary acknowledgement by a user with the appropriate rights within Cisco UCS before the server is rebooted to apply the changes. The HyperFlex installer creates a Maintenance Policy named “HyperFlex” with the setting changed to “user-ack”. The following figure details the Maintenance Policy configured by the HyperFlex installer.
Power Control Policies
Cisco UCS Power Control Policies allow administrators to set priority values for power application to servers in environments where power supply may be limited, during times when the servers demand more power than is available.

The HyperFlex installer creates a Power Control Policy named “HyperFlex” with all power capping disabled, and fans allowed to run at full speed when necessary. The following figure details the Power Control Policy configured by the HyperFlex installer.
Scrub Policies
Cisco UCS Scrub Policies are used to scrub, or erase, data from local disks, BIOS settings and FlexFlash SD cards. If the policy settings are enabled, the information is wiped when the service profile using the policy is disassociated from the server. The HyperFlex installer creates a Scrub Policy named “HyperFlex” which has all settings disabled, therefore all data on local disks, SD cards and BIOS settings will be preserved if a service profile is disassociated. The following figure details the Scrub Policy configured by the HyperFlex installer:
Figure 45 Cisco UCS Scrub Policy
Serial over LAN Policies
Cisco UCS Serial over LAN (SoL) Policies enable console output, which is sent to the serial port of the server, to be accessible via the LAN.

For many Linux-based operating systems, such as VMware ESXi, the local serial port can be configured as a local console, where users can watch the system boot and communicate with the system command prompt interactively. Since many blade servers do not have physical serial ports, and administrators are often working remotely, the ability to send and receive that traffic via the LAN is very helpful. Connections to an SoL session can be initiated from Cisco UCS Manager. The HyperFlex installer creates an SoL policy named “HyperFlex” to enable SoL sessions. The following figure details the SoL Policy configured by the HyperFlex installer.
vMedia Policies
Cisco UCS Virtual Media (vMedia) Policies automate the connection of virtual media files to the remote KVM session of the Cisco UCS blades and rack-mount servers. Using a vMedia policy can speed up installation time by automatically attaching an installation ISO file to the server, without having to manually launch the remote KVM console and connect to each one. The HyperFlex installer creates a vMedia Policy named “HyperFlex” for future use, with no media locations defined.

The following figure details the vMedia Policy configured by the HyperFlex installer:
Figure 47 Cisco UCS vMedia Policy
Cisco UCS Manager has a feature to configure service profile templates, which can be used to simplify and speed up configuration efforts when the same configuration needs to be applied to multiple servers. Service profile templates are used to spawn multiple service profile copies to associate with the servers, rather than configuring the same service profile manually each time it is needed.

Service profile templates contain all the configuration elements that make up a service profile, including vNICs, vHBAs, local disk configurations, boot policies, host firmware packages, BIOS policies and more. Templates are created as either initial templates, or updating templates. Updating templates retain a link between the parent template and the child object, therefore when changes are made to the template, the changes are propagated to all remaining linked child objects.

The HyperFlex installer creates two service profile templates, named “hx-nodes” and “compute-nodes”, each with the same configuration. This simplifies future efforts if the configuration of the compute-only nodes needs to differ from the configuration of the HyperFlex converged storage nodes. The following table details the service profile templates configured by the HyperFlex installer: Table 30 Cisco UCS Service Profile Templates Configured by HyperFlex.

ESXi VMDirectPath relies on a fixed PCI address for the pass-through devices. If the vNIC configuration is changed (add/remove vNICs), then the order of the devices seen in the PCI tree will change. The administrator will have to reconfigure the ESXi VMDirectPath configuration to select the 12 Gbps SAS HBA card, and reconfigure the storage controller settings of the controller VM. The following sections detail the design of the elements within the VMware ESXi hypervisors, system requirements, virtual networking and the configuration of ESXi for the Cisco HyperFlex HX Distributed Data Platform. The Cisco HyperFlex system has a pre-defined virtual network design at the ESXi hypervisor level. Four different virtual switches are created by the HyperFlex installer, each using two uplinks, which are each serviced by a vNIC defined in the Cisco UCS service profile. The vSwitches created are: vswitch-hx-inband-mgmt: This is the default vSwitch0 which is renamed by the ESXi kickstart file as part of the automated installation.

The default vmkernel port, vmk0, is configured in the standard Management Network port group. The switch has two uplinks, active on fabric A and standby on fabric B, without jumbo frames. A second port group is created for the Storage Platform Controller VMs to connect to with their individual management interfaces. The VLAN is not a Native VLAN as assigned to the vNIC template, and therefore assigned in ESXi/vSphere.
vswitch-hx-storage-data: This vSwitch is created as part of the automated installation. A vmkernel port, vmk1, is configured in the Storage Hypervisor Data Network port group, which is the interface used for connectivity to the HX Datastores via NFS. The switch has two uplinks, active on fabric B and standby on fabric A, with jumbo frames highly recommended.

A second port group is created for the Storage Platform Controller VMs to connect to with their individual storage interfaces. The VLAN is not a Native VLAN as assigned to the vNIC template, and therefore assigned in ESXi/vSphere.
vswitch-hx-vm-network: This vSwitch is created as part of the automated installation.

The switch has two uplinks, active on both fabrics A and B, and without jumbo frames. The VLAN is not a Native VLAN as assigned to the vNIC template, and therefore assigned in ESXi/vSphere.
vmotion: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on fabric A and standby on fabric B, with jumbo frames highly recommended.

The VLAN is not a Native VLAN as assigned to the vNIC template, and therefore assigned in ESXi/vSphere. The following table and figures give more detail on the ESXi virtual networking design as built by the HyperFlex installer by default.
vswitch-hx-inband-mgmt: port groups Management Network and Storage Controller Management Network; active vmnic0, standby vmnic1; management VLAN ID defined during installation; jumbo frames: no.
vswitch-hx-storage-data: port groups Storage Controller Data Network and Storage Hypervisor Data Network; active vmnic3, standby vmnic2; storage VLAN ID defined during installation; jumbo frames: yes.
vswitch-hx-vm-network: no port groups created by default; uplinks vmnic4 and vmnic5 active on both fabrics; guest VM VLAN IDs defined during installation; jumbo frames: no.
vmotion: no port groups created by default; active vmnic6, standby vmnic7; vMotion VLAN ID defined during installation; jumbo frames: yes.
VMDirectPath I/O allows a guest VM to directly access PCI and PCIe devices in an ESXi host as though they were physical devices belonging to the VM itself, also referred to as PCI pass-through. With the appropriate driver for the hardware device, the guest VM sends all I/O requests directly to the physical device, bypassing the hypervisor. In the Cisco HyperFlex system, the Storage Platform Controller VMs use this feature to gain full control of the Cisco 12Gbps SAS HBA cards in the Cisco HX-series rack-mount servers. This gives the controller VMs direct hardware-level access to the physical disks installed in the servers, which they consume to construct the Cisco HX Distributed Filesystem.

Only the disks connected directly to the Cisco SAS HBA, or to a SAS extender that is in turn connected to the SAS HBA, are controlled by the controller VMs. Other disks, connected to different controllers, such as the SD cards, remain under the control of the ESXi hypervisor.

The configuration of the VMDirectPath I/O feature is done by the Cisco HyperFlex installer, and requires no manual steps. A key component of the Cisco HyperFlex system is the Storage Platform Controller Virtual Machine running on each of the nodes in the HyperFlex cluster. The controller VMs cooperate to form and coordinate the Cisco HX Distributed Filesystem, and service all the guest VM IO requests. The controller VMs are deployed as vSphere ESXi agents, which are similar in concept to Linux or Windows services. ESXi agents are tied to a specific host, they start and stop along with the ESXi hypervisor, and the system is not considered to be online and ready until both the hypervisor and the agents have started. Each ESXi hypervisor host has a single ESXi agent deployed, which is the controller VM for that node, and it cannot be moved or migrated to another host. The collective ESXi agents are managed via an ESXi agency in the vSphere cluster.

The storage controller VM runs custom software and services that manage and maintain the Cisco HX Distributed Filesystem. The services and processes that run within the controller VMs are not exposed as part of the ESXi agents to the agency; therefore, neither the ESXi hypervisors nor the vCenter server have any direct knowledge of the storage services provided by the controller VMs. Management and visibility into the function of the controller VMs, and the Cisco HX Distributed Filesystem, is done via a plugin installed to the vCenter server or appliance managing the vSphere cluster. The plugin communicates directly with the controller VMs to display the information requested, or make the configuration changes directed, all while operating within the same web-based interface of the vSphere Web Client. The deployment of the controller VMs, agents, agency, and vCenter plugin is all done by the Cisco HyperFlex installer, and requires no manual steps.
Controller VM Locations
The physical storage location of the controller VMs differs among the Cisco HX-Series rack servers, due to differences with the physical disk location and connections on those server models.

The storage controller VM is operationally no different from any other typical virtual machine in an ESXi environment. The VM must have a virtual disk with the bootable root filesystem available in a location separate from the SAS HBA that the VM is controlling via VMDirectPath I/O. The configuration details of the models are as follows:
HX220c (including HXAF220c): The controller VM’s root filesystem is stored on a 2.2 GB virtual disk, /dev/sda, which is placed on a 3.5 GB VMFS datastore, and that datastore is provisioned from the internal mirrored SD cards. The controller VM has full control of all the front-facing hot-swappable disks via PCI pass-through control of the SAS HBA. The controller VM operating system sees the 120 GB SSD, also commonly called the “housekeeping” disk, as /dev/sdb, and places HyperFlex binaries, logs, and zookeeper partitions on this disk. The remaining disks seen by the controller VM OS are used by the HX Distributed filesystem for caching and capacity layers.
HX240c (including HXAF240c): The HX240c-M4SX server has a built-in SATA controller provided by the Intel Wellsburg Platform Controller Hub (PCH) chip, and the 120 GB housekeeping disk is connected to it, placed in an internal drive carrier.

Since this model does not connect the 120 GB housekeeping disk to the SAS HBA, the ESXi hypervisor remains in control of this disk, and a VMFS datastore is provisioned there, using the entire disk. On this VMFS datastore, a 2.2 GB virtual disk is created and used by the controller VM as /dev/sda for the root filesystem, and an 87 GB virtual disk is created and used by the controller VM as /dev/sdb, placing the HyperFlex binaries, logs, and zookeeper partitions on this disk.

The front-facing hot swappable disks, seen by the controller VM OS via PCI pass-through control of the SAS HBA, are used by the HX Distributed filesystem for caching and capacity layers. The following figures detail the Storage Platform Controller VM placement on the ESXi hypervisor hosts. The HyperFlex compute-only Cisco UCS B200-M4 server blades, or Cisco UCS C220-M4 or Cisco UCS C240-M4 rack-mount servers also place a lightweight storage controller VM on a 3.5 GB VMFS datastore, which can be provisioned from the SD cards.

Figure 51 HX240c Controller VM Placement
HyperFlex Datastores
The new HyperFlex cluster has no default datastores configured for virtual machine storage, therefore the datastores must be created using the vCenter Web Client plugin. A minimum of two datastores is recommended to satisfy vSphere High Availability datastore heartbeat requirements, although one of the two datastores can be very small. It is important to recognize that all HyperFlex datastores are thinly provisioned, meaning that their configured size can far exceed the actual space available in the HyperFlex cluster. Alerts will be raised by the HyperFlex system in the vCenter plugin when actual space consumption results in low amounts of free space, and alerts will also be sent via Auto-Support email.

Overall space consumption in the HyperFlex clustered filesystem is optimized by the default deduplication and compression features.
Figure 52 Datastore Example
CPU Resource Reservations
Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure CPU resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have CPU resources at a minimum level, in situations where the physical CPU resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. The following table details the CPU resource reservation of the storage controller VMs: 8 vCPUs, Shares Low, Reservation 10800 MHz, Limit unlimited.
Memory Resource Reservations
Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure memory resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have memory resources at a minimum level, in situations where the physical memory resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. The following table details the memory resource reservation of the storage controller VMs.

Cisco HyperFlex systems are ordered with a factory pre-installation process having been done prior to the hardware delivery. This factory integration work will deliver the HyperFlex servers with the proper firmware revisions pre-set, a copy of the VMware ESXi hypervisor software pre-installed, and some components of the Cisco HyperFlex software already installed. Once on site, the final steps to be performed by Cisco Advanced Services or our Cisco Partner companies’ technical staff are reduced and simplified due to the previous factory work.

For the purposes of this document, the entire setup process is described as though no factory pre-installation work was done, yet still leveraging the tools and processes developed by Cisco to simplify the process and dramatically reduce the deployment time. Installation of the Cisco HyperFlex system is primarily done via a deployable HyperFlex installer virtual machine, available for download at cisco.com as an OVA file. The installer VM performs the Cisco UCS configuration work, the installation of ESXi on the HyperFlex hosts, the installation of the HyperFlex HX Data Platform software and creation of the HyperFlex cluster, while concurrently performing many of the ESXi configuration tasks automatically. Because this simplified installation method has been developed by Cisco, this CVD will not give detailed manual steps for the configuration of all the elements that are handled by the installer. The following sections will guide you through the prerequisites and manual steps needed prior to using the HyperFlex installer, how to utilize the HyperFlex installer, and finally how to perform the remaining post-installation tasks.

Prior to beginning the installation activities, it is important to gather the following information: To deploy the HX Data Platform, an OVF installer appliance hosted on a separate ESXi server, which is not a member of the vCenter HyperFlex cluster, is required. The HyperFlex installer requires one IP address on the management network, and the HX installer appliance IP address must be reachable by Cisco UCS Manager, the ESXi management IP addresses on the HX hosts, and the vCenter IP addresses where HX hosts are added. IP addresses for the Cisco HyperFlex system need to be allocated from the appropriate subnets and VLANs to be used. IP addresses that are used by the system fall into the following groups:
Cisco UCS Manager: These addresses are used and assigned by Cisco UCS Manager. Three IP addresses are used by Cisco UCS Manager: one address is assigned to each Cisco UCS Fabric Interconnect, and the third IP address is a roaming address for managing the active FI of the Cisco UCS cluster.

In addition, at least one IP address per Cisco UCS blade or HX-series rack-mount server is required for the default ext-mgmt IP address pool, which are assigned to the CIMC interface of the physical servers. Since these management addresses are assigned from a pool, they need to be provided in a contiguous block of addresses. These addresses must all be in the same subnet. HyperFlex and ESXi Management: These addresses are used to manage the ESXi hypervisor hosts, and the HyperFlex Storage Platform Controller VMs. Two IP addresses per node in the HyperFlex cluster are required from the same subnet, and a single additional IP address is needed as the roaming HyperFlex cluster management interface.

These addresses can be assigned from the same subnet as the Cisco UCS Manager addresses, or they may be separate. HyperFlex Storage: These addresses are used by the HyperFlex Storage Platform Controller VMs, and as vmkernel interfaces on the ESXi hypervisor hosts, for sending and receiving data to/from the HX Distributed Data Platform Filesystem.

Two IP addresses per node in the HyperFlex cluster are required from the same subnet, and a single additional IP address is needed as the roaming HyperFlex cluster storage interface. It is recommended to provision a subnet that is not used in the network for other purposes, and it is also possible to use non-routable IP address ranges for these interfaces. Finally, if the Cisco UCS domain is going to contain multiple HyperFlex clusters, it is possible to use a different subnet and VLAN ID for the HyperFlex storage traffic of each cluster. This is a safer method, guaranteeing that storage traffic from multiple clusters cannot intermix. VMotion: These IP addresses are used by the ESXi hypervisor hosts as vmkernel interfaces to enable vMotion capabilities. One or more IP addresses per node in the HyperFlex cluster are required from the same subnet. Multiple addresses and vmkernel interfaces can be used if you wish to enable multi-NIC vMotion. A worked tally of the addresses consumed by an 8 node cluster is shown below.
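The following tally, derived from the rules above, is a quick worked example of how many addresses an 8 node standard cluster consumes; it is illustrative only and should be adjusted to your own node count and vMotion design.

Cisco UCS Manager addresses     : 3              (one per Fabric Interconnect, plus one roaming address)
ext-mgmt CIMC pool              : 8              (one per HX-series server)
HyperFlex and ESXi Management   : 2 x 8 + 1 = 17 (ESXi mgmt + controller mgmt per node, plus the cluster roaming IP)
HyperFlex Storage               : 2 x 8 + 1 = 17 (ESXi storage vmkernel + controller storage per node, plus the cluster roaming IP)
vMotion                         : 8              (minimum of one per node)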

The following tables will assist with gathering the required IP addresses for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the addresses required, and an example configuration.

Table 37 HyperFlex Cluster Example IP Addressing

For each Network Group (Cisco UCS Management, HyperFlex and ESXi Management, HyperFlex Storage, and VMotion), record the VLAN ID, Subnet, Subnet Mask, and Gateway. In the standard cluster worksheet, record the Cisco UCS Management Addresses of Fabric Interconnect A, Fabric Interconnect B, and Cisco UCS Manager, the HyperFlex Cluster roaming addresses, and then for HyperFlex Node #1 through HyperFlex Node #8 the ESXi Management Interface, Storage Controller Management Interface, ESXi Hypervisor Storage vmkernel Interface, Storage Controller Storage Interface, and VMotion vmkernel Interface addresses. In the 4+4 hybrid cluster worksheet, record the same information for HyperFlex Node #1 through HyperFlex Node #4 and for Blade #1 through Blade #4.

The Cisco UCS Management, and HyperFlex and ESXi Management IP addresses can come from the same subnet, or be separate, as long as the HyperFlex installer can reach them both. By default, the HXDP 2.0 installation will assign a static IP address to the management interface of the ESXi servers. Using Dynamic Host Configuration Protocol (DHCP) for automatic IP address assignment is not recommended. It is highly recommended that DNS servers be configured and available for queries from the HyperFlex and ESXi Management group. DNS records need to be created prior to beginning the installation. At a minimum, it is highly recommended to create A records for the ESXi hypervisor hosts' management interfaces.

Additional A records can be created for the Storage Controller Management interfaces, ESXi Hypervisor Storage interfaces, and the Storage Controller Storage interfaces if desired. The following tables will assist with gathering the required DNS information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration.

NTP Server #1: 171.68.38.65
NTP Server #2: 171.68.38.66
Timezone: (UTC-8:00) Pacific Time

A quick check of DNS and NTP reachability prior to beginning the installation is sketched below.
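As a sanity check before launching the installer, the DNS A records and NTP servers can be queried from the installer VM or an administrative workstation. This is a minimal sketch: the hostname and DNS server shown are hypothetical placeholders, the NTP addresses are the example servers listed above, and the ntpdate utility may need to be installed separately.

# Verify that an ESXi management A record resolves (hostname and DNS server are placeholders)
nslookup hx-esxi-01.example.com 10.29.133.110

# Verify that the example NTP servers respond (requires the ntpdate package)
ntpdate -q 171.68.38.65
ntpdate -q 171.68.38.66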

Prior to the installation, the required VLAN IDs need to be documented, and created in the upstream network if necessary. At a minimum there are 4 VLANs that need to be trunked to the Cisco UCS Fabric Interconnects that comprise the HyperFlex system; a VLAN for the HyperFlex and ESXi Management group, a VLAN for the HyperFlex Storage group, a VLAN for the VMotion group, and at least one VLAN for the guest VM traffic. The VLAN IDs must be supplied during the HyperFlex Cisco UCS configuration step, and the VLAN names can optionally be customized. The following tables will assist with gathering the required VLAN information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration.

Name: hx-inband-mgmt, ID: 3031
Name: hx-storage-data, ID: 3032
Name: vm-network, ID: 3034
Name: hx-vmotion, ID: 3033

The Cisco UCS uplink connectivity design needs to be finalized prior to beginning the installation. One of the early manual tasks to be completed is to configure the Cisco UCS network uplinks and verify their operation, prior to beginning the HyperFlex installation steps. Refer to the network uplink design possibilities in the Network Design section. The following tables will assist with gathering the required network uplink information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration.

Fabric Interconnect A, Ports 1/25 and 1/26: Port Channel: Yes, Port Channel Type: LACP vPC, Port Channel ID: 13, Port Channel Name: vpc-13-nexus
Fabric Interconnect B, Ports 1/25 and 1/26: Port Channel: Yes, Port Channel Type: LACP vPC, Port Channel ID: 14, Port Channel Name: vpc-14-nexus

Several usernames and passwords need to be defined or known as part of the HyperFlex installation process.

The following tables will assist with gathering the required username and password information for the installation of an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example configuration. A worksheet should also be completed for the HyperFlex installer virtual machine settings: Hostname, IP Address, Subnet Mask, Default Gateway, and DNS Server #1.

To deploy the HyperFlex installer OVA, complete the following steps: 1. Open the vSphere (thick) client, connect and log in to a vCenter server where the installer OVA will be deployed. Click File >Deploy OVF Template.

Click Browse and locate the Cisco-HX-Data-Platform-Installer-v2.0.1a-20704.ova file, click on the file and click Open. Click Next twice. Modify the name of the virtual machine to be created if desired, and click on a folder location to place the virtual machine, then click Next.

Click a specific host or cluster to locate the virtual machine and click Next. Click the Thin Provision option and click Next. Modify the network port group selection from the drop down list in the Destination Networks column, choosing the network the installer VM will communicate on, and click Next. If DHCP is to be used for the installer VM, leave the fields blank and click Next. If static address settings are to be used, fill in the fields for the installer VM hostname, default gateway, DNS server, IP address, and subnet mask, then click Next. Check the box to power on after deployment, and click Finish.

The installer VM will take a few minutes to deploy; once it has deployed and the virtual machine has started, proceed to the next step.

HyperFlex Installer Web Page

The HyperFlex installer is accessed via a webpage using your local computer and a web browser. If the HyperFlex installer was deployed with a static IP address, then the IP address of the website is already known. If DHCP was used, open the local console of the installer VM.

In the console you will see an interface similar to the example below, showing the IP address that was leased:

Figure 53 HyperFlex Installer VM IP Address

To access the HyperFlex installer webpage, complete the following steps: 1. Open a web browser on the local computer and navigate to the IP address of the installer VM, either the static address assigned during deployment or the address shown on the installer VM console. 2. Click accept or continue to bypass any SSL certificate errors. 3. At the security prompt, enter the username: root

4. At the security prompt, enter the password: Cisco123 5. Verify the version of the installer in the lower right-hand corner of the Welcome page is the correct version, and click Continue. 6. Read and accept the End User Licensing Agreement, and click Login.

The HX installer will guide you through the process of setting up your cluster. It will configure the Cisco UCS profiles and settings, as well as assign IP addresses to the HX servers, which come from the factory with the ESXi hypervisor software preinstalled. To configure the Cisco UCS settings, policies, and templates for HyperFlex, complete the following steps: 1.

On the HyperFlex installer webpage select a Workflow of “Cluster Creation”. Enter the Cisco UCS Manager and vCenter DNS hostnames or IP addresses, the admin usernames, and the passwords. The default Hypervisor credential, which comes installed from the factory, is the username root with the password “Cisco123”, and is already entered in the installer. You can select the option to see the passwords in clear text. Optionally, you can import a .json file that has the configuration information, except for the appropriate passwords. Click Continue.

Select the Unassociated HX server models that are to be created in the HX cluster and click Continue. If the Fabric Interconnect server ports were not enabled in the earlier step, you have the option to enable them here to begin the discovery process by clicking the Configure Server Ports link. Important: When deploying a second or any additional clusters, you must put them into a different sub-org, and you should also create new VLAN names for the additional clusters. Even if reusing the same VLAN ID, it is prudent to create a new VLAN name to avoid conflicts. For example, for a second cluster: change the Cluster Name, Org Name and VLAN names so as to not overwrite the original cluster information. You could use naming such as HyperflexCluster-2, HXorg-2, vlan-hx-inband-mgmt-2, vlan-storage-data-2, vlan-HyperflexCluster-2, vlan-hx-cluster-2.

Important: (Optional) If you need to add extra iSCSI vNICs and/or FC vHBAs for external storage setup, enable iSCSI Storage and/or FC Storage here using the procedure described in the following section. Click Continue.

Enter the subnet mask, gateway, DNS, and IP addresses for the Hypervisors (ESXi hosts) as well as the host names. The IP addresses will be assigned via Cisco UCS to the ESXi systems. Click Continue.

Assign the additional IP addresses for the Management and Data networks as well as the cluster IP addresses, then click Continue. A default gateway is not required for the data network, as those interfaces normally will not communicate with any other hosts or networks. Enter the HX Cluster Name and the Replication Factor setting (RF=3 was the default and recommended value in HXDP 1.8 and previous releases; no value is pre-selected in HXDP 2.0 for HX All Flash systems during installation, although RF=3 is recommended in this design guide). 11. Enter the Password that will be assigned to the Controller VMs. Enter the Datacenter Name from vCenter, and the vCenter Cluster Name.

Enter the System Services of DNS, NTP, and Time Zone. Enable Auto Support and enter the Auto Support Settings, then scroll down. Leave the defaults for Advanced Networking. Validate that the VDI option is not checked.

Jumbo Frames should be enabled. It is recommended to select Clean up disk partitions. Validation of the configuration will now start.

If there are warnings, you can review them and click “Skip Validation” if the warnings are acceptable. If there are no warnings, the validation will automatically continue on to the configuration process. The HX installer will now proceed to complete the deployment and perform all the steps listed at the top of the screen, along with their status. This process can take approximately 1 hour or more. The process can also be monitored in the Cisco UCS Manager GUI and in vCenter as the profiles and cluster are created. You can review the summary screen after the install completes by selecting Summary on the top right of the window.

You can also review the details of the installation process after the install completes by selecting Progress on the top left of the window. After the install completes, you may export the cluster configuration by clicking on the down-arrow icon at the top of the window. Click OK to save the configuration to a .json file.

This file can be imported to save time if you need to rebuild the same cluster in the future, and be kept as a record of the configuration options and settings used during the installation. To automate the post installation procedures and verify the HyperFlex Installer has properly configured Cisco UCS Manager, a script has been provided on the HyperFlex Installer OVA. These steps can also be performed manually in vCenter if preferred.

The following procedure will use the script. 1. SSH to the installer OVA IP as root with the password Cisco123, for example ssh root@10.29.133.76. 2. From the CLI of the installer VM, change to the /usr/share/springpath/storfs-misc/hx-scripts directory and run the script named post_install.py. To check the other scripts provided, perform an ls command in that directory. An example of this command sequence is shown below.
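A minimal sketch of reaching and launching the script, assuming the example installer IP address used elsewhere in this document; the exact invocation method is an assumption and may vary by installer release.

ssh root@10.29.133.76                     # installer OVA example IP; password Cisco123
cd /usr/share/springpath/storfs-misc/hx-scripts
ls                                        # lists post_install.py and the other provided scripts
python post_install.py                    # assumption: the script may also be directly executable as ./post_install.py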

3. The installer retains the information from the previous HX installation, and the script will use it. Enter the HX Controller Password for the HX cluster (use the one entered during the HX cluster installation), as well as the vCenter user name and password. You can also enter the vSphere license, or complete this task later. 4. Enter “y” to enable HA/DRS. 5. Enter “y” to disable the SSH warning.

You can configure logging to a local datastore and create that datastore by entering “y” for configure ESXi logging and entering a datastore name and datastore size. Add the vmotion VMkernel interfaces to each node by inputting “y”. Input the netmask, the vmotion VLAN ID, and IP addresses for each of the hosts as prompted. Vmotion VMkernel Port is created for each host in vCenter: 8. The main installer will have already created a vm-network port group and assigned the default VM network VLAN input from the cluster install. Enter “n” to skip this step and use the default group that was created. If desired, additional VM network port groups with assigned VLANs in the vm-networks vSwitch can be created.

This option will also create the corresponding VLANs in Cisco UCS and assign the VLAN to the vm-network vNIC-Template. This script can be rerun at a later time as well to create additional VM networks and Cisco UCS VLANs. When this option is used to add more VM networks, the VLANs are created in Cisco UCS, the VLANs are assigned to the vNICs, and the corresponding port groups are created in vCenter.

9. Input “y” for the Enable NTP option. 10. Enter “yes” to test the auto support email function and enter a valid email address. You will immediately receive a notification confirming receipt of the test email. 11. The post install script will now check the networking configuration and jumbo frames. Enter the ESXi and Cisco UCS Manager passwords. The script will complete and provide a summary screen. Validate that there are no errors and the cluster is healthy.

Optionally, it is recommended to enable a syslog global log host. Select the ESXi server, the Configuration tab, Advanced Settings, Syslog, and enter the value of your syslog server for Syslog.global.logHost. You could use the vCenter server as your log host in this case. Repeat this for each HX host. The same setting can also be applied from the ESXi command line, as sketched below.
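As a minimal command-line sketch of the same setting, the following esxcli commands can be run in each ESXi host's shell; the syslog target address is a hypothetical placeholder and should be replaced with your vCenter server or syslog collector.

# Point the ESXi syslog service at a remote log host (placeholder address) and reload it
esxcli system syslog config set --loghost='udp://10.29.133.120:514'
esxcli system syslog reload

# Confirm the setting
esxcli system syslog config get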

Create a datastore for virtual machines. This task can be completed by using the vSphere plugin, or by using the web management interface. Using the web interface, go to the HX Controller cluster management IP address in a browser, select Datastores in the left pane, and click Create Datastore.

In the popup, enter the name and size, and click Create. To use the vSphere web client, select vCenter Inventory Lists, and select the Cisco HyperFlex System, Cisco HX Data Platform, cluster-name, Manage tab and the plus (+) icon to create a datastore. Create a test virtual machine on your HX datastore in order to take a snapshot and perform a cloning operation. Take a snapshot of the new virtual machine via the vSphere Web Client prior to powering it on. This can be scheduled as well. In the vSphere Web Client, right-click the VM, and select Cisco HX Data Platform, and Snapshot Now.

Input the snapshot name and click OK. Create a few clones of our virtual machine. Right click the VM, and select Cisco HX Data Platform, then ReadyClones. Input the Number of clones and Prefix, then click OK to start the operation. The clones will be created in seconds. Your cluster is now ready.

You may run any other preproduction tests that you wish to run at this point. Optionally, you can change the HX controller password via the “stcli security password set” command. It is also recommended that you log in to the ESXi hosts and change the root passwords for enhanced security.

Customized Installation of Adding vHBAs or iSCSI vNICs

From HXDP version 1.8 onward, customers have the flexibility to leverage other storage infrastructure by mapping other storage to HX systems. As an example, one can map Fibre Channel LUNs on an IBM VersaStack or a NetApp FlexPod system, and then easily do Storage vMotion in the system to and from other storage types.

Figure 54 External Storage in HX

In order to connect to other storage systems such as FlexPod via iSCSI or FC SAN, it is recommended that vHBAs or additional vNICs be added prior to creating the HX cluster. If these are added post cluster creation, the PCI enumeration can change, causing passthrough device failure in ESXi and the HX cluster to go offline.

The basic workflow for this procedure is outlined as follows. This assumes that ESXi is installed on the HX nodes prior to this step. In this section only the addition of FC vHBAs or iSCSI vNICs to HX hosts is documented (a more detailed, in-depth procedure for adding other storage to the HX cluster is covered separately). There are two basic methods covered below.

The first method is adding the adapters prior to creating the cluster, and the second method is adding the adapters after creating the cluster. Although in this CVD we use iSCSI as the example to connect HX to external IP storage devices, the vNICs created by this procedure could also be used for connecting to NFS storage devices. Adding or removing vNICs or vHBAs on the ESXi host will cause PCI address shifting upon reboot. This can cause the PCI passthrough configuration on the HX node to no longer be valid, and the HX controller VM will not boot. It is recommended that you do not make such hardware changes after the HX cluster is created. Instead, it is a better option to add vHBAs or iSCSI vNICs, if necessary, while the cluster is being created. From HXDP 2.0 onward the HX installer supports this configuration as a part of the cluster creation.

An overview of this procedure is as follows: 1. Open the HyperFlex Installer from a web browser, and log in as the root user. On the HyperFlex Installer webpage select a Workflow of Cluster Creation to start a fresh cluster installation. Continue with the appropriate inputs until you get to the page for Cisco UCS Manager configuration. Click the > arrow to expand the iSCSI Storage configuration. Check the box Enable iSCSI Storage if you want to create additional vNICs to connect to the external iSCSI storage systems.

Enter the VLAN name and ID for the Fabric A and B dual connections. Click the > arrow to expand the FC Storage configuration. Check the box Enable FC Storage if you want to create Fibre Channel vHBAs to connect to the external FC or FCoE storage systems. Enter the WWxN Pool prefix (for example: 20:00:00:25:B5:ED), and the VSAN names and IDs for the Fabric A and B dual connections. Continue and complete the inputs for all required configuration, start the cluster creation, and wait for its completion.

Note that you can choose to enable only iSCSI, only FC, or both, according to your own needs. After the install is completed, dual vHBAs and/or dual iSCSI vNICs are created for the Service Template hx-nodes. For each HX node, dual vHBAs and/or dual iSCSI vNICs are created as well from the cluster creation. The additional vNICs are under the vNICs directory but not under the iSCSI vNICs directory (as those iSCSI vNICs are specifically used for iSCSI boot adapters). In vCenter, a standard vSwitch vswitch-hx-iscsi is created on each HX ESXi host. However, the further configuration to bind the iSCSI VMkernel ports needs to be done manually for the storage connection, as described later in this document.

Should you decide to add additional storage such as a FlexPod after you have already installed your cluster, the following procedure can be used. Adding vHBAs or certain vNICs can cause PCI re-enumeration upon an ESXi reboot. This can create a passthrough device failure, and the HX controllers will not boot.

It is recommended you do not reboot multiple nodes at once after making such hardware changes. Validate the health state of each system before rebooting or performing the procedure on subsequent nodes. In this example, we will be adding vHBAs after a cluster is created via the Cisco UCS service profile template. We will reboot one ESXi HX node at a time in a rolling upgrade fashion so there will be no outage. Adding additional virtual adapter interfaces to the VIC card on the HX nodes might cause disruption on the system.

Read the following procedure carefully before doing it. To add vHBAs or iSCSI vNICs, complete the following steps: 1.

Example of hardware change: Add vHBAs to the Service Profile Templates for HX (refer to the Cisco UCS documentation for your storage device, such as a FlexPod CVD, for configuring the vHBAs). After you have completed adding vHBAs to the templates, the servers will require a reboot. Do NOT reboot the HX servers at this time. Using the vSphere Web Client, place one of the HX ESXi hosts in HX-Maintenance Mode. After the host has entered Maintenance Mode, you can reboot the associated node in Cisco UCS Manager to complete the addition of the new hardware.

In vSphere, Go to the Configuration tab of the ESXi server. Go to Hardware, and select Advanced Settings. In our example, there will be no devices configured for Passthrough since the PCI order has been changed.

Select Configure Passthrough. Select the LSI Logic card to be Passthrough and click OK. Reboot the ESXi host again. When the ESXi host has rebooted, you can validate that your Passthrough device is again available in the advanced setting screen.

Exit HX Maintenance Mode. Reconfigure the HX controller, by right-clicking the system and selecting Edit Settings. In the Hardware tab, the PCI device is shown as unavailable. Highlight the device and click Remove. Click Add, select PCI device, and click Next. The PCI device should be highlighted in the dropdown.

Check the health status of the cluster to validate that the cluster is healthy before proceeding to the next node. For example, run these commands against the cluster IP for the HX Controllers: “stcli cluster refresh”, then “stcli cluster info | grep -i health”. 16. Wait and check again, validating that the cluster is healthy. 17. Repeat the process for each node in the cluster as necessary. An example of this health check is shown below.
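A minimal sketch of that health check from an SSH session to the cluster management IP; the IP address is a hypothetical placeholder and the exact output text can vary between HXDP releases.

ssh root@10.29.133.200                 # placeholder HX cluster management IP
stcli cluster refresh                  # refresh the cluster state
stcli cluster info | grep -i health    # confirm a healthy state before moving to the next node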

There are various instances where you might want to redeploy or perform an install using the HX customized install method. Since the systems come from the factory with ESXi pre-installed, for a re-install we will need to install ESXi on the nodes. The HyperFlex system requires a Cisco custom ESXi ISO file to be used, which has Cisco hardware specific drivers pre-installed to ease the installation process, as detailed earlier in this document. The Cisco custom ESXi ISO file is available to download at cisco.com.

The HX custom ISO is based on the Cisco custom ESXi 6.0 U2 Patch 4 ISO release with the filename HX-Vmware-ESXi-60U2-4600944-Cisco-Custom-6.0.2.4.iso, and is available on the Cisco website. The kickstart New HyperFlex Deployment process will automatically perform the following tasks with no user interaction required: Accept the End User License Agreement. Configure the root password to: Cisco123. Install ESXi to the internal mirrored Cisco FlexFlash SD cards. Set the default management network to use vmnic0, and obtain an IP address via DHCP. Enable SSH access to the ESXi host.

Enable the ESXi shell. Enable serial port com1 console access to facilitate Serial over LAN access to the host. Configure the ESXi configuration to always use the current hardware MAC address of the network interfaces, even if they change.

Rename the default vSwitch to vswitch-hx-inband-mgmt. To prepare the custom kickstart ESXi installation ISO file, complete the following steps: 1.

Copy the base ISO file, HX-Vmware-ESXi-60U2-4600944-Cisco-Custom-6.0.2.4.iso, to the HyperFlex installer VM using SCP, SFTP or any available method. Place the file in the /var/www/localhost/images/ folder. 2. Verify the newly downloaded ISO file exists in the proper webpage location, by opening a web browser and navigating to the IP address of the HyperFlex installer VM, followed by /images. An example of the file copy is shown below.
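A minimal sketch of the copy step using SCP from an administrative workstation; the installer IP shown is the example address used elsewhere in this document.

# Copy the custom ESXi ISO into the installer VM's web-served images folder
scp HX-Vmware-ESXi-60U2-4600944-Cisco-Custom-6.0.2.4.iso root@10.29.133.76:/var/www/localhost/images/

# Confirm the file landed in the expected location
ssh root@10.29.133.76 ls -lh /var/www/localhost/images/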

3. The new file, named HX-Vmware-ESXi-60U2-4600944-Cisco-Custom-6.0.2.4.iso, will be listed in the webpage file listing. A high level example of an HX rebuild procedure would be: 1. Clean up the existing environment by: deleting the existing HX virtual machines and HX datastores, removing the HX cluster in vCenter, removing the vCenter MOB entries for the HX extension, and deleting the HX sub-organization and HX VLANs in Cisco UCS Manager. 2. Run the HX installer, and use the customized version of the installation workflow by selecting the “I know what I am doing” link. 3. Use the customized workflow and only choose the “Run UCS Manager Configuration” option, then click Continue. When the Cisco UCS Manager configuration is complete, the HX hosts are associated with the HX service profiles and powered on.

4. Now perform a fresh ESXi installation using the custom ISO image, following the steps in the section Cisco UCS vMedia and Boot Policies. 5. When the fresh ESXi install is finished, use the customized workflow and select the remaining 3 options, ESXi Configuration, Deploy HX Software, and Create HX Cluster, to continue and complete the HyperFlex cluster installation. More information on the various installation methods can be found in the Cisco HyperFlex documentation. Using a Cisco UCS vMedia policy, the mounting of the custom kickstart ESXi installation ISO file can be automated. The existing vMedia policy, named “HyperFlex” must be modified to mount this file, and the boot policy must be modified temporarily to boot from the remotely mounted vMedia file.

Once these two tasks are completed, the servers can be rebooted, and they will automatically boot from the remotely mounted vMedia file, installing and configuring ESXi on the servers. WARNING: While vMedia boot is very efficient for installing multiple servers, using vMedia boot policies below can automatically re-install ESXi on any existing server that is rebooted with this policy. Extreme caution is recommended. This is a destructive procedure to the hosts as it will overwrite the install location should other systems be rebooted accidentally. This procedure needs to be carefully monitored and the boot policy should be changed back to original settings immediately after intended servers are rebooted. It is recommended only for new installs/rebuilds.

Alternatively, you can manually select the boot device using the KVM console on startup instead of making the vMedia device the default boot selection. To configure the Cisco UCS vMedia and Boot Policies, complete the following steps: 1. In Cisco UCS Manager, click the Servers tab in the navigation pane. Expand Servers >Policies >root >Sub-Organizations >hx-cluster >vMedia Policies, and click vMedia Policy HyperFlex.

In the configuration pane, click Create vMedia Mount. Enter a name for the mount, for example: ESXi. Select the CDD option. Select HTTP as the protocol. Enter the IP address of the HyperFlex installer VM, for example: 10.29.133.76. 8. Select None as the Image Variable Name.

Enter HX-Vmware-ESXi-60U2-4600944-Cisco-Custom-6.0.2.4.iso as the Remote File. Enter /images/ as the Remote Path. Select Servers >Service Profile Templates >root >Sub-Organizations >hx-cluster >Service Template hx-nodes. In the configuration pane, click the vMedia Policy tab. Click Modify vMedia Policy. Choose the HyperFlex vMedia Policy from the drop down selection, and click OK twice. For Compute-Only nodes (if necessary), select Servers >Service Profile Templates >root >Sub-Organizations >hx-cluster >Service Template compute-nodes.

Repeat Steps 13 to 15 to modify the vMedia Policy. Select Servers >Policies >root >Sub-Organizations >hx-cluster >Boot Policy HyperFlex.

In the navigation pane, expand the section titled CIMC Mounted vMedia. Click the entry labeled Add CIMC Mounted CD/DVD. Select the CIMC Mounted CD/DVD entry in the Boot Order list, and click the Move Up button until the CIMC Mounted CD/DVD entry is listed first. Click Save Changes, and click OK. To begin the installation after modifying the vMedia policy, Boot policy and service profile template, the servers need to be rebooted. To monitor the progress of one or more servers, it is advisable to open a remote KVM console session to watch the installation.

To open the KVM console and reboot the servers, complete the following steps: 1. In Cisco UCS Manager, click the Equipment tab in the navigation pane. Expand Equipment >Rack-Mounts >Servers >Server 1.

In the configuration pane, click KVM Console. Click continue to any security alerts that appear. The remote KVM Console window will appear shortly and show the server’s local console output. Repeat Steps 2-4 for any additional servers whose console you wish to monitor during the installation. In Cisco UCS Manager, click the Equipment tab in the navigation pane. Expand Equipment >Rack Mounts >Servers. In the configuration pane, click the first server to be rebooted, then shift+click the last server to be rebooted, selecting them all.

Right-click the mouse and click Reset. Select Power Cycle and click OK.

The servers you are monitoring in the KVM console windows will now immediately reboot, and boot from the remote vMedia mount. In the Cisco customized installation window, select New HyperFlex Deployment and press Enter.

Enter “yes” in all lowercase to confirm and install ESXi. There may be error messages seen on screen, but they can be safely ignored.

(Optional) When installing a Compute-Only node to media other than the SD card, select “Fully Interactive Install” instead and enter “yes” to confirm the install. Once all the servers have booted from the remote vMedia file and begun their installation process, the changes to the boot policy need to be quickly undone, to prevent the servers from going into a boot loop, constantly booting from the installation ISO file. To revert the boot policy settings, complete the following steps: 1. Select Servers >Policies >root >Sub-Organizations >hx-cluster >Boot Policy HyperFlex. Select the CIMC Mounted CD/DVD entry in the Boot Order list, and click Delete. Click Save Changes and click OK.

The changes made to the vMedia policy and service profile template may also be undone once the ESXi installations have all completed fully, or they may be left in place for future installation work. The process to expand a HyperFlex cluster can be used to grow an existing HyperFlex cluster with additional converged storage nodes, or to expand an existing cluster with additional compute-only nodes to create a hybrid cluster. With converged nodes, you are able to use the standard workflow wizard for cluster expansion. The process for adding compute-only nodes differs slightly.

Expansion with Compute-Only Nodes

The HX installer has a wizard for Cluster Expansion with Converged Nodes, but not for Compute-only Nodes. The manual procedure to expand the HX cluster with Compute-only nodes is therefore necessary and is covered in this section.

HyperFlex hybrid clusters, which are composed of both converged nodes and compute-only nodes, are built by first applying Cisco UCS profiles to the computing servers (these can be Cisco UCS C-Series standalone servers or Cisco UCS B-Series blade servers, as mentioned above), installing ESXi, and then expanding the cluster with compute-only nodes. There are some specific differences in the processes, which are outlined below. To expand an existing cluster, creating a hybrid HyperFlex cluster, complete the following steps: Configure Cisco UCS 1. In Cisco UCS Manager, click the Servers tab in the navigation pane. Expand Servers >Service Profile Templates >root >Sub-Organizations >hx-cluster. Right-click Service Profile Template compute-nodes and click Create Service Profiles from Template.

Enter the naming prefix of the service profiles that will be created from the template. Enter the starting number of the service profiles being created and the number of service profiles to be created. Click OK twice.

Expand Servers >Service Profiles >root >Sub-Organizations, and select the proper HX Sub-Organization. Click the first service profile created for the additional compute-only nodes (a Cisco UCS B200 M4 server in this example), and right-click. Click Change Service Profile Association. In the Server Assignment dropdown list, change the selection to Select Existing Server. Select Available Servers. In the list of available servers, choose the server to assign the service profile to and click OK.

Click Yes to accept that the server will reboot to apply the service profile. Repeat steps 7-14 for each additional compute-only node that will be added to the cluster.

Configure ESXi Hypervisors 1. It is assumed that the ESXi Hypervisor has been pre-installed. If not, install the ESXi hypervisor either manually using the HX custom ISO or through the vMedia policy method provided earlier. Login to the KVM console of the server through Cisco UCS Manager.

Press F2 at the console login prompt. Log in to ESXi with the root password of Cisco123. Select Configure Management Network.

Select VLAN and configure the VLAN ID to your hx-inband-management VLAN, then press Enter to accept (the VLAN ID is required for HXDP 1.8.1 or later version code). Select IPv4 Configuration. Set Static IP and input the IP address, Netmask and Default Gateway, then press Enter to accept. Select the DNS configuration, enter the DNS information, and press Enter to accept. Press Esc to leave the Network Configuration page, making sure to enter Y to apply the changes. Repeat steps 1-10 for each additional ESXi host that will be added to the HX cluster. The same settings can alternatively be applied from the ESXi shell, as sketched below.
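For administrators who prefer the command line, the same management network settings can be applied from the ESXi shell instead of the DCUI. This is a hedged sketch: the VLAN ID matches the example hx-inband-mgmt VLAN used in this document, while the IP address, netmask, gateway, and DNS server values are hypothetical placeholders.

# Tag the Management Network port group with the inband management VLAN (example VLAN 3031)
esxcli network vswitch standard portgroup set -p "Management Network" -v 3031

# Assign a static IPv4 address to the management vmkernel interface (placeholder values)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.29.133.101 -N 255.255.255.0

# Set the default gateway and a DNS server (placeholder values)
esxcfg-route 10.29.133.1
esxcli network ip dns server add --server 10.29.133.110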

Expand the HX Cluster 1. Open the HyperFlex installer webpage. Select the customized installation workflow by selecting the “I know what I am doing” link. Select both the Deploy HX Software and Expand HX Cluster options, and click Continue. Enter the vCenter and Hypervisor Credentials and click Continue.

Validate the cluster IP is detected and correct, or manually enter it, and click Continue. Click Add Compute Server and add the IP address for the Hypervisor Management network, and add the IP address for the Hypervisor Data network. Input the Controller VM password. For Cisco HyperFlex clusters built with compute-only nodes, it is required that the number of compute-only nodes does not exceed the number of HX-series converged nodes. The HX installer has a wizard for Cluster Expansion with Converged Nodes. The procedure is similar to the initial setup.

On the HyperFlex installer webpage select a Workflow of “Cluster Expansion with Converged Nodes”. On the Credentials page, enter the Cisco UCS Manager and vCenter DNS hostnames or IP addresses, the admin usernames, and the passwords. The default Hypervisor credential, which comes from the factory as root with the password Cisco123, is already entered in the installer. You can select the option to see the passwords in clear text. Optionally, you can import a .json file that has the configuration information. Click Continue.

Select the HX cluster to expand and click Continue. Select the unassociated HX servers you want to expand to the HX cluster. Click Continue. On the Cisco UCS Manager Configuration page, enter the VLAN settings, Mac Pool Prefix, UCS ext-mgmt IP Pool for CIMC, iSCSI Storage setting, FC Storage setting, and Cisco UCS firmware version and sub-organization name. Make sure all the inputs here are consistent with the initial cluster setup. Click Continue.

Enter the subnet mask, gateway, DNS, and IP addresses for the Hypervisors (ESXi hosts) as well as host names. The IP’s will be assigned through Cisco UCS Manager to ESXi systems. Click Continue. Enter the additional IP addresses for the Management and Data networks of the storage controllers. Enter the Password that will be assigned to the Controller VMs.

Enable Jumbo Frames and select Clean up disk partitions. (Optional) At this step you can manually add more servers for expansion if these servers are hypervisor-ready, by clicking on Add Compute Server or Add Converged Server and then entering the IP addresses for the storage controller management and data networks. Validation of the configuration will now start. If there are warnings, you can review and click “Skip Validation” if the warnings are acceptable (e.g.

you might get a warning from the Cisco UCS Manager validation that the guest VLAN is already assigned). If there are no warnings, the validation will automatically continue on to the configuration process. The HX installer will now proceed to complete the deployment and perform all the steps listed at the top of the screen along with their status. You can review the summary screen after the install completes by selecting Summary on the top right of the window. After the install has completed, the Converged Node is added as a part of the cluster, but it still requires some post installation steps, either performed using the post_install.py script or completed manually, to be consistent with the configuration of the existing nodes.

Management

The Cisco HyperFlex vCenter Web Client Plugin is installed by the HyperFlex installer to the specified vCenter server or vCenter appliance.

The plugin is accessed as part of the vCenter Web Client interface, and is the primary tool used to monitor and configure the HyperFlex cluster. To manage HyperFlex cluster using the plugin, complete the following steps: 1. Open the vCenter Web Client, and login. In the home pane, from the home screen click vCenter Inventory Lists. In the Navigator pane, click Cisco HX Data Platform.

In the Navigator pane, choose the HyperFlex cluster you want to manage and click the name.

Summary

From the Web Client Plugin Summary screen, several elements are presented: Overall cluster usable capacity, used capacity, free capacity, datastore capacity provisioned, and the amount of datastore capacity provisioned beyond the actual cluster capacity. Deduplication and compression savings percentages calculated against the data stored in the cluster. The cluster operational status, the health state, and the number of node failures that can occur before the cluster goes into read-only or offline mode. A snapshot of performance over the previous hour, showing IOPS, throughput, and latencies. From the Web Client Plugin Monitor tab, several elements are presented: Clicking the Performance button gives a larger view of the performance charts. If a full webpage screen view is desired, click the Preview Interactive Performance charts hyperlink.

Then enter the username (root) and the password for the HX controller VM to continue. Clicking the Events button displays a HyperFlex event log, which can be used to diagnose errors and view system activity events. From the Web Client Plugin Manage tab, several elements are presented: Clicking the Cluster button gives an inventory of the HyperFlex cluster and the physical assets of the cluster hardware. Clicking the Datastores button allows datastores to be created, edited, deleted, mounted and unmounted, along with space summaries and performance snapshots of that datastore. Much of the same status information can also be retrieved from the controller VM command line, as sketched below.
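As a minimal command-line sketch, the cluster health and capacity reported by the plugin can also be queried over SSH to the cluster management IP; the address shown is a hypothetical placeholder.

ssh root@10.29.133.200            # placeholder HX cluster management IP; log in as root
stcli cluster info                # reports cluster state, replication factor, node membership and capacity
stcli cluster info | grep -i health   # filter for the health state fields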

In this section, various best practices and guidelines are given for management and ongoing use of the Cisco HyperFlex system. These guidelines and recommendations apply only to the software versions upon which this document is based, listed previously in this document. For the best possible performance and functionality of the virtual machines that will be created using the HyperFlex ReadyClone feature, the following guidelines for preparation of the base VMs to be cloned should be followed: Base VMs must be stored in a HyperFlex datastore. All virtual disks of the base VM must be stored in the same HyperFlex datastore.

Base VMs can only have HyperFlex native snapshots; no VMware redo-log based snapshots can be present. For very high IO workloads with many clone VMs leveraging the same base image, it might be necessary to use multiple copies of the same base image for groups of clones. Doing so prevents referencing the same blocks across all clones and could yield an increase in performance. This step is typically not required for most use cases and workload types.

Figure 55 HyperFlex Management - ReadyClones

HyperFlex native snapshots are high performance snapshots that are space-efficient, crash-consistent, and application consistent, taken by the HyperFlex Distributed Filesystem, rather than using VMware redo-log based snapshots. For the best possible performance and functionality of HyperFlex native snapshots, the following guidelines should be followed: Make sure that the first snapshot taken of a guest VM is a HyperFlex native snapshot, by using the “Cisco HX Data Platform” menu item in the vSphere Web Client, and choosing Snapshot Now or Schedule Snapshot. Failure to do so reverts to VMware redo-log based snapshots. (Figure 57) A Sentinel snapshot becomes a base snapshot that all future snapshots are added to, and prevents the VM from reverting to VMware redo-log based snapshots.

Failure to do so can cause performance degradation when taking snapshots later, while the VM is performing large amounts of storage IO. Additional snapshots can be taken via the “Cisco HX Data Platform” menu, or the standard vSphere client snapshot menu. As long as the initial snapshot was a HyperFlex native snapshot, each additional snapshot is also considered to be a HyperFlex native snapshot. Do not delete the Sentinel snapshot unless you are deleting all the snapshots entirely. Do not revert the VM to the Sentinel snapshot. (Figure 58) If large numbers of scheduled snapshots need to be taken, distribute the time of the snapshots taken by placing the VMs into multiple folders or resource pools. For example, schedule two resource groups, each with several VMs, to take snapshots separated by 15 minute intervals in the scheduler window.

Snapshots will be processed in batches of 8 at a time, until the scheduled task is completed. (Figure 59) The Cisco HyperFlex Distributed Filesystem can create multiple datastores for storage of virtual machines. While there can be multiple datastores for logical separation, all of the files are located within a single distributed filesystem. As such, performing storage vMotions of virtual machine disk files has little value in the HyperFlex system.

Furthermore, storage vMotions create additional filesystem consumption and generate additional unnecessary metadata within the filesystem, which must later be cleaned up via the filesystem’s internal cleaner process. It is recommended to not perform storage vMotions of the guest VMs between datastores within the same HyperFlex cluster. Storage vMotions between different HyperFlex clusters, or between HyperFlex and non-HyperFlex datastores are permitted. HyperFlex clusters can create multiple datastores for logical separation of virtual machine storage, yet the files are all stored in the same underlying distributed filesystem. The only difference between one datastore and another are their names and their configured sizes. Due to this, there is no compelling reason for a virtual machine’s virtual disk files to be stored on a particular datastore versus another.

All of the virtual disks that make up a single virtual machine must be placed in the same datastore. Spreading the virtual disks across multiple datastores provides no benefit, and can cause ReadyClone and Snapshot errors. Within the vCenter Web Client, a specific menu entry for “HX Maintenance Mode” has been installed by the HyperFlex plugin. This option directs the storage platform controller on the node to shut down gracefully, redistributing storage IO to the other nodes with minimal impact. The standard Maintenance Mode menu in the vSphere Web Client, or the vSphere (thick) Client, can also be used, but a graceful failover of storage IO and shutdown of the controller VM is not guaranteed. This section provides a list of items that should be reviewed after the HyperFlex system has been deployed and configured. The goal of this section is to verify the configuration and functionality of the solution, and ensure that the configuration supports core availability requirements.

The following tests are critical to functionality of the solution, and should be verified before deploying for production: Verify the expected number of converged storage nodes and compute-only nodes are members of the HyperFlex cluster in the vSphere Web Client plugin manage cluster screen. Verify the expected cluster capacity is seen in the vSphere Web Client plugin summary screen. Create a test virtual machine that accesses the HyperFlex datastore and is able to perform read/write operations. Perform a virtual machine migration (vMotion) of the test virtual machine to a different host in the cluster. During the vMotion of the virtual machine, have the test virtual machine perform a continuous ping to its default gateway, to check that network connectivity is maintained during and after the migration. The following redundancy checks can be performed to verify the robustness of the system.

Network traffic, such as a continuous ping from VM to VM, or from vCenter to the ESXi hosts should not show significant failures (one or two ping drops might be observed at times). Also, all of the HyperFlex datastores must remain mounted and accessible from all the hosts at all times. Administratively disable one of the server ports on Fabric Interconnect A which is connected to one of the HyperFlex converged storage hosts. The ESXi virtual switch uplinks for fabric A should now show as failed, and the standby uplinks on fabric B will be in use for the management and vMotion virtual switches. Upon administratively re-enabling the port, the uplinks in use should return to normal. Administratively disable one of the server ports on Fabric Interconnect B which is connected to one of the HyperFlex converged storage hosts.

The ESXi virtual switch uplinks for fabric B should now show as failed, and the standby uplinks on fabric A will be in use for the storage virtual switch. Upon administratively re-enabling the port, the uplinks in use should return to normal. Place a representative load of guest virtual machines on the system. Put one of the ESXi hosts in maintenance mode, using the HyperFlex HX maintenance mode option.

All the VMs running on that host should be migrated via vMotion to other active hosts through vSphere DRS, except for the storage platform controller VM, which will be powered off. No guest VMs should lose any network or storage accessibility during or after the migration. This test assumes that enough RAM is available on the remaining ESXi hosts to accommodate VMs from the host put in maintenance mode.

The HyperFlex cluster will show in an unhealthy state. Reboot the host that is in maintenance mode, and exit it from maintenance mode after the reboot. The storage platform controller will automatically start when the host exits maintenance mode. The HyperFlex cluster will show as healthy after a brief time to restart the services on that node. VSphere DRS should rebalance the VM distribution across the cluster over time.

Many vCenter alerts automatically clear when the fault has been resolved. Once the cluster health is verified, some alerts may need to be manually cleared.

Reboot one of the two Cisco UCS Fabric Interconnects while traffic is being sent and received on the storage datastores and the network. The reboot should not affect the proper operation of storage access and network traffic generated by the VMs. Numerous faults and errors will be noted in Cisco UCS Manager, but all will be cleared after the FI comes back online.

A: Cluster Capacity Calculations

A HyperFlex HX Data Platform cluster capacity is calculated as follows:

(((<capacity disk size in GB> x 10^9) / 1024^3) x <number of capacity disks per node> x <number of HyperFlex nodes> x 0.92) / replication factor

Divide the result by 1024 to get a value in TiB. The replication factor value is 3 if the HX cluster is set to RF=3, and the value is 2 if the HX cluster is set to RF=2. The 0.92 multiplier accounts for an 8% reservation set aside on each disk by the HX Data Platform software for various internal filesystem functions.

Calculation example:
<capacity disk size in GB> = 1200 for 1.2 TB disks
<number of capacity disks per node> = 15 for an HX240c-M4SX model server
<number of HyperFlex nodes> = 8
replication factor = 3

Result: (((1200*10^9)/1024^3)*15*8*0.92)/3 = 41127.2; 41127.2 / 1024 = 40.16 TiB

This arithmetic can be verified from any shell, as shown below.
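A quick way to reproduce the example result is to feed the same expression to the bc calculator, available on the installer VM or most Linux workstations; this is only a convenience check of the arithmetic above.

# Evaluate the capacity formula for 8 nodes with 15 x 1.2 TB disks at RF=3, in TiB
echo 'scale=2; (((1200*10^9)/1024^3)*15*8*0.92)/3/1024' | bc
# Expected output: 40.16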

B: HyperFlex Sizer

The HyperFlex Sizer is a cloud based end-to-end tool that can help customers and partners determine how many Cisco HyperFlex nodes are needed, and how the nodes can be configured, to meet their needs for compute resources, storage capacity and performance requirements in the datacenter. The sizing guidance for the HX system is calculated according to the workload information collected from the users. This cloud application can be accessed from anywhere on the Cisco website (a CCO login is required). There are some improvements to this sizing tool in the HXDP 2.0 release, including: the addition of HyperFlex All-Flash as a sizing option, the addition of Microsoft SQL workload sizing, a Sizing Report download (a PowerPoint file containing details of the sizing input, the proposed configuration, and the utilization of resources for that option), and UI improvements and streamlining of the sizing workflows. The HyperFlex Sizer tool is designed to provide general guidance in evaluating the optimum solution for using selected Cisco products. The tool is not intended to provide business, legal, accounting, tax or professional advice. The tool is not intended as a substitute for your own judgment or for that of your professional advisors. Currently four workload types are supported: VDI, General VSI, Microsoft SQL, and a Raw Capacity calculator, with options for an All-Flash HyperFlex cluster or a Hybrid HyperFlex cluster. You can choose to add compute nodes if necessary.

The following examples demonstrate the scenario where a newly built HX cluster connects to existing third-party storage devices, using either the iSCSI or FC protocol. The new HX system is built with its own Fabric Interconnect switches, which then connect to the upstream Ethernet switches or Fibre Channel switches where the existing storage devices reside.

In a different scenario, where HX nodes are added to an existing FI domain, extreme care is needed, as the HX installer will overwrite any conflicting configuration in the existing Cisco UCS domain. For example, it might require an upgrade of the Cisco UCS firmware or changes to the configuration of the upstream switches as well. All of these changes can be very disruptive to the existing production environment, and they need to be carefully planned and performed within a maintenance window.

It is recommended that you contact the Cisco support team to make this kind of change when you need to connect HX nodes to an existing Fabric Interconnect domain. The HX installer can guide you through the process of setting up your HX cluster, allowing you to leverage existing third-party storage via the iSCSI protocol. It will automatically configure the Cisco UCS profiles and the HX cluster nodes with extra vNICs for iSCSI, and the proper VLANs in the setup. The procedure is described in this CVD. It is assumed that the third-party storage system is already configured per a Cisco Validated Design, and that all networking configuration is completed on the upstream switches as well.

For iSCSI, the VLANs are configured on Fabric A and Fabric B separately, as per those documents. In this example, the topology is that the HX hosts connect to the Cisco UCS Fabric Interconnects, which are connected to the upstream Ethernet switches, for example Cisco Nexus 9000 Series.

The third party storage is connected to the Ethernet switches. To configure the HX system with iSCSI external storage for HyperFlex, complete the following steps: 1. Prior to installation of HX let us identify some iSCSI settings from the existing environment. Make sure that the 3 rd party storage device has two iSCSI VLANs. Record them in the following table (Table 48). This information will be needed for later use in the HX install. Record the IP addresses of the iSCSI controller interfaces for the A and B path targets, and the iSCSI IQN name of the target device in the same table.

Depending on how the redundant storage paths are configured in production, more than two controller interfaces might be recorded here. For example, in the FlexPod setup, when the NetApp storage array connects to the Cisco Nexus 9000 Series switches via vPC, normally four iSCSI IP addresses are assigned, two for each path (A or B).

Table 48 iSCSI Target Information: for Fabric A and Fabric B, record the iSCSI VLAN ID, and for iSCSI Storage Controller #1 and iSCSI Storage Controller #2 record the iSCSI Target Port addresses (IP Address-A and IP Address-B) and the iSCSI IQN Name.

2. Follow the cluster creation procedure described earlier to create the HX cluster with the external storage adapters, using the same VLAN IDs obtained from Step 1 for both Fabric A and B. Upon completion of the HX install, two vNICs for iSCSI will be created for each HX host. Open Cisco UCS Manager, expand LAN >LAN Cloud >Fabric A >VLANs, then Fabric B >VLANs, to verify that the iSCSI VLANs are created and assigned to Fabric A and B. On the LAN tab, expand Policies >root >Sub-Organizations, go to the HX sub-organization just created, and view the iSCSI templates that were created.

In Cisco UCS Manager, Expand Servers >Service Profiles >root >Sub-Organizations, go to the HX sub-organization just created, verify iSCSI vNICs on all HX servers. Click on one vNIC, view the properties of that iSCSI adapter. Make sure Jumbo MTU 9000 is set.

Next set up the networking for the vSphere iSCSI switch. Login to vCenter and select the first node of the HX cluster in the left screen, then on the right screen select the Configuration tab, select Networking in the hardware pane, then scroll to the iSCSI switch. Click Properties. Select VMkernel and click Next.

Enter iSCSI-A for the Network Label and input the iSCSI VLAN ID for Fabric A, then click Next. Add an IP address from the Fabric A subnet and click Next. Click Finish to complete the addition of the iSCSI VMkernel port for Fabric A. Repeat Steps 7-11 to add the VMkernel Port for iSCSI-B. Back on the vSwitch Properties page, highlight the vSwitch and click Edit.

Change MTU for vSwitch to 9000. Select the NIC Teaming tab and make both adapters active by moving the standby adapter up. Highlight the iSCSI-A VMkernel port and click Edit in the vSwitch Properties page.

Change the port MTU to 9000. Select the NIC Teaming tab. Choose the option Override switch failover order, highlight vmnic9 and move it to Unused Adapters, as this adapter is for the iSCSI-B connection. Highlight the iSCSI-B VMkernel port and click Edit. Change the port MTU to 9000.

Select the NIC Teaming tab. Select Override switch failover order, highlight vmnic8 and move it to Unused Adapters, as this adapter is for the iSCSI-A connection. Click Close and review the iSCSI vSwitch. The jumbo frame settings can also be verified from the ESXi command line, as sketched below.
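A minimal command-line sketch for confirming the jumbo frame settings on the new iSCSI vSwitch from the ESXi shell; the vmkernel interface name and target IP are hypothetical placeholders that depend on your host.

# Check the MTU on the iSCSI vSwitch and its vmkernel interfaces
esxcli network vswitch standard list -v vswitch-hx-iscsi
esxcli network ip interface list

# Test a jumbo-sized, non-fragmented ping to an iSCSI target (placeholder vmk and target IP)
vmkping -I vmk3 -d -s 8972 192.168.110.20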

Now we should have two IP addresses used in the vSwitch on separate VLANs. Repeat Steps 6-22 to configure the iSCSI vSwitch for the other HX nodes in the cluster. Add the software iSCSI adapters on the HX hosts. Select the first node of the HX cluster in the left screen, then on the right screen select the Configuration tab, select Storage Adapters in the hardware pane and click Add, then click OK to Add Software iSCSI Adapters, and then click OK again. Scroll down to the newly created software initiator, right-click it, and select Properties.

Click Configure to change the iSCSI IQN name to a customized name. Click the Network Configuration tab, and click Add to bind the VMkernel Adapters to the software iSCSI adapter. Select iSCSI-A and click OK. Click Add again, and select ISCSI-B and click OK.

Copy and record the initiator name and the IP addresses of the iSCSI-A and iSCSI-B VMkernel ports in the following table. Save these values for later use to add to the initiator group created on the storage array.

Table 49 HX Host iSCSI Initiator Information: for Fabric A and Fabric B, record the iSCSI VLAN ID, and for HX Server #1 through HX Server #8 record the HX host IP Address-A, IP Address-B, and the iSCSI Initiator IQN Name.

Click the Dynamic Discovery tab, click Add, and enter the first IP address that you recorded from your storage device's network interfaces. Click Add again until all the interfaces for your storage controllers are entered. You do not need to rescan the host bus adapter at this point, so choose No in the rescan popup.
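From the ESXi Shell, the same dynamic discovery targets can be added as sketched below; the adapter name and the target IP addresses are placeholders for the values recorded from the storage array:

   # Add each storage controller iSCSI target IP as a dynamic discovery (send targets) address
   esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.11.61:3260
   esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.12.61:3260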

Repeat Steps 24-32 to add the software iSCSI adapters on the remaining HX nodes. Next, create an iSCSI initiator group, then create an iSCSI LUN on the storage system and map it to the HX system.

In this example the NetApp OnCommand System Manager GUI is used to create a LUN on a FAS3250 array; consult your storage documentation to accomplish the same tasks on other arrays. It is assumed you have already configured your iSCSI storage as shown in the CVD. Open the NetApp OnCommand System Manager GUI from a web browser, select the pre-configured iSCSI Storage Virtual Machine, expand Storage, then LUNs; from the right pane, click Create. This opens the Create LUN wizard. Click Next on the General Properties page, then enter the LUN Name, Type, and Size. Check "Select an existing volume or qtree for this LUN", browse to and select an existing volume, then click Next. On the Initiators Mapping page, select Add Initiator Group.

In the Create Initiator Group wizard, on the General tab, enter the Name and Operating System, and select iSCSI as the Type for the Initiator Group to be created. On the Initiators tab, click Add, enter the iSCSI IQN name of the first HX host (copied from Table 49), and click OK. Repeat Step 40 until the IQN names of all HX iSCSI adapters are added. Click Create to create the Initiator Group. The Create Initiator Group wizard closes and returns to the Initiators Mapping page of the Create LUN wizard.

Select the HX initiator group that was just created, click Next three times, then click Finish to complete the LUN creation. Check the iSCSI initiators mapped to this LUN. With the LUN mapped, you can rescan the iSCSI software initiator. Log in to vCenter again; on the Configuration tab, right-click the iSCSI software adapter and click Rescan, or click Rescan All at the top of the pane (do this for each host). The iSCSI disk will show up in the details pane. Add the disk to the cluster by selecting Storage in the Hardware pane, then Add Storage in the Configuration tab.
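The rescan can also be run from the ESXi Shell on each host; a minimal sketch, again assuming vmhba64 is the software iSCSI adapter:

   # Rescan the software iSCSI adapter so the newly mapped LUN is discovered
   esxcli storage core adapter rescan --adapter=vmhba64
   # Verify the NetApp LUN is now visible as a storage device
   esxcli storage core device list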

Leave Disk/LUN selected and click Next. The NetApp iSCSI LUN will now be detected. Highlight the disk and click Next, then click Next again. Enter the new datastore name, click Next, then click Finish. A new iSCSI datastore for the HX cluster will be created. You can now create VMs on this new datastore and migrate data between HX and the iSCSI datastore.
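After the datastore is created, you can confirm from the ESXi Shell that it is mounted on each host:

   # List mounted filesystems; the new VMFS datastore should appear with its label and UUID
   esxcli storage filesystem list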

The HX installer can guide you through the process of setting up your HX cluster so that you can leverage existing third-party storage via the Fibre Channel protocol. It automatically configures the Cisco UCS service profiles and the HX cluster nodes with vHBAs, the proper VSANs, and WWPN assignments, simplifying the setup. The procedure is described in this CVD.

It is assumed that the third-party storage system is already configured per a Cisco Validated Design and that all networking configuration, including the Fibre Channel connectivity to the upstream switches, is completed as well. In this example we are using Cisco MDS Fibre Channel switches connected to Cisco UCS Fabric Interconnects that have some unified ports configured in FC End Host mode. The third-party storage is connected to the MDS switches. You must obtain the VSAN IDs used in your current environment for the storage device that is already configured. These can be obtained from the SAN tab in Cisco UCS Manager, or from the upstream Fibre Channel switches. Follow the HX cluster installation procedure using the same VSAN IDs obtained in Step 1 for both Fabric A and B. Upon completion of the HX install, two VSANs and two vHBAs (one for Fabric A and one for Fabric B) are created for each HX host.

Open Cisco UCS Manager, expand SAN > SAN Cloud > Fabric A > VSANs, then Fabric B > VSANs, and verify that the correct VSANs were generated.
3. In Cisco UCS Manager, expand Servers > Service Profiles > root > Sub-Organizations, go to the HX sub-organization you just created, and verify the vHBAs on all HX servers.
4. Record all the WWPNs for each HX node in the following table. They are needed later for the zone configuration on the FC switches.

You can copy the WWPN value by clicking the vHBA in Cisco UCS Manager and then, in the right pane, right-clicking the WWPN to copy it.

Items                       Value
                            Fabric A             Fabric B
HX Server #1      WWPN
                  Alias
HX Server #2      WWPN
                  Alias
HX Server #3      WWPN
                  Alias
HX Server #4      WWPN
                  Alias
HX Server #5      WWPN
                  Alias
HX Server #6      WWPN
                  Alias
HX Server #7      WWPN
                  Alias
HX Server #8      WWPN
                  Alias

Alternatively, you can copy the WWPN value on the ESXi host in vCenter on the Configuration tab > Storage Adapters > Cisco VIC FCoE HBA Driver. The WWPNs for the storage ports must also be recorded; they are needed later for the zone configuration on the FC switches. You can get that information from your storage device's management tool.
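From the ESXi Shell, the host's own FC adapter WWPNs can also be listed directly (the storage port WWPNs still come from the array's management tool); a minimal check:

   # List the FC adapters; the Port Name field is the WWPN to record for zoning
   esxcli storage san fc list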

Items                            Value
                                 Fabric A             Fabric B
Storage Device Port #1   WWPN
                         Alias
Storage Device Port #2   WWPN
                         Alias
Storage Device Port #3   WWPN
                         Alias
Storage Device Port #4   WWPN
                         Alias

7. Log in to the MDS switch for Fabric A (MDS A), verify that all HX vHBAs for Fabric A have logged in to the name server, and verify that they are in the same VSAN as the target storage ports.

About the Authors

Hui Chen, Technical Marketing Engineer, Cisco UCS Data Center Engineering Group, Cisco Systems, Inc.

Hui is a network and storage veteran with over 15 years of experience in Fibre Channel-based storage area networking, LAN/SAN convergence systems, and building end-to-end data center solutions from server to storage. He currently focuses on Cisco's Software Defined Storage (SDS) and Hyperconverged Infrastructure (HCI) solutions. Hui is also a seasoned CCIE.

Jeffery Fultz, Technical Marketing Engineer, Cisco UCS Data Center Engineering Group, Cisco Systems, Inc.

Jeff has over 20 years of experience in both Information Systems and Application Development, dealing in data center management, backup, and virtualization optimization related technologies. Jeff works on the design and testing of a wide variety of enterprise solutions encompassing Cisco, VMware, Hyper-V, SQL, and Microsoft Exchange. Jeff is a Microsoft Certified Systems Engineer with multiple patents filed in the data center solutions space.

Brian Everitt, Technical Marketing Engineer, Cisco UCS Data Center Engineering Group, Cisco Systems, Inc.

Brian is an IT industry veteran with over 18 years of experience deploying server, network, and storage infrastructures for companies around the world. During his tenure at Cisco, he has been a lead Advanced Services Solutions Architect for Microsoft solutions, virtualization, and SAP HANA on Cisco UCS. Currently his focus is on Cisco's portfolio of Software Defined Storage (SDS) and Hyperconverged Infrastructure (HCI) solutions.
