What is a Hyperconverged System?

A hyperconverged system is one that integrates compute, storage, and network resources into a single server.  Virtualization software installed on the system makes this possible.  Several products are available, and each offers some form of “software-defined storage” that allows individual servers to work together in clusters.  The hard drives installed in each server are pooled to create a shared storage resource.
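
To make the pooling idea concrete, here is a minimal sketch of the capacity arithmetic, assuming a simple replication-based scheme; the node sizes and the replication factor are invented example values, not any vendor’s defaults:

  # Illustrative only: back-of-the-envelope math for pooled,
  # software-defined storage.  Node sizes and the replication factor
  # are hypothetical, not any specific product's defaults.

  def usable_pool_tb(node_raw_tb, replication_factor=2):
      """Every node's local disks join one pool; each block is kept
      replication_factor times for resilience, reducing usable space."""
      return sum(node_raw_tb) / replication_factor

  # Three servers, each contributing 10 TB of local disk to the pool.
  print(usable_pool_tb([10, 10, 10]))  # 15.0 TB usable with two copies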

Hyperconvergence addresses some of the challenges inherent in the design of traditional storage array infrastructures.  It helps to understand traditional deployments in order to appreciate the benefits of hyperconverged systems.

In the early 2000s a revolution took off in the mainstream server industry: type 1 and type 2 hypervisors introduced the ability to run many independent virtual server instances on a single physical server.  That revolution evolved into what is now thought of as a traditional architecture.

Hyperconverged systems use software to combine the components of a traditional architecture:

  • Compute (think a single rack server)
  • NAS or SAN shared storage for high availability
  • Storage switches (NFS, iSCSI, FC, or FCoE)
  • Hypervisor software that runs on the compute servers

Reducing interoperability complexity and increasing productivity

A typical traditional deployment might look like:

  • (3) HP rack mount servers with QLogic converged network adapters for storage connections
  • (2) Cisco switches for iSCSI
  • (2) Cisco switches for front end VM network connectivity
  • (1) Dell EMC VNXe with an additional storage shelf
  • VMware vSphere for the hypervisor

While troubleshooting a performance issue in that stack, you may have support tickets open with HP, QLogic, Cisco, Dell EMC, and VMware at the same time.  Removing the separate storage, network, and compute design requirements of a traditional architecture frees staff to focus on other areas rather than serving solely as “the storage admin” or “the virtualization admin”.

Flexibility for scaling hyperconverged servers

There’s a balance that must be met in any design between compute and storage.  There should be enough compute servers for your HA design to work (N+1, for example), and enough capacity in the storage array to meet the growth needs of the organization.  A common issue is deploying more compute power than the N+1 design actually requires; that excess capacity invites provisioning more VMs, which leads to VM sprawl.
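
As a quick sketch of that N+1 arithmetic (host counts and VM loads below are hypothetical), the check is simply whether the surviving hosts can carry every VM after one host fails:

  # Illustrative N+1 check: can the cluster still run every VM if its
  # largest host fails?  All capacity and load figures are hypothetical.

  def n_plus_1_ok(host_capacities_ghz, total_vm_load_ghz):
      worst_case = sum(host_capacities_ghz) - max(host_capacities_ghz)
      return total_vm_load_ghz <= worst_case

  hosts = [40, 40, 40]           # three hosts with 40 GHz of CPU each
  print(n_plus_1_ok(hosts, 75))  # True: 80 GHz remains after a failure
  print(n_plus_1_ok(hosts, 90))  # False: oversubscribed for N+1 HA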

Traditional storage arrays are typically scale-up designs.  A scale-up design uses a single storage processor, or a pair of them, for all the disk I/O in the array.  Some storage arrays will (and should always) use dual controllers for high availability.  Depending on the controller design, active-passive for example, there can be 50% resource waste because only one of the storage processors actively serves I/O at a time.  VM sprawl typically leads to a storage array that’s overprovisioned or overutilized, and “the storage admin” may spend a lot of time trying to resolve “noisy neighbor” issues on the storage processors.  Scale-up storage arrays add capacity by adding shelves of disks, which can create a bottleneck at the storage processors; just because a storage array is rated for 250 disks doesn’t mean the storage processors will operate efficiently with 250 disks installed and running at 85% capacity.
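
To put a number on that active-passive waste, a back-of-the-envelope calculation (the per-controller throughput figure is invented):

  # Invented figures: two controllers are purchased, but in an
  # active-passive design only one serves I/O at a time, and adding
  # disk shelves never raises this fixed controller ceiling.
  iops_per_controller = 50_000
  purchased = 2 * iops_per_controller  # 100,000 IOPS paid for
  usable = 1 * iops_per_controller     #  50,000 IOPS actually served
  print(f"{1 - usable / purchased:.0%} of controller capacity idle")  # 50%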

Traditional storage array vendors have been working to improve these situations by offering all-flash designs.  All-flash arrays offer deduplication and compression benefits, but at added cost.
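
Whether that added cost pays off hinges on the data-reduction ratio; a quick illustrative calculation (the ratio and price are invented):

  # Invented figures: deduplication and compression can offset
  # flash's higher price per raw terabyte.
  raw_tb, cost_per_raw_tb, reduction_ratio = 20, 2_000, 3.0  # assume 3:1
  effective_tb = raw_tb * reduction_ratio
  print(cost_per_raw_tb * raw_tb / effective_tb)  # ~$667 per effective TB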

A benefit of hyperconverged infrastructures is that they are scale-out designs and don’t experience these issues.  Scale-out systems add both compute performance and storage capacity with each system added.  They can be deployed as all-flash or in a hybrid setup that mixes flash for performance with spinning hard drives for capacity.  If you need additional storage in a hyperconverged array, you simply add a server with the required storage capacity, and the overall compute power of the hyperconverged SAN increases with that system as well.  If the hyperconverged SAN already meets the minimum required number of capacity-contributing servers, you have the option of adding a compute-only server to save on costs.
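
A minimal sketch of that scale-out growth, assuming identical hypothetical nodes; each node added raises compute, capacity, and the I/O ceiling together:

  # Hypothetical per-node figures showing why scale-out sidesteps the
  # fixed-controller bottleneck: each node added brings its own CPU,
  # disk capacity, and I/O capability to the cluster.
  NODE = {"cpu_ghz": 40, "raw_tb": 10, "iops": 25_000}

  def cluster_totals(node_count, node=NODE):
      return {k: v * node_count for k, v in node.items()}

  print(cluster_totals(3))  # {'cpu_ghz': 120, 'raw_tb': 30, 'iops': 75000}
  print(cluster_totals(4))  # one more node grows all three dimensions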

Certain hyperconverged software offerings add flexibility by not requiring every server in the SAN to have matching hardware, though performance will be more predictable if everything matches.  If you prefer an all-in-one shopping experience instead, there are offerings that box the hardware and software together from one vendor; they just may not be as flexible in design choices.

Cost of hyperconverged servers

You are probably familiar with painful support renewal cycles if you have ever owned a traditional storage array.  For the uninitiated: the storage array typically will have grown to meet business needs over the past three or five years, which can drive the cost of the support renewal above that of a new array with similar capacity.  The support renewal does not (usually) include hardware updates for the now three- or five-year-old storage array, including the storage processors.  Some vendors are changing this approach at renewal time and offering upgrades on the storage processors, but it can still be a difficult conversation to have with the CFO.

The technology hardware refresh cycle of the business must be considered when comparing the ROI and TCO of hyperconverged infrastructures versus traditional infrastructures.  A three- or five-year outlook on a traditional storage array should account for the expected growth of the business (see the note above on the renewal cycle), so the array should be purchased with headroom to support that growth model.  It can be difficult to justify the cost of unused disks sitting idle for five years to stakeholders who aren’t tech savvy or familiar with storage designs.  Because traditional storage arrays carry that unused, pre-purchased capacity, the ROI of hyperconverged systems will almost always be better in years one and two.
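
The early-year ROI difference comes down to when the money is spent; a simplified comparison (all prices, node sizes, and growth figures are invented for illustration):

  # Invented figures: a traditional array buys year-five capacity up
  # front, while a scale-out cluster adds hypothetical 10 TB nodes only
  # as growth demands them.  The same $1,000/TB is assumed for both.
  need_tb_by_year = [20, 26, 34, 44, 57]  # assumed ~30% annual growth

  print(57 * 1_000)  # traditional: $57,000 spent entirely in year one

  installed, yearly_spend = 0, []
  for need in need_tb_by_year:
      spend = 0
      while installed < need:
          installed += 10      # add one 10 TB node
          spend += 10 * 1_000
      yearly_spend.append(spend)
  print(yearly_spend)  # [20000, 10000, 10000, 10000, 10000] -- only
                       # $30,000 leaves the budget in years one and two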

Hyperconverged systems may offer a better grow-with-the-business model.  If you get into the fine details of TCO, a rack of three 2U hyperconverged systems will cost less in power and cooling than a traditional storage array, three compute servers, and network storage switches.  That said, hyperconverged systems won’t always have the better ROI and TCO; each situation must be considered individually.
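
For the power-and-cooling line item specifically, an illustrative annual electricity comparison (the wattages and rate are invented, and cooling overhead is ignored for brevity):

  # Invented wattages and electricity rate, for illustration only.
  kwh_rate = 0.12
  hci_watts = 3 * 500                   # three 2U hyperconverged nodes
  trad_watts = 3 * 500 + 800 + 2 * 150  # servers + array + SAN switches
  for watts in (hci_watts, trad_watts):
      print(round(watts / 1000 * 24 * 365 * kwh_rate))  # 1577 vs. 2733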

There are many options to consider when choosing between deploying a traditional architecture and using hyperconverged systems.  While the hyperconverged market offers the ability to use commodity hardware and lower software costs, it is still young and unpredictable.

Contact Entec Systems today to find out how we can help you.