Given that the concept is only about two years old, it’s worth
explaining what hyperconverged infrastructure is and how it differs
from its cousin, converged infrastructure.
Hyperconvergence is the latest step in the now multiyear pursuit of
infrastructure that is flexible and simpler to manage, or as Butler put
it, a centralized approach to “tidying up” data center infrastructure.
Earlier attempts, such as integrated systems and fabric infrastructure,
typically involve SANs, blade servers, and a lot of money upfront.
Converged infrastructure has similar aims but in most cases seeks to
collapse compute, storage, and networking into a single SKU and provide a
unified management layer.
Hyperconverged infrastructure seeks to do the same but adds value by
building in software-defined storage; it places less emphasis on
networking, focusing instead on data control and management.
Hyperconverged systems are also built on low-cost commodity x86
hardware. Some vendors, especially early entrants, contract with
manufacturers such as Supermicro, Quanta, or Dell for the hardware,
adding their value through software. More recently, software-only
hyperconverged plays have emerged, along with hybrid plays in which a
vendor sells the software on its own but will also supply hardware if
needed.
Today hyperconverged infrastructure can come as an appliance, as a
reference architecture, or as software that is flexible about the
platform it runs on. That last option is where it can be hard to tell
the difference between a hyperconverged solution and software-defined
storage, Butler said.