I recently did a presentation at HP Discover in Barcelona, Catalonia, called Red Hat's vision for an open-hybrid cloud (the slides are also available). When preparing the presentation, I initially thought of calling it "Red Hat's vision for a Software-Defined Datacenter". The term "Software-Defined Datacenter" (SDDC), first coined by VMware, has become extremely popular in the IT industry in recent months. There are very few parts of the datacenter that cannot be "software-defined" anymore. The first element was Software-Defined Networking (SDN), followed by Software-Defined Storage (SDS) and Software-Defined Computing (SDC), which together led to the SDDC.
However, during the preparation of my session, I stepped back a little and thought about what this "software-defined" trend was really about, and I asked myself this question: what datacenter today runs no BIOS? No hypervisor? No operating system? No application server? And no application? None, of course. Why? Because a datacenter has always been defined by software! What sets today's IT industry apart are two factors that are driving efficiency: openness and standardization.
- What is software-defined networking? It is about taking a standard x86 server, connecting it to the network, and, through software, making it a controller for the network environment using open protocols.
- What is software-defined storage? It is about taking standard x86 servers and, through software, putting the capacity of their internal disks at the disposal of clients through open access protocols.
- What is software-defined computing? It is about taking standard x86 servers and consolidating hundreds of servers onto them by virtualizing the standard x86 instruction set.
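The common pattern across all three cases is the same: standard hardware, software on top, and an open protocol for access. As a toy illustration of the storage case, here is a minimal Python sketch of a plain server exposing a local directory to clients over HTTP — an open protocol standing in for the likes of NFS or iSCSI, with a temporary directory standing in for local disk capacity. Everything here is an invented example, not an actual SDS implementation:

```python
import tempfile
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# A stand-in for the local disk capacity of a standard x86 server.
storage_dir = tempfile.mkdtemp()

# Through software, expose that capacity to clients over an open
# protocol (plain HTTP via the standard library, for illustration).
handler = partial(SimpleHTTPRequestHandler, directory=storage_dir)
server = HTTPServer(("127.0.0.1", 0), handler)  # port 0: pick any free port

print(f"serving {storage_dir} on port {server.server_address[1]}")
# server.serve_forever()  # left commented out in this sketch
```

The point is not the protocol chosen, but that any client speaking it can consume the capacity — no proprietary driver or appliance required.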
But what about the cloud? To me, cloud is the automation layer that manages resources on top of this infrastructure. Be it public or private, a cloud creates an automated way to provision services by offering a service catalogue to users through a self-service portal.
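To make that automation layer concrete, here is a minimal, hypothetical sketch of a service catalogue sitting behind a self-service portal — the service names and resource sizes are invented for illustration:

```python
# Hypothetical service catalogue: the entries a self-service portal
# would present to users (names and sizes are invented).
CATALOGUE = {
    "small-vm": {"cpus": 1, "ram_gb": 2},
    "large-vm": {"cpus": 8, "ram_gb": 32},
}

def provision(service_name: str) -> dict:
    """Turn a user's catalogue selection into an automated provisioning request."""
    spec = CATALOGUE[service_name]
    return {"service": service_name, **spec, "status": "provisioning"}

print(provision("small-vm"))
```

The user never files a ticket or racks a machine: they pick an entry from the catalogue, and the cloud layer turns that choice into a provisioning action against the underlying infrastructure.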
The question now is: whom do you want to work with to implement this open, standardized datacenter?
After freeing yourself from proprietary, purpose-built hardware, what would be the point of locking yourself in again with a software vendor? Openness on the infrastructure side can only be matched by openness on the software side, and Free and open-source software (FOSS) is the key to keeping control of your environment, and in particular to being able to choose among different vendors. Open protocols are key to providing access to all parts of this type of infrastructure, and that is the beauty of FOSS: there can be no proprietary protocol, as the way applications talk to each other is known by everyone. No secret sauce, no voodoo magic and no "trust us, everything is going to be fine" — just plain openness, from which you can only benefit.
Who do you think can help you build this open, standardized datacenter? In terms of vendors, think of one that has been standardizing Unix platforms onto standard x86 servers with an open-source operating system for the past 20 years. Think of a vendor that provides storage solutions based on x86 servers and open protocols. Think of a vendor heavily involved in all of OpenStack's modules, including Neutron, which manages networking. This is what Red Hat has been doing for the past 20 years: opening and standardizing.
The future might bring surprises. The trend toward ARM-based servers, SoCs, and hyperscale computing might create new silos of technology. Software-based storage on top of x86 servers will probably co-exist with Fibre Channel SANs for some time. But as long as your environment is as open (in hardware and software) and as standardized as possible, you are in good hands. Still, do not blindly trust vendors who claim to be open. Trust the open-source communities and the vendors who contribute the most to them.