According to their website, "OpenStack software controls large pools of compute, storage, and networking resources through a datacenter, managed through a dashboard or via the OpenStack API." That's a pretty long sentence, but in the end it means "cloud" without resorting to what is by now an overused, and certainly ill-defined, buzzword.
I want to deploy a pool of compute, storage, and networking resources, and OpenStack offers me a way to control and manage that pool. OpenStack is also very widely used by some pretty serious companies (Best Buy, PayPal, and Comcast, to name a few). It's actively developed and gets new releases a couple of times a year.
The downside... unsurprisingly, OpenStack is complex. It isn't a single application; it's an entire ecosystem of applications and options. There are a number of open source projects that fall under the OpenStack umbrella, and the most popular/useful/mature get pulled into the core. At present there are six core services: Nova (compute), Neutron (networking), Keystone (identity), Glance (image service), Swift (object storage), and Cinder (block storage). There are also roughly thirteen optional services.
There are a lot of ways to install OpenStack, ranging from all-in-one installs (where virtual machines or containers are used to create an OpenStack environment on a single machine) to full-blown bare-metal installs of hundreds of nodes in a number of geographic locations. My needs fall somewhere in the middle. I want to deploy a small proof-of-concept network that will allow me to easily replace nodes with better hardware and to just add more nodes when I need additional compute or storage resources. Initially I just want to utilize all of the unused hardware I have lying around. As things prove out, we'll budget for replacements and boost performance.
I looked at the OpenStack website and was initially drawn to the Install Guide for Ubuntu. I read through it and it was very hands-on. Perhaps too hands-on; there was a lot of room for error. A bit of googling led me to https://www.ubuntu.com/cloud/openstack, which offers a faster way to get up and running using their Autopilot software. This method essentially has a MAAS (Metal As A Service) host use IPMI and PXE to configure your physical hosts. And it's at this point that I completely struck out. The hardware I'm using has some pretty shaky IPMI implementations, and I couldn't find good workarounds in MAAS, nor did I want to spend the time learning MAAS when what I really wanted was to be deploying OpenStack.
A little side track on IPMI. IPMI is an acronym for the Intelligent Platform Management Interface. This is a computer interface specification for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host's CPU, firmware, and operating system (the above definition courtesy of Wikipedia). A BMC (baseboard management controller) is a specialized service processor that is the main controller for IPMI and provides the intelligence and the physical interfaces to other components and subsystems. There have been five versions of the IPMI specification, beginning with v1.0 in 1998. v2.0 was published in 2004 and has had two minor updates since then (2014, which added IPv6 support, and 2015, which added additional security protocols). On server-grade hardware, IPMI is implemented in the DRAC on Dell hardware and in the iLO on HP hardware. Yep. At the base of your cloud is physical hardware. At the base of all cloud is physical hardware.
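Before trusting any deployment tool with these boxes, it's worth a quick sanity check that each BMC actually answers on the network. Here's a minimal sketch in Python (the BMC addresses and credentials are placeholders, and it assumes ipmitool is installed wherever you run it):

import subprocess

# Placeholder BMC addresses and credentials -- substitute your own.
BMCS = [
    {"host": "192.168.0.101", "user": "admin", "password": "changeme"},
    {"host": "192.168.0.102", "user": "admin", "password": "changeme"},
]

for bmc in BMCS:
    # "power status" is a cheap call; if it fails here, PXE/IPMI driven
    # installers (MAAS, Ironic) are going to struggle with this box too.
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", bmc["host"], "-U", bmc["user"], "-P", bmc["password"],
           "power", "status"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    status = result.stdout.strip() if result.returncode == 0 else "UNREACHABLE"
    print(f"{bmc['host']}: {status}")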
Moving on...
A bit more research led me to TripleO (tripleo.org). TripleO (OOO) stands for OpenStack On OpenStack: we install OpenStack on a single machine (called the Undercloud), including the optional Ironic component, which is used to handle iron (bare-metal servers). Like MAAS, it's going to use IPMI and PXE; however, it includes dummy drivers to get around some broken implementations, and, better still, learning it is learning OpenStack, since Ironic is itself an OpenStack component. Seems a win. The only downside, from my perspective, is having to use CentOS instead of Ubuntu. Another serious bonus for me is that Puppet is used to provision the nodes into their roles (compute, object storage, controllers, etc.). I use Puppet a lot, so this would mean more visibility under the hood.
TripleO is undergoing a lot of changes (like all things OpenStack). It's entirely possible that future versions will rely much more heavily on containers (and that won't be a bad thing).
TripleO, at least as of this writing, uses Nova, Ironic, Neutron, Heat, Glance, and Ceilometer to deploy OpenStack onto bare-metal hardware. This deployment is the usable end result and is called the Overcloud.
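Put another way: once the Undercloud is up, it is itself a small OpenStack cloud and answers to the normal client tooling. As a hedged sketch, assuming the openstacksdk Python library and a credentials entry I've named "undercloud" (both assumptions on my part; TripleO actually gives you an rc file to source):

import openstack

# Connect using a clouds.yaml entry named "undercloud" -- the name is an
# assumption for this sketch; use however your credentials are stored.
conn = openstack.connect(cloud="undercloud")

# List the services registered in the Undercloud's Keystone catalog.
# On a TripleO Undercloud you would expect to see entries like nova,
# ironic, neutron, heat and glance here.
for service in conn.identity.services():
    print(f"{service.name:<12} {service.type}")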
The steps should proceed pretty much like this:
1. Install CentOS 7 on the server that will become the Undercloud director
2. Deploy the Undercloud
3. Configure IPMI on the remaining hardware (hardcode an IPMI/BMC IP address, username and password)
4. Register the hardware to Ironic (see the sketch just after this list)
5. Allow Ironic to deploy the introspection image. This image gathers additional information about the hardware and performs some light benchmarking. The results of the introspection make it easier to programmatically decide which hardware is right for which roles.
6. Tag hardware for roles (also covered in the sketch below)
7. Deploy the Overcloud
8. Observe the monitoring and operations software
9. Backup the director
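Steps 4 through 6 boil down to a handful of Ironic calls. TripleO's own tooling normally drives them from a JSON inventory file (instackenv.json), but as a rough sketch of what that amounts to under the hood, again using openstacksdk (node names, BMC addresses, credentials, and profile names are all placeholders, and the IPMI driver name varies between releases):

import openstack

conn = openstack.connect(cloud="undercloud")  # assumed credentials entry

# Placeholder inventory of the spare hardware and its BMCs.
INVENTORY = [
    {"name": "compute-0", "bmc": "192.168.0.101", "profile": "compute"},
    {"name": "ceph-0",    "bmc": "192.168.0.102", "profile": "ceph-storage"},
]

for box in INVENTORY:
    # Step 4: register the node with Ironic using an ipmitool-based driver.
    node = conn.baremetal.create_node(
        name=box["name"],
        driver="ipmi",  # older releases call this pxe_ipmitool
        driver_info={
            "ipmi_address": box["bmc"],
            "ipmi_username": "admin",      # placeholder credentials
            "ipmi_password": "changeme",
        },
    )

    # Step 5 (introspection) is normally kicked off through TripleO's own
    # workflow, which boots the introspection image and records the results.

    # Step 6: tag the node for a role. TripleO matches hardware to flavors
    # via a "profile" capability stored in the node's properties.
    conn.baremetal.update_node(
        node,
        properties={"capabilities": f"profile:{box['profile']},boot_option:local"},
    )
    print(f"registered {node.name} as {box['profile']}")

In practice I expect to let TripleO's import tooling do this from instackenv.json, but it's reassuring that it's all plain Ironic underneath.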
At least that's the plan... next up: Deploying the Undercloud