I think I’ve done it. I now have my own home IaaS.
I went for the OpenStack approach, Packstack with RDO on Scientific Linux. In the future I want to replace SL6 with Gentoo on the bare metal, and install the OpenStack packages from portage, but I’ll wait for the work from a Gentoo dev who knows what he’s doing.
This also means that the running hypervisor is KVM, not the Xen that I would rather be using. Technically there isn't much difference between them, but Xen is the hypervisor used by AWS, and its PV images can be booted without fiddling with partitioning and bootloaders. That's so '90s.
Getting an instance of OpenStack is fairly easy these days. Between DevStack, PackStack, and the plethora of Puppet and Chef modules for deploying OpenStack, it is really easy to get running. That is, if you follow the patterns that everyone else did. Compiling from source and configuring everything by hand with vim is still a chore.
I've only managed to get Keystone working when doing it that way.
I chose PackStack on an Enterprise Linux-like distro because it is a well tested version that offers a straightforward (but tightly controlled) pathway to add additional nodes. PackStack also plays well in a /24 home environment without requiring managed switches and offers a bit more persistence than DevStack.
Once you have an instance of OpenStack, what next?
To use IaaS, you need a VM image to run. The documentation has some links to community-generated images. Of note is the CirrOS test image, the Hello World equivalent of any cloud architecture. Once it boots and you can ping a few internet hosts, the next step is to try out the Ubuntu or Fedora images. There are SUSE images, but I didn't have much luck with them, and the Rackspace Cloud Builders images are just more of the same.
No, there is no Gentoo image provided. A problem that I will use the rest of this blog post to address.
Like any other modern Linux system, instances need to be booted. The KVM/libvirt backend emulates full x86 hardware, so we use the x86 (BIOS) boot method. That requires a disk image with an MBR partition table, a BIOS bootloader (I'm choosing extlinux), and all of that mess.
As the system boots, it needs to probe the environment to apply some customizations. The most important job during boot is to acquire the ssh public key of a user allowed to log in. It also needs to set the hostname (optional) and download (and run) a provided user-data script, for parity with the Amazon Linux and OpenStack images. These late-boot jobs I have left to a local.d service.
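A minimal sketch of such a local.d script, assuming an EC2-compatible metadata service at 169.254.169.254 (the filename and `fetch` helper are my own invention, not the exact script I shipped):

```shell
#!/bin/sh
# /etc/local.d/cloud.start -- hypothetical filename; OpenRC's local service
# runs every *.start file under /etc/local.d late in boot, as root.
MD="${MD:-http://169.254.169.254/latest}"    # EC2-style metadata service

fetch() { curl -sf -m 3 "$MD/$1"; }          # quiet; non-zero on HTTP errors

# Only do the real work when running as root at boot.
if [ "$(id -u)" -eq 0 ]; then
    # 1. Install the instance's ssh public key for the login user.
    if key="$(fetch meta-data/public-keys/0/openssh-key)"; then
        mkdir -p /root/.ssh
        chmod 700 /root/.ssh
        printf '%s\n' "$key" >> /root/.ssh/authorized_keys
    fi

    # 2. Set the hostname (optional).
    if h="$(fetch meta-data/hostname)"; then
        hostname "$h"
    fi

    # 3. Download and run a provided user-data script, if any.
    if fetch user-data > /tmp/user-data; then
        chmod +x /tmp/user-data
        /tmp/user-data
    fi
fi
```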
My first image had to be built by hand from within the provided Fedora image. After creating a blank file and loop-mounting it, I went through a stage3 install.
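For reference, the blank file can be created and partitioned without root; only the loop attach, mkfs and mount steps need it. A sketch (device names and the 1MiB partition start are assumptions):

```shell
# Sparse 10G image with a single bootable MBR partition starting at 1MiB.
truncate -s 10G gentoo.img
printf 'label: dos\nstart=2048, type=83, bootable\n' | sfdisk -q gentoo.img

# The remaining steps need root (shown here as comments):
#   losetup -f --show --partscan gentoo.img   # -> /dev/loopN (+ /dev/loopNp1)
#   mkfs.ext4 /dev/loop0p1
#   mount /dev/loop0p1 /mnt/gentoo
#   ...unpack the stage3 and portage tarballs into /mnt/gentoo...
```

With the stage3 unpacked into the mounted tree, the modifications below all happen inside it.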
- Clear /etc/fstab. The rootfs is already mounted by the kernel, and no other filesystem is of interest. (devtmpfs and the other kernel filesystems are mounted by the kernel and initramfs before fstab is consulted.)
- Remove root’s password from /etc/shadow. An empty password field means that the root user can login from the console without providing credentials. All network logins are denied unless using the correct ssh key. This is also enforced by /etc/securetty which I have left unchanged.
- Enable the s0 serial console for ttyS0 in /etc/inittab. Xen uses the hvc0 console.
- Create /etc/init.d/net.eth0 as a symlink to net.lo.
- Add symlinks for sshd and net.eth0 to /etc/runlevels/default.
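Scripted, the five modifications look roughly like this, with $R standing in for the chroot root (a scratch path here, so you can dry-run it; inside the real chroot the files already exist and R would be empty):

```shell
R="${R:-/tmp/gentoo-root}"    # stand-in for the chroot root
mkdir -p "$R/etc/init.d" "$R/etc/runlevels/default"
# Seed a demo shadow entry so the dry run has something to edit.
[ -f "$R/etc/shadow" ] || echo 'root:*:15000:0:99999:7:::' > "$R/etc/shadow"

# 1. Clear fstab; the kernel mounts the rootfs itself.
: > "$R/etc/fstab"

# 2. Empty root's password field: console login without credentials.
sed -i 's/^root:[^:]*:/root::/' "$R/etc/shadow"

# 3. Serial console on ttyS0 (a Xen guest would use hvc0 instead).
echo 's0:12345:respawn:/sbin/agetty -L 115200 ttyS0 vt100' >> "$R/etc/inittab"

# 4. net.eth0 is a symlink to the net.lo init script.
ln -sf net.lo "$R/etc/init.d/net.eth0"

# 5. Start sshd and net.eth0 in the default runlevel.
ln -sf /etc/init.d/sshd "$R/etc/runlevels/default/sshd"
ln -sf /etc/init.d/net.eth0 "$R/etc/runlevels/default/net.eth0"
```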
I’ve posted my kernel config to gist.github.
Here’s my /boot/extlinux.conf.
DEFAULT gentoo
LABEL gentoo
  LINUX /boot/vmlinuz
  APPEND root=/dev/vda1 console=ttyS0 rootfstype=ext4 earlyprintk=serial
  INITRD /boot/initramfs
SERIAL 0
The trick is the last line, "SERIAL 0", which enables bootloader output in the serial log. Also note that the root filesystem sits on vda1, which requires the virtio drivers. I'm even using the virtio network drivers, which offer better performance for virtual guests. I have unmanaged gigabit switches in my network, and I did get network speeds faster than the Fast Ethernet bottleneck. HDD IO was my real bottleneck.
The last modifications that I made were to run:
useradd -m -G wheel,users ec2-user
extlinux --install /boot
from inside the chroot.
Shuffling around 10G raw disk images is a pain; worse, it makes spinning up instances take much longer. Qemu's qcow2 image format is much more efficient.
qemu-img convert -f raw -O qcow2 gentoo.img gentoo.qcow2
Finally, upload the image to openstack with glance.
source keystonerc
glance image-create --name gentoo-$(date +%Y%m%d)-amd64 \
  --disk-format qcow2 \
  --container-format bare \
  --file gentoo.qcow2
Scripting it together
I've put together a script that can be provided to an instance as it boots using the user-data mechanism. It takes about an hour to run, but could be sped up by using binhosts and local downloads.
You should read the script.
There are references to some special tarballs, stage3-latest and portage-latest are copies of tarballs as distributed by Gentoo. vmlinuz-latest is a tarball containing the kernel, initramfs and modules without needing to recompile gentoo-sources. vmoverride-latest is a tarball of the modifications that I made above.
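For illustration, a vmlinuz-latest style tarball could be assembled like this, so that it unpacks over / in the image (the paths and version here are dummies under /tmp, not my actual build tree):

```shell
# Stand-in files for the kernel, initramfs and modules.
B=/tmp/vmlinuz-demo
mkdir -p "$B/boot" "$B/lib/modules/3.x-demo"
: > "$B/boot/vmlinuz"
: > "$B/boot/initramfs"

# Pack relative to $B so extracting at / drops everything into place.
tar -C "$B" -czf /tmp/vmlinuz-latest.tar.gz boot lib
```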
Since this script is expected to be run from inside the hand-made Gentoo image, emerge can be run from outside the chroot with the ROOT variable pointing at the chroot. This has the advantage of installing only runtime dependencies into the chroot.
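The invocation is something like this, assuming the image is mounted at /mnt/gentoo (the mount point and package are just examples, not what the script literally installs):

```shell
# From the build host, outside the chroot: ROOT redirects installation,
# so only runtime dependencies land in the image tree.
export IMAGE_ROOT=/mnt/gentoo
if command -v emerge >/dev/null 2>&1; then
    ROOT="$IMAGE_ROOT" emerge --oneshot sys-boot/syslinux
else
    echo "emerge not found; run this on a Gentoo build host"
fi
```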
extlinux needs to be run on a mounted filesystem, and bit-bashing the bootloader in from outside the chroot has proven unreliable, so run it from inside the chroot at the same time as the useradd.
At the end of all of this, I now have a basic Gentoo image working with OpenStack. It is basic, and posting this next link probably puts me in violation of a few GPL clauses. So here it is.