Category: Programming


Nitrous.IO logo (Image via CrunchBase)

I’ve had the Pixel for a few months now. The most surprising thing I’ve realised is how long I’ve been using it without modifications. In the first month, I immediately dropped into devmode and installed Gentoo, Debian and my own builds of ChromiumOS.

In the end, I decided to use the Pixel with devmode off. While I sacrifice shell access to the local filesystem, the extra security of verified boot is nice. This isn’t that restrictive for me, because the crosh shell (ctrl+alt+t) has an ssh client, which is enough for me to do my “real” computing on a server somewhere else.

At home I have a server; at work I have a small cloud and a workstation to connect to. But sometimes I wonder whether I can really get away from these support servers and make the most of the ChromeBook environment.

I don’t care about picture or video editing, and there are some HTML5 games too. What will matter most to me is an IDE and collaboration tools (groupware). I’ll save groupware for later.

Introducing Nitrous.IO. This is going to be one of those multi-page blogs.




#!/bin/bash
# Self-decrypting script: the gpg-encrypted python payload is appended
# below, and gpg skips this plaintext preamble when it re-reads "$0".
echo -n "Passphrase: "
read -sr p
gpg -d --batch --passphrase "$p" "$0" | python
exit $?
-----BEGIN PGP MESSAGE-----
Version: GnuPG v2.0.19 (GNU/Linux)
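
For completeness, here’s a sketch of how such a self-decrypting file can be assembled. The filenames and the passphrase are made up for the demo, the generated wrapper uses plain `read -r` (the silent `-s` flag is a bashism) and `printf` instead of `echo -n`, and `--pinentry-mode loopback` is added because GnuPG 2.1+ ignores `--passphrase` in batch mode without it:

```shell
#!/bin/sh
# Assemble a self-decrypting script: a plaintext shell wrapper followed
# by the ASCII-armoured ciphertext. When the wrapper feeds "$0" back
# through gpg, gpg skips the shell preamble and finds the armour block.
printf 'print("hello from the payload")\n' > payload.py

cat > secret-runner.sh <<'EOF'
#!/bin/sh
printf 'Passphrase: '
read -r p
gpg -d --batch --pinentry-mode loopback --passphrase "$p" "$0" 2>/dev/null | python3
exit $?
EOF

# Append the armoured ciphertext (demo passphrase: s3cret)
gpg -c --batch --armor --pinentry-mode loopback --passphrase s3cret -o - payload.py >> secret-runner.sh
chmod +x secret-runner.sh
```

Running `./secret-runner.sh` and typing the passphrase decrypts and executes the payload in one go.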


Maybe I should combine this with Puppet.


I have been reading the GNU Make manual. I already knew about some of these features: implicit rules, special targets, etc. I’ve used .PHONY for a while without *really* knowing how it works (and thus, how to properly use it).

Today, I’ve learnt about .ONESHELL (new in GNU Make 3.82).
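
To see what .ONESHELL actually changes, here’s a minimal sketch (assuming GNU Make ≥ 3.82 is on the PATH; `.RECIPEPREFIX`, also new in 3.82, replaces the traditional tab purely to keep the snippet copy-paste safe):

```shell
#!/bin/sh
# Without .ONESHELL, each recipe line gets its own shell, so a `cd`
# would not affect the next line. With it, the whole recipe body is
# passed to a single shell invocation.
cat > /tmp/oneshell.mk <<'EOF'
.ONESHELL:
.RECIPEPREFIX = >
demo:
>cd /tmp
>pwd
EOF
make -s -f /tmp/oneshell.mk demo
```

With .ONESHELL in effect, the `cd` persists and `pwd` prints /tmp; comment out the .ONESHELL line and it prints the directory you ran make from instead.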




Secure digital card usb-adapter (Photo credit: Wikipedia)


Hi future me,
Sometimes you’re going to find yourself needing to boot some very archaic CDs. But CD drives might not exist in the future, so you’re stuck with a USB stick to shim the ISO. (I think that first sentence should get all of the Google hits, but let’s include some more buzzwords such as LiveCD, LiveUSB, syslinux, etc.)
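
A sketch of the eventual recipe. The device `/dev/sdX1` is a placeholder (so the destructive steps are commented out), and the kernel/initrd names and the `boot=live` option are assumptions that vary per distro:

```shell
#!/bin/sh
# Shim an ISO's kernel and initrd onto a USB stick with syslinux.
# DANGER: /dev/sdX1 is a placeholder device; verify it before
# uncommenting the destructive steps below.
#
#   mkfs.vfat /dev/sdX1               # FAT filesystem on partition 1
#   syslinux --install /dev/sdX1      # write the syslinux boot code
#   mount /dev/sdX1 /mnt
#   cp iso/vmlinuz iso/initrd.img /mnt/   # names vary per distro
#
# The config syslinux reads at boot time:
cat > syslinux.cfg <<'EOF'
DEFAULT live
LABEL live
  KERNEL vmlinuz
  APPEND initrd=initrd.img boot=live
EOF
# cp syslinux.cfg /mnt/
```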




Cloud computing diagram (Image via Wikipedia)

Programs crash. It is a fact that all sysadmins have to put up with. In a production environment, the best we can do is restart them and hope nobody noticed. Figuring out what went wrong is a task that can usually be deferred until the service is back up to speed again. That’s what logs are for, after all.

Applications crash around me more often than you would otherwise notice. I admit that I do tread the thin line between stability and features.

I run git-master/svn-trunk code on my home desktop. When we have just declared the latest version worthy of a public release, I’m the first to suggest that we move on to $(latest version +1)-preAlphaX, just to try out this fancy new routing feature. Just yesterday, I was checking whether a series of statistically rare crashes (3 distinct crashes over thousands of calls) still exists in the current codebase (not even tagged for preAlpha yet).

And that’s not even mentioning the cloud service being hacked together in my spare time. Needless to say, I am well versed in techniques for keeping a service working sufficiently well in the face of the impossibility of perfection.

So, how do you build a service that appears to keep working, on and on, even if one cannot guarantee the robustness of any single app it runs?

Well, let’s start with an example of an ideal app. Citadel[1] is a BBS/groupware server that originates from the late ’80s. It is a single, monolithic application with well-crafted internal data structures for storing messages (BBS posts, email and chat/IM are all represented in the same way).

Citadel has an application-layer protocol for data input and retrieval that makes fools out of any XML-RPC implementation. The Citadel protocol[2] is simple (it reminds me of SMTP a bit), yet diverse enough that this same API is used for cluster synchronisation (between citservers), client access (/usr/bin/citadel, the user frontend CLI), webapps (webcit), PAM and Apache modules that hook into the user database, and naïve backups to an ASCII stream. I’m a personal fan of [4].

The rest of Citadel’s feature set, as far as I can tell, is best described as translators between this protocol and the protocol-du-jour. A fun thing to try: access your mail over IMAP or POP, what have you. Now do it over NNTP, or the native client. It’s all the same.

Citadel is written in pure and portable C, with very few dependencies and a tiny core library.

# From the ebuild
        >=sys-libs/db-4.1.25_p1                # a well-proven, on-disk database
        virtual/libiconv                       # for translations
        ldap? ( >=net-nds/openldap-2.0.27 )    # optional, and not needed
        pam? ( sys-libs/pam )                  # optional nice-to-have
        ssl? ( >=dev-libs/openssl-0.9.6 )      # but not gnutls; I don't think anyone will care
        net-mail/mailbase                      # just checking the FHS is sane
        !net-mail/mailwrapper                  # no wrappers, citadel does it all internally
        postfix? ( mail-mta/postfix )          # optional postfix

-rwxr-xr-x 1 root root 128K Sep 27 11:13 /usr/lib/

But what does this mean for stability?

Designing good data structures and a suitable protocol for moving them around is a luxury that many modern application developers do not have. There’s a simple reason: reinventing the wheel takes time and requires extensive hindsight. The result will probably be a bit buggy and won’t have feature parity with an easily accessible alternative.

It isn’t worth the time and effort, and your project manager/financial analyst knows this too. So we make do with an off-the-shelf solution; it costs some money and isn’t exactly what we want, and we spend more time learning the API and writing glue code. All because of one very, *VERY* important feature: it exists. Simples.

It adds bloat, more black boxes, and involves more people when things go wrong. But it is the easier thing to do. Contrast this with Citadel, and one quickly realises that a small binary means a small codebase. With a small, intimate codebase, a code dive, a bug fix or a new feature is easier, albeit technically intricate.

For my readers who are wondering why we don’t just reinvent the wheel in the face of these challenges, I invite you to write that Twitter application I have been promising for a few posts now. Go on, I dare you to write an OAuth implementation. There’s plenty of documentation, diagrams and discussion about the intricacies on the internet[4]. Conceptually, it’s just 3 HTTP requests, a callback (or out-of-band message), and the love child of HTTP Digest and Needham-Schroeder-style cryptographically secure delegated authentication (think Kerberos).

How many people can say they’d be comfortable reimplementing SHA-1?[5][6]

… Where were we..? Oh yes, service uptimes.*

Things crash; accept it. It probably crashed in a section that you have no idea how to fix. What’s the workaround?
If you have a lot of resources, then it’s easy to apply some cloud-computing tricks.

1/. Spawn many instances of the application. If it crashes less than 50% of the time, then by averages you’re improving stability. If it’s more than 50%, then call it a failed test and send it back to the devs. Implementation difficulty: easy. Virtualization is cheap logistically, and financially if you select the correct platform.
2/. Use a watchdog[7] or shell loop, so that when the app crashes out, no one notices the flames. The occasional reboot also helps to stave off the effects of memory leaks. Implementation difficulty: easy. You can get your init system, or cron, to do this; Upstart has a nice ‘respawn’ keyword.
3/. Modularize. A common way to scale an application is to separate its key features. Put the database, frontend processor, backend worker, web interface and API servers on different physical machines. Better yet, sell them as separate products. Bonus points if they scale with load heterogeneously. Implementation difficulty: intermediate. You now have to define a real API and start worrying about communication lines. Welcome to the internet.
4/. Load balance. If you have 6 application servers, 3 backend workers and a database cluster, then think about hardware[8][9], software[10] or DNS load balancing. The idea is to mask the fact that a machine or two have gone down. If communication can go over an either-or path, load-balancing magic can keep service consumers unaware that anything went wrong while 1/. and 2/. come into effect. Implementation difficulty: easy-hard. Adding in physical load balancers can even trick cluster-unaware applications into doing the right thing. It gets harder if you want to exploit extra communication and heartbeat protocols for state synchronization. It can be really fun to try cluster management at the application layer.
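
As a sketch of 2/., here is a minimal shell respawn loop. The MAX_RESTARTS and RESPAWN_DELAY knobs are my own additions (a bounded run is handy for testing), and the citserver path in the usage comment is just an example:

```shell
#!/bin/sh
# Respawn loop: restart a command whenever it exits, pausing between
# attempts so a fast crash loop doesn't spin the CPU.
# MAX_RESTARTS=0 (the default) means respawn forever.
respawn() {
    tries=0
    while [ "${MAX_RESTARTS:-0}" -eq 0 ] || [ "$tries" -lt "$MAX_RESTARTS" ]; do
        "$@"
        echo "$(date): '$*' exited with status $?, respawning" >&2
        tries=$((tries + 1))
        sleep "${RESPAWN_DELAY:-5}"
    done
}

# Example (runs forever): respawn /usr/sbin/citserver
```

This is essentially what Upstart’s ‘respawn’ keyword does for you, minus the rate-limiting bookkeeping.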

The last bit, writing cluster applications, is probably the one thing that I will try to avoid, due to the implications if you get it wrong. Reliability, consistency, performance: in the world of clusters, pick two. However, I have seen instances that optimise reliability and performance until something breaks; then the same cluster will go into self-preservation mode and optimise for reliability and consistency until it recovers.

Finally, a well-written application can scale to tens, if not hundreds of thousands, of users without resorting to these techniques. With a non-trivial amount of extra time to reinvent wheels, network services would be looking much less cloudy.

[4] The biggest user of OAuth is probably the most authoritative.

*Intentionally vague about what service I’m talking about.


Unicode Eye Chart in Firefox 2 (Photo credit: sillygwailo)

Dear present and future me, friends and visitors. Unicodifying your computers is not a hard thing to do. If you use Python 3, then the str type is unicode. In Python 2, you need to use the built-in type unicode.
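
A quick sketch of the difference, driven from the shell (the character count and the byte count diverge as soon as anything non-ASCII appears):

```shell
#!/bin/sh
# In Python 3, str holds unicode text and bytes is a separate type;
# the character set is only chosen at the encode/decode boundary.
python3 - <<'EOF'
s = "na\u00efve"          # str: 5 unicode characters ("naive" with i-diaeresis)
b = s.encode("utf-8")     # bytes: the i-diaeresis costs 2 bytes in UTF-8
print(type(s).__name__, type(b).__name__, len(s), len(b))
EOF
```

This prints `str bytes 5 6`: five characters, six bytes.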


Ubuntu’s official logo (Image via Wikipedia)

Through no fault of my own, I found myself sitting patiently at my desk waiting for a progress bar to complete. That was last Friday; the progress bar was for Ubuntu 11.10 Server 32-bit. The line between me and the ‘true’ internet contains a vast array of firewalls, switches, routers, nexuses (nexii?), IPS and fiber. I don’t actually hit the internet until the “local” pop-out somewhere in Amsterdam.

Twenty minutes of a 0.5MB/s download later, I should have realised: I can’t just close my eyes and hope that my dive into Ubuntu would be challenge-free.

My task: package ‘P’ so that end users don’t have to wade through ‘developer-friendly’ documentation[1]. With the modern inclination towards cloud computing, virtualising ‘P’ and pushing it out onto UCS[2] farms seems like the way forward.

We’re experimenting with a Linux port. Linux has the happy ability to live alongside copies of itself on a network without calling in the accountants. It also means that a single-use box only needs a single (or dual) core, a bit of RAM and ~5GB of disk space[3].

When building test tools, one writes code that does the job within the very limited scope of the moment. And so it is with program ‘P’. ‘P’ is a perfectly pythonic program and, in theory, should run perfectly happily on any modern OS. It’s a long and even more complicated story why, but program ‘P’ only works on Windows. Stands to reason: Windows is the most popular desktop OS.

The hand-wavy excuse for this legacy behaviour is that there’s a compiled C module with one too many Windows dependencies.

Snag: developers, when forced not to use Visual Studio, think Linux is synonymous with Ubuntu. They have my pity and sympathy for not spending too long deciding. It’s understandable, since they just want to get on with writing code; if it’s written well, then it should work anywhere.

I fire up my Gentoo templates. I have this really cool one that I just clone, change the hostname/root password and I immediately have a new Linux server[4].

Gentoo naturally has … err … differences, and it is going to take too long to sift through them and re-port ‘P’. Ubuntu, here I come. A true case of whenyoucantbeatthemjointhem syndrome.

So, actually installing the bug ‘U’ isn’t that painful. There’s a curses wizard that guides you through some nice desirables: LVM partitioning, boot loaders and the VM-friendly checkbox. I’m prepared to sacrifice updatability and tweaking if the end executable still works. The package manager works well enough, and Google knows which commands I need next. I might even learn something about how the Ubuntu world works, then come to love it… well, hate it less. I only need to follow a recipe.

[1] I actually learnt the python language by reading this program.
[3] Compared to quad/octo-core 8GB RAM (max guest support) and ~40GB disk space, typically.
[4] It is REALLY cool, menial things like portage trees, local rsync mirrors, binhosts and icecream clusters are pre-configured. I should write a post about setting one up. From this template, friends at work have instantiated new subnets of production worthy servers within hours of summoning it from the mighty god VLAN.

Someone call me up on this if I don’t actually do it.

I’m thinking, something cross-platform in QML/Qt/C++. I’ll figure something out.
Anybody got feature requests?