antifragile software development

What could antifragile software be like? Or rather, what kind of a team, with what kind of a development process, could produce software in an antifragile fashion?

This one originally appeared in Finnish on Fraktio’s blog. Consider it half repost, half rethink and half translation, as time has passed and lessons have been learned.

I’ve been rereading Nassim Taleb’s interesting book Antifragile, which introduces a new concept called antifragility. It’s not really something the author invented, but rather something he described and thoroughly defined for the first time. The opposite of fragile is often, perhaps intuitively, thought to be “robust” or “resilient”. In fact, those are attributes that merely let something withstand harm. So where a fragile entity is damaged by harm and a robust one isn’t affected at all, an antifragile one is positively affected by it.

First of all, I’ll pass on self-learning or self-correcting programs, as they are a more academic and complicated topic, and I believe myself to be ill-equipped to consider that type of program comprehensively enough. So if software in itself cannot gain from negative influence, what then? Instead, I’ll consider antifragility in a software project, team, organisation or the development process in general.

a robust codebase #

A fragile codebase is usually a hindrance, as dealing with arcane problems slows down development. It consumes first the developer’s energy and patience, and eventually everyone’s.

What makes a robust codebase? How exactly does one develop software that is, if not self-correcting, at least better able to adjust after something negative happens? Different testing approaches, coupled with Continuous Integration practices, should bring about the needed safety and confidence in the correctness of the software. The goal with tests, documentation, naming conventions, decoupling and so on is to make the codebase readable and comprehensible. All this should empower the developer to make changes confidently, with minimum effort and stress. The code becomes more adjustable and therefore provides the developer with more options.
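
To make that confidence a little more concrete, here is a minimal sketch in Python of the kind of tests I mean. The `invoice_total` function and its discount rule are hypothetical, invented only for illustration; the tests could run under a test runner such as pytest as part of a CI pipeline.

```python
# A tiny, self-contained example: a pure function plus a few tests.
# The point is not the logic itself, but that a change to the discount
# rule would be caught immediately, which makes refactoring less stressful.

def invoice_total(items, discount=0.0):
    """Sum (name, price) pairs and apply an optional fractional discount."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    subtotal = sum(price for _, price in items)
    return round(subtotal * (1 - discount), 2)


def test_total_without_discount():
    assert invoice_total([("coffee", 3.50), ("cake", 4.00)]) == 7.50


def test_total_with_discount():
    assert invoice_total([("coffee", 3.50), ("cake", 4.00)], discount=0.1) == 6.75


def test_invalid_discount_is_rejected():
    try:
        invoice_total([("coffee", 3.50)], discount=1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")
```

With something like this running on every push, a developer can change the pricing logic and know within minutes whether anything broke.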

One can of course get too stuck on usually good ideas, for example zealous adherence to the DRY principle or compulsive testing, and end up with software layered and abstracted just too deep, and with tests for tests. This puts an enormous strain on the developer’s mind. Paradoxically, the codebase becomes slow to deal with. Striving for perfect code (whatever that is) or fanatically following some principle is at best a waste of time, and at worst the road to perdition.
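
To make the “layered just too deep” point concrete, here is a deliberately exaggerated, hypothetical illustration: the same one-line greeting first wrapped in strategy-and-service ceremony, then written plainly. All the names are invented for the example.

```python
# Over-engineered: three layers of indirection for one string operation.
class GreetingStrategy:
    def greet(self, name: str) -> str:
        raise NotImplementedError


class PoliteGreetingStrategy(GreetingStrategy):
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"


class GreetingService:
    def __init__(self, strategy: GreetingStrategy):
        self._strategy = strategy

    def produce_greeting(self, name: str) -> str:
        return self._strategy.greet(name)


# The plain version does the same job and is easier to read, test and change.
def greet(name: str) -> str:
    return f"Hello, {name}!"


# Both produce the same result; only one of them needs a diagram.
assert GreetingService(PoliteGreetingStrategy()).produce_greeting("Ada") == greet("Ada")
```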

project practices #

It might be quite obvious that waterfall project management is the antithesis of adjustability. Lean and agile can mean pretty much anything these days, but their essence is about minimising waste. Some of their principles include avoiding the buildup of task concurrency, bureaucracy, bottlenecks and “software inventory”.

Minimum viable product is one overused, or rather misused and abused, concept that has in some cases become an excuse for laziness and shortcuts. It, too, is fundamentally a good idea, one that promotes the ability to react quickly to problems.

Lean principles increase a project’s adaptability to change and therefore its antifragility. Through this adaptability, the software becomes antifragile in relation to competing software projects and more reactive to the changing demands of users, infrastructure, legislation and so on. Decreasing bureaucracy and encouraging efficient communication are vital for avoiding bottlenecks and ensuring the right features are being worked on.

A large software inventory promotes fragility and waste, as the features sitting in the inventory are untested (in production) and the reaction to them lies somewhere in the future. A buildup of these can cause huge risks and unmanageable situations. Conversely, releasing often allows the project team to react quickly to negative feedback, and thus improve the software in, hopefully, a short amount of time.

individuals and teams #

Let’s imagine two types of people from either end of a spectrum, where one end is risk-averse and the other is not. Risk-averse individuals like to work in a stable environment, moving meticulously from feature to feature, carefully building the software and avoiding any surprises. This is not to say that these people avoid agile practices and would rather work on a waterfall project. Risk-prone people, on the other hand, would rather hack a feature into existence and take up time-limited challenges; they might get bored with work that is too ordinary. Both types of people have their strengths, and in an ideal team it is of course best to include both. Such a team has diverse skills and different viewpoints, and its members can hopefully provide sufficient and realistic criticism of each other.

The meticulous individuals tend to be risk-averse and robust to most changes, but not that well equipped to handle large, sudden changes (like [Black Swan events](http://en.wikipedia.org/wiki/Black_swan_theory)). The more risk-prone, on the other hand, like to take risks and embrace change. The risk-averse keep the risk-prone from taking too great a risk and pushing the software into too fragile a state. The risk-prone keep the software and project from stagnating, and perhaps provide an antifragile approach to handling technological change.

organisation #

An antifragile organisation is intuitively a small one. An organisation with a flat structure is better equipped to handle and react to negative influences. This is ideally an organisation with low hierarchy and a half-expert, half-generalist workforce, the so-called “T-shaped employees”. A transparent organisation that reveals and acknowledges its errors rather than hiding them can easily be seen as more reliable. This can, for example, take the form of a public status page revealing and explaining service outages, bugs or other problems.

An entity with an already established, positive image can actually benefit from making a (small) mistake, as it makes them appear more “human”. Turning hardships into public stories and lessons also helps out others.

A large organisation can be antifragile too, keeping in mind that an antifragile whole can have fragile parts. A large, antifragile organisation should be able to eliminate its non-functioning parts, although layoffs are unpopular and legally difficult (somewhat depending on the country). Unexpectedly, the perverse act of becoming “too big to fail” can also make a company robust to the largest of financial catastrophes.

tl;dr #

Antifragility in software often appears as a comparative ability to react to changes; in other words, being more adaptable than competing software. In development practices, antifragility shows up as the ability to release often and then react to negative feedback, mistakes, bugs and problems. This ability depends on the organisation, the responsible team and its members, the project practices used and, eventually, the sensibility and soundness of the codebase.

 