neomania and vintage projects

Neomania, or neophilia, means the love of new things. Many hardware products reach a plateau, and the focus then tends to move to software. I wonder how, and at what point, the speed of change in software and software development begins to level off. I believe the speed is only accelerating, and I'm hesitant to join the scouting party on this endeavour.

For some things, there comes a point when the change is too fast and they fall behind. They can't cope with the speed and are then best abandoned. Add to this the fact that people are willing to jump ship on little or no proof at all. Due to neomania, technology adoption outpaces what common sense and actual need would suggest. I think there's both genuine interest in everything new and a certain compulsive need to be one step ahead.

Fast adoption of software seems more mentally than financially taxing. The same goes for writing software, where keeping up with the daily trends is a burden. The payoff can be noticeable when that new and exciting library is really needed and it amplifies the actual use case. Otherwise, its incompleteness is likely to cause trouble.

Pioneers are essential, though, and there's something to be said about the opposite.

If people are moving into your town, it’s going to be full of open-minded people looking to improve their lot. If people are moving away from your town, the evaporative-cooling effect means you’ll be left with a narrow-minded populace trying to protect its dwindling assets. - Kevin Simler at Ribbonfarm

picking projects

No one wants to end up the narrow-minded dinosaur. So there’s a general dislike of everything old, a fear of being left behind. There’s no such thing as vintage in software development, although some things do seem to go in and out of fashion. If developers would rather work with new technology, what happens to the old projects? Who works on those?

Working with those sexy new technologies is exciting, but it has its risks. Keeping a realistic distance from the bleeding edge is generally more sensible, as it requires less effort and everything is likely more stable. There are fewer breaking changes and better documentation, and less chance of investing in a soon-to-die technology. At the same time, there’s a need for starting with the new, refactoring towards the new, and wallowing in, not to mention understanding, the old.

All projects have some amount of technical debt. But those that are declared “done” are the ones in trouble. Development halts, and the intrinsic information dissipates. If there’s no chance of refactoring or updating, it’s easy to conclude that working on the project is personally harmful. When there’s nothing new to learn, a smart developer thinks they could be improving themselves elsewhere.

Not all technologically hopeless projects are detrimental, though. If an otherwise screwed-up piece of software is still around, chances are it has users or is otherwise useful. So there’s a chance to learn more about the business domain, the user base, user behaviour, and so on. When a project is in technological trouble, a craving arises to start over. A clean slate. A sexy codebase is traded for potentially years’ worth of intrinsic information.

For some of these projects, perhaps vintage would indeed be a better descriptor than legacy. Architecture and other solutions can be brilliant, even if they are elderly.

Me, I’d rather work on a technological shitstorm with users and purpose than on a useless or userless exercise on the cutting edge. The distinction is never that straightforward, but the basic idea is sound.

