My business is change. I like to think we help people change for the better by introducing tools and techniques we believe are good practice. To do this, we rely on our own research, but we also encourage our customers to do their own, and for the most part what they find backs up our suggestions for improvement. Imagine my surprise, then, when a customer recently sent me a link to some 'research' from a company billing itself as a 'Research Lab.' To lend it even more credence, the company trumpeted its very own 'Chief Scientist,' complete with PhD.
The 'research' was pretty standard stuff: they'd done some static analysis on a bunch of systems from large companies and found out pretty much what many of us already know - most computer systems aren't written very well and there is an awful lot of technical debt around. There are many reasons for this, which, again, many of us are aware of, so I won't bore you by evangelising our favoured solutions here; in any case, the main reason for publishing the 'research' was to market the company's products and services in this area.
However, there was one phrase in the findings that had caught the customer's eye, "Scores for robustness, security, and changeability declined as the number of releases grew, with the trend most pronounced for security." In other words, the more releases you do, the more likely you are to have issues with security, robustness and changeability.
The reason this caught his eye is that we had been helping this customer with a transition to Agile Software Development, whose key principles include "Early and frequent release of valuable software" and "Deliver working software frequently." We had told him that releasing as often as possible would lead to more satisfied customers and better quality software. Now he wanted to know how we could reconcile this with 'research' done by a 'Chief Scientist' from a 'Research Lab' that said the more releases a product has, the more likely it is to have security, robustness and changeability issues.
He had a very good point: moving to a continuous integration and delivery model is one of the first steps we recommend to all our customers, but we also like to think of ourselves as scientists whose methods rest on solid foundations. Should we continue with a continuous integration and delivery strategy when the 'research' tells us otherwise?
I'd like to report that we spent a lot of time and effort on this ourselves, but it really only took us a few minutes to figure out what the real problem was. Anyone who's undertaken any field of scientific study will know that the first, and probably hardest, thing to learn is critical analysis - in other words, how to understand what the data is telling us. The first lesson in critical analysis is that just because there is a relationship between two things doesn't necessarily mean the relationship is causal, and even when it is causal, it is very important to identify which item is the cause and which is the effect.
For example, every so often in the medical sector some bright young scientist will release research purporting to demonstrate that skinny people are more likely to die young than fat people are. Often this research is pounced upon by the more sensationalist newspapers and published under banner headlines exhorting us to abandon our diets.
The research in question is usually based on measuring weight and age at death for non-accidental mortalities. The researcher invariably finds the average weight of the fatalities to be less than the average weight of the population, and concludes, therefore, that the less you weigh, the more likely you are to die.
What the researcher fails to account for is that:
- People die young mostly because of illness.
- Many of the illnesses that cause you to die young are wasting illnesses that cause massive and rapid weight loss before death.
- Many people spend a long time in hospital before they die and are subject to dietary controls.
The truth is - it’s not being slim that makes you die, it’s dying that makes you slim.
Most medical professionals are aware of this phenomenon and simply ignore such findings, as do all responsible medical journals. At the same time, many practitioners ready themselves for the barrage of questions shortly to be directed their way from concerned patients who have read the 'research.' After all, if it's published 'research' it must be true, mustn't it?
Let's go back to the 'research' that told us frequent releases lead to security, robustness and changeability issues and see what the researchers might have missed...
- Anyone who's worked in a regulated industry would agree that security and robustness issues are mission-critical and need to be fixed and released ASAP. You don't wait for the next scheduled release to deploy the fixes; you do a patch release. In other words, security and robustness issues force you to have more releases. Ergo, a system with security or robustness issues is very likely to have more releases.
- Code that is difficult to change suffers from a similar problem: software is difficult to change when it is difficult to understand, and when it's difficult to understand, it's also difficult to know when it's been fixed. You may think you've fixed it and do a release, but you haven't, and you may have to do several releases before it's finally put to bed. Therefore, code that is difficult to change is likely to have more releases.
- Code that has no issues doesn't have patch releases.
The truth here is - it's not frequent releases that cause code issues, it's code issues that cause frequent releases.
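The reverse-causality argument above is easy to demonstrate with a toy simulation (entirely hypothetical data, not the lab's): if we generate systems where low quality *causes* extra patch releases, and then naively correlate release count against quality, we reproduce exactly the negative relationship the 'research' reported.

```python
import random

random.seed(42)

# Toy model: each system has an intrinsic quality score (higher = better).
# Low-quality systems produce more critical defects, and each defect
# forces an unscheduled patch release - quality drives release count,
# not the other way round.
systems = []
for _ in range(200):
    quality = random.uniform(0, 10)
    scheduled = 4  # routine releases per year
    defects = int((10 - quality) * random.uniform(0.5, 1.5))
    releases = scheduled + defects  # patch releases pile up on bad systems
    systems.append((releases, quality))

# Pearson correlation between release count and quality score.
n = len(systems)
mean_r = sum(r for r, _ in systems) / n
mean_q = sum(q for _, q in systems) / n
cov = sum((r - mean_r) * (q - mean_q) for r, q in systems)
sd_r = sum((r - mean_r) ** 2 for r, _ in systems) ** 0.5
sd_q = sum((q - mean_q) ** 2 for _, q in systems) ** 0.5
corr = cov / (sd_r * sd_q)

print(f"correlation(releases, quality) = {corr:.2f}")  # strongly negative
```

A naive analyst looking only at this correlation would conclude that releases degrade quality; by construction, the causation runs the other way.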
Funnier still to some people, but actually quite tragic for our industry: a piece of research lamenting the poor analysis and design skills of software developers is itself blighted by poor analysis. Most experienced professionals and journals will recognise this 'research' for what it is but, again, practitioners need to ready themselves for the barrage of questions from concerned customers. After all, if it's published 'research' it must be true, mustn't it?