Why the battle for codebase quality should not be fought by lone soldiers…
I once heard somebody remark, “Quality is just something of value, to someone, at some particular point in time.”
That somebody could well have been Ron Jeffries; I can’t find the direct quote. Regardless, it nonchalantly captures the ephemeral nature of quality, and it rings particularly true in the context of software.
Right now, at this very moment in time, if your codebase is delivering value to your customer, you could say you’ve won the battle for quality.
And you’d be half right.
But with every release comes the need to fight again. And if you’re living in a world of continuous delivery, you’re also living in a perpetual fight for quality. Because the other half of the battle lies behind the scenes, in-between releases.
Victorious warriors win first, then go to war
“War is like winter. And winter is coming.” So said Ulysses S. Grant, the 18th President of the USA, and a well-known TV character with a fitting surname. In the context of software, as the phrase suggests, pragmatic delivery teams are well prepared for the inevitable.
They recognise the requirement to move quickly in order to continuously release value, and the ability to move fast, with confidence, stems largely from the health of the codebase. To move fast a team must protect the quality of their codebase. So how do they do that? A colleague of mine once said to me – “a codebase should be understandable, not learnable” – an aphorism for the essence of code quality.
In my experience there are three overarching attributes that are the hallmarks of a healthy codebase –
- Consistency in conventions
- Organic documentation
- Well-structured tests
They’re three great principles to guide a team in battle. Consistency breeds familiarity. Organic documentation maintains awareness. And well-structured tests give confidence.
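To make the third attribute concrete, a well-structured test typically has a descriptive name, exercises one behaviour, and follows a clear arrange/act/assert shape. Here's a hedged illustration in Python; the function and values are hypothetical, not from any real codebase:

```python
# Hypothetical example of a well-structured test: a descriptive name and a
# clear arrange/act/assert shape, testing one behaviour.
def apply_discount(price: float, percent: float) -> float:
    """Toy function under test (illustrative only)."""
    return round(price * (1 - percent / 100), 2)


def test_ten_percent_discount_reduces_price():
    # Arrange: a known price and discount
    price, percent = 100.0, 10.0
    # Act: run the code under test
    discounted = apply_discount(price, percent)
    # Assert: one clear expectation per test
    assert discounted == 90.0


test_ten_percent_discount_reduces_price()  # passes silently
```

A test structured this way doubles as organic documentation: the name and the assert together state what the code is supposed to do.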
Weapons of choice
The battle for code quality is one best fought together. There is value to be had from a lone soldier, but their missions are best kept short and infrequent. The best teams creatively make the most of their people.
Pair programming and code reviews are two distinctly different but effective sociotechnical weapons in the battle for quality when utilised consistently by a team.
For the uninitiated, a code review is the act of reviewing another team member’s code, generally before it’s committed, usually through a predefined mechanism like a pull request. And pair programming is teaming up with a colleague to write code together and committing upon completion. On the surface these weapons may sound like their value is to catch bugs. Fewer bugs means higher quality, right? But the value to quality goes far beyond bug squashing.
To pair is human
Earlier I identified familiarity, awareness and confidence as qualities to strive for. Pairing is a social vessel to ferry you quickly on that journey, and when utilised well it will –
- Bond team members
- Upskill less experienced team members
- Reduce defect density
- Pollinate best practices and conventions
- Encourage collective ownership and self-policing of the codebase
- Motivate self-improvement
- Breed confidence in decisions
Who wouldn’t want to benefit from those things?
Pair programming breaks down silos of knowledge. Everyone benefits because everyone is learning; whether it’s about the code, or about one another. Two heads are often better than one and your team’s dynamic will present you with interesting pair combinations. Hiccups and oversights in the code stand a better chance of getting caught. Any trade-offs with quality you make to ship code faster can be agreed upon and made visible to the team to address later.
But, and of course there is a but, it’s the dynamic within a team that often dictates the effectiveness of pairing. Pair programming puts a spotlight on individuals. Pressure to perform follows. Vulnerabilities are made apparent. If the team is insensitive to this there’s a risk of alienation, so there’s a need to ensure safety when team members inevitably reveal weaknesses in their skillsets.
Then there’s the possible compromise to velocity. The most senior team members will naturally incur some inertia through pairings with the less experienced. Calculated rotation of pairs, based on the climate a team finds itself in, is key to keeping pair programming effective.
And pairing is simply not for everyone, all of the time. Particularly tough problems may best be tackled solo, with time to think and without the pressure of peer presence. In-the-zone moments are often the product of solitude. But you might view a team in which lone wolf operations are the most effective way of working as a warning flare, provoking investigation into whether the team has the right numbers, an appropriate dynamic, a manageable volume of work, and the capacity and capability to sustain long-term code quality.
Embrace constructive feedback with code reviews
In an ideal world, calculated and liberal use of pair programming is something of a panacea for codebase health. But when events and circumstance conspire against your silver bullet, you have another weapon to unholster – code reviews.
When tackling tough problems, pair programming’s effectiveness can waver; with code reviews, the immediate peer pressure to deliver solutions is removed. Code is only submitted for review when a team member is comfortable to do so. And lone wolf solutions can be unorthodox; an expression of deep thought and individual brilliance. When ready for review, they can prove more beneficial to learning than the more linear solutions a pair might arrive at.
Sometimes solo work is unavoidable. Code reviews are an inclusive tool that accommodates remote workers with less effort than remote pair programming.
Although code reviews require team members to commit time, they provide the benefit of asynchronicity, something which pairing does not. Code reviews don’t have to be conducted at set times; they can be scheduled around team member availability, which benefits teams working flexible hours.
Reviews, like pairing, breed familiarity, awareness and confidence. And in the battle to protect codebase health, code reviews give team members a lens through which to view the code with fresh eyes. Attention to detail can be lost in the fatigue of lengthy pair programming sessions; code reviews avoid this risk. But conversely, they provide a comparatively small window of opportunity to learn and catch mistakes. And the necessity of reviews can become a bottleneck: when not orchestrated carefully they’re a risk to velocity through delays, and a risk to quality through skim reviewing.
The key differentiator between the two is that with pairing the review is an ongoing process, whereas with code reviews the review happens only when a team member is finished. But this finality can be used to good effect, providing a challenge for team members to demonstrate to teammates their ability to go it alone.
We shape our tools, and thereafter our tools shape us
Like with technology, teams should use the right sociotechnical tools for the right job. There is no one size fits all when it comes to battling for quality. Too many variables. But hopefully, if you’re attuned to your team and environment, this post should provide a little steer as to how and why you could and should make use of pairing and reviews.
Like all good approaches to delivery, you should measure the effectiveness of whichever you utilise, and adapt your strategy accordingly. Quality can be like chasing fog, but codebase health is somewhat easier to measure. Test coverage, team onboarding time (the time it takes a new team member to get set up and make their first commit), known defect density, and static analysis tools reporting on adherence to conventions are all reasonable barometers.
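Two of those barometers are simple enough to compute by hand. As a minimal sketch, with all figures and names being hypothetical examples rather than real project data:

```python
# Illustrative sketch of two codebase-health "barometers".
# All numbers and names here are hypothetical, not real project data.

def defect_density(known_defects: int, lines_of_code: int) -> float:
    """Known defects per 1,000 lines of code (KLOC)."""
    return known_defects / (lines_of_code / 1000)


def onboarding_days(first_commit_day: int, start_day: int) -> int:
    """Days from a new team member joining to their first commit."""
    return first_commit_day - start_day


density = defect_density(known_defects=18, lines_of_code=45_000)
print(f"Defect density: {density:.2f} defects/KLOC")  # prints "Defect density: 0.40 defects/KLOC"
print(f"Onboarding time: {onboarding_days(first_commit_day=12, start_day=9)} days")
```

The value isn't in any single number but in its trend: tracked release over release, a climbing defect density or a lengthening onboarding time is an early warning flare.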
Ultimately though, a healthy codebase is a means to an end. And the end should be a happy team and happy customers. Team retros should provide insight into the former, through team member morale and happiness. And analytics and reporting provide insight into the latter, complemented by general team performance metrics.
And as you see a positive impact to team members and team productivity alike, adapt and evolve your strategy to better sharpen and leverage your weapons. Your codebase and customer will thank you. And, just for a brief moment, hopefully, you’ll be able to stop, take stock, and declare yourself the victor, for now, in the battle for quality.