While it may seem impossible to have both high-quality code and fast shipping, you can have your cake and eat it too. By embedding a few best practices that score low on the ‘overhead’ scale, software engineering leaders can drive up code quality and reduce bugs without compromising rapid delivery imperatives.
Most software organizations don’t define improving code quality as a main KPI; bugs are treated as a fact of life, or the refrain is “I’d love to improve, but I don’t have the time.”
But neglecting code quality carries severe costs across resourcing, reputation, and employee satisfaction.
Can quality work FOR agility and not AGAINST it?
Let's look at some numbers...
With the growing pressure to ship faster, dev organizations find themselves struggling to find the delicate balance between quality and velocity.
Choosing velocity over quality comes at a price.
According to a study by Stripe, the average developer spends 13.5 hours a week on technical debt and another 3.8 hours on bad code.
A Rollbar study found:
- 38% of developers spend around a quarter of their time fixing bugs
- 26% spend half their time doing bug fixes
- 89% said undetected errors take a toll on the business: errors cause loss of users, reduced ability to attract investors, reputation damage, and more.
On top of this, hunting bugs causes developer burnout (17%), resentment (12%), and even makes some want to quit entirely (9%).
“Just get it done so we can ship it” doesn’t sound quite so clever these days - and perhaps it never did.
By ignoring quality - readability, code standards, refactoring - in favor of shipping fast, the average developer loses valuable productive time. This has an adverse impact on stakeholders and can drive your developers to despair.
In short, prioritizing velocity over quality achieves the opposite: it hurts velocity over time.
We all have projects of varying quality. Let’s assume that the worst 10% of projects have six times more bugs than the best 10%. Simply put, even if the worst 10% of projects advanced just to the median level, the impact could be dramatic.
So if improved quality can in fact dramatically improve velocity, how can it be achieved without slowing the team down? You can also read more about driving excellence in engineering teams in a previous article we wrote on the subject.
Winning on both quality and velocity fronts
Ok, so it’s clear we need to increase code quality. But how can this be achieved when we’re in a mad scramble to ship all the time? The intuitive answer is to try and prevent bugs from occurring in the first place instead of spending endless time fixing them.
Software delivery analytics and big data can be tremendously helpful in such proactive bug prevention.
Let’s take a look at some important tips to win on both the quality and velocity fronts and how we can use engineering analytics tools for getting the visibility needed to monitor this process.
Tip #1: Reduce pull request size and file sizes
Comparing pull request sizes against defect density makes clear that there is a direct correlation between the two.
Try to keep your files shorter than 100 lines of code - beyond that, you pay much more in bugs. Smaller pull requests also decrease cycle time and the number of merge conflicts, enabling continuous deployment.
*CCP - Corrective Commit Probability - reflects how likely it is that a new commit is a ‘fix’.
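As a rough illustration of the CCP idea, here is a minimal Python sketch that approximates it as the share of commits whose messages look like fixes. The keyword pattern is a simplistic assumption of ours, not the formal estimator used in the research:

```python
import re
import subprocess

# Hypothetical keyword heuristic - a crude stand-in for the formal
# CCP estimator; tune the pattern to your team's commit conventions.
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|hotfix|patch)\b", re.IGNORECASE)

def looks_like_fix(subject):
    """Return True if a commit subject line reads like a bug fix."""
    return bool(FIX_PATTERN.search(subject))

def corrective_commit_probability(repo_path="."):
    """Approximate CCP: fraction of commits whose subject looks like a fix."""
    subjects = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if not subjects:
        return 0.0
    return sum(looks_like_fix(s) for s in subjects) / len(subjects)
```

Run against a repository, this gives a single number you can track over time; a falling CCP suggests fewer of your commits are firefighting.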
Tip #2: Reduce cycle time - and time between pull request and code review
Code reviews should happen as soon as possible after updates. The shorter this cycle, the lower the chance that new changes will result in overlapping lines and bugs. This article states that the ideal time between pull request submission and review is under 2 hours. Why? Humans aren’t hard-wired for context switching, so productivity drops as more time passes between the two.
Engineering analytics tools can help visualize cycle times, plus the time between pull requests and reviews - so you can understand their impact on the number of bugs found and fixed.
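To make the two-hour target concrete, here is a hypothetical sketch that flags pull requests whose first review lagged past a threshold. The record shape and field order are assumptions; real timestamps would come from your Git host’s API:

```python
from datetime import datetime

def review_lag_hours(opened_at, first_review_at):
    """Hours between PR submission and its first review."""
    return (first_review_at - opened_at).total_seconds() / 3600

def flag_slow_reviews(prs, threshold_hours=2.0):
    """Return ids of PRs whose first review exceeded the threshold.

    `prs` is an assumed list of (pr_id, opened_at, first_review_at) tuples.
    """
    return [
        pr_id
        for pr_id, opened_at, first_review_at in prs
        if review_lag_hours(opened_at, first_review_at) > threshold_hours
    ]
```

A nightly run of something like this can surface the reviews that are quietly stretching your cycle time.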
Tip #3: Refactoring prioritization
When technical debt becomes too high, we set our sights on refactoring code. The challenge is how to build the right technical roadmap and decide which files to prioritize.
At GoRetro we use machine learning to identify error-prone files and components - so you can see where you can make the maximum impact in the shortest time.
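GoRetro’s model itself isn’t shown here, but a much simpler heuristic in the same spirit is to rank files by how many fix-looking commits touched them. This sketch parses `git log --name-only --pretty=@@%s` output; the `@@` marker and fix-keyword pattern are assumptions of this example:

```python
import re
from collections import Counter

FIX_PATTERN = re.compile(r"\b(fix|bug|hotfix)\b", re.IGNORECASE)

def rank_fix_hotspots(log_text, top_n=10):
    """Rank files by how many fix-looking commits touched them.

    Expects `git log --name-only --pretty=@@%s` output: each commit
    subject prefixed with '@@', followed by the files it changed.
    """
    counts = Counter()
    in_fix_commit = False
    for line in log_text.splitlines():
        if line.startswith("@@"):
            # New commit: decide whether it looks like a fix.
            in_fix_commit = bool(FIX_PATTERN.search(line))
        elif line.strip() and in_fix_commit:
            counts[line.strip()] += 1
    return counts.most_common(top_n)
```

Files at the top of this list are reasonable first candidates for a refactoring roadmap, since past fixes tend to predict future ones.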
Additional tips for effective bug prevention
- Reduce file length and coupling: Corrective Commit Probability reflects how likely it is that a new commit is a ‘fix’. Studies have found that a low CCP correlates with smaller files and lower coupling. Keep your files short and reduce coupling to reach the top 20% of projects in quality.
- Find time to do your TODOs: Studies have found that, when refactoring, going back and hitting your TODOs has the biggest impact on reducing bug rates.
- Get feedback on your code: Gather feedback from code reviews, tests, and having your code reused. And following Linus’s Law is a great idea - more eyeballs on the job will help you find bugs faster.
- Take care of the few code smells that matter: In terms of quality, the strongest code smells can be dealt with by abstraction, simplicity and defensive programming - so following these design goals from the beginning can be a quick win.
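A crude way to start acting on the file-length and TODO tips above is a quick audit helper. The 100-line threshold mirrors Tip #1; the function name and return shape are illustrative assumptions:

```python
from pathlib import Path

def audit_lines(lines, max_lines=100):
    """Return (line_count, todo_count, over_limit) for one file's lines."""
    todos = sum(1 for line in lines if "TODO" in line)
    return len(lines), todos, len(lines) > max_lines

def audit_tree(root=".", pattern="*.py", max_lines=100):
    """Flag files under `root` that exceed the limit or contain TODOs."""
    report = []
    for path in Path(root).rglob(pattern):
        n, todos, over = audit_lines(
            path.read_text(errors="ignore").splitlines(), max_lines
        )
        if over or todos:
            report.append((str(path), n, todos))
    return report
```

Even a report this simple gives the next refactoring sprint a concrete, prioritized worklist instead of a vague mandate.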
We believe quality and velocity don’t have to come at each other’s expense. You can have them both, and the right technology to help you get there is readily available. Don’t forget to include increased code quality as a KPI for your dev organization. Baking quality into both the code itself and the processes surrounding production and review leads to better outcomes across the board - and it will not slow shipping, but rather the opposite. By following best practices and monitoring quality with the help of tools like Acumen, team leads can produce accurate quality metrics and gain confidence in the impact of both code quality and velocity on business outcomes.