The Merge Request Process is Broken
Have you ever written two lines of code, only to wait a day for them to be merged? How did that make you feel?
The concept of code review is a long-standing practice in software development. It can be traced back to the early days of programming, when developers would collaborate and review each other's code in a very informal way. Later, as software development became more structured, formalized code review processes were established.
In the early days, the process focused mainly on code correctness, efficiency, and error handling. Nowadays, it is supported by various tools and covers aspects such as naming, readability, best practices, security, and even knowledge sharing.
On the other hand, there is a movement called DevOps. It aims to automate and streamline the processes involved in building, testing, deploying, and maintaining software applications. Merge requests sit between the building and deploying stages.
So, what is wrong with the MR process?
The unseen issue arises from the tension between two forces: automation and the human element. The human factor slows the process down, or even makes a fully automated solution impossible. Automation removes the need for a human, and it has already successfully replaced developers in the following areas (a sketch of a pre-merge gate that combines several of these checks follows the list):
- Automated testing — allows developers to catch and fix errors before merging code, without needing to wait for feedback from others, by running a comprehensive suite of automated tests.
- Automated linting and code formatting — some tools can automatically check code for style issues and even automatically fix certain issues. This can reduce the need for manual code style checks during code reviews.
- Automated code quality metrics — some tools can also provide metrics on code quality, such as test coverage, technical debt, code complexity, etc. This can give a quick indication of whether the code is ready to be merged.
- Merge conflicts — a CI/CD pipeline can run the tests and check code style and formatting successfully and still miss a potential merge conflict. Integrating tools that detect such conflicts automatically helps identify and resolve them earlier in the development process.
- Automatic rollbacks — provide a mechanism to revert to a previous known working state of an application or system in case of unexpected issues or failures. Rollbacks can help mitigate the impact of bugs, errors, or compatibility issues that may arise after a new version or update is deployed.
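To make the argument concrete, here is a minimal sketch of a pre-merge gate that a CI job could run before allowing a merge. It assumes a Python project that uses pytest with the pytest-cov plugin, the ruff linter, and a git checkout where origin/main (the target branch) has already been fetched; the tool names, the 80% coverage threshold, and the branch name are illustrative assumptions, not a prescription.

```python
"""Minimal pre-merge gate sketch.

Assumptions: a Python project using pytest with the pytest-cov plugin,
the ruff linter, and a fetched origin/main as the target branch. The
coverage threshold and branch name are placeholders.
"""
import subprocess
import sys


def run(cmd: list[str]) -> bool:
    """Run a command and report whether it succeeded."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0


def merge_is_clean(target: str = "origin/main") -> bool:
    """Dry-run a merge against the target branch to surface conflicts early."""
    clean = run(["git", "merge", "--no-commit", "--no-ff", target])
    # Always undo the trial merge so the working tree is left untouched.
    subprocess.run(["git", "merge", "--abort"], stderr=subprocess.DEVNULL)
    return clean


def main() -> int:
    checks = {
        "merge conflict dry-run": merge_is_clean,
        "lint": lambda: run(["ruff", "check", "."]),
        "tests + coverage": lambda: run(
            ["pytest", "--cov=.", "--cov-fail-under=80"]),
    }
    failed = [name for name, check in checks.items() if not check()]
    if failed:
        print("Gate failed:", ", ".join(failed))
        return 1
    print("All automated checks passed; the change is ready to merge.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In a real pipeline each of these steps would usually be a separate job with its own reporting; the point is simply that none of them requires a human in the loop.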
What prevents us from taking one step further and making merge requests fully automated? I am not talking about committing directly to master.
- Contextual understanding — automated tools may not always have the complete context or understanding of the code and its intended purpose. Sometimes the written code might not align with the intended functionality, and automated checks alone may not be able to catch such issues.
- Knowledge sharing — the MR process allows team members to review and gain knowledge about changes happening in the codebase. By going through MRs, developers can stay updated on the evolution of the codebase, learn from each other’s approaches, and ensure consistent practices across the team.
- Design and architecture — MRs often involve discussions and feedback on design decisions, architectural choices, and overall code structure. These discussions can lead to better solutions, identify potential flaws or optimizations, and improve the overall quality of the code.
How can we remove the human factor from the points above? Is an MR really the right place for knowledge sharing and design/architecture discussions?
- Sprint planning sessions — during sprint planning, the team can discuss technical aspects, design decisions, and architecture choices. This is an opportunity to align on the direction and plan for upcoming work, reducing the need for extensive design discussions during the MR process.
- Pair programming — it involves two developers working together on the same code. This practice not only helps catch errors early but also serves as an ongoing review and feedback process. It promotes knowledge sharing and improves code quality through continuous collaboration.
- Knowledge sharing sessions — regular sessions dedicated to walking through recently written code, sharing new ideas, and reviewing industry trends can foster a strong team affinity and collective code ownership. These sessions give team members an avenue to learn from each other and stay updated on the codebase.
The prevailing opinion will still argue for the necessity of human-driven merge requests. I won’t argue against it. My point is that MR approvals by two developers or a manager might, in some cases, diverge from the initial intention behind merge requests.
I also claim that there are code changes for which you can rely entirely on the automation already available in your CI/CD pipelines, and if something is not yet covered by that automation, your precious time is better invested in automating it than in reviewing someone else's code.
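As an illustration of that claim, here is a rough sketch of a policy that decides whether a change can rely on automation alone and be auto-merged, or whether it should still go to a human reviewer. The path patterns and the size limit are invented placeholders for the sake of the example, not rules the article prescribes.

```python
"""Sketch of an auto-merge policy: decide whether a change is fully covered
by existing automation. AUTOMATION_COVERED and MAX_CHANGED_LINES are
illustrative assumptions."""
from fnmatch import fnmatch
import subprocess

# Paths whose changes are considered fully covered by automated checks.
AUTOMATION_COVERED = ["docs/*", "*.md", "tests/*", "src/generated/*"]
MAX_CHANGED_LINES = 50  # only small changes qualify


def changed_files(target: str = "origin/main") -> list[str]:
    """List files changed on this branch relative to the target branch."""
    out = subprocess.run(["git", "diff", "--name-only", f"{target}...HEAD"],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]


def changed_lines(target: str = "origin/main") -> int:
    """Count inserted plus deleted lines relative to the target branch."""
    out = subprocess.run(["git", "diff", "--shortstat", f"{target}...HEAD"],
                         capture_output=True, text=True, check=True)
    # e.g. " 3 files changed, 12 insertions(+), 4 deletions(-)"
    numbers = [int(tok) for tok in out.stdout.split() if tok.isdigit()]
    return sum(numbers[1:])  # skip the leading file count


def can_auto_merge(target: str = "origin/main") -> bool:
    """True if every touched path is automation-covered and the change is small."""
    files = changed_files(target)
    covered = all(any(fnmatch(f, pat) for pat in AUTOMATION_COVERED)
                  for f in files)
    return covered and changed_lines(target) <= MAX_CHANGED_LINES


if __name__ == "__main__":
    verdict = "auto-merge" if can_auto_merge() else "request human review"
    print(f"Policy decision: {verdict}")
```

A script like this could run as the last step of the pipeline and either enable auto-merge or assign reviewers, so the human step is spent only where automation does not yet reach.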
What if, in the near future or even now, we could benefit from AI that provides feedback on how our code changes integrate into the entire codebase? Such an AI could be trained to pick up potential issues that rule-based automation cannot detect.
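To show what that could look like in practice, here is a speculative sketch that feeds the diff of a branch to a large language model and prints its feedback. It assumes the openai Python client (v1 or later) with an API key in the environment; the model name and the prompt are placeholders, and nothing here is an established part of the MR workflow.

```python
"""Speculative sketch: ask an LLM for advisory feedback on a branch's diff.
Assumes the `openai` client (v1+) and OPENAI_API_KEY in the environment;
the model name and prompt are placeholders."""
import subprocess

from openai import OpenAI


def get_diff(target: str = "origin/main") -> str:
    """Return the diff of the current branch against the target branch."""
    out = subprocess.run(["git", "diff", f"{target}...HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout


def ai_review(diff: str) -> str:
    """Ask the model for issues that rule-based checks would miss."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Point out issues that "
                        "rule-based checks would miss: unclear intent, "
                        "risky edge cases, and inconsistencies within "
                        "the change."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ai_review(get_diff()))
```

Such feedback would be advisory, one more signal alongside the deterministic checks, rather than a blocking approval.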
So, why should I wait a day, or even ten minutes, for the approval of my two-line code merge request when I already have well-designed automation in place?