Duration: 6 months
Disclaimer: The duration (6 months) to cover the Focus Areas is indicative, based on our previous experience with teams. Please note that your team may go faster or slower. For teams that progress at a faster learning rate, we may go deeper and/or add technical topics beyond the listed Focus Areas. For teams that proceed at a slower pace due to missing fundamental knowledge, missing tech stack skills, and/or inadequate resources (e.g., slow computer processors), we may not complete all the listed Focus Areas.
Technical Assessment must have been successfully completed.
We have reached alignment that increasing quality and reducing maintenance costs are of very high value to the company
We have reached alignment with the team regarding current technical challenges and the proposed solutions
Test Automation
QA Automation Engineer:
E2E Testing
Developer Testing:
Component Testing & Contract Testing
Unit Testing & Integration Testing
Architecture
Component Testable Architecture
Hexagonal Architecture
Clean Architecture
Clean Code & Refactoring
Separation of Concerns
Test Driven Development
Deployment Pipeline should be used to automate build, test, and deployment. The highest priority is to keep the Pipeline green at all times.
Strive to build Microservices with a separation of domain and infrastructural concerns. To achieve this, we need Hexagonal Architecture as a minimum; if we go further, we will use Clean Architecture (with a Rich Domain).
Typical structure for a Microservice following Clean Architecture & Tests:
Application Core Layer consisting of Use Cases and Domain.
Unit Tests target the Use Cases, and in rare situations, may target the Domain directly.
Presentation Layer consisting of REST API (or other).
Integration tests target the REST API, and stub out the Use Cases.
Infrastructure Layer consists of DB and communication with external services.
Integration tests target the Infrastructure.
We also have Component Tests spanning the entire Microservice but excluding any other Microservices or External Services (a minimal structural sketch follows this list).
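To make the layering concrete, below is a minimal sketch, assuming a Java stack (the document does not fix the team's tech stack, and all names such as PlaceOrderUseCase and OrderRepository are illustrative, not prescriptive). The types are kept package-private so the sketch fits in one file; in a real service each would live in its own file, in a package per layer.

```java
// Application Core Layer: Domain plus Use Case. The use case depends only on the
// OrderRepository port (an interface owned by the core), never on a concrete DB or web framework.
record Order(String id, long amountInCents) {}                  // Domain

interface OrderRepository {                                     // Port, owned by the core
    void save(Order order);
}

class PlaceOrderUseCase {                                       // Use Case (target of Unit Tests)
    private final OrderRepository repository;

    PlaceOrderUseCase(OrderRepository repository) {
        this.repository = repository;
    }

    Order placeOrder(String id, long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("Amount must be positive");
        }
        Order order = new Order(id, amountInCents);
        repository.save(order);
        return order;
    }
}

// Infrastructure Layer: adapter implementing the port (target of Integration Tests,
// e.g., against a real or containerized database).
class JdbcOrderRepository implements OrderRepository {
    @Override
    public void save(Order order) {
        // JDBC/ORM persistence code omitted in this sketch.
    }
}

// Presentation Layer: a thin REST controller delegating to the use case
// (target of Integration Tests with the use case stubbed out; framework annotations omitted).
class OrderController {
    private final PlaceOrderUseCase placeOrder;

    OrderController(PlaceOrderUseCase placeOrder) {
        this.placeOrder = placeOrder;
    }

    Order post(String id, long amountInCents) {
        return placeOrder.placeOrder(id, amountInCents);
    }
}
```

Component Tests would exercise all three layers together through the REST API, with any other Microservices or External Services replaced by test doubles.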
The tests must have high Mutation Coverage:
WHY high Mutation Coverage? Code Coverage alone is not adequate because it measures only whether code was executed, not whether it was verified: someone can achieve 100% code coverage with zero assertions. Mutation Coverage overcomes this problem because it measures both execution and verification (a short illustration follows this block).
HOW to achieve high Mutation Coverage:
Use Test Driven Development (TDD), which naturally leads to high Mutation Coverage. This is the most effective and preferred way to reach high Mutation Coverage.
Use Test Last Development (TLD), which typically leads to low Mutation Coverage; developers then need to go through the Mutation Test report and resolve the surviving mutants. This is generally a painful, time-consuming approach, hence TDD is preferable.
IMPACT of high Mutation Coverage:
When we have high Mutation Coverage, our test suite is reliable, so we can implement changes and refactor safely. On the other hand, if Mutation Coverage is low, we cannot change or refactor the code safely.
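A minimal illustration of the difference, assuming Java with JUnit 5 and a mutation testing tool such as PIT (the class, tests, and numbers are invented for this sketch):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Production code (would live in the main source tree).
class PriceCalculator {
    long applyDiscount(long priceInCents, int discountPercent) {
        return priceInCents - (priceInCents * discountPercent / 100);
    }
}

// Test code (would live in the test source tree).
class PriceCalculatorTest {

    // 100% line coverage, zero verification: the method is executed but nothing is asserted.
    // If the mutation tool changes "-" to "+", this test still passes, so the mutant SURVIVES
    // and Mutation Coverage drops, even though Code Coverage stays at 100%.
    @Test
    void coverageWithoutAssertions() {
        new PriceCalculator().applyDiscount(10_000, 20);
    }

    // Execution AND verification: if "-" is mutated to "+", the result is 12_000 instead of 8_000,
    // the assertion fails, and the mutant is KILLED.
    @Test
    void killsTheMutant() {
        assertEquals(8_000, new PriceCalculator().applyDiscount(10_000, 20));
    }
}
```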
Test Driven Development: The TDD cycle is Red-Green-Refactor, which means we start with a failing test, write code to make the test pass, then refactor the code and ensure the test still passes. We apply TDD for new development:
For new User Stories, we read the acceptance criteria and formulate the expectations in our mind. For each expectation, we write it in executable form as a test (an executable specification), write just enough code to satisfy the test, and check that the test passes; we are then free to tidy up the code and ensure the test still passes.
For new Bugs, we read the bug description and write a test which specifies the desired behavior, then verify that the test fails (this means we've reproduced the bug). We then fix the code and verify that the test passes, which means we have successfully implemented the bug fix. We can then also tidy up the code (a short sketch of this flow follows).
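A short sketch of the bug-fix flow, assuming Java with JUnit 5; the bug, names, and amounts are invented for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// RED: write the test that specifies the desired behavior first. Running it against the
// current (buggy) code fails, which confirms the bug has been reproduced.
class ShippingCostTest {
    @Test
    void ordersOfExactlyFiftyEurosShipForFree() {
        assertEquals(0, ShippingCost.forOrderTotal(50_00));
    }
}

// GREEN: fix the production code so the test passes.
class ShippingCost {
    static long forOrderTotal(long totalInCents) {
        // The bug was the check "totalInCents > 50_00", which charged shipping
        // for orders of exactly 50 EUR; ">=" implements the desired behavior.
        return totalInCents >= 50_00 ? 0 : 4_95;
    }
}

// REFACTOR: with the test green, tidy up (e.g., extract 50_00 into a named
// FREE_SHIPPING_THRESHOLD constant) and re-run the test to confirm it still passes.
```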
Refactoring to Clean Code
Prerequisites: Refactoring can be done safely only if we have high Mutation Coverage. Thus, if the Mutation Coverage is low, don't refactor; instead, raise the Mutation Coverage first. A score of 100%, or close to it, means refactoring can be done safely.
First-level refactoring should be done locally before pushing code: for example, resolving compiler warnings, acting on SonarLint findings, and running an automated linter to ensure consistent code formatting.
Second-level refactoring is done based on SonarQube analysis within the Pipeline: for example, resolving the Code Smells identified by SonarQube.
Third-level refactoring is done based on human review. As noted above, first-level and second-level reviews are automated; third-level review is performed by a human (e.g., the team lead or team members), and only after the automated reviews, to avoid wasting reviewer time. Human review may consider principles such as DRY, SOLID, etc. (a small before/after sketch follows).
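As an example of the kind of finding a human reviewer might raise, here is a small before/after sketch in Java (the class and duplication are invented for illustration); the refactoring is only attempted because the surrounding tests provide high Mutation Coverage:

```java
// BEFORE: the same null/blank validation is duplicated in both methods, a DRY violation
// that a human reviewer may flag even if the static analysis tools did not.
class CustomerService {
    void register(String email) {
        if (email == null || email.isBlank()) throw new IllegalArgumentException("email required");
        // ... registration logic
    }

    void invite(String email) {
        if (email == null || email.isBlank()) throw new IllegalArgumentException("email required");
        // ... invitation logic
    }
}

// AFTER: the duplication is extracted into one well-named method; behavior is unchanged,
// and the full test suite is re-run to confirm it stays green.
class CustomerServiceRefactored {
    void register(String email) {
        requireEmail(email);
        // ... registration logic
    }

    void invite(String email) {
        requireEmail(email);
        // ... invitation logic
    }

    private void requireEmail(String email) {
        if (email == null || email.isBlank()) throw new IllegalArgumentException("email required");
    }
}
```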
The target outcomes of our program are the following:
Test Quality - higher code coverage, higher mutation score, higher test maintainability
Code Quality - higher code readability, higher modularity, higher static analysis scores
Software Architecture - improved architectural testability, increased decoupling between business logic and infrastructure
Test Driven Development - building in quality by implementing user stories using TDD
The target outcomes are relevant for the source code and test code covered during coaching sessions. The extent to which those benefits are seen in the application as a whole depends on the time invested (beyond coaching sessions) to replicate the best practices across the rest of the codebase.
At one end of the spectrum, some teams only invest time during the coaching sessions. In that case, they gain some basic practice in applying the practices at a small scale, but not at a wider scale, and there won't be any visible impact on bug levels or delivery speed.
At the other end of the spectrum, some teams invest heavily beyond the coaching sessions to apply what they've learned. Those teams use the practices as part of their day-to-day work until the practices become a habit; in that case, they can expect visible impacts on bug reduction and delivery speed, and visible improvements in test and code quality metrics.
Some teams are in-between. The level of benefit they can expect depends on their time investment beyond the coaching sessions.
Additional factors we have observed affecting team performance: high-performing teams were already comfortable with their tech stack, already familiar with OOP principles, and had strong intrinsic motivation. Lower-performing teams struggled with their tech stack, had little or no understanding of OOP principles, and lacked intrinsic motivation toward quality.