As a solo tester in a growing company, you’re bombarded with advice about what to measure. Code coverage, test execution rates, bug counts, automation percentages … the list seems endless. Add DORA metrics to the mix, and it’s easy to feel overwhelmed.
Here’s the thing: when you’re testing solo, you need metrics that provide maximum insight with minimal overhead. After years of working with solo testers and small teams, I’ve found three metrics that consistently deliver value.
1. Customer-Reported Issues in New Features
This metric aligns perfectly with DORA’s Change Failure Rate, but with a practical twist for solo testers.
How to Track It:
- Create a simple spreadsheet with columns for:
- Feature name
- Release date
- Customer issues reported in first week
- Issue severity (High/Medium/Low)
- Whether it required a hotfix
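If you'd rather keep the sheet as a CSV file, the same tracking can be sketched in a few lines of Python. The feature names and numbers below are made up; the point is that a rough change-failure rate falls out of the sheet for free:

```python
import csv
from io import StringIO

def change_failure_rate(rows):
    """Rough change-failure rate: share of releases that needed a hotfix."""
    if not rows:
        return 0.0
    failed = sum(1 for r in rows if r["hotfix"] == "yes")
    return failed / len(rows)

# Example sheet with the columns described above (hypothetical features).
sheet = StringIO(
    "feature,release_date,issues_first_week,severity,hotfix\n"
    "checkout-v2,2024-11-01,3,High,yes\n"
    "profile-page,2024-11-08,0,Low,no\n"
    "search-filters,2024-11-15,1,Medium,no\n"
)
rows = list(csv.DictReader(sheet))
rate = change_failure_rate(rows)
print(f"Change failure rate: {rate:.0%}")  # 1 of 3 releases needed a hotfix
```

The same spreadsheet works fine on its own; the script just shows how little structure the metric actually needs.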
Why It Matters:
This tells you immediately whether your testing strategy is effective at catching important issues before they reach customers.
Common Pitfall:
Don’t try to track every minor issue. Focus on problems that impact user experience or require urgent fixes.
2. Critical User Journey Stability
This metric supports DORA’s Time to Restore Service and helps prevent major incidents.
How to Track It:
- Identify your 3-5 most critical user journeys
- Create a daily red/amber/green status for each:
- Green: Working as expected
- Amber: Minor issues
- Red: Journey broken
- Note any incidents and resolution time
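The daily red/amber/green log above can live in a spreadsheet, but as a sketch, here is the same idea as a tiny dict-based tracker. The journey names are placeholders:

```python
from datetime import date

# The three status values for the daily check.
GREEN, AMBER, RED = "green", "amber", "red"

journeys = ["signup", "checkout", "password-reset"]  # hypothetical journeys

# One entry per journey per day: (date, journey) -> status.
log = {}

def record(day, journey, status):
    log[(day, journey)] = status

def broken_today(day):
    """Journeys currently red, i.e. needing immediate attention."""
    return [j for j in journeys if log.get((day, j)) == RED]

today = date(2024, 11, 20)
record(today, "signup", GREEN)
record(today, "checkout", RED)
record(today, "password-reset", AMBER)
print(broken_today(today))  # ['checkout']
```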
Why It Matters:
Gives you and stakeholders immediate visibility of system health without complex monitoring tools.
Common Pitfall:
Trying to monitor too many journeys. Stay focused on what truly matters to your business.
3. Time Distribution: Exploratory vs. Reactive
This metric helps you maintain a healthy balance between proactive and reactive work, supporting better deployment frequency.
How to Track It:
- Use a simple time-tracking sheet with categories:
- Exploratory testing
- Bug investigation
- Regression testing
- Production support
- Review weekly percentages
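The weekly review is simple arithmetic, and a short sketch makes the 40% threshold concrete. The hours below are illustrative, and which categories count as "reactive" is an assumption you should adjust to your context:

```python
# Weekly hours per category (illustrative numbers).
week = {
    "exploratory": 8,
    "bug_investigation": 10,
    "regression": 6,
    "production_support": 12,
}

# Assumed split: bug investigation and production support are reactive work.
REACTIVE = {"bug_investigation", "production_support"}

total = sum(week.values())
reactive_share = sum(week[k] for k in REACTIVE) / total

for category, hours in week.items():
    print(f"{category}: {hours / total:.0%}")

if reactive_share > 0.4:
    print(f"Warning: {reactive_share:.0%} reactive work - review your strategy")
```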
Why It Matters:
If you’re spending more than 40% of your time on reactive work, it’s a sign that something needs to change in your testing strategy.
Common Pitfall:
Don’t track time in minute detail. Rough percentages are enough to spot trends.
Implementation Tips
- Start Small
- Begin with just one metric
- Use tools you already have (spreadsheets work fine)
- Build habits before adding complexity
- Automate Where Sensible
- Use your CI/CD pipeline to count deployments
- Set up simple automated checks for critical journeys
- Create a basic dashboard in your existing tools
- Report Effectively
- Keep a one-page summary
- Focus on trends rather than absolute numbers
- Include brief context notes
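As a sketch of the "count deployments" tip, assuming your CI/CD pipeline can export deploy timestamps (the event data here is made up), a few lines of Python can tally deployments per ISO week:

```python
from collections import Counter
from datetime import datetime

def deployments_per_week(timestamps):
    """Tally deployments by ISO year-week from pipeline event timestamps."""
    weeks = Counter()
    for ts in timestamps:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return dict(weeks)

# Hypothetical deploy events exported from a CI/CD pipeline.
events = [
    "2024-11-04T10:15:00",
    "2024-11-06T16:40:00",
    "2024-11-12T09:05:00",
]
print(deployments_per_week(events))
```

Most pipelines can emit something like this list for you; the weekly counts feed straight into the one-page summary.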
Making These Metrics Work with DORA
These three metrics naturally support DORA’s four key metrics:
- Customer issues help track Change Failure Rate
- Journey stability supports Time to Restore Service
- Time distribution helps improve Deployment Frequency and Lead Time
The key difference? You’re tracking them in a way that’s sustainable for a solo tester.
Remember
The best metric is one you’ll actually maintain. Start simple, focus on what drives decisions, and evolve your approach as your company grows.
I’ve just released a new eBook covering 7 pitfalls to avoid in this situation; get it here:
![](https://kato-coaching.com/wp-content/uploads/2024/11/3d-cover-1024x933.png)