A Chat with Quality Assurance Manager Mike DeBoer
We recently sat down with Mike DeBoer, who is the Quality Assurance Manager for EBM Software. Mike works with the Catalyst software development team, where he has bolstered QA efforts with his experience and automation expertise. His many responsibilities include defining and developing the QA processes, leading the QA teams and conducting product reviews before every release.
Prior to joining the EBM Software team, Mike was the Quality Assurance Practice Lead at Nerdery, and has served in QA leadership roles in a wide variety of market segments. He is a well-regarded thought leader in the Minneapolis QA community, and in October, he gave a keynote presentation at the Testingmind Test Automation & Digital QA Summit titled “The Changing and Dynamic World of Automation Resources.”
We picked Mike’s brain on a variety of QA-related topics, asked him about how he’s evolving the Catalyst product review process and got his thoughts on some of the major QA challenges companies are facing today.
There are a lot of misinterpretations and possibly some misconceptions about QA's role. Can you kind of give me an idea of what your job truly entails?
I like to think that I'm an advocate for the client of any application I'm testing. I'm looking at it from a client’s eyes. Is this the best throughput we can do? I'm looking at the application from that perspective.
At the same time, there’s a huge misconception that because you have QA, you have quality. “Testing” cannot test quality into an application, it can only validate and verify what's already there. That's why I always try to get involved as early as possible, to help identify issues and possible conflicts in the scoping and design phase of the development cycle. By being involved early, QA is still only validating what is there, but by identifying issues in these early phases, we are driving in quality from the inception of an idea to its final delivery.
During definition, I’m asking those questions that product and Dev might find annoying. “What if this happens, or what if that happens?” But asking those questions early can drive quality into the application, because now they’re having to think about those scenarios and factor them in as they’re defining it.
You had an interesting presentation about automation recently at the Testingmind QA Summit. What kind of role does automation play in the QA process for somebody who doesn't know?
Automation primarily lives within the regression testing of the application. Regression testing is a set of test cases and baselines that validates that new code and new fixes going into the application have not broken anything in the existing code. It’s really a validation set – the stuff that seems mundane but that you have to do every single time just to ensure that you’ve got a quality product.
Now, without automation you’re having to do this manually, which can be very time consuming, and when you’re asking a human to do it there is a higher probability of error. When you automate it, the application runs those tests the exact same way every single time. Now you have consistency and you know you’ve covered everything.
Automation will never fully eliminate the need for a manual tester, but automation expands what the testers can do and allows them to go deeper into the application, which only improves the quality of the overall application.
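To make the idea concrete (this example is ours, not Mike’s): a regression suite pins down baseline results from a known-good build, and every new build re-runs those same cases to confirm nothing broke. A minimal sketch in Python, using a hypothetical discount calculator as the application under test:

```python
# Hypothetical example: a tiny regression baseline for an imaginary
# discount calculator. The cases below are "known good" results captured
# from a previous release; every build re-runs them automatically.

def apply_discount(price, pct):
    """Application code under test: price after a percentage discount."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (100 - pct) / 100, 2)

# Baseline: (price, pct) -> expected result from a known-good build.
REGRESSION_BASELINE = {
    (50.00, 0): 50.00,
    (80.00, 50): 40.00,
    (19.99, 10): 17.99,
}

def run_regression():
    """Re-run every baseline case; return the list of mismatches."""
    failures = []
    for (price, pct), expected in REGRESSION_BASELINE.items():
        actual = apply_discount(price, pct)
        if actual != expected:
            failures.append((price, pct, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_regression()
    print("PASS" if not failures else f"FAIL: {failures}")
```

Because the baseline is data rather than a human checklist, the suite runs identically on every build, which is exactly the consistency Mike describes.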
Could you give us an idea of what goes into a product review before release?
From my perspective, I review all the test cases and the results. I look at the passes, fails, blocks and any issues I have. I review any bugs we’ve found. I then work with the development team with regard to the fails and the blocks to make sure that they're OK with where we’re at and that nothing can potentially negatively impact our users.
At the same time, I’m reviewing and taking one last look at the bugs that were fixed in the release, along with the new functionality that goes in. Then finally, we do a demo for the business team so they can see what's being delivered in this next release. They can see how it's going to impact the client.
Once that part is taken care of, we discuss with the business side what we’re doing next in development and make sure that what we have in our next release is still in alignment with what they’ve asked for, because they’ve already set the priorities and we’ve already reviewed it. We want to make sure that we’re adapting to their various needs and their requests.
What got you excited about the potential of joining EBM Software?
It was the overall opportunity. Because when I was looking at joining the organization, the opportunity was wide open for me to define the QA process from top to bottom. The QA was being done by developers and financial analysts, and they were doing an amazing job, but it was not consistent. It was not their primary job and it's not what they really wanted to be doing – which is the case with QA in a lot of places. They needed somebody who was fully invested in QA to take it to the next level.
When I joined, my first job was taking what they had and formalizing it into actual test cases. So now when I go through those test cases, we’re thorough, we’re consistent, and we have reportable metrics on all of it. That raises the level of quality and consistency we’re able to provide to our clients in the business.
It was all about that opportunity to bring a real, true QA group here to EBM.
You've been in this for a long time. In your opinion, what are the top three QA challenges that companies industry-wide are facing today?
I touched on this a little bit in that presentation that I did for Testingmind. Number one is that many companies out there just want to automate everything. They think QA can just automate everything within the sprint. In a lot of cases, it just doesn’t make sense (or isn’t even possible depending on the maturity of their software development process).
Number two is an offshoot of that: the idea that many companies think they don't need manual testing anymore – that they can just automate it all. There are some things that may get done once a year and take an hour to manually test. That same test case may take a week or two to automate. Is that a good use of that company’s resources? Probably not. It would be much more efficient just to have that done manually once or twice a year.
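The arithmetic behind Mike's point is worth spelling out. Using illustrative numbers (ours, not his figures), here is a back-of-envelope break-even check for automating a rarely-run test:

```python
# Hypothetical break-even check: automating a test that runs rarely
# can take years to pay back the cost of building the automation.
manual_minutes_per_run = 60          # one hour to test by hand
runs_per_year = 1                    # executed roughly once a year
automation_build_minutes = 2400      # ~1 work week (5 days x 8 hours) to automate

manual_minutes_per_year = manual_minutes_per_run * runs_per_year
years_to_break_even = automation_build_minutes / manual_minutes_per_year
print(f"Break-even after {years_to_break_even:.0f} years")  # -> 40 years
```

With these assumed numbers, the automation only pays for itself after 40 years, before even counting the cost of maintaining the automated test as the application changes.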
So, these problems can kind of snowball for those companies, if they aren’t careful. People think they can automate everything and manual testers aren’t needed… well what’s the first thing that goes out the window, in most cases, when their timelines tighten up and deadlines start to loom? The automation. And here you are without any real manual testers! Sure, any good automator should be able to perform these manual tests, but that’s not the job they signed up for.
And that kind of brings me to the third challenge a lot of companies are facing: building their QA teams. As businesses look to identify resources, too often they’re looking for people that check all the boxes when it comes to their tech stacks. They often fail to realize that there is some transference of knowledge between different tech stacks. There's a lot that's applicable across the board. That resource identification problem is exacerbated further by the fact that QA is becoming much more specialized.
You should really be trying to identify people who have worked in applications that are at least similar, and will fit nicely in your workflow. You can teach technology. You can’t teach fit.
What are we doing to avoid these challenges?
Fortunately, our approach is a very pragmatic one. But certainly, we’ve looked to formalize things, and make QA as straightforward and efficient as possible. Since I've started here, I've been introducing a lot of process. Not process for process’s sake, but processes that make sense – processes that are the right fit for where we're at maturity-wise.
For example, there is a new defect process in the application that I've introduced since being here. I've also expanded the process for test cases, and formalized it within the Azure Visual Studio tools. So now we've got test cases and we have reportable metrics. I can very easily and visually show pass/fail, I can show when test cases were run, and I can show how many times we've run them.
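As a rough illustration of the kind of reportable metrics Mike mentions (the records and field names here are ours, not the actual schema of any tool), aggregating raw test-run results into pass/fail counts and a pass rate takes very little code:

```python
from collections import Counter

# Hypothetical test-run records, like those a test-management tool
# might export; case names and field names are illustrative only.
runs = [
    {"case": "login",   "outcome": "pass"},
    {"case": "export",  "outcome": "fail"},
    {"case": "login",   "outcome": "pass"},
    {"case": "billing", "outcome": "blocked"},
    {"case": "export",  "outcome": "pass"},
]

totals = Counter(r["outcome"] for r in runs)        # pass/fail/blocked counts
executions = Counter(r["case"] for r in runs)       # how often each case ran
pass_rate = 100 * totals["pass"] / len(runs)

print(f"pass/fail/blocked: {totals['pass']}/{totals['fail']}/{totals['blocked']}")
print(f"pass rate: {pass_rate:.0f}%")
print(f"runs per case: {dict(executions)}")
```

Once results are recorded as structured data rather than ad-hoc notes, the same records answer all three questions at once: what passed, when it ran, and how many times.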
So, giving greater visibility within the QA process is something I've been spending a lot of time focusing on, because I will always be an advocate for that, and I always say the most transparent department in any software organization should be their QA department.
Are you interested in learning more about QA and test automation? Check out Mike’s presentation from the Testingmind QA Summit below!
EBM Software is the developer behind Catalyst, the industry’s premiere business performance software solution. Are you trying to bridge the gap between collecting data and drawing actual insights from it? Do you want to save time preparing data, so you have more time for actual analysis? Let us help. Contact us today to schedule a discovery call and let’s take the first step toward turning the data you’ve collected into insights that drive profitability.