Ardent Development Podcast

Show Notes

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 19 years ago. He believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. He has worked many roles, but has always returned to testing. Mike is currently the Director of Quality for PQA Testing.

In this episode, Derek and Ron chat with Mike Hrycyk about his experience using a regression testing team to augment feature teams, handling the regression testing cycle while the feature teams (developers and testers) do new development. He makes a compelling case, and his story of success is well worth the listen.

Where to find Mike Hrycyk

@qaisdoes on Twitter

On the web at qaisdoes.com

Enjoy the show and be sure to follow Ardent Development on Twitter.

Transcript

Ron: We are joined today by Mike Hrycyk, who has been trapped in the world of quality since he first did user acceptance testing 19 years ago. He has survived all the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the Director of Quality for PQA Testing but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks, and railroads. So welcome to the show, Mike.

Mike: Thank you. Glad to be here.

Derek: It’s good to have you Mike.

Ron: Glad to have you. Now, you just finished up at a conference. We thought we'd have you on to talk a little bit about the first talk that you gave there: Augmenting the Agile Team, a Testing Success Story. Could you get us started on that topic, Mike?

Mike: Well, sure, for sure. So Agile for me is a bit of a passion. I really believe in the power of Agile. But one of the things that I've learned in working with people who do Agile is that when people self-teach, or when they have bad coaches, they seem to believe that there's a right way to do Agile, that there's one way to do Agile, and they go out and find a how-to guide that teaches them how to implement it. The problem with that is that every situation is incredibly different, and Agile isn't really set up to be a how-to guide. It has a manifesto, it's a set of concepts, and everyone who adopts it has to figure out how to do it right for themselves. So I had a project that we did with one of our clients, we're a testing-as-a-service company, where we did an assessment and helped them figure out what they needed to be successful in some of the work they were doing. And one of the things we were looking at with them was: is what they're doing, doing Agile wrong, or doing it right? I have this personal mission to make sure that no one believes "you're doing Agile wrong" is a thing you can say. I'm not sure if you guys are familiar with the concept, but when I hear that, it just makes me angry, because Agile is an iterative approach to everything, and there's no one way it needs to be done. You're doing it right if it's working for you. And so this talk that I put together is sort of a case study from a project where we went way off the standard realm of Agile and did it our own way. I wanted to talk about how we did it, what the problems were, and what the successes were, to help people see that doing Agile your own way is probably the best path to success. Does that make sense?

Ron: Absolutely. It's an interesting topic, because as you go from company to company and do different assignments, you see Agile implemented in different ways. And I think if you talked to the folks involved in those projects, they would each give you a slightly different slant, which I think exacerbates this question of "are we doing it right?". Because in IT, years ago, there was a right way and a wrong way, if you will. But this seems somewhat fluid. I think people are having a hard time knowing, you know, are we doing it well, or are we doing it right?

Mike: Well, for someone who grew up in Waterfall, who spent years having lists of things that you need to do to do things properly, Agile is so different from that. And I think that's one of the reasons that some people, and I hesitate to call people old-timers, but if that's your mindset, maybe that's the right way to say it, get stuck in that mindset. Agile has too much change, it's just too fluid, and it's difficult to go into that new world where you might have to shift every two weeks, you might have to shift the way you're doing things, because you're supposed to be iterating to make things better.

Mike: So the client, I won't name names, but the client is a Canadian company that produces a retail management solution, an RMS, for mobile phone kiosks. So when you go into a mobile phone place in a mall or wherever, probably not one of the ones actually branded Fido or Bell or whatever, probably one of the other ones, although they sell to the carriers as well, and you buy a phone, they not only have to track the purchase of the phone, they also have to set up provisioning for the phone, so that when you walk out of that kiosk you have a phone that is connected to the carrier and does what it needs to do. So they produce software that takes care of that, the selling and the provisioning. And they've also extended it, trying to make it an option that takes care of all of the needs of that client. So it also takes care of employment stuff, it takes care of inventory, it takes care of reporting, it tries to take care of all things. And what that ends up being is a very, very complex system. Anyone who's worked in an ERP knows it's like an octopus, only not with eight arms, with a million arms that thread through all different things. So there are a lot of integration points. And then they were having some problems with one of their end clients, which is my term for when one of our clients is working with someone else down the line; we call that the end client, sort of like end user. And that end client was one of the major carriers in the U.S., and they did 40 percent of the business for my client. So they had a lot of clout in conversations about features and things. And that end client had 15 other vendors delivering solutions that all built together into an integrated system that made them successful.

Ron: So it’s an octopus of octopuses.

Mike: Yeah, yeah, every arm had other octopuses living off it, this kind of mutated thing. What that meant, though, was that the SIT testing, the system integration testing environment, was very necessary and complex, because you couldn't test on your own box. You can't test for what's going to happen when you have 15 other vendors delivering pieces, not necessarily code, but messaging and communication and interfaces and all of that, into one integrated system that's supposed to work. So we had this healthy integration system, and then, to make it even more complicated, the RMS system the client was producing had 13 different feature teams feeding features into it, and one consolidated product that went out to all customers. But the end client that owned 40 percent of the business was also getting specific features, not necessarily things that worked only for that end client, but things that only the end client wanted, and they had enough sway to get them. So in one single rollout that went everywhere, there were feature flags for some things that only worked for that end client, and then there were features that went both ways: features that other feature groups were delivering that the end client had to use, and things that the end client was asking for that would also go out to all the other clients. So if I haven't illustrated something that's really complex yet, maybe I can add more?
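
To make the feature-flag setup concrete, here is a minimal sketch in Python of per-client flags inside one consolidated rollout. All names are invented for illustration; the actual RMS product is not public.

    # Per-client feature flags in a single consolidated release (hypothetical names).
    CARRIER_ONLY_FLAGS = {"carrier_provisioning_flow", "carrier_reporting_feed"}

    def is_enabled(feature: str, client_id: str, carrier_id: str = "us_carrier") -> bool:
        """Shared features ship to everyone in the same rollout; carrier-only
        features are gated behind flags for the one large end client."""
        if feature in CARRIER_ONLY_FLAGS:
            return client_id == carrier_id
        return True

    # The same build serves every client; only the flag state differs.
    assert is_enabled("inventory_reports", "small_retailer")
    assert not is_enabled("carrier_reporting_feed", "small_retailer")
    assert is_enabled("carrier_reporting_feed", "us_carrier")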

Ron: Oh, there’s more?

Mike: Well, the other thing that complicated it for us was the environments, which were very, let's call it rich, to be polite. There were 32 different environments between dev, staging, and test, and more could be spun up on a whim if necessary. And then there was their monthly release cadence: the end client took their versions one month after everyone else. So everyone else would get the release, say, in January, then it would have a month to solidify and prove itself okay, and then the end client would get their version a month later. So now we're talking version control problems: which version are we fixing this in? Which pieces that we fix here have to go there? And the code that's specific to the carrier, does it have to go into the version that's going to production? Maybe that's enough complexity?
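
A small sketch of the version bookkeeping this lag creates, with invented version numbers: every fix has to be mapped onto both the general line and the end client's lagged line, or it regresses a month later.

    from dataclasses import dataclass, field

    @dataclass
    class ReleaseLine:
        version: str
        audience: str                     # "general" or "end_client"
        fixes: list = field(default_factory=list)

    general = ReleaseLine("2018.01", "general")      # ships in January
    carrier = ReleaseLine("2018.01", "end_client")   # same code, ships a month later

    def apply_fix(fix_id: str, carrier_specific: bool) -> None:
        # Carrier-specific code rides only the lagged end-client line;
        # everything else must land in both lines.
        carrier.fixes.append(fix_id)
        if not carrier_specific:
            general.fixes.append(fix_id)

    apply_fix("BUG-101", carrier_specific=False)  # goes everywhere
    apply_fix("BUG-102", carrier_specific=True)   # end client only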

Ron: You see this a lot, though. I mean, that is very complex, especially when you look at the additional teams behind the end client. But today we're seeing this so much with these environments that an octopus is a good way to describe it, and it doesn't even do it justice, because you're multiplying out against the different environments and then managing what piece of code is in what environment, versus the database version, versus the data that's in it, and timing all of that. I agree with you that that in itself is complicated enough, let alone the software you're building.

Mike: It was pretty positive on the part of our client that they had enough gumption to make statements like, no, when we write features, everyone gets the features. They had to compromise a little with this client who produced 40 percent of their business and say, "Okay, okay, so you need something special? We'll add that on." But that was an additional feature; any feature that went to everyone went to them as well. So I mean, that was pretty positive. I've seen cases where that hasn't been true in the past, and that makes it horrific.

Ron: Right, yeah, that's a good first step, but…

Derek: That makes for disaster down the road. It's customized versions of the software.

Mike: And so when you take all of that complexity we've talked about and then you say, "Hey, so how well do sprints and Agile work?", that's where we start running into the problem. My focus is always on, "Hey, how does testing work? Does it work okay?" And the big problem they rolled into was that the amount of complexity we're talking about, both within the product itself and in the different integration touch points, meant there was a pretty big regression suite that needed to be gone through. There was automation, and automation could only do so much, but there was trouble with the end client piece just in terms of time, and I'll get to that in a second. So what would happen is, there was one feature team dedicated to the end client, making sure they had the features they needed and that those integrated properly with everything else. They would go through their sprints, they had QA on their team, and it was a pretty standard Agile team, you know, five or six devs, a product owner, and a couple of QAs. And things worked pretty well: they produced a feature, QA tested the feature and made sure it was okay, until you got to the point of saying, "hey, let's release". Now remember, they were releasing once a month. And when they got to that point, they'd say, "Okay, so now we have to run the regression suite, and we also have these deadlines, because we actually have a target date for each of these releases". And what they would get is seven or eight hundred test cases in the regression suite, and two people set up to do that. And of course they have to break sprint with the feature sprint in order to do this, because they have to dedicate themselves wholly to that. And there just aren't enough man-hours in two people to make that happen and meet the dates. So what they would do is go to the other 12 feature teams and say, "We're going to die, and this is an important client. I need help". And so they would get QAs from other teams to help them, and those teams would then break their sprints. So we're breaking sprints all over the place. You're not getting continuity of knowledge, because you aren't going to be able to go to the same Agile team every time and say, I need your QA; you're going to get whichever team was in a place where maybe they could spare someone without causing disaster. So you don't even get continuity of knowledge in the way the end-client-specific stuff works. So when they came to us and talked to us, they were in a situation where sprints were being broken all over the place, and the end result was that they were still shipping, but they were getting more bugs into SIT than anyone was happy with, and more bugs out of SIT into production than anyone was happy with. And because of the complexity of all the things we're talking about, when they found issues, finding the right person to work on those issues, and communicating how they'd be worked on and when they'd be delivered, was very slapdash, and it was making the end client quite grumpy. And you never want the client that's producing 40 percent of your revenue being very grumpy.

Derek: Indeed.

Mike: So that's where we came in. They just knew that something about quality was broken, and they pulled us in and said, "hey, can you do an assessment?". So we did an assessment and made a bunch of different suggestions and recommendations for how they could do things better. And the big one that they jumped on, and that we figured out together, was: let's talk about this augmenting testing team idea. So we looked at the problem and said, well, really the problem here is that you don't have enough people doing testing, and we could just give you four, five, or six testers. But what would you do with them? If we inject six testers into your Agile team, that's going to break things in a different way; it's not going to work. So should we spin up another Agile team, which, you know, is the next step you'd get to? But then what are we going to do with the normal ratio of a bunch of devs and a product owner on a team, what would they do all the time? And we asked, do we really need them? Do we really have to be tied to this concept that the team needs to have devs on it? And we said nope, which is easy for testers to say, maybe not so easy for you guys to agree to. And so we spun up a parallel team. That's why we call it an augmenting team, and that team was responsible for regression testing and integration testing for code that was going to go to the end carrier. It was a mixed team: there was a team lead, and the team lead was there for communication, and a senior, and that senior became an expert in all the complex systems. Then we added some juniors so they could take care of the big giant set of regression tests, and we also added an automator so we could start claiming back some of the time that was being lost. The client has a good, solid automation practice; what they didn't have, with how often they were breaking the sprint for this end client, was time to ever get the automation done.
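
As a rough illustration of how an automator claims time back, here is a sketch in pytest style: automated cases get tagged so they drop out of the manual regression pass. This is illustrative only, not the client's actual framework, and complete_sale is a stand-in for the system under test.

    import pytest

    def complete_sale(sale: dict) -> list:
        # Stand-in for the real RMS, which is not public.
        return ["record_purchase", "provision"]

    @pytest.mark.regression  # register the marker in pytest.ini to silence warnings
    def test_sale_queues_provisioning():
        # One of the hundreds of regression cases: completing a sale must
        # queue a carrier provisioning request for the purchased phone.
        actions = complete_sale({"sku": "PHONE-123", "carrier": "us_carrier"})
        assert "provision" in actions

    # Run only the automated regression slice with: pytest -m regression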

Ron: So basically, you took that Agile team that was in place, where the testers were breaking off the sprint because they were getting overloaded when it came to releases, and you said those first testers stay with the sprint team, and we're going to form this other team, kind of trailing, in order to do the testing for the releases and the integration. Is that right?

Mike: Yeah, absolutely. So we had full-time work because it was a monthly cadence. We worked in parallel: we were taking care of regression and taking care of SIT at the same time, and parallel to that, the feature team was working on the features for the next release. The feature team and the QA on that feature team owned those features; they would functionally test those features and make sure they were okay, and when the features were ready to come out into staging, they would hand it all over to us and demo the features, and we would take over responsibility for regression testing those features as well as the regular regression suite, move it into SIT, and then take care of the integration testing. So we took a lot of that load off of everyone, and we also took over the entire communications piece. One of the other challenges the client had was, if you find an issue in SIT, that issue could have originated with that one feature team or any one of the other 12 feature teams. Coordinating that, doing triage, getting someone to go in and debug it and find a solution and deliver a solution, and coupling all of that with the communication and delivery SLAs you have with your end client, things were getting lost and misplaced, and you just never knew the status of things. So we dedicated one person on our team who took ownership of communication and tracking all of those things, and it made the end client very happy.

Ron: So did your approach deal with any changes to the environment setup as well, or the number of environments? Because that sounds like a heavy load, just the number of environments you described, coming in.

Mike: So they hadn't really done anything to fix anything in that area prior to our coming in. What we took on was some environment coordination tasks, so we knew which things had to be in which environments for us. There were other environments where we said, we don't care about those; if you need those for whatever reason, say payments needs their own environment to do some investigation into issues or whatever, they could just own that. And we took on the responsibility of making sure our needs were met across the different versions we were working on, because of course there were also hotfixes that would move through these environments, and so on. So we just took ownership of knowing where everything needed to be, to make sure the end client's expectations were met, that they were going to get a version they could work with when they needed to work with it. And that was so positive, just our taking ownership of that, and they saw the benefits of it, so they went out and hired a full-time person, I think the title was QA Coordinator, but really the job was to coordinate, to make sure the right things were in the right places.
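
The bookkeeping itself can be as simple as a table of which build each tracked environment holds, checked before each pass. A minimal sketch, with environment names and version strings invented:

    TRACKED_ENVS = {
        "regression": "2018.02-rc1",
        "sit":        "2018.01",      # the end client's integration pass
        "staging":    "2018.02-rc1",
    }

    def ready_for_sit(expected_version: str) -> bool:
        # Before an SIT pass, confirm the environment actually holds the
        # version the end client is scheduled to receive.
        return TRACKED_ENVS["sit"] == expected_version

    assert ready_for_sit("2018.01")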

Derek: It's something that you see when you're working in a small environment: if you have one Agile team, regardless of the flavour of Agile, there's definitely a prevailing notion that, well, hey, they can just take care of everything. But as the number of teams grows, there's coordination, connective tissue, things like you mentioned, Mike: the handoff from this team during regression when the issue really belongs to another team. There's so much that has to get taken care of in order for the organization to be efficient. I'm super fascinated by this approach. One of the things, and I don't think Ron's ever heard me say this, but one of the things that I have said to other people in the past is that as you build features, you are effectively accumulating extra work for yourself in the future. You build a feature and you test it now, but you want to always make sure that feature keeps working, especially as you glue new things onto your system. Eventually, if you keep building features, you will get to the point where it takes you longer to regression test than you actually have time in your release cycle, and so you get to this point where all you ever do is regression testing, and you really have to do one of two things: you have to augment your regression in some way, which is what you're talking about, or you have to automate it, which is basically another form of augmenting your team. Otherwise, you just eventually have to stop regression testing things.

Mike: I think when you start talking about those kinds of problems, you need to implement an integrated automation strategy, which includes as full a coverage of unit testing as you can get, and understanding, coming out of each sprint, what you've automated and what you still need to automate, so that you don't have to do these complicated regression passes. I don't think that's the full solution for the case study I'm talking about, because you can't control the 15 other vendors; you're always going to do the SIT testing because of them. So for the situation we had, maybe we could have invested a whole bunch of money, done a whole bunch of automation, and then said, okay, now we just have to maintain it, but that wouldn't have been enough, because there's still that integration testing, having the other people involved at that point. I mean, you can't mock them all out and expect that you're going to get the right results in the end, because they're not as clean as a mock is, right? So in this situation they're always going to do some level of SIT testing. But in other situations, without the 15 other vendors, if you have a good, solid automation strategy where you're thinking about what you need to automate at the same time as you're thinking about what you have to code, then I think you can go a long way towards saving yourself from having to have a parallel augmentation team.
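
Mike's point about mocks is easy to show: a mock answers exactly as instructed, which is precisely why it cannot stand in for fifteen real vendors. A sketch with hypothetical names, using Python's standard unittest.mock:

    from unittest.mock import Mock

    def provision_phone(carrier_api, sku: str) -> str:
        response = carrier_api.provision(sku)
        return response["status"]

    # Unit level: fast, deterministic, perfectly to spec...
    carrier = Mock()
    carrier.provision.return_value = {"status": "active"}
    assert provision_phone(carrier, "PHONE-123") == "active"

    # ...but a real vendor may send extra fields, transient errors, or
    # malformed payloads that only surface in the integrated SIT environment.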

Derek: And there's no question that testing in the presence of integrations with third-party systems that are out of your control makes everything so much more complicated. It's so true; we can build really interesting software that way, but man, it's hard. You make a really good point there.

Ron: It's really tricky, too, when you look at the spectrum of maturity of the different vendors, because often you go with a vendor that has the right type of functionality base that you want for your solution, and then you have to dig to find out where they are on the spectrum and how to manage that differently, because there might be a very mature method with vendor one and a very immature method with vendor two.

Mike: Yeah, no, that's true. And sometimes you just have to take it on faith, faith that that vendor is going to be as hard-assed with everyone else as they are with you about making sure you're delivering quality. So as long as you trust them to deliver interfaces that match the specs, then you code to that. And then, because I'm a tester, you say, well, then you test for that, and, you know, nine times out of ten things should work out okay. And then you'll also have some bugs, and you have to build in the buffer to test for that and make sure things get fixed. But at no point do I ever think you can just say, hey, we'll trust everyone and everything will work, because simple miscommunication can make that untrue.

Ron: Yeah. Speaking of miscommunication, I was looking through your blog, and I know you do a little bit of writing about behaviour-driven development. I'm wondering if you could touch on that for a minute.

Mike: For sure. So one of the things that we've done here at PQA is, we have one guy we hired in who had done a bunch of BDD, that's behaviour-driven development. And he said, this should be something that we advocate for our customers where it's right, and we said, okay, teach us. So he taught us about it, and we've implemented it with a few customers where there was a need. So BDD is an offshoot of TDD, test-driven development. And the way it changes the focus is this concept: a lot of the issues you get post-development, when you deliver the product, come from delivering a product that the client never actually asked for. Somewhere between the client saying "I want a feature that does A, B, and C" and delivery, someone interpreted it, because we're humans, and every single thing we say has room for miscommunication and interpretation. It goes through a product manager, a BA, and a dev, and all of us have different brains, and our brains work in different ways and have different kinds of focus, and it's totally understandable that all of that happens, and you end up with a tester saying, "I think this is what they asked for?" And I mean, that's one of the specialties of the tester mindset, and one of the reasons we exist: we specialize in going back and trying to reinterpret what was originally said to make sure that what was built really is that. And I don't think any blame, or only moderate blame, goes back to the developers here, because it's human nature to have these interpretations; it's human nature to write one thing that can be read in three different ways, and it's a pretty specific skill set to write things that aren't ambiguous in that way. Maybe back in Waterfall days, when we had specs that were thousands of pages long, we felt we got those requirements down to a level where no mistakes could be made, but we know that's not true either. And when you move to Agile, which is supposed to be documentation-light, and you're writing things in plain English, it raises the problem even further. So, I forget the guy's name, but this guy came up with the idea: what if we shift all this stuff to the left? And we come up with this idea of having discussions about what we're going to build before we build it, so we all get on the same page. So BDD comes with this concept of the three amigos meeting, and the three amigos meeting happens right after you start the sprint. You're going to work on feature X, so you take that feature tag for the work item and you say, okay, let's have the three amigos meeting. So the product owner, that's someone who is advocating for the client and most likely to be able to parrot what the client says, and in some cases you can actually have the client present for that, and the developer and the tester sit down and have a meeting to discuss the feature. The meeting can be 15 minutes long, it can be an hour and a half long; it really depends on how complex the thing you're working on is. And you talk about the solution, you talk about what it is the developer is actually going to do and whether it's actually what the client wants. And you walk out of that meeting with all three people on the same page. And this saves time in test, because when the tester gets the feature, they really understand what the acceptance criteria are.
And the developer has the same understanding, so theoretically they're going to produce the product that is being expected. It really saves on having produced the wrong thing.
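
A minimal sketch of the kind of artifact a three amigos meeting can pin down, written as an executable given/when/then check. The feature and numbers are invented; teams doing BDD often use Gherkin tooling such as Cucumber, behave, or pytest-bdd instead.

    def apply_discount(price: float, loyalty_years: int) -> float:
        # Stubbed system under test: 10% off after two or more loyalty years.
        return round(price * 0.9, 2) if loyalty_years >= 2 else price

    def test_loyal_customer_gets_discount():
        # Given a customer with three years of loyalty
        loyalty_years = 3
        # When they buy a $100.00 accessory
        total = apply_discount(100.00, loyalty_years)
        # Then they pay $90.00
        assert total == 90.00

    test_loyal_customer_gets_discount()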

Ron: You spend time translating in the beginning so that it’s not an oops at the end.

Mike: Yeah, you spend time in that conversation. So it's, I don't know, like a pot of translation: you're all in there hashing it out, so in the end you don't have to do any translation yourself, because you're all speaking the same language.

Derek: This is an audio podcast, so you can't see me nodding my head, but it's so true. The three amigos, that rule of three, it makes such a huge difference.

Mike: And it's not right for every client. But we've created this thing that we call testing-focused BDD, and it's a little framework that helps our people who are going in and talking to a client, looking at the problems they're having, and deciding whether BDD is right for them, because it's not right for every client. But as soon as you have a client who says, yeah, our biggest problem is we produce really good things and then we have to send them back for rework, that's a thing, that's a flag where we say, oh, is this right for BDD? Let's see if it's something they could do and find useful.

Derek: Is that testing-focused BDD framework something that you share externally, or is it something that is proprietary within your company?

Mike: It's something that we're thinking about taking to conferences this coming year. So it's not something that we've written up externally yet, but it's something that we're going to start doing.

Derek: Okay awesome. We’ll keep our eyes out for that, then.

Ron: Well, this is interesting, Mike. I think we could hang out all afternoon and chat. If people want to, you know, get in touch with you, where are you on the web, and what's the contact information where they can find you?

Mike: It's Mike Hrycyk, and I'm going to spell the last name for you. It's H-R-Y-C-Y-K.

Derek: Just like it’s pronounced.

Mike: Yeah, just like it's pronounced. Absolutely, no vowels. So, I blog rather intermittently at www.qaisdoes.com. That came out of my brain at one point; I said "QA is as QA does", and then I went out and found a URL that matched. I tweet at @qaisdoes on Twitter. My professional career is at PQA Testing; as I said, we're a testing-as-a-service company, and we consider ourselves testing experts who will help any company that has testing needs figure out how to solve their problem. You can go to pqatesting.com for that. If you're all friends of Ron and Derek and therefore local to Fredericton, that's where our home office is, but we have six offices across Canada, and I'm in our Vancouver office.

Ron: Well, that's great, Mike. I appreciate you taking the time this afternoon with us. Really interesting topic, so thanks so much.

Mike: No problem, I had a good time. I welcome you to invite me back someday.

What is Ardent Development Podcast?

Derek Hatchard and Ron Smith talk with practitioners and thought leaders in the software development industry in search of inspiration and insights that apply across disciplines including programming, testing, product management, project management, people management, user experience, and security.