Episode #21 – Is It Responsible to Not Test?

Featured speakers:

Clayton Lengel-Zigich, Derek Neighbors, Jade Meskill

Derek Neighbors, Jade Meskill, Clayton Lengel-Zigich, and Chris Coneybeer have a conversation on when, if ever, it is responsible to not test.

  • Obie Blog Post
  • Crazy Bob Martin
  • Test everything all the time
  • To what degree
  • Not breaking things
  • Verifying value delivered
  • Market vs quality
  • What’s the right level of coverage
  • BDD as a technique
  • How much risk can you tolerate
  • Prototyping
  • Technical debt
  • What’s your testing comfort level
  • Communities have different testing tool competency levels
  • Irresponsible?

Transcript

Derek Neighbors:  Hello, and welcome to another episode of the Scrumcast. I’m Derek Neighbors.

Jade Meskill:  I’m Jade Meskill.

Clayton Lengel‑Zigich:  I’m Clayton Lengel‑Zigich.

Chris Coneybeer:  I’m Chris Coneybeer.

Derek:  Today, we want to talk about this: is it ever responsible to not test?

[all men gasp]

Derek:  I had to pick the guys up off the floor but…

[laughter]

Derek:  This has come up a number of times. I remember personally arguing about this with James Shore and a few other people at, I believe it was, Agile Roots, probably two years ago, maybe 2009. Then, recently, Obie brought this up in a blog post. As he’s doing his own start‑up, he talks about how maybe he shouldn’t be testing for his own start‑up in some way, shape, or form.

I just wanted to hear, given that we’re obviously a group that’s pretty adamant about testing and pretty focused on quality, what we think about that.

Clayton:  I read an interesting post the other day that was supposed to be an argument against TDD. It was one of those things where half of it was drawn in, and then the other half of it was, “To prove my point, look at how crazy Bob Martin is. When he talks about testing, he’s crazy. That means testing’s crazy.”

[laughter]

Clayton:  It was funny, but there were a lot of people that had commented on it that were like, “Oh, I’m so glad that someone finally came out and said this,” like it was some Oprah show or something where someone was revealing this great truth.

I thought that was interesting, but at the same time, no one was really getting into the “Well, hmm. Maybe it isn’t responsible to test everything all the time” side, counteracting Bob Martin or some of the people that are really into the test‑all‑the‑effing‑time kind of stuff. There’s something to be said for that.

Jade:  A lot of it is the degree to which you’re testing. Advocating a position where you don’t test at all is very dangerous and completely irresponsible. But do you need unit tests that cover every single edge case, as well as acceptance tests, as well as request tests? There are so many levels of testing that you can get into, especially with a web‑based application.

When you’re running a Lean Start‑up and you’re in move‑fast mode, totally experimenting and trying to figure things out, it can be quite a bit of overhead to do complete full‑stack, top‑to‑bottom, outside‑in testing.
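To make those levels concrete, here is a minimal sketch of the difference between a unit spec and a request spec, written with RSpec in a Rails app. The Cart and Item models, the route, and the page copy are all hypothetical, invented for illustration:

    # spec/models/cart_spec.rb
    require "rails_helper"

    # A unit spec: exercises one object in isolation.
    describe Cart do
      it "sums the prices of its items" do
        cart = Cart.new
        cart.add(Item.new(price: 5))
        cart.add(Item.new(price: 7))
        expect(cart.total).to eq(12)  # 5 + 7
      end
    end

    # spec/requests/cart_spec.rb
    # A request spec: drives the full Rails stack over HTTP.
    describe "the cart page", type: :request do
      it "shows a running total" do
        get "/cart"
        expect(response.body).to include("Total")
      end
    end

Each level up the stack buys more confidence per test but costs more to set up and maintain, which is exactly the overhead trade‑off at issue here.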

Chris:  Even if you’re doing your Lean Start‑up, there should still be some testing in there for acceptance. There should be some testing around the core criteria of the application I’m writing, not unit testing every single line of code. I understand the coverage being a lot lower.

But I still think that, no matter what, if you’re doing acceptance tests, even integration tests, depending on what you’re building and what you’re doing, you’re going to think about a lot more things and have more confidence in that code. Not confidence in it being perfect, but confidence that you’re not going too far away from what you’re trying to get to.

When you start writing everything with no tests, I just would have no faith that I couldn’t break something and not be aware of it. Then, if I’m a start‑up, that could make for a bad presentation to any end users or people I’m showing the product to.

Clayton:  I heard a good way of describing testing. A lot of people think of testing, especially if you read stuff on Twitter, as TDD. It’s all about, “Oh, TDD is so great because I realized that I broke something before I got it to production,” or whatever. So, testing becomes all about making sure you’re not breaking things.

The next level of that, the quote I heard, was, “You should be doing testing so that you can verify that you’re delivering the value that you say you are with your application.” That would be really important if you said, “For my Lean Start‑up, I’m going to test the value parts, and the rest of it… maybe I don’t test those things.” That could work.

Jade:  One of the arguments that comes into it is, when you’re funding somebody as an investor, the truth is you don’t give a shit about testing. You just don’t care. What you care about is, can you compete in the marketplace? At some point, testing or quality factors into that. But sometimes, when you’re getting to market or getting to discovery, quality is not the most important thing.

So, if I say, “Clayton, I’m going to give you a hundred thousand dollars and I need this list of 15 features. If you can’t get those 15 features done, regardless of what the quality of those 15 features is, this project’s done, we’re not going any further, and it’s over. If you deliver those 15 features to me, regardless of the quality, if you can demonstrate on some level that you can complete these 15 features, I’ll give you another million dollars to continue on with this product.”

That, in a Lean Start‑up mode, is where it starts to become a little difficult to say, what is that threshold? Is it OK to go four weeks without tests? Is it OK to go 10 weeks without tests? Is it OK to go 90 days without tests? Is it OK to only have 10 percent coverage or only have the value items covered?

I mean, there is a point, and you could say the same thing even outside of Lean Start‑ups. When you’re in large enterprises that have millions of lines of code with no tests, and you want to do new development, and it’s not reasonable to go back and write tests for every line of code that’s already written, at some point you have to make trade‑offs and say, “What’s the acceptable amount of testing that is responsible?”

That’s a word we use a lot, but we drop it real quick when we get into our wars, when it’s all testing all the time no matter what. We drop the whole “What’s responsible?” question. Maybe we could talk about that. What are some of the things that we see as responsible or, maybe better, what do we see as irresponsible?

Chris:  For me, I really embrace the core idea that BDD, behavior‑driven development, is less about test coverage and the percentage of testing and more a technique to help me discover the problem I’m trying to solve. So when I approach it from that angle, I’m not really interested in ensuring that every single line of code is covered.

I’m more concerned about, “Am I creating a quality design that is good enough to solve the problem that I’m trying to solve?” There’s a bare minimum level of test coverage that you can implement when you’re going down that path that is going to cover you in 90 percent of cases, especially as your software continues to grow and add new features. You have some level of confidence that you’re not completely destroying the application by changing a few things around.

So when I’m testing in Rails, I really like to do acceptance testing, more outside‑in testing. I’m coming in as a user, following the path, and making sure that things are working the way I anticipate, but I can change a lot of the implementation details behind the scenes without having to go back, refactor all my tests, and worry about this whole gigantic test suite.

There’s a bare minimum level of coverage that you can get where the testing is pragmatic and simple but good enough for the time being. It really comes down to risk. How much risk are you willing to take and how much risk can the product or project that you’re working on afford to take?

If you have a hundred thousand dollars for the start‑up and it’s make it or break it, you can take a lot of risk with your software, because the bigger risk is that you don’t deliver anything. If you spend all your money and engineering time doing testing, you’re not going to get to market anyway, and it was just a waste of time.
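As a rough illustration of that outside‑in style, a Capybara feature spec in a Rails app might look something like the sketch below. The signup flow, field labels, and page copy are hypothetical:

    # spec/features/signup_spec.rb
    require "rails_helper"

    # Walks the app the way a user would and asserts only on visible
    # behavior, so implementation details underneath can be swapped
    # around without rewriting the test.
    feature "Signing up" do
      scenario "a visitor creates an account" do
        visit "/signup"
        fill_in "Email", with: "user@example.com"
        fill_in "Password", with: "secret123"
        click_button "Sign up"
        expect(page).to have_content("Welcome")
      end
    end

Because nothing in the spec mentions controllers, models, or queries, refactoring behind the scenes leaves it green, which is the property being described here.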

Derek:  On your hundred thousand dollar example, if you treat it as a prototype, people readily acknowledge that you should build the prototype and throw it away. If you can be responsible enough to do that, you should also recognize that maybe doing full testing is irresponsible, because that’s something you’re going to throw away.

At the same time, I’m kind of torn, because I know that when you get to a certain point, there are benefits that you get from, say, TDD that are like architectural decisions. You get other benefits that are not really about what you’re specifically testing. Those are harder to get, and more people think that they understand and get those than actually do.

I could see a lot of people saying, “Well, you know, I do TDD because I have to test everything, because we’ve got all these great TDD benefits,” when maybe, they’re not really getting those and it’s a prototype that they should be throwing away anyway.

Chris:  How many people do you think have the discipline to actually throw away that prototype?

Jade:  None.

Derek:  None.

[laughter]

Jade:  To me, where the argument comes in is, what’s that time frame? I would argue, to me personally, it’s probably in the four to six week range, so you can lie to yourself that it’s kind of a prototype for four to six weeks. Beyond that sixth week, think of technical debt like financial debt: I might bootstrap a start‑up with credit cards, and that becomes irresponsible beyond a certain dollar figure.

Is that dollar figure $50,000? Is it $100,000? Is it $200,000? I don’t know; it’s going to be different for everybody. It’s kind of the same thing. If you’re doing an application without tests, you’re incurring a certain amount of technical debt, or you’re making it more difficult to pay down the technical debt that you do incur.

At some point, whatever that time frame is, it’s pretty damn short. I want to say less than six weeks, though it really depends on who you are. Beyond that, you’re just lying to yourself.

Chris:  How could you gauge that? How could you find that limit, that threshold, between test all the effing time and no testing at all? How do you, as a team, determine what that pain threshold is?

Jade:  For me, after playing with a bunch of test frameworks lately and really pushing through, I’ve found that most people feel that testing makes them slower. When you’re comfortable with your framework, that’s simply not true.

Testing often makes you faster, not slower. Even though you’re writing twice as much code, you’re thinking much more about the code you’re going to write, so you actually end up writing less code, probably in less time, that produces more functionality.

I would say that for a team that is used to testing, this isn’t even an argument, because they don’t see testing as slowing them down, even in a new project, if they’re testing properly. If they’re over‑testing, going for 100 percent coverage, then that’s a different story.

What I would say is, if I was doing a prototype and I wasn’t really comfortable with the test framework, then as I’m breaking something down into stories or tasks, I would give myself 10 or 15 minutes per task or story to say, “Can I feasibly write a test for this?”

If I get stuck, I immediately throw the test away. But I have the scenario, I’ve got the harness for testing, so if I come back later, presumably it’s easy for people to add specs or tests in there. Additionally, that allows other people on the team who are more comfortable with testing to test, instead of just saying, “Oh well, nobody is testing, so I’m not going to test.”

That’s a fine line, as well. That’s a reality. Not everybody is really good at testing and testing quickly and making responsible test decisions.
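One way to leave that harness behind is RSpec’s pending and skip machinery; a small sketch, with hypothetical story names:

    # spec/features/report_export_spec.rb
    require "rails_helper"

    describe "Exporting a monthly report" do
      # A bodyless example shows up as "pending" when the suite runs,
      # keeping the scenario visible until someone writes the test.
      it "produces a CSV of the month's orders"

      it "rejects a date range with no orders" do
        skip "timeboxed out after 15 minutes; needs an orders fixture"
      end
    end

The suite stays green, the scenarios stay on the record, and a teammate who is faster with the framework can fill in the bodies later.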

Chris:  There are a lot of languages and frameworks out there that haven’t yet made it easy for people to do that minimum level of testing.

Jade:  If you’re choosing those, you’re probably not very Lean Start‑up anyways.

Chris:  [laughs] Wow.

Clayton:  That’s a good point, though. Maybe it’s just because we live in this community, but a lot of the “You’re an idiot if you’re not testing everything” comes from the Ruby on Rails community, for instance, where it is really easy to test. If you’re using Rails…

Chris:  It’s ridiculously easy.

Clayton:  …it’s like, “How could you not be testing?” But if you’re some guy who says, “I have a great idea and all I know is C#,” it’s like, “Maybe it isn’t so easy to get 100 percent bootstrapped and do the testing.” Maybe that’s why you don’t see that from that community, and it’s easy to berate those people.

Jade:  What do you think, Chris? You’re familiar with the .net community.

Chris:  For the .net community, it’s been building up for the last couple of years. We’ve been learning a lot from other frameworks. Now, it is a lot easier to get up and running and do acceptance testing on there.

You have tools available and you have a lot of frameworks. You go out to GitHub and search for TDD and .net, and you’ll turn up a ton of frameworks and a lot of tools that are available to you. You can pull Selenium down and start doing acceptance testing.

For me, that’d be the first thing I would want to do. Speaking to this idea of a Lean Start‑up with 15 things: as much as I like to do integration testing for contracts against other systems and things like that, I would take those 15 things, write them out, put them into some steps, and run them through acceptance testing.

That way, I have a contract as to what my user wants. Yes, I’m not getting all the unit testing. I’m not getting everything underneath, but I’m using acceptance testing as a type of contract to make sure that I’m going to get back to what it is that I’m supposed to be delivering. I mean, it’s completely possible in .net, no problem.

There are a lot of tools out there and a lot of tutorials and slides available to get you through that. Even if you take a look at the Microsoft stack alone, look at Team Foundation Server. It now has test runners and all of that; you can get continuous integration right there in front of you. There are open source tools available for that, too.
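For a taste of what that first acceptance test can look like, here is a bare‑bones Selenium run. It’s shown with the Ruby selenium-webdriver bindings to stay consistent with the earlier sketches, though the same idea carries over to the .net bindings; the URL, link text, and page copy are made up:

    # Drives a real browser against a locally running app and treats
    # one visible behavior as an executable contract.
    require "selenium-webdriver"

    driver = Selenium::WebDriver.for :firefox
    begin
      driver.get "http://localhost:3000"
      driver.find_element(link_text: "Pricing").click
      raise "acceptance check failed" unless driver.page_source.include?("Plans")
    ensure
      driver.quit  # always close the browser, even on failure
    end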

Jade:  Cool.

Clayton:  What’s the verdict? Is it responsible to not test in your Lean Start‑up?

Jade:  I’d say, irresponsible.

Derek:  It’s irresponsible, if you’re not at least having a discussion about why you’re not testing.

Chris:  I’m going to equivocate on this one and say it depends.

Clayton:  I fall in the category of feeling like I’m not super slow when I test. If I were doing a Lean Start‑up, I’d say it was irresponsible if I didn’t.

Jade:  I will say, I’m not as fast as Clayton at testing. But at this point, I would say it would be irresponsible if I did not do some testing for my start‑up.

Clayton:  With that, we’ll see you next time. Thanks for joining us.