Episode #81 – Spike Stories, Estimating Before Planning and Blocked Stories

Featured speakers:

Clayton Lengel-Zigich, Jade Meskill, Roy van de Water, Drew LeSueur

Clayton, Jade, Roy and Drew discuss:

  • Spike and research stories
  • Estimating stories during planning
  • Blocked stories in a sprint
  • Failing sprints

Transcript

Clayton Lengel‑Zigich:  Welcome to another episode of the Agile Weekly Podcast. I’m Clayton Lengel‑Zigich.

Roy van de Water:  I’m Roy van de Water.

Drew LeSueur:  I’m Drew LeSueur.

Jade Meskill:  I’m Jade Meskill.

Clayton:  Today, we’ve got a smattering of topics, a potpourri if you will.

Jade:  [laughs]

Clayton:  We don’t have a guest. If you’ve been listening to the podcast for a while, you might have noticed that for the last 38 episodes we’ve had guests, but not this one. First topic: “Are spike/research stories a smell?”

Drew:  Jade, you hate spikes.

Jade:  I do. Yes, I think so. I’m not against them entirely, but I’ve seen people take it too far. Basically, having multiple iterations of a spike, which, to me, is a project.

Roy:  Yeah. That’s just a spike taken to excess. You were telling me earlier about a spike taking six weeks to do. It seems to me that the spike should take maybe one or two hours. But I could totally see the validity of, if you’re going to put up some task and you don’t know if it’s actually possible, you can’t commit to getting it done within your iteration.

So, you set aside some time to go and do the spike. If you have a four-week iteration, that poses a problem, because that means you’re doing your spike now, and your actual features aren’t getting done until eight weeks from now. A spike might be something that you can’t do if you are doing one-month iterations.

Jade:  I prefer, instead of committing to an unknown, creating a research task or something like that, that we can give a time box or an estimate to, and then use that information to turn that spike into something that is estimable.

Roy:  That’s what I was thinking of, when I was thinking of the spike. I guess the spike is supposed to be a prototype that you throw out at the end, right?

Jade:  Yeah.

Roy:  I was thinking more along the lines of a research task. But I don’t like the idea of doing a pure research task either. If you’re going to spend two hours researching it, why not get started on building it and improve it as much as possible?

Jade:  Yeah. When I say research, that’s what I’m thinking. Let’s prove it out. Let’s make sure it happens. Too many times, I’ve seen the spike become the actual product.

Roy:  Right, and then you have a whole bunch of people working on it who have absolutely no concern for good quality, because we’re throwing this away at the end, anyway. By that time, you’re too attached to it. You can’t invest the time to really build it for real, and it’s already on its way to market.

Drew:  There’s also a danger in, “We’re just going to see if we can do this.” Even doing it in a sprint, or especially if it goes beyond more than one sprint, sometimes what you prototyped and what you did can become the actual product. I know that that was a pretty rigid example, but there’s danger in thinking, “Oh, this isn’t real yet. Oh, this isn’t real yet, this is just a prototype,” and then carrying that on.

Roy:  Usually, in any spike, there’s really one crucial component where you’re wondering, “Is this even possible?”

It’s not like the whole spike is…and usually that one critical component -- and obviously not in all cases -- but I mean you can probably knock it out in under a day and just prove that that part is possible, and that’s all the information you need to estimate the rest of the story. Now I can commit to it.

Clayton:  Let’s say that you are doing sprint planning and a story comes up that you hadn’t really looked at before. Maybe you glanced at it, but you don’t know what it entails. You have never done it before, no one on the team has ever done anything like it before, and you have no idea if it works. What do you do?

Roy:  OK, there’s a huge difference between “It’s never been done before,” and “I have never done this before and I am not sure if it’s possible.”

I have never driven to New York City but I am positive I can do it.

Clayton:  “There’s some piece of technology I have never worked with.” “There’s a third-party API that I have never used before and I don’t feel comfortable committing to the story.”

Roy:  Within some limits…I guess it’s up to a comfort level, but it’s still reasonable to commit to the stuff even if it’s unknown.

Clayton:  I can’t commit to it. It’s too much for…

Jade:  It’s got freaking laser beams.

Roy:  It’s got freaking laser beams. Then I could totally see pulling in a research story or a research task or whatever, spending an hour researching it, and building up a prototype of what you actually think it’s going to be, with the thought in mind that if this works, this is going into production.

I can totally see that, and then at worst you’re wasting like an hour or two.

Clayton:  What if it takes me longer than an hour to answer my question?

Roy:  Then maybe you should talk to your team and see if you can come up with an alternative solution or something that’s simpler. I don’t know.

Jade:  Maybe you’re asking the wrong question.

Roy:  Who is asking the wrong question, me or…?

Jade:  If it takes longer than that to answer your question, maybe you could break that down into some smaller pieces. Maybe you have multiple questions that need to be answered.

Roy:  Multiple research tasks, I could see that, especially if you have the type of team that has gigantic stories, where they take multiple days to finish each one.

Jade:  You mean that prevents you from getting too far down this path and becoming too attached to this thing that was supposed to be a prototype or something that you are going to throw away at the end?

Roy:  Right, and the other thing that I would make sure of, too, is that any spike or prototype -- I mean, that would be especially the stuff that I pair on. Because I could totally see that very quickly becoming a personal project where, because I am the one who saw the spike through, next week I might as well be the one that works on it, and I certainly know the most about it.

Jade:  …and the laser beam expert.

Roy:  Right, exactly. I am the laser beam expert. You guys are afraid as hell of me and can’t fire me, because if the laser beam expert gets hit by a bus, you guys are screwed.

Clayton:  When would you just take the approach of saying, “Forget the spike or the research thing, let’s just do the story, and as part of doing the story, I will do the two hours of research and we’ll just complete it there. We don’t have to wait a whole other iteration”?

Roy:  If you’re really that concerned about waiting on a whole other iteration, then it sounds like your iterations are too long.

Jade:  If you’re willing to take it in to your sprint, then you have, implicitly, given it some estimate.

Roy:  Yeah, but what if part…

Jade:  It can be completed inside of your sprint.

Roy:  Yeah, but I could see a product owner brow‑beating the team into taking the story on, “Hey, we need to get this done by Tuesday,” and you’re like, “Well, I don’t even know if I can get it done this sprint.”

“But we need it by Tuesday.”

Jade:  By bringing it into your sprint, you’ve implicitly agreed that it can be done by Tuesday.

Roy:  That’s a good point. The team members should feel that they are empowered to refuse to bring that stuff in, unless they honestly believe they can get it done.

Jade:  You either truly cannot estimate it at all, or it’s just hard and you don’t want to estimate it. I’ve run into very few situations where there is no possible way to estimate the story or whatever it is that we’re faced with.

It happens, but very few. I might give it a very large estimate, because I’m highly uncertain.

Roy:  We’ve definitely seen cases in which we think, “Hey, there might be a really easy way to do this. We’re trying to develop some feature, and there’s a library that might do 90 percent of it for us. We’re afraid to commit to it because it could take two days or it could take one hour, depending on the availability of that library or not.”

I could see that as being a really good case for, “We’re just going to commit to it, because we know, at most, it’s going to take two days. It’s probably only going to take an hour, because a library exists, or it probably exists, or whatever.”

But, at least, you can commit to it. Now, do you leave that extra…do you commit to the maximum amount of time it would take, so you could potentially be leaving almost two days’ worth of work on the table that you’re not committing to?

Clayton:  If you change the scenario up a bit, and say that you are doing a planning poker game, and you have this stack of stories that are loosely defined. You’re not talking about the details of the stories down to the acceptance criteria or anything like that, but some stories come up and they sound really scary. How often do you just spike them versus maybe giving them a larger estimate?

What do you do?

Jade:  I’ve been faced with that. I’ve worked with a team that had to deal with that. What we ended up doing was breaking that unknown down into some still large, but smaller pieces.

We have this huge thing that we just can’t estimate. We have no idea how to do it. Well, could we talk about things that we could estimate and could understand and simplify the equation?

We ended up with some pretty large loose estimates on the things that were pretty big unknowns, but it wasn’t that we could not estimate it at all.

Clayton:  How do you know when you are spending too much time estimating and breaking things down just for the sake of getting that estimate versus just trying to take the work in or…?

Roy:  Are you suggesting taking the work in without estimating at all?

Clayton:  I’m saying, is it worth it if you have some story where everyone on the team says -- or maybe there’s one really outspoken person -- they want to spike this because they’ve never -- I’ll go back to the API example -- they’ve never worked with a third-party API, so there is no way they can size it, because it could be the most complex thing in the world, so they want to spike it.

Is it worth the time to go through and break the story down so that it can be sized better, or is it better to have a whole separate spike story to research the API? Are you wasting your time giving an estimate at that point?

Roy:  It depends on how quickly you need to get it done. If you need to get it done this iteration, then the only way to reasonably do that is to break it down into estimable chunks that you, as a team, can be confident you can complete before the sprint is done.

If the product owner determines that this needs to get done this week or as soon as possible -- it’s a top priority story; even if it takes three weeks, “I still need it done as soon as possible” -- then you’re going to have to break it down and apply some estimates to it.

Clayton:  If research stories and spikes are smells, how often should a team be using them?

Roy:  I don’t know. On the teams I’ve been on, it’s something that will come up only once every other month or so. In my experience, it’s not that common for a spike to pop up.

Clayton:  If it does come more frequently, what does that signal? What could you learn from that?

Roy:  That’s probably because you’re not breaking stories down far enough to begin with, before pulling them into the sprint.

Jade:  Yeah, the stories are either way too large, or your team is incredibly inexperienced -- that could happen.

Roy:  Or your team is just really worried that they’re going to be held responsible for their stories. They’ve been burnt in the past by product owners that really pressure them into pulling stories in or pressure them and yell at them afterwards if they don’t…

[crosstalk]

Clayton:  They’re fearful for some reason?

Roy:  I could see that, yeah.

Clayton:  OK.

Roy:  It’s like, “I don’t want to commit to anything because I don’t want to be kept to this,” or, “I don’t trust the rest of my team to actually do it.”

Jade:  Another thing it could be signaling is that you’re missing a skill on your team. Maybe there’s a team member that you need to have who has the necessary experience or qualifications. There might be something that they really just don’t know how to do.

If we’re a bunch of developers and we need to illustrate something, maybe we just can’t do it. That would be another indication that we’re missing a critical person on our team.

Clayton:  We talked a little bit about estimates, but there are a lot of teams out there that are doing story point estimating…basically, as part of the planning process. Does that seem like a reasonable thing to do? What are some downsides to that, maybe?

Roy:  I don’t know. I’ve been guilty of being on a team where we do that. It feels reasonable at the time, but I can’t really name any benefits we get out of it. I don’t know. Maybe it’s like double-ledger bookkeeping, so that we’re confident that our tasking matches up to the point estimates somewhat reasonably.

We’re not pulling in five 13s in a one-week sprint. But, other than that, I don’t really know how much value we’re getting out of it.

Drew:  On teams where they’re not doing anything with those points -- they’re not using them to do release planning, longer-term planning, or determining velocity, that sort of thing -- then what’s the point? But, if they’re actually using those numbers, then it’s valuable.

Roy:  One huge, valid thing that I can see getting out of it is that it very quickly lets everybody on the team that’s part of the estimating process know whether they’re on the same page or not. If I think this is something totally different than you do, Drew -- like, you threw up a 13 and I threw up a 3 -- then that means we’ve got to talk, because, clearly, we have two hugely different understandings about what this is supposed to take.

That should probably come up during tasking, anyway.

Jade:  Yeah. There’s some value in that ‑‑ knowing that we’re in alignment. I do think that, if you’re not using them for anything, you’re probably wasting your time.

Roy:  Yeah. If you’re doing it as part of the planning process, and you’re finding that you’re estimating even one 13 -- if you’re getting into 8s and 13s -- I would seriously consider breaking those down into smaller stories. So the estimates might be useful as an indicator that you need to break stuff down.

Because, if you just start tasking, then it becomes like [inaudible 12:30] slowly, where you’re adding tasks to your story a little bit at a time, and there’s not a clear cut-off point where, “OK, this is too much. We need to break this into smaller pieces.”

Clayton:  If you are doing release planning, and you are using the estimates for something somewhat meaningful, does it really make sense to estimate right before the planning? Isn’t it easy to game the system if you do that?

Jade:  Yeah. I personally think that estimation should happen in some sort of backlog grooming or something outside of the actual planning meeting. You should come into the planning meeting with all your stories estimated and ready to go, because estimation does affect priority sometimes.

The product owner might make a different decision depending on what the team thinks it will take to implement a certain feature, and trying to do all that right during the planning meeting itself can lead to a lot of wasted time and confusion.

Roy:  I agree, but if you’re just starting to do something with estimates, or you want to start doing something with your estimates, like tracking velocity or doing some kind of release planning, then, at the very least, starting to collect estimates right before planning allows you to start collecting velocity data, so that, later on, when you are doing backlog grooming and you want that information, you have the data available.

Clayton:  I think you can still do the estimates way beforehand and still get something [inaudible 13:50].

Roy:  I totally agree. But, if you don’t have a backlog grooming session or whatever, and you aren’t willing to invest the time yet, I don’t know. All I’m saying is, you can still get some value out of doing estimates right before planning, but it’s not going to be as meaningful or as valuable as it would be if you were to do it beforehand.

Jade:  It takes the same amount of time, but mentally separating those concerns -- treating estimation as separate from planning -- is good for a team.

Roy:  I agree.

Clayton:  One last topic, real quick. Here’s the scenario. Let’s say that you’ve got a Scrum team, and, in their sprint, they completed every single story except one, and, for whatever reason, they decide that they can’t continue on that story, and it’s blocked. They didn’t know something during planning, or something came up, and they just can’t continue.

The assumptions they made were wrong. Should the team feel bad about that? Should they feel like they failed their sprint? Is it an exception? They did everything they could, so it’s OK? What do you guys think?

Jade:  I need a research spike before I can answer that question.

[laughter]

Roy:  It should be considered a failure, because that means that something went horribly wrong. They did the best they could at the time, but that doesn’t mean we should ignore this and not learn from the experience. We should count it as a sprint failure, and it should probably be the biggest topic in the retrospective.

Clayton:  Yeah, it might be.

Jade:  Too often, sprint failures are punitive -- we treat them as this thing that we’re going to beat the team up with, like you should never fail. You’re going to fail. It’s going to happen on every team. There will be sprints that you fail, for a myriad of reasons.

Roy:  But it should be the exception.

Jade:  It should be the exception. If you’re failing constantly, you need to be looking at what’s causing these failures, if you’re overcommitting or whatever it may be. I agree with Roy that it should be the big topic of the retrospective, but it shouldn’t be something that is used to punish the team itself.

Drew:  Right, like the good and the bad, like, “Hooray, we got all this stuff done! We released this awesome stuff. But, oh, too bad! We didn’t get this one done. In hindsight, what could we have done differently?” It’s a success and a failure.

Roy:  Yeah, because I’m sure there’s something we could have done differently to prevent that. In hindsight, you can see, “Oh well, we should have done whatever to prevent this.”

Clayton:  We should have asked this question during planning, or maybe we made these assumptions about how this thing was going to work.

Roy:  Right. How could we avoid making those assumptions in the future? How could you remind yourself to ask these types of questions, so that this particular question would have gotten asked and answered up front, or something like that?

Jade:  Yeah, but if you’re being essentially punished for that, you’re not going to think about that. You’re just going to think about how you can never fail again, instead of being open-minded about, “OK, how could we have done this differently so that we don’t run into this again?”

Clayton:  Yeah. It’s OK to fail if we use it as a means to…

Jade:  …a learning opportunity.

Clayton:  Yeah, right.

Jade:  And we acknowledge and embrace that failure, and say, “OK, we may have not adhered to a commitment, but we learned a whole bunch about what could be done differently next time around.” And that might not work either, but you’ve got to keep trying.

Roy:  Right. [inaudible 16:59] twice. If they failed two weeks in a row, or two sprints in a row, at what point do you start actually considering it a failure?

Clayton:  All right. That wraps things up. That’s all the time we’ve got. We invite you to check us out on the Facebook fan page at facebook.com/AgileWeekly, where you can continue the conversation on this podcast or any of the others. We’ll see you soon. Goodbye.

[music]

Announcer:  If there’s something you’d like to hear in a future episode, head over to integrumtech.com/podcast, where you can suggest a topic or a guest.

Looking for an easy way to stay up‑to‑date with the latest news, techniques, and events in the Agile community? Sign up today at agileweekly.com. It’s the best agile content, delivered weekly for free.

The Agile Weekly podcast is brought to you by Integrum Technologies and recorded at Gangplank Studios in Chandler, Arizona. For old episodes, check out IntegrumTech.com or subscribe on iTunes.

Need help with your Agile transition? Have a question, and need to phone a friend? Try calling the Agile hotline, it’s free. Call 866‑244‑8656.