Wednesday 27 March 2013

No time left at the end of the sprint for proper testing

In the Agile Testing LinkedIn group, the following was posted:

No time left at the end of the sprint for proper testing

Designers tend to add and change code until the end of a sprint, not leaving enough time to do all the agreed testing. At the start of a sprint, we assign rough time estimates to user stories, taking both design and test activities into account. Some tests are automated and run during the night.
However, other tests need manual preparation of data and partly manual execution and result analysis. There is also some amount of exploratory testing involved. During the sprint, there always seems to be a reason not to deliver to test yet: fixes, improvements and design updates. At the end of the sprint, little time is left for manual testing, far too little for running the tests, analyzing and fixing any bugs, retesting and logging the results.

What advice do you have for me, so that I can claim and really use a fair amount of the sprint time for testing?

With a follow-up post of:
What I called 'delivery' is not a heavyweight process wall. It is just an oral notification in the team stand-up meeting that some story is ready for test. Our way of working is pretty much in line with all the points you mention, except for point 5: "Testing is a fair amount of the sprint if done well". I think 1 day left for testing out of a two-week sprint is not this 'fair' amount. The pattern that I have to cope with is: several user stories are coded in parallel and they tend to be 'ready for test' all at the same time, that is, 1 day before sprint end. The tester is involved in functionality and architectural discussions during the sprint and prepares test data and test scripts, ready to 'push the button' when a story is ready.

My (currently unpublished) comment (with minor changes):
So, based on the info you've provided, I'm going to make a bunch of suppositions and ask a number of questions.

1. Is there a definition of 'done'? Does it include testing? If so, it sounds like it's being ignored.
* If it is being ignored, are retrospectives held? What happens when this is brought up?
* Is the issue being recognised by the rest of the team?
* Is it being recognised and cared about?
* Are the powers that be aware?

2. Are the stories broken up into tasks? If so, is it possible to test the tasks?

3. If what you are working on is broken up into (small) stories then, setting aside the late adjustments, there should be a constant stream of stories coming through to test. If there isn't, has this been looked at? If so, what was the outcome?

4. Is it possible for team members to pair? i.e. testers and devs, BAs and testers, BAs and devs, etc.

5. Is there a visual representation of story progress? Visible to everybody?

6. Is this way of working new to the team/company? Was there help making the transition? If there was, was it any good? Were any new people with more experience in this way of working hired?

7. Are you/the testers prepared to play hardball? You can't possibly test a sprint's worth of work in a day, so don't try.

8. How are the late adjustments getting into the story? They should be judged on value, with the team deciding whether or not they get into the sprint. Failing that, a story or stories can be dropped to make room for the changes.

9. Is there a scrum master type role? Is he/she someone who has simply gone and gotten the CSM, or are they actually experienced?
* Experience is very hard to judge; how is it done?

10. Is there a way to prepare test data through automation? (There's a rough sketch of what I mean after this list.)

11. Are there any skill sets lacking in the team in general?
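
On point 10: the manual data preparation described in the question is often the easiest bit to script. Purely as an illustrative sketch (the field names and values below are invented, not taken from the poster's system), a small Python generator like this could run nightly alongside the automated tests, so the data is already waiting when a story is declared ready for test:

    # Hypothetical sketch: build a reproducible set of fake customer records as CSV.
    # Field names and values are invented for illustration; adapt to the real domain.
    import csv
    import random

    def generate_customers(count=100, seed=42):
        """Deterministic fake customers, so reruns produce comparable data."""
        rng = random.Random(seed)  # fixed seed keeps the data set stable between runs
        statuses = ["new", "active", "suspended"]
        return [
            {
                "customer_id": i,
                "name": f"Customer {i:03d}",
                "status": rng.choice(statuses),
                "credit_limit": rng.randrange(500, 10000, 500),
            }
            for i in range(1, count + 1)
        ]

    def write_csv(records, path="test_customers.csv"):
        """Write the records to CSV so manual and automated tests share one data set."""
        with open(path, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=records[0].keys())
            writer.writeheader()
            writer.writerows(records)

    if __name__ == "__main__":
        write_csv(generate_customers())

Even something this small moves data preparation off the critical last day of the sprint.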

It doesn't seem like you have a testing issue; you have a team/culture/mindset issue.



I'd like to know: what have I missed?
 


 

3 comments:

  1. I think you covered most points; the definition of done is probably the most important one missed here.
    One that I'd add is: "Do you expect to test the last day by yourself, or is the whole team jumping in? If not, why not?"
    Another one is: if the story does not get delivered due to lack of testing, are the story points counted? If not, the burndown chart will suffer, which may actually be a good way to show that there's something wrong. In that way the whole team gets "punished" by not delivering. That goes hand in hand with your play-hardball comment.

    Thanks for sharing,

    Thomas

  2. Has the company ever been bitten by the lack of testing? Disaster is a great catalyst for change. Even if you haven't, can you get any metrics from support that you can use to support the argument that the company is better served by better testing?

  3. Here's something you could add, perhaps: are we obliged to produce code at the end of every sprint, or are we obliged to provide code that works? And if we say "it works", how would we know it works? And how would we know about problems in it that could wreck our beliefs later on?

    ---Michael B.
