
Hyperproductivity in Scrum

Mike Cottmeyer, Chief Executive Officer

Last year sometime, I had the pleasure of hearing Jeff Sutherland speak at the Agile Atlanta group here in town. One of the things Jeff always brings up in his talks is that Scrum creates hyper-productive teams. I asked him how he defined hyper-productivity and how he measured it. He told me, but rather than relay my recollection of what he said, I want to reference how he explains it in print.

According to his paper Shock Therapy: A Bootstrap for Hyper-Productive Scrum:

“We define Hyper-Productivity here at 400% higher velocity than average waterfall team velocity with correspondingly higher quality. The best Scrum teams in the world average 75% gains over the velocity of waterfall teams with much higher quality, customer satisfaction, and developer experience.”

Pretty impressive, but I am curious what the improvement is measured against. Going deeper into the Shock Therapy paper, the results from their MySpace experience say that:

“The baseline velocity (100%) is established for a team during the first Sprint. The Product Owner presents the prioritized Product Backlog in the Sprint Planning meeting. This is estimated using Planning Poker and story points. The team selects what can be accomplished during the first Sprint and the Product Owner determines exactly what is “Done” at the end of the Sprint. The number of points completed is the baseline velocity.”

When I first read that, I thought, “What am I missing here?” Hyper-productivity is defined as improvement over the first iteration of a brand-new Scrum team. That didn’t strike me as a very fair metric; you’d think *most* brand-new teams would get dramatically better after a few weeks of working together, using Scrum or not.

But here is their key assertion that supports the measurement:

“At MySpace, the baseline velocity is often significantly higher than their previously chaotic implementation of waterfall, so the baseline is conservative”

I don’t know about you guys, but I’m kind of bothered by the assumptions, and by the numbers that support them. Can we always assume the first sprint of a Scrum team is better than the same team doing Waterfall? I’m not sure. Are all Waterfall projects chaotic? I can tell you firsthand they aren’t. It might also be interesting to let the team stabilize first, and then see how much better they’d get by systematically removing impediments.
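To make the baseline question concrete, here is a minimal sketch in Python of how improvement-over-baseline works under the Shock Therapy method: the story points completed in sprint one become the 100% baseline, and every later sprint is expressed as a percentage of it. The point totals below are hypothetical, not from the paper.

```python
def velocity_gain(baseline_points: float, sprint_points: float) -> float:
    """Express a sprint's velocity as a percentage of the first-sprint
    baseline (100% means no change from the baseline)."""
    if baseline_points <= 0:
        raise ValueError("baseline velocity must be positive")
    return 100.0 * sprint_points / baseline_points

# A brand-new team usually starts slow, so the sprint-1 baseline is small,
# which makes later percentages look dramatic. (Hypothetical numbers.)
print(velocity_gain(10, 50))  # 500.0 -- five times the first-sprint baseline
```

The smaller that first sprint is, the easier a 400%-plus “hyper-productive” figure becomes, which is exactly why the choice of baseline matters.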

These numbers have been nagging me ever since I read this paper and first heard Jeff explain them. Is it only me? What am I missing here?

BTW – I loved the image here… it seemed pretty much in keeping with the Shock Therapy theme. Check out the artist’s website at

Comments (8)

  1. Andrew Fuqua

    I was likewise at that event and inquired about the criteria for calling a team "high performing". My recollection is that Jeff told me that a high-performing team had to produce more than a given number of function points per man-month, and that the number varied depending on the type of code being developed. (I trust my recollection on this because it involved a couple of emails to Jeff, a study of "Applied Software Measurement: Global Analysis of Productivity and Quality" by Capers Jones, a weighty tome to be sure, and the writing of a couple of lengthy white papers detailing the productivity of 3 development teams using different processes. I'm looking for that email and those white papers on a thumb drive — not having quick success with that.)

  2. Andrew Fuqua

    Here's that email from Jeff (11/07/2008) that I mentioned in my previous reply. For context, below his email is my very long question to Jeff. The net is that his definition of hyper-productive teams related to a higher than average number of FPs/man-month.


    Thanks for the note.

    To compare to Xebia numbers, you need to remove all test code from
    your numbers and fully burden the cost. It must include all testers,
    all administrative staff, any managers or project leaders as FTEs. If
    half your code is test code you will go from 20 to 10. If you include
    testers and other staff it will drive you well below 10. Xebia's cost
    is fully burdened with anyone associated in any way with the project.


    On Thu, Nov 6, 2008 at 5:35 PM, Andrew Fuqua wrote:
    > Hi Jeff.
    > I enjoyed your presentation at Turner in Atlanta on the 27th. (I was the guy
    > who gave the book away before your talk.)
    > I've used your approach to compute the function points per developer per
    > month for two of my teams. The 1st team is an XP team. The second is not as
    > extreme. I've come up with 20 FP/man-month (558,633 java sloc, 537
    > man-months) and 22 FP/man-month (107,647 java sloc, 90 man-months). During
    > your presentation you said that 15 would be a hyper productive team and that
    > 20 was the highest ever recorded and that 12.5 is the industry average. My
    > team's quality is very high (102 defects = 0.183/kloc, and 11 defects =
    > 0.102/kloc), but we thought we were slow to develop so I was very surprised
    > by this FP/man-month being 20+.
    > I can think of a couple possible explanations. First, these source line of
    > code counts include JUnit based unit tests, functional tests, and acceptance
    > tests. We have about equal amounts of test code and production code. I used
    > the 53 loc/fp for java ratio that you mentioned in your presentation. Should
    > I remove the test code from the sloc count?
    > Second, my man-month figure only counts the programmers. At the peak when
    > these projects were running at the same time, we had 15 programmers. We also
    > had 3 professional testers. They would do some by-hand testing during the
    > iteration after each story was demo'd. They would do more manual testing
    > after each 2 week iteration. They also automated a regression test suite
    > using Rational Functional Tester, but those lines of code were not included
    > in my sloc count above. Should I have included these additional tester
    > man-months in my calculation?

  3. Mike Cottmeyer

    Andrew, I'm not sure I follow your point.

    Jeff clearly defines 'hyper-productivity' in his paper Shock Therapy. It's a pretty easy Google search if you are interested. The point is not cost per line of code or function point. The point is that Scrum teams measure their performance against what I consider a possibly invalid baseline, and against an invalid set of assumptions.

    Take a look at Sutherland's paper and let me know what you think.

  4. Jeff Sutherland

    Well, you can argue about the data. If you don't like the MySpace data, there is the Xebia or Systematic data. I had Capers Jones' teams measure every team at IDX (now GE Healthcare) for a couple of years on actual function points delivered at every release of every product (dozens of products). While most teams only doubled velocity, the hyperproductive teams all delivered 10-20 times more function points than the teams in Capers' own database.

    The point here is to measure the velocity of your teams. In a group of 30 people in training today, only 3 of them knew the velocity of their team. Are you in the 10% or the 90% Scrumbutt teams? If you are in the 10%, your job is to multiply their baseline velocity by 500%, something that was possible on every team at MySpace when the ScrumMaster was Scott Downey.

    The point is not whether you like my data. Gather your own data.

    Jeff Sutherland

  5. Shmoo

    Oh, come now, Mike.

    You’re asking a man who has a vested interest in an activity to provide figures that show that the activity in which he has a vested interest is better than activities in which he does not.

    Do you expect accuracy? Honesty? Full disclosure? The truth and nothing but the truth?

    If so, then you should read the papers that Microsoft releases about Windows. They will convince you that Windows is better than everything. Your operating system doubts will vanish.

    The world of agile is the world of spin.

    Take how I came to your site, for example.

    I read Tim Ottinger’s (big TDDer) tweets.

    He writes, “Maybe people feel bad about “hyperproductive” because definitions are not clear? 4x better than non-agile teams — not too bad.” And, “So 400% of production of non-agile team, 4x the quality.”

    Does he mention that your entire piece is questioning the validity of those figures? No, not at all; he just presents them as though they were unquestionably true.

    You need to grow up a little, Mike. Coffee’s brewing.


  6. Mike Cottmeyer

    One more thing… I actually believe that Scrum can make a team ‘hyper-productive’… I just don’t like the baseline. I started with a team that planned their first sprint, committed to 40 points, and delivered 1. The next sprint, they delivered 40+. Did they improve 4000%? On paper, maybe, but in real life there were other factors involved.

  7. Jordan

    If the Scrum marketers are being blatantly dishonest, and Shmoo here is certainly implying that to be the case, then they should be called out on it. False/deceptive advertising is illegal in some cases, and never a good sign in any case.

    If they make false claims, they cannot be trusted… I really doubt any team is 400% more productive, especially considering the scrum overhead itself takes up 20% or so of the available work hours. Getting 400% more work done in 80% of the time seems quite unlikely to me.
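As an aside, the FP/man-month figures Andrew quotes in comment 2 can be reproduced from the 53-SLOC-per-function-point Java ratio mentioned in that same comment. This is a sketch of that backfiring arithmetic only; the ratio and the team figures come from the comment thread, and the helper name is my own:

```python
JAVA_SLOC_PER_FP = 53  # Java SLOC-to-FP backfiring ratio cited in the thread

def fp_per_man_month(sloc: int, man_months: float) -> float:
    """Backfired function points delivered per man-month:
    convert SLOC to FP, then divide by total effort."""
    return (sloc / JAVA_SLOC_PER_FP) / man_months

# The two teams from Andrew's email:
print(fp_per_man_month(558_633, 537))  # ~19.6, which Andrew rounds to 20
print(fp_per_man_month(107_647, 90))   # ~22.6, reported as 22
```

Note that, per Jeff's reply, removing test code from the SLOC count and adding testers and other staff to the man-month count would pull both figures down substantially.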


