Creating the Conditions for High-Performance Development Teams

Mike Cottmeyer Chief Executive Officer
Chris Beale Managing Director, Consulting
Kevin Smith Principal Engineer

In Agile, you only get the full benefits of Scrum and SAFe when certain conditions are met in the organization. People will often break the methodology or the framework instead of changing the org design, sacrificing their ability to be truly Agile.

XP and DevOps are no different. Solid technical practices and your full-stack developer’s software craftsmanship are also limited by the conditions inside your organization.

Today, Mike Cottmeyer brings in our CTO and one of the main architects of our Studios offering to discuss the conditions that must be present for solid technical practices and software craftsmanship to thrive.

Video Transcript

Mike Cottmeyer:

Hey everybody. Thanks for joining us today. We’re going to try something a little bit different. A couple weeks ago we did a recording session with Chris Beale, who is one of our most senior executive consultants, runs a lot of our large enterprise class engagements. And Kevin Smith, who is a kind of chief architect, principal consultant in our studio. I invited them here to have a conversation with me because I was out with clients doing presale stuff and working with executives on the teams and the conversation around full stack teams, software craftsmanship, clean coding, different strategies for how to get developers engaged and involved was coming up. And as somebody who hasn’t had hands on keyboard directly for probably 30 years or something at this point, sometimes I feel like I know enough to be dangerous. And so what we were talking about was the idea, on theme for the last couple of these recordings of, what are the necessary preconditions to be able to do software craftsmanship well?

And so we started this conversation to talk about test harnessing and rapid deployment and the kinds of technology things that you might want to consider. And the conversation really took a tone into really how do you get the teams involved? How do you get leadership engaged? What’s kind of the attitude and mindset around it? So we’re going to drop into that conversation now, and what I’d really like to know is what you guys think, where else should we take the conversation? What other questions might you have? And for you hardcore engineers out there, I’m not really so much talking to you. Who I’m really talking to is your project managers, your team leads, your executive teams, about what kind of conditions they need to create for you so that we can do Agile and Agile engineering practices really well. So I hope you enjoy this talk. It’s a little bit of an experiment for us, and I’ll check in with you at the end and see what you thought. Thank you very much.

The Preconditions for Agile

Hey everybody, welcome to our little video recording. We were just talking with our marketing team that we probably need to come up with a name for this. We tend to do these on Friday afternoons. It happens to be a Friday afternoon when we’re doing this one. Maybe we’ll call this Fridays with Mike or something like that. But over the last couple weeks we’ve been talking about this theme, and the theme that’s on my mind is if we’re going to do Agile, if we’re going to do Scrum, we’re going to do XP, we’re going to do something scaled like SAFe or large-scale Scrum, one of the things that we have to really be attentive to is creating the appropriate preconditions to do Agile. And at the smallest, simplest level, it’s usually like if we’re going to do Scrum, we have to have complete cross-functional teams of five to six to seven people.

They have to be operating off of clean backlogs that are written in a way, something like Mike [inaudible 00:03:13], Bill Wake’s INVEST model. We have to have the ability to produce a working, tested increment at the end of every sprint. And the idea is that if we create the right preconditions, the methodology tends to work out. If we want to do SAFe or something at scale, we have to figure out how to encapsulate value streams. We have to figure out how to encapsulate release streams. And so I was actually out on site with a client last week and we were talking about the idea of having complete cross-functional feature teams, teams that could kind of roam around the entire code base, that could do frequent commits, that could get fast feedback, that could deploy rapidly. And we started having this discussion about what were the underlying technical considerations that had to be in place to do that.

And so I started to engage my team, because I know just enough about the technical side to be dangerous, so I can talk about the benefits of pair programming, I can talk about the benefits of CI/CD or the benefits of test-driven development, but it’s been a long time since I’ve had my hands on a keyboard. So that’s not something that I consider myself a deep expert at. So what I did is I brought in a couple experts and we’re going to see if we can have a conversation here today. So I have Chris Beale who is on my executive team. He is one of the people that basically owns a portfolio within LeadingAgile and supports a lot of our high-level customers, helping to create strategy and vision. The cool thing about Chris is that Chris comes from a really solid XP background, and he joined us, what, five, six years ago now Chris, something like that?

Chris Beale:

Seven.

Mike Cottmeyer:

He really just had to suspend disbelief and kind of throw himself into this whole process in transformation world. And I also have Kevin Smith here with us who is one of our principal engineers, and he is one of the main architects of LeadingAgile’s studio. That’s something that I don’t talk about a ton, but we actually have the capability to provide tech coaching, velocity pairs. We can deploy whole teams. We do a lot of stuff around product extraction and such like that. So I brought Kevin in.

The Conditions for Software Craftsmanship

Mike Cottmeyer:

So I’m going to throw these guys a couple of questions and we’re going to get some conversation going and see where it goes. So, Chris, I’m going to hand you the ball first. And in the context of this, we want to have just the most perfect extreme programming XP teams that we could possibly do. Software craftsmanship, attention to detail, all the things that Kent Beck and Ron Jeffries and those guys were talking about back in the early 2000s. In your experience, what are the conditions that have to be created organizationally for that to work really well?

Chris Beale:

Yeah, so thanks Mike. The thing that’s top of mind for me with respect to conditions for software craftsmanship, clean coding, all of that to work, probably number one is it has to be a team sport. So I’ve been in organizations before where one developer on a team is really passionate about that: they’ll do tests first, they’ll keep their tests running, they have good CI hygiene, and others don’t. Like in a standup, I was a little amazed once when one of the developers told the other developers, “Hey, I just broke 14 of your tests. You need to fix them.” So it has to be a team sport. So within the team, there has to be agreement, there has to be common practice, there has to be common commitment across all of the developers and all the team members that this is the way we’re going to work.

We’re going to do ensemble programming at times, so we’re going to pair, we’re going to mob, we’re going to test drive, we’re going to do these things as a team. And then when you have multiple teams working together on things that are dependent, you actually need those decisions to really be across teams. So you’ve got to have a common commitment across the scope or the span of people who are working together to get something valuable done that this is how we’re going to work.

Mike Cottmeyer:

Okay. Typically, what’s the case you make? I mean, one of the questions we get asked at an executive level is, how do you get your CIO or your CEO to want to do Agile? What’s the value prop for a developer in operating this way?

Chris Beale:

Yeah, so it’s kind of a personal thing really. I mean, my hands-on exposure to extreme programming was 20, geez, 21 years ago at a company that I used to work for. And before that I was a boil-the-ocean architect. Everything was big up-front design, OOA, OOD, had to know everything up front, was risk-proofing my software 10 years out. And then I hired into this thing where I didn’t even know what they were doing. I didn’t understand it, but it was amazing. They were getting defect-free releases done and deployed with a push of a button. Actually, it was one command line to deploy an entire system defect-free, completely regression tested, and it was like nothing I’d ever seen. So I had the benefit of seeing it, and experiencing it, before I had to actually accept it.

I questioned it and I could learn from it. So for me, it was very objective because I hired into the middle of it and had no idea what it was. That’s a bit of a blessing. A lot of people don’t get that; they’re being asked to do this without being able to see it. So if you have the chance at all to see it and actually experience it, there are generally local labs or something you can go to where they do this kind of work and work this way, and you can just see it. It’s really very evident, very quickly, what the differences are. That being said, I think for folks who are learning it from our coaches, there’s some art in the coaching. Where I used to fail: I would write some code, I would try to integrate it, and it would fail.

There’s ways coaches can coach to make that failure small. It’s not expensive to decrease the amount and cost of failure, and then to actually show the benefit: how I just saved you a bunch of time and a bunch of rework and a bunch of frustration, possibly missing a deadline. So there’s art and possibility and opportunity with coaches to actually make that happen. Because failure’s a good teacher, it’s a very strong teacher. You’ll have a hard time sometimes convincing somebody of anything by just talking about it.

We look for agreements to act your way into a different way of thinking. So just start, use the Karate Kid metaphors like sand the floor, paint the fence, but we’ll do it in little enough cycles to where, okay… And the epiphanies happen: okay, I was just able to do that. I have a failing test. I wouldn’t have known that for hours or days, but now I know that in minutes and I can address it. And so those little epiphanies start to add up for many developers and then they catch on, okay, wait a minute, now we want to know more. And you start to build momentum in those little learning cycles.

What Gets in the Way of Solid Technical Practices?

Mike Cottmeyer:

Yeah. So Kevin, let’s say somebody goes off and they do their homework, they go to the labs, they maybe get a chance to experience it, and then they’re coming back into maybe a legacy code base or a code base that wasn’t written with TDD or isn’t appropriately unit tested. What are some of the challenges that teams who want to do XP, software craftsmanship, clean coding, what are some of the challenges that you see that make it difficult or maybe even unsafe to actually do those things?

Kevin Smith:

Yeah, Mike, and feeding off of Chris’s big upfront architecture background, I had a bit of that as well. And it wasn’t true 20 years ago, and it’s even less true now that any architect or any developer can hold the entire system in their head. So that’s the false belief that a lot of developers fall under, especially if you’re developing a new system, is you build up over time until you get to the point where you don’t know and it’s pretty quick that you don’t know the entire system, you don’t know how it’s operating. And going into a legacy system, you’re jumping right into that. How do you look at code that you’ve never seen before, make a fix or make a new feature in that code base? What are the safety mechanisms that you need around you to actually push that into production?

And then what are the things that can help optimize your understanding of that code base of how it works? And that’s the purpose of tests. And tests first help you design a system that’s testable, but if you’re jumping into a legacy code base, there’s characterization tests or exploratory testing that you can automate to help you understand what it is that already exists and how it runs. And then even outside of testing in production, there’s patterns around observability so that you get realistic logging that’s all tied together. So you can push data through a system and see how it navigates through all the different components that makes it usable for developers to be able to get up to speed and be able to code. But yeah, it is very difficult to jump into a legacy code base without those types of things and be expected to add a feature without creating a defect. So you’ve got to create some safety nets.
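The characterization tests Kevin mentions don’t assert what legacy code *should* do; they pin down what it *does* do today. A minimal sketch of the idea in Python, with an invented `legacy_shipping_cost` function standing in for inherited code you don’t yet understand:

```python
# A hypothetical legacy function we inherited and don't fully understand.
def legacy_shipping_cost(weight_kg, region):
    if region == "domestic":
        return round(4.0 + weight_kg * 1.5, 2)
    if weight_kg > 20:
        return round(25.0 + weight_kg * 3.0, 2)
    return round(9.0 + weight_kg * 3.0, 2)

# Characterization tests: we ran the code, observed the outputs, and
# froze them in assertions. The test names describe observed behavior,
# not intended behavior.
def test_domestic_is_base_plus_per_kilo():
    assert legacy_shipping_cost(2, "domestic") == 7.0

def test_over_20kg_uses_higher_base():
    assert legacy_shipping_cost(21, "international") == 88.0

def test_unknown_region_priced_as_international():
    # Surprise discovered while characterizing: any unrecognized
    # region falls through to the international rates.
    assert legacy_shipping_cost(2, "nowhere") == 15.0
```

With tests like these in place, any change that alters one of the observed outputs turns a test red on the developer’s machine, seconds after the change, rather than weeks later in UAT.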

Mike Cottmeyer:

Well, so in my non-technical, front-of-the-room role, talking to executives all the time, I think about just really simple things. If we’re going to have multiple teams operating on the same code base, it’s like when I started learning about XP, reading all the Kent Beck books and stuff back in the day: the idea is that if we have sufficient testing, one of the things that actually increases speed and agility for the development team is knowing, all the time, if you’ve broken something.

Kevin Smith:

You want your biggest failures to be on your tightest feedback loops. If you’re going to break something, if you have a test that can show you that within seconds versus weeks later in UAT, you can make that fail fast. You can make that failure very inexpensive. But if you put that piece of code in and it goes through, you don’t have good test coverage and then you integrate it and it goes over to a QA team that’s completely separate or UAT environment and it takes weeks before you find that failure, you have to unwind that whole thing to get back to the team to make that change and do it all over again. It becomes very expensive.

Mike Cottmeyer:

For a team that’s motivated to do this, maybe we’re kind of talking around about what makes it difficult? What are the impediments? Because what I find is that, again, in this particular case that I’m thinking of, you have a team that wants to do this, they’re trying to do it, but in the presence of the dependencies, in the presence of the lack of test harnessing or Red, Green, Refactor kinds of stuff, it’s like it’s actually introducing more errors than it’s actually… It’s just not serving the purpose of it.

Kevin Smith:

Off the top of my head, I can think of you need some boundaries of the area that you’re working in. And we’ve talked about this in the past on hard, crunchy shells and product extraction and things like that. But you want your team to have some sort of area of focus. If you’ve got this massive legacy system, how can we isolate the area that we impact? And whether that’s putting certain types of tests in or actually going through and extracting some things out or putting some APIs around things, there’s definitely some pre-work to take into consideration so that you can move faster later.

Chris Beale:

Yeah, I think the folks that taught me, I learned from some very talented people, the principle they gave me was trust breeds momentum, momentum breeds success, and success breeds trust. So it’s a virtuous cycle that’s underneath all of it with XP. So as an example, if I don’t know what the code is, the code has unknown dependencies to me, I don’t want it to break if I touch it, then I’m going to act very carefully. I’m going to forensically try to understand it, I’m going to make changes, but I don’t know what I’ll break. But if the code is giving me feedback, if I’ve got tests that are saying, “Okay, wait a minute, you just broke something.” Now I can actually act courageously. I can try just about anything I want because I’ll get instantaneous feedback, “Wait a minute, you just broke something.” Okay, do I want to fix that and move forward with what I was doing, or do I want to revert?

So I can move with courage versus trying to forensically figure out how to play Operation and not touch the sides. To Kevin’s point though, when we’re talking about just normal Agile stuff, Mike, with our customers, I often describe the real goal of the things that we do as being to validate learning, to explore and discover and have better conversations, and you get all the other stuff, the software kind of falls out the side, when you’re really good at that. So it’s really about learning and getting collective cognition, multiple people on the same page so we can move together efficiently and effectively. That’s what user stories do, that’s what Scrum does. It’s getting everybody’s brain to act as one for one small iteration.

When you’re going through legacy code, the things Kevin’s talking about, so you can do characterization testing. If the goal is to write a test, you probably have the wrong goal. If the goal is to understand-

Mike Cottmeyer:

Explain that to me a little bit.

Chris Beale:

So the goal should be to learn and understand. So even when I’m doing test-driven development on a brand new thing, the goal isn’t to test, the goal is to learn. It’s I’m exploring my design ideas, I’m exploring the requirements. I’m trying to figure out what it needs to do and how I want to do it in a very changeable way. So it keeps cost to change very low. My learning cycle’s really fast so I can explore my design. I don’t generally ever come out with a class diagram or the software I thought I was going to get when I started, because learning happened through all those little micro-iterations of build, test, learn in the software. So my classes emerged, the responsibilities emerge, the relationships emerge for me. I mean, I have a place where I want to start, but in exploring my design and how I want it to work with tests, I get software. The software is a byproduct.
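The micro-iterations Chris describes might look like this in miniature; the domain and names here are invented for illustration. Each test asks a design question, and the code grows just enough to answer it:

```python
# Step 1 (red): the first test asks the simplest design question:
#
#     def test_empty_cart_totals_zero():
#         assert Cart().total() == 0
#
# Step 2 (green): write just enough code to pass, then keep cycling.
# The class below is what emerged after a few red/green/refactor
# loops -- the item list and quantities weren't in the original sketch;
# they emerged as later tests asked for them.

class Cart:
    def __init__(self):
        self._items = []

    def add(self, price, quantity=1):
        self._items.append((price, quantity))

    def total(self):
        return sum(price * qty for price, qty in self._items)

def test_empty_cart_totals_zero():
    assert Cart().total() == 0

def test_total_sums_price_times_quantity():
    cart = Cart()
    cart.add(5, quantity=3)
    cart.add(2)
    assert cart.total() == 17
```

The point of the sketch is the one Chris makes: the tests are how the design was explored, and the class is what fell out of those little build, test, learn cycles.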

When I don’t know the code and how it works… And so there was a customer back in the day where I was told to do… It was a turn by turn thing. So the early days of turn by turn automation, you’re going to get directions from your house to the mall or something-

Mike Cottmeyer:

Like GPS turn by turn is what you’re talking about.

Chris Beale:

Yeah, GPS turn by turn. So the customer said, “Okay, here are 14 services that you can use and this is how they work.” And I said, “Okay, it’s great, but I want to test them.” So immediately I just wrote some tests. Turns out 12 of them didn’t work the way that the customer said that they work. But I knew how they worked because I had tests, running tests, green tests that said, okay, when I pass it this thing, this is what it actually gives me.

And so I could basically alter the design and actually we altered some of the requirements as well because the services didn’t work as advertised. Testing is primarily about learning and understanding. When testing is about testing, that’s generally what test after is. I’ve got some code, did I do the right thing? Did it work? When you’re really in the XP frame and you’re thinking test first, it’s really about learning, exploration, and discovery, and reducing risk. And so you understand the boundary of the software you’re going to change by writing little characterization tests. When I pass it this, what does it do? Oh, it does that. Okay, so I’ll name my test this and now I know what it does in that area. And I can actually explore the boundaries of my change so that I know that when I make the change around that boundary, I don’t change behavior. So I can learn the software, that’s why I’m writing the test. I get the automation and I get the change as kind of a side effect of that.

Kevin Smith:

And that learning carries forward as you onboard new team members or over time as developers come on board and change, those tests are there for their learning. That’s how they learn the system, how they learn how it functions, how they learn where the boundaries are. It’s also the safety net for them from an automation perspective, because those tests will break when they do something unexpected. But it is a learning mechanism.

Chris Beale:

If I come into a new system with tests, the first place I go to understand the software isn’t the software, it’s the tests. The tests will tell me how the software works because the tests won’t lie to you. If they’re well-written tests and they’re running green, they’ll tell you how the software works.

Mike Cottmeyer:

You mean you don’t go to the big design specification to see how the software actually works, Chris?

Chris Beale:

The old adage is, documentation lies in the XP world. Documentation is going to lie to you. And in the case of that example I gave you, I won’t give you the customer’s name, but in the case of that example, the documentation clearly was lying about the contracts that those services were supposed to provide. And it’s because everybody knows keeping the documentation up to date is an extra thing. And even if you want to do it and you’re conscientious about it, it’s hard to think about all the behaviors you’ve added or changed. You might just miss stuff. So over time, enough of that happens to where the docs aren’t actually describing what the software does. The cool thing is you have an executable specification. It’s the test itself. It’s actually a specification for how the software works that has to be true because it’s running green. So if it ever gets out of date, it’ll turn red on you.

Supporting Engineers Doing Agile and Economic Justification

Mike Cottmeyer:

One of the tensions that I’ve always felt in the Agile space, Agile really grew up out of the software engineering world. So the Kent Becks and the Martin Fowlers of the world. And then it got taken over by, I’d say, the Jeff Sutherlands and the Ken Schwabers of the world, and it became almost like a Scrum-XP dichotomy. And I think one of the things that we see quite a bit are folks that get introduced to Agile that don’t really have the deep expertise in how software’s built, how it’s architected, how it’s run, how it’s tested, how it’s deployed, all those kinds of things. So we kind of wrap development in these management practices, but sometimes because we say we’re doing Scrum and Scrum doesn’t necessarily require XP practices, then we can break things a little bit.

So maybe Kevin, you could lead us off. As a developer, what would you want your Scrum masters, your Agile coaches, your leaders in that organization to know, such that when they’re implementing Scrum or SAFe, they’re aware of and cognizant of creating the right environments for the engineers to do Agile well? Any thoughts?

Kevin Smith:

That’s a great question. The thing that pops to the top of my mind comes back to what I said earlier around feedback loops: understanding the one to 10 to 30 layers of feedback that a feature is going to need to get to production. So anytime you can move risk to a smaller and faster feedback loop, the better off you’re going to be. If you could focus not just on your innermost TDD feedback loop, where you’re writing a test, writing the minimal code to pass that test and then refactoring it and committing it, but all the way through the cycle, how can you reduce the size of those feedback loops through automation? And this is where DevOps comes in. And by doing that, that gives you the ability to take more risks, experiment more with more things, push things out to production more quickly, get feedback on them and so on. I think that’s the big thing in my mind in terms of support from Scrum masters or support outside of the development team: understanding all of those feedback loops and how to work towards reducing them.

Mike Cottmeyer:

Chris, you talked earlier about the idea of pair programming. I was introduced to that through the XP White Book back in the day, something like that. And the common objection of leaders would be, well, why would I have two people writing the same code when I could split those two people up and write twice as much code? So one of the things that becomes a barrier to what Kevin’s talking about is the idea of economic justification. So how would you create economic justification for building the necessary infrastructure so that you could achieve those faster cycles? Because it’s not just a mindset or an attitude or even a practice. There’s some fundamental barriers sometimes in how that code is written and deployed that actually make it really expensive to create those feedback loops. How do you create the economic justification for that change?

Chris Beale:

So I think it’s always difficult to… When you look at outcomes, causation can be very, very difficult. It’s like the business outcomes that we create for our customers, increased revenue or decreased expense or whatever. Direct causation, if revenue’s going to go up a hundred million dollars, can you say, okay, it’s these features that caused that, specifically these three features? So causation can be difficult; correlation, a little easier. Over time we show that we’re doing these things and our customer satisfaction scores are going up and our revenue’s going up, and you can start to correlate things. When it comes specifically to what are now being called ensemble coding practices, where you have more than one developer working at a time, you can think about it from an input perspective, an output perspective, and an outcome perspective. I think a lot of the people we run into are dealing with finance organizations that think in inputs.

So dollars per hour, number of resources working, do the math. Very easy to say, we’ve got X amount of dollars working at that screen at any given time, and what are we getting for it? And I think we are very bad as an industry at saying, okay, but your productivity went up by this much and your defects went down by that much, and the cost of a defect is this much for you. So I think those things can actually be measured. I’ve done it in prior jobs. I actually proved in one job that the worst thing you could possibly do with this system is to have anybody change a line of code. If you just stop, that is the best economic outcome you can have. Because every time you change it, you break this many things and each thing costs you about this much to fix, and the benefits you’re getting don’t actually outweigh the cost.

So if we’re good at tracking metrics around the cost of change, we can correlate that to outcomes: what kind of improvements are we getting in productivity, decreases in waiting? So how does that correlate to waste in the system and productivity and throughput and all of those things? I think that can be powerful. I have run into situations where even with that economic justification, some people just can’t wrap their brain around it. I mean, the example I gave earlier, we had an organization whose transformation went from a new product taking 21 months, with 700 defects and don’t-touch-it, to three months, no defects, 20 hours of production support per year. We had the metrics for all of that stuff. And so the CTO moves out, a new CTO moves in, didn’t care about any of that, just saw two people at a tube and absolutely couldn’t understand why we would do that and why we were writing all these tests when we could write features.

And no matter what we gave them in numbers, it just wouldn’t land because of the way they saw the world: in inputs, not outputs and outcomes. But I think the outputs are very measurable and, you know, can correlate to the outcomes; you might not be able to do direct causation. I think in many cases the world has changed, Mike. I don’t think there’s as much pushback for allowing people to do this. I think actually there’s more expectation that people will, but I still think there’s a good case to be made for saying how valuable is this? So measuring throughput, measuring rework, quality, all of those things, and then correlating it to how much value now can we put out to the customer based on the fact that we’re no longer doing all of these other things.

Mike Cottmeyer:

Okay, awesome.

Kevin Smith:

It’s never been about how fast developers can type, it’s always been about their problem solving skills, which is collaborative. If it was about how fast you could produce code, we’d have typing tests for developers. That’s never the case.

Mike Cottmeyer:

Yeah, that’s an interesting point, Kevin, because sometimes what I’ll talk about is a lot of times it’s decision latency, it’s waste, it’s wait time, it’s defects, it’s all those things that are actually driving the cost and the timeline. So many organizations resist the idea of getting the business and the developers in alignment on even what they’re going to build. It’s too expensive, it’s too up front, it’s too this. And I’m like, but what’s the cost of them not understanding it? They’re just building the wrong thing.

Kevin Smith:

And if you can get that in front of them faster through the feedback loops and get a reaction from the end users about what’s actually being built, you can make directional changes sooner at a lower cost, even if it takes-

Chris Beale:

That’s something that can be measured though, time to feedback. So the longer we-

Mike Cottmeyer:

I actually told an engineer one time I wasn’t interested in how many lines of code or how much software he could write, I was interested in how many validated features in the product he could demonstrate to the customer. It was really kind of a different mindset. So we’re going to wrap this thing up. I really appreciate you guys stepping into this with me, and I think we’ll do more of these over time. I think this is a fun topic to explore. And I think just at the highest level, just bridging the XP and the Scrum worlds together a little bit and helping them each understand how we need each other, I think will be a cool thing to do. So anyway, everybody, thank you guys for joining us and we will talk with you next Friday. Have a great day. See you, bye.

Kevin Smith:

Thanks, Mike.

Parting Thoughts From Mike

Mike Cottmeyer:

Well everybody, I hope you liked that interview. I hope it was informative. I hope you learned a lot. If there’s anything else we could do that would help you understand how to create the necessary conditions, whether they be management, whether they be cultural, whether they be technical, we would love your feedback. Please drop a note in the comments. Let us know what you’d like us to explore further, and we will get Chris and Kevin back and we’ll do a follow-up to this and take it whatever direction you guys want to go. This is a really important issue because one of the things that we have to keep in mind is that specifically when we’re doing Agile in a software development world, the practices of Scrum and the practices of SAFe aren’t enough.

The kinds of things that really enable you to go fast, to move fast and to break things are these very technical issues that we’re talking about. And one of the biggest failure patterns in the industry is when we start doing Scrum, we start doing rapid planning and rapid delivery, but we haven’t created the necessary safety in the software product to do that very well. So again, help us help you. We would love to go deeper into any of these topics and we’re looking forward to hearing what you guys have to say. So until next time, we’ll talk to you later. Bye bye.
