Coaching the Agile Architect – Creating Value, Ensuring Quality, and Empowering Change

Written by Michael Robillard Tuesday, 22 April 2014 08:44


Coaching the Agile Architect

As an Agile Coach, I typically find two shifts in perspective helpful for new agile architects: first, the possibility of delivering slices of functionality (typically prioritized by risk or value) and second, the possibility of delivering slices of architecture required to support incremental delivery.

I experienced this perspective change personally in what now feels like a previous life as an architect who was expected to construct solutions in addition to architecting them. As a result, I thought it might be most appropriate to discuss how an agile coach might approach the second shift in the spirit of the art of the possible while at the same time trying to keep it simple.

My goal is to incrementally and iteratively formulate a framework to help coach a new agile architect through the required transition of thought in designing architectures that enable incremental delivery of value. While it does not require BUFD, it does require thoughtful consideration of architectural requirements such as quality attributes, NFRs, and expected points of volatility and extensibility. As will eventually become evident, the details contain gotchas that are easily avoided with due consideration.

Two Potential Perspectives

The way my mind works, there are two obvious representations of the context.

In the first, I view an architecture using the following entities:

  • Things – which commonly talk to other things using…
  • Messages – which contain or, in some way, transmit…
  • Information – for context, configuration or other processing

A second, and maybe more useful, perspective is to view the architecture in more traditional layers:

  • Domain – the public face, internal or external, of architectural value delivery, typically in the form of services
  • Capability – the functional or component building blocks
  • Interaction – the integration of the capabilities that reside both in and on the architecture


While both perspectives resonate with me, for this discussion I will use the latter. In future posts I will delve into more depth, which will align more naturally with the former.

What are the challenges?

So what are the challenges I commonly face in this particular area as an agile coach? The following are common questions that arise:

  1. How to support domain service stability during incremental refinement?
  2. How to incrementally increase the complexity of integration points while maintaining robustness and managing volatility?
  3. How to support component stability while incrementally increasing capabilities?

In addition, each case typically comes with requirements to ensure extensibility and backward-compatibility, as well as the realization of quality attributes such as scalability and performance.

Recognize & Address Each Type of Challenge

As a result, any systematic approach that a new agile architect will find useful must address at least each of these types of challenges.



So I’d like to propose one way to look at this challenge and help your customers complete this mental transition. A particular benefit of this approach is that all the referenced content already exists, is in use, and has extensive information available in both books and the public domain. See the References and Further Reading section at the end.

So let’s dig into each challenge with a bit more detail.

How to support domain service stability?

I suggest approaching this challenge with a discussion of Service Oriented Architecture patterns. The primary benefit of these patterns is to manage the dynamic and volatile service contexts and contracts encountered as the architecture is extended in increments. More detailed information on each of these patterns can be found in the references at the end.

Some useful patterns to leverage in this way include:

  • Service Façade supports dynamic service contracts by decoupling the core domain capability from the public contract.  This will allow either the core domain capability or the public contract to evolve independently.
  • Agnostic Capability helps derive new service contexts by making available well-defined yet common domain capabilities as granular services.
  • Protocol Bridging supports protocol volatility by ensuring differences in communication protocols – either type of variant – are handled adequately based on expected protocol volatility, planned or emergent.
  • Compatible Change / Concurrent Contracts generally support ongoing modifications by introducing new capabilities without impacting existing ones, and by creating multiple contracts of the same domain capability for specific consumer types.
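As a minimal sketch of the Service Façade idea (all class and method names here are hypothetical, invented for illustration rather than taken from any framework), the public contract delegates to the core capability so that either side can evolve independently:

```python
# Service Facade sketch: the public contract is decoupled from the core
# domain capability, so either can change between increments without
# breaking the other. All names are illustrative only.

class InventoryCapability:
    """Core domain capability; its interface may change between increments."""
    def on_hand_quantity(self, sku: str) -> int:
        # Stubbed lookup; a real capability would query a data store.
        return {"A-100": 42}.get(sku, 0)

class InventoryServiceFacade:
    """Stable public contract exposed to consumers."""
    def __init__(self, capability: InventoryCapability):
        self._capability = capability

    def check_availability(self, sku: str, quantity: int) -> dict:
        # The facade translates the public request into capability calls,
        # shielding consumers from internal refactoring.
        available = self._capability.on_hand_quantity(sku) >= quantity
        return {"sku": sku, "available": available}

facade = InventoryServiceFacade(InventoryCapability())
print(facade.check_availability("A-100", 10))  # {'sku': 'A-100', 'available': True}
```

If a later increment replaces the capability's internals, only the facade's delegation changes; consumers of `check_availability` are untouched.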

How to support incrementally built integration?

For integration that will be changing through incremental architecture development, introduce the Enterprise Integration Patterns body of work (Hohpe). The primary benefit of these patterns is to stabilize points of integration from Day 1 as they increase in capability and complexity.

Some common refinement patterns to identify and plan for include the following, which I have personally experienced on several large-scale projects:

  • Basic Integration Refinement occurs as a simple integration becomes more complex and capable. An example evolution of integration may include transitions from hard-coding, to file sharing to shared memory to RPC, and finally to messaging. Using other protective patterns, this refinement can evolve with minimal impact to the solution and customers.
  • Message Channel Refinement involves the paths of integration becoming more powerful and robust. For example, message channels may transform from Point-to-Point to Message Bus to the use of Message Bridging, leveraging the EIPs called Point-to-Point Channel, Message Bus, and Messaging Bridge (Hohpe).
  • Message Routing Refinement occurs as the routing mechanisms used to put messages on channels move from relatively dumb to elegant and smart. Some examples I have used to incrementally build out a robust routing infrastructure include content-based routing, message filtering, Recipients List, Splitter & Aggregator, Composed Message Processor and Message Broker. While the core integration may have also required change, it was minimal compared to the message routing capabilities protected by architectural design patterns.
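The "dumb to smart" routing progression can start as simply as a Content-Based Router, one of the EIPs named above. A minimal sketch (channel names and message fields are invented for illustration):

```python
# Content-Based Router sketch (an EIP from Hohpe's catalog): routing logic
# inspects message content and selects a channel, so routing can grow
# smarter over increments without changing the endpoints themselves.

def route(message: dict) -> str:
    """Return the name of the channel a message should be delivered to."""
    if message.get("priority") == "high":
        return "express-channel"
    if message.get("type") == "invoice":
        return "billing-channel"
    return "default-channel"

print(route({"type": "invoice", "priority": "high"}))  # express-channel
print(route({"type": "invoice"}))                      # billing-channel
print(route({"type": "order"}))                        # default-channel
```

Later increments can swap this function for a Recipients List, Splitter/Aggregator, or full Message Broker while the producing and consuming endpoints stay stable.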

How to design capabilities robust enough for incremental development?

This is probably the area where most technicians are most familiar, yet it is worth discussing within your coaching conversation, as it not only ties the other pieces together but also provides more tools that can be applied at all levels. Here we consider GoF Patterns at the capability level.

Some common design patterns that bring great value to incremental delivery include the following from Design Patterns:

  • Factory Method to abstract and extend creation functions. Through a creation interface and inheritance, the implementation determines which class to instantiate. In addition, simpler factory patterns that do not rely on inheritance may also be applicable here.
  • Adapter, Bridge, Decorator, Façade, and Proxy to protect and stabilize structure. Each of these leverages abstraction to stabilize areas where the structure is or may end up being volatile. In most cases, if the extent of true volatility is less than expected, the cost expended for protection will have been minimal.
  • Strategy, Command, Mediator, and Template Method for incremental extension of behaviors. Again, through the power of abstraction, component capabilities and behaviors can be extended as needed with minimal impact to the core solution.
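As one hedged sketch of the behavioral group, here is Strategy applied to an invented pricing example (the domain and names are mine, not from Design Patterns): new behaviors arrive as new strategy objects, leaving the core component untouched.

```python
# Strategy sketch: behaviors are extended incrementally by adding new
# strategy classes rather than modifying the stable core component.
# The pricing domain and all names here are illustrative.

from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    @abstractmethod
    def price(self, base: float) -> float: ...

class StandardPricing(PricingStrategy):
    def price(self, base: float) -> float:
        return base

class DiscountPricing(PricingStrategy):
    """A later increment adds this class; Checkout does not change."""
    def __init__(self, rate: float):
        self.rate = rate
    def price(self, base: float) -> float:
        return base * (1 - self.rate)

class Checkout:
    """Core component: stable while new pricing behaviors are added."""
    def __init__(self, strategy: PricingStrategy):
        self.strategy = strategy
    def total(self, base: float) -> float:
        return self.strategy.price(base)

print(Checkout(StandardPricing()).total(100.0))       # 100.0
print(Checkout(DiscountPricing(0.25)).total(100.0))   # 75.0
```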


My goal is to create a systematic approach to engage and coach new agile architects in the art of the possible when it comes to architectural slicing. Using generally available and proven patterns at each critical layer of an architecture (the domain, the capability, and the integration), we can design, communicate, and build architectures in vertical slices that meet quality attributes and maintain stability within an incremental delivery approach. As a coach, communicating this possibility and providing these tools to new agile architects is critical to empowering the early and frequent creation and delivery of value.

What do you think?

  • What other patterns have you found useful for slicing architecture?
  • How have you communicated this technical challenge to new agile architects?
  • Would such a system be of use to agile coaches in the technical realm?

The posts that follow will go into greater detail to describe my coaching perspectives on the:

  • benefits of specific patterns to slicing,
  • gotchas to consider,
  • concrete examples from enterprise projects and
  • steps for using these tools which form their own pattern of analysis

… all in the hope of sparking some thoughtful discourse and providing coaches with additional tools in the technical realm.


References and Additional Reading


How Agile Helped Me Survive Being a Business Analyst

Written by Jeff Howey Tuesday, 15 April 2014 09:16


Over the last 20+ years, I’ve enjoyed more of my career as a Business Analyst than in any other role I’ve been able to play on a team. The control freak in me loves to practice my Project Management skills, the perfectionist in me always loved getting into Testing roles, the innovator in me loved trying my hand at Product Management. But the perfect balance, for me, is in playing the Business Analyst role.

As the consummate “contractor” for the first 20 years of my career, I was able to play the Business Analyst role in at least 10 different companies with 50 different ways of doing things. For the past 3 years as a consultant and trainer, it’s been my pleasure to work with another dozen or so clients with specific emphasis on Business Analysis skills. I want to share with you how Agile has changed the way I see this role and challenge you to try something new, if needed, to survive in an environment (and industry) undergoing drastic changes in expectations and practices wherever the word “Agile” is uttered.

The Traditional Business Analyst

In the previous century, the typical cadence of a Business Analyst (at least from my vantage point) looked something like this:

  • Spend a month or two with the Project Manager and Project Sponsor developing a charter and business case to present to the leadership team for approval and funding
  • Spend a few months meeting with stakeholders from across the enterprise to define the Business Requirements of the project and to analyze the impacts that needed to be documented
  • Once the 300-page Requirements Package was signed off by 23 people, consult with the design and architecture team as they developed the technical implementation and documented the details – inevitably revising 298 of the original 300 pages multiple times and taking it back to 23 people for their review and sign-off
  • When the epic novel of details was handed to developers, make sure that time was set aside from the new project that I was moved to in order to answer questions and continue revising the 300-page Requirements Package that I started writing 6 months ago
  • As the behemoth of documentation and code was handed off to testers, make sure that I made myself scarce and referred to the 300-page Requirements Package I wrote a year ago as the “source of truth” for what needed to be tested

I’ve never been one to dwell on painful things for long, so this approach just didn’t sit well with how I view life and wrap my mind around how the world works. It’s too slow, drawn-out, full of surprises and rehashing of ideas and details that I would often like to forget. I wasn’t going to survive my career choice given how things were going.

The Agile Business Analyst

When I found Agile, nearly 10 years ago now, that changed to some degree. Life moved at a more acceptable pace. The Agile Business Analyst cadence looked more akin to:

  • Spend a day or two with the Product Manager to identify the Vision of an upcoming release or new product
  • Spend a week or two defining the Epics and Features necessary to build the MVP necessary to meet the needs that the Product Manager envisioned
  • Iterate around the most important and necessary Stories for the release for a week or two with the Product Manager, various stakeholders, and even customers to write User Stories in a simple format that were no more than 1-2 pages in length, and make sure that I fully understood the criteria by which the customer, Product Manager, and users would be measuring successful implementation of a small, compact story
  • Once I had the Acceptance Criteria well-formed and meaningful, work with the delivery team in Backlog Grooming sessions to make sure that the details were clear, understood, estimable, and informed what they needed to do to code and test the small packet of functionality
  • During a Sprint in which a Story was being developed and tested, answering the day-to-day questions was much simpler and rarely required the revision of details, certainly not 298 pages of details
  • On occasion, I would find that there are small gaps in understanding and the “shuttle diplomacy” of clarifying the details between the delivery team and Product Manager or customers would begin, but the process seemed to be faster and simpler than it ever was revising a 300-page Requirements Package

I wasn’t a fan of the “shuttle diplomacy” and frequent Story Review sessions, but I understood that this was a better way to get input from the delivery teams regularly but still “protect their time to code and test” to deliver requirements I had written for them in the previous weeks. Agile, and the use of Stories vs. 300-page Requirements Packages had made my life more bearable!

The Aha Moment?

Then, something unexpected happened. I started working with clients in Story Workshops and changed the format that I’d been following. Instead of meeting independently with Product Managers, stakeholders and customers to define Stories, then meet with the delivery teams to refine the details, I started to bring some members of the delivery teams with me. In a room, I could have the Product Manager, 3-4 Stakeholders from various parts of the business, an Architect, a Tech Lead, and a QA Lead.  Sometimes I’d even have my UX Designer there.

At first, this was unsettling. I was used to facilitating in a controlled environment with just a handful of people at a time and scribing the Story details as they were being discussed. In the first round of story discussions, I could project the Story and show the Product Manager and stakeholders what I was typing, have them validate and wordsmith/improve as we went. The follow-up with the delivery team was a similar approach – projecting what I had crafted and scribing additional details and documentation as we talked through it. But that was simply not going to work in this new venue – not with up to 10 people in a room; some of whom didn’t even speak the same language (“business” vs. “technical”) and all of whom had differing opinions on how far in the details the discussion should go. I thought I’d made a mistake in trying this out and was tempted to go back to shuttle diplomacy as my form of implementing progressive elaboration of requirements.

What would you do?

Lucky for me, I have a network of wise and experimental mentors who advised me not to give up. The benefit of having the people who need the work in the same room as the people who did the work (to write the software) was too valuable to give up on. They encouraged me to continue this approach and advised me to bring help. I’m a control freak, remember. And a perfectionist. So that wasn’t easy. But, I asked for help.



Don’t Estimate Spikes

Written by Andrew Fuqua Tuesday, 8 April 2014 07:51



Don’t Estimate Spikes

I don’t estimate spikes.

Do All Your Spikes in Sprint 0?

If you implement all of your spikes for each release in a "Sprint 0", you could estimate them. If you do, do so consistently so that your velocity will be reliable. Spikes are, like defects, generally harder to estimate correctly relative to user stories. It's best to time-box them.

If you don't estimate spikes, your Sprint 0s or HIP Sprints may have no points. That's no problem. When computing your average velocity and when release planning, you can exclude Sprint 0s or HIP Sprints — don't count them in the divisor when averaging your velocity; don't allocate points to them when release planning.
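The averaging rule can be sketched as follows (the sprint numbers are invented for illustration):

```python
# Average velocity excluding Sprint 0 / HIP sprints: zero-point sprints
# are left out of both the numerator and the divisor.

def average_velocity(sprints: list[dict]) -> float:
    counted = [s for s in sprints if not s["hip_or_sprint0"]]
    return sum(s["points"] for s in counted) / len(counted)

sprints = [
    {"points": 0,  "hip_or_sprint0": True},   # Sprint 0: spikes only, no points
    {"points": 21, "hip_or_sprint0": False},
    {"points": 18, "hip_or_sprint0": False},
    {"points": 24, "hip_or_sprint0": False},
]
print(average_velocity(sprints))  # 21.0 (dividing by all 4 would wrongly give 15.75)
```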

Even if you do all of your spikes in Sprint 0, additional spikes often come along during the release. For those, don’t estimate them for the same reason I don’t estimate defects.

Don’t Do All Your Spikes in Sprint 0?

Spikes in your backlog represent risk. Spikes often correspond to un-estimated user stories and stories with ambiguity — stories where we need to do a research spike before we can estimate. For these reasons, we want to knock out spikes early each release. (I’m assuming periodic releases — Prioritizing spikes in the face of continuous planning and continuous deployment is another story.)

However, spikes do come along later in our releases as unplanned work. And in that case, I still want to execute the spikes ASAP. And I still don’t want to estimate them.

Therefore, for me, spikes don’t stay in the backlog very long. I rarely consider them planned work of the same order as stories. I may have planned stories for the next 4 months, but I don’t want a backlog of spikes that I can’t implement in the next 2 sprints. This is not about discouraging spikes. It’s about resolving them quickly just like I would want to do for defects. It’s about not having a big backlog of unresolved spikes, just like for defects.

One great benefit we get from estimating user stories is that estimating drives clarity about the work. Estimating can help us clarify our acceptance criteria. Being clear about the work, the goal, and the acceptance criteria is just as important for spikes as it is for user stories. But estimating doesn’t always achieve that clarity. Sometimes we get lazy, especially for spikes. So, setting a small time-box limit for spikes is another way to encourage clarity. I like to set explicit policy for spikes. For example, one team’s policy was that all spikes must be completed in 12 hours or less, each must have one explicit question to answer, we must know who the answer goes to, and there must be a “demo”. No vague spikes with vague results.
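A policy like that one can even be checked mechanically. Here is an illustrative sketch (the field names are invented; the rules mirror the example team policy above):

```python
# Sketch of checking a proposed spike against an explicit team policy:
# time-boxed to 12 hours or less, exactly one explicit question, a named
# recipient for the answer, and a planned demo. Field names are illustrative.

def spike_policy_violations(spike: dict) -> list[str]:
    """Return the policy violations for a proposed spike (empty if valid)."""
    problems = []
    hours = spike.get("timebox_hours", 0)
    if hours <= 0 or hours > 12:
        problems.append("must be time-boxed to 12 hours or less")
    if len(spike.get("questions", [])) != 1:
        problems.append("must have exactly one explicit question to answer")
    if not spike.get("answer_goes_to"):
        problems.append("must name who the answer goes to")
    if not spike.get("demo_planned"):
        problems.append("must end with a demo of the result")
    return problems

spike = {"timebox_hours": 8,
         "questions": ["Can the legacy API sustain the expected load?"],
         "answer_goes_to": "product owner",
         "demo_planned": True}
print(spike_policy_violations(spike))  # []
```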


Therefore, whether you do all of your spikes in Sprint 0 or spread them out over the course of a release, I don't want them hanging around for long and, as with defects, I don't estimate them. I choose my poison of NOT including spikes in my velocity. I don't want to inflate my velocity. Defects and spikes are estimated differently from stories, which we estimate relative to each other. It's very difficult to correctly compare a spike that is time-boxed in terms of hours or days to a 1-point story. Likewise with defects — it's a very different type of work than normal story development.




Managing the Impossible with an Agile Budget

Written by Isaac Hogue Tuesday, 1 April 2014 08:13


The Agile Budget

Release planning is without a doubt one of the most challenging responsibilities for agile teams… at least that’s what I’ve experienced both personally and while coaching enterprises through transformations.

Most teams are working to deliver solutions where the question of "what will I get" at the end of a release cannot be left open-ended. Furthermore, these teams don't have unlimited capacity. They are working within what appears to be a constrained iron triangle: cost, time, and scope are all fixed. Mike Cottmeyer's recent blog about the agile home builder discusses this challenge from the perspective of establishing a categorized budget.

It is an approach that I've seen work on several occasions. The process is pretty straightforward; not easy… but also not complex :)

Here's a script that I like to use to help move teams from "this is impossible" to "hey, I think we can deliver!"

Getting Started

Before determining how much of the budget should be spent on features, it's important for the team to understand the goal of the eventual release. To help with this, I usually encourage the team to form a mock press release, announcing their successful release to the world. Typically this includes key areas or attributes of the release that have made it impactful to the reader. These are now key success areas for the release.

Establishing your Budget Categories

Each of these key success areas starts to emerge as a high-level category within the context of a release's budget that can be used to help focus initial scope conversations. From here the key stakeholders can allocate their budget across each of the category areas.

Quick sidebar: the asset that is available for budgeting is usually the delivery system's planning velocity.

Keep your Eye On the Ball (successful release, budget available)

Now that budget categories are set, the teams need to start working through their release plans and refining their needs for the "right" implementation based on the budget available. There are many methods for going about this process, but by far my go-to method starts with high-level acceptance criteria for each category, or feature area, that can be clarified through a mix of example user journeys or system interactions. The funds available to each of the categories should be brought down and further applied to each of these so that the planning team remembers to keep its eye on the ball. A successful release will need to both (1) deliver the functionality needed, and (2) live within the budget that is available.

This is key: without a focus on the budget available (cost), most teams will struggle to limit the scope of a release until it's too late. Early budget constraints help to drive out scope that is not critical to the success of a release.
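The mechanics of the approach can be sketched in a few lines (categories, shares, and point values below are all invented for illustration): allocate the release budget, in points, across the success areas, then check planned scope against each category's share.

```python
# Sketch of a categorized agile budget. The budget "asset" is the delivery
# system's planning velocity, expressed in points for the release. All
# category names and numbers here are hypothetical.

release_budget = 120  # e.g., planning velocity x sprints in the release

allocation = {  # stakeholder-chosen share of the budget per success area
    "Self-Service Signup": 0.40,
    "Reporting": 0.35,
    "Performance & Hardening": 0.25,
}

category_budgets = {name: round(release_budget * share)
                    for name, share in allocation.items()}
print(category_budgets)
# {'Self-Service Signup': 48, 'Reporting': 42, 'Performance & Hardening': 30}

# Planned stories must then live within each category's budget:
planned = {"Self-Service Signup": 45, "Reporting": 50, "Performance & Hardening": 28}
over = {c: planned[c] - b for c, b in category_budgets.items() if planned[c] > b}
print(over)  # {'Reporting': 8} -> scope to cut or trade early, not at release end
```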

What do you think? What are your favorite ways to vary scope to successfully deliver on previous market commitments? For more on this topic, take a look at an earlier blog about calculating the budget, in cost, for agile teams.



Introduction to Business Capability Modeling

Written by Doug Brophy Tuesday, 25 March 2014 07:24


Business Capability Modeling

In any Enterprise-scale Agile transformation, having the right structure and governance to support how work flows through the organization is crucial to having a successful transformation (see “How to Structure Your Agile Enterprise“). Business Capability Modeling is a method LeadingAgile uses to inform and customize our recommendations around this structure and governance. We work closely with our clients to develop their unique business capability model. We then heat map the capabilities in terms of business value, performance, and risk, based on interviews with key stakeholders. The resulting heat-mapped capability model promotes a shared understanding and can be used as a basis for how to structure teams (what to form the teams around) and for prioritizing and scoping work.

The Capability Model is a modular description of a business in terms of desired business outcomes. Desired business outcomes are identified and defined at each level of detail in a hierarchical fashion. So, for example, at the top level of the hierarchy a typical business would have capabilities like "Develop and Manage Products and Services", "Market and Sell Products and Services", "Deliver Products and Services", and "Manage Customer Service". Complementing these operational capability areas are supporting capability areas like "Develop and Manage Employees", "Manage Information Technology", and "Manage Financial Resources". The American Productivity and Quality Center (APQC) is a member-based non-profit that provides a "Process Classification Framework" (PCF) from which these examples are taken. LeadingAgile works with the client in a process of discovery to identify the client-specific capabilities, using the APQC PCF as a reference model.

The capability names are chosen to be action-oriented, written in a verb-noun format, like "Acquire Inventory" or "Authorize Customer". The descriptions are written to define the desired business outcome, like "Maintain enough inventory to support demand" or "Enable registered customers to use the system". There are likely one hundred or more capabilities across 5-10 or more major areas such as those listed above. This gives us a very useful model, since it describes the business in terms of its "Whats" and not its "Hows". We need to avoid the "How Trap" when discovering and defining the capabilities. It's so easy to go down a rat hole of discussion around how a business outcome is achieved; we want to stay focused on what needs to be achieved. This gives us an objective model of the business upon which we can have objective conversations, without getting into how capabilities are achieved, since that ties us to the current implementation.
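As a sketch of what a heat-mapped capability entry might capture, here is one possible data shape (the rating scale and the "hot" rule below are invented for illustration, not LeadingAgile's actual method):

```python
# Sketch of a heat-mapped capability: verb-noun name, outcome-oriented
# description (the "what", not the "how"), and stakeholder ratings for
# business value, performance, and risk. Scale and heat rule are illustrative.

from dataclasses import dataclass

@dataclass
class Capability:
    name: str          # verb-noun, e.g. "Acquire Inventory"
    outcome: str       # the desired business outcome
    value: int         # business value, 1 (low) to 5 (high)
    performance: int   # how well it performs today, 1 to 5
    risk: int          # 1 to 5

    def heat(self) -> str:
        # A high-value capability that performs poorly or carries high
        # risk shows up "hot" on the map.
        hot = self.value >= 4 and (self.performance <= 2 or self.risk >= 4)
        return "hot" if hot else "cool"

cap = Capability("Acquire Inventory",
                 "Maintain enough inventory to support demand",
                 value=5, performance=2, risk=3)
print(cap.heat())  # hot
```

Hot capabilities are natural candidates for what to form teams around and where to prioritize and scope work first.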

As mentioned, the Capability Model provides a modular, outcomes-based description of the business. As such, it is naturally a Service Oriented model or view of the business, so the business processes made up of the capabilities could be implemented in software with a Service Oriented Architecture. Thus, for greenfield development, the Capability Model could be used to help drive a Service Oriented Architecture (SOA) design and implementation. For legacy systems, it can be used to refactor the legacy architecture to be more Service Oriented, in addition to the other uses listed above. To learn more about Capability Modeling and SOA, see the Harvard Business Review paper "The Next Revolution in Productivity" by LeadingAgile's Dennis Stevens, et al.


Also check out a recent post where Mike addressed the question: Is Your Business Model a Good Fit for Agile?
