In my previous post Agile Assessments, I wrote about reasons to do an assessment and considerations when doing one. In this post, I’ll continue the assessment topic with focus on Rating Scales and Frequency.
Select a Rating Scale
When conducting an agile assessment or self-assessment, use a rating scale that raters can readily understand and apply to the practice or capability being rated.
A scale of 1 to n is so common it has become part of our vocabulary, and for good reason: it's easy to understand and straightforward to apply.
Including a qualitative description of each number in the scale will improve the usefulness of the rating scale. For example, if a rating of 2 is described as “Apprentice” or “Beginner”, the category is more likely to mean the same thing to different people. And if we expand the definition further, the raters are even more likely to apply the rating similarly. Let’s look at this further in the following example:
| Numeric Only | Numeric + Minor Qualification | Numeric + Detailed Qualification |
|---|---|---|
| 0 | 0 – Not started | 0 – Not started – The practice/capability is not yet understood and/or is not yet in use by the team |
| 1 | 1 – Apprentice | 1 – Apprentice – Beginning to learn and apply the practice/capability. Guidance from experts is recommended. |
| 2 | 2 – Journeyman | 2 – Journeyman – The practice/capability is well understood and is sustainable by the team. |
| 3 | 3 – Expert | 3 – Expert – The practice/capability is fully internalized. The team can teach and mentor others. |
Each level of qualification brings more clarity and shared understanding to the numeric rating. Keep this in mind when you select a scale, and include a description for each level that helps raters apply the scale with some degree of consistency. Consistent interpretation of the rating scale is important to a team as they continue to self-assess periodically. It also matters to the organization when measuring progress across teams to get an overall view of an agile transformation.
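One lightweight way to keep the wording consistent is to encode the scale, with its labels and detailed descriptions, in a single shared structure. Here is a minimal sketch in Python (illustrative only; the dictionary and `describe` helper are my own names, not part of any assessment tool):

```python
# Shared definition of the 0-3 scale so every rater sees the same wording.
RATING_SCALE = {
    0: ("Not started", "The practice/capability is not yet understood "
                       "and/or is not yet in use by the team."),
    1: ("Apprentice", "Beginning to learn and apply the practice/capability. "
                      "Guidance from experts is recommended."),
    2: ("Journeyman", "The practice/capability is well understood and is "
                      "sustainable by the team."),
    3: ("Expert", "The practice/capability is fully internalized. The team "
                  "can teach and mentor others."),
}

def describe(rating: int) -> str:
    """Return the shared label and description for a numeric rating."""
    label, detail = RATING_SCALE[rating]
    return f"{rating} - {label}: {detail}"

print(describe(2))
```

However the scale is stored, the point is the same: one canonical set of descriptions, referenced by everyone, rather than each rater carrying their own interpretation of what a "2" means.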
Using Colors and Symbols
Graphic or visual representations, used in addition to or instead of a numeric scale, can bring life to the assessment process and make the results quick to interpret, particularly if you have an open work space in which to display them. The standard traffic light colors, plus black, offer a simple translation:
◉ Black – Not Started
◉ Red – Practice/capability has started but needs significant improvement
◉ Yellow – Practice/capability is being applied consistently
◉ Green – Team has fully embraced practice/capability and internalized it into culture
Star ratings are familiar from online reviews of movies, books and restaurants, and we all understand that a ‘5 star hotel’ is very luxurious and comes with many amenities. Our assessment star rating scale would look something like this:
☆☆☆☆☆ – Not started
★☆☆☆☆ – Just beginning use of practice/capability
★★☆☆☆ – Practice/capability is understood but not used consistently
★★★☆☆ – Practice/capability is used most of the time
★★★★☆ – Demonstrable evidence practice/capability has been mastered
★★★★★ – Teaching and mentoring others on practice/capability
Like the traffic light colors, it's a visual rating we're used to seeing and doesn't require much thought to interpret. The stars do represent numbers, though, so you can use them mathematically and roll scores up to a major category level or an overall assessment score.
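The roll-up described above is just an average of averages. A minimal sketch, using entirely hypothetical assessment data and category names:

```python
# Treat star ratings as numbers (0-5) and roll them up:
# practice stars -> category average -> overall score.
from statistics import mean

# Hypothetical self-assessment results: category -> {practice: stars}
assessment = {
    "Planning": {"Backlog refinement": 4, "Sprint planning": 3},
    "Engineering": {"Continuous integration": 5, "Test automation": 2},
}

# Average the practices within each category, then average the categories.
category_scores = {cat: mean(p.values()) for cat, p in assessment.items()}
overall = mean(category_scores.values())

for cat, score in category_scores.items():
    print(f"{cat}: {score:.1f} stars")
print(f"Overall: {overall:.2f}")
```

Note that a simple mean weights every practice equally within its category; if some practices matter more to your transformation than others, a weighted average may be a better fit.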
My preference when conducting an assessment for an organization where I am the rater is to use a numeric scale that includes a clear and detailed qualitative description. This helps me take more time to consider the criteria and the circumstances. When guiding a team on their self-assessment, I've observed that the graphic/visual format takes away the reluctance some individuals have when choosing a number, allows the process to go faster, and achieves a result closer to reality. Personally, I find that I can overthink numbers but don't second-guess so much when selecting the color Yellow or 4 Stars. You may not find this true if you're not a visual thinker.
How Often Should You Reassess?
If you look back at some of the reasons to do an assessment from my Agile Assessments post, many of them set the expectation that it's a recurring activity. A few from the list include: “Establish Baseline”, “See how you're tracking against your transformation goals”, and “Determine if you're ready to move to next level”. While ideally you're already using metrics to improve your processes in general, an agile assessment instrument has a more specific purpose and lifespan in support of an agile transformation. Once a team and/or organization reaches a greater level of maturity in their adoption and agile becomes “the way you build software”, the assessment instrument will largely cease to provide value. That doesn't mean you stop continuously improving, but we have the retrospective for that.
The frequency of the assessment should take into consideration the rate of change you're expecting between assessments. For example, if I assess a new team, send them to training and leave them mostly alone to figure it out, I'm not going to anticipate a great deal of improvement in the next 30 days, so I may wait a couple of months to reassess. Contrast that with a team that is given training, has a scrum master or other leader on the team with agile experience, and also has continued, focused coaching. I would expect to see change in a month's time and will want visibility into that progress via the assessment results. Every few sprints is a good general guideline when you're getting started. Once you set the interval, stick to it.
Most important to me is that someone uses the assessment results for a clear purpose. I've seen teams complete their self-assessment on a regular cadence because the organization required it, but do absolutely nothing with it. The company was genuinely interested in understanding how its transformation was progressing and where teams needed support, yet the assessment results weren't being used to gauge progress or identify areas for improvement. If the results aren't being used or understood, it might be time to step back and re-evaluate.
On to Methods and Results
You can look forward to my next post where I’ll share a few different approaches for administering an assessment and how to report your results.