Location: AgileCoachCamp Wiki > DoMetricsMatter

Do Metrics Matter?


Led by: Greg Frazer
Participants: Camille Bell, Saleem Siddiqui, Aimme Keener, Ken Pugh, Catherine Louis, and Larry


Summary



The general notion of this session turned out to be that metrics are not for the faint of heart, but also not to be ignored. Even in their absence, teams will create their own, or try to interpret them from high-level goals. Without clear, constant guidance, any worker will tend to use whatever they are measured by as their default direction.

Spending time contemplating how a metric contributes to business value can pay huge dividends, as knowledge workers are intrinsically motivated by how they measure up. The same effect can also work against value when old, free, or misaligned metrics are in place (see the horrible examples below).

In the scatter plot of potential metrics, hard-to-measure valuations of business value (such as speed to market or actual customer value) and easy metrics of a product's health, its ability to absorb future change, tended to be the big winners. Easy metrics that focused on only a small part of the current development process tended to be the most dangerous.


Things to consider



  • Pay attention to 'characteristics' of the metric
  • Cost should be close to zero (but not completely free)
  • Metrics should have ROI
  • Use metrics that can be 'positively' gamed
  • Metrics are quite often not for the faint of heart
  • Visibility can drive action and force the team to assess the metric's value
  • Where possible, favor leading indicators



Metric Attributes



  • Measurement
  • Leading or Lagging Indicator
  • Easy or Hard to track/maintain
  • Frequency of measurement


Good Reasons for metrics



  • Metrics driven from Business Goals
  • Favor metrics spawned by the team (knowledge workers)
  • Track actual customer benefit


Bad Reasons for metrics



  • We have always tracked XYZ metric, therefore we should keep doing it
  • Awareness of measurement can affect behavior in misguided ways
  • Too much precision can be demoralizing, painful, and wasteful
  • People will consistently find creative ways to game the metric
  • All metrics that are easy or free to track should be tracked
    • Horrible example: Testers being tracked on number of bugs they can find
    • Horrible example: Coders being tracked on how many lines of code they add
    • Horrible example: Number of calls a Call Center representative completes


Metric Samples


Each metric's attributes appear in the first bullet under the metric, in Measurement, Difficulty, Leading/Lagging, Frequency order.

  • Tested Feature Metric
    • All or nothing score, Easy, Lagging, Automated
    • Measuring how Done-Done a feature is (Automatic Tests, Code Integrated, Deployed, etc.)

  • Lean Cycle Time
    • Days per 'story', Easy, Lagging, Frequently
    • Encourages positive gaming
    • Focuses on speed of entire process and not just local optimizations
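    Not from the session, but as a minimal sketch of how cycle time per story might be computed from start and finish dates (the story log below is hypothetical):

    ```python
    from datetime import date

    # Hypothetical story log: (story id, start date, finish date)
    stories = [
        ("A", date(2009, 3, 2), date(2009, 3, 5)),
        ("B", date(2009, 3, 3), date(2009, 3, 9)),
        ("C", date(2009, 3, 6), date(2009, 3, 8)),
    ]

    # Cycle time = calendar days from start to finish of each story
    cycle_times = [(finish - start).days for _, start, finish in stories]
    average_cycle_time = sum(cycle_times) / len(cycle_times)

    print(cycle_times)         # days per story
    print(average_cycle_time)  # average days per story
    ```

    Because it spans the whole flow of a story, a rising average flags delays anywhere in the process, not just in one hand-off.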

  • Feature Usage metric
    • Either % of users or # of times used per user, Medium, Lagging, Less Frequent
    • Used for future prioritization of enhancements, to encourage Agile Vendor Contracts, and simplification efforts

  • Lean Kanban Queue Lengths
    • Integer, Easy, Leading, Automated
    • Predicts short-term bottlenecks, throughput capacity and can be measured against Takt time (customer demand)
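    As a rough illustration (not from the session), Takt time is commonly computed as available working time divided by units of customer demand; the numbers below are hypothetical:

    ```python
    # Takt time = available working time / units of customer demand.
    # All numbers here are hypothetical, for illustration only.
    available_minutes_per_day = 8 * 60  # one shift of working time
    customer_demand_per_day = 12        # stories/features demanded per day

    takt_time = available_minutes_per_day / customer_demand_per_day
    print(takt_time)  # 40.0 minutes available per unit of demand

    # Processing slower than Takt time is a leading indicator that the
    # queue in front of this step will grow into a bottleneck.
    processing_minutes_per_item = 50
    if processing_minutes_per_item > takt_time:
        print("Bottleneck warning: processing is slower than customer demand")
    ```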

  • Value Points or Real ROI metric
    • Dollars or Point scheme based on $$$, Hard to Very Hard, Lagging, Infrequently
    • Can be the single most important metric for knowing that you generate business value. Often only possible with a more mature relationship between product development and the business/customers. Can be of immense help in future product planning.

  • Code Coverage
    • % of methods, Easy, Leading, Automated
    • Can be used to predict product quality, as well as future product agility towards enhancements

  • Testing Red/Green Metric
    • All or nothing score, Easy, Lagging, Automated
    • Look for IDEs with this integrated

  • Variation in estimation (or alternately velocity)
    • Standard Deviation, Easy, Lagging, Less Frequent
    • Used to encourage breaking down large stories and to improve predictability
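    Not discussed in detail in the session, but the standard-deviation measurement might be sketched like this, using hypothetical velocity numbers:

    ```python
    import statistics

    # Hypothetical velocity (points completed) over recent iterations
    velocities = [21, 34, 18, 40, 22]

    mean_velocity = statistics.mean(velocities)
    variation = statistics.stdev(velocities)  # sample standard deviation

    print(mean_velocity)
    print(variation)
    ```

    A shrinking standard deviation relative to the mean suggests estimates (or velocity) are becoming more predictable.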

  • Product's Architectural Health (Conway's Law)
    • Assessment score, Medium, Lagging, Less Frequently
    • Similar to Code Coverage, but for infrastructure and hardware setup; can be used to predict product durability, as well as extensibility and robustness