Story points should not be used as a metric, nor should relative complexity. Demonstrable progress is a function of empirical quality measurements.

2019-01-18



References:
Jim Highsmith, "Agile Project Management"
Robert Brinkerhoff, "Systems Thinking in Human Resource Development"

Story points should not be used as a metric, nor should relative complexity. Demonstrable progress is a function of empirical measurements of customer-perceived value.

Story points are an inadequate measure

Story points are subjective to each team and are hardly ever sized, but rather estimated. Estimates are usually given in hours, days, or ideal days, all of which are flawed: they do not reflect reality, and they cover up systemic problems such as value stream map (VSM) inefficiencies. Moreover, attaching a number to a story’s estimate introduces a slippery slope where quantities are gamed, encouraging a market for velocity points.

Relative sizing is the wrong measure

Relative sizing does not concern itself with estimates but instead compares complexity between stories. Depending on the VSM, complexity may or may not correlate with time and effort. Sizing, as opposed to estimation, is usually expressed as “small”, “medium”, or “large”, without a numeric attribute, and is a private delivery tool for the team, not intended for outside consumption.

Relative sizing is a step forward as a sprint-level gauging technique, but it is agnostic to effort and team efficiency. To give product managers a forecast, delivery leads and business analysts attempt to slice stories so that they are all roughly the same size. Over time, this can indicate the progress of a feature as it is broken into “right-sized” stories.

Measurements should not be based on story points, as those are flawed, skewed, and subjective. For different reasons, but to the same effect, sizing based on relative complexity cannot be used either: its currency is complexity, which reflects neither effort nor time. Moreover, both techniques depend on the existing delivery process and on the current state of the codebase, including its technical debt, cyclomatic complexity, and other quality indicators, which may lead to a false measurement.

Use value points instead

Value points were introduced and discussed by Jim Highsmith. His premise is that while classic constraints still apply, outcome indicators should be the primary measurement, focusing on product vision, business objectives, and capabilities.

The product team maps these to features and stories, which serve as criteria for backlog prioritization and as a measurement of the product’s progress.
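As a rough illustration of that mapping, backlog prioritization by value points could look like the following minimal sketch. The feature names, the point figures, and the simple sort are assumptions for illustration, not a prescribed scale or method:

```python
# Hypothetical backlog: features scored with value points by the product team.
# Names and scores are illustrative assumptions only.
features = [
    {"name": "guest checkout", "value_points": 13},
    {"name": "dark mode", "value_points": 3},
    {"name": "saved payment methods", "value_points": 8},
]

# Prioritize the backlog by projected customer value, highest first.
backlog = sorted(features, key=lambda f: f["value_points"], reverse=True)

for feature in backlog:
    print(f"{feature['value_points']:>3}  {feature['name']}")
```

The point is only that the ordering criterion is projected customer value, not delivery effort.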

At the heart of this exercise is, of course, the definition of value. The frequently used revenue measure is insufficient, as it captures the company’s perception of value rather than the customers’. Another indicator of value has been proposed by Robert Brinkerhoff, who states that valuable initiatives produce an observable change in someone’s way of working. I think this brings us closer to visualizing value from the customer’s perspective, and farther from the company’s. The premise is that greater customer satisfaction will inevitably lead to greater profits for the company.

Quantifying value is the task of projecting the benefit of producing a feature for customers, for some values of “benefit” and “customer”. Measuring success is achieved by semantic monitoring of the features in production, and survival is guaranteed by adaptation based on the results.
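One way to picture that monitor-and-adapt loop is a check comparing an observed production metric against the benefit projected when the value points were assigned. This is a sketch under assumptions: the uplift figures, the metric, and the 10% tolerance are all illustrative, not part of any prescribed technique:

```python
# Sketch of semantic monitoring: after a feature ships, compare an observed
# production metric against the projected benefit. Tolerance is an assumption.

def value_realized(projected_uplift: float, observed_uplift: float,
                   tolerance: float = 0.10) -> bool:
    """Did the feature deliver (close to) the benefit we projected?"""
    return observed_uplift >= projected_uplift * (1 - tolerance)

# Example: we projected a 5% conversion uplift; production shows 4.8%.
if value_realized(projected_uplift=0.05, observed_uplift=0.048):
    print("value realized - keep and build on the feature")
else:
    print("value not realized - adapt: rework, pivot, or retire the feature")
```

The “adapt” branch is where survival comes from: the result of the measurement feeds back into the backlog.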

Organize to measure value-based outcome, in production

In high-performing companies, product development is governed by outcome-based measurements, while investment risk is kept low so that customer satisfaction can be sampled and adjustments made in order to stay relevant in the market.

Accountability for resulting outcomes cannot be achieved unless there is a tight symbiotic relationship between all roles within a unified product development team. This relationship implies that the entire team shares the same goals, duties, and accountability.

Teams lacking that relationship cannot be held accountable for the outcome, nor for the value they generate, since they only play a partial role in the VSM. For example, a team working only on QA or only on development cannot be solely accountable for a higher conversion rate.

In conclusion, the team can be held accountable for its efficacy, or for the business outcomes, only if it participates in the whole VSM.

Under this assumption, organizations should measure progress in “value points”, and the team’s velocity becomes a measure of how fast valuable features reach production, not of how busy the team was adhering to a meaningless number.
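Such a velocity could be computed as the sum of value points of features verified in production over a period. A minimal sketch, where the features, dates, and point values are invented for illustration:

```python
from datetime import date

# Hypothetical record of features verified as delivering value in production.
# Feature names, dates, and value points are illustrative assumptions.
delivered = [
    {"feature": "guest checkout", "value_points": 13,
     "in_production": date(2019, 1, 7)},
    {"feature": "saved payment methods", "value_points": 8,
     "in_production": date(2019, 1, 16)},
]

def value_velocity(delivered, start: date, end: date) -> int:
    """Value points verified in production during the period."""
    return sum(d["value_points"] for d in delivered
               if start <= d["in_production"] <= end)

# Velocity for January: 13 + 8 = 21 value points.
print(value_velocity(delivered, date(2019, 1, 1), date(2019, 1, 31)))
```

Note that the clock stops at “verified in production”, not at “done in the sprint”: that is the difference between this gauge and story-point velocity.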



Filed under

Agile
Governance
Product Management
Project Management
Risk Assessment
Value
