Story points, business points, T-shirt sizes – one way or another, we try to frame our work in comparable terms. In the classical world it was effort that was estimated; in the agile world it is the complexity of a task. I measure how many points I can deliver in an iteration – and then assume that the future behaves exactly like the past, i.e. that it is self-similar. Based on this assumption – the future is a repetition of the past – predictions are made. Unfortunately, we fall for a convenient fallacy here, because we thereby also immediately rule out as impossible everything we do not yet know.
Some predictions come across as light-weight and casual (“we have a velocity of 200”), others as heavy-weight and authoritative (“with a probability of 85% we will complete the next 30 tasks in 20 days or fewer”). Either way, they are always based on linear extrapolation – the difference between yesterday and today is taken to be the same as the difference between today and tomorrow. I find this strange, because agility is introduced precisely because we live in a complex world and want to make exactly that complexity manageable. So where do we get the idea that we can succeed in a dynamic environment with non-complex methods?
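Forecasts of the “85% within 20 days” kind are typically produced by Monte Carlo simulation over a team's historical throughput – and note that even this resamples the past, which is exactly the self-similarity assumption in question. A minimal sketch, with invented sample data:

```python
import random

# Historical daily throughput (tasks finished per day) -- invented sample data.
history = [0, 2, 1, 3, 0, 1, 2, 2, 1, 0, 3, 1]

def days_to_finish(backlog, throughput, rng):
    """Simulate one possible future by resampling past daily throughput."""
    days, done = 0, 0
    while done < backlog:
        done += rng.choice(throughput)
        days += 1
    return days

rng = random.Random(42)
runs = sorted(days_to_finish(30, history, rng) for _ in range(10_000))

# The 85th percentile answers: "with 85% probability, 30 tasks take at most N days".
forecast = runs[int(0.85 * len(runs))]
print(f"85% confident: 30 tasks within {forecast} days")
```

The simulation captures the variance in past throughput, but every simulated future is still stitched together from past days – it cannot anticipate anything the team has never seen.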
Agile methods counter complexity by cutting it into small, manageable pieces – temporally in sprints, quantitatively by setting limits, or qualitatively by sequencing. Once this is done, however, we suddenly assume that the delivery of those pieces over time resembles a steady linear increase. Yet if we look at the delivery performance of agile teams, we always see an S-curve. For a functional team, it starts in the lower area of the y-axis, then records an exponential (!) increase before finally flattening out again. A dysfunctional team, by the way, also produces an S-curve – but one that starts in the upper area of the y-axis and then falls exponentially – so in every respect, dysfunctional teams produce question marks.
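The S-curve described here is well modelled by the logistic function; a small sketch (capacity, rate and midpoint values chosen purely for illustration) shows why its early phase looks exponential and its late phase flat:

```python
import math

def logistic(t, cap=100.0, rate=0.5, midpoint=10.0):
    """Cumulative deliveries over time: exponential early, flattening late."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, growth compounds: successive values keep a roughly constant
# ratio (about e**rate), the signature of exponential growth.
early = [logistic(t) for t in range(0, 4)]
ratios = [b / a for a, b in zip(early, early[1:])]
print(ratios)

# Late, growth stalls: successive differences shrink toward zero.
late = [logistic(t) for t in range(15, 19)]
diffs = [b - a for a, b in zip(late, late[1:])]
print(diffs)
```

Fitting a straight line through either end of such a curve, and extrapolating, is exactly the forecasting mistake described above.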
To make predictions in an exponential world, we need to start thinking logarithmically. We all know the story of the grains of rice on the chessboard – doubling from square to square – 1, 2, 4, 8 and so on. How much rice sits on square 64? If a grain of rice weighs 0.03 grams, then square 64 alone holds about 277 billion tonnes of rice, and all squares together roughly 550 billion tonnes – a thousand times more than the global rice yield of the year 2018/19. Moore’s law about the doubling of transistors in integrated circuits – i.e. the technical side of digitalisation – follows the same rule. Transferred to the chessboard, we are on square 26. If we drew a bar chart of the change from square to square, only one bar would be visible – that of square 26. The previous 25 would look like a flat line. The difference between square 25 and square 26 is so large that everything before it is compressed into a line – and therefore appears linear.
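The chessboard arithmetic is easy to verify (assuming, as above, 0.03 grams per grain):

```python
# Grains on the chessboard: one grain on square 1, doubling on each square after.
GRAM_PER_GRAIN = 0.03  # assumed average weight of one rice grain

def grains_on_square(n):
    return 2 ** (n - 1)

square_64 = grains_on_square(64)
total = 2 ** 64 - 1  # geometric series: 1 + 2 + 4 + ... + 2**63

tonnes_64 = square_64 * GRAM_PER_GRAIN / 1e6   # ~2.77e11 -> ~277 billion tonnes
tonnes_all = total * GRAM_PER_GRAIN / 1e6
print(f"square 64:   {tonnes_64:.3e} tonnes")
print(f"all squares: {tonnes_all:.3e} tonnes")
```

Note the series property: the last square alone holds more than all previous squares combined – which is why every earlier square looks negligible in hindsight.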
Trying to derive predictions about the future from this apparent linearity ignores the fact that we are on an S-curve, and that square 27 will be so massively different that square 26 will simply join the series of apparently linear progressions. The time of micromanagement and the all-knowing über-leader is over. Only as an organisation can we still know the grains of rice on our square; as individuals we no longer have a chance. It is no longer the job of leadership to describe the nature of each grain of rice. The task will be to recognise the difference between square 26 and square 27 – and to have ideas on how to expand one’s own scope for action out of that change.