Complex in Hindsight

When we reflect on a complex event, it often looks predictable in hindsight, and this makes it harder to deal with complexity in the future. This post is about complexity in the Cynefin sense – where it is not possible to see the linkages between events beforehand, but it is possible to see those linkages afterwards.

The two Cynefin domains that have high certainty are called Complicated and Obvious. Complex events from the past often look like they belong in these domains – when facilitating sessions and asking people whether past events were predictable, they will often say that they were. It is a good idea to double-check that this is the case: ‘Are you saying they were predictable with the benefit of hindsight? Or, if we went back in time, was it really an unpredictable set of linkages?’

It is useful to take examples of past events and create a Cynefin Framework from them in order to create a shared understanding among a group of people. The Cognitive Edge Method to do this is called 4-points or 4-tables contextualisation. Once the Cynefin Framework has been created, the labels on the groupings of events become very useful in the future. The group can then say that an upcoming event is like one in the framework, and if it is in the Complex Domain, use an exploratory approach rather than a project management method more suited to the Complicated Domain.

The main ‘gotcha’ with using historical events is this one – that the complex ones do not appear complex in hindsight – so be careful when facilitating that this is not a factor – otherwise, the Cynefin Framework created will have more examples of high-certainty events than actually exist in the environment.

Polar binary paralysis, the current social condition.

So what the hell does that mean?

Well, in our modern society and culture we tend to see things in black and white. There has to be a winner and a loser. We tend to see things in absolutes: all or nothing, off or on – always binary. Now, if you acknowledge this basic fact about our society, then you are closer to seeing the issue.

While a binary view of the world is helpful for decision making and rapid responses, which makes us feel more efficient and therefore superior, it is also a very unnatural state. The world is not binary; yes, you can define parts of it that way, but when looking at the entire system it is more complex and interlaced. There is rarely a true binary condition – physics has understood this and developed a branch called quantum mechanics.

So what is the problem with our binary view?

When we lived isolated and disconnected lives a binary view was easy and extremely helpful but as society becomes homogenised, the binary differences become grey and complex. We enter a Quantum state where there can be complex states, off and on at the same time.

I have always explained that most of us, when faced with a decision, consider there to be only two options, positive or negative (yes or no), but in reality there is always a third option. The third option is what I call a Zero state: instead of positive or negative, we also have Zero. This Zero state can range from ‘wait and see’ to doing nothing, yet its very passive nature makes us dismiss it as an option.

We have been trained to polarise in one direction or another. What this means is that in our modern society there has to be a winner and a loser.

We have all seen it recently with Brexit and the US presidential election. There must be a result, so one side wins with only a fraction of a percent more than the opposition. The winner then tends to claim a mandate from the electorate. So our desire to have a winner means we end up splitting hairs to find one: Polar Binary Paralysis.

There is no middle ground or balanced view only polar opposites which are often shadows and reflections of the other.

Discovery and Re-discovery

We’ve been discussing the concepts of ideation and the workshop activities that we do to generate ideas. These activities use the intent behind ‘brainstorming’ – though I am not recommending the common form; let me explain why.

The method that springs to mind when we mention ‘brainstorming’ is for a facilitator to capture ideas onto a whiteboard while people call them out. There are many issues with using the method in this way, related to good old human nature, such as our tendency to focus on the first theme mentioned or to defer to people in positions of perceived higher status.

There are many better ways to generate ideas from design thinking and other facilitation approaches, such as:

  • Silent brainstorming
  • Rapid sketching
  • Surfacing assumptions and generating hypotheses

What if we are working on a big, important goal? There are many questions that we overlook because it’s easy to make the assumption that once was enough and doing a process of discovery again might generate more work than we desire.

  • Should we facilitate only one of these idea-generation sessions with one group of people?
  • How can we know if we have looked at the goal from enough angles?
  • If we should do it more than once, then how many times and how much time between the sessions?

Perhaps this is the original intent behind governance processes. We know that humans are very creative and are likely to learn much at the beginning of a piece of work that leads to more interesting ideas as we proceed. In an idealistic world, the process of governance is a way of checking in with a bunch of smart people to help us identify key decisions and make those decisions in a timely manner.

Those same smart people can also assist with identification of the needs to re-discover – perhaps they have learned something useful from elsewhere that could help us to reach our goal sooner or obtain better outcomes. This new information might be a reason to facilitate another ideation session – but how many of us would want to set that up? It seems much easier to take the new information and simply work it into our current set of tasks.

How can you tell and why should you revisit old ground?

Things change: information is not static, and what we believe to be facts can also change with time as a better understanding is developed.

So if we acknowledge this reality, then the attitude that we should only plan, then act, denies the fact of change. Imagine a set-and-forget toy on a table: the inevitable outcome is that it will eventually fall off. This is the very reason why biology, engineering, mechanics and programming are full of feedback loops and iterations, so that monitoring and corrections can be made. It is naive to think our projects are somehow exempt from change.
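
The monitor-and-correct idea can be illustrated with a toy feedback loop – the target, drift and gain values below are purely illustrative, not from any real system:

```python
# Minimal sketch of a feedback loop: a value drifts on its own, and a
# correction proportional to the observed error nudges it back on target.
def run_feedback_loop(target=10.0, drift=0.5, gain=0.8, steps=50):
    """Each step the value drifts; monitoring measures the gap and a
    correction acts on it - plan, act, observe, correct, repeat."""
    value = target
    for _ in range(steps):
        value += drift                  # the world changes on its own
        error = target - value          # monitor: measure the gap
        value += gain * error           # correct: act on the observation
    return value

# With feedback, the value stays close to the target despite constant
# drift; with gain=0 ('plan once, never re-check') it simply walks away.
print(run_feedback_loop())
print(run_feedback_loop(gain=0.0))
```

With the correction in place the value settles a small, bounded distance from the target; remove it and the error grows without limit – the toy falls off the table.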

The size, complexity and number of inter-dependencies all increase the need for re-discovery, so we should always be asking ourselves whether it makes sense to continue, or to pause and do some form of re-discovery at regular intervals.

Efficiency and Behaviour in Different Contexts

I like efficiency – when I walk into a shop, I want the sales person to help me find what I’m after and process the sale in a business-like manner. I also like to be treated as a person.

The other day, I was getting an ID card and the person serving me took a little extra time to find out about my means of transport to the site. She then let me know about an extra public transport service that I wasn’t aware of. This is now saving me 10 minutes of travel time each way, which is 20 minutes per day. So a little less efficiency in one transaction has resulted in ongoing efficiency for me on a weekly basis.

When is efficiency good and when is it bad? It’s related to the type of system we are in.

In the obvious domain on the Cynefin framework, we can attain best practice – repeatable processes. Here we can see the cause and effect relationships, observe bottlenecks and optimise the process to make it more efficient fairly easily.

In the complex domain, efficiency is not easy – or is about something very different. Using my ID example, by having a little chit chat with me, the service centre person was able to identify an extra need that I had and supply me with valuable information. The efficiency in that exchange was the use of ‘anticipatory awareness’ – being sensitive to the hints in conversation that could express a need. Great sales and service people are very good at this – if we asked them to document the process they use, it would not be easy. It would not be a step-by-step ‘recipe’ – instead it might be something like a multi-branching if/then/maybe flow chart thing. I’m certain that it would be adapted or added to after almost every interaction.

Another example of efficiency being bad is in farming. Imagine that we created plants that could extract all of the nutrients they needed from the soil and grow until the nutrients in that place were all taken up. This would be a disaster – we could only grow one crop in that space and it would desperately need all sorts of composts and fertilisers before it was useful again. If it can be that bad in a farming sense, perhaps we do not want all of our best practice processes to become super-efficient – it could deplete supplies in ways that we cannot anticipate.

In summary, efficiency can be good when we have processes that sit firmly in the obvious domain and can achieve standardisation and best practice (except if the efficiency leads to resource depletion). In these cases we can save time, money and effort by becoming more efficient. Efficiency can lead to poor outcomes if we try to apply it in the same way to the complex domain – this can waste time, money and effort in the pursuit of gains that are not possible. Instead, here we want to sharpen our awareness and improve our methods of detecting small signals.

Visualising Certainty in Project Estimates

When we are scoping out a project, we are asked to provide estimates for time and cost to certain confidence levels. At the start it might be +/-100%, then +/-30% and then +/-10% (or fixed price).

What we are doing is spending money and effort in order to learn more about what we are going to deliver before we get started on actually delivering it. Using a blanket level of confidence such as +/-30% could be introducing more risk into the project because most of the things that we work on actually have subset levels of certainty. Even at the very start of a project, when we are asked for +/-100% – some of the work will have +/-10% certainty, some +/-30% and the rest will be a bit of a wild guess (+/-100%). The risk that we are introducing by treating the whole estimate as a wild guess is that we will end up spending a lot more on investigation to get to the next levels of confidence than we really need to spend.

There is another way to express our levels of confidence. Imagine a stacked bar graph that could show the amount of work at the different confidence levels. At the start – what we would call a +/-100% level of confidence – it might actually be a very skinny layer at high confidence (+/-10%), a bit more at medium confidence (+/-30%) and a large amount at wild-guess level (+/-100%). Depending on the type of scope falling into the three different sections, we can then make better decisions about the remaining work to estimate the project and whether there is value in starting some of the higher-confidence work earlier.
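
The stacked-bar idea can be sketched in a few lines of code – the work items, day figures and confidence bands below are hypothetical examples, not from any real project:

```python
# Sketch: summarising a project estimate per confidence band rather than
# as one blanket figure. All items and numbers are made-up illustrations.
from collections import defaultdict

# (name, base_estimate_in_days, confidence_band)
work_items = [
    ("user login", 5, 0.10),     # well understood: +/-10%
    ("reporting", 20, 0.30),     # partly understood: +/-30%
    ("integration", 40, 1.00),   # wild guess: +/-100%
]

def estimate_by_band(items):
    """Total base estimate per confidence band - the 'stacked bar' view."""
    totals = defaultdict(float)
    for _name, days, band in items:
        totals[band] += days
    return dict(totals)

def overall_range(items):
    """Best/worst case summed per item, instead of a blanket +/-X%."""
    low = sum(days * (1 - band) for _name, days, band in items)
    high = sum(days * (1 + band) for _name, days, band in items)
    return low, high

print(estimate_by_band(work_items))  # size of each layer in the bar
print(overall_range(work_items))     # tighter than treating all as +/-100%
```

The per-item range here is far narrower than applying +/-100% to the whole 65-day total, which is exactly the information a single blanket figure hides.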

Even when we have an overall estimate to +/-30%, we will still have subsets of scope sitting at the high-certainty and wild-guess levels. By expressing the estimate as an overall +/-30%, we are hiding crucial information about the project which could be used to make better decisions, or serve as indicators of untapped opportunities or risks.

For example, if we are looking at a project estimate that looks like the bar graph on the right in the above picture, we could have a good discussion about the amount of effort required to drive out the uncertainty from the wild guess components. It might be the case that we should start the other components first because that work will inform the wild guess components or completing some of that work will make it easier to determine some aspects of the wild guess work. Without this type of visualisation, we will treat the entire scope as still only having medium certainty and might start some work prematurely or delay useful work and potentially over-invest in effort to gain more perceived certainty.

Managing Constraints

Tasks and activities are easy to manage – we see that a job needs to be done, schedule some time for it and then do it.

Queues are easy to see – waiting in line to pay for some goods when we go shopping, or waiting in line to see a popular tourist attraction. Waiting in queues like these is a good time to ponder why the queue is there and how it could be better managed. Could the shop hire some more cashiers? Could the tourist venue allow more people through at once by setting out the attraction in a different way?

What about the queues that we cannot easily see such as those in our everyday work?

One thing that seems efficient is a regular set of decision-making meetings such as review and approval to proceed with an activity (like recruitment or a project). There could be a better way to manage these decisions – or are these regular meetings actually the most effective way to do it?

How many of us have rushed a presentation together to get it into the next cycle for a decision meeting? What might happen if the decision could be made as soon as we have brought together the information needed to make that decision? Would we have less of a sense of urgency and take longer to do simple things? Would it take less time because we know that the decision will be made as soon as we are ready for it?

The decision to proceed or not is a constraint – up until that time, we are not certain that the project or other action will go ahead. By holding a regular meeting for the decision-making, we are making efficient use of the decision-makers’ time – but we are introducing queues into the process. Imagine that there will be 5 project ideas presented at the next meeting – all 5 have different sets of data that they need to gather in order for the proceed-or-not decision to be made. This data-gathering will have different lead times, and each project will actually be ready for the go/no-go decision well before the meeting – so the length of wait-time in the queue will be different for each of them. How many of us record these wait times? Things that remain invisible cannot be managed – people will be waiting on the decision meeting before they can do the next set of useful actions. We could have 5 teams waiting on one meeting with senior decision-makers – because that is the best way to use the time of those decision-makers.
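
The wait times described above are easy to compute once they are recorded – a minimal sketch, with made-up project names and dates:

```python
# Sketch: measuring the invisible queue between 'ready for go/no-go'
# and the next scheduled decision meeting. Dates are hypothetical.
from datetime import date

meeting_date = date(2024, 6, 28)

# project idea -> date its data-gathering finished (ready for decision)
ready_dates = {
    "project A": date(2024, 6, 3),
    "project B": date(2024, 6, 14),
    "project C": date(2024, 6, 26),
}

def wait_days(ready, meeting):
    """Days a decision sat in the queue waiting for the meeting."""
    return (meeting - ready).days

waits = {name: wait_days(d, meeting_date) for name, d in ready_dates.items()}
print(waits)                               # per-project queue time
print(sum(waits.values()) / len(waits))    # average wait - now visible
```

Once the wait times are visible, the cost of batching decisions into one monthly meeting becomes a number that can be discussed rather than a vague feeling.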

Constraint management is hard to do because it is very hard to notice the constraints. We get used to it taking a certain time to do things and waiting for the next decision point – because that’s how it’s done and it appears to be efficient.

There are a lot of other constraints out there waiting to be noticed – the clues are there when our colleagues roll their eyes or complain about something. We can empathise with them and move on – or we can empathise and then dig a bit deeper.

  • Is this a part of a pattern?
  • Do other people also complain about it?
  • How can we measure the things that people are complaining about?
  • If we could measure them, what would we expect the measurements to be in order to validate or invalidate those complaints?
  • If we are not already capturing the data, why?

Next time you are planning work – notice if you are planning activities, tasks or constraint management. Planning some time each week for observation sounds like a good idea – only by looking at something for a while and reflecting will we find ways to improve.

Meetings and Assumptions

Have you ever been in a meeting that went a bit like this?

Somehow the meeting went off-topic, or a conversation suddenly took over the entire session. What went wrong? Perhaps our conversations were assumption-based rather than fact-based.

Assumptions are very valuable things; they help us to move forward. An obvious example is the assumption that the footpath in front of us is solid – if we doubted this all the time, we would have a lot of trouble walking around, let alone running or jogging.

The less helpful types of assumptions are the ones we make about our own or others’ definitions of words, or levels of understanding about a topic. I can remember having 30 minutes of strong debate about an issue a long time ago, only to discover that we were actually arguing for the same thing – just using different terminology.

When planning meetings and workshops, list out the topics that might cause debate or miscommunication and then ask what assumptions we might be making about those topics. Whenever possible, spend some time validating or invalidating those assumptions before the meeting so that we turn those assumption-based conversations into fact-based ones and use our time more effectively.

Questions as Boundaries

How do we know when we are asking a good or a poor question? One sign of a poor question is when people are spending a lot of effort for seemingly little gain.

‘What should we do?’ can be both.

If it’s been done before and there is a predictable, best practice way to do it, then the obvious answer is ‘The next logical step.’

However, if we are exploring complexity it can be a poor question: there are so many possible options that it can take a long time to decide what to do.

In order to explore something which has limited certainty, it is better to form a view or hypothesis and test it. This then creates a type of boundary around the thing we are trying to understand, to help us make sense of it.

Questions like the following create those boundaries:

  • Will ‘X’ happen if I do ‘Y’?
  • Is it feasible to achieve ‘X’ for ‘Y’ effort?
  • Will people buy our product?

For example, we often start a project by asking how much time and money it will take to make our idea a reality. This can drive a very large amount of effort to find out – when the project happens to be in the complex domain (as per the Cynefin Framework). If we change the question to ‘Is it feasible to make this idea a reality for $X and Y time-frame?’ it puts boundaries around the initial exploration stage and we can avoid large amounts of wasted and unfocussed effort. An example can be found in my previous post ‘Why do we Estimate?’

In summary, check which type of system the question relates to according to the Cynefin Framework. Then check if the question is helping to drive useful logical work or useful exploration, and if not, change the question.


There are companion posts from Marc Burgauer and Trent Hone about this topic – these 3 posts are our initial thoughts based on some Twitter interactions.

For the purpose of this post, the term ‘automation’ should be considered in the broadest possible sense.

If we think about software supporting and automating tasks such as ordering products and producing bills, then continuous delivery is like automating parts of the software delivery process. I thought that it might be a useful thought-activity to consider other forms of automation and see if that generates any insights – especially around how we decide what things are safe to automate and what things need to have human judgement applied.

Manual Refrigeration

One form of automation is refrigeration – imagine if it were manual. We would need to keep an eye on the temperature and add ice whenever it rose above a certain level.

Well worn path

In a broader sense, even pathways might be considered an automatic guide for where to step next and a convenient way to find places.

In the above two examples, we might make the decision about where to build a path once we see a worn track where many people have walked. We might decide to automate refrigeration because transporting and adding ice manually is very labour intensive.

On the safety side – a worn dirt track is not as safe as a textured concrete path because in the wet, it might be slippery. Also, hauling ice through the house risks injury and if we forget to add ice at the right time, our food will become contaminated and make us sick.

If we abstract these concepts, here are some good reasons to automate something:

  • We have already done it many times and will continue to do it the same way
  • We know exactly what we need to do and it is hard to do it manually
  • Problems are happening due to lack of care

There is still a bit more pondering to do with these thoughts and I am looking forward to further inputs and insights from Marc Burgauer and Trent Hone  – thank you to both.



Lean Kanban United Kingdom 2013

I enjoyed attending LKUK13 and would like to share some of the snippets that I found interesting. These are from my notes and are my interpretations – any mis-interpretation is entirely my responsibility and I am happy to receive any corrective feedback.

Mike Burrows – Kanban is like Onions!

  • If we organise the work, we make it possible for people to re-organise around the work
  • Ask if any single improvement can benefit the Customers, team and organisation – the improvement is good if all 3 can benefit from it
  • To help with paying attention to flow, keep work sized so that movement is visible every day
  • Small acts of leadership – such as the routine from Toyota – leaders can ask
    • What is the process?
    • How can we see it’s working?
    • How is it improving?
  • Agreement from people versus agreement between people

Liz Keogh – Cynefin in Action

  • Frog thinking vs bicycle thinking – we can take a bike apart and put it back together, and it will work again – not a frog
  • We’re discovering how to discover stuff by doing it
  • Deliberate discovery – Risk (newest things) first – tell the story that’s never been told
  • Focus on how we can quickly get feedback

Edward Kay – Multi-client Kanban

  • The ‘ready’ column makes a good handover point
  • Use ‘Help’ tokens to indicate that you need assistance with a story – either with context or skills – so that you don’t interrupt others and they can select their own time to help you

Torbjörn Gyllebring – #NoMetrics – the ephemeral role of data in decision making

  • Lines of code is the best metric (and everyone hates this) – great for archaeology, but it’s all from the past
  • Ethics – in a position of power, you start to influence people – do no harm
  • Our customers are those whose lives we touch
  • Clarification should be at the centre of any measurement effort
  • Data needs to always be relevant
  • Informational measures are useful – but it depends on how people perceive it
  • ODIM – a good model – Outcomes, then Decision, then Information, then Metrics – use the metric and then discard it
  • Know why you need the data

Yuval Yeret – Kanban – a SANE way towards agile in the enterprise

  • When trying to change culture, engage in marketing – identify and nurture opportunities
  • Start with leaders and managers
  • Need to balance between prescriptive guidance and no guidance
  • After a change, allow time to stabilise and recharge – then provide good reasons to get out of recharge mode

Chris McDermott – The Other Side of Kanban

  • Encourage shared understanding – managers act as dating agents and chaperones
  • Add a ‘ready to celebrate’ column onto the board

Stephen Parry – How to develop Lean leadership and create an adaptive, learning and engaging organisation

  • Reciprocity only works when there is a sincere and genuine feeling – does not work if there is a feeling of manipulation – It can be negative
  • ‘Dont bring me problems, bring me solutions’ is an example of leadership abandonment not empowerment

Chris Young – Models, Maps, Measures and Mystery

  • Asked why customer approval waiting times went up a lot – led to the idea to have customers sit with the developers
  • At one stage the customer started leading the standups
  • Added an extra column to personal kanban board ‘didn’t happen’ next to the ‘done’ column

Jabe Bloom – What is the value of social capital?

  • A value stream is a linear view of the social network
  • Swarms – form temporary teams on high-value problems with volunteers
  • Emergent slack – have 20% of time spent on interruptible tasks (tasks that no-one is waiting on)
  • Social capital is the ability to distribute and leverage trust (reciprocity)
  • In a low social capital environment we use consensus models
  • In a high social capital environment we trust each other to make decisions
  • Authority removes social capital (consumes it)

Jim Benson – Beyond Agile

  • Flow if you can, pull if you must (pull systems are all remedial)
  • No recipe for success – just a recipe for not likely failing
  • Trying to do agile versus delivering value

Zsolt Fabok – I Broke the WIP Limit Twice, and I’m Still on the Team!

  • If you understand small, incremental evolutionary changes and pull, then you can deduce the rest
  • The goal is to have a stable system – easier to improve it

Alexis Nicolas – Management hacking in progress

  • Managers should focus on learning. We can live with problems for 1 or 2 days because we have better risk management
  • Change is viral – not prepared planning – we can design a viral change

Troy Magennis – Cycle Time Analytics – Fast #NoEstimate Forecasting and Decision Making

  • Statistics is more of a logic problem than a maths problem
  • When we forecast, state the level of uncertainty – ask what point would sway the decision
  • Every choice we make changes the outcome – Decision induced uncertainty
  • Diagnostic models allow us to run ‘what if’ scenarios
  • Estimating what could go wrong is more important
  • We should update our forecast each time we finish a piece of work because we have learnt more
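
The cycle-time forecasting idea from this talk can be sketched as a simple Monte Carlo simulation – the cycle-time history and item count below are assumptions for illustration, and the sketch treats work as strictly sequential (one item at a time):

```python
# Sketch: forecasting by randomly sampling from historical cycle times,
# in the spirit of the talk. History and remaining-item count are made up.
import random

random.seed(1)  # reproducible illustration

# observed cycle times (days per finished work item) - hypothetical
history = [2, 3, 3, 4, 5, 5, 6, 8, 13]
remaining_items = 20

def forecast(history, n_items, trials=10_000):
    """Monte Carlo: sample one cycle time per remaining item, sum them,
    repeat many times, then read off percentiles of the totals."""
    totals = sorted(
        sum(random.choice(history) for _ in range(n_items))
        for _ in range(trials)
    )
    # report a range of outcomes, not a single point estimate
    return {p: totals[int(trials * p / 100)] for p in (50, 85, 95)}

print(forecast(history, remaining_items))  # days at 50/85/95% confidence
```

Re-running the forecast after each finished item (appending its cycle time to the history) is the ‘update as we learn’ step from the final bullet above.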