Safety and Information Flow

Having read Sidney Dekker’s book The Field Guide to Understanding ‘Human Error’, here are some of my emerging reflections and connections. As my thinking develops, this post and its series will likely change over the next few weeks.

A consistent theme throughout the series is the concept of information flow and the safety culture within organisations. Information flow not only impacts physical safety in many industries; it is also an indicator of organisational effectiveness, regardless of industry. As Ron Westrum states, “By examining the culture of information flow, we can get an idea of how well people in the organization are cooperating, and also, how effective their work is likely to be in providing a safe operation.”

Book summary

Based on my notes from the book, here is a summary (aided by Google Gemini).

The “Field Guide” distinguishes between the “Old View” and the “New View” of human error. In the Old View, human errors are the primary cause of problems, and the focus is on controlling people’s behavior through attitudes, posters, and sanctions.

In the New View, human errors are seen as a symptom or consequence of deeper organizational issues, and the focus is on understanding why people behaved in a certain way and improving the conditions in which they work.

The Field Guide provides examples of practices that indicate an organization adheres to the Old View, such as investigations and disciplinary actions being handled by the same department or group, and behavior modification programs being implemented to address human error. These practices tend to suppress the symptoms of human error rather than addressing the root causes.

Information Flow

In this article of the series, I relate Ron Westrum’s concept of information flow to ‘human error’.

In Ron Westrum’s paper ‘The study of information flow’, he writes:

The important features of good information flow are relevance, timeliness, and clarity. Generative environments are more likely to provide information with these characteristics, since they encourage a “level playing field” and respect for the needs of the information recipient. By contrast, pathological environments, caused by a leader’s desire to see him/herself succeed, often create a “political” environment for information that interferes with good flow.

Westrum, The study of information flow, Safety Science 67 (2014) 58–63

As Westrum describes in his paper, generative environments are those that focus on the mission, where everything is subordinated to performance, aided by a free flow of information. Pathological environments are characterised by fear and threat, where individuals hoard information for political reasons.

Between these two extremes, he identified bureaucratic organisations, which tend to retain information within departments for self-protection.

These cultural styles determine how organisations are likely to respond to ‘human errors’, such as mishaps and mistakes, and to unwelcome surprises such as project-halting issues. In his paper, Westrum sets out ways in which organisations may respond to anomalous information:

  1. the organization might “shoot the messenger.”
  2. even if the “messenger” was not executed, his or her information might be isolated.
  3. even if the message got out, it could still be “put in context” through a “public relations” strategy.
  4. maybe more serious action could be forestalled if one only fixed the immediately presenting event.
  5. the organization might react through a “global fix” which would also look for other examples of the same thing.
  6. the organization might engage in profound inquiry, to fix not only the presenting event, but also its underlying causes.

High-performance organisations achieve success with global fixes and profound inquiry, where the ‘bearer of bad news’ is welcomed and supported by their leaders and peers.

Practitioner’s insights

There are means to reasonably approximate where along Westrum’s continuum an organisation sits. That position will likely reveal why some problems are nigh on impossible to address.

In my own work, I’ve helped a financial services organisation monitor its position by conducting regular surveys. For the department in focus, individuals at all levels were asked to rate each of the following kinds of statements:

  • Cross-functional collaboration is encouraged and rewarded
  • Failures are treated primarily as opportunities to improve the system
  • Information is actively sought
  • Messengers are not punished when they deliver news of failures or other bad news
  • New ideas are welcomed
  • Responsibilities are shared

This particular set of statements is from the DORA research.

To understand the state of information flow within an organisation, practitioners should consider employing the same approach.
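
As a rough illustration, here is a minimal sketch of how such survey responses could be aggregated into a position on Westrum’s continuum. It assumes a 7-point Likert scale, and the classification thresholds are purely illustrative assumptions of mine, not part of Westrum’s or DORA’s published method.

```python
from statistics import mean

# The six Westrum-style statements used in the survey (from the DORA research).
STATEMENTS = [
    "Cross-functional collaboration is encouraged and rewarded",
    "Failures are treated primarily as opportunities to improve the system",
    "Information is actively sought",
    "Messengers are not punished when they deliver news of failures or other bad news",
    "New ideas are welcomed",
    "Responsibilities are shared",
]

def classify_culture(responses: list[dict[str, int]]) -> tuple[float, str]:
    """Average the Likert ratings (1 = strongly disagree .. 7 = strongly agree)
    across all respondents and statements, then place the result on Westrum's
    pathological-bureaucratic-generative continuum."""
    scores = [r[s] for r in responses for s in STATEMENTS]
    avg = mean(scores)
    # Illustrative thresholds only -- calibrate against your own baseline.
    if avg >= 5.5:
        label = "generative"
    elif avg >= 3.5:
        label = "bureaucratic"
    else:
        label = "pathological"
    return avg, label

# Example: two hypothetical respondents, each rating every statement.
respondents = [
    {s: 6 for s in STATEMENTS},
    {s: 4 for s in STATEMENTS},
]
print(classify_culture(respondents))  # -> (5.0, 'bureaucratic')
```

Tracking the average over successive surveys matters more than any single reading; the trend shows whether information flow is improving or degrading.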

Connections with other concepts

Over the next few weeks I’ll be writing this series of posts which makes further connections between aspects of The Field Guide and the following concepts:

  • Anti-Fragility, Drift and Resilience
  • ‘Human Error’ and Ron Westrum’s Information Flow
  • Normalisation of Deviance and the idea of Gradual Acceptance
  • Could Wardley Mapping be used to map the emergence of drift through inattention?
  • Drift and Cynefin’s zone of complacency
  • Roger Martin’s Strategic Choice Cascade
  • Hofstede’s cultural dimensions theory
  • Mary Uhl-Bien’s Leading with Complexity

Safety: Stressors, Resilience & Drift

Using stressors to gain safety resilience

Here are my considerations relating Nassim Taleb’s thinking on stressors to Dekker’s work.

In his book Antifragile, Taleb writes:

The crux of complex systems, those with interacting parts, is that they convey information to these component parts through stressors, or thanks to these stressors: [you] get information about the environment not through your logical apparatus, your intelligence and ability to reason, but through stress.

Nassim Taleb, Antifragile

I believe Taleb’s statement relates to what Dekker writes about Erik Hollnagel’s concept of safety resilience:

[Resilience is] the ability of a system to adjust its functioning before, during or after changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions. This needs to translate into using the variability and diversity of people’s behavior to be able to respond to a range of circumstances, both known and new.

Sidney Dekker, The Field Guide to Understanding ‘Human Error’

To me, incidents such as mistakes and mishaps are stressors which individuals should feel comfortable to share. A condition for sharing is that individuals know they can do so without fear of retribution or intolerance.

This sharing, and the subsequent adjustments to the environment, are ideally done in near real-time and in a manner satisfactory to those closest to the problem, rather than through formal procedures.

This ability to safely share stressors for continuous and appropriate improvement creates resilience. It creates resilience against shock events, since it flushes out and addresses weaknesses that are typically hidden in plain sight. It allows people, processes and services to continuously adapt in a contextual manner.

On the other hand, formal procedures often result in delay and are further hampered by the use of erroneous targets (e.g. targeting zero incidents). Those targets may satisfy the needs of a somewhat separate function, but create perverse incentives for those making decisions on the frontline.

A typical consequence is frontline colleagues striving for flexible practices despite the purported support of formal procedures. Often frontline colleagues establish workarounds and ‘inside knowledge’, which become normalised.

Drift

The mismatch between formal procedures and actual practices grows over time. It’s an almost universal phenomenon, which Dekker describes as drift. For many on the frontline, it’s a necessary departure from the original ideal on which counter-productive targets, expectations and officialdom are built. The over-design of procedures takes agency away from the very people who typically know best, and instead favours individuals who are rule-followers.

Consider your average corporation, government department or large nonprofit, and you will sadly find this corrosive phenomenon to be endemic. Often entire departments and functions exist to create and prop up counter-productive, ill-fitting procedures.

Mistakes and Mishaps

Hoffman’s Pyramid of Advantage and Catastrophe
Illustrated in the book Unlearn, by Barry O’Reilly

Above I use the words ‘mistake’ and ‘mishap’ with specific meanings. Ed Hoffman states that a ‘mistake’ is when something doesn’t go to plan; a ‘mishap’ occurs when the initiative or operation doesn’t fail, but an element of it is wrong.

‘Mistakes’ and ‘mishaps’ are stressors that indicate something is not performing as it should.

It should be down to frontline individuals to recognise, share and collectively address such incidents. It should be down to their managers to ensure these individuals feel safe to share such incidents, and to ensure that the learning is accounted for, regardless of how deeply it impacts the organisation.

This sharing and learning can be described as information flow through an organisation (more on this later). A clear upside is the organisation’s ability to adjust itself in a timely and appropriate manner.

Practitioner’s insights

The success of open, safe and frank enquiry which leads to lasting change will depend on the organisation’s culture. So, when considering the likelihood of an organisation addressing issues and their underlying causes, practitioners should consider the wider cultural environment.

This experience is backed up by the work of Ron Westrum, which I introduce in the next article. In that article I provide practitioners with a practical recommendation to understand the cultural environment and how it relates to the concept of information flow.

Thanks

Many thanks to Anne Gambles, Zsolt Berend, Nawel Lengliz and Carlo Volpi for giving feedback on this article.

Book Review: Accelerate by John Kotter

Accelerate is a book by John Kotter, first published in 2014.

I’ve been aware of John Kotter’s book Accelerate: Building Strategic Agility for a Faster-Moving World as a foundational text in the Scaled Agile Framework (SAFe) canon. As someone who operates within organisations adopting the framework, I was curious to understand the significance of the text, and to appreciate why it was compelling enough to help kick-start the SAFe movement.

John Kotter is a respected thought leader, author and academic in the fields of leadership, strategy and consulting. I had previously read the fictional story Our Iceberg is Melting – another of his books that talks of change and success in a fast-changing world. Since that book is written as a fable, I was hoping that Accelerate would not only offer models and practices, but also citable research and thorough case studies to underpin Kotter’s thinking.

The “dual operating system”

Eight accelerators for the network.

In Accelerate, Kotter does well in describing how organisations can apparently adapt to rapid change by adopting a ‘dual operating system’, where the traditional hierarchy co-exists with a more nimble and entrepreneurial network of individuals. These intrinsically motivated individuals are driven by a sense of urgency, supported by a guiding coalition, catalysed by short-term wins, and aided by other such accelerators that will be familiar to readers of this blog.

The dual operating system. Traditional hierarchy on the left, integrated with the entrepreneurial network on the right.

I find the apparently successful marriage of the traditional hierarchy with this network-like structure to be an intriguing but huge claim – one that, disappointingly, isn’t backed up by citable research or thorough case studies.

For example, Kotter says those operating on the network side need to find time in addition to the time needed to fulfil their duties on the hierarchy side. In explaining this, the book is strong on rhetoric but weak on providing detailed first-hand accounts or day-in-the-life stories. I was hoping to understand how individuals had navigated the trials and tribulations of overcoming the cultural status quo, protectionism and suspicion. Without this, I find Kotter’s claim a tall order, even with the presence of his accelerators.

Extraordinary claims require extraordinary evidence

Knowing that Kotter is an academic, I’m surprised by the lack of evidence-based research from which his models, concepts and practices should emerge. The phrase “extraordinary claims require extraordinary evidence” comes to mind, and I found that such evidence is lacking in Accelerate.

The evidence offered consists mostly of brief paragraphs describing what anonymous individuals and organisations have done to adopt the ‘dual operating system’. This stands in stark contrast to other leadership and strategy books, such as A.G. Lafley and Roger Martin’s Playing to Win. Throughout Playing to Win, the authors lean upon their lessons from transforming Procter & Gamble. I acknowledge this is only one organisation, but at least the case study is authentic, comprehensive and underpins their advice.

Maybe Kotter has other books and research which provide the rigour I’m seeking (let me know if there are any). Until I see it, I am sadly unconvinced about the feasibility of what Kotter advocates in Accelerate.

Reflections on The Startup Way – Part 2

The Startup Way: How Modern Companies Use Entrepreneurial Management to Transform Culture and Drive Long-Term Growth. By Eric Ries

In his book The Startup Way, Eric Ries reveals how entrepreneurial principles can be used by businesses to take advantage of enormous opportunities and overcome challenges resulting from our connected economy.

In this series of posts, I’ll share my key takeaways, and relate those to my own experiences and reflections. In the last article, I explored how organisations corner themselves by becoming over-reliant on their successes. In this article, I’ll introduce and reflect upon the missing organisational capabilities for entrepreneurialism and new growth.

Ries describes these capabilities as:

  • Create space for experiments with appropriate liability constraints
  • Fund projects without knowing the return-on-investment (ROI) in advance
  • Create appropriate milestones for teams that are operating autonomously

In my experience, successful innovation occurs when entrepreneurs’ autonomy is bounded within a governance process which selectively and incrementally invests in strategies that are demonstrating early success. If innovators cannot demonstrate early signs of ROI, they either pivot to a different strategy or receive no further funding.

This approach enables appropriate liability constraints, operational autonomy and funding without knowing the ROI upfront. It protects innovation teams from over-committing, and it protects the business from over-investing in strategies which haven’t proven themselves.

The team is liable because they have the responsibility to achieve what’s been agreed. The leaders are also liable in that they must remove organisational impediments and protect the team from a status quo that may hamper its progress.

Freedom is constrained to ensure the team and leaders focus on what specifically needs to be learnt, at a sustainable pace. The constraints should cover both time and funding; for example, four weeks and $25,000.

Yet, this approach doesn’t mean innovation teams are micromanaged; it’s quite the opposite. Once an innovation team has agreed on the measures of success, gained the support of leaders and received the funding, they have the freedom to explore how to achieve their goals within the agreed constraints.
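
To make this concrete, here is a minimal sketch of how such an agreement and its funding gate might be represented. The data structure, metric names and decision rules are hypothetical illustrations of the approach described above, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentAgreement:
    """A metered-funding agreement between an innovation team and its leaders."""
    hypothesis: str
    timebox_weeks: int                  # e.g. four weeks
    budget: float                       # e.g. $25,000
    # Measures of success agreed upfront: metric name -> minimum threshold.
    success_thresholds: dict[str, float] = field(default_factory=dict)

def gate_decision(agreement: ExperimentAgreement,
                  observed: dict[str, float]) -> str:
    """At the end of the timebox, decide the next increment of investment:
    all thresholds met -> fund a larger increment; some met -> pivot the
    strategy; none met -> stop funding."""
    met = [m for m, t in agreement.success_thresholds.items()
           if observed.get(m, float("-inf")) >= t]
    if len(met) == len(agreement.success_thresholds):
        return "fund next increment"
    return "pivot" if met else "stop funding"

# Hypothetical example, loosely modelled on the Scan and Go story below.
agreement = ExperimentAgreement(
    hypothesis="Customers will adopt Scan and Go without increasing shrink",
    timebox_weeks=4,
    budget=25_000,
    success_thresholds={"adoption_rate": 0.05, "checkout_minutes_saved": 2.0},
)
print(gate_decision(agreement, {"adoption_rate": 0.08,
                                "checkout_minutes_saved": 3.0}))
# -> 'fund next increment'
```

The point of the sketch is that the team’s freedom lives inside the agreement: within the timebox and budget, how the thresholds are reached is entirely up to the team.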

The innovation team relied on the leaders’ support, protection and network to work with store managers who were open to testing the benefits of the Scan and Go app.

Photo by Jack Sparrow from Pexels

An example of where I’ve supported a retail client in achieving this is the Scan and Go service they were developing. The team wanted to know whether store operations could support customers purchasing items by scanning them with their smartphones and then immediately leaving the store. This would allow customers to avoid checkout queues and reduce pressure on checkout staff.

There was natural concern that there would be an increase in theft (known as shrink in the retail industry). To contain the risk, the innovation team worked within a liability constraint of two weeks, during which they could test a prototype in two closely monitored stores. The types of items which could be purchased were also limited.

To create safe-to-learn constraints, certain types of items were excluded from the Scan and Go experiments; for example beers, wines and spirits, which are high-value items.

Photo by junjie xu from Pexels

No one knew the ROI upfront. How much would it cost to build and operate? How would it impact sales and shrink? How would store security respond? To find out, the team worked with the leaders to agree on measures of success and their thresholds.

Within a limited timebox, budget and range of items, the innovation team chose to develop and conduct experiments in a way they felt appropriate. The leaders were on hand to use their social network to identify the rare store managers who were open to partnering with the team.

Without prompting, tentative signs of success were shared between store managers. The excitement of collaborative discovery created interest from area managers. This buzz created a larger opening for the next scale of experimentation. Change was never pushed onto store and area managers.

This unprompted exchange of stories between different groups shows that innovation is as much a social phenomenon as a technological one.

This example – experimentation with liability constraints, funding without knowing the ROI upfront, and teams operating with autonomy – became an exemplar of entrepreneurialism for my client.

Remember from the last article that entrepreneurialism is vital to ensure organisations don’t become over-dependent on past successes.

In my next article, I’ll reflect upon my next takeaway from The Startup Way, which will be on scaling the innovators’ successes with the resources of the parent organisation.

Reflections on The Startup Way – Part 1

The Startup Way: How Modern Companies Use Entrepreneurial Management to Transform Culture and Drive Long-Term Growth. By Eric Ries

The Startup Way is a 2017 book by Eric Ries, a follow-up to his blockbuster The Lean Startup.

In The Startup Way, Ries reveals how entrepreneurial principles can be used by businesses to take advantage of enormous opportunities and overcome challenges resulting from our connected economy.

In this series of posts, I’ll share my key takeaways, and relate those to my own experiences and reflections. Let’s start off by exploring how organisations corner themselves by becoming over-reliant on their successes.

Kodak has become the go-to case study of an organisation that became myopic and over-committed to its past successes.

Ries states that if an organisation is constrained by capacity, it will typically endeavour to acquire more in a bid to gain greater market share. New products tend to be variations of existing product lines. Firms compete primarily on price, quality, variety and distribution. Barriers to entry are high, and growth is slow.

In my view, if exploited for too long, what Ries describes can result in dangerous consequences. It can create a difficult-to-reverse dependence on legacy successes. Repeating and scaling an organisation’s previous successes can become its unspoken raison d’etre. In an increasingly fast-moving market, this can be disastrous.

The over-reliance on existing successes also develops an expectation that stifles the emergence of innovation within organisations. Success leads to criteria that promote the fine-tuning of existing products, processes and behaviours. This makes it difficult to accommodate internal disruption, vulnerability and relearning – qualities necessary for innovation.

Organisations scale success by developing a highly tuned operational system. This is often to the detriment of their capacity to innovate.

I believe this ties into the Apex Predator theory developed by Dave Snowden: organisations will eventually fail as they become too competent at, and too wedded to, their current operations and market offerings.

Apex Predator Overlapping S-Curves. Illustrated by aglx.consulting

My next article reflects on another takeaway from The Startup Way – the missing capability that enables organisations to overcome their over-dependence on past successes.