
The Cultural Contradictions of Managing Change: Using Horizon Scanning in an Evidence-based Policy Context

January 18, 2011

Written by SAMI Principal Dr Wendy Schultz.

Every epoch is an epoch of transition.  We know only one thing about the future, or, rather, the futures:  it will not look like the present.  – Jorge Luis Borges

In order to appreciate our own epoch of transition, we must sharpen our awareness of change.  Both public and private organisations are struggling to do so, implementing change management processes, technological foresight, strategic scenario planning, and visioning.  But the first step in any attempt to appreciate emerging change and manage the resulting uncertainty is spotting change as it arises:  the process of horizon scanning, also known as environmental scanning.  Unfortunately, horizon scanning’s design criteria do not augur well for its quick uptake and widespread dissemination in any evidence-based decision environment.  This article examines why, and offers some suggestions to bridge the cultural and conceptual gap between horizon scanning and evidence-based policy-making.  It is based on three years of horizon scanning research experience within the UK government.

Brief History of UK Government Horizon Scanning – Horizon scanning within government in the United Kingdom has its roots in the establishment of the Office of Science and Technology’s Foresight Programme.  The Foresight Programme was founded in 1994 subsequent to a 1993 White Paper by William Waldegrave, “Realising our Potential – A Strategy for Science, Engineering, and Technology.”  Its first round engaged a variety of stakeholders in fifteen sector panels each exploring emerging opportunities in different areas of the economy.  Subsequent rounds focussed on specific trends; the current research strategy involves rolling foresight projects on specific topics, such as nanotechnology or brain science.

In March 1999, the Prime Minister’s Performance and Innovation Unit (now the Strategy Unit), produced a white paper on modernising government that established a Strategic Challenges team in August 1999 (now called the Strategic Futures team).  Their mandate was to identify key challenges facing the UK government over the next 10 to 20 years; that has now expanded to include coordinating and benchmarking foresight activities within UK government.

In July 2004, HM Treasury published the “Science and Innovation Investment Framework 2004-2014,” which specifically called for the creation of a “Centre of Excellence in Horizon Scanning.”  This new government research centre was established in the Foresight directorate of the Office of Science and Innovation.  The UK Horizon Scanning Centre (HSC) began work with two pilot scanning projects in December 2004, which colleagues and I performed.  That work was then amplified into two major scanning projects, the Sigma Scan (a baseline scan of scans) and the Delta Scan (a science and technology scan).  Both were completed in June 2006 and have undergone peer review.

Interest in horizon scanning is widespread in the UK government.  The HSC’s first two pilot projects had in-government “clients”:  the Department of Trade and Industry, the Home Office, the Civil Contingencies Secretariat, and the Department for Constitutional Affairs.  In addition to these centralised scanning projects, a number of ministries and departments within the UK government have organised their own foresight activities, many of which have included horizon scans:

  • The Ministry of Defence;
  • The Department for Environment, Food and Rural Affairs;
  • The Department of Health;
  • The National Health Service; and
  • The Department of Trade and Industry, among others.

Generally, government scanning seems to focus on security issues; health and quality of life issues; economic development issues; and issues of strategic concern.

What are the contexts for foresight?

Public context:  The public context for policy is evolving.  Increasingly, people’s experience with individually tailored services in the private sector – the “Amazon.com effect” – will change their expectations of service in the public sector as well, creating heightened demand for “tailored governance.”  As a result, single-issue political activity will increase.  Our evolving communication and information infrastructure heightens the speed and ease of recognition, connection, and communication between potentially cooperative political interest groups, and between interest groups and the media.  This heightens the capability, and potential, for ‘flashmob’ political responses:  fast, flexible, mobile political action – lobbying, civil disobedience, or terrorism – directed at political targets.

Government context: In the UK the context for foresight within government includes several intersecting challenges:

  • first, specific agencies are under-funded and under-resourced for the complexities challenging them as well as for emerging turbulence and the surprises it will generate;
  • second, both politicians and civil servants are media-traumatised and media-wary in the wake of “mad cow” disease; genetically modified food protests; Iraq intelligence issues; and other political crises serving as fodder for tabloid tempests.

The primary protective response to media trauma involves acquiring unimpeachable information with which to inform public opinion and policy planning.  This leads to policy research strategies focussed on in-depth expert observation and analysis of past data and current statistically verifiable trends.  Government agencies in the UK accordingly produce solidly researched policy papers and research monographs on relevant issues, but these draw primarily on past data and present trends, rarely addressing how change itself will change over time.  They assemble the evidence describing the issue at the moment, as if it were frozen in time.

Change context:  Change in this century is accelerating; the process of innovation – spurring innovations from ideas to production to market saturation – grows continuously more efficient.  In unfolding rapidly it becomes more and more complex, generating in turn complex impacts.  As each rapidly deployed innovation layers upon other new innovations, people and social systems fall behind in their ability to adapt.  The resulting intersection of impacts generates turbulence, creating chaotic system behaviour.  While this provokes conflict, turbulence is also a rich environment for creativity and growth.  Turbulence produces an environment of ambiguity and contradictions, which challenges old assumptions and opens up space for new ideas.  The resulting ambiguities make decisive policy formulation difficult, and generate contradictory viewpoints and political stances.

For ease of research relevant to decision-taking, policy-making, or planning, change is often treated as static:  briefing papers, white papers, or research monographs will focus on trends or drivers of change, collating observations and data on a given topic. Yet change itself is neither static nor neatly parcelled in unrelated single-issue packets.  Thus even the most meticulous traditionally designed research project risks overlooking change emerging from other issue areas with potentially significant impacts.

As an ongoing process, scanning is more faithful to the actual structure and dynamics of change than are briefing papers.  Correctly designed scanning is also systemic, allowing the linkages among different trends and emerging issues of change to be tracked as they develop and evolve.  But correctly designed scanning will create challenges for planning and decision-making by uncovering contradictions and ambiguities in mapping the turbulence of change.  If the scanning design includes a robust process for mapping impacts through their secondary and tertiary levels, it will also identify potential counter-intuitive results of both planned and serendipitous change.

Scanning as a foundation of integrated foresight – Scanning operates most effectively within the context of an integrated foresight process:  the table below defines the five key activities of integrated foresight, and offers example research methods (an indicative, not an exhaustive, list) affiliated with each activity.

Table 1. Five Key Activities of Integrated Foresight (activity descriptions and example related methods; the method lists are indicative, not exhaustive):

  • Identify and Monitor Change:  Identify patterns of change – trends in chosen variables, changes in cycles, and emerging issues.  Related methods:  data collection, data mining, horizon scanning, survey research, focus groups, leading indicator analysis, cohort analysis.
  • Assess and Critique Impacts:  Examine primary, secondary, and tertiary impacts; inequities in impacts; differential access, etc.  Related methods:  causal layered analysis, focus groups, cross-impact analysis, futures wheels, systems models and causal loop analysis.
  • Imagine Alternative Outcomes:  Identify, analyse, and build alternative images of the future, or ‘scenarios.’  Related methods:  content analysis, econometric models, systems dynamics models, scenario building (matrix, FAR, dialogue, diversity, parameter).
  • Envision Preferred Futures:  Identify, analyse, and articulate images of preferred futures, or ‘visions.’  Related methods:  Future Search, Future Workshops, Community Visioning, Fluent Visioning, Learning Organisation.
  • Plan and Implement Change:  Identify stakeholders and resources; clarify goals; design strategies; organise action; create change.  Related methods:  backcasting, SSM and Rich Pictures, joint problem-solving, SWOT analysis, Strategic Choice, PERT and Gantt charting.

The visual diagram below portrays the movement and transformation of data derived from horizon scanning – also known as environmental scanning – through the five key activities of integrated foresight:

  1. Trends, drivers, and emerging issues of change (weak signals) are collected via scanning diverse data sources;
  2. Those change data are extrapolated using a variety of statistical computations as well as qualitative impact assessment techniques;
  3. The change data, the extrapolations, and the impacts can then be assembled into scenarios of alternative possible outcomes, neither wholly good nor wholly bad but expressing both opportunities and threats in each scenario;
  4. Those possibilities seen as contributing to the most desired outcomes are built into a preferred future, or vision;
  5. Strategies to realise the vision are devised which both build on the positive trends of change, and counter or ameliorate the negatives.

At this point, the organisation or agency involved is itself creating change, and monitoring the effectiveness of its strategies in producing desired outcomes becomes part of more generalised horizon scanning and change monitoring activities.
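As a rough illustration only, the five-stage flow above can be sketched as a minimal data pipeline.  Every name in this sketch (ScanHit, the stage functions, the example topic) is a hypothetical construction for illustration; it is not drawn from any actual HSC system.

```python
from dataclasses import dataclass

@dataclass
class ScanHit:
    """Stage 1: a weak signal of change collected from a scanned source."""
    topic: str
    source: str

def assess_impacts(hit):
    """Stage 2: extrapolate primary and secondary impacts (placeholder logic)."""
    return {"hit": hit, "impacts": [f"primary impact of {hit.topic}",
                                    f"secondary impact of {hit.topic}"]}

def build_scenario(assessed):
    """Stage 3: assemble change data and impacts into an alternative outcome,
    carrying both opportunities and threats rather than being wholly good or bad."""
    return {"basis": assessed, "opportunities": [], "threats": []}

def envision(scenarios):
    """Stage 4: select desired possibilities into a preferred future."""
    return {"vision": "preferred future", "built_from": scenarios}

def plan(vision):
    """Stage 5: strategies that build on positive trends and ameliorate negatives."""
    return {"strategies": ["build on positives", "counter negatives"],
            "realises": vision}

# Chain the stages for a single weak signal:
hit = ScanHit(topic="broadband access", source="specialist blog")
strategy = plan(envision([build_scenario(assess_impacts(hit))]))
```

The point of the sketch is structural:  each stage consumes the previous stage's output, so change data gathered by scanning retains its provenance all the way through to the implemented strategy.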

When considering a news item for potential inclusion in an environmental scanning database, scanners must first determine whether the proposed “scan hit” is objectively or subjectively new:  is it simply news to me, or would a leading thinker from the appropriate field also judge the given change to be a new development?  Next, scanners should consider if the change represents the first mention of a new topic of change, or reinforces previous signals regarding the same topic.  If the scan hit represents additional data on a topic, does it confirm previous scan hits, or does it contradict them, suggesting a backlash or opposition viewpoint on the topic?  Is it possible to judge how quickly the change is occurring, and what the time horizon for “emergence,” or public awareness / market saturation, might be?  How credible is the source — is the source actually an authoritative opinion leader for that area of expertise?  Note that an opinion leader in cyberpunk is unlikely to be perceived as “authoritative” in the same way that an opinion leader in nuclear physics is.
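These triage questions can be captured as a simple checklist routine.  The field names and the dictionary structure below are illustrative assumptions, not part of any published scanning procedure.

```python
def triage_scan_hit(hit):
    """Run a candidate scan hit through the triage questions above.
    `hit` is a dict of answers; all keys here are hypothetical."""
    notes = []
    if not hit.get("objectively_new"):
        notes.append("subjectively new only: would a field leader call this new?")
    if hit.get("first_mention"):
        notes.append("first mention of a new topic of change")
    elif hit.get("contradicts_previous"):
        notes.append("contradicts earlier hits: possible backlash or opposition view")
    else:
        notes.append("reinforces previous signals on this topic")
    if hit.get("time_horizon_years") is not None:
        notes.append(f"estimated emergence horizon: {hit['time_horizon_years']} years")
    if not hit.get("source_authoritative"):
        notes.append("source not an authoritative opinion leader for this field")
    return notes

notes = triage_scan_hit({"objectively_new": True, "first_mention": True,
                         "time_horizon_years": 5, "source_authoritative": True})
```

A checklist like this is no substitute for scanner judgement, but it makes the judgement auditable:  each annotation records which triage question the hit passed or failed.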

The map is not the territory:  the need for radar – Trends — data about change encompassing enough observations for statistical significance — when mature may be considered conditions of the operating environment that planning should already have taken into account.  Globalisation is an example:  it has been widely recognised for some time, and the basket of trends which constitute this driver is well documented.  It should be on the navigational map.  Ongoing environmental scanning acts as radar or sonar, identifying new elements in the territory which have either arisen since the map was drawn, or which are in motion.  The access of Chinese teenagers to broadband Internet — including the conditions of that access (eg, Google and Yahoo censoring technologies) and the uses to which they are putting that access — is an example of an emerging issue which is dynamically evolving.

Where do we “tour” the territory looking for emerging change? – The point of the process is to learn to identify potentially significant change in time to monitor its emergence while creating contingency plans to manage it, and with sufficient time to implement those plans if needed.

This classic diagram depicts the life cycle of a change, from emerging issue to full-blown trend, in terms both of the number of observable cases and of public awareness.  Provided the emerging issue matures into a full-blown trend, instead of dissipating and disappearing, its growth follows a typical life-cycle S-curve, whether the dependent axis is the degree of public awareness of the change or the number of observable instances (cases) of the change.

For maximum usefulness, policy-makers or planners should identify emerging issues near the origin point of the curve, allowing sufficient time for policy response.  But that implies identifying and responding to the emerging issue when little documentable evidence exists — when there may only be an N of 1.  In epidemiological terms, we are looking for “patient zero.”

Note that perceiving weak signals of change requires monitoring publications and activities on the far lower left end of the curve:  specialist and fringe publications, blogs, conferences, media output.  A robust scanning strategy will monitor change all along this curve, and discriminate between the uses and usefulness of data emerging from different points of the curve.  As a change matures, more and more data points are available with which to analyse it:  we can speak of the change as a variable that is displaying a trend in some direction.  When a change is just emerging, and only a few data points exist with which to characterise it, we can only analyse it via a case study approach.
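The life-cycle S-curve described above can be modelled with a standard logistic function.  The parameter values below are arbitrary illustrations, not empirical estimates; the sketch simply shows why an emerging issue near the origin of the curve offers almost nothing to count.

```python
import math

def s_curve(t, midpoint=10.0, steepness=0.8, saturation=1000.0):
    """Logistic life cycle: observable cases (or public awareness) of a change
    at time t.  Near t = 0 an emerging issue shows almost no cases (an 'N of 1');
    well past the midpoint it has matured into a statistically documentable trend."""
    return saturation / (1.0 + math.exp(-steepness * (t - midpoint)))

early = s_curve(0)    # emerging issue: a fraction of one observable case
late = s_curve(20)    # mature trend: approaching saturation
```

Trend analysis works at the top of this curve, where data points abound; case-study analysis is all that is available at the bottom, which is exactly where policy-relevant scanning must look.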

How do we scan for dynamically evolving change? – Chun Wei Choo, at the University of Toronto, describes four distinct modes of scanning:

  1. Touring, or undirected viewing, has no particular goal in mind, drawing on a wide variety of sources in order to detect emerging change early.  This “40,000-foot view” sensitises scanners to issues of potentially significant change.
  2. Tracking, or conditioned viewing, focusses more on change items with potential impacts on the organisation or its goals.  Scanners review pre-selected sources relevant to specified topics of interest.
  3. Satisficing, or informal search, digs more deeply into a single issue or event, in order to determine quickly and efficiently whether a need for action — including formal research in greater depth — exists.
  4. Retrieving, or formal search, is an in-depth research process focussed on a specific issue or event, characterised by an articulated methodology and an exhaustive search for and of sources. [5]

Horizon scanning involves both broad, unfocussed searches through a wide variety of sources — touring — and more focussed research once topics of strategic interest are identified — retrieving.  The former is beyond any one person to perform well; the latter requires at least one person with specialised or local expertise for in-depth analysis.  Thus while the “ownership” of the scanning project may be vested in one person, or a core team of two to three people, the actual process should engage a broad network — both to handle the high bandwidth of sources to be scanned, and to provide specific expertise when required.  For public policy and planning purposes, it is especially useful to engage stakeholders in the process.  This enables greater contribution to, and ownership of, subsequent stages of the integrated foresight process, eg, scenario building and visioning, and ensures policy relevance to constituents and allies.

The figure below graphically maps one possible structure of the relationships among participants in a broadly based scanning network.  It encompasses both participants internal to the organisation, and stakeholders, allies, and experts external to the organisation, as well as links to other scanning projects nationally and internationally.  The latter can be a useful reality check, validating entries, as well as a means of overcoming cultural filters and biases.

How do we choose sources of information about emerging change? – The basic steps of wide-ranging horizon scanning – what Choo described as “touring” – are simple.  But the simplicity is deceptive:  because the core goal of broad horizon scanning is discovering change emerging outside an organisation’s normal frames of reference, scanners must constantly guard against in-built sources of bias in either source identification, taxonomic structure, evaluative processes, or validation criteria.

Scan sources:  how do we choose and document scan sources?

  • In science and technology, we look for sources that those communities themselves use to announce news.
  • For changes on the social and cultural fringe, we look for voices that express values and ideas bubbling among artists and youth (as an example).

Unfortunately, intuitive recognition of a source as useful is not a transferable decision rule.  So, in the best tradition of expert systems analyses, we need to ask ourselves what we are actually doing when we choose sources.  To which the shortest possible answer is probably, “identifying opinion leaders.”  Because our current social construction grants credibility to adventuring within formal structures, such as science, we label those opinion leaders “experts.”  As innovative social and cultural ideas and behaviors challenge the status quo with the potential for transformation, they are generally marginalised – hence the usual scanning label of “fringe” for sources on emerging issues among youth, artists, social movements, the underclass, etc.

For Defra and OST, I created source documentation templates that acknowledged this by asking scanners to categorise sources as “expert,” “popular,” or “fringe.”  This is not pejorative, merely descriptive.  It does, to some extent, conflate a judgement of the location of emergence of insight (scientific / rational genius vs. artistic / intuitive genius) with the timeframe of emergence (expert and fringe vs. popular:  the assumption being that something spotted in the popular press is further from the origin point on the emergence growth curve).

How can we validate scanning sources? – What would be measurable or documentable attributes that would help us distinguish among expert, popular, or fringe sources, and that would establish sources’ credibility as opinion leaders for their communities of interest?

  • High numbers of citations by members of the community:  for science documents, literally the extent to which they are cited; for popular media, their distribution; for “fringe” literature, the “buzz,” measurable also by popularity within their target audience and, in the case of blogs, their ranking by links and hits.  Is the source therefore credible as an opinion leader for that community?
  • Market niche:  to whom is the source targeted?  The Lancet  and New England Journal of Medicine are targeted to professionals in medical research; New Scientist is targeted to scientific professionals and decision-makers, as well as interested laypeople; Discovery is targeted entirely to interested laypeople and students.  Is that documentable, e.g., by reference to mission statements or self-descriptions?
  • Distribution:  does distribution data, or access data (in the case of websources / infofeeds) demonstrate widespread use by members of the source’s target audience / community of interest?  This would to some extent duplicate, and therefore corroborate, the citations variable, above.
  • Media:  the medium of information distribution itself might help distinguish among expert, fringe, and popular, in terms of print journal, professional association newsletter, tabloid, etc.

What other / better observable descriptors might help us formally document sources as best choices for scanning research?
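One way to operationalise the descriptors above is a simple classification rubric combining citation standing, target audience, and medium.  The thresholds, category rules, and example inputs below are invented purely for illustration.

```python
def classify_source(citations_percentile, audience="professional", medium="journal"):
    """Sketch of a source-documentation rule:  citation/buzz standing within the
    source's own community establishes its credibility as an opinion leader,
    while audience and medium suggest an expert/popular/fringe category.
    All thresholds are arbitrary assumptions."""
    credible = citations_percentile >= 0.75  # opinion leader within its community?
    if audience == "professional" and medium in ("journal", "newsletter"):
        category = "expert"
    elif audience == "lay":
        category = "popular"
    else:
        category = "fringe"
    return {"category": category, "credible_opinion_leader": credible}

# A highly cited professional journal vs. a well-linked subculture blog:
lancet = classify_source(0.95, audience="professional", medium="journal")
zine = classify_source(0.80, audience="subculture", medium="blog")
```

Note that the two dimensions are independent:  a fringe source can still be a fully credible opinion leader for its own community of interest, as the second example shows.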

How can we organise scanning data? – Having sifted useful information from a diverse array of sources, organising the data for efficient and useful retrieval is the next step.  Traditionally, scanning databases are organised using a taxonomy that classifies each change by its point of origin:  is it a social change?  An environmental change?  A new technology?  A variety of taxonomic systems are in common use:  STEEP (social, technological, economic, environmental, political); PESTLE (political, economic, social, technological, legal, environmental); PESTLEC (political, economic, social, technological, legal, environmental, cultural).  Each broad section then may be further subdivided, eg, environmental: atmosphere, geosphere, biosphere, hydrosphere.  These taxonomies are defined by the researchers, often a priori.  Ethnographic futures studies offer a different taxonomic approach, classifying the trends of change by their point of impact, rather than their point of origin, eg, how people “Create, Relate, Consume, Connect, and Define” (taxonomy devised by Michele Bowman and Kaipo Lumn, Global Foresight Associates).

But both of those are top-down taxonomic designs, expert-designated.  More intriguing is the possibility of using “folksonomies”:  the self-organising taxonomies created by large groups of people individually assigning keyword tags to database entries, as found on del.icio.us and Flickr.  If all users of a scanning database — both the original scanners and readers — were able to tag entries with their own keywords, a “folksonomic” organisational structure would emerge, which might prove more useful in spotlighting trends relevant to user issues and decisions.
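The emergent structure of a folksonomy is easy to sketch:  aggregate every user-assigned tag and let the most frequent ones surface as categories.  The tag data below is invented for illustration.

```python
from collections import Counter

def emergent_taxonomy(entry_tags, top_n=3):
    """Given each database entry's list of user-assigned keyword tags, return
    the most frequently used tags: a bottom-up, 'folksonomic' category set,
    in contrast to an a priori STEEP/PESTLE taxonomy."""
    counts = Counter(tag for tags in entry_tags for tag in tags)
    return [tag for tag, _ in counts.most_common(top_n)]

tags = emergent_taxonomy([
    ["ageing", "health"],
    ["ageing", "pensions"],
    ["ageing", "health", "migration"],
])
```

Unlike a fixed taxonomy, the category set here shifts as users' tagging behaviour shifts, so the organisational structure itself becomes a weak signal of what the scanning community considers salient.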

How can we validate scanning data? – We have identified sources; scanned them for trend data and signals of emerging change; and organised and annotated the results.  Now we need to consider which are most important:  when scanning feeds evidence-based policy-making, we must have strategies for validating the data, especially “weak signal” data, which may be sparse.

Three strategies can aid validation:

  • Confirmation, or accruing multiple citations — scanning is meant to be an ongoing process, a monitoring of emerging change as more and more cases occur.  Thus accruing evidence from a variety of sources of multiple occurrences validates the existence of a change, and indicates the direction of the emerging trend.
  • Convergence, or emerging scientific consensus — truly transformational weak signals will challenge current scientific paradigms.  As more data is available, however, researchers will begin to discard some of the explanations the challenge provoked, and come to agree on a new paradigm.  The past thirty years’ history of the scientific dialogue regarding climate change illustrates this.
  • Parallax, or acquiring “depth of field” on the weak signal of change by collecting views from multiple perspectives, eg, multiple cultural viewpoints.  This ensures that the original perception of the change is not merely an artifact of a cultural filter (and by “culture” I mean organisational as well as ethnic cultures).

Other validation strategies also emerge from specific observational and scientific disciplines, of course; but these represent a general beginning.
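The first strategy, confirmation, lends itself to a simple tally of corroborating versus contradicting hits per topic.  The class below is an illustrative assumption about how such a tally might be kept; the thresholds and example topics are invented.

```python
from collections import defaultdict

class ConfirmationTracker:
    """Track how many independent scan hits confirm or contradict each
    emerging-change topic (the 'confirmation' validation strategy)."""

    def __init__(self):
        self.tally = defaultdict(
            lambda: {"confirms": 0, "contradicts": 0, "sources": set()})

    def record(self, topic, source, confirms=True):
        entry = self.tally[topic]
        entry["confirms" if confirms else "contradicts"] += 1
        entry["sources"].add(source)  # breadth of sources matters, not just volume

    def validated(self, topic, min_confirms=3, min_sources=2):
        """A change is provisionally validated once enough hits from enough
        distinct sources report it (both thresholds are arbitrary)."""
        entry = self.tally[topic]
        return entry["confirms"] >= min_confirms and len(entry["sources"]) >= min_sources

tracker = ConfirmationTracker()
tracker.record("lab-grown meat", "New Scientist")
tracker.record("lab-grown meat", "trade newsletter")
tracker.record("lab-grown meat", "specialist blog")
```

Tracking distinct sources alongside raw counts gives a crude form of the parallax strategy as well:  three hits from one source are weaker evidence than three hits from three unrelated communities.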

Where cultural contradictions arise between scanning and evidence-based policy-making:

By what criteria do we evaluate the robustness of scientific evidence and research data?  Excellent research and data are usually:

  • Credible;
  • Documented;
  • Authoritative;
  • Statistically significant;
  • Coherent:  the data agree;
  • Consensus-based:  the experts agree;
  • Theoretically grounded; and
  • Monodisciplinary.

These also establish the credibility of facts and patterns of present observations that are cited as evidence in policy formulation and decision-making.  The cultural contradiction arises because useful environmental scan “hits” often register on the opposite end of the continua these criteria represent, as indicated:

  • Any emerging issue unusual enough to be useful will probably lack apparent credibility;
  • it will be difficult to document, as only one or two cases of the change may yet exist;
  • it will emerge from marginalized populations, and be noticed initially by fringe sources, hardly the sort of authoritative sources that civil servants feel confident in citing;
  • as emerging issues are by definition only one or two cases, they are also by definition statistically insignificant;
  • the data will vary widely, converging over time only if the emerging issue matures into a trend;
  • not only will consensus be lacking, but experts will often violently attack reports of emerging issues of change, as they represent challenges to current paradigms and structures of expertise, power, and entitlement;
  • emerging issues of change often challenge previous theoretical structures and necessitate the construction of new theories;
  • and the most interesting new change emerges where disciplines converge and clash.  As the impacts ripple out across all the systems of reality, emerging changes and their impacts require a multi-disciplinary analytic perspective.

Scanning specifically – and foresight generally – can help identify emerging opportunities for forward-looking policy, as well as help assess prospective policy risks, security threats, and public vulnerabilities.  But it will face resistance in an evidence-based policy environment for the reasons elaborated above.  Clearly articulated strategies to validate both scan sources and scan data such as those proposed can increase its acceptance, and therefore usefulness, even in an evidence-based policy context.

Dr Wendy L. Schultz is both Principal at SAMI Consulting and Director, Infinite Futures.  For further information please visit http://www.samiconsulting.co.uk or email Wendy at wendy@infinitefutures.com
