Social scientists have used the term socio-technical system for decades; more recently, risk and resilience practitioners have begun using it too. So, what exactly is this type of system, and what does it have to do with risk management?

This article summarises the characteristics of socio-technical systems in a way that is consistent with descriptions published from the 1950s to the present day. It also discusses the distinction between social resilience and technical redundancy in the design of this type of system. To put these terms in a risk management context, it is important first to understand the evolution of the ideas that led to their development and use in describing socio-technical systems.

Historical Context of Socio-Technical Systems

The socio-technical concept first arose in about 1949 during the post-war reconstruction effort in Britain. In 1951, Trist and Bamforth of the Tavistock Institute published research on how social systems behaved in organisations that built and operated engineering systems, drawing on studies of British coal mining.

Prior to the 1950s, engineers designed technologies for specific purposes. The organisations that built and operated these technologies were designed to suit the requirements of the technology, often by translating ideas from technological systems into management systems. The factories, mines and power stations of the day were all examples. The technical objectives of the system tended to take priority over the social needs and requirements of the workforce that built and operated the technology. Unsurprisingly, labour disputes were frequent. Weber’s principles of bureaucracy and Taylor’s concept of scientific management were seen as the best way to design an engineering organisation.

By the early 1960s, the technological imperative in organisation design was giving way to an emphasis on positive economic and human results. The aim was to find the best ways to match the requirements of both the social and technical systems. Emery and Trist studied this shift in technological organisations and derived a set of principles. An important principle was that the work system should be seen as a set of activities that made up a functioning whole.

The idea of redundancy of parts is well known to engineers as a way to improve the reliability of an engineered system, as the short example below illustrates. During the 1960s, the complementary idea of redundancy of functions developed: where redundancy of parts duplicates components, redundancy of functions equips individuals to perform several roles, which led to the multi-skilling of the workforce. This was quite different to the earlier ideas of Taylorism in technical organisations.
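
To see why duplication of parts improves reliability, consider a minimal worked example in Python. The reliability figure and variable names are invented purely for illustration; they are not drawn from this article or any cited source.

    # Redundancy of parts: if either of two independent components can
    # perform a function, the function is lost only when both fail.
    r = 0.90                    # assumed reliability of one component (illustrative)
    system = 1 - (1 - r) ** 2   # reliability with one redundant duplicate
    print(f"{system:.2f}")      # prints 0.99: duplication lifts 90% to 99%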

The idea of socio-technical improvement developed to find the best match between the technological and social components of a system. Hence, the term socio-technical system was coined to describe holistically the functioning of these social and technical components.

The notion of socio-technical systems is concerned with interdependencies, both internal and external. Internally, self-regulating groups depend on one another to achieve the desired output from the whole system. Whole enterprises also interact with their external environment: the enterprise as a socio-technical system is open to supplies and services provided by other enterprises, and open to customers who rely on it to provide goods or services. The socio-technical system is also open to disturbances or disruptions, sometimes with surprising consequences that cause instability. This dynamic instability is generally referred to as risk and, in extreme cases, can lead to a crisis.

Today, almost all organisations are dependent on technology to achieve their goals, especially networked computer and information-sharing technologies. However, unlike the earlier socio-technical systems, many organisations today do not directly employ the designers and operators of the technologies. The interdependence of technologies, both internal and external to the enterprise, is now far more complex.

While earlier socio-technical systems were relatively simple (for example, the power station of the 1960s), today’s complex socio-technical systems are interconnected in ways that even the designers do not fully understand (for example, the Internet of Things).

The next section summarises the terminology that is widely used in the design of socio-technical systems. These are not formal definitions but an attempt to describe the concepts as generally understood by people who design, operate or study socio-technical systems.

Terminology Used by Designers and Operators of Socio-Technical Systems

A socio-technical system is a network of interconnected elements, comprising groups of people and technologies, that functions as a single system (simple or complex) designed to achieve specific goals.

For a socio-technical system:

  • Error is a technical concept that describes the difference between the system goal and the actual output of the system.
  • Redundancy is a technical concept used to describe certain processes or components that improve the reliability of the system as a whole (for example, duplication of elements or processes). In a high reliability socio-technical system, the design aim would be to have no errors in operations.
  • Regulation is a technical concept that refers to the process of detecting and responding to system errors, including those caused by surprising disturbances in the operating environment. Self-regulation is a key feature of high reliability socio-technical systems.
  • Resilience is a social concept that refers to the elasticity or adaptability of a socio-technical system: after a significant failure to self-regulate (a disturbance beyond the system’s design parameters), a resilient system may in time resume the achievement of its goals (reboot) or set new goals (after a redesign). Resilience is important when high reliability is not achievable or not desirable. A short sketch illustrating these four concepts follows this list.

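To make these four concepts concrete, the sketch below models a single self-regulating feedback loop in Python. It is an illustration only: the goal, sensor behaviour, thresholds and function names are all invented for this example rather than drawn from any published socio-technical model.

    import random

    GOAL = 100.0        # the system goal (desired output)
    MAX_ADJUST = 50.0   # the largest error the system is designed to absorb

    def read_sensor() -> float:
        # A single, imperfect reading of the actual system output.
        return 100.0 + random.gauss(0, 2)

    def redundant_reading(n: int = 3) -> float:
        # Redundancy: combine several independent readings so that one
        # faulty element cannot mislead the system as a whole.
        return sum(read_sensor() for _ in range(n)) / n

    def regulate(actual: float) -> float:
        # Regulation: detect the error (goal minus actual output) and
        # respond with a corrective adjustment.
        error = GOAL - actual
        if abs(error) <= MAX_ADJUST:
            return error  # self-regulation absorbs the disturbance
        raise RuntimeError("disturbance beyond design parameters")

    def run(disturbance: float) -> None:
        actual = redundant_reading() + disturbance
        try:
            adjustment = regulate(actual)
            print(f"output {actual:.1f} regulated (adjustment {adjustment:+.1f})")
        except RuntimeError:
            # Resilience: self-regulation has failed, so the system must
            # in time reboot toward its old goal or be redesigned with new ones.
            print("failure to self-regulate: reboot or set new goals")

    run(disturbance=3.0)   # routine disturbance, absorbed by regulation
    run(disturbance=80.0)  # extreme disturbance, resilience now required

In the sketch, redundancy and regulation keep routine errors within design parameters, while resilience governs what happens once those parameters are exceeded.
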
Some important references that discuss these issues in more detail are listed below.

References

Jarman, A. 2001 ‘Reliability’ Reconsidered: A Critique of the Sagan-LaPorte Debate Concerning Vulnerable High-Technology Systems. Chisholm and Lerner Paper, Canberra

Landau, M. 1969 Redundancy, Rationality, and the Problem of Duplication and Overlap. Public Administration Review, Vol. 29, No. 4 (July/August), 346-358

LaPorte, T.R. 1996 High Reliability Organizations: Unlikely, Demanding and At Risk. Journal of Contingencies and Crisis Management, Vol. 4, No. 2, 60-71. Oxford, Blackwell Publishers

Perrow, C. 1999 Normal Accidents: Living with High-Risk Technologies. Princeton, New Jersey, Princeton University Press

Rochlin, G.I. 1993 Defining ‘High Reliability’ Organizations in Practice. In Roberts, K.H. (ed.), New Challenges to Understanding Organizations. New York, Macmillan

Sagan, S.D. 1993 The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton, New Jersey, Princeton University Press

Schulman, P.R., Roe, E., van Eeten, M. and de Bruijne, M. 2004 High Reliability and the Management of Critical Infrastructures. Journal of Contingencies and Crisis Management, Vol. 12, No. 1, 14-28. Oxford, Blackwell Publishers

Trist, E. 1981 The Evolution of Socio-Technical Systems: A Conceptual Framework and an Action Research Program. Occasional Paper No. 2. The Ontario Quality of Working Life Centre, Canada

Von Bertalanffy, L. 1950 The Theory of Open Systems in Physics and Biology. Science, Vol. 111, 23-29

Wildavsky, A. 1989 Searching for Safety. New Brunswick, USA, Transaction Publishers

Kevin Foster
Dr Kevin J. Foster is the managing director of Foster Risk Management Pty Ltd, an Australian company that provides independent research aimed at finding better ways to manage risk for security and public safety, and improving our understanding of emerging threats from ‘intelligent’ technologies.