In this article, I will focus on the Old View and the New View of human error. This is a first, short introduction, which lays the groundwork for further articles on this interesting topic.
The term ‘New View’ is already 20 years old and thus not all that new anymore. However, in many minds, and consequently in numerous organizations, the New View has not yet become established.
Errors occur in every company. Fortunately, these mistakes usually have no consequences, and often they are not even noticed. Unfortunately, however, they sometimes cause financial damage or even personal injury.
But why do these errors happen? Are they avoidable mistakes by individuals who should simply have been more careful? Or are errors emergent properties of a complex socio-technical system that have little to do with the individual?
The Old View
One possible view is that human error, and thus its negative consequences, would be avoidable if everyone adhered to the rules. If an error occurs due to carelessness, it is sufficient to point this out to the person and, if necessary, to punish them in order to solve the problem. In extreme cases, the punishment can go as far as removing the culprit from the system. Criminal consequences are also conceivable; these are not necessarily initiated by the company but, in the case of offences prosecuted ex officio, by the public prosecutor.
The system itself is considered to be inherently safe. The people in the system are seen as potential sources of error and as the system’s weakness. If everyone involved makes an effort and adheres to the rules, nothing can actually happen. The safety level of the system can then be measured by the number of incidents or accidents within a given period.
But how does this view help to make a system safer?
I am inclined to say: not at all! Companies are complex socio-technical systems. A characteristic of these systems is that not all effects of the interaction of different system components are known. Errors, but also system safety, are emergent system properties.
But what actually is an error?
We differentiate between different types of errors. There are unintentional or unconscious errors, which happen without the acting person knowing their effects on the system. And there are intended or deliberate mistakes: mostly deliberate deviations from existing procedures or rules. Such deviations occur, for example, in the event of conflicting goals, under high production pressure, or because no better alternatives are available. They are thus a result of inadequate systems. It is also often the case that, for some time, these deviations have achieved better results than the official procedures.
Whether an action was an error or not often has to do with its result. The term ‘error’ is therefore a backward-looking judgement of an action whose result has become known in the meantime. Especially in an environment of high complexity and incomplete information, the same action can lead to a positive result in one case and a negative result in another. So whether someone made an error or not can depend on circumstances that were still unknown at the time.
The New View
The New View distances itself from the perspective of the human as a source of error and as the weakest link in the chain. Instead, humans are seen as a system component that enables high system safety. The starting point is that people come to work to do a good job. If an error occurs, it cannot simply be reduced to the action of an individual; it is necessary to consider the error in its system context. After all, the action that later turned out to be an error was considered by the acting person, at the time of execution, to be useful for achieving the goal.
People make decisions under high pressure, with conflicting goals, and under great uncertainty. In a complex system, decisions have to be made with incomplete information, or the amount of information is so large that it cannot be processed at all. This can lead to information being overlooked or deliberately not being included in decision-making.
Making the system safer
In this context, I consider a system to be an organization or organizational unit with its employees, technical systems, and processes. Where appropriate, the term system can also be extended to include external components.
Fortunately, as mentioned at the beginning, most errors remain without consequences. This is primarily due to people’s resilience and sometimes simply due to chance.
Errors provide an opportunity to learn and to make the system safer. When errors occur, it is not expedient to limit the analysis to the actions of the individual (the Old View). Removing the ‘culprit’ from the system does not improve it. It is crucial to take a system perspective in the analysis and to seek to understand why the decision made sense to this individual in this specific situation (the New View). It must also be taken into account what information the person had available and what conflicting goals they were exposed to. This raises the question of whether another, comparably competent person might have made the same decision in a comparable situation. If this question is answered with ‘yes’, an adjustment in the system is required in order to achieve a sustainable improvement.
 In the following, I will write ‘errors’ for better readability. By this, ‘human errors’ are meant.
 I deliberately use the phrase ‘happen’ because these mistakes are not made consciously.
 This is a Safety-I perspective. A system can be made safer by learning from mistakes. However, this is not the only way: understanding the factors that make a system work with extremely high reliability can improve the system further, without negative events having to be experienced first. I will take up the topic of Safety-I and Safety-II in upcoming articles.