Here are the thoughts that came up while I was reading the fourth chapter, "Human Error? No, Bad Design", of "The Design of Everyday Things".
Key takeaways
- assume that most of the time "human errors" can be fixed by better design
- this isn't 100% true, but thinking in this direction first, and exploring other explanations only if you're really stuck, should be productive
Blaming others
Most of the time when we blame "human error", it's an attempt to avoid redesigning the system, or to avoid admitting that we designed it the wrong way. I've seen this many times: in response to errors in our system, designers say "people are just doing stupid things" or something along those lines. It isn't always the case, but quite often the problem could be solved with better design. So it should be quite beneficial for me to think that way.
Culture is an important part as well. If we're in a culture where "errors" are a source of data, not a reason to blame people, we'll actually get that data from people, because they will be willing to share it. But the important part is to use this information for improvement. I've seen examples where, after yet another incident, a person was "brave" enough to say something like "Yes, it was a mistake, I did it"... and that's it. Nothing happened; it was a kind of cult of "blamelessly accepting" mistakes. Of course it led to an awful working environment.
Time is a huge stress factor
It's true that the design of the system might be really good and take into account all the declared needs of the different stakeholders. But then management demands higher performance, and people start to violate some procedures or skip them altogether.
And what to do in that case?
When I started to think about it, I realised it's a rabbit hole with lots to consider, so I'll just list a few ways it could be tackled and maybe return to it more thoroughly later.
| Way to tackle | Comments |
| --- | --- |
| Design the system assuming that management will become more demanding and throughput will have to increase | 1. Overengineering the system makes it more expensive. 2. By adding this implicit requirement you may sacrifice other, more important ones. |
| Make it impossible for people to skip procedures | 1. Doesn't sound possible. 2. People are quite creative and their behavior is hard to predict. 3. Creating a foolproof system may be impractical and costly. |
| Reevaluate the system requirements from time to time and adjust the procedures or the system | 1. Sounds good: we change the system only when needed and keep the possibility of change open, since some changes will always be needed. 2. How do we make sure the reevaluation actually happens, and that the needed changes are implemented when it does? |
| Introduce continuous monitoring of the system and a mechanism that signals when some procedures are no longer adequate (see the sketch after the table) | 1. Sounds the most advanced, but also the most reasonable to me. 2. It raises the cost of the system, but adjustments happen only when they are needed. |
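To make the last row a bit more concrete, here is a minimal sketch of what "monitoring the procedures themselves" could look like. Everything in it is an assumption for illustration: the step names, the 10% threshold, and the in-memory log of runs are all hypothetical, not something from the book. The point is only that frequent skips are treated as a signal that the procedure needs a redesign, not as a reason to blame the people skipping it.

```python
# Hypothetical sketch: track which required procedure steps are actually
# completed in each run, and flag steps that get skipped so often that
# the procedure itself probably needs rethinking.

from collections import Counter

REQUIRED_STEPS = ["fill_checklist", "peer_review", "final_signoff"]  # hypothetical steps
SKIP_RATE_THRESHOLD = 0.10  # flag a step if more than 10% of runs skip it (arbitrary)


def find_problem_steps(runs: list[set[str]]) -> list[str]:
    """Return the steps whose skip rate exceeds the threshold across recorded runs."""
    skips = Counter()
    for completed in runs:
        for step in REQUIRED_STEPS:
            if step not in completed:
                skips[step] += 1
    total = len(runs)
    return [s for s in REQUIRED_STEPS if total and skips[s] / total > SKIP_RATE_THRESHOLD]


# Example: under time pressure, "peer_review" gets skipped in 2 of 4 runs.
observed_runs = [
    {"fill_checklist", "peer_review", "final_signoff"},
    {"fill_checklist", "final_signoff"},
    {"fill_checklist", "peer_review", "final_signoff"},
    {"fill_checklist", "final_signoff"},
]
print(find_problem_steps(observed_runs))  # ['peer_review'] -> this procedure needs attention
```

The output would then feed the reevaluation from the previous row: instead of hoping someone remembers to review the procedures, the data itself tells you which ones are being worked around.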