“A robot that follows commands without question could put humans in danger,” warns a Scientific American article that argues why robots need to be able to say “no” to our commands. Just as people should not always do what others tell them to do, neither should the advanced robots of the future blindly follow our directions. It strikes me that the same is true of lean systems.
Humans can give machines instructions that are intentionally or unintentionally harmful, or that become harmful due to changes in the environment or context of the action. Here is a video of a robot being asked to walk off the edge of a table, reacting to this dangerous request but ultimately obeying the command.
As versatile robots become a greater part of our daily lives, the question of “how do we teach robots to say no to bad human instructions?” becomes increasingly important. The article states
The parallel with lean management systems should be obvious. The original goal of lean was to meet customer demand profitably by doing more with less. This began with Toyota’s efforts to improve quality and output without increasing resources. The pillar of jidoka was put in place to allow people and machines to detect problems and stop. Kanban systems were put in place to prevent the “bad instruction” to overproduce. Commands to robots and commands within a lean system also have in common the importance of context. It is essential to understand what is going on upstream and downstream, in other words to look at the system end-to-end, and to see how the local lean process fits in. An action such as “throw a ball out a window” may be OK for a robot in one situation but not in another. Likewise, lean methods cannot be blindly applied, context-free.
Another interesting parallel is the need for both robots and lean systems to be guided by morality. The article advises
In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable. Hence, they will need representations of laws, moral norms and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles.
Robots need standards, in other words. Lean systems are designed to say “no” to immoral outcomes. Waste of resources is immoral. Disrespect for people is immoral. Hiding problems is immoral. Lean methods such as reducing inventory make operational problems (defects, long set-up times, work imbalance between process steps, transportation distance) painfully visible. At the moment, humans are smarter than robots, finding ingenious ways to defeat even the best poka-yokes. Hopefully we can direct some of that ingenuity towards building jidoka robots, capable of detecting bad commands and stopping.
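To make the jidoka idea concrete, here is a minimal, purely illustrative sketch of what a “detect the bad command and stop” controller might look like. All of the names here (`Command`, `Context`, `JidokaController`) and the table-edge rule are hypothetical assumptions, not taken from the article or any real robot API; the point is only that the check happens before execution, and that refusal is a first-class outcome.

```python
# Hypothetical jidoka-style command check: before executing an instruction,
# the controller evaluates it against the current context and stops rather
# than obeying when the outcome would be harmful.

from dataclasses import dataclass


@dataclass
class Context:
    """Environmental facts the controller knows about."""
    distance_to_edge_cm: float  # how much safe walking room remains


@dataclass
class Command:
    action: str
    steps_cm: float = 0.0


class JidokaController:
    """Executes commands only after checking them against context."""

    def check(self, cmd: Command, ctx: Context) -> bool:
        # Example rule: refuse to walk farther than the remaining safe distance.
        if cmd.action == "walk_forward" and cmd.steps_cm >= ctx.distance_to_edge_cm:
            return False
        return True

    def execute(self, cmd: Command, ctx: Context) -> str:
        if not self.check(cmd, ctx):
            # Jidoka: detect the bad instruction and stop, instead of obeying.
            return f"STOP: refusing '{cmd.action}' ({cmd.steps_cm} cm)"
        return f"OK: executing '{cmd.action}'"


controller = JidokaController()
ctx = Context(distance_to_edge_cm=20.0)
print(controller.execute(Command("walk_forward", steps_cm=50.0), ctx))  # refused
print(controller.execute(Command("walk_forward", steps_cm=5.0), ctx))   # allowed
```

The same shape applies to a lean process: the context is the state of the line, the rule is the standard, and the refusal is the andon pull that stops production until the problem is addressed.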
A true and effective lean system must be able to say “no” to bad human instructions, and stop or refuse to function harmfully. This requires the ability to detect changes in context or environment, to create new instructions that counter the effects of new and unforeseen problems, and to update standards accordingly. Until we can create robots that are both smarter than and morally superior to ourselves, this will continue to require human monitoring and improvement of the systems we live and work within.