Superintelligence, as defined in the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, is “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Humans are already building and expanding upon intelligent systems, albeit slowly. The book asks questions about the future when machine intelligence, machine-human hybrid intelligence, or genetically enhanced human intelligence achieves levels far beyond that of the human brain today. What are the risks? What are the potential rewards? What are the chances any of these things will come to pass? What should we be doing to influence the best outcome? The possibilities of superintelligence are both exciting and sobering.
One danger the book discusses is that of a “fast takeoff”. When an artificial intelligence learns how to learn, and is able to invest more processing power and resources into learning, this creates an “intelligence explosion”. A fast takeoff is when exponential improvement results in superintelligence beyond human ability to grasp or perhaps even control. Fortunately, Bostrom explains that there is a factor that pulls the exponential curve downward to steady improvement, or even to a plateau. He calls this factor recalcitrance. A recalcitrant person, system or situation is one which stubbornly refuses to heed good advice, resists improvement, or does not respond to remedies. Bostrom uses recalcitrance in this context as a factor limiting how fast an intelligent system can improve itself, offering the following equation.
Rate of change in intelligence = Optimization power / Recalcitrance
Optimization power is the amount of effort that is put into improving a system’s intelligence. We would expect an intelligent system to invest a portion of its resources back into improving its intelligence. That would be the smart thing to do. A superintelligence will be smart enough to find ways to win access to ever more computing power, data and other inputs of optimization power. Unless it is effectively “boxed in” and somehow kept from grabbing more resources, a superintelligent AI may improve so quickly that by the time we notice, it is too late to do anything.
Recalcitrance is the tendency of a system to resist improvement. The amount of recalcitrance in the formula above is the answer to the questions, “How hard is it to make an intelligent system smarter?” in general, and specifically, “How hard is it for this intelligent system to make itself smarter?” Formalized and plotted in graphs, the three basic scenarios look something like this.
The recalcitrance is significantly higher than optimization power. Improvement of a system’s intelligence levels off. This is a slow takeoff. We are in little danger from a superintelligence explosion in this scenario.
The recalcitrance is lower than the optimization power. Improvement of a system’s intelligence is steady. This could be a fast takeoff, depending on the angle of the growth line. However, in this scenario we have a chance to keep an eye on the superintelligence, and a hand near the off switch.
In this scenario the recalcitrance is significantly lower than the optimization power. Improvement of a system’s intelligence is exponential. This is a fast takeoff. Such a superintelligence explosion may happen faster than we can roll out the red carpet for our AI overlords.
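The three scenarios can be made concrete with a small simulation of Bostrom’s formula. This is a minimal sketch, not anything from the book: the functional forms chosen for optimization power and recalcitrance are illustrative assumptions. We assume the system reinvests a fixed share of its current intelligence as optimization power, and we vary only how recalcitrance scales with intelligence.

```python
# Sketch: discrete simulation of rate of change = optimization power / recalcitrance.
# The reinvestment model and the recalcitrance functions below are illustrative
# assumptions, not taken from Bostrom's book.

def simulate(recalcitrance, steps=50, i0=1.0, reinvest=0.5):
    """Track intelligence I over time, where optimization power
    O = reinvest * I (the system reinvests a share of its capacity)
    and recalcitrance is a function of current intelligence."""
    intelligence = [i0]
    for _ in range(steps):
        i = intelligence[-1]
        optimization_power = reinvest * i
        rate = optimization_power / recalcitrance(i)
        intelligence.append(i + rate)
    return intelligence

# Scenario 1: recalcitrance outpaces intelligence -> improvement levels off
plateau = simulate(lambda i: i * i)
# Scenario 2: recalcitrance keeps pace with intelligence -> steady, linear growth
steady = simulate(lambda i: i)
# Scenario 3: recalcitrance stays flat -> exponential "fast takeoff"
explosion = simulate(lambda i: 1.0)
```

Plotted, the three trajectories reproduce the three curves described above: under the same reinvestment effort, it is only the recalcitrance term that separates a plateau from an intelligence explosion.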
Which one of the three curves above most resembles your organization’s ability to improve how it improves, how it learns or how it performs? The improvement curve of most organizations often looks like a squiggly line, a combination of all three, at different stages of the journey. Often, this is by happenstance and not by design, because of limited efforts to assess risks, challenges and sources of resistance prior to launching into a lean journey, as well as the difficulty leaders have in finding and facing the bad news.
The formula to estimate the fast takeoff of superintelligence is also interesting from the point of view of how organizations make a fast takeoff in their lean transformations. Recalcitrance is the common denominator in whether organizations succeed or struggle with transforming themselves into learning organizations, improving performance through continuous improvement and building adaptive cultures. We can increase the effort to optimize our performance by investing in more training, hiring consultants or sponsoring workshops to redesign our work. But these inputs have diminishing returns unless we address our recalcitrance. Applied to systems that rely on human behavior, recalcitrance can be seen as people’s stubborn uncooperativeness, mistrust or defiance toward authority, leaders’ struggles in managing change, or resistance to adopting new ideas.
In part 2 of this series we will examine ways to identify and address recalcitrance.