There’s been a lot of talk lately, from big names like Musk, Hawking, and Gates, that humanity might face some future threat from the intelligent software systems, aka artificial intelligences, that we are likely to build. Kevin Kelly, a longtime pundit on things digital since co-founding Wired magazine, just published an essay arguing that these fears are way overblown. He listed five common assumptions people make about the growth of superhuman AI, claiming that there is no evidence supporting any of them. He therefore thinks we might be waiting superstitiously for super AI the way the 20th-century Melanesian cargo cults waited fruitlessly for the WW II cargo planes to return with trade goods. His article is worth reading, but in case you don’t read it, the five unsupported assumptions are these:
- Artificial intelligence is already getting smarter than us, at an exponential rate.
- We’ll make AIs into a general purpose intelligence, like our own.
- We can make human intelligence in silicon.
- Intelligence can be expanded without limit.
- Once we have exploding superintelligence it can solve most of our problems.
People jumped all over this, many of them taking the position that Kelly’s arguments were straw men. I was surprised to realize that my recent studies for this blog’s book project had actually given me an informed opinion. I posted it there, and repeat it below.
[Kelly’s post is …] Right in many ways, but wrong on the risk of the superhuman. Here are two risky scenarios where AI exceeds the abilities of either single humans or groups, without ever needing to be the superhuman artificial general intelligence straw man. Either scenario, or both, could be imminent. [ I meant imminent in a historical sense, but probably not within the next decade. ]
[ My first point below relates to Kelly’s argument that intelligence alone can’t increase knowledge very much without having a way to do research and engineering in the real world. ]
(1) Yes, new knowledge often requires real-world experiments. However, models and simulations can and do help zero in on which experiments to do. A better, faster facility for gathering and integrating existing knowledge will do better at picking which simulations to try. Simulations are already a strength of current silicon systems. Give such a system effectors for doing experiments (there are many ways to do this, including help from cooperative or coerced humans), and it learns more about the real world. Because it would probably not have human cognitive or emotional biases, it could skip the peer review we use to catch those errors. It could do all this faster than any team of humans, and with a focused agenda that might be kept secret from us. The resulting knowledge is power, which might be wielded to our detriment by any system whose goals don’t align well with ours.
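To make the "simulations zero in on which experiments to do" point concrete, here is a toy sketch of my own (not anything from Kelly’s essay or my original comment): cheap, noisy simulation runs screen a large pool of candidate experiments, and the scarce real-world experiment budget is spent only on the candidates the simulation ranks highest. All names and numbers here are invented for illustration.

```python
# Toy illustration (my own sketch, not from the source): screening candidate
# experiments with a cheap noisy simulation before spending a small budget
# of costly real-world experiments.
import random

random.seed(0)

# Hypothetical search space: 100 candidate experiments, each with an
# unknown true payoff that only a real experiment reveals.
true_payoff = {i: random.gauss(0, 1) for i in range(100)}

def simulate(candidate):
    """Cheap but noisy model of the world (stand-in for a simulation)."""
    return true_payoff[candidate] + random.gauss(0, 0.5)

def run_experiment(candidate):
    """Costly real-world experiment: reveals the true payoff."""
    return true_payoff[candidate]

# Screen every candidate in simulation, then run real experiments
# only on the most promising few.
budget = 5
ranked = sorted(range(100), key=simulate, reverse=True)
results = {c: run_experiment(c) for c in ranked[:budget]}

best = max(results, key=results.get)
print(f"ran {budget} experiments instead of 100; best payoff {results[best]:.2f}")
```

The design point is just the one in the paragraph above: a system that is faster at building and running such surrogate models gets more real-world knowledge per experiment than any human team working through the same candidate list.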
(2) Yes, the well-established "society of mind" concept means that our ability to solve problems draws on a variety of knowledge-extracting abilities that we lump together under "intelligence". We understand very poorly how this gets coordinated in a single human mind. But people are working on it. And we likely understand better how problem-solving coordination gets done in groups of human minds. For an AI, there is no difference between the group and individual situations [ an AI can easily be a "group" ], so principles derived from either or both will help it. If we give strong coordinating power to an AI with any fruitful set of intelligent abilities, then what it can do will have emergent properties above and beyond the mere sum of its component abilities. Such a system could emerge at any time, and lacking the functional equivalent of moral reasoning and a conscience, could do us damage. It would not need new knowledge, just an understanding of Machiavelli and access to the internet.
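The "coordination beats the sum of the parts" claim can be sketched in miniature (again my own toy illustration, with invented function names, not anything from the source): two narrow abilities, neither of which can solve the task alone, plus a thin coordination layer that chains them.

```python
# Toy illustration (my own sketch): a coordinator chaining two narrow
# "abilities" so the combination solves a task neither handles by itself.

def extract_numbers(text):
    """Narrow ability 1: pull integers out of free text."""
    return [int(tok) for tok in text.split() if tok.lstrip('-').isdigit()]

def add_all(numbers):
    """Narrow ability 2: arithmetic over a list of numbers."""
    return sum(numbers)

def coordinator(task):
    """The coordination layer: route the task through the right abilities
    in sequence -- the 'society of mind' glue."""
    return add_all(extract_numbers(task))

print(coordinator("stock: 3 crates, 12 boxes, 7 loose units"))  # -> 22
```

The text-parsing ability knows no arithmetic and the arithmetic ability cannot read text; the capability to answer the question lives only in the coordinator that wires them together, which is the point of the paragraph above.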