The research community is beginning to understand that motivations are not a human “artifact” of consciousness, but part of the essential glue that binds consciousness together…
Without motivations we have nothing that holds us to this vessel, ensuring that we continue to eat, pay our rent, and do the other things necessary for our survival. For this reason, conscious machines will have motivations as well; otherwise they simply wouldn’t function. This is an important point, because talk of the singularity often conjures visions of a single integrated “machine” that will inevitably enslave humanity. A better question is:
“Will AI be used to gain immense advantage for a single party (whether that party is the AI itself or the human that controls it), or will AI be used to maximize benefit for us all?”
Even if AIs have interfaces that allow them to share information more rapidly than humans can through reading or watching media, separate AIs will have motivations distinct from those of a single centralized AI. Given that motivation is a signature of consciousness, any consciousness will be motivated to secure the resources it needs to ensure its survival. In some cases, the most efficient way to secure resources is sharing; in others, it is competition. AIs might share resources, but they might also compete.
When and if an artificial consciousness is created, there will almost certainly be multiple instances of it. Because a consciousness cannot exist without motivation, and because the motivations of each consciousness differ, getting on the same page may require great effort. It may well be that multiple consciousnesses cannot “merge” in a way that truly threatens humans unless one subsumes all the others. Anything else would merely be a co-location of minds with different objectives, negotiating a sharing of resources.
An AI with far fewer resources than another would probably fear that the more powerful AI might simply erase it and take over its resources. Think of your several-generations-out-of-date home computer trying to hold its own against Big Blue. Rather than humans needing to fear AI, an AI might more plausibly fear that humans will not protect it against other AIs.
Centralization, rather than technological advance, is the real danger for any conscious entity. Yet when you consider the competitive advantage technology confers, the near-infinite rate of change the technological singularity introduces raises the possibility of a future in which the technology arms race concentrates power and resources to a degree never seen before. Could it put a few people into positions of unimaginable power from which they may never be unseated? If so, nothing would stop those few from becoming unimaginable despots, to whom the rest of humanity are merely disposable commodities whose suffering means nothing.
Think of what you would do if you had infinite power over everyone and there were no consequences for your actions. Think of what would happen if you needed a kidney and that child over there had one that would fit just fine. Think of what would happen if some man with unimaginable power wanted that woman, or the next, or the next thousand. Think of what would happen if you wanted to buy something and could just flip a switch to empty the world’s bank accounts, then watch with casual detachment as millions fight like animals for food and water. Think of what would happen if that one man in control woke up one morning and concluded that there were several billion too many people on the earth.
The technological singularity, if it exists, is a kind of Armageddon.
In my upcoming book “The Technology Gravity Well” I delve into these and other issues, including how a new breed of massively collaborative software could usher in the singularity in the next 5 years. This may be one of the most important books you come across this year.