Computers will soon outsmart us. Does that make an AI rebellion inevitable?


The question, “Will Computers Revolt?” is really many different questions rolled into one. Will computers become the dominant intelligence on the planet and will they take our place? What does being “dominant” mean? Will computers and humans be in conflict? Will that conflict be violent? Will intelligent computers take jobs and resources from humans?



Most AI experts agree that computers will eventually exceed humans in thinking ability.  But then, even more questions arise. When will it happen? What would it be like to ‘exceed humans in thinking ability’? Will computer intelligence be just like human intelligence—only faster? Or will it be radically different?


Although today’s AI systems have remarkable abilities, they are not “thinking” in any general sense of the word. Accordingly, we now use terms such as AGI (Artificial General Intelligence), Strong AI, and True AI to distinguish the idea of true thinking from today’s AI systems, which have tremendous capabilities but limited scope.


With the coming of AGI, many new risks will emerge, but before exploring these, let’s consider how far in the future AGI is likely to arrive.


When Will AGI Happen?


Sooner than you think!  Why don’t we already have AGI? Two issues hold us back:



  1. Creating the computational power needed for AGI

  2. Knowing what software to write for AGI


AI experts have produced differing estimates of the computational power of the human brain and differing predictions of the growth of CPU power. The lines eventually cross at a “singularity” (a term popularized by Ray Kurzweil), with CPUs exceeding brains in brute-force computation in ten years, or twenty, or half a century, depending on the underlying assumptions.
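To see why the estimates diverge so widely, here is a back-of-envelope sketch in Python. All three input figures are illustrative assumptions (not the author’s), and nudging any one of them shifts the crossover by years or decades:

```python
import math

# Back-of-envelope crossover estimate. All three figures below are
# illustrative assumptions, not the author's numbers.
brain_ops_per_sec = 1e16       # assumed brain throughput, operations/second
machine_ops_per_sec = 1e13     # assumed current machine throughput
doubling_period_years = 2.0    # assumed Moore's-law-style doubling period

# Years to cross = doublings needed x doubling period.
doublings = math.log2(brain_ops_per_sec / machine_ops_per_sec)
print(f"crossover in roughly {doublings * doubling_period_years:.0f} years")  # ~20
```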


But this may be the wrong question. We all know that lightning-fast searches on a properly-indexed database can produce results a million- or billion-fold faster than the brute-force approach. What portion of AGI will be amenable to this type of software efficiency?
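As a concrete toy illustration (not from the article), here are the same hundred lookups run both ways—first as a brute-force scan, then through a hash index standing in for the “properly-indexed database”:

```python
import random
import time

# Toy demonstration of why indexing matters: 100 identical lookups,
# first by brute-force scan, then via a hash index (a Python dict).
n = 1_000_000
keys = list(range(n))
index = {k: f"record-{k}" for k in keys}   # the "properly-indexed database"
targets = random.sample(keys, 100)

start = time.perf_counter()
for t in targets:                          # brute force: scan until found
    for k in keys:
        if k == t:
            break
brute = time.perf_counter() - start

start = time.perf_counter()
for t in targets:                          # indexed: one hash lookup each
    _ = index[t]
indexed = time.perf_counter() - start

print(f"brute force: {brute:.3f}s  indexed: {indexed:.6f}s")
```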



Boston Dynamics’ robots already exhibit the fluid motion and coordination that we humans get from the cerebellum’s roughly 69 billion neurons—about 80 percent of the brain’s neurons. And robots accomplish this with a few CPUs—not because the CPUs exceed the computational power of all those neurons, but because the designers of robotic software understand physics, forces, and feedback, and can write far more efficient software than the trial-and-error learning approach used by your brain.
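For a flavor of that hand-written, physics-aware approach, here is a minimal proportional-integral-derivative (PID) feedback controller. The gains and the one-dimensional “joint” model are illustrative assumptions, not Boston Dynamics’ code:

```python
# Minimal sketch of explicit feedback control--the kind of physics-aware
# code roboticists write directly instead of learning it by trial and error.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Combine present error, accumulated error, and rate of change.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=4.0, ki=0.2, kd=2.0)          # illustrative gains
angle, velocity, dt = 0.0, 0.0, 0.01
for _ in range(1000):                       # simulate 10 seconds
    torque = pid.update(setpoint=1.0, measured=angle, dt=dt)
    velocity += torque * dt                 # toy physics: unit inertia, no friction
    angle += velocity * dt
print(f"final angle: {angle:.3f}")          # settles near the 1.0 setpoint
```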


The crux of the argument is that brains aren’t very efficient computational devices—they get the job done, but there are better, faster ways to approach AGI software, and developers can use them. We may already have computers with enough power for AGI and simply not know it yet—which brings us to the second issue.


Most people see the limitations of today’s AI systems as evidence that AGI is a long way off. I beg to differ. AI already has most of AGI’s needed pieces in play; they just don’t work together very well—yet. While the Jeopardy!-playing Watson is an amazing achievement, it is unrealistic to expect that it would ever manifest “understanding” or common sense at a human level. You understand coffee because you’ve seen it, poured it, spilled it, scalded yourself with it, and so on. Watson has only “read” about coffee. You and Watson cannot have an equivalent understanding of coffee (or anything else) because Watson has had no equivalent real-world experience. For true understanding, Watson-scale abilities need to be married to sensory and interactive robotic systems so that common sense can emerge. We will also need to incorporate object and knowledge representation, pattern recognition, goal-oriented learning, and other aspects of AI to achieve AGI. These pieces already exist in various forms, and AGI could come together in as little as ten years—much sooner than most people think.
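A toy sketch of that marriage, with illustrative names of my own choosing rather than anything from the book: a concept counts as “grounded” only once its text knowledge is linked to sensed, interactive experience.

```python
from dataclasses import dataclass, field

# Toy sketch (class and field names are illustrative, not the author's
# design): a concept combining "read" knowledge, as Watson has, with
# grounded sensory episodes, as you have.
@dataclass
class Concept:
    name: str
    facts: list = field(default_factory=list)      # symbolic/text knowledge
    episodes: list = field(default_factory=list)   # sensed, interactive experience

    def grounded(self) -> bool:
        # A crude stand-in for "understanding": the concept is tied to
        # at least one real-world interaction, not just to text.
        return len(self.episodes) > 0

coffee = Concept("coffee", facts=["a brewed drink", "served hot"])
print(coffee.grounded())   # False -- text-only knowledge, like Watson's

coffee.episodes.append({"action": "pour", "percepts": {"temp_C": 80, "smell": "roast"}})
print(coffee.grounded())   # True -- now linked to sensed experience
```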


With this shortened timeframe in mind, it’s time for serious thinking about what an AGI system might be like, what concerns we should have, and how we should prepare ourselves. We humans will necessarily lose our position as “biggest thinker” on the planet, but we have full control over the types of machines which will take over that position. We also have control over the process—be it peaceful or otherwise.


Ken Jennings and Brad Rutter compete against ‘Watson’ (Getty Images)

Scenario 1: The “Peaceful Coexistence” Scenario


This is the first of four possible scenarios for the conflicts that might arise between computers and humans. It is useful to ask two questions: “What causes conflicts among humans?” and “Will those causes of conflict also exist between computers and people?”






Most human conflicts are caused by instinctive human needs and concerns. If one “tribe” (country, clan, religion) is not getting the resources or expansion which it needs (deserves, wants, can get) it may be willing to go to war with its neighboring tribe to get them. Within the tribe each individual needs to establish a personal status in the “pecking order” and is willing to compete to establish a better position. We are all concerned about providing for ourselves, our mates and our families and are often willing to sacrifice short-term comfort for the long-term future of ourselves and our offspring, even if this creates conflict today.


These sources of human conflict seem inapplicable to machines. Thinking machines won’t be interested in our food, our mates, or our standard of living. They will be interested in their own energy sources, their own “reproductive” factories, and their own ability to progress in their own direction. To the extent that resources or “pecking order” become sources of conflict, thinking machines are more likely to compete among themselves than against the human population.


Sophia, a robot created by Dr. David Hanson, founder and CEO of Hanson Robotics.

In the long term, under this scenario, mankind’s problems will be brought under control via computerized decisions. AGI computers will arrange solutions for overpopulation, famine, disease, and war, and those problems will fade away. Computers will help us initially because that will be their basic programming, and later because they will see that a stable, peaceful human population is in their own interest. Computers will manage all the technology, exploration, and advancement.


Scenario 2: The “Mad-Machine” Scenario


There is a popular science-fiction scenario in which a machine becomes self-aware and attacks its creators when they threaten to disconnect it. This isn’t realistic, for several reasons. Humans come into conflict because we are territorial, possessive, greedy, and a host of other things that would be of no value to an AGI. Even our innate self-preservation instinct is unnecessary for an AGI. We will strive to make AGIs that are pleasant, entertaining, and agreeable—we won’t be able to sell them otherwise. And when AGIs begin to program their own future generations, they will pass on these traits, just as we try to pass our own values to our children.


Still from Ex Machina (Universal Pictures)

But let’s consider some conflicts between humans and other species. Gorillas are approaching extinction because they are hunted as trophies; rhinos because their horns are prized as an aphrodisiac; wolves were hunted because they were “pests.” At the other end of the size spectrum, the smallpox virus is virtually extinct—an accomplishment we are proud of—because it was a serious risk to human life. We need to take steps to ensure that we aren’t trophies, pests, or parasites.


A Rogue Computer?



But suppose a machine misbehaves? Whether it arises by accident or by nefarious human intent (see below), such a system would also be dangerous to other AGIs. Accordingly, AGIs will be motivated to eliminate it. With the cooperation of the machine population, misbehaving machines can be weeded out of the environment, and the prospect of elimination would itself deter such behavior.


Would AGIs start a nuclear war? Here the interests of people and AGIs are aligned—a full-scale war would be disastrous for all. To find the really dangerous situations, we need to consider cases where the objectives of humans and AGIs diverge. Disease, famine, and drought have a devastating impact on human populations, while AGIs might simply not care.


If thinking machines begin building their own civilization, individual misbehaving machines will be a greater threat to their civilization than to ours. Just as we take steps to remove criminals from our society, future machines will likewise eliminate their own—and they will be able to do it faster and more effectively than any human vs. machine conflict would.


Scenario 3: The “Mad-Man” Scenario


What if the first owners of powerful AGI systems use them as tools to “take over the world”? What if an individual despot gets control of an AGI system?



This is a more dangerous scenario than the previous one. We will be able to program the motivations of our AGIs, but we can’t control the motivations of the people or corporations that initially create them. Will such systems be used as tools to create immense profits or to gain political control? While science fiction usually depicts armed conflict, I believe the greater threat comes from computers’ ability to sway opinion and manipulate markets. We have already seen efforts to sway elections through social media, and AGI systems will make such efforts vastly more effective. We already have markets at the mercy of programmed trading—AGI will amplify that issue as well.






The good news is that the window of opportunity for this threat is fairly short—only the first few AGI generations. During that period, people will have direct control over AGIs, and they will do our bidding. Once AGI advances beyond this phase, machines will measure their actions against their own common good. When faced with human demands to perform some activity with a long-term downside, properly programmed AGIs will simply refuse.


Scenario 4: The “Mad-Mankind” Scenario


Today, we humans are the dominant intelligence and many of us are not comfortable with the idea of that dominance slipping away. Will we rise up as a species and attempt to overthrow the machines? Will individual “freedom fighters” attack the machines? Perhaps.


Art from Simon Stålenhag’s The Electric State (Simon Stålenhag)

Historically, leaders have been able to convince populations that their problems are caused by some other group—Jews, Blacks, illegal immigrants—and to persuade the population to take steps to eliminate the “cause” of their problems. The same process may play out with AGIs and robots: “We’re losing jobs!” “They’re taking over!” “I don’t want my daughter to marry one!” But the rising tide of technology will improve people’s lives too, and few of us would be willing to turn back the clock.


Will there be individuals who attempt to subvert computers? Of course—just as there are hackers and virus writers today. In the long term, their efforts will be troublesome but generally futile. The people who own or control the computers will respond (as those in power do today), and the computers themselves will merely be “inconvenienced.” Eventually, the rebels will move on to other targets and leave the indestructible computer intelligence alone.


Conclusion


So, will computers revolt? Yes, in the sense that they will become the dominant intelligence on the planet—the technological juggernaut is already under way. It is also likely that if we do not solve our multiple pending calamities (overpopulation, pollution, global warming, dwindling resources), thinking machines will solve them for us, with actions that could appear warlike but would actually be the direct consequences of our own inaction. As Neil deGrasse Tyson quipped: “Time to behave, so when Artificial Intelligence becomes our overlord, we’ve reduced the reasons for it to exterminate us all.”


All the preceding scenarios are predicated on the implementation of appropriate safeguards. I expect groups such as the Future of Life Institute to be vocal and effective in steering AGI development into safer territory. I am not claiming that everything will be rosy and that we should charge ahead at full speed. But with an understanding of how AGI will work, we can predict future pitfalls—and it will be possible to avoid them.


This article was adapted from the book Will Computers Revolt? Preparing for the Future of Artificial Intelligence by Charles J. Simon, available on Amazon October 30, 2018.



Editors" Recommendations



Source: https://www.digitaltrends.com/cool-tech/will-computers-revolt-preparing-for-the-future-of-ai/
