The Artificial Intelligence Revolution: Will Humanity Be Immortal or Go Extinct? (I)

Time: 2019-8-12

We face possibly the most difficult problem ever, with an unknown amount of time to solve it, and quite possibly the entire future of humanity depends on it. – Nick Bostrom

Welcome to the first installment of Part 2 of this series.

Part 1 began with a discussion of Artificial Narrow Intelligence (ANI), AI that specializes in one narrow task such as navigation or chess, and how it is all around us in today's world. We then examined why it is such an enormous challenge to get from ANI to Artificial General Intelligence (AGI), AI that is at least as intellectually capable as a human across the board, and discussed why the exponential pace of technological progress we have seen in the past suggests that AGI may not be as far away as it seems. Part 1 ended with the fact that once our machines reach human-level intelligence, they might immediately do this:

That left us staring at the screen, confronting the intense concept of Artificial Superintelligence (ASI), AI smarter than any human, and trying to figure out which emotion we ought to feel as we think about this problem.

Before we go any further, let's remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, the first thing people imagine when they picture a superintelligent computer is one that is as smart as a human but thinks much, much faster: a machine that thinks a million times faster than a person and could work through in five minutes a calculation that would take a human ten years.
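That ten-years-in-five-minutes figure follows directly from the speed ratio. As a quick back-of-the-envelope check, using nothing but the round numbers quoted above:

```python
# Rough check of "ten years of human calculation in five minutes",
# assuming a flat million-fold speedup (the figure used in the text).
SECONDS_PER_YEAR = 365 * 24 * 3600

human_task_seconds = 10 * SECONDS_PER_YEAR  # a decade of human work
speedup = 1_000_000                         # "a million times faster"

machine_minutes = human_task_seconds / speedup / 60
print(f"{machine_minutes:.1f} minutes")  # ~5.3 minutes
```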

That sounds impressive, and an ASI would indeed think much faster than any human, but the real separator would be its advantage in intelligence quality, which is something entirely different. What makes humans so much more intellectually capable than chimpanzees is not a difference in thinking speed; it is that the human brain contains a number of sophisticated cognitive modules that enable things like complex linguistic representation, long-term planning, and abstract reasoning, which chimpanzee brains simply lack. Speeding up a chimpanzee's brain by thousands of times would not bring it to our level: even given a decade, it could not figure out how to use a set of custom tools to assemble an intricate model, something a human could accomplish in a few hours. No matter how much time a chimpanzee spends trying, certain human cognitive functions will remain forever out of its reach.

And it is not just that a chimpanzee can't do what we do; its brain is unable even to grasp that those worlds exist. A chimpanzee can become familiar with what a human is and what a skyscraper is, but it will never be able to understand that the skyscraper was built by humans. In its world, anything that huge is simply part of nature, and not only is building a skyscraper beyond it, so is the realization that anyone could build a skyscraper. That is the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity as a staircase:

To absorb how big a deal a superintelligent machine would be, imagine humans standing on a light blue step of that staircase, and a machine on the dark green step just two steps above us. This machine would be only slightly superintelligent, but its cognitive advantage over us would be as vast as the chimp-to-human gap we just described. And just as a chimpanzee can never comprehend that skyscrapers can be built, we would never be able to comprehend the things a machine on the dark green step can do, even if it tried to explain them to us, let alone do them ourselves. And that's only two steps above us. A machine on the second-to-highest step of the staircase would be to us as we are to ants: it could spend years trying to teach us the simplest sliver of what it knows, and the attempt would be hopeless.

But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion, where the smarter a machine gets, the faster it can improve its own intelligence, until it begins to soar upward, a machine might take years to climb from the chimpanzee step to the one above it, but perhaps only hours to jump to the dark green step two above us, and by the time it is ten steps above us, it might be jumping four steps per second. That's why we need to realize that, possibly very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on Earth with something far up that staircase (and possibly a million times more capable than us).
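The runaway dynamic described above is a feedback loop: capability feeds the rate of improvement. Here is a minimal toy simulation of that loop; every number in it (the doubling per step, the 10-year baseline) is an invented illustration, not a claim about real AI systems:

```python
# Toy model of an "intelligence explosion": the time needed to climb each
# step of the staircase shrinks as capability grows, because the system
# itself does the improving. All parameters are made up for illustration.

def years_to_next_step(capability: float) -> float:
    # Hypothetical assumption: climbing time is inversely
    # proportional to current capability.
    return 10.0 / capability

capability = 1.0  # 1.0 = human level, in arbitrary units
elapsed = 0.0
for step in range(1, 11):
    dt = years_to_next_step(capability)
    elapsed += dt
    capability *= 2  # assume each step doubles capability
    print(f"step {step:2d}: {dt:8.4f} yr this step, {elapsed:7.2f} yr total")
```

Run it and the first step takes a decade while the tenth takes about a week, which is the qualitative point of the staircase metaphor: progress that starts glacially can end blindingly fast.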

And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's state very concretely, once and for all, that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans give birth to an ASI machine, we will be dramatically stomping on evolution. Or maybe this is part of evolution: maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it is capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide, game-changing explosion that determines a new future for all living things:

For reasons we'll discuss later, a huge part of the scientific community believes that the question is not whether we'll hit that tripwire, but when. Kind of crazy information.

So what should we do?

No one in the world, least of all me, can tell you what will happen when we hit the tripwire. But Nick Bostrom, a philosopher at Oxford University and a leading thinker on artificial intelligence, believes we can boil all the possible outcomes down into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably fall off the balance beam of existence and land on extinction.

"All species eventually go extinct" has been almost as reliable a rule through history as "all humans eventually die." So far, 99.9% of species have fallen off the balance beam, and it seems clear that if a species keeps wobbling along the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden asteroid knocks it off. Bostrom calls extinction an attractor state, a place species teeter toward and from which no species ever returns.

And while most scientists I've come across acknowledge that an ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI's abilities could bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much an attractor state as species extinction: if we manage to get there, we will be impervious to extinction forever, having found a way to defeat death. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes the beam has two sides; it's simply that nothing on Earth has yet been intelligent enough to figure out how to fall off the other side.

If Bostrom and others are right, and from everything I've read it seems they really might be, we have two rather shocking facts to accept:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it is likely to knock the human race off the beam, in one direction or the other.

It may well be that when evolution hits the tripwire, it will permanently end humanity's relationship with the balance beam and create a new world, with or without humans in it.

It seems that the only question any human should currently be asking is: When are we going to hit the tripwire, and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of very smart people have spent decades thinking about it. We'll spend the rest of this post exploring what they've come up with.

Let's start with the first part of the question: When are we going to hit the tripwire?

How long until the first machine reaches superintelligence?

Not surprisingly, opinions vary wildly, and this is a heated debate among scientists and thinkers. Many, such as Professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, and, most famously, inventor and futurist Ray Kurzweil, agree with the chart that machine learning expert Jeremy Howard presented in his TED Talk:

These people believe this is coming soon, that exponential growth is at work, and that machine learning, though only creeping up on us slowly right now, will blow right past us within the next few decades.

But others, such as Microsoft co-founder Paul Allen, research psychologist Gary Marcus, New York University computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil vastly underestimate the magnitude of the challenge facing humanity, and that we are not actually that close to the tripwire.

The Kurzweil camp would counter that the only thing being underestimated is exponential growth, and they compare the doubters to those who looked at the slow-growing internet of 1985 and argued that it would never amount to anything impactful in the near future.

The doubters might counter that the effort needed for each additional step of intellectual progress also grows exponentially harder, which will cancel out the typically exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither side has any grounds for certainty about the timeline and acknowledges both A) that this could absolutely happen in the near future, and B) that there is no guarantee of that; it could also take much longer.

Still others, such as the philosopher Hubert Dreyfus, believe all three of these camps are naive for believing there even is a tripwire, and argue that ASI may in fact never be achieved.

So what do you get when you put all these ideas together?

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for HLMI [high-level machine intelligence] to exist?" It asked them to name an optimistic year (one in which they believe there is a 10% chance we'll have AGI), a realistic guess (a year they believe there is a 50% chance of AGI, i.e., after that year they think it is more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022

Median realistic year (50% likelihood): 2040

Median pessimistic year (90% likelihood): 2075

So the median participant thinks it is more likely than not that we'll have AGI about 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked participants when they thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:

By 2030: 42% of respondents

By 2050: 25% of respondents

By 2100: 20% of respondents

After 2100: 10% of respondents

Never: 2% of respondents

Pretty similar to Müller and Bostrom's results. In Barrat's survey, over two-thirds of participants believe AGI will be here by 2050, and a little under half predict AGI within the next 15 years. Also striking is that only 2% of those polled don't think AGI is part of our future at all.
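As a quick sanity check on the "over two-thirds" claim, here is a tiny script that just accumulates the answer buckets quoted above (the numbers are the survey percentages as reported in the text, nothing more):

```python
# Barrat's AGI survey results, as quoted above (percent of respondents).
buckets = [
    ("by 2030", 42),
    ("by 2050", 25),
    ("by 2100", 20),
    ("after 2100", 10),
    ("never", 2),
]

cumulative = 0
for label, pct in buckets:
    cumulative += pct
    print(f"{label:>10}: {pct:2d}%  (cumulative: {cumulative}%)")

# "by 2050" accumulates to 67%: just over two-thirds of respondents
# expect AGI by 2050, matching the claim in the text.
```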

But AGI is not the tripwire; ASI is. So when do the experts think we'll reach ASI?

Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e., an almost immediate intelligence explosion), and B) within 30 years. The results:

The median answer put a rapid (two-year) AGI-to-ASI transition at only a 10% likelihood, but put a longer transition of 30 years or less at a 75% likelihood.

We don't know from this data how long a transition the median participant would have assigned a 50% likelihood to, but for ballpark purposes, based on the two answers above, let's estimate they'd have said 20 years. So the median opinion, the one right at the center of the world of AI experts, about when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated 20-year transition from AGI to ASI] = 2060.
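Spelled out as arithmetic (remember that the 20-year transition figure is a rough interpolation, not a number anyone was surveyed on):

```python
# Median expert guess for AGI (50% likelihood), from the Müller/Bostrom survey.
median_agi_year = 2040

# Rough midpoint guess for the AGI -> ASI transition, interpolated in the
# text from the "10% within 2 years" and "75% within 30 years" answers.
estimated_transition_years = 20

median_asi_year = median_agi_year + estimated_transition_years
print(median_asi_year)  # 2060
```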

Of course, all of the statistics above are speculative, and they represent only the center of opinion in the AI expert community, but they tell us that a large portion of the people who know this topic best would agree that 2060 is a reasonable estimate for the arrival of potentially world-changing ASI. Only 45 years from now.

Okay, now for the second part of the question: When we hit the tripwire, which side of the balance beam will we fall to?

Superintelligence will yield tremendous power; the critical question for us is:

Who will control that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is all over the board on this question too. The Müller and Bostrom survey asked participants to assign probabilities to the possible impacts of AGI on humanity and found that the mean response rated the outcome as good or extremely good 52% of the time and as bad or extremely bad 31% of the time, with a relatively neutral outcome given a mean probability of only 17%. In other words, the people who know the most about this are pretty sure it will be a huge deal. It is also worth noting that those numbers refer to the advent of AGI; if the question had been about ASI, I imagine the neutral percentage would be even lower.

Before we dive further into the good-and-bad part of the question, let's combine both halves of it, "When will it happen?" and "Will it be good or bad?", into a chart that encompasses the views of most of the relevant experts:

We'll talk more about these main camps in a minute, but first: what's your position? Actually, I know what it is, because it was mine too before I started researching this topic. Some of the reasons most people aren't really thinking about this topic:

  • As mentioned in Part 1, movies have confused things by presenting unrealistic AI scenarios that make us feel AI isn't something to be taken seriously in general. James Barrat compares the situation to how we would react if the CDC issued a serious warning about vampires in our future.
  • Because of a cognitive bias, humans have a hard time believing something is real until they see proof. I'm sure computer scientists in 1988 regularly talked about how big a deal the internet was likely to be, but most people probably didn't think it would change their lives until it actually did. This was partly because computers in 1988 simply couldn't do that kind of thing, so people would look at their computer and think, "Really? That's going to change my life?" Their imagination was limited to what personal experience had taught them a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it is going to be a big deal, but because it hasn't happened yet, and because of our experience with the relatively impotent AI in the world today, it's hard to really believe it will change our lives dramatically. These are the biases experts are up against as they frantically try to get our collective attention.
  • Even if we did believe it: how many times today have you thought about the fact that you will spend most of the rest of eternity not existing? Not many, right? Even though it is a far more intense fact than anything else in your day? That is because our brains are normally focused on the little things of everyday life, no matter how crazy the long-term situation we're part of. It's just how we're wired.

One of the goals of these two posts is to get you out of the "I like to think about other things" camp and into one of the expert camps, even if you end up standing right at the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of differing opinions on this topic, but I quickly noticed that most of them fell somewhere within what I'd call the mainstream view; in particular, over three-quarters of the experts fell into two subcamps within it:

We'll take a deep dive into both of these camps in the middle and final installments of this series. Stay tuned.
