Too Short for a Blog Post, Too Long for a Tweet 503

Here are a few excerpts from a book I recently read, "Life 3.0: Being Human in the Age of Artificial Intelligence," by Max Tegmark.

The AI they had built, nicknamed Prometheus, kept getting more capable. Although its cognitive abilities still lagged far behind those of humans in many areas, for example, social skills, the Omegas had pushed hard to make it extraordinary at one particular task: programming AI systems. They’d deliberately chosen this strategy because they had bought the intelligence explosion argument made by the British mathematician Irving Good back in 1965: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Unfortunately, better AI systems can also be used to find new vulnerabilities and perform more sophisticated hacks. Imagine, for example, that you one day get an unusually personalized “phishing” email attempting to persuade you to divulge personal information. It’s sent from your friend’s account by an AI who’s hacked it and is impersonating her, imitating her writing style based on an analysis of her other sent emails, and including lots of personal information about you from other sources. Might you fall for this? What if the phishing email appears to come from your credit card company and is followed up by a phone call from a friendly human voice that you can’t tell is AI-generated? In the ongoing computer-security arms race between offense and defense, there’s so far little indication that defense is winning.

Scenarios where humans can survive and defeat AIs have been popularized by unrealistic Hollywood movies such as the Terminator series, where the AIs aren’t significantly smarter than humans. When the intelligence differential is large enough, you get not a battle but a slaughter. So far, we humans have driven eight out of eleven elephant species extinct, and killed off the vast majority of the remaining three. If all world governments made a coordinated effort to exterminate the remaining elephants, it would be relatively quick and easy. I think we can confidently rest assured that if a superintelligent AI decides to exterminate humanity, it will be even quicker.

Elon’s stage performance consisted of an hour of fascinating discussion about space exploration, which I think would have made great TV. At the very end, a student asked him an off-topic question about AI. His answer included the phrase “with artificial intelligence, we are summoning the demon,” which became the only thing that most media reported—and generally out of context. It struck me that many journalists were inadvertently doing the exact opposite of what we were trying to accomplish in Puerto Rico. Whereas we wanted to build community consensus by highlighting the common ground, the media had an incentive to highlight the divisions. The more controversy they could report, the greater their Nielsen ratings and ad revenue. Moreover, whereas we wanted to help people from across the spectrum of opinions to come together, get along and understand each other better, media coverage inadvertently made people across the opinion spectrum upset at one another, fueling misunderstandings by publishing only their most provocative-sounding quotes without context.
