In a book that’s become the darling of many a Silicon Valley billionaire — Sapiens: A Brief History of Humankind — the historian Yuval Noah Harari paints a picture of humanity’s inexorable march towards ever greater forms of collectivization. From the tribal clans of pre-history, people gathered to create city-states, then nations, and finally empires. While certain recent political developments, namely Brexit and the nativism of Donald Trump, would seem to cut against this trajectory, another luminary of academia has now added his voice to the chorus calling for stronger forms of world government. Far from citing ancient historical trends, though, Stephen Hawking points to artificial intelligence as a defining reason for needing stronger forms of globally enforced cooperation.
It’s facile to dismiss Stephen Hawking as just another scientist poking his nose into problems more germane to politics than physics, or even to suggest he is being alarmist, as many AI experts have already done. It’s worth taking his point seriously, though, and weighing the evidence to see if there’s any merit to the cautionary note he sounds.
Let’s first take the case made by the naysayers who claim we are a long way from AI posing any real threat to humanity. These are often the same people who suggest Isaac Asimov’s three laws of robotics are sufficient to ensure ethical behavior from machines, never mind that the whole thrust of Asimov’s stories is to demonstrate how things can go terribly wrong despite the three laws. Leaving that aside, it’s exceedingly difficult to keep up with the breakneck pace of research in AI and robotics. One may be an expert in a small domain of AI or robotics, say pneumatic actuators, and have no clue what is going on in reinforcement learning. This tends to be the rule rather than the exception among experts, since their very expertise tends to confine them to a narrow field of endeavor.