AI’s Cassandra moment
The Hindu
AI systems may not be plotting to incinerate humanity, but they are mushrooming at a time when globalisation has withered, and corporations, not countries, are poised to control technological advances and neural networks.
Nobel laureates are exceptional scientists, but Geoffrey Hinton, the co-winner of this year’s Nobel Prize for Physics, is particularly so. Few laureates have expressed regret over the consequences of their own prize-winning work, and none has done so before winning the coveted prize.
In May 2023, Hinton, a pioneer of deep learning who has nurtured talented researchers in computer science and Artificial Intelligence (AI), quit his role at Google. He did so, he told The New York Times, to be able to speak more freely about the “dangers” posed by AI, adding that a part of him “regrets his life’s work”. Developments built on the ideas he pioneered enable today’s learning machines to drive cars, write news reports, produce deepfakes, and take aim at professions that once seemed immune to automation.
After lying dormant for decades, neural networks had, in his view, suddenly become “a new and better form of intelligence”. He reckons it would not be too much of a leap to expect AI systems to soon create their own “sub-goals” that prioritise their own expansion. Moreover, AI machines can almost instantly “teach” and transmit their entire knowledge to other connected machines, a feat that is slower and more error-prone in the animal brain. He expressed concern that AI could fall into the “wrong hands” and believes that Russian President Vladimir Putin would have little compunction in weaponising AI against Ukraine.
Whether experts saw AI as apocalyptic was a matter of being “optimistic or pessimistic,” he told MIT Technology Review, but there was near-consensus among those who understood these developments that AI presented a form of learning superior to that of people.
Ilya Sutskever, who completed his doctoral studies under Hinton, mirrored his mentor’s concerns. As Chief Scientist of OpenAI, the developer of ChatGPT, Sutskever voted to fire Sam Altman as the company’s CEO last November. The coup failed, and ChatGPT lives on in Microsoft’s stable. OpenAI’s foundational goal was to build “safe and responsible AI”, and Sutskever, according to media reports, felt that the company was prioritising “profitability” over this original mission. Coincidentally, on the day the Physics Nobel was announced, Hinton said he was “particularly proud of the fact that one of my students (Sutskever) fired Sam Altman”.
Should Hinton’s assessment of the dangers of AI carry greater weight than, say, that of businessman Elon Musk, who has also spoken of AI as a “risk to humanity”? Can a scientific authority always be relied upon to do the right thing?