![AI could have catastrophic consequences — is Canada ready?](https://i.cbc.ca/1.7145806.1710538500!/cpImage/httpImage/image.jpg_gen/derivatives/16x9_620/que-ai-conference-trudeau-20230928.jpg)
AI could have catastrophic consequences — is Canada ready?
CBC
Nations — Canada included — are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.
In a worst-case scenario, power-seeking superhuman AI systems could escape their creators' control and pose an "extinction-level" threat to humanity, AI researchers wrote in a report commissioned by the U.S. Department of State entitled Defence in Depth: An Action Plan to Increase the Safety and Security of Advanced AI.
The department insists the views the authors express in the report do not reflect those of the U.S. government.
But the report's message is bringing the Canadian government's actions to date on AI safety and regulation back into the spotlight — and one Conservative MP is warning the government's proposed Artificial Intelligence and Data Act is already out of date.
The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.
The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economic and strategically relevant domains.
While no AGI systems exist to date, many AI researchers believe they are not far off.
"There is evidence to suggest that as advanced AI approaches AGI-like levels of human and superhuman general capability, it may become effectively uncontrollable. Specifically, in the absence of countermeasures, a highly capable AI system may engage in so-called power seeking behaviours," the authors wrote, adding that these behaviours could include strategies to prevent the AI itself from being shut off or having its goals modified.
In a worst-case scenario, the authors warn that such a loss of control "could pose an extinction-level threat to the human species."
"There's this risk that these systems start to get essentially dangerously creative. They're able to invent dangerously creative strategies that achieve their programmed objectives while having very harmful side effects. So that's kind of the risk we're looking at with loss of control," Gladstone AI CEO Jeremie Harris, one of the authors of the report, said Thursday in an interview with CBC's Power & Politics.
The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.
"One example is cyber risk," Harris told P&P host David Cochrane. "We're already seeing, for example, autonomous agents. You can go to one of these systems now and ask,... 'Hey, I want you to build an app for me, right?' That's an amazing thing. It's basically automating software engineering. This entire industry. That's a wicked good thing.
"But imagine the same system ... you're asking it to carry out a massive distributed denial of service attack or some other cyber attack. The barrier to entry for some of these very powerful optimization applications drops, and the destructive footprint of malicious actors who use these systems increases rapidly as they get more powerful."
Harris warned that the misuse of advanced AI systems could extend into the realm of weapons of mass destruction, including biological and chemical weapons.