Extremists almost certain to use deepfake hoaxes in coming year: analysis
Global News
A report by the federal Integrated Terrorism Assessment Centre warns that visual trickery, known as a deepfake, poses 'a persistent threat to public safety.'
Violent extremists who lack the means to carry out an attack in Canada could compensate by perpetrating hoaxes with the help of artificial intelligence, says a newly released analysis.
The May report by the federal Integrated Terrorism Assessment Centre, obtained through the Access to Information Act, warns that such visual trickery, known as a deepfake, poses “a persistent threat to public safety.”
The assessment centre’s report was prompted by an image of dark smoke rising near the U.S. Pentagon that appeared May 22 on social media, causing stocks to drop temporarily. Officials confirmed there was no emergency.
Synthetic images, video and audio are becoming easier to generate through applications driven by artificial intelligence, allowing people to spread false information and sow confusion.
The centre, which employs members of the security and intelligence community, predicted threat actors would “almost certainly” create deepfake images depicting Canadian interests in the coming year, given the available tools and prevalence of misinformation and disinformation.
Rapid Response Mechanism Canada, a federal unit that tracks foreign information manipulation and interference, recently highlighted such an episode, saying it likely originated with the Chinese government.
The foreign operation, which began in August, used a network of new or hijacked social media accounts to post comments and videos featuring a popular Chinese-speaking figure in Canada, content that called into question the political and ethical standards of various MPs, RRM Canada said.
The terrorism assessment centre analysis says extremists could use deepfakes to advocate for violence, promote specific narratives, cause panic, tarnish reputations, and erode trust in government and societal institutions.