OpenAI private study finds A.I. in education to be a major risk in India but experts disagree
The Hindu
Indian policymakers specialised in artificial intelligence (A.I.), who were surveyed by tech giant OpenAI on A.I. risk perceptions, broadly said that threats to education from A.I. are a uniquely high area of concern in India compared to other countries.
However, experts within government, industry, and academia told The Hindu that they disagreed with many findings of the OpenAI risk perceptions study. They said that A.I. threats to education are overblown and misplaced, and that such concerns fail to recognise that the benefits greatly outweigh the dangers in India.
OpenAI’s private research, conducted between September and December 2023 through surveys and expert interviews with a few dozen policymakers in five countries, found that “Education risks (e.g., students over-relying on AI tools at the expense of critical thinking skills), were viewed as least risky,” but “India is a notable exception: Indian respondents ranked risks to education as the fifth priority area of concern, greater than geopolitical risks or the alignment problem.”
No explanation was given in the OpenAI study for why Indian policymakers found A.I. in education risks to be of particularly high concern. OpenAI did not respond to multiple requests for comment for this article.
The OpenAI study, which The Hindu exclusively obtained, focused on four broad categories: benefits and risks from A.I., pace of A.I. development, AGI (Artificial General Intelligence) and existential risks, and A.I. risk management. The study implicitly focused on cutting-edge generative A.I. use cases, such as tools that generate new text, images, and videos, rather than broader uses of artificial intelligence that have existed for many years.
OpenAI, the largest and most popular generative A.I. company in the world, found in the study that the greatest dangers from the technology came from “‘A.I. misuse/malicious use’ by bad actors and ‘economic risks’ (like job displacement due to automation),” according to policymakers surveyed in five different countries: India, Japan, Taiwan, U.K., and the U.S.
On the other hand, OpenAI’s risk perceptions study, which was not released publicly, found that “advanced research and discovery and health advancements” were identified by survey respondents from all countries as the most beneficial applications of A.I. over the next five years.