As AI becomes more human-like, experts warn users must think more critically about its responses
CBC
Tech giant Google has announced upgrades to its artificial intelligence technologies, just a day after rival OpenAI announced similar changes to its own offerings. Both companies are trying to dominate a quickly emerging market in which people can ask questions of computer systems and get answers in the style of a human response.
It's part of a push to make AI systems such as ChatGPT not just faster, but comprehensive enough to answer a query fully on the first attempt, without users having to ask multiple follow-up questions.
On Tuesday, Google demonstrated how AI responses would be merged with some results from its influential search engine. As part of its annual developers conference, Google promised that it would start to use AI to provide summaries to questions and searches, with at least some of them being labelled as AI at the top of the page.
Google's AI-generated summaries are available only in the U.S. for now, but they will be written in conversational language.
Meanwhile, OpenAI's newly announced GPT-4o system will be capable of conversational responses in a more human-like voice.
It gained attention on Monday for being able to interact with users while employing natural conversation with very little delay — at least in demonstration mode. OpenAI researchers showed off ChatGPT's new voice assistant capabilities, including using new vision and voice capabilities to talk a researcher through solving a math equation on a sheet of paper.
At one point, an OpenAI researcher told the chatbot he was in a great mood because he was demonstrating "how useful and amazing you are."
ChatGPT responded: "Oh stop it! You're making me blush!"
"It feels like AI from the movies," OpenAI CEO Sam Altman wrote in a blog post. "Talking to a computer has never felt really natural for me; now it does."
But researchers in the technology and artificial intelligence sector warn that as people get information from AI systems in more user-friendly ways, they also have to be careful to watch for inaccurate or misleading responses to their queries.
And because companies want to protect the trade secrets behind how their systems work, AI tools often don't disclose how they arrived at a conclusion, and they tend to show less raw source data than traditional search engines do.
This means, according to Richard Lachman, they can be prone to providing answers that look or sound confident even when they're incorrect.
The associate professor of Digital Media at Toronto Metropolitan University's RTA School of Media says these changes are a response to what consumers demand when using a search engine: a quick, definitive answer when they need a piece of information.
"We're not necessarily looking for 10 websites; we want an answer to a question. And this can do that," said Lachman.