OpenAI unveils newest AI model, GPT-4o
CNN
OpenAI on Monday announced its latest artificial intelligence large language model that it says will be easier and more intuitive to use.

The new model, called GPT-4o, is an update from the company’s previous GPT-4 model, which launched just over a year ago. The model will be available to unpaid customers, meaning anyone will have access to OpenAI’s most advanced technology through ChatGPT.

GPT-4o will enable ChatGPT to interact using text, voice and so-called vision, meaning it can view screenshots, photos, documents or charts uploaded by users and have a conversation about them. OpenAI Chief Technology Officer Mira Murati said ChatGPT will now also have memory capabilities, meaning it can learn from previous conversations with users, and can perform real-time translation.

“This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said during a live demonstration from the company’s San Francisco headquarters. “This interaction becomes much more natural and far, far easier.”

The new release comes as OpenAI seeks to stay ahead of the growing competition in the AI arms race. Rivals including Google and Meta have been working to build increasingly powerful large language models that can be used to bring AI tools to their various products.

Meanwhile, the latest GPT release could be a boon to Microsoft, which has invested billions of dollars in OpenAI to embed its AI technology into Microsoft’s own products.

OpenAI executives demonstrated a spoken conversation with ChatGPT to get real-time instructions for solving a math problem, to tell a bedtime story and to get coding advice. ChatGPT was able to speak in a natural, human-sounding voice, as well as a robot voice — and even sang part of one response. The tool was also able to look at an image of a chart and discuss it.