The OpenAI event had some truly "magical" moments, just as Sam Altman had teased earlier. The AI chatbot's ability to understand emotions in real-time, its natural voice, and its native vision capabilities will unlock a ton of new features. The new chatbot will be rolling out in the coming weeks.
There are also reports that Apple is partnering with OpenAI to develop an AI chatbot for iPhone and other devices. After today's demo, almost all Apple users would love Siri to get some of these ChatGPT features.
That's all for tonight!
The enhanced ChatGPT, powered by the new GPT-4o model, is also capable of detecting emotion by looking at your face through the camera. During the event, an OpenAI employee showed a smiling face, and the AI was able to identify the person's mood and even went on to engage in a conversation by asking, "Want to share the reason for your good vibes?"
ChatGPT's ability to understand voice and translate in real-time might soon make Google Translate a thing of the past. During a demo, the OpenAI team showed off ChatGPT's live translation tool. The AI bot captured words spoken in Italian by Mira Murati and converted them to English, then took replies in English and translated them to Italian, all in real-time with a natural voice and emotions.
In another demo, the new ChatGPT was able to view and analyze code as it was being written. It also spotted coding errors in real time, suggesting that the ChatGPT desktop app's vision capabilities include the ability to view the screen.
One of the new features of GPT-4o-powered ChatGPT is native vision: essentially, the ability to "see" through the camera on your phone, much like Google Lens. However, the capabilities are truly next-level. In a demo, the OpenAI staff showed ChatGPT an equation they'd just written on a piece of paper and asked the AI to help solve the problem. It offered advice and talked them through it step-by-step.
With GPT-4o, ChatGPT has not only become emotional but also sounds more natural and, if needed, dramatic. In one of the demos, ChatGPT narrated a bedtime story in different voices, including a robotic sound, a singing voice, and a tone of intense drama.
One of the biggest upgrades coming to ChatGPT with the GPT-4o model is live speech. The model is capable of perceiving your emotion in real-time, and you can interrupt it mid-way rather than waiting for it to complete its response. In the demo of this feature, the OpenAI staffer breathed heavily into the voice assistant, and it was able to offer advice on improving breathing techniques. It even warned him, "You're not a vacuum cleaner," showing its ability to pick up on emotions in real-time.
OpenAI is also bringing the new GPT-4o model to its API. Developers can start building with GPT-4o starting today. As per OpenAI, the new model is "2x faster" and "50% cheaper," with "5x higher rate limits" compared to GPT-4 Turbo.
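For developers curious what building with the new model looks like, here is a minimal sketch of a Chat Completions request body targeting GPT-4o. The model identifier `"gpt-4o"` and the standard OpenAI endpoint are assumptions based on OpenAI's announcement; check the official API reference before using them.

```python
import json

# Sketch of a Chat Completions request body for the new model.
# The model name "gpt-4o" is assumed from OpenAI's announcement.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Translate 'good morning' to Italian."}
    ],
}

# In practice, this body is POSTed as JSON to
# https://api.openai.com/v1/chat/completions with an
# "Authorization: Bearer <your API key>" header.
body = json.dumps(payload)
print(body)
```

The request shape is the same one used for GPT-4 Turbo, so existing integrations should only need to swap the model name.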
Mira Murati says the biggest benefit for paid users—those subscribed to ChatGPT Plus—will be five times more requests per day to GPT-4o than the free plan. However, this raises the question of whether ChatGPT Plus is worth $20 per month given the upgrades coming to the free plan.
GPT-4o is free for all users and brings "GPT-4 class intelligence," says Murati. It reasons across text, voice, and vision, boasting advanced multi-modal capabilities. Starting today, free users can use GPTs (custom ChatGPTs) from the GPT Store. Users will also be able to upload documents and images to engage in conversations with ChatGPT.
OpenAI's Spring Update event is now live, with Mira Murati, the company's CTO, delivering the keynote. The event will focus on the desktop app for ChatGPT. The AI chatbot is also getting a refreshed UI today. But the main highlight will be OpenAI's new "flagship model," called GPT-4o. It offers GPT-4 level "intelligence" to all users, including free users of ChatGPT.
OpenAI's event today comes just before Google's annual developer conference, Google I/O 2024, which is set to take place tomorrow starting at 10:30pm IST, with major AI announcements on the way. With some "magic-like" features to be revealed today for ChatGPT, OpenAI might steal Google's thunder in the ongoing AI race.
Over the past couple of weeks, several reports have mentioned that OpenAI is building an AI search product to take on Google's dominance in the search domain. However, OpenAI CEO Sam Altman has confirmed that the event will not be about "a search engine" or GPT-5, the much-awaited successor to GPT-4. In a tweet he said, "We've been hard at work on some new stuff we think people will love! feels like magic to me."
As per OpenAI, the event is only "to demo some ChatGPT and GPT-4 updates." We are expecting new features and enhancements for ChatGPT and GPT-4, for both personal users and enterprise clients of the AI startup.
In minutes from now, OpenAI will announce some fresh updates for its AI chatbot, ChatGPT, and for GPT-4, the large language model that powers the popular chatbot. For those in India, the event will start at 10:30 pm. You can catch all the live updates here in our blog or watch the live stream on OpenAI's official YouTube account.