In a post on X (formerly Twitter), OpenAI shared an update about the advanced Voice Mode feature. As demonstrated last month, the feature is powered by GPT-4o and allows ChatGPT to respond to prompts and queries verbally in real time. The chatbot was also shown to modulate its voice, express emotions, and even sing. This feature was one of the central announcements of the OpenAI event.
However, the company has now confirmed that it will not arrive before July. OpenAI cited several reasons for the delay: it is still improving the feature's ability to detect and refuse certain content, and it is scaling its infrastructure so that millions of users can access real-time voice responses without any lag.
The current plan is to release the feature to a small group of ChatGPT Plus users as part of its alpha testing programme. Based on feedback and learnings from this group, the AI firm then plans to roll out the feature to all Plus users in the fall, although it has not shared an exact launch date.
The company also acknowledged delays in deploying other features showcased at the event. For instance, ChatGPT's ability to see the user's surroundings through a video feed and interact with them in real time, as well as its screen-sharing capabilities, do not have a release timeline either. OpenAI said it will keep users posted.
Separately, the AI firm launched its ChatGPT app for macOS for all users on Tuesday. The app comes with new features such as shortcut keys for quick launch, the ability to load screenshots directly into the app, and support for the standard Voice Mode.