
ChatGPT finally offers an update for the advanced voice mode

A screenshot from the OpenAI Spring Update showing three representatives on stage in front of a screen. (Image: OpenAI)

OpenAI first announced ChatGPT's advanced voice mode in May during its Spring Update, and things have been quiet since the reveal. Now, in a new post on X (formerly Twitter), the company has provided an update on the situation and indicated when the feature will finally be generally available.

According to the announcement, the general launch will not take place until "this fall," with OpenAI making it clear that "the exact timeline depends on compliance with our high safety and reliability standards."

However, a smaller alpha version will be released in late July. The post acknowledges that this is later than planned, explaining: "We had planned to make the alpha version available to a small group of ChatGPT Plus users in late June, but need another month to reach our launch threshold."

We're sharing an update on the advanced voice mode that we demonstrated during our Spring Update and that we continue to be very excited about:

We had planned to roll this out as an alpha version to a small group of ChatGPT Plus users in late June, but we still need another month to reach our launch threshold. …

– OpenAI (@OpenAI) June 25, 2024

Some will be disappointed that the delay is now official, especially since the rollout was supposed to happen "in the coming weeks," as stated in May. One X user complained that OpenAI had enticed people into signing up for the paid ChatGPT Plus subscription even though the feature would take months to arrive. Still, official confirmation is welcome for those who have been waiting.

Meanwhile, the post states that OpenAI continues to make improvements to the overall system: “For example, we are improving the model’s ability to detect and reject certain content. We are also working to improve the user experience and prepare our infrastructure to scale to millions of users while maintaining real-time responses. We are also working to roll out the new video and screen sharing features we demoed separately and will keep you updated on that schedule.”

The advanced voice mode made a big impression when it was unveiled in May, showing near-instant, human-like responses that included emotions such as laughter. The demo even allowed the user to interrupt the AI mid-sentence while maintaining the continuity of the conversation.
