    OpenAI introduces GPT-4o model, promising real-time conversation

    2024.05.14 | exchangesranking
    OpenAI announced a new model, GPT-4o, which will soon allow real-time conversation with an AI assistant.

    In a May 13 demo, OpenAI members showed that the model can provide breathing feedback, tell a story, and help solve a math problem, among other applications.

    Head of Frontiers Research Mark Chen noted that, although users could previously access Voice Mode, the new model allows interruptions, no longer has a multi-second delay, and can recognize and communicate in various emotional styles.

    OpenAI CEO Sam Altman commented on the update in a separate blog post, calling it the “best computer interface I’ve ever used,” adding that it “feels like AI from the movies.”

    He said:

    “Getting to human-level response times and expressiveness turns out to be a big change.”

    In addition to improved text, vision, and audio capabilities, GPT-4o is faster than GPT-4 while offering the same level of intelligence.

    Full availability pending

    Initially, GPT-4o will have limited features, but the model can already understand and discuss images “much better than any existing model.” In one example, OpenAI suggested that the model can examine a menu and provide translations, context, and recommendations.

    Each of the company’s subscription tiers includes different access limits. Starting today, ChatGPT Free users can access the feature with usage limits. ChatGPT Plus and Team users can also access GPT-4o, with usage limits up to five times greater.

    The company also plans to extend the feature to Enterprise users later with “even higher limits.”

    OpenAI will introduce the updated “Voice Mode” in the near future. It plans to release an alpha in the coming weeks, with early access for Plus users.

    Competitive AI sector

    OpenAI’s updates follow other upgrades from competing companies.

    In March, Anthropic released an upgrade to Claude that it called superior to OpenAI’s GPT-4. Meta, meanwhile, announced Llama 3 with an improved parameter count in April.

    More industry developments are on the horizon. Google is set to host its I/O conference on May 14, with AI featuring prominently in several keynotes. Apple is expected to announce iOS 18 in June, which is likely to include various new AI features.
