OpenAI reveals new GPT-4o model  

OpenAI revealed its latest artificial intelligence (AI) model, GPT-4o, during a live demonstration Monday.  

GPT-4o provides “GPT-4 level intelligence” but is faster and improves the system’s capabilities across text, vision and audio, OpenAI Chief Technology Officer Mira Murati said.

Murati said the updated model is a “huge step forward with the ease of use” of the system. 

“This is really shifting the paradigm into the future of collaboration, where this interaction becomes much more natural and far, far easier,” she said.

The updated model will be available to all users of OpenAI’s ChatGPT AI chatbot, even those using the free version, OpenAI CEO Sam Altman said on the social platform X.  

During the demonstration, OpenAI showed off the capabilities of the updated model, including audio and visual queries.  

For example, the company showed how GPT-4o can translate between two speakers during a real-time conversation, as well as detect a person’s emotions from a selfie.

OpenAI unveiled GPT-4o one day ahead of Google’s annual developer conference scheduled for Tuesday.  

Google revealed its latest AI chatbot, Gemini, in February as the company continues to compete with OpenAI.
