Why OpenAI built its o1 model: they think they’re building God

OpenAI’s new model, called o1, appears to think and ponder as you use it. But is it thinking? Or pondering? And what does it mean if it is? Would that make it worth the risks, which appear to be both greater and more plausible than ever? How do you balance the risks of destroying humanity against the possibility of improving it? This is the thing about talking about artificial intelligence: it has a nasty penchant for getting all existential on you.

On this episode of The Vergecast, we get all existential about AI. The Verge’s Kylie Robison joins the show to discuss why OpenAI built o1, why it’s launching the way it is, what to make of the folks who are worried about what they’re seeing from the model, and how we should think about this moment in AI as companies pivot toward trying to build “agents” that can do more and more on our behalf. (We recorded this just before Sam Altman published his recent blog post on The Intelligence Age, but it all feels pretty timely.)

Finally, we answer a question on the Vergecast Hotline (call 866-VERGE11, or email [email protected]!) about an issue everybody has: what do you do with all the stuff that accumulates on your devices?

If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with OpenAI:

And on TikTok / Google / Trump:

And a few tools for cleaning up your devices:
