Tech giants pledge AI safety commitments — including a ‘kill switch’

A slew of major tech companies, including Microsoft, Amazon, and OpenAI, signed up Tuesday to a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit.

The agreement will see companies from countries including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers will each publish safety frameworks laying out how they’ll measure the risks of their frontier models, such as the potential for bad actors to misuse the technology.

These frameworks will include “red lines” for the tech firms, defining the kinds of risks associated with frontier AI systems that would be considered “intolerable.” These risks include, but aren’t limited to, automated cyberattacks and the threat of bioweapons.

In such extreme circumstances, the companies say they will implement a “kill switch,” ceasing development of their AI models if they can’t guarantee mitigation of these risks.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the U.K.’s prime minister, said in a statement Tuesday.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.

The pact agreed Tuesday expands on a previous set of commitments made by companies involved in the development of generative AI software at the U.K.’s AI Safety Summit in Bletchley Park, England, last November.

The companies have agreed to take input on these thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned AI summit — the AI Action Summit in France — in early 2025.

The commitments agreed Tuesday only apply to so-called “frontier” models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

Ever since ChatGPT was first introduced to the world in November 2022, regulators and tech leaders have become increasingly worried about the risks surrounding advanced AI systems capable of generating text and visual content on par with, or better than, humans.

The European Union has sought to clamp down on unfettered AI development with the creation of its AI Act, which was approved by the EU Council on Tuesday.

The U.K., however, hasn’t proposed formal laws for AI, opting instead for a “light-touch” approach to AI regulation in which existing regulators apply current laws to the technology.

The government recently said it would consider legislating for frontier models at some point in the future but hasn’t committed to a timeline for introducing formal laws.
