First companies sign up to AI safety standards on eve of Seoul summit

The first 16 companies have signed up to the voluntary artificial intelligence safety standards introduced at the Bletchley Park summit, Rishi Sunak has said on the eve of the event’s follow-up in Seoul.

But the standards have faced criticism for lacking teeth, with signatories committing only to voluntarily “work toward” information sharing, “invest” in cybersecurity and “prioritise” research into societal risks.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Included in the 16 are Zhipu.ai, from China, and the United Arab Emirates’ Technology Innovation Institute. The presence of signatories from countries that have been less willing to bind national champions to safety regulation is a benefit of the lighter touch, the government says.

The UK’s technology secretary, Michelle Donelan, said the Seoul event “really does build on the work that we did at Bletchley and the ‘Bletchley effect’ that we created afterwards. It really had the ripple effect of moving AI and AI safety on to the agenda of many nations. We saw that with nations coming forward with plans to create their own AI safety institutes, for instance.

“And what we’ve achieved in Seoul is we’ve really broadened out the conversation. We’ve got a collection from across the globe, highlighting that this process is really galvanising companies, not just in certain countries but in all areas of the globe to really tackle this issue.”

But the longer the codes remain voluntary, the more risk there is that AI companies will simply ignore them, warned Fran Bennett, the interim director of the Ada Lovelace Institute.

“People thinking and talking about safety and security, that’s all good stuff. So is securing commitments from companies in other nations, particularly China and the UAE. But companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that – that’s problematic.

“It’s great to be thinking about safety and establishing norms, but now you need some teeth to it: you need regulation, and you need some institutions which are able to draw the line from the perspective of the people affected, not of the companies building the things.”


Later on Tuesday, Sunak will co-chair a virtual meeting of world leaders on “innovation and inclusivity” in AI with the South Korean president, Yoon Suk Yeol.
