Trying to tame AI: Seoul summit flags hurdles to regulation

The Bletchley Park artificial intelligence summit in 2023 was a landmark event in AI regulation simply by virtue of its existence.

Between the event’s announcement and its first day, the mainstream conversation had shifted from light bafflement to a general agreement that AI regulation might be worth discussing.

However, the task for its follow-up, held at a research park on the outskirts of Seoul this week, is harder: can the UK and South Korea show that governments are moving from talking about AI regulation to actually delivering it?

At the end of the Seoul summit, the big achievement the UK was touting was the creation of a global network of AI safety institutes, building on the trailblazing British institute founded after the last meeting.

The technology secretary, Michelle Donelan, attributed the new institutes to the “Bletchley effect” in action, and announced plans to lead a system whereby regulators in the US, Canada, Britain, France, Japan, South Korea, Australia, Singapore and the EU share information about AI models, harms and safety incidents.

Michelle Donelan, the UK technology minister, said the emerging global network of safety institutes was down to progress made at the Bletchley Park summit last year. Photograph: Lee Jin-man/AP

“Two years ago, governments were being briefed about AI almost entirely by the private sector and academics, but they had no capacity themselves to really develop their own base of evidence,” said Jack Clark, the co-founder and head of policy at the AI lab Anthropic. In Seoul, “we heard from the UK safety institute: they’ve done tests on a range of models, including Anthropic’s, and they had anonymised results for a range of misuses. They also discussed how they built their own jailbreaking attacks, to break the safety systems on all of these models.”

That success, Clark said, had left him “mildly more optimistic” than he was in the year leading up to Bletchley. But the power of the new safety institutes is limited to observation and reporting, running the risk that they are forced to simply sit by and watch as AI harms run rampant. Even so, Clark argued, “there is tremendous power in embarrassing people and embarrassing companies”.

“You can be a safety institute, and you can just test publicly available models. And if you find really inconvenient things about them, you can publish that – same as what happens in academia today. What you see is that companies take very significant actions in response to that. No one likes being in last place on the leaderboard.”

Jack Clark, the co-founder and head of policy at Anthropic, said the toothless safety institutes have ‘tremendous power’ to embarrass firms. Photograph: Anthony Wallace/AFP/Getty Images

Even the act of observing can itself change things. The EU and US, for instance, have set “compute” thresholds, defining who comes under the gaze of their safety institutes by how much computing power is corralled to build a “frontier” model. In turn, those thresholds have started to become a stark dividing line: it is better to be marginally under the threshold and avoid the faff of working with a regulator than to be marginally over and create a lot of extra work, one founder said. In the US, that limit is high enough that only the most well-heeled companies can afford to break it, but the EU’s lower limit has brought hundreds of companies under its institute’s aegis.

Nonetheless, IBM’s chief privacy and trust officer, Christina Montgomery, said: “Compute thresholds are still a thing, because it’s a very clear line. It is very hard to come up with what the other capacities are. But that’s going to change and evolve quickly, and it should, because given all the new techniques that are popping up around how to tune and train models, it doesn’t matter how large the model is.” Instead, she suggested, governments will start to focus on other aspects of AI systems, such as the number of users that are exposed to the model.

Andrew Ng, the former boss of Google Brain, argued for the applications of AI to be targeted by regulation, rather than the AI systems themselves. Photograph: Anthony Wallace/AFP/Getty Images

The Seoul summit also exposed a more fundamental divide: should regulation target AI systems themselves, or only the uses to which they are put? Former Google Brain boss Andrew Ng made the case for the latter, arguing that regulating AI makes as much sense as regulating “electric motors”: “It’s very difficult to say, ‘How do we make an electric motor safe?’ without just building very very small electric motors.”

Ng’s point was echoed by Janil Puthucheary, the Singaporean senior minister for communications, information and health. “Largely, the use of AI today is not unregulated. And the public is not unprotected,” he said. “If you are applying AI within the healthcare sector, all the regulatory tools of the healthcare sector have to be brought to bear to the risks. If it was then applied in the aviation industry, we already have a mechanism and a platform to regulate that risk.”

But focusing on applications rather than the underlying AI systems risks missing what some think of as the greatest AI safety issue of all: the chance that a “superintelligent” AI system could lead to the end of civilisation. The Massachusetts Institute of Technology professor Max Tegmark compared the release of GPT-4 to the “Fermi moment”, the creation of the first nuclear reactor, which all but guaranteed an atomic bomb would not be far behind, and said the analogous risk from powerful AI systems needed to remain front of mind.

Donelan defended the shift in focus. “One of the key pillars today is inclusivity, which can mean many things, but it should also mean inclusivity of all the potential risks,” she said. “That is something that we are constantly trying to achieve.”

For Clark, that came as cold comfort. “I would just say that the more things you tried to do, the less likely it is that you’re going to succeed at them,” he said. “If you end up with a kitchen-sink approach, then you’re going to really dilute the ability to get anything done.”
