AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield

A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives flies through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones’ ability to “maximize lethality and combat tempo”.

While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.

The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces, meanwhile, used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza.

A drone with AI integration used to detect explosive devices in humanitarian de-mining in the Zhytomyr region of Ukraine in 2023. Photograph: Maxym Marusenko/NurPhoto/Shutterstock

Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare, experts say, while making it even more evident how unregulated the nascent field is. The expansion of AI in conflict has shown that national militaries have an immense appetite for the technology, despite how unpredictable and ethically fraught it can be. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world.

The refrain among diplomats and weapons manufacturers is that AI-enabled warfare and autonomous weapons systems have reached their “Oppenheimer moment”, a reference to J Robert Oppenheimer’s development of the atomic bomb during the second world war. Depending on who is invoking the physicist, the phrase is either a triumphant prediction of a new, peaceful era of American hegemony or a grim warning of a horrifically destructive power.

Elbit Systems is developing AI-enabled offensive drones to ‘maximize lethality and combat tempo’ on the battlefield. Photograph: Baz Ratner/Reuters

Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. The flurry of investment and development has also intensified longstanding debates about the future of conflict. As the pace of innovation speeds ahead, autonomous weapons experts warn that these systems are entrenching themselves into militaries and governments around the world in ways that may fundamentally change society’s relationship with technology and war.

Palantir has become involved in AI projects including what it calls the US army’s ‘first AI-defined vehicle’. Photograph: Budrul Chukrut/Sopa Images/Rex/Shutterstock

“There’s a risk that over time we see humans ceding more judgment to machines,” said Paul Scharre, executive vice-president and director of studies at the Center for a New American Security thinktank. “We could look back 15 or 20 years from now and realize we crossed a very significant threshold.”

The AI boom comes for warfare

While the rapid advancements in AI in recent years have created a surge of investment, the move toward increasingly autonomous weapons systems in warfare goes back decades. Those advancements rarely appeared in public discourse, however, and were instead scrutinized by a relatively small group of academics, human rights workers and military strategists.

What has changed, researchers say, is both increased public attention to everything AI and genuine breakthroughs in the technology. Whether a weapon is truly “autonomous” has always been the subject of debate. Experts and researchers say autonomy is better understood as a spectrum rather than a binary, but they generally agree that machines are now able to make more decisions without human input than ever before.


The increasing appetite for combat tools that blend human and machine intelligence has led to an influx of money to companies and government agencies that promise they can make warfare smarter, cheaper and faster.

The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.

Demonstrators protest Google’s contract with Israel to provide facial recognition and other technologies amid the Israel-Hamas war, on 14 December 2023. Photograph: Santiago Mejia/AP

Military demand for increased AI and autonomy has been a boon for tech and defense companies, which have won huge contracts to help develop various weapons projects. Anduril, a company that is developing lethal autonomous attack drones, unmanned fighter jets and underwater vehicles, is reportedly seeking a $12.5bn valuation. Founded by Palmer Luckey – a 31-year-old, pro-Trump tech billionaire who sports Hawaiian shirts and a soul patch – Anduril secured a contract earlier this year to help build the Pentagon’s unmanned warplane program. The Pentagon has already sent hundreds of the company’s drones to Ukraine, and last month approved the potential sale of $300m worth of its Altius-600M-V attack drones to Taiwan. Anduril’s pitch deck, according to Luckey, claims the company will “save western civilization”.

Palantir, the tech and surveillance company founded by billionaire Peter Thiel, has become involved in AI projects ranging from Ukrainian de-mining efforts to building what it calls the US army’s “first AI-defined vehicle”. In May, the Pentagon announced it awarded Palantir a $480m contract for its AI technology that helps with identifying hostile targets. The military is already using the company’s technology in at least two operations in the Middle East.

Helsing was valued at $5.4bn this month after raising almost $500m on the back of its AI defense software. Photograph: Pavlo Gonchar/Sopa Images/Rex/Shutterstock

Anduril and Palantir, respectively named after a legendary sword and magical seeing stone in The Lord of the Rings, represent just a slice of the international gold rush into AI warfare. Helsing, which was founded in Germany, was valued at $5.4bn this month after raising almost $500m on the back of its AI defense software. Elbit Systems meanwhile received about $760m in munitions contracts in 2023 from the Israeli ministry of defense, it disclosed in a financial filing from March. The company reported around $6bn in revenue last year.

“The money that we’re seeing being poured into autonomous weapons and the use of things like AI targeting systems is extremely concerning,” said Catherine Connolly, monitoring and research manager for the organization Stop Killer Robots.

Big tech companies also appear more willing to embrace the defense industry and its use of AI than in years past. In 2018, Google employees protested the company’s involvement in the military’s Project Maven, arguing that it violated ethical and moral responsibilities. Google ultimately caved to the pressure and severed its ties with the project. Since then, however, the tech giant has secured a $1.2bn deal with the Israeli government and military to provide cloud computing services and artificial intelligence capabilities.

Google’s response has changed, too. After employees protested against the Israeli military contract earlier this year, Google fired dozens of them. CEO Sundar Pichai bluntly told staff that “this is a business”. Similar protests at Amazon in 2022 over its involvement with the Israeli military resulted in no change of corporate policy.

A double black box

As money flows into defense tech, researchers warn that many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally shielded from liability when their products fail to work as intended, even when the results are deadly, and the classified tendencies of the US national security apparatus mean that companies and governments are not obligated to share the details of how these systems work.

When governments take already secretive and proprietary AI technologies and then place them within the clandestine world of national security, it creates what University of Virginia law professor Ashley Deeks calls a “double black box”. The dynamic makes it extremely difficult for the public to know whether these systems are operating correctly or ethically. Often, it appears that they leave wide margins for mistakes. In Israel, an investigation from +972 Magazine reported that the military relied on information from an AI system to determine targets for airstrikes despite knowing that the software made errors in around 10% of cases.

The proprietary nature of these systems means that arms monitors sometimes even rely on analyzing drones that have been downed in combat zones such as Ukraine to get an idea of how they actually function.

“I’ve seen a lot of areas of AI in the commercial space where there’s a lot of hype. The term ‘AI’ gets thrown around a lot. And once you look under the hood, it’s maybe not as sophisticated as the advertising,” Scharre said.

A human in the loop

While companies and national militaries are reluctant to give details on how their systems actually operate, they do engage in broader debates around moral responsibilities and regulations. A common concept among diplomats and weapons manufacturers alike when discussing the ethics of AI-enabled warfare is that there should always be a “human in the loop” to make decisions instead of ceding total control to machines. However, there is little agreement on how to implement human oversight.

Activists from the Campaign to Stop Killer Robots stage a protest at the Brandenburg Gate in Berlin, Germany, on 21 March 2019. Photograph: Annegret Hilse/Reuters

“Everyone can get on board with that concept, while simultaneously everybody can disagree about what it actually means in practice,” said Rebecca Crootof, a law professor at the University of Richmond and an expert on autonomous warfare. “It isn’t that useful in terms of actually directing technological design decisions.” Crootof is also the first visiting fellow at the US Defense Advanced Research Projects Agency, or Darpa, but agreed to speak in an independent capacity.

Complex questions of human psychology and accountability throw a wrench into the high-level discussions of humans in loops. An example that researchers cite from the tech industry is the self-driving car, which often puts a “human in the loop” by allowing a person to regain control of the vehicle when necessary. But if a self-driving car makes a mistake or influences a human being to make a wrong decision, is it fair to put the person in the driver’s seat in charge? If a self-driving car cedes control to a human moments before a crash, who is at fault?

Protesters gather outside the gates of Elbit Systems’ factory in Leicester, UK, on 10 July 2024. Photograph: Martin Pope/Zuma Press Wire/Rex/Shutterstock

“Researchers have written about a sort of ‘moral crumple zone’ where we sometimes have humans sitting in the cockpit or driver’s seat just so that we have someone to blame when things go wrong,” Scharre said.

A struggle to regulate

At a meeting in Vienna in late April of this year, international organizations and diplomats from 143 countries gathered for a conference on regulating the use of AI and autonomous weapons in war. After years of failed attempts at any comprehensive treaty or binding UN security council resolution on these technologies, the plea to countries from Austria’s foreign minister, Alexander Schallenberg, was more modest than an outright ban on autonomous weapons.

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” Schallenberg told the audience.

Organizations such as the International Committee of the Red Cross and Stop Killer Robots have called for prohibitions on specific types of autonomous weapons systems for more than a decade, as well as for overall rules to govern how the technology can be deployed. Those rules would prohibit certain uses, such as harming people without human input, and limit the types of combat areas in which the weapons can be used.

A drone with AI integration is used to de-mine in the Zhytomyr region of Ukraine on 20 September 2023. Photograph: Maxym Marusenko/NurPhoto/Shutterstock

The proliferation of the technology has also forced arms control advocates to change some of their language, an acknowledgment that they are losing time in the fight for regulation.

“We called for a preemptive ban on fully autonomous weapons systems,” said Mary Wareham, deputy director of the crisis, conflict and arms division at Human Rights Watch. “That ‘preemptive’ word is no longer used nowadays, because we’ve come so much closer to autonomous weapons.”

Increasing the checks on how autonomous weapons can be produced and used in warfare has extensive international support – except among the states most responsible for creating and using the technology. Russia, China, the United States, Israel, India, South Korea and Australia have all opposed the creation of any new international law governing autonomous weapons.

Defense companies and their influential owners are also pushing back on regulations. Luckey, Anduril’s founder, has made vague commitments to having a “human in the loop” in the company’s technology while publicly opposing regulation and bans on autonomous weapons. Palantir’s CEO, Alex Karp, has repeatedly invoked Oppenheimer, characterizing autonomous weapons and AI as a global race for supremacy against geopolitical foes like Russia and China.

Soldiers from the British army used an AI engine during an exercise in Estonia on 2 June 2021. Photograph: Mike Whitehurst/Ministry of Defence/Crown Copyright/PA

This lack of regulation is not a problem unique to autonomous weapons, experts say. It is part of a broader issue: international legal regimes have no good answers for what happens when a technology malfunctions or a combatant makes a mistake in a conflict zone. But the concern among experts and arms control advocates is that once these technologies are developed and integrated into militaries, they will be here to stay and become even harder to regulate.

“Once weapons are embedded into military support structures, it becomes more difficult to give them up, because they’re counting on it,” Scharre said. “It’s not just a financial investment – states are counting on using it as how they think about their national defense.”

If development of autonomous weapons and AI is anything like other military technologies, there is also the likelihood that their use will trickle down into domestic law enforcement and border patrol agencies to entrench the technology even further.

“A lot of the time the technologies that are used in war come home,” Connolly said.

The increased attention to autonomous weapons systems and AI over the last year has also given regulation advocates some hope that political pressure in favor of establishing international treaties will grow. They also point to efforts such as the campaign to ban landmines, in which Human Rights Watch director Wareham was a prominent figure, as proof that there is always time for states to walk back their use of weapons of war.

“It’s not going to be too late. It’s never too late, but I don’t want to get to the point where we’re saying: ‘How many more civilians must die before we take action on this?’” Wareham said. “We’re getting very, very close now to saying that.”
