Tech overlord Sam Altman’s legal skirmish with actor Scarlett Johansson brings the blurred lines between artificial intelligence and the world it seeks to transform into sharper focus.
For those who missed it, Johansson is suing Altman’s OpenAI over claims he ignored her refusal to grant consent to use her voice in its latest ChatGPT release – which was later unveiled with a generated voice using a husky, flirtatious tone Johansson says is unabashedly in the style of her work in the movie Her.
That 2014 film (about a sad and lonely guy who falls in love with his operating system) is said to be Altman’s favourite – although on a recent rewatch, conjuring up a compliant partner to cater to one’s every whim seems more red flag than vision splendid.
As the federal government grapples with this rapidly evolving technology – proposing to criminalise porn deepfakes while simultaneously developing industry standards to enhance AI trust – the Her fracas reinforces the contradiction at its heart.
Training machines to predict and automate based on the patterns of prior human experience can create outputs that border on the magical. Yet the dirty truth is that it is built on the stolen work of those whose behaviour it seeks to replicate. Whether you’re a Hollywood star, a writer, a teacher, a health worker or a truck driver, your labour is both the raw input and the end target of this technology.
According to the latest Guardian Essential report, the public response to AI is almost completely at odds with industry hype: twice as many of us believe the risks of AI outweigh the opportunities as believe the inverse.
Decisions on how we manage this tension between risk and opportunity are ultimately political. In their remarkable book Power and Progress, economists Daron Acemoglu and Simon Johnson provide a compelling framework for thinking this through.
Their models show that where a technology simply automates or surveils workers in pursuit of efficiency, it leads to a concentration of wealth and power. It is only when systems are designed through the prism of “machine usefulness” (new tools, new products, new connections or new markets) that they deliver genuine productivity.
Working with UTS’s Human Technology Institute, Essential has had the chance to put this theory to the test, conducting deep-dive reflective research with nurses, retail workers and public servants into how AI is being applied.
Rather than just seeking a reflex response to a concept few really understand, we briefed workers on how the technology is currently being deployed, asked them to map their own workplaces and then to reflect on the opportunities and risks they saw.
The common theme across all groups was that workers are far better at thinking this through than pretty much anyone has given them credit for. They not only have a keen eye for the potential to improve processes that take them away from their core mission, but also a critical eye for where the ethical red lines should lie.
Nurses offered insights into how automation could both improve and undermine patient care; public servants were alert to the mistakes of robodebt and worried that its legacy would overshadow future opportunities for trust-building.
As for retail workers, the crash test dummies most exposed to a reckless combination of automation and surveillance, there was deep concern about the way that automatic checkouts have undermined the humanity of their work and the experience of their customers.
The message from our research was consistent and compelling across three quite different sets of participants. It’s critical that workers become far more than invisible bystanders in the AI revolution; they have both a right and, they would say, a responsibility to actively design the new technology.
Overwhelmingly, the public agrees across all voting types. By way of context, these numbers are as strong as support for the banning of social media for teenagers, which seems to be the current Band-aid fix to our digital jungle.
What does this mean for regulators eager to save us from technology? When it comes to AI, the best defence is not to simply wrap ourselves in a protective legislative cocoon and demand another tough new law to preempt or repel every risk or act of harm.
Rather, it is about determining who has the power.
If we are going to embrace AI, let’s do so as active participants, not passive subjects. Let’s embed the notion of shared benefits with strong industrial guardrails. Let’s get AI out of the IT department and onto the shop floor. And let’s demand those driving the introduction of this technology do so with us, not to us; shaped by us, not shaping us; augmenting our labour, not automating it.
The lesson of the social media revolution has been that technology is neither innately good nor bad. What seemed like a positive tool to connect people on an open platform has become a threat to our collective wellbeing because of the underlying business model.
Approaching AI with this critical mindset, rather than naively embracing progress as a self-evident good, is the first step.
Thanks to scholars like Acemoglu and Johnson, we now have an economic argument to match the moral one: the adoption of new technology can make us all richer and happier if we are given the chance to collectively design and control it.
Scarlett Johansson won’t save us. But if we can build our own Marvel Universe of local heroes who are trained to draw these lines and are granted the right to enforce them, we just might have a chance to harness this new source of power in our interest.