Madhumita Murgia - 'Code Dependent: Living in the Shadow of AI'

The cover for 'Code Dependent'.

Not long ago, OpenAI, the maker of ChatGPT, reportedly asked for a seven-trillion-US-dollar investment. Why? They can’t really answer the question.

The short version: they need more hardware and power than is currently available through regular channels. Naturally, being a capitalist company that’s 49% owned by Microsoft, they won’t disclose what the potential environmental costs would be. And even if they did say something, they’d likely lie about it.

OpenAI is one of the biggest players in the AI game, but they’re not the only one. More and more are popping up, prompting the world’s biggest companies to pour billions of US dollars into a madcap game where the winner takes it… all?

Madhumita Murgia, the AI Editor for the Financial Times, has written a book about the bravest people who work in AI: the people at the bottom of the AI supply chain.

The book starts off from a personal perspective: we follow Murgia as she discovers how much companies know about her, based on information they’ve bought from around the internet, harvested via cookies by Meta, Microsoft, and a plethora of other companies in the same business.

I tracked down the profile of someone I was intimately familiar with. Myself. To do this, I found a small adtech start-up called Eyeota, which walked me step-by-step through how I could pull the information being collected about me from my own web browser and then decoded it for me. The afternoon that Eyeota sent me the full report of an ‘anonymized’ version of me, I was on a train to Brighton. It included a report that ran to more than a dozen pages compiled by Experian, a credit-rating agency that doubled as a data broker.

Tech giants like Google and Meta have applied machine learning to target advertising as narrowly as possible and grow their worth up to $1tn. This lucrative business model that monetizes personal data is what American social psychologist and philosopher Shoshana Zuboff has called ‘surveillance capitalism’.

Murgia is not Eliezer Yudkowsky, the AI researcher and writer who argues that the training of AI systems more powerful than GPT-4 (an AI model created by OpenAI) should be halted outright.

Instead, she seems to believe in regulated AI.

Deepfakes aren’t unintended consequences of AI. They are tools knowingly twisted by individuals trying to cause harm. But their effects are magnified and entrenched by the technology’s ease of use, and institutional callousness: a lack of state regulation and the unwillingness of large online platforms to be accountable for their spread. The stories of Helen Mort and the other women that I spoke to are symbolic of this; our collective indifference to their pain.

My favourite thing about the book is the interviews with real people that Murgia has conducted. She’s spoken with people who have been affected by AI companies, who have been beaten down, and who have fought back.

Couldry points at gig work – app workers for places like Uber, Deliveroo or DoorDash – whose livelihoods and lives are governed by algorithms that determine job allocation, wages and firing, among other things. ‘It’s a tyranny,’ he told me. ‘There are moral questions here about what limits we must have to make lives liveable. This is where solidarity between people around the world is important. There are common struggles between workers in Brazil, in India, in China, in the US – it might not seem urgent in San Francisco right now, but it soon will be.’ For me, this framing was a revelation.

Workers at Sama, the outsourcing company whose Kenya-based staff labelled data for OpenAI, were ‘hired to categorize and label tens of thousands of toxic and graphic text snippets – including descriptions of child sexual abuse, murder, suicide and incest’. Most often, these workers are wiped away like chalk from a board and not really listened to: they were ordered to eat lunch in silence, which says a lot about the work culture, before going back to labelling training data. Murgia wanted to know how the quality of that work is checked.

How does he know if he’s done it right? ‘Sometimes, it’s not clear,’ he tells me. ‘Then you just have to go with how you feel.’

It’s not surprising that AI makes mistakes: it’s just a guessing parrot. AI models can be complicated and very fast at delivering answers, but as most of us know, ‘that sounds like it’s written by AI’ is never a compliment.

Murgia wandered the Sama halls.

On the other side of the doors that I was not permitted to enter, young men and women watched bodies dismembered from drone attacks, child pornography, bestiality, necrophilia and suicides, filtering them out so that we don’t have to. I later discovered that many of them had nightmares for months and years, some were on antidepressants, others had drifted away from their families, unable to bear being near their own children any longer. A few months after my visit, a group of nearly 200 petitioners sued both Sama and its client Meta for alleged human rights violations and wrongful termination of their contracts. The case is one of the largest of its kind anywhere in the world, and one of three being pursued against Meta in Kenya. Together, they have potentially global implications for the employment conditions of a hidden army of tens of thousands of workers employed to perform outsourced digital work for large technology companies.

I guess Mark Zuckerberg cares more about his monetary wealth than about what his company does to people.

Murgia goes into some of the severe problems baked into AI creation: the wealthy companies actually building these systems are biased, and they inject xenophobia, sexism, ageism, and all kinds of other prejudice into their models through how those models are written and trained.

There’s also a use of AI that most AI companies haven’t cared to address:

AI image tools are also being co-opted as weapons of misogyny. According to Sensity AI, one of the few research firms tracking deepfakes, in 2019, roughly 95 per cent of online deepfake videos were non-consensual pornography, almost all of which featured women. The study’s author, Henry Ajder told me that deepfakes had become so ubiquitous in the years since his study that writing a report like that now would be a near-impossible task. However, he said that indications from more recent research continue to show that the majority of deepfake targets are still women, who are hypersexualized by the technology.

That’s only one example. Even starker problems arise when, for example, a woman who was eight months pregnant is arrested for carjacking despite zero evidence, and despite owning two cars herself, all because of an erroneous facial-recognition match. Murgia goes into several similar cases.

Machine-learning algorithms are being tested as tools to predict recidivism in convicted criminals, to guide sentencing decisions and to assist custodial officers in deciding who should make bail. But the jury is out on how well they work. Meanwhile, there is evidence that they can be racist, unconsciously or by design. Even where race is not considered in the algorithm’s decision-making process, proxy variables – previous arrests, witnessing violence, living in a certain neighbourhood or simply being poor – are fed as inputs into AI and other statistical systems, propagating institutional racism. This was highlighted in an investigation from journalism non-profit ProPublica, which analysed a predictive tool called COMPAS that was widely used in the United States to forecast a defendant’s likelihood of re-offending, and therefore whether they should be afforded bail. ProPublica analysed COMPAS predictions for more than 7,000 arrestees in Florida and concluded it was a racist algorithm. Their findings showed that ‘blacks are almost twice as likely as whites to be labelled a higher risk but not actually re-offend.’ Conversely, they wrote, whites were ‘much more likely than blacks to be labelled lower-risk but go on to commit other crimes.’
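For readers who want to see what ProPublica’s finding boils down to in practice, here is a minimal sketch (mine, not ProPublica’s or Murgia’s) of how such error-rate disparities can be measured. The file name and column names (race, risk_label, reoffended) are hypothetical assumptions, not the actual COMPAS data schema.

```python
# Minimal sketch of a group-wise error-rate audit, assuming a hypothetical
# CSV with one row per defendant and columns: race, risk_label, reoffended.
import pandas as pd

df = pd.read_csv("compas_scores.csv")  # hypothetical file, not ProPublica's dataset

# Treat a "high" risk label as a positive prediction of re-offending.
df["predicted_high_risk"] = df["risk_label"].eq("high")

for group, rows in df.groupby("race"):
    # False positive rate: labelled higher risk but did not actually re-offend.
    did_not_reoffend = rows[rows["reoffended"] == 0]
    fpr = did_not_reoffend["predicted_high_risk"].mean()

    # False negative rate: labelled lower risk but went on to re-offend.
    did_reoffend = rows[rows["reoffended"] == 1]
    fnr = (~did_reoffend["predicted_high_risk"]).mean()

    print(f"{group}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```

A roughly twofold gap in false positive rates between groups, as ProPublica reported, is exactly the kind of disparity this comparison surfaces.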

Her storytelling abilities go a long way.

I also met Pete Fussey, a criminologist who has been studying crime and surveillance in urban spaces for the past twenty years and is a long-time resident of Stratford.

Fussey showed me a small equipment room overlooking the bridge, with a door marked ‘Private’. For a period in 2019, he had spent several hours here embedded with Metropolitan police officers, who were trialling facial-recognition technology on pedestrians. Fussey was given unfettered police access during the trials to study their methodology. He had sat with the officers in this makeshift stakeout for several hours over many weeks, while they attempted to identify criminals from amongst the passers-by. He concluded that the use of the software weakened individual police discretion, while also dulling officers’ observational powers and intuition. He had watched them chase down many unknowing and innocent pedestrians after an incorrect face-match from the software, made by Japanese corporation NEC. Rather than an assistive tool, it looked to Fussey like the human officers were often mindlessly carrying out the orders of the machine.

She also considers ethical aspects at a national level:

It wouldn’t be the first time a government had used sensitive data to harm its own people. After all, from 1976, Argentina had been ruled for seven years by a military dictatorship that collected extensive data about the public through surveys and polls, which was used to craft propaganda and influence citizens.

In that same era, data formed the backbone of the notorious Operation Condor, a United States-backed campaign of terror and repression carried out jointly by several dictatorships in the countries of the Southern Cone, which include Chile, Argentina, Brazil and Bolivia. These countries contributed and shared data about citizens seen as threats to the authoritarian governments, including left-leaning politicians and thinkers, trade unionists and clergy. That information was stored in a shared computer system used by the group to plan abductions, ‘disappearances’, tortures and even assassinations.

The people’s data became a weapon used against them.

Altogether, Murgia presents AI as a story about the future: one in which it either runs rampant at the behest of surveillance capitalists or is sharply regulated. Murgia is a reporter, and this book does not really delve into the philosophical and ethical question of when we should stop AI development if it could hurt humanity, which it could: already, at this point in time, AI can be used to sway political elections, fool nearly any human being with highly believable deepfake videos and images, and harvest your data for nefarious purposes.

This book is well written and important as an eye-opener for people who blindly believe in the benevolent powers of AI. If AI were a friend, we would react to its continuous stealing and lying, and I personally think that is exactly how we should treat people like Sam Altman and companies like Microsoft and Amazon: as that kind of friend.