Taming Silicon Valley: How We Can Ensure That AI Works for Us. 
Gary Marcus, MIT Press, £16.99

 

How can we control the AI that is exploiting us? 

Gary Marcus has provided a book that lists the problems, flaws and threats of AI, sets out what, in the light of these, our demands should be, and suggests how we might meet them. 

Marcus is an expert on AI and has spoken to the US Senate judiciary subcommittee on AI oversight. He presents a soft-core socialist view: AI should be for everyone, under regulations that are good for all. Considering what is actually happening, this comes across as hard-core radicalism. 

Leaving aside the threat that AI poses to jobs and creativity, as well as the way it lubricates neoliberalism, Marcus enumerates the problems as these: 

•    The particular form of AI technology in use now, Generative AI, is deeply flawed.
•    The AI that companies are building is irresponsible. 
•    Generative AI is wildly overhyped, because hype makes it easier for corporations to raise money.
•    We are headed toward an AI oligarchy, with way too much power.
    
“Right now, we are building the wrong kind of AI,” says Marcus, “an AI — and an AI industrial complex — that we can’t trust.”

Generative AI is a particular approach to AI that uses large amounts of data to make predictions, typically about what things humans will do in some context. Impressive as it might appear, given the clusters of computers and networks at its disposal, it’s no better than guesswork plus Google. 

Large language models, on which Generative AI depends, record the statistics of words, but they don’t understand the concepts they use or the people they describe. Fact and fiction are not distinguished. There is no fact-check. Here we encounter, as we might have already in Philip K Dick, the hallucinations, inconsistent reasoning, and unreliability of AI. 

He then enumerates the 12 biggest immediate threats of Generative AI, including deliberate, automated, mass-produced political disinformation; market and stockmarket manipulation; intellectual property taken without consent; and environmental costs.

Marcus elaborates carefully on each of these, but overall this points to what has been called the Great Data Heist, a theft of intellectual property that will (unless stopped by government intervention or citizen action) lead to a huge transfer of wealth — from almost all of us — to a tiny number of companies. Which is to say that AI companies steal creative and intellectual work to use for AI under the pretext that AI is good for the future of humanity. They call it “techno-salvation.”

What happens to the naysayers? Blacklisted. Marc Andreessen, a fabulously wealthy California-tech ideologue, wrote a “Techno-Optimist Manifesto,” with a McCarthy-like list of enemies: “Our enemy is stagnation. Our enemy is anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness, etc.” He included a call against communism, complaining of the “continuous howling from communists and Luddites.” 

It’s the old complaint, in a new dress, against regulation, taxation, and workers’ rights.

Then there are the environmental costs: to generate a single image takes roughly as much energy as charging a phone, and Generative AI is likely to be used billions of times a day. An International Energy Agency forecast predicts that “Global electricity demand from data centres, cryptocurrencies and AI could more than double over the next three years, adding the equivalent of Germany’s entire power needs.”

No one votes for the tech companies that are, by now, bigger than nation states, and beyond the reach of regulation, let alone democracy.

Because OpenAI has trained its models on all the data it can scrape — which is to say, massive amounts of content and miscontent, a lot of it copyrighted with no compensation to the artists or writers who created it — and because regulation and appropriate governance are resisted tooth and nail, Marcus proposes a list of “things to insist on” to achieve “governance of AI,” including no use of copyrighted work without compensation, no training without consent, no coercive appropriation of data, and standards of transparency, among others.

He concludes with his most provocative proposals: suggestions for how we, as citizens, can make a difference, beginning with the demand that members of civil society be at every table. He advocates action, such as a “digital strike,” and civil organisations that allow citizens — and not just well-endowed political parties — to have a stronger voice.

How can we promote such ideas when intellectual arguments have given way to 140 characters, soundbites and TikTok videos, and a culture of “engagement farming,” with Generative AI–written articles starting to pollute science, education, politics, culture and the arts, as well as personal relationships? 

The answer, as usual, is with hard work, solidarity, and community. 

Jon Baldwin is Senior Lecturer, School of Computing and Digital Media, London Metropolitan University

JON BALDWIN recommends a well-informed survey of the ills promoted by AI tech corporations, and the measures needed to stop them exploiting us
Friday, November 22, 2024
