Robots are learning to ask for help. How do you feel about that?
At Carnegie Mellon University’s Robotics Institute,
The robots were constantly demonstrating what they couldn’t do, but [there was] a simple thing they could do. . . . [they] could say, ‘Human, can you press the elevator button for me?’ Or ‘Human, can you get out of the way?’
The logical next step: “Human, can you provide me with crucial information about your job?” Despite the fact that Artificial Intelligence (AI) is not my favorite topic (or in my area of study), events compel me to explore the issue.
Last spring, discussions about AI’s impact on college teaching became impossible to ignore. Faculty and students had different concerns. Professors argued, first, that reliance on AI applications for course papers creates intellectual deficits. The purpose of those assignments is to develop a student’s critical thinking and writing skills; automating the process subverts this. Second, using ChatGPT encourages plagiarism and cheating. Students countered that AI tools made them more efficient and allowed them to turn in assignments on time.
These discussions were only a small part of larger debates over AI. The technology’s rapid development has roiled numerous industries and has become a political flashpoint. It didn’t take me long to find a lot of information.
Tech companies were happy, initially, to be free from government oversight. But now they’re scared. The CEOs of Google, Meta and OpenAI voluntarily met with Congressional leaders in September asking for government intervention “to avert the potential pitfalls of the evolving technology.”
Some fear the evolution of a murderous AI, like Skynet in “The Terminator” movies. The popular doomsday plot goes like this: Imagine a world . . . where artificial intelligence has surpassed—and become hostile to—human intelligence. Driven by malevolence towards humans, AI creates a robot army to eliminate humanity and take over the world. . . . “Evil AI” can succeed in suspenseful and thrilling movies—but as a predictive model for real life? It fails.
The Robot Apocalypse is a deflection; it’s the “shiny object” that distracts from what’s really going on. In reality, we are seeing a version of the age-old struggle between the workers and the bosses. Today’s conflict is over power, secrecy, and the future.
The recently ended Writers Guild of America (WGA) strike epitomized this struggle. Film and TV writers faced off with the studio moguls over compensation, control of AI, and corporate transparency. The five-month strike ended with writers extracting major concessions from the Alliance of Motion Picture and Television Producers (AMPTP). The writers pressed for a framework that would allow them to use AI as a tool and prevent executives from using it to replace their employees. From The Guardian:
One of the most closely watched aspects of negotiations was the use of artificial intelligence, amid concerns from both writers and actors that unchecked AI could dramatically reshape Hollywood and undermine their roles, pitting artists against robots in a battle over human creativity.
With terms of AI use finally agreed, some writers are breathing easier – for now – and experts say the guidelines could offer a model for workers in Hollywood and other industries. The writers’ contract does not outlaw the use of AI tools in the writing process, but it sets up guardrails to make sure the new technology stays in the control of workers, rather than being used by their bosses to replace them.
An algorithm can come up with ideas for shows—just not new ideas.
A need for greater corporate transparency from AMPTP media producers was also a concern of striking writers. It’s clear why media companies try to keep their operations secret—it gives them more leverage over their creative collaborators and advantages them vis-à-vis their competitors. Forcing corporations to share what they know about their audiences (us) is an overall good thing. It would be nice if the resolution of the writers’ strike managed to set a precedent. The Hollywood Reporter reports:
As for streaming transparency, the compromise was limited. The union will have confidential access to ‘the total number of hours streamed, both domestically and internationally, of self-produced high budget streaming programs (e.g., a Netflix original series)’ and ‘may share information with the membership in aggregated form’ — in other words, there will be somewhat more transparency, but the streaming services aren’t exactly opening up their troves of data for public consumption.
Get that? Media conglomerates have “troves of data” about their audience’s (our) preferences, viewing habits, and . . . more stuff we don’t know.
The WGA’s successful strike demonstrates the effectiveness of the writers’ strategy and shows the way forward. What labor (i.e., the majority of us) requires to rebalance the relationship with capital (i.e., the bosses) is clarity regarding the problem, solidarity within the union, sympathizers and allies outside the union, a determination to persevere, effective communication with the public, and . . . a responsive political administration in Washington, D.C.
Other writers have opened a second front, attacking the dishonesty at the core of AI’s advance. Prominent authors have filed a lawsuit challenging the unauthorized use of their copyrighted works for AI algorithmic training.
The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI
New York, N.Y., September 20, 2023—The Authors Guild and 17 authors filed a class-action suit against OpenAI in the Southern District of New York for copyright infringement of their works of fiction on behalf of a class of fiction writers whose works have been used to train GPT. The named plaintiffs include David Baldacci, Mary Bly, Michael Connelly, Sylvia Day, Jonathan Franzen, John Grisham, Elin Hilderbrand, Christina Baker Kline, Maya Shanbhag Lang, Victor LaValle, George R.R. Martin, Jodi Picoult, Douglas Preston, Roxana Robinson, George Saunders, Scott Turow, and Rachel Vail.
‘Without Plaintiffs’ and the proposed class’ copyrighted works, Defendants would have a vastly different commercial product,’ stated [plaintiffs’ lawyer] Rachel Geman. ‘Defendants’ decision to copy authors’ works, done without offering any choices or providing any compensation, threatens the role and livelihood of writers as a whole.’
Two key aspects of the power struggle are targeted by the Authors Guild lawsuit: secrecy and theft. Computer expert and author Alex Reisner writes in The Atlantic:
One of the most troubling issues around generative AI is simple: It’s being made in secret. To produce humanlike answers to questions, systems such as ChatGPT process huge quantities of written material. But few people outside of companies such as Meta and OpenAI know the full extent of the texts these programs have been trained on.
How does the algorithm get “trained”? AI’s large language models (LLMs) need a lot of language, and the higher the quality of the writing they are trained on, the better the product. To investigate the extent to which the training materials are simply purloined works by famous authors, Reisner
obtained and analyzed a dataset used by Meta to train LLaMA [Meta’s proprietary LLM AI]. Its contents more than justify a fundamental aspect of the authors’ allegations: Pirated books are being used as inputs for computer programs that are changing how we read, learn, and communicate. The future promised by AI is written with stolen words. (Emphasis added—m.o.)
As a writer and a social scientist—I take piracy personally. If I wanted to use someone’s book as data for a research project, professional ethics require me to obtain formal permission. I would need a signed document from the author (or originator) of the book (or dataset). When I publish my article, the acknowledgements section would include a thank you to the author for permission to use their work, and I would cite the original source in my references. Suffice it to say, none of the AI developers is following this protocol.
Disregarding copyright and using others’ intellectual labor to create new products exemplifies tech company expediency. And this goes without saying: the new applications are going to make some people extremely wealthy. “Some people” do not include the creators of the LLM “training materials.”
So this is where we are. On the one hand, tech and media conglomerates worry about what happens when AI is uncontrolled. On the other hand, regulatory solutions will constrain their corporate power (about which they also worry). The solution lies in rebalancing the power differential to favor those actual humans who can use AI as a creative tool; this means limiting the power of AI owners. Which brings us back to government. And politics.
Cyber-security experts Bruce Schneier and Nathan Sanders address these issues in a New York Times op-ed, The A.I. Wars Have Three Factions, and They All Crave Power:
Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons. . .
Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment . . . or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re [preparing their escape to] . . . Mars? It is critical that we . . . see through the specter of A.I. to stay true to the humanity of our values.
Let’s aim higher than the latest tech start-up’s “value proposition.” True human values are peace, gratitude, empathy, generosity, compassion. Love.
Related Grounded articles:
It’s not the Robots—it’s the rip-off (June 13, 2023)
Notes:
Dani Anguiano and Lois Beckett, How Hollywood writers triumphed over AI – and why it matters.
Authorsguild.org, The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI (press release).
Katie Kilkenny and Lesley Goldberg, Writers Guild Reveals Details of Tentative Deal With Studios on AI
Julian Mark and Tucker Harris, Could “The Terminator” really happen?
Alex Reisner, Revealed: The authors whose pirated books are powering generative AI.
Bruce Schneier and Nathan Sanders, The A.I. Wars Have Three Factions, and They All Crave Power (unlocked NYT article).
Josh Tyrangiel, Robots need people, too.
One of the most important problems that I fear with AI (and one I rarely see mentioned) is the ability of AI to control what is presented as FACTUAL information in both the academic and public spheres. Teaching critical thinking skills will become next to impossible when "factual" information that informs decisions and opinions is controlled by whoever generates the "facts" that are presented by AI and embraced by various sources. Actual "facts" will be what the AI determines to be true. The supposed science fiction of "1984" is turning into the reality of 2023.