In late May, hundreds of industry leaders, scientists, academics, and others intimately involved in the development of advanced artificial intelligence signed on to this statement: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." The statement follows a period of fairly intense media and government scrutiny of A.I., including a headline-grabbing May 23, 2023 hearing on the matter by the U.S. Senate Subcommittee on Privacy, Technology, and the Law.
As someone who works in higher education, I have more than a passing interest in the implications of ChatGPT and other A.I. devices for teaching and learning. The Chronicle of Higher Education has published a number of recent essays on the topic, with titles like "How Will Artificial Intelligence Change Higher Ed?," "How ChatGPT Could Help or Hurt Students With Disabilities," "Will ChatGPT Change How Professors Assess Learning?," "ChatGPT is Already Upending Campus Practices, Colleges are Rushing to Respond," and so on. On my own UW Oshkosh campus, A.I. has sparked a spirited discussion among instructors on an email distribution list, been the topic of a guided discussion on Zoom, been brought up for discussion in virtually all academic departments, and will probably be a major subject of faculty/academic staff senate and/or administrative policy initiatives in the near future.
My own observation of the dominant perspectives on A.I., both outside and within higher education, is that for the most part they tend to see A.I. as more causal than symptomatic. It's very similar to the mainstream view of cell phone usage; "the phones have made us more distracted and less able to live in the moment" is a common refrain. Maybe that's true. But can anyone point us to the Edenic period when the majority (or even a significant percentage) of humans stayed focused on tasks at hand and lived in the moment, especially in radically individualistic cultures like the United States? As someone who has now been teaching for forty(!) years, I promise you that American college students have NEVER had an easy time staying task focused and in the moment. Thus a strong argument could be made that problematic phone behavior was, and is, a symptom of the human tendency to seek distraction and do anything to avoid the real hard work of communicating in the moment with other human beings.
When it comes to artificial intelligence, I see the abuses as symptoms of two major features of modern society: (1) the uncritical acceptance of the free-market, capitalist economy as the system best suited to serving human needs; and (2) the joyless culture that results from mass-level allegiance to the values of that economy. Obviously this is a big topic that deserves book-length treatment. In this rant I will only sketch out a few ideas. I promise that none of them have been generated by ChatGPT.
Artificial Intelligence and the Free Market
When the Soviet Union broke down in the late 1980s and early 1990s, western media immediately adopted the Reagan Administration's framing of the upheaval as the victory of democracy and the market economy over tyranny and communism. More rigorous reporting would have exposed the oversimplification (and absurdity) of this framing. It would not have required defending the corruption and cruelty of the Soviet empire builders to point out that their defeat did nothing to minimize anti-democratic tendencies in the west, and nothing to challenge what Eisenhower called the "unwarranted influence" of the military-industrial complex. Indeed, more than thirty years after the disappearance of the Soviet Union, the only thing Democrats and Republicans in Washington can agree on is raising the military budget. As noted by journalist John Nichols, "there's never a debt ceiling for the military-industrial complex."
The market economy that rose from the ashes of the Cold War, technically called "neoliberalism," is essentially a global version of Reaganomics. Canadian author and activist Naomi Klein, in her 2014 book This Changes Everything, succinctly identified the three main policy pillars of neoliberalism as "privatization of the public sphere, deregulation of the corporate sector, and the lowering of income and corporate taxes, paid for with cuts to public spending."
The impact of neoliberalism on blue-collar workers should no longer be up for debate. The so-called "free-trade" deals empowered corporations to engage in a never-ending quest for cheap labor, with devastating results for American manufacturing. Promises that workers would be retrained to participate in a much-hyped, high-wage business service economy turned out to be hollow. Instead what we've had is a mostly bipartisan enabling of low-road economic practices. The Democrats became so overtly associated with these practices that millions of Americans impacted by them somehow imagined Donald Trump as a potential solution. Some Dems, like Senator Chris Murphy of Connecticut, have recognized the "wreckage" of neoliberalism and advocated for reforms that would move the economy toward a high road.
Absent some kind of radical reform of our economy, artificial intelligence systems will easily wipe out huge swaths of the white-collar economy. And why wouldn't they? Does anyone honestly believe that multinational corporations--eager to exploit foreign labor abroad while betraying blue-collar workers at home--will not eagerly do the same thing to college-educated, white-collar workers? The fact that most white-collar workers, from the 1990s up to today, showed little solidarity with those victimized by the low-road economy will make the road to reform more difficult.
If the global, neoliberal economic order remains intact, then the vanguard leading that economy will make sure artificial intelligence benefits them exclusively. At the same time, they will gaslight the masses with rhetoric about how "A.I. disrupting the work force in the short term is a necessary condition for long-term growth." In such an environment our only real hope is to engage in grassroots organizing rooted in an international spirit of solidarity across lines of class and race. This will not be easy, and the odds of failure are much greater than the odds of success. But if the alternative is to trust that the same vanguard that got us into this mess will somehow be more moral and mindful when it comes to A.I. impacts, then we are fooling ourselves.
Artificial Intelligence and Our Joyless Culture
Here I will focus primarily on academia, as that is the realm of existence I have the most familiarity with. My experience has been that every time a new technology with implications for education is introduced, academics divide into two groups. The "neo-Luddites" are usually slow to accept or adapt to technological change, want strict policies put in place to deter student cheating, and resist any suggestion that "tried and true" methods of education (e.g. the lecture, the lengthy term paper, the essay test, etc.) might be anachronistic. The "Futurists" do not dismiss the neo-Luddites' concerns, but generally see technological change as something we should embrace and shape to help meet the requirements of sound pedagogy. The Futurists are the kinds of instructors who might address student cell phone use not by banning it entirely, but by using phone apps in classroom activities so that the technology can be put at the service of learning. Similarly, the Futurist might have a policy in place to punish irresponsible use of A.I., but they are also more likely to educate students on "smart" uses of it.
Most teachers, myself included, have both neo-Luddite and Futurist tendencies. What has always frustrated me, whether in the relatively low-tech classroom of my early teaching days or the more high-tech environment of today, is what seems like a high percentage of students who simply do not get joy out of the act of creation. When I tell students that I have been writing a column of at least 900-1000 words every month for over twenty years, and that a huge reason for that is the sheer joy I get out of thinking, creating, and provoking, I often get perplexed looks in return. Many of my colleagues across campus get similar reactions when they talk about their own creative output, whether it is peer-recognized scholarship, artistic performance, or any number of expressive works.
I've come to the conclusion over the years that the problem is that we have somehow created a culture that places a high premium on behaviors that do not correlate very highly with joy: getting the "right" answer, repeating back "authoritative" knowledge, and doing everything on time. I often require students to come see me to talk about paper or speech assignments, and those meetings are fascinating because students frequently expect me to tell them what to write or say. I try hard in those meetings to provoke them to come out with some original thoughts, and then praise them lavishly when they do, in the hope that they will get a feeling of joy from creating something that someone else perceived as fresh and original. Sometimes I unwittingly do end up giving them an idea for a paper or speech, in large part because I am experiencing joy in thinking about the topic while we engage in conversation. Obviously there are exceptions to what I am describing here; a number of students get joy from the act of creation. But the exceptions always seem to prove the rule.
Student support systems on campus, all of which are run by extremely competent and well-meaning professionals, sometimes reinforce the joylessness. For example, when students are having difficulties with course material, they are often told to go talk to the professor to find out "what they want." Or when told to seek academic advising, they are told that the meeting should be strictly about "what courses to take." In a real sense, the students are being prepared for the neoliberal economy described earlier, in which their material success will be tied to their ability to appease power. If you think the lack of joy in education is confined to higher ed, you should read Susan Engel's excellent 2015 piece in the Atlantic called "Joy: A Subject Schools Lack."
A number of schools have already banned ChatGPT. The argument of this rant is that moves to ban A.I. systems minimize or ignore the cultural issues that make A.I. attractive in the first place.
In a joyless culture, using A.I. to write a paper makes total sense, does it not? If I get no joy out of creating original work, and if my only real value is the extent to which I can repeat back existing knowledge and do it on time, then why not use A.I.? In this culture, the joy of creativity is simply not part of the equation. As of January 2023, nearly 1 in 3 college students reported using ChatGPT on written assignments, and I expect that number to rise substantially in the next few years. Academics, especially the neo-Luddites, will rush to create policies to deter and/or ban A.I. usage, but they will be missing the point: as long as we continue to prioritize and reward joyless behaviors, even our "best" students will continue to be content with "getting shit done." The joylessness of school work was a problem before A.I. and will continue to be in the future unless we make a concerted effort to rethink our dominant cultural values.
Of course, what I am describing is not just a problem for students or for education in general. I read an article in the New York Times recently about a lawyer who, representing a man suing an airline, used artificial intelligence to prepare a court filing. The lawyer's legal submission, which will now be subject to a hearing to discuss possible sanctions, was "replete with bogus judicial decisions, with bogus quotes and bogus internal citations." The lawyer in question did not promise to never use artificial intelligence again. Rather, he "will never do so in the future without absolute verification of its authenticity."
No doubt that lawyer will claim he was simply overwhelmed with work and that ChatGPT presented a quick way to get the court filing in on time. When academics are caught using A.I. to write scholarly articles they will probably say the same thing, as will journalists and any other professionals whose work relies on message creation. In a joyless culture that is the product--at least in part--of the unforgiving and predatory economy in which we exist, we should expect nothing less.
Wanting to mitigate the risk of extinction from A.I. makes total sense. Thinking we can do that without addressing the serious deficiencies of the culture that created A.I. makes NO sense.
July 2, 2023 Update: In today's New York Times, writer and podcaster Evgeny Morozov has an op-ed called "The True Threat of Artificial Intelligence" that also makes a connection between AI and the neoliberal economy. --Tony Palmeri