Plan for the Day
- Connect Neuromancer (Wintermute, Neuromancer, Dixie Flatline, and the novel in general) to contemporary assumptions about AI
- F. T. Marinetti and the Italian Futurists
- Thinking about Risks
- Hugo Neri’s “Preface,” “Introduction,” and “Chapter 1”
Wintermute, Neuromancer, and Dixie Flatline
Let’s use Neuromancer to connect to today’s reading. Neri explains that fictional narratives influence cultural assumptions about science and technology: “Science fiction, AI, and counterfactual thinking have composed a successful formula in the pop culture in the last decades” (30-31).
- Wintermute and Neuromancer are RAM (Random Access Memory)
- Dixie Flatline is ROM (Read-Only Memory); see the sketch after this list
- The only governmental-ish group in the novel: the Turing Police
- Wintermute was programmed to be free…
- “freedom’s just another word for / nothing left to lose” (Janis Joplin, “Me and Bobby McGee”)
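The RAM/ROM distinction is worth lingering on because it explains why Dixie Flatline can’t grow while Wintermute can. Here’s a playful sketch in Python (mine, not Gibson’s or Neri’s; the class and variable names are invented for illustration) of the difference between a construct that can only replay fixed state and one that can rewrite its own:

```python
# An illustrative sketch of the novel's RAM vs. ROM constructs.

class ROMConstruct:
    """Read-only memory: fixed at recording time, like the Dixie Flatline."""
    def __init__(self, memories):
        self._memories = tuple(memories)  # immutable; nothing new sticks

    def recall(self):
        return list(self._memories)


class RAMConstruct(ROMConstruct):
    """Random-access memory: can rewrite itself, like Wintermute."""
    def __init__(self, memories):
        self._memories = list(memories)  # mutable; the construct can change

    def learn(self, experience):
        self._memories.append(experience)


dixie = ROMConstruct(["an old run"])
wintermute = RAMConstruct(["an old run"])
wintermute.learn("a new experience")  # works: RAM accumulates state
# dixie has no learn() method; a ROM personality can only replay itself
```

The design point: Dixie can be consulted but never changed, which is part of why the Flatline’s request to be erased lands so hard, while Wintermute can accumulate state and act on it.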
Science Fiction Acclimates the Public
Although Neri writes from a different discipline, the influence of science fiction on cultural assumptions about science and technology was established in technical communication scholarship nearly a decade before his book was published…
- Even though science fiction rarely focuses on technology or science that is possible, at least contemporarily, the narratives still acclimate audiences to perceptions of technology and science and implicitly or explicitly propose that science and technology, which are always advancing, will continue to provide solutions and change conditions for humans and society overall. (23)
- Science fiction narratives provide visions of what could come and how audiences might deal with technological change. (24)
- Although science fiction communicates the rhetoric of technological progress, it is also a major genre reflecting audiences’ belief that technological advances will solve disease, hunger, and even war. (24)
Toscano, Aaron A. Marconi’s Wireless and the Rhetoric of a New Technology. Springer, 2012.
Neri’s Chapter 1 “Risk, Imagination, and Artificial Intelligence”
- p. 23: “If the components of the decoded message are inconsistent with previous beliefs or contradict values to which the receiver feels attracted, the signal is ignored or attenuated.”
- Sounds like confirmation bias…
- p. 24: “Even though there is an increasing awareness regarding fake news, which may bring some risks, it is virtually impossible to regulate peoples’ amplification of their perception of risk on something.”
- Demonstrations vs. Experiments
- p. 24: “the potential hazards from AI would never reach the public attention unless people and media communicate about these adverse possibilities.”
- p. 25: “It seems that for laypeople, the concept of risk includes qualitative aspects such as dread, the likelihood of a mishap being fatal, and the catastrophic potential. In this sense, the idea of future risk drives the general perception and representation of possible threats.”
- p. 27: Potential (probable to possible) and imminent risk are matters of perception for lay audiences
- “Roughly, the formation of a risk representation starts with the human and nonhuman effects caused by a hazardous event, which, in turn, triggers an extensive media coverage and consequently yields the perception of imminent risk.”
Risky Business
Hugo Neri discusses risks and risk perceptions before explaining that there are no real AI risks…only perceptions of possible risks, making the technology different from automobiles, nuclear power, or marriage. To get us started, let’s consider these risks:
- Snakes (on the ground or on this %^&#$@ plane!)
- Automobiles
- Car accidents
- Speeding vs running red lights
- Ridin’ Dirty (in general)
- “Tesla shares gain as Musk offers US customers self-driving software trial” (Reuters, 26 March 2024)
- Don’t blink! Just a month later… Also, the name Full Self-Driving (FSD) is not accurate because humans need to be available to take over.
- “Tesla Slashes EV, FSD Prices In Latest Strategy Shift. The Stock Keeps Falling.” (Investor’s Business Daily, 22 April 2024)
- Plane crashes
- Running with scissors
- Unprotected sex
- Winning the lottery
- Investing in the stock market
Artificial Intelligence (AI)
To say this is a big topic would be an understatement. I mention in class that, with the exception of those of you in advanced computing and informatics majors, our understanding of AI comes from popular sources: news reporting, political rhetoric, and, of course, science fiction. I have a VERY controversial interpretation of AI, but, before going there, let’s have some definitions:
- Narrow (or Weak) Artificial Intelligence: machine learning that responds to users through “programmed” learning (autocorrect, autofill, help bots, etc.); see the sketch after this list
- General (or True) Artificial Intelligence: computers become as conscious as humans or surpass their abilities (theoretical superintelligences)
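To make “programmed” learning concrete, here is a minimal sketch (mine, in Python; the toy corpus and function names are invented for illustration) of the statistical pattern-matching behind an autocomplete-style narrow AI. It counts which word tends to follow which and suggests the most frequent follower, with no understanding involved:

```python
from collections import Counter, defaultdict

# Toy corpus; a real autocomplete trains on vastly more text.
corpus = "the cat sat on the mat and the cat ate the rat".split()

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def suggest(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" -- the word most often seen after "the"
print(suggest("mat"))  # "and"
print(suggest("rat"))  # None -- "rat" never precedes anything here
```

Everything weak AI does is a more sophisticated version of this: pattern response within programmed limits. Nothing in the sketch is conscious, which is exactly the gap between narrow AI and the general AI of science fiction.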
We encounter weak AI daily, and the related technologies range from rudimentary for 2023 (autocorrect) to advanced (surveillance technologies such as facial recognition). From a social construction of science and technology standpoint, these AI technologies replicate social conditions, meaning social values are embedded into the system (e.g., racist faucets). Basically, the internet didn’t democratize information and access to services because it isn’t deployed in a democratic system; instead, it became a conduit for multinational corporations to make immense profits, which, of course, follows the logic of capitalism (cf. “Database Culture”).
Science fiction often deals with assumptions about general AI, specifically machines becoming sentient and taking over. However, that situation is really a colonial metaphor that reflects Western culture’s history of colonization, enslavement, and exploitation of people and the environment. Such behavior is social: a nationalistic, aggressive group follows a path of domination through military and economic means.
Controversial statement: General Artificial Intelligence is a myth that will never come to be because it’s based on the flawed assumption that humans themselves aren’t programmed. Humans are machines that consume because, in late capitalist contexts specifically, they’re conditioned to. Society is the entity, and the profit motive keeps society alive by programming (advertising, education, ideology) consumers to keep consuming. This requires resources, and hegemonic groups employ force(s) to maintain their corporations’ access to materials that will continue maintaining the social entity. Sentient machines would not do this on their own: only under the already programmed paradigm of exploitation would machines carry out oppression. Just like QT, they work for the Master, the powers that be. Because there’s no originality in thinking, machines and humans alike follow programmed behaviors…and both are susceptible to glitches.
Consider the above a purposeful attempt to get you to think and draw comparisons among our materials. Popular discussions of AI rarely get to the philosophical level we’re aiming for. The course readings aren’t meant to have you conclude in a specific way but to get you to think, radically perhaps. Otherwise, we’re just idling (metaphorically or literally) in the fast-food drive-through so that the powers that be can profit.
Next Class
We’ll continue with Neri and finish Ch. 1 before going on to Ch. 4. I will someday have commented on all your Social Construction of Technology drafts.