I think one of the biggest travesties of the current implementation of artificial intelligence is how pliable it makes us.
The Will Smith film I, Robot is set in a society not unlike ours, only a few years in the future. In this alternate near-future Chicago, robots are everywhere. The movie focuses on the newest line of androids rolled out by a major conglomerate, each driven by a single corporate AI. The company promises that the androids are safe, and so, naturally, everyone believes it. They become more universally adopted than the rudimentary robots that preceded them. They are fundamentally trusted with every task, from police work to dog walking. Except by one cop, who vehemently distrusts them (enter Will Smith). For the first two acts, his boss, his co-workers, the staff at the robot factory, his grandmother, and strangers on the street all call him a zealot. His grandmother has her android cook him dinner, a recipe of hers he knows intimately, to prove that they are harmless. He tastes no difference between his grandmother’s cooking and the android’s. She, like everyone else in Chicago, has handed task after task to the android until all that is left for her to do is oversee it and entertain herself. And precisely because of a lack of oversight, at the corporate level, something goes wrong. I won’t spoil the rest of the flick.
In Pixar’s WALL-E, artificial intelligence has been running the show for generations. We’ve graduated past merely involving AI, robots, and androids in everyday tasks; we’ve given up the concept of staying in control altogether. The task of human oversight is essentially neglected because the robots prove so efficient and trustworthy. In the second act of that film, we see that people have assigned every task to various AIs and robots for so long that they are no longer capable of walking. All they do all day is indulge in entertainment and get fed by robots. The logical leap becomes The Matrix: a world in which AI has decided it’s better at the whole ‘running the world’ thing than we are, and lists several catastrophes on our résumé to justify the decision. In that scenario, entertainment has become the only experience available to humans. Their physical bodies are maintained as massive power farms to generate the electricity the AI consumes. Yes, I’ll spoil that one; you’ve had twenty-five years. There’s another show I’ve watched, Inside Man, featuring Stanley Tucci as a psychologist on death row for murdering his wife. When asked why he did it, he replies: “We’re all murderers. All you need is a bad day and a good reason.” I feel like that sums up the flip AI makes from a WALL-E scenario to a Matrix one.
From the biblical example of Esau giving up his birthright for a hot meal after a long day of hard work, to you and me signing away all kinds of privacy rights just to get next-day delivery: humans seem willing to give away more than they should for the prize of convenience. Across these three films, a fairly obvious progression takes place as humanity gives up involvement, control, and oversight in exchange for easier, more convenient lives. Why take the bin out if the robot can? Why cook my grandson dinner when the robot can? Why run that license plate through the system when the robot can? Why run parliament when…
So we’ve been through the Hollywood-ified story, but I want to leave the world of film and step into the real world for a moment (if you can bear it). My boss is a man in his mid-thirties who runs a business alongside another man in his early thirties. They employ eight people. This week, we sat in a brainstorming meeting about how we could streamline the company, make more revenue, organise our admin a bit, things of that nature. Naturally, his phone came out and he asked ChatGPT to think of more ideas for us. Some were good. Every week he takes his phone out and asks ChatGPT for help. Whenever he receives an answer, whatever the answer, he believes it. I’ve listened to its frighteningly lifelike voice mode. It’s like talking to a person. Like someone who knows what they’re talking about. Someone I can trust. If the progression takes a turn at I, Robot crescent and walks us down WALL-E lane before ending up at Matrix cul-de-sac, this very much feels like the moment we begin the walk.
I’m not necessarily against AI, though I do resent it coming after my career (or maybe I should resent people for making that okay). I do wonder, though, what our future looks like when we all just believe whatever it tells us. Aren’t these systems built by companies? The very same companies that violate privacy laws? Or, better yet, do what they like while the laws are catching up? How easily shaped are our individual libraries of facts, systems of beliefs, minds and hearts, and by whom? Or, scarier, by what?
This journal will, for the moment at least, remain AI-free. Except for spellcheck. I still can’t do neccasarily. Nessacarily? Necassarily? Necasseraly? Nessacarily? Whatever. If you enjoyed this journal, please subscribe.