A YouTube video of the October 2023 annular solar eclipse shows a small square opening up on the surface of the moon, and for a millisecond, sunlight streams through as if the moon had been punctured. True believers theorize that this is proof that reality is a simulation and that we are living in a matrix. The hole in the moon, they say, is a glitch in an otherwise mostly seamless deception generated by nefarious technology. For most of us, this purported deception isn’t too awful—we have clean water, good food and warm homes—and if the believers are right, on the other side of the illusion is a planet destroyed by nuclear war, cast into perpetual darkness and ruled by tentacle-dangling robots. I’ll take the blue pill, Morpheus, thank you very much.
On this side of the Matrix, we already have enough trouble with machines—mostly of the damn-thing-isn’t-working variety—without having to worry about a war of the worlds against dictatorial robots. Among our troubles are rising concerns about artificial intelligence, or AI: computer systems designed to carry out functions that typically require the perception and decision making of a person. The worry, it seems, is a kind of replacement conspiracy theory, in which people fear that we will be replaced by machines. Machine replacement of people has been going on since the advent of the cotton gin, and, like the cotton gin, it sometimes has profound implications for human societies. In these cases, it is often not so much the machines we worry about as the people who control them.
As I understand it, AI works by recognizing patterns in very large data sets, which then allows it to make predictions about any new pattern it encounters. For example, a computer program, or neural network, that has studied millions of faces may then be used to identify your face as you rush to get on your plane at the airport. But AI is only as smart as the information it is trained on, and already we have seen that its learning reflects the limitations of the data given to it and the biases of its programmers. I do not belittle concerns about the way AI is used to make decisions, for example, in our criminal justice system or in our hospitals. I applaud recent efforts by the Biden administration to set up guidelines for the use of AI; however, it is not artificial intelligence, per se, that worries me, but human ignorance.