Artificial Experience
We are eager to ask, “Is this computer human?” but reluctant to ask, “What does it mean to be human?”
My teaching credential is in English, and English was the subject of my first 10 years of teaching. I then spent 10 years working in technology, and for the last 8 years I’ve taught Computer Science, a discipline defined by problem-solving. Humans tend to think of computers as unknowable complexities of digital magic, but they aren’t. Computers are just logic machines. A computer is a box of very many, very fast switches. A computer can only understand 1s and 0s, the two positions of a switch, On or Off, True or False. Computers are only complicated. Humans are complex.
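The “box of switches” idea can be shown in a few lines of Python (a toy illustration, not a claim about how any particular machine lays out its memory): to a computer, the letter A is nothing but eight switch positions.

```python
# The letter "A" as eight switches: each digit is one switch, On (1) or Off (0).
code_point = ord("A")                  # its standard character code: 65
switches = format(code_point, "08b")   # the same value written as switch positions
print(switches)                        # 01000001

# Reading the switch positions back gives the letter again.
print(chr(int(switches, 2)))           # A
```

Everything a computer does — text, images, rocket launches — reduces to patterns of these two positions.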
Complications are solved by machines: launching a rocket, saving for your retirement, counting thousands of jelly beans in a jar. Humans are not particularly good with complications. Computers make solving complicated problems possible, easy even. Humans think this is great.
Complexity is the ooze of living. Should I buy this beer if I can’t afford rent? Should I buy this yacht because I can? Should I work harder? Do I deserve to be happy?
When humans try to solve problems we have to account for the complex, chimeric, non-binary nature of being alive. We can’t escape the gray areas. Regardless of the childish political extremes we sustain, we live in the gray areas between Trues and Falses.
So I asked a machine, “What will education be like in the future?”
“Virtual and augmented reality technology will also become more prevalent, allowing students to experience immersive learning environments that bring lessons to life.” — ChatGPT
ChatGPT is ingenious code written by inventive and insightful humans that became inventive and insightful without the help of ChatGPT. ChatGPT is ingenious code that scours massive stores of text in search of patterns; patterns like groups of words that appear frequently in the same context. For example, “gear” often appears in the context of “machine” and “joy” often appears in the context of “emotion”. ChatGPT uses these patterns as a sort of template to generate resonant responses. This is not intelligence. This is computation.
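That pattern-matching can be sketched in miniature. The toy model below is a deliberately crude illustration — real language models are incomparably larger and more sophisticated — but the principle is the same: count which words appear after which in a store of text, then “generate” by emitting the most frequent continuation. Computation, not intelligence.

```python
from collections import defaultdict, Counter

# A toy stand-in for the "massive stores of text".
corpus = (
    "the gear turns the machine "
    "the machine needs a gear "
    "joy is an emotion "
    "an emotion like joy"
).split()

# Count which word follows which: the "patterns" of co-occurrence.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def generate(start, length=4):
    """Generate text by always picking the most frequent next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # the machine the machine the
```

The output is fluent-looking and entirely hollow: the model has no idea what a machine or a gear is, only which words tend to sit next to each other.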
Artificial Intelligence is not an evolution of humanity, it is an invention of humanity. Artificial intelligence is intended, designed, built, owned, and sold by humans. These humans are now asking us to give up on understanding the complex condition of being human in favor of living a simplified analogy for life in a reductive digital simulation. We are being asked to view reality in the radically simplified terms that machines can understand. Yes or no, true or false, right or wrong. Those who own these artificial parable machines are asking us to live reductive binary lives so that we can be algorithmically manipulated and economically exploited for their benefit. (Stuart Russell, Human Compatible)
Artificial Intelligence can only look backwards in time through pools of data it can’t understand. Artificial Intelligence is to meaning as a hammer is to a building. Meaning is made by humans when we assimilate layers of experiences. Machines can record facts about layers of dirt in an exposed cliff left behind by millennia, or rings in a Redwood tree that are wide during abundance and thin during drought, or layers of wallpaper and paint describing time passing in a Home.
Humans can see life in the interstitial spaces between events. Humans can feel life. We are haunted by living. A machine can record that an event occurred and record many of the parameters that describe that event but only as dots on a timeline, separate, distinct, no meaningful whole. Meaning is made by humans.
Artificial Experience, at its very best, is an immersive but limited imitation of life, a blue pill, an Education roofie. This is antithetical to experiencing life, the opposite of experience as education, the opposite of experience in education. ChatGPT’s response to “What is the future of education?” is not an imagining of a generative future. It is a projection of patterns from the past.
In 1970, Seymour Papert, possibly the world’s first K-12 Computer Science teacher, co-founded the MIT Artificial Intelligence Laboratory with Marvin Minsky.
“We started with a big ‘cosmic question’: Can we make a machine to rival human intelligence? Can we make a machine so we can understand intelligence in general? But AI [Artificial Intelligence] was a victim of its own worldly success. People discovered you could make computer programs so robots could assemble cars. Robots could do accounting!” — Seymour Papert
Papert lamented the loss of the big idea, the giving up on understanding what human intelligence is for the smaller but profitable idea of process automation. What this meant for education is that when computers were brought into the classroom, they were used to automate teaching, to hammer students like nails, rather than to facilitate experience, inquiry, and problem-solving as Papert hoped they would.
“Computer scientists weren’t supposed to bring computers into classrooms. They were supposed to bring computer science to children in classrooms.”
Several years ago, Robotics and Artificial Intelligence scientists were stuck. They could make a robotic arm that could throw a ball, but the arm had no idea what a ball was. Understanding ball-ness doesn’t come from throwing balls but from constructing the idea of ball. Robots needed to be able to discover ball-ness in order to apply the utility of ball throwing.
Human children are GREAT at discovering ball-ness, so massive resources were poured into early childhood learning so that we could build computers that can discover like children do. Now we want those computers, artificial intelligences, to teach real children to learn artificially. We are being asked to teach children to contort themselves into the shape of machines, because only then can machines be human, and once machines are human, those who own and control the machines own and control the future. It’s a very old story.