By Howard Gardner
I was not around when the telegraph, the telephone, the automobile, or the radio were launched, but I do remember the early days of television, computers, and the internet. In each case, many observers felt that these inventions would be disruptive: they would change the global landscape, in some ways for the better, but in other ways that seemed problematic or even harmful.
ChatGPT falls squarely into that pattern—solving some problems, creating others. The difference is that its power and its possible effects—positive, negative, indeterminate—can be seen almost instantly. Indeed, anyone with access to the internet (and, more recently, a small amount of money) can witness those effects directly and forcefully.
I have no special knowledge or expertise. Indeed, as I type these words at an ancient desktop computer, I have yet to play with ChatGPT myself. But no need—everyone around me has experimented, and many have shared their experiences with me.*
That said, I do have some initial thoughts. They draw on three areas where I claim modest expertise: education, cognitive psychology, and the study of intelligence.
Education
Without doubt, students (and not just students) will draw on ChatGPT frequently and for many purposes. That’s fine—no point in outlawing it. The problem arises when work ostensibly done by the student (or even by a group of students) has actually been accomplished simply by giving directions to ChatGPT.
We know that in the United States cheating by students is rampant and, as documented by Wendy Fischman and me (link), most individuals at the college level don’t even see cheating as a significant problem—at least compared to other challenges on campus, such as mental health or interpersonal conflict. But unless we drop any notion of accountability from our educational system, we need to define situations and assessments where students need to submit their own work and not work simply executed by ChatGPT.
The obvious solutions: test students in environments where they are not allowed to use any electronics (or where electronics are disabled); rely only on oral, face-to-face testing; or have students sign sworn pledges, with automatic and severe consequences if they do not honor that commitment. Educators could also acknowledge that students will be tempted to use ChatGPT and build its use into carefully curated assignments.
The better solution: Create environments where cheating is seen as wrong and not tolerated and where assignments or projects are carried out in co-constructive ways. Two helpful examples:
US colleges like Haverford, which have a long and storied history of student honesty;
US colleges like Olin College of Engineering, where much of the work is cooperative group work, and any effort to undermine that joint work is identified and censured.
Wendy Fischman and colleagues on our research team are currently investigating how colleges and universities can prioritize ethics.
There has also been handwringing over whether students will lose the ability to compose their own writing, much as there were fears that students would lose the ability to do math when hand-held calculators became available. Educators will have to decide which competences truly matter and which can be allowed to disappear. With respect to cursive handwriting, there are clearly alternative perspectives; on the other hand, I doubt that any responsible educator would endorse illiteracy, innumeracy, or agraphia.
Cognitive Psychology/Cognitive Science
I refer here to the amalgam of scholarly disciplines (launched in the middle of the 20th century) that seek to understand cognition, particularly those forms of cognition exhibited by adults or by children in the course of development. Initially focused on the developing individual (e.g., Jean Piaget) or the functioning adult (e.g., Amos Tversky and Daniel Kahneman), the field has also come to explore animal cognition and human-computer interaction.
For hundreds of years, novelists, science fiction authors, and creators in other media have sought to clarify the nature of cognition that is not exclusive to humans—in particular, creatures from outer space or ones created by engineers (e.g., the machines or organisms envisioned by René Descartes, Julien Offray de La Mettrie, Lady Lovelace, or Goethe’s Faust).
This discussion quickened in the middle of the 20th century, as epistemological tensions arose between the behaviorists (think B. F. Skinner, as well as his predecessors in Russian and American psychology) and the cognitivists (think Noam Chomsky, but also Herbert Simon and Jerome Bruner).
If one takes a strong behaviorist position, there is no interesting difference between human beings and ChatGPT. So long as these “entities” produce the same responses when given a certain stimulus, they are, to all intents and purposes, equivalent. And indeed, in his utopian novel Walden Two, Skinner describes a world that is completely governed by certain stimuli and an appropriate schedule of reinforcement. All done and explained!
In sharp contrast, Chomsky (like his predecessors, notably René Descartes and Wilhelm von Humboldt) claims that human cognition is unique—that it differs qualitatively and inevitably from cognition in other species, as well as in artifacts like computers and computational systems. Among the differentiating factors are human evolutionary history, the organization of the nervous system, the way that it develops in various human cultures, and certain human features (like purpose, curiosity, creativity, complex feelings, and emotions) which may be simulated, but are not genuine unless displayed by Homo sapiens.
A rough analogy: Plant-based hamburgers may be indistinguishable in taste from those made from animal meat—but that does not mean that the two burgers are identical or can be thought of as identical. (Or consider The Mona Lisa by Leonardo vs. a simulation from a computer system—a non-fungible Leonardo token, as it were.)
Intelligence(s)
If you think that “cognition” is a disputed term, try “intelligence”!
First defined and operationalized in psychology by test maker Alfred Binet in the early 20th century, “intelligence” is now seen as the province of psychometricians—IQ test makers can identify, test for, and decide how intelligent each individual is (and I am confident ChatGPT would do very well on most conventional IQ tests—perhaps performing at genius level!).
Without intending to be disruptive, forty years ago, I challenged this hegemony: I put forth a Theory of Multiple Intelligences. (If I had called my study “An Examination of Human Talents,” no one would have objected and the theory would not have gotten much publicity; using the term intelligences made people either love it or hate it.) But in any case, much of the world now accepts the claim that intellect is not singular. Indeed, a human being may be high in linguistic or mathematical intelligence, or both (the key to scoring well on a conventional intelligence test); but that person can be unpredictably skilled or unskilled with reference to other intelligences (e.g., spatial, musical, bodily, naturalistic, interpersonal, or intrapersonal).
So what of the multiple intelligences of ChatGPT? Although it has been suggested that MI theory could be used as a framework to evaluate AI capabilities (link), one could say that some forms of intelligence are inaccessible to a computational system.
I would nominate bodily-kinesthetic intelligence as a prime example (what would it mean for a computational system to dance, weave, or play hockey?). A few intelligences are tailor-made for ChatGPT. Certainly, however achieved, such systems can score at the top in any assessment of linguistic or logical-mathematical intelligences.
Other mappings are more controversial. ChatGPT may well exhibit musical intelligence, but presumably by paths quite different from those used by human beings (e.g., producing skilled string playing, but not through the use of fingers and ears); or it may exhibit interpersonal intelligence by making deductions from previous statements rather than by observing the person “live” (see Shinri Furuzawa’s and my posts on the intelligences of diplomacy, here and here).
And suppose there is such a thing as existential intelligence (I have called this “the intelligence of big questions”). Attributing it to ChatGPT would make no sense to a cognitive or personality psychologist: what would it mean for a computational system to feel “awe” or to ponder big questions, periodically or perennially? But in a behaviorist, Skinnerian sense, the attribution would be completely acceptable. If you ask, “What is love?” and then, a few minutes later, “What is the relationship between love and passion?” that is enough, behaviorally, to qualify as exhibiting existential intelligence.
Moral of the Story
ChatGPT is not the first invention to raise these existential(!) questions, and it won’t be the last. But at long last, these questions will not be ones just for scientists, psychologists, ministers, and members of the chattering classes. They will involve all of us on the planet, and may cause us to rethink who we are, and who we have been, as a species.
A Final Thought
One of the most formative experiences of my career was the opportunity to work under the direction of cognitive psychologist Jerome Bruner, on a social studies curriculum for fifth graders called “Man: A Course of Study.” The curriculum was organized around three questions:
· What makes human beings human?
· How did they get to be that way?
· How can they be made more so?
If we were updating this curriculum of the 1960s, I might add a fourth question:
What does it mean to be a human being at the end of the Anthropocene era?
And a fifth question:
What comes next?
NOTE:
*(For example, Shinri Furuzawa has asked ChatGPT to explain MI theory in the style of Shakespeare, and Jonathan Frost thought of asking ChatGPT to come up with an MI Assessment.)
APPRECIATION:
For very useful comments and suggestions on this essay, I thank Shinri Furuzawa, my valued colleague, and my wife, Ellen Winner.