Can artificial intelligence be conscious?
Here’s what cognitive science has to say about it
By Sonora Slater
“As long as we lean into one of our most human traits — our ability to learn and adapt — […] we’ll be just fine.”
Humans naturally want to assume that we are simply better or different somehow than artificial intelligence. But when ChatGPT can pass law exams, provide advice, brainstorm creative ideas at top speed and avoid typical human issues like, say, writer’s block, it raises the question: at what point of technological improvement will the human mind and the machine become indistinguishable?
Well, it turns out this isn’t a new question — cognitive scientists have been asking it, and similar ones about the nature of the mind, for decades.
Cognitive science is the science of the mind, exploring how those with minds take in information, how they use that information to make decisions and what it is like to experience the positive or negative consequences of those decisions. It sprang out of a desire to make a science out of philosophy in a way that is observable and measurable, but it’s a relatively new field, and the best way to do this is still a topic of debate.
Previously, two main theories were popular. The first is behaviorism, which suggests that the mind can be understood simply by observing behavioral responses to various situations. But behaviorism has shortcomings.
For one thing, people act in different ways in the same situations. You might be terrified to find yourself onstage to give a speech, while your friend would be excited to bask in the spotlight. Also, behavior is often influenced by what is going to happen in the future, not only by what is currently happening. You might currently want to hang out with your friends instead of working on your homework, but you sit down to finish your physics problem set anyway because you know that if you don’t, you’ll receive a bad grade.
The other popular theory is identity theory, which suggests that the mind is the brain. But when it comes to artificial intelligence, which doesn’t have a brain as we currently define one, does that mean these systems are automatically incapable of having minds?
These theories have slowly been displaced in favor of the theory of functionalism, although the “correct” conceptual framework is still highly debated. Functionalism suggests that “stimuli” in situations or in our environment affect our internal mental states, which then result in a behavioral response. Because this theory doesn’t rely on the existence of a brain, it opens the door for things without brains — including technology such as AI — to be defined as having a mind.
If this is true, then how do the recent advances in AI, especially via ChatGPT, fit into the cognitive science perception of what a mind is? If these systems can fit into the functionalist view of a mind, is there anything that differentiates us from them?
Jonathan Dorsey, a philosopher and lecturer at UC Davis, argued that our questions about the minds of artificial systems, and their similarity to our own, have a lot to do with the concept of consciousness.
“Whether you think the mind is the brain, or you think the mind is behavior, or even if you try to characterize the brain functionally, consciousness is going to be something that’s very difficult to account for,” Dorsey said. “When you get into these topics of artificial intelligence, you find that you run up against a lot of the same questions. What is a mind? Is consciousness a necessary condition for having a mind? If it is, can you have an artificial consciousness?”
Consciousness is the capacity for subjective experience and the awareness of our own thoughts. It is an awareness of internal and external existence. But it is not unique to humans: your pet dog, cat or fish, or the squirrel that stalks you on the bike path, is also conscious of its (often questionable) decisions.
Dorsey referenced philosopher Thomas Nagel’s paper “What Is It Like to Be a Bat?,” which explored the idea that no matter how much we learn about bats and their brains, we still won’t know what it’s like to have their conscious experience. We won’t truly know what it is like to be a bat, just as they won’t know what it is like to be human.
“You can study their physiology, but you’re still not going to know what it’s like to be them,” Dorsey said.
This might become the case for artificial systems as well. As it stands, we don’t really understand what it is about our brains that gives us consciousness (“That’s kind of job security for me,” Dorsey said with a laugh). But even if we figure that out and manage to recreate it in an artificial system, there will be something it is like to be an AI system that is different from what it is like to be human. And that difference is probably something we will never fully understand.
The instinct to hold on to some concrete thing that makes humans intrinsically different, and necessary to our world, has roots in economic concerns about job security, in psychology and even in religion. But at the end of the day, people just seem to be worried that they will somehow become unnecessary.
“People worried about ATMs when they first came out, because they worried they would replace teller jobs,” Dorsey said. “But it turns out it just changed the nature of the work. As long as we lean into one of our most human traits — our ability to learn and adapt — the introduction of AI is something we can adjust to, and we’ll be just fine.”