What is consciousness? This question has perplexed great thinkers for millennia and, like many of the most compelling philosophical mysteries, it’s been explored throughout Star Trek history, often when our favorite androids or holographic programs are fighting for their right to freedom.
My favorite example is in the 1989 Star Trek: The Next Generation episode ‘The Measure of a Man’ (S2, Ep9). Smug cyberneticist Commander Bruce Maddox (Brian Brophy) wants to disassemble Lt. Commander Data (Brent Spiner) so he can learn how Data was built and create more androids like him. Data refuses and Captain Picard (Patrick Stewart) supports him, calling for a hearing to argue that Data isn’t Starfleet property.
The hearing that follows is Star Trek at its best. Excellent writing from TNG writer and editor Melinda M. Snodgrass and a powerful performance from the classically trained Patrick Stewart make for skin-tingling viewing; this scene has been studied at Harvard Law School as an example of great question-asking. Like all of the greatest Trek-isodes, this isn’t a story solely about whether or not Maddox can disassemble Data. Under the surface, themes of freedom, personhood, sentience, and consciousness run deep.
“Data is not sentient,” Maddox insists throughout the hearing. But Picard astutely proves that Data fulfills all of Maddox’s own criteria: intelligence, self-awareness, and consciousness. Captain Louvois (Amanda McBroom), the judge, ultimately rules that Data gets to choose. Does this mean he is conscious? Is consciousness even something we can understand, see, and measure? ‘The Measure of a Man’ doesn’t attempt to answer these questions, but it does focus on the implications of stripping Data, and other sentient androids like him, of their freedoms. “Are you prepared to condemn him and all who come after him to servitude and slavery?” Picard asks Maddox.
Back on Earth in 2022, people have conflicting opinions. Some believe attempting to pin down the concepts of sentience and consciousness is like holding onto water—impossible. Others think that understanding consciousness is a timely and worthwhile debate. In June 2022, ex-Google engineer Blake Lemoine made headlines when he claimed that the A.I. system he’d been working on, called LaMDA, was conscious and sentient. Most experts agree that Lemoine’s claims are overblown. But they’ve prompted more conversations about A.I. consciousness, especially how we might begin to test for it and measure it as A.I. systems like LaMDA continue to advance.
Why We Need Illusions of Consciousness
Can we measure consciousness? The answer depends on who you speak to. We asked Dr. Paul Smart, a senior research fellow in Electronics and Computer Science at the University of Southampton. He explores philosophical issues around cognition as part of his research and says we must begin by considering why we’re conscious.
We’re social animals that need to know what’s going on in each other’s minds, and language enables us to communicate. “The function of consciousness is, in short, to provide narrative fodder for linguistic exchange,” Smart tells The Companion. He likens the conscious mind to a movie. “The cinematic rendering of reality is simply the brain’s attempt to provide a common interface—a common protocol—between you and me,” he says.
But what is consciousness? And how do we know if we truly possess it? The short answer is we don’t, which is why Smart believes we need to change how we think about it. “There is no such thing as consciousness,” he says. Instead, he believes what you and I think of as consciousness is an illusion. “You no doubt believe you are experiencing something right now, but merely believing something is not the same as knowing something,” Smart says. Remember in The Matrix (1999) when Cypher acknowledges the steak he’s eating isn’t real? It’s an illusion, but he eats it anyway. Because “ignorance is bliss”, right?
“Merely believing that something is real is not enough for it to actually be real,” Smart reminds us. To put this into perspective, he encourages us to consider the way some people believe in God. They think God exists, but this is just a belief. The “supernatural phenomenon” of consciousness is the same, a belief.
To create consciousness, it makes sense to look for it in ourselves and build a version of what we find. But if consciousness is a belief, not a place or a thing or a state, there’s nothing to be found—even if you opened up your own brain. “All we’d see is a load of ‘wires’ communicating bioelectric signals,” Smart says. “We see a machine, not God, nor consciousness. Nothing mystical, magical, or mysterious. Just circuits and stuff.” Smart says all we have is our subjective sense that we’re conscious, and the knowledge that this must, somehow, be tied to “the whirrings and grindings of the biological brain.”
Does that mean creating a conscious machine, like Data, is a futile mission? Surprisingly, no. Like you and me, a conscious machine would only have to think it’s conscious. “One that believes it is seeing, feeling, imagining, reminiscing, or responding to someone’s questions about consciousness,” he says. To do that, we’d need to understand what experiences are responsible for our illusion and recreate them. “The machine opens its eyes, and there is light. The movie plays,” Smart says. “The machine feels, or at least it thinks it does.” Although we can’t create a machine we know for sure is conscious, we may be able to build one that’s prone to the same illusion of consciousness that we are.