Are AI Devices Human?

As humans march toward creating more sophisticated robotics, computers, and AI technology, the question has always lurked: Are these techno-devices human? Will they ever be? 

So important (and likely) a question was this that mathematician and computer scientist Alan Turing devised the Turing test to determine whether a machine could exhibit intelligent behavior indistinguishable from that of a human. He introduced his “imitation game” in his 1950 paper "Computing Machinery and Intelligence."

"Turing held, most fully and confidently, that computers would, in time, be programmed to acquire abilities rivaling human intelligence."

In Turing's test, a human being and a computer would each be interrogated via textual messages; if the interrogator could not tell them apart by questioning, it would be reasonable to call the computer intelligent, because we judge other people's intelligence from external observation in just this way.

The importance of language as the determining feature of humanity has been explored in science fiction (Galatea 2.2 by Richard Powers) and philosophy (Maimonides, as far back as the Middle Ages) alike. Distinguishing old-fashioned humans from the synthetic variety has also been a theme of other sci-fi books (Philip K. Dick's Do Androids Dream of Electric Sheep?).

Maimonides’ concept of humanity is more nuanced than merely relying on speech as the feature that distinguishes humans. According to Maimonides scholar Dr. David Gillis, "what really distinguishes human beings…is the possession of an intellect capable of abstract thought.” [1] By this definition, ChatGPT fails the “humanity” test, at least so far. Nothing produced by ChatGPT is abstract (or original); rather, it is a compilation or parroting of thoughts reduced to concrete formulations.

However, the Greek translation of Maimonides’ Arabic works provides a different version. Here, per Maimonides, the power or faculty of speech is used interchangeably with intellect [2]. Perhaps this is a translation issue, as the Greek ‘logos’ can mean both “word” and “reasoned logic.” In any event, in this rendition, ChatGPT clearly passes the human-barrier test.

Or perhaps our own use of language determines the outcome of the ChatGPT-human question? The Greek language itself seems to signify the identity of speech with reasoned logic. Indeed, an individual’s native language can shape their thinking, reflecting differences in brain wiring and structure among speakers of different native languages.

Of course, the AI machines all speak/think in the same language, and likely most of their programmers do, too. But the philosophers, lawyers, and laity analyzing the issue come from different linguistic backgrounds, with brain wiring that differs both structurally and in connectivity. Would a native Greek speaker look at the question of the humanity of AI the same way as an American? So far as I know, that question hasn’t been raised.

By most accounts, ChatGPT and its analogs fulfill Turing’s “humanness” test, at least for intelligence and as judged by those whose native language is English. Our own Dr. Josh Bloom depicted its use in chemistry. Dr. Dinerstein showed how it could misrepresent or overstate, just like humans. Legal evidence professor Roger Park offered ChatGPT a chance at a hypothetical law school exam. It didn’t get it right, but it surely sounded good.

“It’s either 20 years out of date, or very prescient in predicting the future. AI will probably improve rapidly and soon produce answers as good as the flesh-and-blood variety.”

- Professor Park [3]

Is ChatGPT an “individual” under the law?

So, ChatGPT has passed the Turing test, but does that make ChatGPT or related AI human for legal purposes? Legal philosophers have claimed that AI algorithms could be considered human, even without a physical corpus, just as corporations are treated as individuals under the law. But that analysis reeks of unworldliness. Corporations have money. If you sue them, they can pay. If you sue ChatGPT for giving bad advice, who pays? Its creators, i.e., regular, good old-fashioned humans. AI isn’t going to be legally human until it can open a bank account and be made to pay a damage award in a lawsuit.

But that’s not the real question. The real question is: can we shut it off? Extinguish it? Kill it? Or would that be the murder of a humanoid creature, as we witnessed when Dave turned off HAL in 2001: A Space Odyssey?

Some say that consciousness, not speech, is what makes us human. After all, a recent study from Tel Aviv University showed that plants make sounds commensurate with communication, and surely dolphins and other creatures communicate with each other. But other studies show that plants exhibit signs of what we would consider, if not self-consciousness, at least conscious awareness of the universe.

Maybe consciousness isn’t the sine qua non of personhood. Certainly, people in a coma or even under anesthesia are not conscious in the conventional sense of the word. And even if conscious reactions can be elicited from some, that response diminishes asymptotically to zero in those in a deep coma.

And then we have the question of chimeras, human-animal creations. Part-monkey, part-human embryos have been created with the goal of producing organ donors. [4] While their neural systems haven’t yet developed to rival humans’, who’s to say it won’t be done? [5] What would one do with a creature like Margaret Atwood’s fictional pigoon, a transgenic animal originally bred for its replacement organs that assumes an independent identity in her fiction, surely conscious but unable to communicate with humans?

The rigid demarcation between human and animal, or human and computer, demands another definition of being human before the law, one under which it becomes a legal, moral, ethical, or religious “crime” to “take their life.”

How does ChatGPT differ from a human in conversation? Maybe in its responsive nature. The program answers questions. It doesn’t initiate them, or create a tome or a legal hypothetical on its own. Perhaps the imaginative and creative power of humans differentiates us from all other creatures, beings, and constructs.

Maimonides also puts great emphasis on human friendships or relationships.

So, until ChatGPT writes the next War and Peace on its own, perhaps we’re still safe in saying it might be a smart machine, but it isn’t a human one. And yes, you can still pull the plug.

But maybe once these Chat-creatures become friends with other ChatGPTs, we might want to reconsider. On the other hand, some doomsayers are predicting the dangerous havoc AI might create. In that case, killing the AI creature might be considered self-defense.

 

[1] Laws of the Foundations of the Torah 4:8

[2] Maimonides, Guide to the Perplexed, translated by Michael Schwartz

[3] Evidence Listserv, Jan. 21, 2023

[4] “The creation of part-human, part-monkey embryos will discomfit many, but research should be encouraged,” The Economist. The process is ethically fraught, and in America, federal funding of such research is forbidden.

[5] For now, there is an ethical moratorium on fostering advances using organ structures assembled from human pluripotent stem cells, called simbryos, an ethical dilemma of its own.
