Steven Wise’s Nonhuman Rights Project has brought a new habeas corpus petition on behalf of three elephants in the Fresno Zoo. It’s pretty much exactly like his last habeas corpus petition for an elephant in Fresno, except that he has filed it in an appellate court instead of a superior court, and except that he has a lengthy explanation of why the judge’s dismissal of his last case (on the grounds that a California habeas corpus petition requires an allegation that the prisoner is in state custody) was wrong. Oddly, Mr. Wise and his group do not seem to have appealed from the decision.
Anyway, the case makes me wonder about collateral estoppel. One of the requirements for precluding relitigation of an issue is, usually but not always, mutuality. It’s not fair to preclude John Smith from litigating an issue that was previously decided against Jane Doe (assuming the two are strangers). In the new petition, the elephants supposedly bringing the claim are Amahle, Nolwazi, and Mabu. Amahle and Nolwazi were named in the last petition, but Mabu was not. But Vusmusi was.

Let’s do a thought experiment and imagine that the last petition was brought by Vusmusi only, and the new petition is brought by Mabu only. Let’s leave aside the Nonhuman Rights Project’s status as “next friend” suing on behalf of the elephants. And let’s leave aside a technical question about whether rules like res judicata apply in habeas corpus cases. Does it make any sense at all to say that Mabu should be able to relitigate issues that Vusmusi lost? To me, the answer is no, that would be absurd, because Mabu and Vusmusi aren’t really like two people who may have different interests or different motivations or different incentives. The only interests, motivations, or incentives that it makes any sense to think about are Mr. Wise and the NhRP’s. The elephants are black boxes into which the Nonhuman Rights Project projects its ideology.
(If you want to read other posts I’ve written about these absurd cases, which the NhRP has brought across the country, I have an archive of posts you can read.)
I would like to connect the NhRP’s quest to have elephants declared legal persons with Elizabeth Weil’s really interesting profile of linguist Emily M. Bender, who has a lot to say about the new AI chatbots that have captured so much attention recently. She coined the phrase “stochastic parrot,” a program “for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.” A chatbot is a “stochastic parrot” because everyone agrees that it doesn’t understand the output it produces in response to the input we give it. The sane way to look at this is to say, “people are really different from AI chatbots, because we do understand the things we say.” But a lot of smart people seem to draw the opposite conclusion from the new chatbots: that we’re just like them. And they don’t mean that chatbots are just like people because they understand the meaning of what they “say.” They mean that people are just like chatbots because we, too, are stochastic parrots. Here is Sam Altman, OpenAI’s CEO:
On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
I think something similar is likely to be happening with the “elephants are persons” campaign. As I pointed out in a prior elephant post, “you might read a lot of twentieth and twenty-first century philosophy of mind and wonder whether human beings are persons in the philosophical sense!” Although the NhRP’s rhetoric is all about how intelligent elephants are, the likely outcome of blurring the difference between elephants and people, it seems to me, is not that people will come to think that non-human animals are just like people, but that people will come to think that people are just like non-human animals.
Here is a bit of the Q&A that Weil recounts from a talk Bender gave on “Resisting Dehumanization in the Age of AI”:
In the Q&A that followed Bender’s talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns. “Yeah, I wanted to ask the question about why you chose humanization and this character of human, this category of humans, as the sort of framing for all these different ideas that you’re bringing together.” The man did not see humans as all that special. “Listening to your talk, I can’t help but think, you know, there are some humans that are really awful, and so being lumped in with them isn’t so great. We’re the same species, the same biological kind, but who cares? My dog is pretty great. I’m happy to be lumped in with her.”
He wanted to separate “a human, the biological category, from a person or a unit worthy of moral respect.” LLMs, he acknowledged, are not human — yet. But the tech is getting so good so fast. “I wondered, if you could just speak a little more to why you chose a human, humanity, being a human as this sort of framing device for thinking about this, you know, a whole host of different things,” he concluded. “Thanks.”
Bender listened to all this with her head slightly cocked to the right, chewing on her lips. What could she say to that? She argued from first principles. “I think that there is a certain moral respect accorded to anyone who’s human by virtue of being human,” she said. “We see a lot of things going wrong in our present world that have to do with not according humanity to humans.”
The guy did not buy it. “If I could, just very quickly,” he continued. “It might be that 100 percent of humans are worthy of certain levels of moral respect. But I wonder if maybe it’s not because they’re human in the species sense.”
Many far from tech also make this point. Ecologists and animal-personhood advocates argue that we should quit thinking we’re so important in a species sense. We need to live with more humility. We need to accept that we’re creatures among other creatures, matter among other matter. Trees, rivers, whales, atoms, minerals, stars — it’s all important. We are not the bosses here.
The lesson I take is that it’s really hard, in the scientific age, to keep first principles in mind.