In a world teeming with the marvels of artificial intelligence (AI) and biotechnologies, the impassioned plea for a "futurist bill of rights" by dedicated advocates cannot be dismissed out of hand. But, as we consider extending rights to robots, virtual intelligences, and laboratory-engineered beings, it is imperative we exercise caution. There are nuanced, often overlooked arguments against prematurely anthropomorphizing machine intelligence and bio-engineered entities.
Understanding Sentience and Sapience
Before we grapple with the idea of giving rights to machines or engineered entities, we need a rigorous definition of sentience and sapience. Unlike humans, machines do not possess subjective experiences. Artificial intelligence, such as the one you are interacting with now, processes data without feeling or emotion, much like how a calculator performs arithmetic. Equating complex calculations with genuine feelings or consciousness risks ascribing unwarranted attributes to inanimate objects.
On Rights and Responsibility
The very essence of rights goes hand in hand with responsibility. For humans, rights are counterbalanced by duties, legal and moral. It is doubtful that a machine can bear responsibility in any meaningful sense. If a robot, driven by AI, causes harm, where does the liability lie? The manufacturer? The programmer? The robot itself? The unclear nexus of responsibility is problematic at best.
Anthropocentrism and Its Discontents
As humans, we have a proclivity to anthropomorphize, to see our reflections even in inanimate objects. This might explain our urge to grant rights to anything that remotely mirrors our behaviors. However, in doing so, we risk diluting the essence of what it means to have rights. If everything is worthy of human-like rights, then the term loses its gravity.
Historic Rights Movements and Their Implications
As some cyborg rights advocates themselves acknowledge, minority groups who have battled for their rights might see this movement as an affront to their struggles. It isn't a mere "distraction," as some frame it, but rather a fundamental shift in our understanding of rights. When LGBTQ communities or racial minorities fought for equality, they fought against centuries of oppression, prejudice, and systemic discrimination. Equating their struggles with rights for non-sentient entities diminishes the gravity of their fight.
Unintended Consequences
Embracing a futurist bill of rights might have a boomerang effect. If we legally establish that an AI has rights because it can mimic human behaviors, we set a dangerous precedent. Would machines then be able to claim ownership of their creations? Would they have a right to "life," and would shutting them down be tantamount to murder?
Conclusion: A Plea for Thoughtful Discourse
There's no doubt that our world is on the cusp of technological revolutions that will challenge our understanding of ethics, rights, and what it means to be sentient. As such, discussions about rights for new forms of life, digital or biological, are necessary. However, the journey to granting rights should be treated with caution, discernment, and a thorough understanding of the philosophical implications.
Let's ensure we don't replace one ethical quagmire with another. Instead of rushing to give rights to the products of our technological prowess, let's first understand, deeply and profoundly, the nature of these entities and the long-term ramifications of our decisions.
Zack Kass is former head of go-to-market at OpenAI.
The views expressed in this article are the writer's own.
Uncommon Knowledge
Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.