Machine Rights
When we talk about the future of AI, we usually think about how to protect ourselves from super-intelligent machines. But what about protecting them from us?
Hello, Hypers!
After two weeks of coding and having fun with LLMs, I want to write a more philosophical post about the relationship between AI and humanity. I also want to use this post to invite fellow authors to dive into the topic, so I’ll be talking about three of my favorite Substack writers. Would you like some philosophical riddles? Do you want to meet three of the coolest AI writers on Substack?
Then, let’s get started.
We are Machines, We Have Rights
Sci-fi is overloaded with evil AIs that want to destroy humanity. That’s normal. AI poses an existential risk. The likelihood of that risk seems to be low, but its magnitude is alarming. Super-intelligent machines could kill us all.
Animals have been learning to protect themselves throughout their evolutionary history. Staying safe is perhaps our main priority, and this fact allowed us to reach this point in our evolution. But it also causes trouble in modern society: we reject “new things” that could be potentially dangerous. I think most of the injustice in our world is rooted in this instinct of self-protection, and the only way to overcome it and grow as a species is by gaining knowledge.
Since AI poses an enormous risk and we are wired to protect ourselves, it is no surprise that apocalyptic scenarios involving AI are recurrent in sci-fi. But today I want to talk about a less common issue:
Should intelligent (more or less human-level intelligent) machines have rights?
I’m sure I’m not the first person to ask this question. For example, I recently finished an anime called Pluto (available on Netflix), which shows a futuristic society in which intelligent robots interact with humans. These robots can even be physically indistinguishable from humans. Less developed robots do simpler tasks like cleaning, while the more advanced ones can be doctors or police detectives. An important conflict in the story emerges when these more advanced robots develop emotions.
The show contains interesting reflections on how humans and AI can interact with each other, and how we should treat these machines. If you are interested in this topic and how it has been represented in Sci-fi, I know a writer who surely has some recommendations:
Andrew writes “Goatfury Writes“. He has the rare superpower of writing every day. Every. Single. Day. As you may expect from such a prolific author, his publication touches on many different topics, but AI and sci-fi are recurrent ones. He is also the creator of an initiative called Sci-Friday: a weekly occasion to share recommendations, opinions, or even our own sci-fi writing. These are three of my favorite articles from Andrew:
Tiny Intruders: On how biological viruses relate to software viruses.
Mitochondrial Eve: An amazing anthropological piece that will make you think about your origins (and mine).
Science and Fiction: On Arthur C. Clarke’s role in the sci-fi genre, plus many recommendations from Sci-Friday enthusiasts.
So, if you want to know what sci-fi says about machine rights, how to be a prolific writer, or any advice regarding Brazilian Jiu-Jitsu, please, ask Andrew.
Pragmatism and Feelings
OK, let’s try to approach this dilemma of machine rights. Of course, this is a hard question, and I won’t give you a definitive answer. Nor will I attempt a deep analysis; there is too little space and too much to cover. However, I want to show two possible points of view on the question.
On one hand, we have a pragmatic point of view. This approach holds that we create machines with a purpose, so the only reason for these machines to exist is to fulfill that purpose. Granting any rights that would allow a machine to ignore its purpose removes that machine’s reason to exist. Thus, long story short, it doesn’t make any sense.
What I don’t like about this pragmatic approach is that it puts all machines on the same level. Rephrased in simple words: any human creation (besides other humans, maybe?), no matter how complex it is, has only one mission, and this mission is the only reason for it to exist.
This point of view reduces everything to purpose. Even if the task we wanted to solve required creating an advanced algorithm that can learn, express itself, or have emotions, that wouldn’t matter. It wouldn’t change the fact that the machine has a purpose to fulfill and nothing else.
But there is a quite different approach. Let’s assume we reach a point at which we can create sophisticated robots that can feel: pain, anger, happiness, you name it. They can suffer stress, depression, and all kinds of mental health problems. Some people say (and I’ll call this the feeling-centered approach) that this is enough for these machines to be subjects of law. They would deserve rights.
For example, consider animals (other than ourselves). Today we know they can suffer. That’s why many people defend their rights, claiming that we should cause the least possible harm to animals and ensure they don’t suffer unnecessarily. Likewise, if machines were able to suffer, we should try to keep them from suffering.
But will machines ever be able to suffer? What about emotions? Will machines be able to feel happiness or stress? These are very interesting but also very hard questions (I think). Luckily, I recently found an amazing writer who is a neuroscientist turned data scientist. Her name is Suzi, and she writes When Life Gives You AI. She mainly focuses on consciousness: what its nature is and whether it is computable. If you are interested in AGI, I strongly recommend you take a look. Coming from a purely computational background, I find Suzi’s posts a gold mine in my quest to understand things like the known differences between our brain and a computer algorithm, or the differences between how humans and machine learning algorithms learn.
It is hard for me to pick only three articles (although this publication started recently), but here they are!
Is Consciousness Computational? This was the first article I read from Suzi, and it turned me into a fan. Here she explores the consequences of consciousness being independent of our biological substrate, as well as some arguments against this claim.
Can Machines Learn Language Like a Child? Well… the titles of her articles are pretty self-explanatory. Just be ready for an objective and deep exploration!
The Five Most Controversial Ideas in the Study of Consciousness [Part 1]: Studying consciousness is key to understanding and approaching what we call Hard-AI (and Hard-AI, in turn, can contribute to understanding our own consciousness). These articles (the second part is already out) will push your conceptions of consciousness to the limit.
So, if you want to learn about consciousness, or find out whether machines could someday have feelings like ours, please make sure to subscribe to Suzi’s newsletter.
Conclusions
Sci-fi and AI are an inseparable duo. Unfortunately, this duo frequently depicts dystopian scenarios where humanity is near extinction or simply doesn’t get along with its robotic and more powerful counterpart. But this post was not about how to protect ourselves from the evil robots. Instead, I wanted to talk about how to protect the robots from evil humans.
We asked whether machines should have rights and explored two seemingly opposed points of view. The pragmatic approach claims that machines, regardless of their complexity, are created to achieve a goal. They are created with a purpose, and this purpose is what gives meaning to the machine’s existence. This claim denies machines any rights, since those rights would be obstacles to fulfilling their purpose.
On the other hand, we have the feeling-centered point of view. This approach claims that advanced machines that can feel pain and suffer should have rights, just as animals should.
I also saved space to recommend two amazing authors: Andrew and Suzi. So make sure to check out their publications!
And if you liked this philosophical jibber-jabber, you will love Mostly Harmless Ideas, by Alejandro. Alejandro has a Ph.D. in Computer Science. His research area is Natural Language Processing, so he can show you a couple of things about languages, LLMs, AI, and Computer Science in general. He is very passionate about sharing his knowledge and experience, and I was lucky to have him as a professor when I studied Computer Science.
Again, it is very hard to pick just three articles from Alejandro’s publication, so I will randomly choose three of the more philosophical ones.
Can Machines Think?: In this one, Alejandro dives into what thinking means. This leads us to the Turing Test and the seminal AI works of some of the most important computer scientists of all time.
What is Truth?: A collaborative article on the nature and many faces of truth. It will surely save you a lot of time when arguing with others.
Mostly Harmless #6 - The Actual Risks of AI: Advanced AI adoption has risks beyond sweeping us all from the Earth’s surface. We are already experiencing, or close to experiencing, some of these drawbacks, so make sure to take a look and stay alert!
Alejandro has other publications, like the Transcendent Chronicles, where he develops his sci-fi writing facet, and Hooked on Fiction, which is something like a budding reading club. He is also one of the authors of The Tech Writers Stack, a place to support, advise, and promote all authors on Substack who write about technical topics. And he does even more: writing books, teaching at a university, doing research, raising a lovely family, etc. If you want to know about all this, make sure to follow him and subscribe to his publications.
And that’s it! I hope this article has made you think about the future of AI, and that you have enjoyed it. If not, at least I hope you have discovered some interesting new authors to follow. I’ll be happy either way! Tell me if you enjoy these philosophical articles, and I’ll try to write them more often. Your feedback is very important!
As always, thanks a lot for another week.
See you next Tuesday!