Saturday, October 24, 2009

Can robots ever feel pain?

Can robots ever feel pain? Can they ever love or experience sadness? These things are not possible today, but they might become possible in the future, provided they're only a matter of technical engineering. So the question in this article is: are they only a matter of technical engineering?

There is an explanatory gap between the subjective sensations in our minds and the objective nature of physical reality, but is that gap merely physical in nature, or is there an actual metaphysical difference between them? In other words, is the difference only one of degree, or one of their very nature?

The fact that there is even a problem here seems to elude most people; it's hard to realize what it is and even harder to explain it. There is this default position that consciousness is, in principle, knowable and explainable within the framework of modern neurology and that there are no reasons to think otherwise.

So are there good reasons to think otherwise? I'll try to show them by using robots as an example.

Building the robot.

For this to work we need to establish an assumption. Let's presume that everything that makes up a human being can, in principle at least, be constructed in a robot. I think this is the default position, and it basically means that, if we're just chemistry, then there's no reason why the same chemical principles can't apply in a robot. Where there's a group of nerve cells transmitting electrical signals, there can be an electrical wire. Where there's muscle, there can be a small motor. Where there is skin, there can be an organic compound that behaves like skin.

This is obviously an over-simplification but the actual materials and techniques are not important for the purpose of this article. The only important thing to presume is that each part we choose for our robot will maintain the same behavior as the human counterpart.

Building the feeling of pain.

Now let's say that we build this robot in a way that will enable it to experience pain.

Beneath the surface of the robot's skin there are pressure sensors. When pressure increases beyond a certain threshold, where further pressure could be threatening, the sensor (a nerve) emits an electrical signal through a set of wires (the nervous system) connected to a central processing unit (the brain). When the signal reaches the CPU, one procedure is fired so that the head and eyes track the source of the signal to gather more information; at the same time, another procedure is fired that emits a loud noise, and yet another procedure is fired that attempts to withdraw the arm from the source of the pressure.

So by hammering the finger of that poor robot, you would make it turn its head toward you, scream, and then withdraw its arm away from you.
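To make this concrete, here is a minimal sketch in Python of the kind of procedure described above. Every name in it (the sensor callback, the threshold value, the three response procedures) is invented purely for illustration and does not reflect any real robotics API; the point is only that each step is an ordinary mechanical rule mapping an input signal to output behavior.

    # A hypothetical sketch of the "pain procedure" described above.
    # The threshold and the three response procedures are illustrative
    # inventions; every step is a plain rule from signal to behavior.

    PAIN_THRESHOLD = 50.0  # pressure (arbitrary units) beyond which damage is likely


    def on_pressure_signal(location: str, pressure: float) -> None:
        """Fired when a pressure sensor (the 'nerve') signals the CPU (the 'brain')."""
        if pressure > PAIN_THRESHOLD:
            track_source(location)   # turn head and eyes toward the source
            emit_loud_noise()        # "scream"
            withdraw_arm(location)   # pull the arm away from the pressure


    def track_source(location: str) -> None:
        print(f"turning head and eyes toward {location}")


    def emit_loud_noise() -> None:
        print("AAARGH!")  # just a speaker playing a sound


    def withdraw_arm(location: str) -> None:
        print(f"moving arm away from {location}")


    # Hammering the robot's finger:
    on_pressure_signal("left index finger", pressure=80.0)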

The result.

Let's presume that all of this was so perfectly simulated that you could not distinguish between the behavior of that robot and a human being's behavior. The result is that we have just succeeded in simulating pain!

Or have we?

The truth is that we've only succeeded in simulating the behavior of pain, not the feeling of pain, and it's important to distinguish between the two. Is that robot experiencing the feeling of pain? Not at all. Would it be immoral to torture that robot just because it behaves as if it were feeling pain? Now, what if we threatened to hammer its finger again: could we program it to feel fear? Again, it doesn't seem possible. Sure, it could have the logical pathways to recognize potential danger and avoid it by running away. It could effectively SEEM as if it were feeling fear, but it still wouldn't be experiencing the feeling of fear.

The gap.

In a way, those sensations can be said not to exist at all in physical reality, and they don't seem to logically follow from it. Would it have been impossible for life to develop into creatures like that robot? Creatures that, through the course of evolution by natural selection, acquired the programming necessary to survive and replicate, behaving as we behave, yet being devoid of subjective sensations? If we're just biological machinery anyway, is there any difference between us and that description of those robots? There's no difference in behavior, but you'll probably recognize that the robot of our little thought experiment is not a person, since it doesn't feel anything.

So what would it take for the robot to feel? There just doesn't seem to be a way, since all we can construct are behaviors, not feelings. The problem is that what we feel belongs to the subjective realm of our minds, not to objective physical reality. It's surely correlated with that reality, but it's still unexplained by it and, more importantly, seemingly unexplainable by it alone. That is what I mean by there being a difference in the nature of those things, and it would entail that the assumption we started out with is probably wrong.

As a final note, this is one of those subjects that defy our language; it's difficult to find the words to express the problem. If this article was unclear, maybe its main concepts can be better understood by reading this other article: The metaphysics of color, which uses color as an example to distinguish the subjectiveness of our feelings from the objective phenomena in the world that they relate to.

8 comments:

  1. Hello,

    I loved this post, but I did find one small flaw in your logic:

    ~You're assuming that humans really feel pain.~

    At first you may say that assuming otherwise is nonsense, but consider the masochist: a person who perceives pain signals as pleasurable. Some consider this to be a neurological disorder, but it is more likely that such behaviour is learned. After all, it is the same nerves that represent both pain and pleasure, but the transition point between the two sensations varies from person to person. For example, I enjoy showers at temperatures that my wife would consider scalding, but when I was younger I couldn't endure such temperatures, so the enjoyment is learned; the sensation is still the same (with the exception of a small loss in sensitivity), but most of all it is the reaction to the sensation that has changed.

    Therefore, pain may be nothing more than learned behaviour. After all, it is my personal experience that the rougher the parents, the less sensitive the child is to pain. I had a Muay Thai trainer who always used to say, "Pain is just a signal," and it is my experience that this is true and that we can train ourselves to overcome such sensations (some more than others, depending on how ingrained the notion of pain is in the individual).

    So if a robot has the sensory apparatus to observe and translate a signal, who are we to say that such an interpretation is incorrect? It's worth noting here that this is exactly how scientists used to justify animal cruelty: by saying that the animal only emulates an emotional response but cannot actually feel it (this notion stemmed from the idea that humans were special creations of God and therefore were the only beings capable of true feelings).

    You said it, "what we feel is in the subjective realm of our mind." By this very definition, if a robot says it can feel pain and if we have no evidence to suggest otherwise then we must assume (at least until a deeper understanding can be realized) that it is telling the truth. Anything less would be... inhuman.

    Eric Patton from Woodland California

  2. Hi Eric, thank you for your comment.

    You are right in pointing out my assumption, but I think you're misinterpreting it. I am indeed assuming that humans really feel pain; it's an extrapolation that can't be verified in anyone but myself, or by anyone but myself, but I think I am reasonable in making it. You seem to concede that towards the end, even in the case of other animals, and you raise a good point in the process: should we assume that a robot is feeling pain if it behaves as if it's feeling pain, much in the same way that we presume a bull in a bullfight is feeling pain because it behaves as if it's feeling pain?

    I think that, if we accept the impossibility of constructing a feeling out of objective foundations, as is apparently the case with the robot in my post, then the analogy loses its force as an argument that such a robot would actually be experiencing pain. However, a different problem could arise in reverse: why do we assume that the bull feels pain if we're assuming that the robot couldn't? And, for that matter, why assume that other humans feel pain? The animal's behavior, however, is not the only basis for our assumption. My reasoning for thinking that the bull actually experiences pain is based on the fact that I feel pain, coupled with my beliefs about what the bull and I share in our origins, which gives me no reason to think that I'm special. I'm then left with no good reason to think that other animals don't experience pain, given that they behave as if they did. This doesn't hold true for the robot, as the nature of its existence is of a different kind.

    Can these apparently incoherent beliefs be consistent with one another? Intuitively, it seems unlikely, but if we have good reasons to make both assumptions, then they must be consistent.

    Now, I think you were mainly disagreeing with the idea that pain is the same feeling for everyone, that pain feels the same way to everyone, but about that I haven't argued one way or the other. It's still a very interesting discussion, though. Is the masochist feeling something different that is not pain, or is he merely finding pleasure in the feeling of pain? Is the difference between your feeling of pain in a hot shower and your wife's a qualitative difference or merely a quantitative one (desensitization)?

    One problem with that discussion is that objectifying something that is subjective in nature doesn't seem possible, and it might be the wrong approach when discussing something of that nature. I mean, there's simply no way to know whether we're both seeing the same color when we're talking about red. If I saw the color spectrum all reversed (shorter wavelengths as reds and longer wavelengths as violets), we'd never know that we were seeing the world in different colors, as long as we could point to an object and agree on the name of its color. I would still stop at a red traffic light, even though my red was actually your violet. It wouldn't matter. In fact, there might not even be any correct sensation of redness; it could be random, as long as it remained coherent across the spectrum.

  3. But my point was not dependent on an objective, absolute standard of pain. It suffices to assume that humans have a feeling of pain that resides in the subjective realm of the mind, which I find impossible to reduce to some objective physical framework of particles or waves.

    This connects with the part where you said: "So if a robot has the sensory apparatus to observe and translate a signal, who are we to say that such an interpretation is incorrect?" I think this is the central part of my post, and I think you're ignoring the existence of the sensation and concentrating on the behavior alone. How would you define "observing and translating a signal"? My calculator does just that, and yet I don't think it can experience any kind of sensation. It's only behavior we're observing. I was trying to show that difference with my robot: simply interpreting a signal from a mechanical pressure sensor and behaving in response to it doesn't entail the existence of the sensation that we experience, just as a camera that registers the wavelength of what we'd call a red photon doesn't experience "redness".

  4. Tell me, how would it be for a robot to feel pain???

  5. If (and it's a big if) we start from the assumption that such a thing is possible, I don't think we'd ever know whether the robot was feeling pain, or what it would be like for the robot to feel pain. Much in the same way, we don't know what it is like for a fly to feel pain.

  6. I liked the post, but here the critics would say that this argument is circular.

  7. I guess empirical knowledge cannot get us anywhere. Neither will any amount of systematic logical argument (even though one may enter into the most esoteric of topics). We have to understand that we as human beings are very, very, very, very limited. We have imperfect senses (many other creatures on earth are equipped with better senses than ours). We have limited intelligence. The instruments and aids we manufacture using our senses will also be imperfect. We will simply not be able to understand the real nature of the whole universe and the purpose of our existence.

    Can we, by deep speculation, come to understand by ourselves what consciousness is and what its origin is?

    Or would we have to turn to a very reliable, authentic source of knowledge, a perfect authority on these topics? We are all conscious, but what exactly is consciousness?

    Suppose I and a robot are given a book to read.
    What happens when a person reads a book? When a person reads, he becomes aware of various thoughts and ideas corresponding to higher-order abstract properties of the arrangement of ink on the pages. Yet none of these abstract properties actually exists in the book itself, nor would we imagine that the book is conscious of what it records. I may find the content interesting/boring/thrilling/amusing/horrifying, whatever... I may enjoy reading or I may dislike what I read. Will a robot that scans each and every letter, word, or sentence ever be able to experience the book the way I did?

    We have to find out whether there is an absolute authority who knows (or an absolute standard by which we can say) whether the colour you see (or the pain you feel when pinched hard) and the colour I see (or the pain I feel) are one and the same or not.

    Is there such an authority?

    Mayurvg

  8. From an experimental point of view, we could start by modelling Melzack and Wall's Gate Control theory, possibly based on the Britton and Skevington model or the Prince et al. model.
