Researchers in the U.S. and Japan successfully synced up a monkey’s brain with a robot on the other side of the world, and after about an hour of practice the monkey could control the robot’s legs while it walked on a treadmill.
First the scientists trained the monkey to walk on a treadmill while electrodes monitored her brain signals during the activity. Those signals predicted her leg movements well enough to be translated into instructions for a bipedal robot on a similar treadmill in Japan.
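The decoding step described here — turning recorded firing rates into movement commands — is commonly done with something like a linear model fit to paired recordings of neural activity and limb motion. A minimal sketch with simulated data and invented dimensions (this illustrates the general approach, not the actual study’s decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: 500 time steps of firing rates from 10 neurons,
# each paired with a 2-D leg state (say, hip and knee angle).
true_weights = rng.normal(size=(10, 2))   # the "real" rate-to-movement mapping
rates = rng.random((500, 10))             # recorded firing rates (arbitrary units)
leg_state = rates @ true_weights          # leg kinematics driven by those rates

# Fit the decoder: least-squares weights mapping firing rates -> leg state.
weights, *_ = np.linalg.lstsq(rates, leg_state, rcond=None)

# At run time, each new window of firing rates becomes a robot command.
new_rates = rng.random((1, 10))
command = new_rates @ weights             # shape (1, 2): one decoded leg state
```

Because the decoder only needs the statistical mapping between rates and movement, it can keep producing commands even when the monkey herself stops walking — which is exactly what made the stationary-monkey result possible.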
The monkey was shown a live video of the robot’s legs while both walked on their own treadmills, and the monkey’s brain soon ‘tuned in’ to the robot’s leg movements. In fact, when they turned off her treadmill and she stopped walking, she continued to concentrate on the video screen, and sure enough, her neurons kept firing, controlling the robot’s movement. The robot kept walking, controlled from across the seas by the brain of a now-stationary monkey.
The visual feedback (and feedback in the form of treats) had been quickly incorporated into the neural system. If they can do this with humans (and there is no obvious barrier), then people with limb injuries will soon be able to control prosthetics with their intentions. For that matter, people will be able to control any machine built with an appropriate interface. It is a much more refined extension of earlier biofeedback technology (e.g. therapeutic games played via physiological measures where winning requires training yourself to relax).
An interesting side question: would we say the monkey was intentionally controlling the robot’s movement? Did she in some sense understand that she was in control?
If the robot is moving along according to the monkey’s brain signals, let’s say we suddenly make it act contrary to those brain signals (go left when the monkey’s signal directs it right). Will this disturb the monkey, even if she continues getting treats regardless? At the least, that result would suggest she expected the robot to move a certain way, despite no outward cues on which to predict its behavior.
If the subject expected the robot to go a certain direction, would we say the monkey understood she was in control? Did she intend the robot to go in one direction and feel thwarted when it didn’t? Or did she just have inexplicable expectations (not realizing the source) and nothing more than that?
Whether animals can have intentions (whatever that means exactly) and understand their actions is a contentious issue. No matter how intelligent, flexible, and intentional their behavior seems, there is often some alternate explanation at the purely mechanistic, behavioral level that could predict the same results, no matter how elegant the experimental setup.
Perhaps we have to accept that these are not always competing hypotheses so much as competing levels of explanation. It is possible that both are true — or perhaps I should say it is possible that both types of explanation can lead to useful new predictions and models.
We attribute inner mental states to other humans not because we have absolute certainty that they have them, but because it is such a useful assumption. It is possible that it’s all an elaborate ruse by an evil demon manipulating our experience, or they could be philosophical zombies, or we could be inventing them in our own solipsistic mind. But we don’t take these explanations seriously — not because we have ruled them out as possibilities, but because they don’t do much good as explanations. They add an extra layer to what we observe (the illusion of mind), but because our observations are so consistent, we can get by just fine by assuming that our neighbors do in fact have minds, do experience the mental states they claim and appear to experience.
So if we eventually find, after much more testing, that attributing some form of inner experience or mental states to some animals is a useful and parsimonious way to frame our observations of their behavior, then so be it. That is, if crediting animals with minds allows us to make useful predictions and meshes with our broader models of the world, then it is reasonable to give them such credit, even if we cannot rule out a purely mindless, algorithmic explanation.
After all, at some level our own behavior can be put in those terms: the predictable physics ruling the movement of atoms in our body, brain and environment can explain all of our behavior without resorting to mental state attribution. We allow that both explanations are valid because they are two levels of the same system, neither more ‘true’ than the other.
Sometimes it is useful to frame things from the reductionist perspective, and sometimes it is useful to consider things from a higher-up, integrative perspective. Indeed, we survive better if we interpret the world as macro-level phenomena (a hungry tiger lurking nearby, or a car headed right for us) than if we try to see the world only as the mindless, intentionless, algorithmic interacting of countless invisible particles.
So there may come a point where it is similarly useful to assume that animals have certain mental states or inner experience. While this risks a slippery slope (what if a robot demonstrates just as much intelligence, flexibility, or whatever in its responses?), that slippery slope might just end up demonstrating the problem with our assumption that mental states are some binary black-and-white, all-or-none thing, and somehow different from normal physical explanations. There is likely some continuum of feedback and flexibility in various systems (animal, human, robotic, etc.), and humans at one end are obviously different from rocks or very simple animals at the other end, but things bleed into fuzzy gray somewhere along the middle.
If, for systems of enough complexity of the right sort, we find it useful to label those physical events as mental states (as a shorthand for some properties that physical system demonstrates), then why not for animals — even if we can’t 100% prove they have inner experience any more than we can prove it for humans?