In 2014 billions of viewers worldwide may remember the opening game of the World Cup in Brazil for more than just the goals scored by the Brazilian national team and the red cards given to its adversary. On that day my laboratory at Duke University, which specializes in developing technologies that allow electrical signals from the brain to control robotic limbs, plans to mark a milestone in overcoming paralysis.
If we succeed in meeting still formidable challenges, the first ceremonial kick of the World Cup game may be made by a paralyzed teenager, who, flanked by the two contending soccer teams, will saunter onto the pitch clad in a robotic body suit. This suit—or exoskeleton, as we call it—will envelop the teenager’s legs. His or her first steps onto the field will be controlled by motor signals originating in the kicker’s brain and transmitted wirelessly to a computer unit the size of a laptop in a backpack carried by our patient. This computer will be responsible for translating electrical brain signals into digital motor commands so that the exoskeleton can first stabilize the kicker’s body weight and then induce the robotic legs to begin the back-and-forth coordinated movements of a walk over the manicured grass. Then, on approaching the ball, the kicker will visualize placing a foot in contact with it. Three hundred milliseconds later brain signals will instruct the exoskeleton’s robotic foot to hook under the leather sphere, Brazilian style, and boot it aloft.
This scientific demonstration of a radically new technology, undertaken with collaborators in Europe and Brazil, will convey to a global audience of billions that brain control of machines has moved from lab demos and futuristic speculation to a new era in which tools capable of bringing mobility to patients incapacitated by injury or disease may become a reality. We are on our way, perhaps by the next decade, to technology that links the brain with mechanical, electronic or virtual machines. This development will restore mobility, not only to accident and war victims but also to patients with ALS (also known as Lou Gehrig’s disease), Parkinson’s and other disorders that disrupt motor behaviors such as arm reaching, hand grasping, locomotion and speech production. Neuroprosthetic devices—or brain-machine interfaces—will also allow scientists to do much more than help the disabled. They will make it possible to explore the world in revolutionary ways by providing healthy human beings with the ability to augment their sensory and motor skills.
In this futuristic scenario, voluntary electrical brain waves, the biological alphabet that underlies human thinking, will maneuver large and small robots remotely, control airships from afar, and perhaps even allow the sharing of thoughts and sensations of one individual with another over what will become a collective brain-based network.
The lightweight body suit intended for the kicker, who has not yet been selected, is still under development. A prototype, though, is now under construction at the lab of my great friend and collaborator Gordon Cheng of the Technical University of Munich—one of the founding members of the Walk Again Project, a nonprofit, international collaboration among the Duke University Center for Neuroengineering, the Technical University of Munich, the Swiss Federal Institute of Technology in Lausanne, and the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil. A few new members, including major research institutes and universities all over the world, will join this international team in the next few months.
The project builds on nearly two decades of pioneering work on brain-machine interfaces at Duke—research that itself grew out of studies dating back to the 1960s, when scientists first attempted to tap into animal brains to see if a neural signal could be fed into a computer and thereby prompt a command to initiate motion in a mechanical device. Back in the 1990s and throughout the first decade of this century, my Duke colleagues and I pioneered a method through which the brains of both rats and monkeys could be implanted with hundreds of hair-thin and pliable sensors, known as microwires. Over the past two decades we have shown that, once implanted, the flexible electrical prongs can detect minute electrical signals, or action potentials, generated by hundreds of individual neurons distributed throughout the animals’ frontal and parietal cortices—the regions that define a vast brain circuit responsible for the generation of voluntary movements.
For a full decade this interface has used brain-derived signals to generate movements of robotic arms, hands and legs in animal experiments. A critical breakthrough occurred last year, when two monkeys in our lab learned to exert neural control over the movements of a computer-generated avatar arm that touched objects in a virtual world while also receiving an “artificial tactile” feedback signal delivered directly to each monkey’s brain. The software allowed us to train the animals to feel what it was like to touch an object with virtual fingers controlled directly by their brains.
The Walk Again consortium—assisted by its international team of neuroscientists, roboticists, computer scientists, neurosurgeons and rehabilitation professionals—has begun to take advantage of these animal research findings to create a completely new way to train and rehabilitate severely paralyzed patients in how to use brain-machine interface technologies to regain full-body mobility. Indeed, the first baby steps for our future ceremonial kicker will happen inside an advanced virtual-reality chamber known as a Cave Automatic Virtual Environment, a room with screens projected on every wall, including the floor and ceiling. After donning 3-D goggles and a headpiece that will noninvasively detect brain waves (through techniques known as electroencephalography—EEG—and magnetoencephalography), our candidate kicker—by necessity a lightweight teenager for this first iteration of the technology—will become immersed in a virtual environment that stretches out in all directions. There the youngster will learn to control the movements of a software body avatar through thought alone. Little by little, the motions induced in the avatar will increase in complexity, culminating in demanding tasks such as walking over changing terrain and fine-motor actions such as unscrewing a virtual jelly jar top.
Plugging into Neurons
The mechanical movements of an exoskeleton cannot be manipulated as readily as those of a software avatar, so the technology and the training will be more complicated. It will be necessary to implant electrodes directly in the brain to manipulate the robotic limbs. We will need not only to place the electrodes under the skull in the brain but also to increase the number of neurons to be “read” simultaneously throughout the cortex. Many of the sensors will be implanted in the motor cortex, the region of the frontal lobe most readily associated with the generation of the motor program that is normally downloaded to the spinal cord, from which neurons directly control and coordinate the work of our muscles. (Some neuroscientists believe that this interaction between mind and muscle may be achieved through a noninvasive method of recording brain activity, like EEG, but that goal has yet to be practically achieved.)
Gary Lehew in my group at Duke has devised a new type of sensor: a recording cube that, when implanted, can pick up signals throughout a three-dimensional volume of cortex. Unlike earlier brain sensors, which consist of flat arrays of microelectrodes whose tips record neuronal electrical signals, Lehew’s cube extends sensing microwires up, down and sideways throughout the length of a central shaft.
The current version of our recording cubes contains up to 1,000 active recording microwires. Because at least four to six single neurons can be recorded from each microwire, every cube can potentially capture the electrical activity of between 4,000 and 6,000 neurons. Assuming that we could implant several of those cubes in the frontal and parietal cortices—areas responsible for high-level control of movement and decision making—we could obtain a simultaneous sample of tens of thousands of neurons. According to our theoretical software modeling, this design would suffice for controlling the flexibility of movement required to operate an exoskeleton with two legs and to restore autonomous locomotion in our patients.
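The arithmetic behind that estimate is simple enough to check directly. The short sketch below reproduces it; the per-cube and per-wire figures come from the text, while the five-cube scenario is merely an illustrative assumption:

```python
# Back-of-envelope estimate of neurons sampled by implanted recording cubes.
# The per-cube and per-wire figures are taken from the text; the number of
# implanted cubes is an assumption for illustration only.
MICROWIRES_PER_CUBE = 1_000
NEURONS_PER_WIRE = (4, 6)  # lower and upper bounds per microwire

def neurons_sampled(num_cubes: int) -> tuple[int, int]:
    """Return the (low, high) estimate of simultaneously recorded neurons."""
    low = num_cubes * MICROWIRES_PER_CUBE * NEURONS_PER_WIRE[0]
    high = num_cubes * MICROWIRES_PER_CUBE * NEURONS_PER_WIRE[1]
    return low, high

print(neurons_sampled(1))  # one cube: (4000, 6000)
print(neurons_sampled(5))  # five cubes: (20000, 30000), i.e., tens of thousands
```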
To handle the avalanche of data from these sensors, we are also moving ahead on making a new generation of custom-designed neurochips. Implanted in a patient’s skull along with the microelectrodes, they will extract the raw motor commands needed to manipulate a whole-body exoskeleton.
Of course, the signals detected from the brain will then need to be broadcast to the prosthetic limbs. Recently Tim Hanson, a newly graduated Ph.D. student at Duke, built a 128-channel wireless recording system equipped with sensors and chips that can be encased in the cranium and that is capable of broadcasting recorded brain waves to a remote receiver. The first version of these neurochips is currently being tested successfully in monkeys. Indeed, we have recently witnessed the first monkey to operate a brain-machine interface around the clock using wireless transmission of brain signals. We filed in July with the Brazilian government for permission to use this technology in humans.
For our future soccer ball kicker, the data from the recording systems will be relayed wirelessly to a small computer processing unit contained in a backpack. Multiple digital processors will run various software algorithms that translate motor signals into digital commands that are able to control moving parts, or actuators, distributed across the joints of the robotic suit, hardware elements that adjust the positioning of the exoskeleton’s artificial limbs.
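The translation step described above can be caricatured in a few lines of code. In the sketch below, one time step of simulated firing rates is mapped to per-joint velocity commands; the linear decoder, the random placeholder weights and the joint names are all illustrative assumptions, not the project's actual algorithms:

```python
import numpy as np

# Toy sketch of the decoding pipeline: neuronal firing rates arrive
# wirelessly, a decoder maps them to intended limb velocities, and those
# become per-joint commands for the exoskeleton's actuators. The linear
# decoder and joint names are hypothetical stand-ins.
rng = np.random.default_rng(0)

N_NEURONS = 256  # channels of recorded activity per time step
JOINTS = ["hip_l", "knee_l", "ankle_l", "hip_r", "knee_r", "ankle_r"]

# A decoder of this kind would normally be fit on training data; here the
# weights are random placeholders.
W = rng.normal(scale=0.01, size=(len(JOINTS), N_NEURONS))

def decode(firing_rates: np.ndarray) -> dict[str, float]:
    """Map one time step of firing rates to joint velocity commands."""
    velocities = W @ firing_rates
    return dict(zip(JOINTS, velocities.tolist()))

# One simulated time step of spike counts:
rates = rng.poisson(lam=5.0, size=N_NEURONS).astype(float)
commands = decode(rates)
print(commands)
```

In a real system this loop would run many times per second, with the low-level actuator adjustments described below layered on top of the decoded commands.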
Force of Brainpower
The commands will permit the exoskeleton wearer to take one step and then another, slow down or speed up, bend over or climb a set of stairs. Some low-level adjustments to the positioning of the prosthetic hardware will be handled directly by the exoskeleton’s electromechanical circuits without any neural input. The space suit–like garment will remain flexible but still furnish structural support to its wearer, a surrogate for the human spinal cord. By taking full advantage of this interplay between brain-derived control signals and the electronic reflexes supplied by the actuators, we hope that our brain-machine interface will literally carry the World Cup kicker along by force of willpower.
The kicker will not only move but also feel the ground underneath. The exoskeleton will replicate a sense of touch and balance by incorporating microscopic sensors that both detect the amount of force from a particular movement and convey the information from the suit back to the brain. The kicker should be able to feel that a toe has come in contact with the ball.
Our decade-long experience with brain-machine interfaces suggests that as soon as the kicker starts interacting with this exoskeleton, the brain will start incorporating the robotic body as a true extension of his or her own body image. The experience accumulated during training, the continuous feeling of contact with the ground and of the position of the robotic legs, should enable movement with fluid steps over a soccer pitch or down any sidewalk. All phases of this project require continuous and rigorous testing in animal experiments before we begin in humans. In addition, all procedures must pass muster with regulatory agencies in Brazil, the U.S. and Europe to ensure proper scientific and ethical review. Even with all the uncertainties involved and the short time remaining before its first public demonstration, the simple idea of reaching for such a major milestone has galvanized Brazilian society’s interest in science in ways rarely seen before.
The opening kickoff of the World Cup—or a similar event, say, the 2016 Olympic and Paralympic Games in Rio de Janeiro, if we miss the first deadline for any reason—will be more than just a one-time stunt. A hint of what may be possible with this technology can be gleaned from a two-part experiment already completed with monkeys. As a prelude, back in 2007, our research team at Duke trained rhesus monkeys to walk upright on a treadmill as the electrical activity of more than 200 cortical neurons was recorded simultaneously. Meanwhile Gordon Cheng, then at ATR Intelligent Robotics and Communication Laboratories in Kyoto, built an extremely fast Internet protocol that allowed us to send this stream of neuronal data directly to Kyoto, where it fed the electronic controllers of CB1, a humanoid robot. In the first half of this across-the-globe experiment, Cheng and my group at Duke showed that the same software algorithms developed previously for translating thoughts into control of robotic arms could also convert patterns of neural activity involved in bipedal locomotion to make two mechanical legs walk.
The second part of the experiment yielded a much bigger surprise. As one of our monkeys, Idoya, walked on the treadmill in Durham, N.C., our brain-machine interface broadcast a constant stream of her brain’s electrical activity through Cheng’s Internet connection to Kyoto. There CB1 detected these motor commands and began to walk as well, almost immediately. CB1 first needed some support at the waist, but in later experiments it began to move autonomously in response to the brain-derived commands generated by the monkey on the other side of the globe.
What is more, even when the treadmill at Duke stopped and Idoya ceased walking, she could still control CB1’s leg movements in Kyoto by merely observing the robot’s legs moving on a live video feed and imagining each step CB1 should take. Idoya continued to produce the brain patterns required to make CB1 walk even though her own body was no longer engaged in this motor task. This transcontinental brain-machine interface demonstration revealed that it is possible for a human or a simian to readily transcend space, force and time by liberating brain-derived commands from the physical limits of the biological body that houses the brain and broadcasting them to a man-made device located far from the original thought that generated the action.
These experiments imply that brain-machine interfaces could make it possible to manipulate robots sent into environments that a human will never be able to penetrate directly: our thoughts might operate a microsurgical tool inside the body, say, or direct the activities of a humanoid worker trying to repair a leak at a nuclear plant.
The interface could also control tools that exert much stronger or lighter forces than our bodies can, thereby breaking free of ordinary constraints on the amount of force an individual can exert. Linking a monkey’s brain to a humanoid robot has already done away with constraints imposed by the clock: Idoya’s mental trip around the globe took 20 milliseconds—less time than was required to move her own limb.
Along with inspiring visions of the far future, the work we have done with monkeys gives us confidence that our plan may be achievable. At the time of this writing, we are waiting to see whether FIFA (the Fédération Internationale de Football Association), which is in charge of organizing the ceremony, will grant our proposal to have a paraplegic young adult participate in the opening ceremony of the inaugural game of the 2014 World Cup. The Brazilian government—which is still awaiting FIFA’s endorsement—has tentatively supported our application.
Bureaucratic difficulties and scientific uncertainties abound before our vision is realized. Yet I cannot stop imagining what it will be like during the brief but historic stroll onto a tropical green soccer pitch for three billion people to witness a paralyzed Brazilian youth stand up, walk again by his or her own volition, and ultimately kick a ball to score an unforgettable goal for science, in the very land that mastered the beautiful game.
By Miguel A. L. Nicolelis