17-04-2010
Jayne Wallace
Journeys Between Ourselves (2007)
Keywords: digital jewelry, emotional significance, meaningfulness
Journeys Between Ourselves is a pair of digital neckpieces custom made for a mother and daughter participating in Jayne's research. The forms of the neckpieces are influenced in part by a Kay Nielsen illustration that is cherished by the pair. The neckpieces are responsive to touch; the touch of one causes the second to tremble gently. This interaction is a tactile echo that reflects their closeness and feelings for each other.
Materials: porcelain, paper, felt, light sensors, motors, motes, accelerometers, batteries.
Inspirations: A. Dunne, 'Hertzian Tales: Electronic Products, Aesthetic Experience and Critical Design'; Malin Lindmark Vrijman.
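A minimal, purely illustrative Python sketch of how the tactile echo could work, assuming each neckpiece carries a touch sensor, a small vibration motor and a wireless link to its partner; none of these names come from Jayne's piece, and the radios are simulated here with in-memory queues:

# Hypothetical sketch of the "tactile echo": touching one neckpiece
# makes the partner piece tremble. Not the actual hardware or firmware.
from collections import deque


class Radio:
    """A pair of queues standing in for the wireless link between two motes."""
    def __init__(self, outbox: deque, inbox: deque):
        self.outbox, self.inbox = outbox, inbox

    def send(self, msg):
        self.outbox.append(msg)

    def receive(self):
        return self.inbox.popleft() if self.inbox else None


class Neckpiece:
    def __init__(self, name: str, radio: Radio):
        self.name, self.radio, self.touched = name, radio, False

    def step(self):
        if self.touched:                  # wearer touches this neckpiece...
            self.radio.send("touch")
            self.touched = False
        if self.radio.receive() == "touch":
            # ...and the partner piece trembles gently in response.
            print(f"{self.name}: motor pulses softly")


if __name__ == "__main__":
    a_to_b, b_to_a = deque(), deque()
    mother = Neckpiece("mother", Radio(a_to_b, b_to_a))
    daughter = Neckpiece("daughter", Radio(b_to_a, a_to_b))
    mother.touched = True                 # mother touches her neckpiece
    for piece in (mother, daughter):      # one polling cycle each
        piece.step()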
Monday 17 May 2010
Philips Body and Soul
Why, at the dawn of a new millennium, do we find that young people have resorted to the most "primitive" forms of body adornment? For many, it is a way to break away from existing norms, from stereotypical behaviour; for others it is the result of contact with other cultures, or simply another way to express their identity. Tattoos, body piercing, scarification, pocketing and implants are as personal a statement as anyone can possibly make. While conventional society might tend to consider these as new and extreme forms of body adornment, an expressive medium used mainly by young people, they have existed for centuries in many different cultures as a traditional form of cultural or religious expression.
The integration of fashion and style into the human body is a precursor to a new medium that encompasses other functionalities: the body as a local area network. This theme has already been explored by institutions such as The Media Lab, and many research projects have examined the possibilities of harnessing human power. The passing of data through the body and sensing of biometric feedback have also been extensively covered. Medical biosensing, for example, could have extremely practical applications in the treatment of chronic and acute conditions, e.g. for reading insulin levels in the case of diabetes. Exploration into "swallowables" that pass through the body or implants in the form of small electronics and mechanical devices has shown interesting potential. And though for many people this immediately raises the spectre of mankind being turned into a race of cyborgs, we should remember that we are in fact already using such devices on a fairly large scale for medical purposes (consider, for example, the pacemaker).
Today, on the threshold of a new era, the question facing companies operating at the interface of clothing and electronics is: How do we anticipate and develop clothing applications and solutions that address people's socio-cultural, emotional and physical needs, enabling them to express all the facets of their personality? Integrated technology clearly has a major role to play. Taking this exciting pioneering development to its ultimate, logical conclusion, the challenge facing us is to extend the human nervous system to a seventh sense.
The piercings glow and give a pulsating sensation when the wearer is paged.
17-05-2010
Sunday 9 May 2010
Magazine "NewScientist" 10 April 2010
"NewScientist" 10 April 2010
Animations sense real world
POWERPOINT presentations are about to get a sprinkle of fairy dust. A hand-held projector can now create virtual characters and objects that interact with the real world.
The device - called Twinkle - projects animated graphics that respond to patterns, shapes or colours on a surface, or even 2D objects such as your hand. It uses a camera to track relevant elements - say, a line drawn on a wall - in the scene illuminated by the projector, while an accelerometer senses the projector's rapid motion and position.
Software then matches up the pixels detected by the camera with the animation, making corrections for the angle of projection and distance from the surface.
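As a note to self, the per-frame logic might look roughly like this (a hedged Python sketch, not Twinkle's actual code): grab a camera frame, find the drawn feature the character reacts to, read the accelerometer, and re-position the sprite with a crude correction for tilt and distance. All function names are hypothetical.

# Illustrative per-frame loop: detect a drawn line in the camera image,
# compensate for projector pose, and place the sprite on that line.
import numpy as np


def find_line_height(frame: np.ndarray) -> int:
    """Return the image row most likely to contain a dark drawn line."""
    row_darkness = frame.mean(axis=1)      # average brightness per row
    return int(np.argmin(row_darkness))    # darkest row = candidate line


def correct_for_pose(y_cam: int, tilt_deg: float, distance_m: float) -> float:
    """Map a camera-image row to a projector-image row, compensating for
    the projector's tilt and distance (a crude planar approximation)."""
    scale = 1.0 / max(np.cos(np.radians(tilt_deg)), 1e-3)
    return y_cam * scale / max(distance_m, 1e-3)


def frame_update(frame, accel_tilt_deg, distance_m, sprite):
    y_line = find_line_height(frame)
    sprite["y"] = correct_for_pose(y_line, accel_tilt_deg, distance_m)
    return sprite                          # character "stands" on the line


if __name__ == "__main__":
    fake_frame = np.full((240, 320), 200, dtype=np.uint8)
    fake_frame[180, :] = 10                # a dark horizontal "drawn line"
    sprite = {"x": 100, "y": 0}
    print(frame_update(fake_frame, accel_tilt_deg=15.0, distance_m=1.2, sprite=sprite))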
The device could eventually fit inside a cellphone, says Takumi Yoshida of the University of Tokyo.
A prototype that projects a cartoon fairy, which bounces off or runs along paintings on a wall or even the surface of a bottle, was presented at the recent Virtual Reality 2010 meeting in Waltham, Massachusetts.
Yoshida and his colleagues are also developing a way for graphics from several projectors to interact, which could be used for gaming.
Anthony Steed of University College London is impressed. Many researchers have been attempting to create virtual graphics that can interact with a real surface, he says, but Twinkle can cope with a much greater range of environments.
Magazine "NewScientist" 17 April 2010
"NewScientist" 17 April 2010
Robots with skin enter our touchy-feely world.
If humanoid robots are ever to move among us, they will first need to get in touch with the world - and learn to interpret our fuzzy human language.
BEAUTY may be only skin deep, but for humanoid robots a fleshy covering is about more than mere aesthetics: it could be essential to making them socially acceptable. A touch-sensitive coating could prevent such machines from accidentally injuring anybody within their reach.
In May, a team at the Italian Institute of Technology (IIT) in Genoa will dispatch to labs across Europe the first pieces of touch-sensing skin designed for their nascent humanoid robot, the iCub. The skin IIT and its partners have developed contains flexible pressure sensors that aim to put robots in touch with the world.
"Skin has been one of the big missing technologies for humanoid robots," says roboticist Giorgio Metta at IIT. One goal of making robots in a humanoid form is to let them interact closely with people. But what will only be possible if a robot is fully aware of what its powerful motorised limbs are in contact with.
Roboticists are trying a great variety of ways to make a sensing skin. Early examples, such as the CB2 robot, built at Osaka University in Japan, placed a few hundred sensors in silicone skin. But now "many, many sensing methods are emerging", says Richard Walker of Shadow Robot, London. Until a lot of robots are using them, it is going to be hard to say which are best suited for particular applications.
What's more, there are many criteria the skin has to meet, says Metta: it must be resilient, able to cover a large surface area and be able to detect even light touches anywhere on that surface. "Many of these factors conflict with each other," he says.
The iCub is a humanoid robot the size of a three-and-a-half-year-old child. Funded by the European Commission, it was designed to investigate cognition and how awareness of our limbs, muscles, tendons and tactile environment contributes to the development of intelligence. The iCub's technical specifications are open-source and some 15 labs across Europe have already "cloned" their own, so IIT's skin design could find plenty of robots to enwrap.
The skin is made up of triangular, flexible printed circuit boards which act as sensors, and it covers much of iCub's body. Each bendy triangle is 3 centimeters to a side and contains 12 capacitive copper contacts. A layer of silicone rubber acts as a spacer between those boards and an outer layer of Lycra that carries a metal contact above each copper contact. The Lycra layer and flexible circuits constitute the two sides of the skin's pressure-sensing capacitors. This arrangement allows 12 "tactile pixels" - or taxels - to be sensed per triangle. This taxel resolution is enough to recognise patterns such as a hand grasping the robot's arm. The skin can detect a touch as light as 1 gram across each taxel, says Metta. It is also peppered with semiconductor-based temperature sensors. This version of the skin will be released in May.
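To fix the idea of the capacitive taxels, here is a hedged Python sketch of how such a skin patch might be read; the rest capacitance, sensitivity and 1-gram threshold are illustrative assumptions, not IIT's published calibration.

# Illustrative readout of a small patch of capacitive "taxel" skin:
# each triangle carries 12 contacts whose capacitance rises under load.
import numpy as np

N_TRIANGLES = 8               # a small patch of skin for the example
TAXELS_PER_TRIANGLE = 12
C_REST_PF = 10.0              # assumed rest capacitance per taxel (pF)
SENSITIVITY_PF_PER_G = 0.05   # assumed capacitance change per gram of load


def capacitance_to_grams(c_pf: np.ndarray) -> np.ndarray:
    """Convert measured capacitance (pF) to an estimated load in grams."""
    return np.clip((c_pf - C_REST_PF) / SENSITIVITY_PF_PER_G, 0.0, None)


def read_skin(raw_capacitance: np.ndarray):
    """raw_capacitance: shape (N_TRIANGLES, TAXELS_PER_TRIANGLE)."""
    grams = capacitance_to_grams(raw_capacitance)
    touched = grams > 1.0     # the article quotes roughly 1 g sensitivity
    return grams, touched


if __name__ == "__main__":
    c = np.full((N_TRIANGLES, TAXELS_PER_TRIANGLE), C_REST_PF)
    c[2, 3:7] += 2.5          # simulate a finger pressing one triangle
    grams, touched = read_skin(c)
    print(grams[2], touched[2])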
Later, IIT plans to add a layer of a piezoelectric polymer called PVDF to the skin. While the capacitance sensors measure absolute pressure, the voltage produced by PVDF as a result of its deformation when touched can be used to measure the rate of change of pressure. So if the robot runs its fingertip along a surface, the vibrations generated by friction give it clues about what that surface is made of. Such sensitivity might help it establish the level of grip needed to pick up, say, a slippery porcelain plate.
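And a similar sketch of what the PVDF layer adds: treating its voltage as proportional to the rate of change of pressure, the frequency content of that signal while the fingertip slides hints at surface texture. The gain, frequency band and test signals below are made up for illustration.

# Illustrative use of a PVDF signal: recover dP/dt and measure the energy
# of friction-induced vibrations, which differs between surface textures.
import numpy as np


def pvdf_to_dpdt(voltage: np.ndarray, gain: float = 100.0) -> np.ndarray:
    """Assume PVDF voltage is proportional to dP/dt."""
    return gain * voltage


def vibration_energy(dpdt: np.ndarray, sample_rate_hz: float,
                     band=(50.0, 400.0)) -> float:
    """Energy in a band typical of sliding-contact vibrations."""
    spectrum = np.abs(np.fft.rfft(dpdt)) ** 2
    freqs = np.fft.rfftfreq(len(dpdt), d=1.0 / sample_rate_hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].sum())


if __name__ == "__main__":
    fs = 2000.0
    t = np.arange(0, 0.5, 1.0 / fs)
    smooth = 0.01 * np.random.randn(t.size)               # sliding on glass
    rough = smooth + 0.2 * np.sin(2 * np.pi * 180 * t)    # sliding on fabric
    print(vibration_energy(pvdf_to_dpdt(smooth), fs),
          vibration_energy(pvdf_to_dpdt(rough), fs))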
Philip Taysom, CEO of British company Peratech of Richmond, North Yorkshire, is not a fan of sensing skins based on capacitors, which he says can lose sensitivity with repeated use. Peratech's answer is a stretchy, elastic material it calls quantum tunnelling composite (QTC). This comprises a polymer such as silicone rubber that is heavily loaded with spiky nickel nanoparticles. A voltage is applied across the skin, and when it is pressed, the distance between the nanoparticles within the polymer diminishes, which results in electrons flowing, or "tunnelling", from one nanoparticle spike to the next in the area being touched. Crucially, the material's electrical resistance drops dramatically and in proportion to the force applied, so the touch can be interpreted.
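A hedged sketch of how a QTC patch might be read in practice: put it in a voltage divider, recover its resistance from the measured voltage, and map the resistance drop to force. The constants and the force curve are illustrative assumptions, not Peratech's calibration.

# Illustrative QTC readout: resistance falls as force rises, so a divider
# voltage can be turned back into an estimate of the applied force.
def divider_to_resistance(v_out: float, v_in: float = 5.0,
                          r_fixed: float = 10_000.0) -> float:
    """QTC patch in series with a fixed resistor; solve for patch resistance."""
    v_out = min(max(v_out, 1e-3), v_in - 1e-3)
    return r_fixed * (v_in - v_out) / v_out


def resistance_to_force(r_ohm: float, r_rest: float = 1e6, k: float = 2.0) -> float:
    """Assume force grows as resistance falls below its unloaded value."""
    if r_ohm >= r_rest:
        return 0.0
    return ((r_rest / r_ohm) - 1.0) ** (1.0 / k)   # arbitrary monotonic mapping


if __name__ == "__main__":
    for v in (0.1, 1.0, 3.0, 4.5):                 # harder presses -> higher v_out
        r = divider_to_resistance(v)
        print(f"v_out={v:.1f} V  R={r:,.0f} ohm  force~{resistance_to_force(r):.2f}")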
At the Massachusetts Institute of Technology's Media Lab, Adam Whiton is developing a QTC-based sensing skin for a commercial robot-maker which he declines to name. Instead of a tight, conforming skin, Whiton uses a looser covering, more akin to clothing. "We cover ourselves with textiles when we interact with people, so clothing may be a better metaphor as a humanoid's pressure-sensitive surface covering," he says.
Natural gestures, like tapping a humanoid on the back to get its attention, or leading it by the arm, can be easily interpreted because QTC boasts high sensitivity, he says. But novel skin capabilities could be on the way, too. For example, QTC can also act as an electronic nose. Careful choice of the material's base polymer, says Taysom, means telltale resistance changes can be induced by reactions between volatile chemicals in the air - so it can become an e-nose as well as a touch sensor, able to detect, for example, a gas leak in your home. "This shows we can probably build into robots a lot of things that our skin can't do. It's another reason not to stick rigidly to the human skin metaphor," says Whiton.
That's not to say our skin isn't a great influence. Shadow Robot will soon start testing a novel human-like touch-sensing fingertip from SynTouch, a start-up based in California. Its fingertip comprises a rubbery fluid-filled sac that squishes just like a real fingertip, and is equipped with internal sensors that measure vibration, temperature and pressure.
Whichever of the emerging technologies prevails, sensing robot skins should help us get along with our future humanoid assistants, says Whiton. "Right now, robots are about as friendly as photocopiers. The interactions skins encourage will make them much friendlier."
Magazine "NewScientist" 17 October 2010
"NewScientist" 17 October 2010
Next step for touchscreens.
IMAGINE entering your living room and sliding your foot purposefully over a particular stretch of floor. Suddenly your hi-fi system springs into life and begins playing your favourite CD.
Floors you can use like a giant touchscreen could one day be commonplace thanks to a "touch floor" developed by Patrick Baudisch at the Hasso Plattner Institute in Potsdam, Germany. His prototype, named Multi-toe, is made up of thin layers of silicone and clear acrylic on top of a rigid glass sheet. Light beams shone into the acrylic layer bounce around inside until pressure from above allows them to escape. A camera below captures the light and registers an image of whatever has pressed down upon the floor.
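The sensing idea is essentially frustrated total internal reflection, so as a reminder to myself, contact detection could be sketched like this in Python: threshold the camera frame (bright only where something presses) and group bright pixels into blobs; the flood fill below stands in for whatever vision pipeline the real system uses.

# Illustrative contact detection for an FTIR floor: find bright blobs in
# the under-floor camera image and report their size and centroid.
import numpy as np


def contact_regions(frame: np.ndarray, threshold: int = 128):
    """Return a list of (pixel_count, centroid) for each bright contact blob."""
    bright = frame > threshold
    seen = np.zeros_like(bright, dtype=bool)
    blobs = []
    h, w = bright.shape
    for y in range(h):
        for x in range(w):
            if bright[y, x] and not seen[y, x]:
                stack, pixels = [(y, x)], []
                seen[y, x] = True
                while stack:                       # flood fill one blob
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and bright[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                blobs.append((len(pixels), (sum(ys) / len(ys), sum(xs) / len(xs))))
    return blobs


if __name__ == "__main__":
    frame = np.zeros((60, 80), dtype=np.uint8)
    frame[20:30, 10:18] = 255        # simulated shoe-sole contact
    print(contact_regions(frame))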
Some touchscreens already employ this technique, but the new version offers greater resolution, allowing the pattern of the tread on someone's shoes to be detected. Baudisch has already adapted it for the video game Unreal Tournament, with players leaning in different directions to move on screen, and tapping their toes to shoot. A virtual keyboard on the floor can also be activated with the feet.
Baudisch presented the work at the Conference on Human Factors in Computing Systems in Atlanta, Georgia, this week. He admits the system cannot easily be used on existing floors due to the need for underfloor cavities to house the cameras, but says future versions will address this.
Magazine "NewScientist" 24 April 2010
"NewScientist" 24 April 2010
Humanoid robot set for space.
NASA is preparing to send its first humanoid robot into space. Robonaut first twitched to life in September 1999 and, after a decade of tests, the 140-kilogram R2 model will finally be launched to the International Space Station on the space shuttle Discovery's last mission in September.
With continual maintenance work needed on the ISS, the idea is to give the crew an assistant that never tires of undertaking mundane mechanical tasks - initially inside the craft but later outside it too.
R2 comprises a humanoid head and torso with highly dexterous arms and hands. It was developed by NASA in conjunction with roboticists at General Motors. After being bolted to a piece of ISS infrastructure, R2 can use the same tools, such as screwdrivers and wrenches, as the astronauts.
One reason for the mission, NASA says, is to see how Robonaut copes with the cosmic radiation and electromagnetic interference inside the space station.
The main challenge, though, will be to ensure the robot is safe to work with, as tools can fly off easily in microgravity, says Chris Melhuish of the Bristol Robotics Laboratory in the UK. "Robots have to be both physically and behaviourally safe," he says.
"That means torque control of limbs and tools, but also an ability to recognise human gestures to safely achive shared goals. These are serious hurdles NASA will need to overcome."
Magazine "NewScientist" 6 February 2010
"NewScientist" 6 February 2010
A voice for the voiceless
It is now possible to "talk" to people who seem to be unconscious, by tapping into their brain activity.
THE inner voice of people who appear unconscious can now be heard. For the first time, researchers have struck up a conversation with a man diagnosed as being in a vegetative state. All they had to do was monitor how his brain responded to specific questions. This means that it may now be possible to give some individuals in the same state a degree of autonomy.
"They can now have some involvement in their destiny," says Adrian Owen of the University of Cambridge, who led the team doing the work.
In an earlier experiment, published in 2006, Owen's team asked a woman previously diagnosed as being in a vegetative state (VS) to picture herself carrying out one of two different activities. The resulting brain activity suggested she understood the commands and was therefore conscious.
Now Owen's team has taken the idea a step further. A man also diagnosed with VS was able to answer yes and no to specific questions by imagining himself engaging in the same activities.
The results suggest that it is possible to give a degree of choice to some people who have no other way of communicating with the outside world. "We are not just showing they are conscious, we are giving them a voice and a way to communicate," says neurologist Steven Laureys of the University of Liege in Belgium, Owen's collaborator.
When someone is in a VS, they can breathe unaided and have intact reflexes, but seem completely unaware. But it is becoming clear that some people who appear to be vegetative are in fact minimally conscious. They are in a kind of twilight state in which they may feel some pain, experience emotion and communicate to a limited extent. These two states can be distinguished from each other via bedside behavioural tests - but these tests are not perfect and can miss patients who are aware but unable to move. So researchers are looking for ways to detect consciousness with brain imaging.
In their original experiment, Owen and his colleagues used functional MRI to detect whether a woman could respond to two spoken commands, which were expected to activate different brain areas. On behavioural tests alone her diagnosis was VS, but the brain scan results were astounding. When asked to imagine playing tennis, the woman's supplementary motor area (SMA), which is concerned with complex sequences of movements, lit up. When asked to imagine moving around her house, it was the turn of the parahippocampal gyrus, which represents spatial locations.
Because the correct brain areas lit up at the correct time, the team concluded that the woman was modulating her brain activity to cooperate with the experiment and must have had a degree of consciousness.
In the intervening years, Owen, Laureys and their team repeated the experiment on 23 people in Belgium and the UK diagnosed as being in a VS. Four responded positively and were deemed to possess a degree of consciousness.
To find out whether a simple conversation was possible, the researchers selected one of the four - a 29-year-old man who had been in a car crash. They asked him to imagine playing tennis if he wanted to answer yes to questions such as: Do you have any sisters? Is your father's name Thomas? Is your father's name Alexander? And if the answer to a question was no, he had to imagine moving round his home.
The man was asked to think of the activity that represented his answer, in 10-second bursts for up to 5 minutes, so that a strong enough signal could be detected by the scanner. His family came up with the questions to ensure that the researchers did not know the answers in advance. What's more, the brain scans were analysed by a team that had never come into contact with the patient or his family.
The team found that either the SMA or the parahippocampal gyrus lit up in response to five of the six questions. When the team ran these answers by his family, they were all correct, indicating that the man had understood the task and was able to form an answer. The group also asked healthy volunteers similar questions relating to their own families and found that their brains responded in the same way.
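The decoding logic itself is simple enough to sketch in a few lines of Python: compare the mean task-block response in the two regions of interest and take the stronger one as the answer. The signals, margin and units below are illustrative, not the study's actual analysis.

# Illustrative yes/no decoding: imagined tennis drives the SMA ("yes"),
# imagined navigation drives the parahippocampal gyrus ("no").
import numpy as np


def decode_answer(sma_signal: np.ndarray, phg_signal: np.ndarray,
                  margin: float = 0.2) -> str:
    """Compare mean task-block activation in the two regions of interest."""
    sma, phg = float(np.mean(sma_signal)), float(np.mean(phg_signal))
    if sma - phg > margin:
        return "yes"      # imagined tennis -> supplementary motor area
    if phg - sma > margin:
        return "no"       # imagined navigation -> parahippocampal gyrus
    return "no clear answer"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated percent-signal-change samples for one question block.
    sma = 1.0 + 0.3 * rng.standard_normal(30)   # strongly active
    phg = 0.1 + 0.3 * rng.standard_normal(30)   # near baseline
    print(decode_answer(sma, phg))              # -> "yes"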
"I think we can be pretty confident that he is entirely conscious," says Owen. "He has to understand instructions, comprehend speech, remember what tennis is and how you do it. So many of his cognitive faculties have to have been intact."
That someone can be capable of all this while appearing completely unaware confounds existing medical definitions of consciousness, Laureys says. "We don't know what to call this; he just doesn't fit a definition."
Doctors traditionally base these diagnoses on how someone behaves: for example, whether or not they can glance in different directions in response to questions. The new results show that you don't need behavioural indications to identify awareness and even a degree of cognitive proficiency. All you need to do is tap into brain activity directly.
The work "changes everything", says Nicholas Schiff, a neurologist at Weill Cornell Medical College in New York, who is carrying out similar work on patients with consciousness disorders.
"Knowing that someone could persist in a state like this and not show evidence of the fact that they can answer yes/no questions should be extremely disturbing to our clinical pratice."
One of the most difficult questions you might want to ask someone is whether they want to carry on living. But as Owen and Laureys point out, the scientific, legal and ethical challenges for doctors asking such questions are formidable. "In purely practical terms, yes, it is possible," says Owen. "But it is a bigger step than one might immediately think."
One problem is that while the brain scans do seem to establish consciousness, there is a lot they don't tell us. "Just because they can answer a yes/no question does not mean they have the capacity to make complex decisions," Owen says.
Even assuming there is a subset of people who cannot move but have enough cognition to answer tough questions, you would still have to convince a court that this is so. "There are many ethical and legal frameworks that would need to be revised before fMRI could be used in this context," says Owen.
There are many challenges. For example, someone in this state can only respond to specific questions; they can't yet start a conversation of their own. And conversations are likely to remain infrequent until smaller, cheaper devices are developed, since fMRI scans are expensive and take many hours to analyse.
In the meantime, you can ask someone whether they are in pain or would like to try new drugs that are being tested for their ability to bring patients out of a vegetative state. "For the minority of patients that this will work for, just for them to exercise some autonomy is a massive step forward - it doesn't have to be at the life or death level," Owen says.