Monday, 17 May 2010
Jayne Wallace
Journeys Between Ourselves (2007)
Keywords: digital jewelry, emotional significance, meaningfulness
Journeys Between Ourselves is a pair of digital neckpieces custom-made for a mother and daughter participating in Jayne's research. The forms of the neckpieces are influenced in part by a Kay Nielsen illustration that the pair cherish. The neckpieces are responsive to touch; touching one causes the other to tremble gently. This interaction is a tactile echo that reflects their closeness and feelings for each other.
Materials: porcelain, paper, felt, light sensors, motors, motes, accelerometers, batteries.
Inspirations: A. Dunne, 'Hertzian Tales: Electronic Products, Aesthetic Experience and Critical Design'; Malin Lindmark Vrijman.
Philips Body and Soul
Why, at the dawn of a new millennium, do we find that young people have resorted to the most "primitive" forms of body adornment? For many, it is a way to break away from existing norms, from stereotypical behaviour; for others it is the result of contact with other cultures or simply another way to express their identity. Tattoos, body piercing, scarification, pocketing and implants are as personal a statement as anyone can possibly make. While conventional society might tend to consider these as new and extreme forms of body adornment, an expressive medium used mainly by young people, they have existed for centuries in many different cultures as a traditional form of cultural or religious expression.
The integration of fashion and style into the human body is a precursor to a new medium that encompasses other functionalities: the body as a local area network. This theme has already been explored by institutions such as The Media Lab, and many research projects have examined the possibilities of harnessing human power. The passing of data through the body and sensing of biometric feedback have also been extensively covered. Medical biosensing, for example, could have extremely practical applications in the treatment of chronic and acute conditions, e.g. for reading insulin levels in the case of diabetes. Exploration into "swallowables" that pass through the body or implants in the form of small electronics and mechanical devices has shown interesting potential. And though for many people this immediately raises the spectre of mankind being turned into a race of cyborgs, we should remember that we are in fact already using such devices on a fairly large scale for medical purposes (consider, for example, the pacemaker).
Today, on the threshold of a new era, the question facing companies operating at the interface of clothing and electronics is: How do we anticipate and develop clothing applications and solutions that address people's socio-cultural, emotional and physical needs, enabling them to express all the facets of their personality? Integrated technology clearly has a major role to play. Taking this exciting pioneering development to its ultimate, logical conclusion, the challenge facing us, in the end, is to extend the human nervous system to the seventh sense.
The piercings glow and give a pulsating sensation when the wearer is paged.
Sunday, 9 May 2010
Magazine "NewScientist" 10 April 2010
Animations sense real world
POWERPOINT presentations are about to get a sprinkle of fairy dust. A hand-held projector can now create virtual characters and objects that interact with the real world.
The device - called Twinkle - projects animated graphics that respond to patterns, shapes or colours on a surface, or even 3D objects such as your hand. It uses a camera to track relevant elements - say, a line drawn on a wall - in the scene illuminated by the projector, and an accelerometer senses the projector's rapid motion and position.
Software then matches up the pixels detected by the camera with the animation, making corrections for the angle of projection and distance from the surface.
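The article gives no implementation details, but the correction it describes (warping graphics to match the angle of projection and distance) maps naturally onto a planar homography. The Python sketch below is purely illustrative and not Twinkle's code; the function name and the idea that a tracker supplies four corner points of the target region are my assumptions.

import cv2
import numpy as np

def project_onto_surface(animation_frame, surface_corners_px, projector_size):
    # Warp a flat animation frame onto a quadrilateral region tracked by the
    # camera, compensating for projection angle and distance in one transform.
    # surface_corners_px: 4x2 corner coordinates from a (hypothetical) tracker.
    h, w = animation_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(surface_corners_px)
    H = cv2.getPerspectiveTransform(src, dst)   # planar homography
    return cv2.warpPerspective(animation_frame, H, projector_size)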
The device could eventually fit inside a cellphone, says Takumi Yoshida of the University of Tokyo.
A prototype which projects a cartoon fairy that bounces off or runs along paintings on a wall or even the surface of a bottle was presented at the recent Virtual Reality 2010 meeting in Waltham, Massachusetts.
Yoshida and his colleagues are also developing a way for graphics from several projectors to interact, which could be used for gaming.
Anthony Steed of University College London is impressed. Many researchers have been attempting to create virtual graphics that can interact with a real surface, he says, but Twinkle can cope with a much greater range of environments.
Magazine "NewScientist" 17 April 2010
Robots with skin enter our touchy-feely world.
If humanoid robots are ever to move among us, they will first need to get in touch with the world - and learn to interpret our fuzzy human language.
BEAUTY may be only skin deep, but for humanoid robots a fleshy covering is about more than mere aesthetics: it could be essential to making them socially acceptable. A touch-sensitive coating could prevent such machines from accidentally injuring anybody within their reach.
In May, a team at the Italian Institute of Technology (IIT) in Genoa will dispatch to labs across Europe the first pieces of touch-sensing skin designed for their nascent humanoid robot, the iCub. The skin IIT and its partners have developed contains flexible pressure sensors that aim to put robots in touch with the world.
"Skin has been one of the big missing technologies for humanoid robots," says roboticist Giorgio Metta at IIT. One goal of making robots in a humanoid form is to let them interact closely with people. But what will only be possible if a robot is fully aware of what its powerful motorised limbs are in contact with.
Roboticists are trying a great variety of ways to make a sensing skin. Early examples, such as the CB2 robot, built at Osaka University in Japan, placed a few hundred sensors in silicone skin. But now "many, many sensing methods are emerging", says Richard Walker of Shadow Robot, London. Until a lot of robots are using them, it is going to be hard to say which are best suited for particular applications.
What's more, there are many criteria the skin has to meet, says Metta: it must be resilient, able to cover a large surface area and be able to detect even light touches anywhere on that surface. "Many of these factors conflict with each other," he says.
The iCub is a humanoid robot the size of a three-and-a-half-year-old child. Funded by the European Commission, it was designed to investigate cognition and how awareness of our limbs, muscles, tendons and tactile environment contributes to intelligence. The iCub's technical specifications are open source and some 15 labs across Europe have already "cloned" their own, so IIT's skin design could find plenty of robots to enwrap.
The skin is made up of triangular, flexible printed circuit boards which act as sensors, and it covers much of iCub's body. Each bendy triangle is 3 centimeters to a side and contains 12 capacitive copper contacts. A layer of silicone rubber acts as a spacer between those boards and an outer layer of Lycra that carries a metal contact above each copper contact. The Lycra layer and flexible circuits constitute the two sides of the skin's pressure-sensing capacitors. This arrangement allows 12 "tactile pixels" - or taxels - to be sensed per triangle. This taxel resolution is enough to recognise patterns such as a hand grasping the robot's arm. The skin can detect a touch as light as 1 gram across each taxel, says Metta. It is also peppered with semiconductor-based temperature sensors. This version of the skin will be released in May.
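As a rough illustration of how those 12 capacitive contacts might become touch events, here is a hedged Python sketch; the threshold, calibration constant and function names are illustrative assumptions, not details from IIT's design.

import numpy as np

TOUCH_THRESHOLD_G = 1.0   # the article quotes roughly 1-gram sensitivity per taxel

def taxel_forces(raw_capacitance, baseline, grams_per_count=0.5):
    # Convert 12 raw capacitance readings from one triangle into force estimates.
    counts = np.asarray(raw_capacitance, dtype=float) - np.asarray(baseline, dtype=float)
    return np.clip(counts, 0, None) * grams_per_count

def classify_contact(forces):
    # Very coarse pattern recognition over one triangle's 12 taxels.
    active = forces > TOUCH_THRESHOLD_G
    if active.all():
        return "broad contact, e.g. a hand grasping the arm"
    if active.any():
        return "light or partial touch"
    return "no contact"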
Later, IIT plans to add a layer of a piezoelectric polymer called PVDF to the skin. While the capacitance sensors measure absolute pressure, the voltage produced by PVDF as a result of its deformation when touched can be used to measure the rate of change of pressure. So if the robot runs its fingertip along a surface, the vibrations generated by friction give it clues about what that surface is made of. Such sensitivity might help it establish the level of grip needed to pick up, say, a slippery porcelain plate.
Philip Taysom, CEO of British company Peratech of Richmond, North Yorkshire, is not a fan of sensing skins based on capacitors, which he says can lose sensitivity with repeated use. Peratech's answer is a stretchy, elastic material it calls quantum tunnelling composite (QTC). This comprises a polymer such as silicone rubber that is heavily loaded with spiky nickel nanoparticles. A voltage is applied across the skin, and when it is pressed, the distance between the nanoparticles within the polymer diminishes, which results in electrons flowing, or "tunnelling", from one nanoparticle spike to the next in the area being touched. Crucially, the material's electrical resistance drops dramatically and in proportion to the force applied, so the touch can be interpreted.
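For illustration only, that read-out could be as simple as a voltage divider and a calibration curve. The resistor value and constant in this Python sketch are made-up numbers, not Peratech data.

R_FIXED = 10_000.0   # ohms, series resistor in a simple voltage divider
V_SUPPLY = 3.3       # volts across the divider

def qtc_resistance(v_measured):
    # Resistance of the QTC patch, from the voltage measured across it in the divider.
    return R_FIXED * v_measured / (V_SUPPLY - v_measured)

def estimated_force(v_measured, k=50.0):
    # Assumes conductance rises roughly in proportion to applied force;
    # k is a hypothetical calibration constant (newtons per siemens).
    return k / qtc_resistance(v_measured)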
At the Massachusetts Institute of Technology's Media Lab, Adam Whiton is developing a QTC-based sensing skin for a commercial robot-maker which he declines to name. Instead of a tight, conforming skin, Whiton uses a looser covering, more akin to clothing. "We cover ourselves with textiles when we interact with people, so clothing may be a better metaphor as a humanoid's pressure-sensitive surface covering," he says.
Natural gestures, like tapping a humanoid on the back to get its attention, or leading it by the arm, can be easily interpreted because QTC boasts high sensitivity, he says. But novel skin capabilities could be on the way, too. For example, QTC can also act as an electronic nose. Careful choice of the material's base polymer, says Taysom, means telltale resistance changes can be induced by reactions between volatile chemicals in the air - so it can become an e-nose as well as a touch sensor, able to detect, for example, a gas leak in your home. "This shows we can probably build into robots a lot of things that our skin can't do. It's another reason not to stick rigidly to the human skin metaphor," says Whiton.
That's not to say our skin isn't a great influence. Shadow Robot will soon start testing a novel human-like touch-sensing fingertip from Syntouch, a start-up based in California. Its fingertip comprises a rubbery fluid-filled sac that squishes just like a real fingertip, and is equipped with internal sensors that measure vibration, temperature and pressure.
Whichever of the emerging technologies prevails, sensing robot skins should help us get along with our future humanoid assistants, says Whiton. "Right now, robots are about as friendly as photocopiers. The interactions skins encourage will make them much friendlier."
Magazine "NewScientist" 17 October 2010
Next step for touchscreens.
IMAGINE entering your living room and sliding your foot purposefully over a particular stretch of floor. Suddenly your hi-fi system springs into life and begins playing your favourite CD.
Floors you can use like a giant touchscreen could one day be commonplace thanks to a "touch floor" developed by Patrick Baudisch at the Hasso Plattner Institute in Potsdam, Germany. His prototype, named Multi-toe, is made up of thin layers of silicone and clear acrylic on top of a rigid glass sheet. Light beams shone into the acrylic layer bounce around inside until pressure from above allows them to escape. A camera below captures the light and registers an image of whatever has pressed down upon the floor.
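A hedged Python/OpenCV sketch of that sensing principle (not Baudisch's implementation): light only escapes the acrylic where something presses down, so bright pixels in the under-floor camera image mark contact. The threshold value is illustrative.

import cv2

def floor_contacts(camera_frame_gray, threshold=60):
    # Binary image of everything currently pressing on the floor, plus a count
    # of distinct contact blobs (feet, toe taps).
    _, pressed = cv2.threshold(camera_frame_gray, threshold, 255, cv2.THRESH_BINARY)
    n_labels, labels = cv2.connectedComponents(pressed)
    return pressed, n_labels - 1   # subtract the background label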
Some touchscreens already employ this technique, but the new version offers greater resolution, allowing the pattern of the tread on someone's shoes to be detected. Baudisch has already adapted it for the video game Unreal Tournament, with players leaning in different directions to move on screen, and tapping their toes to shoot. A virtual keyboard on the floor can also be activated with the feet.
Baudisch presented the work at the Conference on Human Factors in Computing Systems in Atlanta, Georgia, this week. He admits the system cannot easily be used on existing floors due to the need for underfloor cavities to house the cameras, but says future versions will address this.
Magazine "NewScientist" 24 April 2010
Humanoid robot set for space.
NASA is preparing to send its first humanoid robot into space. Robonaut first twitched to life in September 1999 and, after a decade of tests, the 140-kilogram R2 model will finally be launched to the International Space Station on the space shuttle Discovery's last mission in September.
With continual maintenance work needed on the ISS, the idea is to give the crew an assistant that never tires of undertaking mundane mechanical tasks - initially inside the craft but later outside it too.
R2 comprises a humanoid head and torso with highly dexterous arms and hands. It was developed by NASA in conjunction with roboticists at General Motors. After being bolted to a piece of ISS infrastructure, R2 can use the same tools, such as screwdrivers and wrenches, as the astronauts.
One reason for the mission, NASA says, is to see how Robonaut copes with the cosmic radiation and electromagnetic interference inside the space station.
The main challenge, though, will be to ensure the robot is safe to work with, as tools can fly off easily in microgravity, says Chris Melhuish of the Bristol Robotics Laboratory in the UK. "Robots have to be both physically and behaviourally safe," he says.
"That means torque control of limbs and tools, but also an ability to recognise human gestures to safely achive shared goals. These are serious hurdles NASA will need to overcome."
Magazine "NewScientist" 6 February 2010
A voice for the voiceless
It is now possible to "talk" to people who seem to be unconscious, by tapping into their brain activity.
THE inner voice of people who appear unconscious can now be heard. For the first time, researchers have struck up a conversation with a man diagnosed as being in a vegetative state. All they had to do was monitor how his brain responded to specific questions. This means that it may now be possible to give some individuals in the same state a degree of autonomy.
"They can now have some involvement in their destiny," says Adrian Owen of the University of Cambridge, who led the team doing the work.
In an earlier experiment, published in 2006, Owen's team asked a woman previously diagnosed as being in a vegetative state (VS) to picture herself carrying out one of two different activities. The resulting brain activity suggested she understood the commands and was therefore conscious.
Now Owen's team has taken the idea a step further. A man also diagnosed with VS was able to answer yes and no to specific questions by imagining himself engaging in the same activities.
The results suggest that it is possible to give a degree of choice to some people who have no other way of communicating with the outside world. "We are not just showing they are conscious, we are giving them a voice and a way to communicate," says neurologist Steven Laureys of the University of Liege in Belgium, Owen's collaborator.
When someone is in a VS, they can breathe unaided and have intact reflexes, but seem completely unaware. It is becoming clear, however, that some people who appear to be vegetative are in fact minimally conscious. They are in a kind of twilight state in which they may feel some pain, experience emotion and communicate to a limited extent. These two states can be distinguished from each other via bedside behavioural tests - but these tests are not perfect and can miss patients who are aware but unable to move. So researchers are looking for ways to detect consciousness with brain imaging.
In their original experiment, Owen and his colleagues used functional MRI to detect whether a woman could respond to two spoken commands, which were expected to activate different brain areas. On behavioural tests alone her diagnosis was VS, but the brain scan results were astounding. When asked to imagine playing tennis, the woman's supplementary motor area (SMA), which is concerned with complex sequences of movements, lit up. When asked to imagine moving around her house, it was the turn of the parahippocampal gyrus, which represents spatial locations.
Because the correct brain areas lit up at the correct time, the team concluded that the woman was modulating her brain activity to cooperate with the experiment and must have had a degree of consciousness.
In the intervening years, Owen, Laureys and their team repeated the experiment on 23 people in Belgium and the UK diagnosed as being in a VS. Four responded positively and were deemed to possess a degree of consciousness.
To find out whether a simple conversation was possible, the researchers selected one of the four - a 29-year-old man who had been in a car crash. They asked him to imagine playing tennis if he wanted to answer yes to questions such as: Do you have any sisters? Is your father's name Thomas? Is your father's name Alexander? And if the answer to a question was no, he had to imagine moving round his home.
The man was asked to think of the activity that represented his answer, in 10-second bursts for up to 5 minutes, so that a strong enough signal could be detected by the scanner. His family came up with the questions to ensure that the researchers did not know the answers in advance. What's more, the brain scans were analysed by a team that had never come into contact with the patient or his family.
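The decision rule this protocol implies is simple enough to sketch: average the signal in each of the two regions over the answering period and see which is stronger. The Python below is purely illustrative; region extraction, units and the tie-break threshold are my assumptions, not the team's analysis pipeline.

import numpy as np

def decode_answer(sma_timecourse, parahippocampal_timecourse, min_contrast=0.1):
    # "Yes" was signalled by imagined tennis (supplementary motor area),
    # "no" by imagined navigation (parahippocampal gyrus).
    sma = float(np.mean(sma_timecourse))
    phg = float(np.mean(parahippocampal_timecourse))
    if abs(sma - phg) < min_contrast:
        return None   # signal too weak to call either way
    return "yes" if sma > phg else "no"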
The team found that either the SMA or the parahippocampal gyrus lit up in response to five of the six questions. When the team ran these answers by his family, they were all correct, indicating that the man had understood the task and was able to form an answer. The group also asked healthy volunteers similar questions relating to their own families and found that their brains responded in the same way.
"I think we can be pretty confident that he is entirely conscious," says Owen. "He has to understand instructions, comprehend speech, remember what tennis is and how you do it. So many of his cognitive faculties have to have been intact."
That someone can be capable of all this while appearing completely unaware confounds existing medical definitions of consciousness, Laureys says. "We don't know what to call this; he just doesn't fit a definition."
Doctors traditionally base these diagnoses on how someone behaves: for example, whether or not they can glance in different directions in response to questions. The new results show that you don't need behavioural indications to identify awareness and even a degree of cognitive proficiency. All you need to do is tap into brain activity directly.
The work "changes everything", says Nicholas Schiff, a neurologist at Weill Cornell Medical College in New York, who is carrying out similar work on patients with consciousness disorders.
"Knowing that someone could persist in a state like this and not show evidence of the fact that they can answer yes/no questions should be extremely disturbing to our clinical pratice."
One of the most difficult questions you might want to ask someone is whether they want to carry on living. But as Owen and Laureys point out, the scientific, legal and ethical challenges for doctors asking such questions are formidable. "In purely practical terms, yes, it is possible," says Owen. "But it is a bigger step than one might immediately think."
One problem is that while the brain scans do seem to establish consciousness, there is a lot they don't tell us. "Just because they can answer a yes/no question does not mean they have the capacity to make complex decisions," Owen says.
Even assuming there is a subset of people who cannot move but have enough cognition to answer tough questions, you would still have to convince a court that this is so. "There are many ethical and legal frameworks that would need to be revised before fMRI could be used in this context," says Owen.
There are many challenges. For example, someone in this state can only respond to specific questions; they can't yet start a conversation of their own. There is also the prospect of developing smaller devices to make conversation more frequent, since MRI scans are expensive and take many hours to analyse.
In the meantime, you can ask someone whether they are in pain or would like to try new drugs that are being tested for their ability to bring patients out of a vegetative state. "For the minority of patients that this will work for, just for them to exercise some autonomy is a massive step forward - it doesn't have to be at the life or death level," Owen says.
Magazine "NewScientist" 20 February 2010
Even in the virtual world, men judge women on looks.
HOW is a female avatar supposed to get fair treatment in the virtual world? They should rely on human females - men can't help but be swayed by looks.
Thanks to video games and blockbuster movies, people are increasingly engaging with avatars and robots. So Karl MacDorman of Purdue School of Engineering and Technology in Indianapolis, Indiana, decided to find out how people treated avatars when faced with an ethical dilemma. Does an avatar's lack of humanity mean people fail to empathise with them? The answer seems to depend on gender.
He presented 682 volunteers with a dilemma modified from a medical ethics training programme. Playing the role of the doctor, they were faced with the female avatar, Kelly Gordon, pleading with them not to tell her husband at his next check-up that she had contracted genital herpes. The dilemma is intended to make medical students consider issues like doctor-patient confidentiality, not to produce a right or wrong answer, says MacDorman.
Gordon was presented to the volunteers in one of four different ways: either as a human actress superimposed on a computer-generated (CG) background or as a CG avatar - and in each case either edited to move smoothly or in a jerky, unnatural way.
Overall, women responded more sympathetically to Gordon, with 52 per cent acceding to her request compared with 45 per cent of men. But whereas women's attitudes were consistent however Gordon was presented, the male volunteers' attitudes swung sharply. The two human versions got a far more sympathetic hearing than their avatar counterparts. "Clearly, presentational factors influence people's decisions of moral and ethical consequence," says MacDorman. "The different response from volunteers could suggest men showed more empathy towards characters that they see as a potential mate," he says.
However, Jesse Fox, a human-computer interaction researcher at Stanford University in California, who has studied female characterisation in virtual environments, believes the less favourable attitude shown by men towards the CG Gordon may be explained by the fact that the avatar was more sexualised than the human one - with a bare midriff and fuller breasts.
"Sexualised representations of women are often judged to be dishonest, or 'loose', and more so by men than by women. This could explain the finding, especially in a situation in which you're talking about sexually transmitted diseases," she says.
The study will be published in a forthcoming edition of the journal Presence.
Magazine "NewScientist" 3 Januari 2010
Microsoft ready to make games controllers obsolete.
A LONG-lived videogaming skill could be on the way out this year as Microsoft hones an add-on to its Xbox 360 console aimed at making button-studded games controllers obsolete. The device, called Natal after the city in northern Brazil, allows players to control a game using only their body movements and voice.
Microsoft unveiled Natal in June 2009 at the E3 games industry expo in Los Angeles, but revealed little about how it works. Now the company has allowed New Scientist access to the device and its creators to discover more details.
A player standing anywhere between 0.8 and 4 metres from Natal is illuminated with infrared light. A monochrome video camera records how much of that light they reflect, using the brightness of the signal to approximate their distance from the device and capture their movements in 3D.
This means Natal doesn't require users to wear markers on their body - unlike the technology used by movie studios to animate CGI figures.
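Taken literally, the description above (brightness of reflected infrared as a proxy for distance) could be sketched as below. The fall-off model and constants are toy assumptions for illustration only; real depth cameras are considerably more sophisticated.

import numpy as np

def depth_from_ir(ir_frame, k=2.0, eps=1e-3):
    # Treat brightness as falling off with the square of distance, so
    # distance ~ k / sqrt(intensity). Clipped to Natal's stated working range.
    intensity = np.asarray(ir_frame, dtype=float) / 255.0
    return np.clip(k / np.sqrt(intensity + eps), 0.8, 4.0)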
Motion capture normally requires massive processing power, and paring down the software to run on an everyday games console was a serious challenge, says Natal's lead developer, Alex Kipman. "Natal has to work on the existing hardware without taking processing power away from the games developers."
Microsoft collected "terabytes" of data of people in poses likely to crop up during game play, both in motion capture studios and their own homes. Frames from the home videos were manually labelled to identify key body parts, and the data was then fed into "expert system" software running on a powerful cluster of computers. The result was a 50-megabyte software package that can recognise 31 different body parts in any video frame.
"When we train this 'brain' we are telling it: this is the head, this is the shoulder. And we're doing that over millions of frames," says Kipman. "When it sees a new image it can tell you the probability it's seeing a certain body part based on that historical information."
Natal also includes software that has a basic understanding of human anatomy. Using its knowledge that, for example, hands are connected to arms, which are attached to shoulders, it can refine its guesses about body pose to recognise where body parts are even when they are hidden from Natal's camera.
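A hedged sketch of those two stages, treating the trained classifier as a black box: take the most probable body part for each pixel, then pull anatomically implausible joint estimates back into range. The array shapes and the forearm limit are illustrative assumptions, not Natal internals.

import numpy as np

MAX_FOREARM_M = 0.45   # illustrative anatomical limit, in metres

def most_likely_parts(part_probabilities):
    # part_probabilities: (height, width, 31) output of the trained classifier.
    return np.argmax(part_probabilities, axis=-1)

def constrain_hand(hand_xyz, elbow_xyz):
    # If the estimated hand is further from the elbow than a forearm allows,
    # pull it back along the same direction.
    hand = np.asarray(hand_xyz, dtype=float)
    elbow = np.asarray(elbow_xyz, dtype=float)
    offset = hand - elbow
    dist = np.linalg.norm(offset)
    if dist > MAX_FOREARM_M:
        return elbow + offset * (MAX_FOREARM_M / dist)
    return hand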
"It correctly positions your hand even if it's held behind your back," Kipman says. "It knows the hand can only be in one place." That's important because during multiplayer games there won't always be a clear view of both players at all times.
He says Natal consumes just 10 to 15 per cent of the Xbox's computing resources and it can recognise any pose in just 10 milliseconds. It needs only 160 milliseconds to latch on to the body shape of a new user stepping in front of it.
The system locates body parts to within a 4-centimetre cube, says Kipman. That's far less precise than lab-based systems or the millimetre precision of Hollywood motion capture. But Douglas Lanman, who works on markerless 3D interaction at Brown University in Providence, Rhode Island, and is not involved with Natal, says that this will likely be accurate enough for gamers.
Lanman is watching closely to see what kind of games Natal makes possible, and how they are received. "Will users find them as compelling as they found Wii games? Is it important to have physical buttons? We'll know soon."
Those kind of questions, and a desire to move away from the controller-focused interaction that has dominated for decades, are central to Natal, Kipman says. "We think input using existing controllers is the barrier, and by erasing that we can realistically say: all you need to play is life experience."
Saturday, 8 May 2010
Magazine "NewScientist" 9 Januari 2010
Consciousness, not yet explained.
We won't crack that mystery any time soon, argues Ray Tallis, because physical science can only do its work by discarding the contents of consciousness.
MOST neuroscientists, philosophers of the mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the preciseness of correlations between indirectly observed neural activity and different mental functions, states or experiences.
This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, which argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained."
Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness.
This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.
Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.
This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H2O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails because the levels at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff, on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels.
We cannot therefore conclude that when we see what seem to be neural correlates of consciousness, we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object.
Biophysical science explains how the light gets in but not how the gaze looks out.
Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity, invoking quantum coherence (the cytoskeletal micro-tubules of Stuart Hameroff at the University of Arizona and Roger Penrose at the University of Oxford), electromagnetic discharges in the brain (the late Francis Crick).
These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct.
My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate.
And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion".
There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.
I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurements, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience.
Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms.
To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist, then, light is not in itself bright or colourful; it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically.
In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearances/qualia, the redness of red wine or the smell of a smelly dog.
Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and towards quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.
Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness.
Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.
Thursday, 6 May 2010
Magazine "NewScientist" 31 October 2009
They know what you're thinking
What you look at or recall can now be "read" from a brain scan in real time, but is it mind reading?
WHAT are you thinking about? Which memory are you reliving right now? You may think that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.
In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering.
Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading "neural decoder" at the University of California, Berkeley, presented one of the field's most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity. Others at the same meeting claimed that such neural decoding could be used to read memories and future plans - and even to diagnose eating disorders.
Understandably, such developments are raising concerns about "mind reading" technologies, which might be exploited by advertisers or oppressive governments. Yet despite - or perhaps because of - the recent progress in the field, most researchers are wary of calling their work mind-reading. Emphasising its limitations, they call it neural decoding.
They are quick to add that it may lead to powerful benefits, however. These include gaining a better understanding of the brain and improved communication with people who can't speak or write, such as stroke victims or people with neurodegenerative diseases. There is also excitement over the possibility of being able to visualise something highly graphical that someone healthy, perhaps an artist, is thinking.
So how does neural decoding work? Gallant's team drew international attention last year by showing that brain imaging could predict which of a group of pictures someone was looking at, based on activity in their visual cortex. But simply decoding still images alone won't do, says Nishimoto. "Our natural visual experience is more like movies."
Nishimoto and Gallant started their most recent experiment by showing two lab members 2 hours of video clips culled from DVD trailers, while scanning their brains. A computer program then mapped different patterns of activity in the visual cortex to different visual aspects of the movies such as shape, colour and movement. The program was then fed over 200 days' worth of YouTube clips, and used the mappings it had gathered from the DVD trailers to predict the brain activity that each YouTube clip would produce in the viewers.
Finally, the same two lab members watched a third, fresh set of clips which were never seen by the computer program, while their brains were scanned. The computer program compared these newly captured brain scans with the patterns of predicted brain activity it had produced from the YouTube clips. For each second of brain scan, it chose the 100 YouTube clips it considered would produce the most similar brain activity - and then merged them. The result was continuous, very blurry footage, corresponding to a crude "brain read-out" of the clip that the person was watching.
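That selection-and-merge step lends itself to a short sketch. Everything below (array shapes, the correlation measure, one representative frame per clip) is an assumption made for illustration, not the team's actual pipeline.

import numpy as np

def reconstruct_second(measured_activity, predicted_activities, clip_frames, top_k=100):
    # measured_activity:    voxel vector for one second of viewing
    # predicted_activities: (n_clips, n_voxels) predicted responses to the library clips
    # clip_frames:          (n_clips, height, width) one representative frame per clip
    m = measured_activity - measured_activity.mean()
    p = predicted_activities - predicted_activities.mean(axis=1, keepdims=True)
    sims = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)
    best = np.argsort(sims)[-top_k:]          # the 100 most similar clips
    return clip_frames[best].mean(axis=0)     # merge them into a blurry read-out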
In some cases, this was more successful than others. When one lab member was watching a clip of the actor Steve Martin in a white shirt, the computer program produced a clip like a moving, human-shaped smudge, with a white "torso", but the blob bears little resemblance to Martin, with nothing corresponding to the moustache he was sporting.
Another clip revealed a quirk of Gallant and Nishimoto's approach: a reconstruction of an aircraft flying directly towards the camera - and so barely seeming to move - with a city skyline in the background omitted the plane but produced something akin to a skyline. That's because the algorithm is more adept at reading off brain patterns evoked by watching movement than those produced by watching apparently stationary objects.
"It's going to get a lot better," says Gallant. The pair plan to improve the reconstruction of movies by providing the program with additional information about the content of the videos.
Team member Thomas Naselaris demonstrated the power of this approach on still images at the conference. For every pixel in a set of images shown to a viewer and used to train the program, researchers indicated whether it was part of a human, an animal, an artificial object or a natural one. The software could then predict where in a new set of images these classes of objects were located, based on brain scans of the picture viewers.
Movies and pictures aren't the only things that can be discerned from brain activity, however. A team led by Eleanor Maguire and Martin Chadwick at University College London presented results at the Chicago meeting showing that our memory isn't beyond the reach of brain scanners.
A brain structure called the hippocampus is critical for forming memories, so Maguire's team focused its scanner on this area while 10 volunteers recalled videos they had watched of different women performing three banal tasks, such as throwing away a cup of coffee or posting a letter. When Maguire's team got the volunteers to recall one of these three memories, the researchers could tell which one the volunteer was recalling with an accuracy of about 50 per cent.
That's well above chance, says Maguire, but it is not mind reading because the program can't decode memories that it hasn't already been trained on.
"You can't stick somebody in a scanner and know what they're thinking." Rather, she sees neural decoding as a way to understand how the hippocampus and other brain regions form and recall a memory.
Maguire could tackle this by varying key aspects of the clips - the location or the identity of the protagonist, for instance - and see how those changes affect their ability to decode the memory.
She is also keen to determine how memory encoding changes over the weeks, months or years after memories are first formed.
Meanwhile, decoding how people plan for the future is the hot topic for John-Dylan Haynes at the Bernstein Centre for Computational Neuroscience in Berlin, Germany. In work presented at the conference, he and colleague Ida Momennejad found they could use brain scans to predict intentions in subjects planning and performing simple tasks. What's more, by showing people, including some with eating disorders, images of food, Haynes's team could determine which suffered from anorexia or bulimia via brain activity in one of the brain's "reward centres".
Another focus of neural decoding is language. Marcel Just at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his colleague Tom Mitchell reported last year that they could predict which of two nouns - such as "celery" and "airplane" - a subject is thinking of, at rates well above chance. They are now working on two-word phrases.
Their ultimate goal of turning brain scans into short sentences is distant, perhaps impossible. But as with the other decoding work, it's an idea that's as tantalising as it is creepy.
Magazine "NewScientist" 31 October 2009
Smart walls control the room
WHO says wallflowers don't grab people's attention? A new type of electronically enhanced wallpaper promises not only eye-pleasing designs, but also the ability to activate lamps and heaters - and even control music systems.
Interactive walls are nothing new, but most designs rely on expensive sensors and power-hungry projectors to make a wall come alive. Now the Living Wall project, led by Leah Buechley at the Massachusetts Institute of Technology's Media Lab, offers an alternative by using magnetic and conductive paints to create circuitry in attractive designs.
When combined with cheap temperature, brightness and touch sensors, LEDs and Bluetooth, the wall becomes a control hub able to talk to nearby devices. Touch a flower to turn on a lamp, for example, or set heaters to fire up when the room gets cold.
"Our goal is to make technologies that users can build on and change without needing a lot of technical skill," says Buechley.
To create the wallpaper, the team start with wafer-thin steel foil sandwiched between layers of paper which are coated with magnetic paint - acrylic infused with iron particles. On top of this base they paint motifs such as flowers and vines using conductive paint, which uses copper particles rather than iron. The designs form circuitry onto which sensors, lights and other elements can be attached.
Magazine "NewScientist" 3 October 2009
Virtual cities get real bustle
WHILE virtual globes such as Google Earth or Microsoft Virtual Earth provide great bird's-eye views of urban landscapes, they show ghost towns - empty streets free of traffic or people.
Now a system that can draw on real-time video from traffic and surveillance cameras, and weather sensors, is set to change that. It fills virtual towns with cars and people and could even let online spectators zoom in on live sports events.
Computer scientists at Georgia Institute of Technology in Atlanta use video feeds from cameras around their city. Their augmented version of Google Earth incorporates sports scenes, traffic flows, the march of pedestrians and weather.
The system looks out for specific categories of moving objects in a video feed. Any vehicle moving along a street is classified as a car and replaced with a randomly chosen 3D car model. Pedestrians are replaced with human figures animated with stock motion-capture data to make them walk.
Although surveillance cameras are used, no one's privacy is at stake because the models obscure identifying details such as a car's colour and licence plates, says Kihwan Kim, who led the research.
"Every moving object is rendered symbolically,"says Kim.
Sports action can be recreated with less regard to privacy, using multiple camera views to create 3D models of the players.
Magazine "NewScientist" 3 October 2009
Second Life gets a reality check
With eye-popping sums at stake in the virtual economy, intellectual property disputes are heading for the courts
WHILE global economies have endured a torrid time of late, business is booming in the virtual economies of Second Life, Facebook and Everquest. As the economic boundaries between virtual and real worlds continue to blur, the supposedly liberated virtual worlds are now running up against some very real-world legal problems.
Financial analyst Piper Jaffray estimates that US citizens will spend $621 million in 2009 in virtual worlds; estimates of the Asian market are even larger. Research firm Plus Eight Star puts spending at $5 billion in the last year.
Over in Second Life, trade remains robust. The value of transactions between residents in the second quarter of this year was $144 million, a year-on-year increase of 94 per cent. With its users swapping virtual goods and services worth around $600 million per year, Second Life has the largest economy of any virtual world - which exceeds the GDP of 19 countries, including Samoa.
Thousands of users make money selling virtual goods from clothing and furniture to art and gazebos, as well as services such as virtual wedding planning, translation or architecture. Several hundred make thousands of dollars from the trade; the most successful have become millionaires.
Yet all is not rosy in the virtual Garden of Eden (see "Trouble breaks out in paradise"). Just as the digital revolution has facilitated piracy and copyright theft in other spheres, those who make a living running businesses in Second Life have seen their profits eroded by users who have found ways to copy their intellectual property (IP).
The Second Life case is believed to be the first time residents of a large virtual world have sued its owner for alleged IP rights violations by other users. But as the dollar value of virtual economies climbs, it seems likely others will head to real-world courts to settle disputes, says James Grimmelmann, associate professor at the New York Law School. "As virtual worlds are becoming more and more important, and sites and games become more immersive, these kinds of cases are going to matter more," he says.
The case will also test the US Digital Millennium Copyright Act (DMCA), which grants the providers of online services some degree of immunity from prosecution for copyright infringements perpetrated by their users. Similar exemptions are provided in Europe under the Electronic Commerce Directive.
"The law in this area is pretty good and should be protecting people who've got (intellectual property) or who are writing unique code, but the problem is policing it,"says Mark Stephens, a partner at London-based law firm Finers Stephens Innocent. "So increasingly people are trying to pin liability on the gatekeepers.
The lawsuit forms part of a group of related cases in which those who host online content are being targeted for the misdemeanours of their users.
Last month a US federal district court dismissed a complaint filed by record company giant Universal Music Group, ruling that the DMCA did provide video site Veoh with immunity from liability for copyright violations committed by its users.
Online service providers such as Second Life's parent company Linden Lab are likely to argue they have little control over or knowledge about users' activities, says Grimmelmann. "My general expectation is that they probably do have immunity under the act."
Linden Lab has already taken some steps towards protecting the IP of its users. In August it issued a "content management roadmap", including plans for improvements to the Second Life IP complaints process, new industry-standard tools to prevent the copying of content and IP infringement, a trusted-seller scheme and more IP outreach work.
Speaking in a panel discussion at last week's Virtual Goods conference in San Jose, California, Tom Hale, chief product officer at Linden Lab, said: "Rest assured we feel very strongly about the rights of our IP creators and holders and want to protect them as much as we can in the virtual world. We have a challenge between our desire to have an open platform, and also our obligation to our residents, whether they be merchants or consumers, or creating for their own interest."
Only time will tell whether Linden will implement enough changes to placate its critics or whether the issue will be settled in court.
What is clear is that with so much money at stake, the case will be watched very carefully by a great many people.
Saturday, 1 May 2010
Magazine " NewScientist" 14 November 2009
Me, Myself, and my avatar.
WHEN you slip into the skin of an avatar what does your brain make of your virtual self?
To find out, Kristina Caudle at Dartmouth College in Hanover, New Hampshire, asked 15 veterans of the online game World of Warcraft to rate how well words such as "intelligent" and "jealous" described themselves, their avatars, their best friends and their WoW guild leader, while having their brains scanned.
The two regions that lit up most during thoughts of self showed similar activity when people thought of their avatar to when they thought of their real selves. This may be why virtual worlds are so riveting, says Caudle, who presented the work at a Society for Neuroscience meeting in Chicago.
Magazine " NewScientist" 21 November 2009
Built-in circuits turn contact lenses into graphics displays.
PERSONAL electronic devices tend to get ever smaller, which is great until they become so small that their screens are impossible to read. That problem could be solved by doing away with the screen, and instead projecting information into the user's eye.
That's the goal of Babak Parviz and colleagues at the University of Washington in Seattle, who hit on the idea of generating images from within a contact lens. Parviz's research involves embedding nanoscale and microscale electronic devices in materials like paper or plastic. He also happens to wear contact lenses. "It was a matter of putting the two together," he says. The polymer used in lenses cannot withstand the chemicals or temperatures typically used to manufacture microchips, but Parviz has nevertheless previously managed to embed nanoscale electronic circuits into contact lenses. Now he has managed to power those circuits by harvesting radio waves.
Parviz and his team first embedded the circuitry for a micro light-emitting diode (LED) into a contact lens by encasing it in a biocompatible material and then placing it into crevices carved into the lens. The 330-micrometre LED is then powered via a loop antenna that picks up power beamed from a nearby radio source.
The team has successfully tested the lens by fitting it to a rabbit, to demonstrate that it can be worn without damaging the wearer's eye or the circuitry - although the lens was not powered up in the test. The components can be integrated into the lens without obscuring the wearer's view, the researchers claim.
Parviz says that future versions will be able to harvest power from a user's cellphone as it beams information to the lens. They will also have more pixels and an array of microlenses to focus the images so that they appear suspended half a metre in front of the wearer's eyes.
The device could be used to display many kinds of images, Parviz says, including subtitles when conversing with a foreign-language speaker, directions when travelling in unfamiliar territory and captioned photographs. The lens could also serve as a head-up display for pilots or gamers, he adds.
"A contact lens that allows virtual graphics to be seamlessly overlaid on the real world could provide a compelling augmented reality experience," says Mark Billinghurst, director of the Human Interface Technology Laboratory in Christchurch, New Zealand. He sees this prototype as an important first step, though he warns that it may be years before the lens becomes commercially available.
The team will present their prototype at the Biomedical Circuits and Systems conference in Beijing, China, this month.
Magazine "Fiberarts" Nov/Dec 2009
Imagination Activist
Joyful, fantastic, mesmerizing: all words that leap to mind when attempting to describe the wearable - and audible - creations of Nick Cave.
SOUNDSUIT (static and active), 2009; human hair, metal armature; 108"x36"x12".
SOUNDSUIT, 2009; found abacus, fabric, buttons, metal armature; appliqued; 84"x40"x18".
SOUNDSUIT (with detail), 2006, spinning tops, noise makers, embellished fabrics, metal armature; pieced, appliqued; 8"x38"x36".
Magazine "Fiberarts" Sep/Oct 2009
Unsolved Mysteries
Several years ago at an estate sale, bead artist Teresa Sullivan purchased a 1920s-era necklace made of jet-glass beads. The piece, in need of repair, came with an envelope that contained extra beads and a mysterious note that read: "Lenore, inside are beads from the tassel. Maybe you can fix it. This is that one of Aunt Bess's you liked. I have plenty of other jewelry of hers for the kids. I do not want Fred to know I sent it. So don't mention it... H.B." This distant request is now rooted in time in Sullivan's Don't Tell Fred (2008).
"When I reread the note," Sullivan says, "I wondered: Did Fred ever find out? Did the kids get the rest of Aunt Bess's jewelry? What other little white lies was this woman telling Fred?" Intrigued, Sullivan decided to make a new piece around the note, gradually collecting beads and other found objects. Nearly all of Sullivan's beaded jewelry, sculpture, and wall pieces tell stories. Repairing and reinventing this piece was a way for her to elaborate on a existing tale of secrecy and deceit.
Sullivan transformed the original necklace by adding narrative elements, such as a small tin clown. Bound into a cagelike form, it evokes the idea of the tarot's "holy fool" that is connected to random events. A hollow, beaded bluebird figure, perched higher up, symbolizes air and truth. At the centre, a luminous 100-year-old glass trade bead punctuates the passages of time. And the original note, carefully beaded into a protective case of clear plastic, serves as a reminder of life's eternal mysteries.
Teresa Sullivan, Don't Tell Fred (with details), 2008; glass beads, found objects; sculptural peyote stitch, beadwork; 11"x9"x2".
Magazine "Fiberarts" Jan/Feb 2009
Being Awed by WOW
For twenty years artists from around the globe have flocked to New Zealand to compete in a fashion and creativity extravaganza known as the Montana World of WearableArt Awards Show.
Margarete Palz (Germany), High Societies Visit Bill Hammond's Paradise, 2008; photo paper, fabric. 2008 Winner American Express Open Section and Bio Paints Runner-Up to Supreme WOW Award.
Karen Gurney (New Zealand), Synchronized Silliness, 2008; dyed calico, Lycra, Dacron, sequins, paint. 2008 Commended HP Children's Section.
Hannah Gibbs and Stephen Loy (New Zealand), Perfect Pins, 2008; plywood, plastic sheet, fabric, dowel, paint, cotton wadding. 2008 Booker Spalding First Time Entrant Award and Commended American Express Open Section.
Nadine Jaggi (New Zealand), Ornitho-Maia (with detail), 2008; leather; wet-molded, embossed, carved, hand-dyed, copper foiled, handsewn. 2008 Winner Supreme Montana WOW Award and Winner Air NZ South Pacific Section.
Magazine "Craft Arts International No. 74"
Organic wearable forms
As profoundly as Nora Fok experiences the inspiration of the natural world with all its romantic lyricism, it is the sciences of biology and mathematics that exert the most powerful influence on her perception of the universe.
"Walking Onion", 2006, headpiece, knitted and knotted clear nylon
"Bubble Bath", 2001, head and neckpiece, clear nylon monofilament
"Mathemagic", 2005, neckpiece and convertible wristpiece, looped dyed nylon
"Physalis", 2006, earrings, knitted nylon, skeletonised physalis shells
"Princess Pagoda", 2006, headpiece, woven clear nylon, ht 45cm
"Calculator", 2002, neckpiece, woven, knitted and pigmented nylon
"TD Moth", 2008, mango seeds, date stone, artichoke bracts, 11x10x6cm
"White Lace Fly", 2006, quinoa grains, grass seed pod, physalid, 5x3cm
"Armadillo 3", 2006, hood/neckpiece, woven clear nylon, ht 30cm
"Leaf Insect" (side view), 2006, corn cups, avocado stone, tamarix, leaves, w 10cm
"Fountain", 2004, neckpiece, beaded clear nylon with pearls, diam. 56cm
"Sweet Cherries", 2007-8, neck piece, knitted and knotted dyed nylon
"Red Hot Chillies", 2006, earrings, knitted dyed nylon, chilli seeds
"Three Magnolia Flower Buds", 2006, pseudo flower buds, goose egg shells, dried artichoke bracts and poppyseed pods, length 18cm
"Golden Glow", 2004, wristpiece, dyed nylon, beads, diam. 25,5 cm
Magazine "Craft Arts International No.72"
Sculpture for the body
Taking the human body as her central theme, American artist Marjorie Schick creates and explores a wide range of dramatic, theatrical forms, uniting graphic appearance and complex wearable constructions.
"Schiaparelli's Circles", 2005, painted wood, canvas, thread stitched and painted, 76x76x3,8cm
"Helmet Mask", 1968, painted papier mache and leather, formed and painted, 66x40,6x30,5cm
"Amenhotep I", 2002, pair of collars; dowel collar, painted wood and cord, 64x64x2,5cm; feather collar, painted papier mache, wood, thread, 69x16x1,5cm
"Fifty States", 2000, commemorative body sculpture with earrings, painted canvas, wood and cord, stitched, riveted and painted, size from shoulder to bottom, 138x81x2cm
"Chagall's Circles", 2006, necklace, painted canvas, wood and thread, stitched, painted and tied, 104x104x2,5cm. With cords extended 198cm
"Tribute to Elsa Schiaparelli", 2005, sash-shaped chatelaine, painted papier mache, wood, felt, leather, nylon cord, nickel wire, painted and tied, 76x76x3,8cm
Monday, 1 March 2010
Project RCA "Bare"
"Bare" is a conductive ink that is applied directly onto the skin allowing the creation of custom electronic circuitry. This innovative material allows users to interact with electronics through gesture, movement and touch. Bare can be applied with a brush, stamp or spray and is nontoxic and temporary. Application areas include dance, music, computer interfaces, communication and medical devices. Bare is an intuitive and non-invasive technology which will allow users to bridge the gap between electronics and the body.
Haptic technology
Haptic technology has made it possible to investigate in detail how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. These new research tools contribute to the understanding of how touch and its underlying brain functions work.
The word haptic, from the Greek ἁπτικός (haptikos), means pertaining to the sense of touch and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to "contact" or "touch".
History
One of the earliest applications of haptic devices is in large modern aircraft that use servomechanism systems to operate the control surfaces. Such systems tend to be "one-way": forces applied aerodynamically to the control surfaces are not perceived at the controls, so the missing forces are simulated with springs and weights. In earlier, lighter aircraft without servo systems, aerodynamic buffeting was felt in the pilot's controls as the aircraft approached a stall, a useful warning of a dangerous flight condition. This control shake is not felt when servo control systems are used. To replace the missing cue, the angle of attack is measured, and when it approaches the critical stall point a "stick shaker" (an unbalanced rotating mass) is engaged, simulating the effects of a simpler control system. This is known as haptic feedback. Alternatively, the servo force may be measured and that signal directed to a servo system on the control, a method known as force feedback. Force feedback has been implemented experimentally in some excavators. This is useful when excavating mixed materials such as large rocks embedded in silt or clay, as it allows the operator to "feel" and work around unseen obstacles, enabling significant increases in productivity.
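At its core, the stick shaker is a threshold test on the measured angle of attack. A minimal Python sketch of that logic, with purely illustrative angles (the real critical values depend on the aircraft):

CRITICAL_AOA_DEG = 15.0    # assumed stall angle of attack, illustrative only
SHAKER_MARGIN_DEG = 2.0    # assumed warning margin before the stall

def stick_shaker_engaged(angle_of_attack_deg: float) -> bool:
    """Return True when the stick shaker should run."""
    return angle_of_attack_deg >= CRITICAL_AOA_DEG - SHAKER_MARGIN_DEG

if __name__ == "__main__":
    for aoa in (5.0, 12.5, 13.2, 16.0):
        state = "ON" if stick_shaker_engaged(aoa) else "off"
        print(f"AoA {aoa:5.1f} deg -> shaker {state}")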
Current Applications of Haptic Technology
Teleoperators and Simulators
Teleoperators are remote controlled robotic tools, and when contact forces are reproduced to the operator, it is called "haptic teleoperation". The first electrically actuated teleoperators were built in the 1950s at the Argonne National Laboratory in the United States, by Raymond Goertz, to remotely handle radioactive substances. Since then, the use of "force feedback" has become more widespread in all kinds of teleoperators such as underwater exploration devices controlled from a remote location.
When such devices are simulated using a computer (as they are in operator training devices) it is useful to provide the force feedback that would be felt in actual operations. Since the objects being manipulated do not exist in a physical sense, the forces are generated using haptic (force generating) operator controls. Data representing touch sensations may be saved or played back using such haptic technologies.
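The idea that touch data can be saved and played back can be sketched with a short, hypothetical Python example that logs force samples from a read callback and later pushes them back out to an actuator callback; the callbacks, units and sampling rate are assumptions rather than any particular device API.

import json
import random

def record(read_force, duration_s=1.0, rate_hz=100):
    """Sample a force value (in newtons) at a fixed rate and return the log."""
    samples = []
    period = 1.0 / rate_hz
    for i in range(int(duration_s * rate_hz)):
        samples.append({"t": i * period, "force_n": read_force()})
    return samples

def replay(samples, apply_force):
    """Push recorded forces back out to an actuator callback."""
    for s in samples:
        apply_force(s["force_n"])

if __name__ == "__main__":
    log = record(lambda: random.uniform(0.0, 2.0), duration_s=0.05)  # dummy sensor
    print(json.dumps(log[:3], indent=2))
    replay(log, lambda f: None)  # replace the callback with real actuator output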
Haptic simulators are currently used in medical simulators and flight simulators for pilot training (2004).
Computer and Video Games
Some simple haptic devices are common in the form of game controllers, in particular joysticks and steering wheels. At first such features were optional components (like the Nintendo 64 controller's Rumble Pak); now many newer-generation console controllers and some joysticks have them built in (such as Sony's DualShock technology). An example is a simulated automobile steering wheel programmed to provide a "feel" of the road: as the user makes a turn or accelerates, the wheel responds by resisting turns or slipping out of control. Another force-feedback concept is the ability to change the temperature of the controlling device, which could be especially useful during prolonged play. However, due to the high cost of such technology and the power it would drain, the closest many manufacturers have come to realizing this concept has been to install air holes or small fans in the device to ventilate the user's hands while playing.
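The "road feel" described above is often approximated as a speed-dependent centring spring with a little damping. A minimal sketch, with assumed constants that stand in for a real wheel's tuning:

SPRING_NM_PER_RAD = 2.5      # assumed centring stiffness
DAMPING_NM_S_PER_RAD = 0.4   # assumed damping against rapid turns

def wheel_torque(angle_rad: float, angular_vel_rad_s: float, speed_kmh: float) -> float:
    """Return a resisting torque; the centring force grows with vehicle speed."""
    speed_gain = 1.0 + speed_kmh / 100.0
    return -(SPRING_NM_PER_RAD * speed_gain * angle_rad
             + DAMPING_NM_S_PER_RAD * angular_vel_rad_s)

if __name__ == "__main__":
    print(round(wheel_torque(0.3, 0.1, 80.0), 3), "N·m")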
In 2007, Novint released the Falcon, the first consumer 3D touch device with high-resolution three-dimensional force feedback, allowing the haptic simulation of objects, textures, recoil, momentum and the physical presence of objects in games.
Mobile Consumer Technologies
Tactile haptic feedback is becoming common in cellular devices. Handset manufacturers like LG and Motorola are including different types of haptic technologies in their devices. In most cases this takes the form of vibration response to touch. Alpine Electronics uses a haptic feedback technology named PulseTouch on many of their touch-screen car navigation and stereo units.
Haptics in Virtual Reality
Haptics are gaining widespread acceptance as a key part of virtual reality systems, adding the sense of touch to previously visual-only solutions. Most of these solutions use stylus-based haptic rendering, where the user interfaces to the virtual world via a tool or stylus, giving a form of interaction that is computationally realistic on today's hardware. Systems are also being developed to use haptic interfaces for 3D modeling and design that are intended to give artists a virtual experience of real interactive modeling. Researchers from the University of Tokyo have developed 3D holograms that can be "touched" through haptic feedback using "acoustic radiation" to create a pressure sensation on a user's hands. The researchers, led by Hiroyuki Shinoda, currently have the technology on display at SIGGRAPH 2009 in New Orleans.
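A common way to generate the force for the stylus-based rendering mentioned above is penalty-based rendering: whenever the tool tip penetrates a virtual surface, a spring force pushes it back out along the surface normal. A minimal sketch for a virtual sphere; all names and constants are illustrative and not taken from any particular toolkit:

import math

SPHERE_CENTRE = (0.0, 0.0, 0.0)
SPHERE_RADIUS = 0.05      # metres, assumed
STIFFNESS = 800.0         # N/m, assumed

def render_force(tip):
    """Return the (fx, fy, fz) force to command for a stylus tip position."""
    dx, dy, dz = (tip[i] - SPHERE_CENTRE[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    penetration = SPHERE_RADIUS - dist
    if penetration <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)          # outside the object: free space
    scale = STIFFNESS * penetration / dist
    return (dx * scale, dy * scale, dz * scale)

if __name__ == "__main__":
    print(render_force((0.0, 0.0, 0.045)))   # tip slightly inside the sphere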
Research
Some research has been done into simulating different kinds of tactile sensation by means of high-speed vibrations or other stimuli. One device of this type uses a pad with an array of pins, where the pins vibrate to simulate a surface being touched. While this does not have a realistic feel, it does provide useful feedback, allowing discrimination between various shapes, textures, and resiliencies.
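One simple way to drive such a pin array is to map a height map of the virtual surface onto per-pin vibration amplitudes, so taller features are felt more strongly. A hypothetical sketch, with grid size and scaling chosen purely for illustration:

def pin_amplitudes(height_map, max_amp=1.0):
    """Normalise a 2D list of surface heights to per-pin drive amplitudes."""
    peak = max(max(row) for row in height_map) or 1.0
    return [[max_amp * h / peak for h in row] for row in height_map]

if __name__ == "__main__":
    surface = [[0.0, 0.2, 0.0],
               [0.2, 1.0, 0.2],
               [0.0, 0.2, 0.0]]   # a single bump in the middle
    for row in pin_amplitudes(surface):
        print([round(a, 2) for a in row])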
Several haptics APIs have been developed for research applications, such as Chai3D, OpenHaptics and H3DAPI (Open Source).
Medicine
Various haptic interfaces for medical simulation may prove especially useful for training in minimally invasive procedures (laparoscopy/interventional radiology) and for remote surgery using teleoperators. A particular advantage of this type of work is that the surgeon can perform many more operations of a similar type, and with less fatigue. It is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for their patients. Haptic interfaces are also used in rehabilitation robotics.
In ophthalmology, "haptic" refers to a supporting spring, two of which hold an artificial lens within the lens capsule (after surgical removal of cataracts).
A 'Virtual Haptic Back' (VHB) is being successfully integrated in the curriculum of students at the Ohio University College of Osteopathic Medicine. Research indicates that VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems via touch). The VHB simulates the contour and compliance (reciprocal of stiffness) properties of human backs, which are palpated with two haptic interfaces (SensAble Technologies, PHANToM 3.0).
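Since compliance is the reciprocal of stiffness, a palpation simulator of this kind can convert a measured indentation into a reaction force with F = x / c. A tiny illustrative sketch; the numbers are assumptions, not values from the VHB:

def palpation_force(indentation_m: float, compliance_m_per_n: float) -> float:
    """Reaction force (N) for a given indentation and tissue compliance."""
    stiffness = 1.0 / compliance_m_per_n       # N/m
    return stiffness * indentation_m

if __name__ == "__main__":
    # 3 mm indentation into tissue with an assumed compliance of 0.002 m/N
    print(palpation_force(0.003, 0.002), "N")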
Robotics
The Shadow Dextrous Robot Hand uses the sense of touch, pressure and position to reproduce the human grip in all its strength, delicacy and complexity. The SDRH was first developed by Richard Greenhill and his team of engineers in Islington, London, as part of The Shadow Project (now known as the Shadow Robot Company), an ongoing research and development programme whose goal is to complete the first convincing humanoid. An early prototype can be seen in NASA's collection of humanoid robots, or robonauts. The Dextrous Hand has haptic sensors embedded in every joint and every finger pad, which relay information to a central computer for processing and analysis. Carnegie Mellon University in Pennsylvania and Bielefeld University in Germany in particular have found the Dextrous Hand an invaluable tool in advancing our understanding of haptic awareness, and are currently (2006) involved in research with wide-ranging implications. The first PHANTOM, which allows a user to interact with objects in virtual reality through touch, was developed by Thomas Massie while a student of Ken Salisbury at MIT.
Arts and Design
Touch is not limited to passive feeling; it allows real-time interaction with virtual objects. Haptics are therefore commonly used in the virtual arts, such as sound synthesis or graphic design and animation. The haptic device lets the artist make direct contact with a virtual instrument that produces real-time sound or images. For instance, a simulation of a violin string produces real-time vibrations of the string under the pressure and expressivity of the bow (the haptic device) held by the artist. This can be done with physical modelling synthesis.
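The simplest flavour of physical modelling synthesis is a plucked-string model such as Karplus-Strong; a bowed violin string would additionally need a bow-friction model driven by the haptic device's pressure and velocity. A minimal Python sketch of the plucked case, with illustrative parameters:

import random

def karplus_strong(frequency_hz=440.0, sample_rate=44100, duration_s=0.5, damping=0.996):
    """Generate a decaying plucked-string tone as a list of samples."""
    delay = int(sample_rate / frequency_hz)
    buf = [random.uniform(-1.0, 1.0) for _ in range(delay)]  # noise burst = the pluck
    out = []
    for n in range(int(sample_rate * duration_s)):
        i = n % delay
        buf[i] = damping * 0.5 * (buf[i] + buf[(i + 1) % delay])  # averaging (lowpass) filter
        out.append(buf[i])
    return out

if __name__ == "__main__":
    samples = karplus_strong()
    print(len(samples), "samples, peak", round(max(abs(s) for s in samples), 3))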
Designers and modellers may use high-degree-of-freedom input devices that give touch feedback relating to the "surface" they are sculpting or creating, allowing a faster and more natural workflow than traditional methods.
Actuators
Haptics is enabled by actuators that apply forces to the skin for touch feedback. The actuator provides mechanical motion in response to an electrical stimulus. Most early haptic feedback designs use electromagnetic technologies such as vibratory motors with an offset mass (like the pager motor found in most cell phones) or voice coils, in which a central mass or output is moved by a magnetic field. Electromagnetic motors typically operate at resonance and provide strong feedback, but offer only a limited range of sensations. Next-generation actuator technologies are beginning to emerge, offering a wider range of effects thanks to more rapid response times; they include electroactive polymers, piezoelectric actuators and electrostatic surface actuation.
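Why driving at resonance gives strong but relatively narrow feedback can be seen from the steady-state response of a simple mass-spring-damper resonator, a rough stand-in for a resonant vibration motor. A small illustrative sketch; the resonant frequency and Q factor are assumptions:

import math

def steady_state_amplitude(drive_hz, resonance_hz=175.0, q_factor=20.0):
    """Relative displacement amplitude of a simple resonator vs drive frequency."""
    r = drive_hz / resonance_hz
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / q_factor) ** 2)

if __name__ == "__main__":
    for f in (100.0, 175.0, 250.0):
        print(f"{f:5.0f} Hz -> relative amplitude {steady_state_amplitude(f):6.1f}")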
Future Applications of Haptic Technology
Future applications of haptic technology cover a wide spectrum of human interaction with technology. Some current research focuses on mastering tactile interaction with holograms and distant objects, which, if successful, will lead to applications and advances in industries such as gaming, film, manufacturing, and medicine. The medical industry will also gain from virtual and telepresence surgery, raising the overall standard of care. There is even talk that clothing retail could benefit from haptic technology, for example by letting shoppers "feel" the texture of clothes for sale on the internet. Future advances in haptics may even create new industries that were not feasible or realistic before the advances happening right now.
Holographic Interaction
Researchers at the University of Tokyo are currently working on adding haptic feedback to holographic projections. The feedback allows the user to interact with a hologram and actually receive a tactile response, as if the holographic object were physically real. The research uses ultrasound waves to create a phenomenon referred to as "acoustic radiation pressure", which provides tactile feedback as the user interacts with the holographic object. The haptic technology does not affect the hologram, or the interaction with it, only the tactile response that the user perceives. The researchers have posted a video demonstrating what they call the "Airborne Ultrasound Tactile Display". The technology is not yet ready for mass production or mainstream industrial application, but it is progressing quickly, and industrial companies are already showing a positive response. Notably, this is the first such example in which the user does not have to be outfitted with a special glove or use a special control; they can "just walk up and use [it]", which paints a promising picture for future applications.
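The focused ultrasound behind a display like this comes from a phased array: each transducer element is delayed so that all wavefronts arrive at the focal point at the same time, concentrating the radiation pressure there. A minimal, purely illustrative Python sketch of that delay calculation; the element layout, spacing and focal point are assumptions, not details of the Tokyo system:

import math

SPEED_OF_SOUND = 343.0          # m/s in air

def focus_delays(element_positions, focal_point):
    """Return per-element delays (s) so all wavefronts arrive at the focus together."""
    dists = [math.dist(p, focal_point) for p in element_positions]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

if __name__ == "__main__":
    # A 1 cm-spaced row of five elements, focusing 20 cm above the centre element
    elements = [(x * 0.01, 0.0, 0.0) for x in range(-2, 3)]
    delays = focus_delays(elements, (0.0, 0.0, 0.20))
    print([round(d * 1e6, 2) for d in delays], "microseconds")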