Sunday, 9 May 2010

Magazine "NewScientist" 20 February 2010

"NewScientist" 20 February 2010

Even in the virtual world, men judge women on looks.

HOW is a female avatar supposed to get fair treatment in the virtual world? By relying on human females - men can't help but be swayed by looks.
Thanks to video games and blockbuster movies, people are increasingly engaging with avatars and robots. So Karl MacDorman of the Purdue School of Engineering and Technology in Indianapolis, Indiana, decided to find out how people treat avatars when faced with an ethical dilemma. Does an avatar's lack of humanity mean people fail to empathise with it? The answer seems to depend on gender.
He presented 682 volunteers with a dilemma modified from a medical ethics training programme. Playing the role of the doctor, they were faced with a female character, Kelly Gordon, pleading with them not to tell her husband at his next check-up that she had contracted genital herpes. The dilemma is intended to make medical students consider issues like doctor-patient confidentiality, not to produce a right or wrong answer, says MacDorman.
Gordon was presented to the volunteers in one of four different ways: either as an actress superimposed on a computer-generated (CG) background (pictured) or as a CG avatar - and in each case either edited to move smoothly or in a jerky, unnatural way.
Overall, women responded more sympathetically to Gordon, with 52 per cent acceding to her request compared with 45 per cent of men. But whereas women's attitudes were consistent however Gordon was presented, the male volunteers' attitudes swung sharply. The two human versions got a far more sympathetic hearing than their avatar counterparts. "Clearly, presentational factors influence people's decisions of moral and ethical consequence," says MacDorman. "The different response from volunteers could suggest men showed more empathy towards a character they see as a potential mate," he says.
However, Jesse Fox, a human-computer interaction researcher at Stanford University in California, who has studied female characterisation in virtual environments, believes the less favourable attitude shown by men towards the CG Gordon may be explained by the fact that the avatar was more sexualised than the human one - with a bare midriff and fuller breasts.
"Sexualised representations of women are often judged to be dishonest, or 'loose', and more so by men than by women. This could explain the finding, especially in a situation in which you're talking about sexually transmitted diseases," she says.
The study will be published in a forthcoming edition of the journal Presence.

Magazine "NewScientist" 3 Januari 2010

"NewScientist" 3 Januari 2010

Microsoft ready to make games controllers obsolete.

A LONG-lived videogaming skill could be on the way out this year as Microsoft hones an add-on to its Xbox 360 console aimed at making button-studded games controllers obsolete. The device, called Natal after the city in northern Brazil, allows players to control a game using only their body movements and voice.
Microsoft unveiled Natal in June 2009 at the E3 games industry expo in Los Angeles, but revealed little about how it works. Now the company has allowed New Scientist access to the device and its creators to discover more details.
A player standing anywhere between 0.8 and 4 metres from Natal is illuminated with infrared light. A monochrome video camera records how much of that light they reflect, using the brightness of the signal to approximate their distance from the device and capture their movements in 3D.
This means Natal doesn't require users to wear markers on their body - unlike the technology used by movie studios to animate CGI figures.
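A minimal sketch of the principle in Python (constants and calibration invented for illustration; this is not Microsoft's algorithm): if reflected infrared intensity falls off roughly with the square of distance, per-pixel brightness can be inverted into a rough depth map. A real sensor would also need to correct for how reflective each surface is.

    import numpy as np

    def brightness_to_depth(ir_frame, k=1.0, min_brightness=1e-3):
        # Assume reflected intensity I ~ k / d^2, so d ~ sqrt(k / I);
        # brighter pixels are read as nearer. Illustrative only.
        ir = np.clip(ir_frame.astype(float), min_brightness, None)
        depth = np.sqrt(k / ir)
        return np.clip(depth, 0.8, 4.0)   # Natal's stated working range

    frame = np.random.rand(240, 320)       # stand-in monochrome IR frame
    print(brightness_to_depth(frame).shape)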
Motion capture normally requires massive processing power, and paring down the software to run on an everyday games console was a serious challenge, says Natal's lead developer, Alex Kipman. "Natal has to work on the existing hardware without taking processing away from the games developers."
Microsoft collected "terabytes" of data on people in poses likely to crop up during game play, both in motion capture studios and in their own homes. Frames from the home videos were manually labelled to identify key body parts, and the data was then fed into "expert system" software running on a powerful cluster of computers. The result was a 50-megabyte software package that can recognise 31 different body parts in any video frame.
"When we train this 'brain' we are telling it: this is the head, this is the shoulder. And we're doing that over millions of frames," says Kipman. "When it sees a new image it can tell you the probability it's seeing a certain body part based on that historical information."
Natal also includes software that has a basic understanding of human anatomy. Using its knowledge that, for example, hands are connected to arms, which are attached to shoulders, it can refine its guesses about body pose to recognise where body parts are even when they are hidden from Natal's camera.
"It correctly positions your hand even if it's held behind your back," Kipman says. "It knows the hand can only be in one place." That's important because during multiplayer games there won't always be a clear view of both players at all times.
He says Natal consumes just 10 to 15 per cent of the Xbox's computing resources and it can recognise any pose in just 10 milliseconds. It needs only 160 milliseconds to latch on to the body shape of a new user stepping in front of it.
The system locates body parts to within a 4-centimetre cube, says Kipman. That's far less precise than lab-based systems or the millimetre precision of Hollywood motion capture. But Douglas Lanman, who works on markerless 3D interaction at Brown University in Providence, Rhode Island, and is not involved with Natal, says that this will likely be accurate enough for gamers.
Lanman is watching closely to see what kind of games Natal makes possible, and how they are received. "Will users find them as compelling as they found Wii games? Is it important to have physical buttons? We'll know soon."
Those kinds of questions, and a desire to move away from the controller-focused interaction that has dominated for decades, are central to Natal, Kipman says. "We think input using existing controllers is the barrier, and by erasing that we can realistically say: all you need to play is life experience."

Saturday, 8 May 2010

Magazine "NewScientist" 9 Januari 2010

"NewScientist" 9 Januari 2010

Consciousness, not yet explained.

We won't crack that mystery any time soon, argues Ray Tallis, because physical science can only do its work by discarding the contents of consciousness.

MOST neuroscientists, philosophers of mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the precision of correlations between indirectly observed neural activity and different mental functions, states or experiences.
This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, that argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained."
Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness.
This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.
Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.
This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H2O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails as the levels at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels.
We cannot therefore conclude that when we see what seem to be neural correlates of consciousness we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object.
Biophysical science explains how the light gets in but not how the gaze looks out.
Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity by invoking quantum coherence (the cytoskeletal microtubules of Stuart Hameroff at the University of Arizona and Roger Penrose at the University of Oxford) or electromagnetic discharges in the brain (the late Francis Crick).
These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct.
My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate.
And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion".
There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.
I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurements, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience.
Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms.
To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist, then, light is not in itself bright or colourful; it is a mixture of vibrations of different frequencies in an electromagnetic field. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms and particles, whose nature and behaviour are best described mathematically.
In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog.
Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and towards quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.
Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness.
Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions and the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.

Thursday, 6 May 2010

Magazine "NewScientist" 31 October 2009

"NewScientist" 31 October 2009

They know what you're thinking

What you look at or recall can now be "read" from a brain scan in real time, but is it mind reading?

WHAT are you thinking about? Which memory are you reliving right now? You may think that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.
In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering.
Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading "neural decoder" at the University of California, Berkeley, presented one of the field's most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity. Others at the same meeting claimed that such neural decoding could be used to read memories and future plans - and even to diagnose eating disorders.
Understandably, such developments are raising concerns about "mind reading" technologies, which might be exploited by advertisers or oppressive governments (see "The risks of open-mindedness"). Yet despite - or perhaps because of - the recent progress in the field, most researchers are wary of calling their work mind reading. Emphasising its limitations, they call it neural decoding.
They are quick to add that it may lead to powerful benefits, however. These include gaining a better understanding of the brain and improved communication with people who can't speak or write, such as stroke victims or people with neurodegenerative diseases. There is also excitement over the possibility of being able to visualise something highly graphical that someone healthy, perhaps an artist, is thinking.
So how does neural decoding work? Gallant's team drew international attention last year by showing that brain imaging could predict which of a group of pictures someone was looking at, based on activity in their visual cortex. But simply decoding still images alone won't do, says Nishimoto. "Our natural visual experience is more like movies."
Nishimoto and Gallant started their most recent experiment by showing two lab members 2 hours of video clips culled from DVD trailers, while scanning their brains. A computer program then mapped different patterns of activity in the visual cortex to different visual aspects of the movies such as shape, colour and movement. The program was then fed over 200 days' worth of YouTube clips, and used the mappings it had gathered from the DVD trailers to predict the brain activity that each YouTube clip would produce in the viewers.
Finally, the same two lab members watched a third, fresh set of clips which were never seen by the computer program, while their brains were scanned. The computer program compared these newly captured brain scans with the patterns of predicted brain activity it had produced from the YouTube clips. For each second of brain scan, it chose the 100 YouTube clips it considered would produce the most similar brain activity - and then merged them. The result was continuous, very blurry footage, corresponding to a crude "brain read-out" of the clip that the person was watching.
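In code, that matching-and-merging step might look something like the sketch below (array names and sizes are invented; the team's actual model is far more elaborate):

    import numpy as np

    def reconstruct_second(measured, predicted, library_frames, k=100):
        # measured: voxel pattern recorded for one second of viewing.
        # predicted: (n_clips, n_voxels) activity the model expects each
        # library clip to evoke; library_frames: (n_clips, h, w).
        dists = np.linalg.norm(predicted - measured, axis=1)
        best = np.argsort(dists)[:k]              # the k closest clips
        return library_frames[best].mean(axis=0)  # merge into blurry footage

    measured = np.random.rand(500)                # stand-in data
    predicted = np.random.rand(1000, 500)
    frames = np.random.rand(1000, 48, 64)
    print(reconstruct_second(measured, predicted, frames).shape)  # (48, 64)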
In some cases, this was more successful than others. When one lab member was watching a clip of the actor Steve Martin in a white shirt, the computer program produced a clip like a moving, human-shaped smudge with a white "torso", but the blob bore little resemblance to Martin, with nothing corresponding to the moustache he was sporting.
Another clip revealed a quirk of Gallant and Nishimoto's approach: a reconstruction of an aircraft flying directly towards the camera - and so barely seeming to move - with a city skyline in the background omitted the plane but produced something akin to a skyline. That's because the algorithm is more adept at reading off brain patterns evoked by watching movement than those produced by watching apparently stationary objects.
"It's going to get a lot better," says Gallant. The pair plan to improve the reconstruction of movies by providing the program with additional information about the content of the videos.
Team member Thomas Naselaris demonstrated the power of this approach on still images at the conference. For every pixel in a set of images shown to a viewer and used to train the program, researchers indicated whether it was part of a human, an animal, an artificial object or a natural one. The software could then predict where in a new set of images these classes of objects were located, based on brain scans of the picture viewers.
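One way to sketch that prediction step is as a linear mapping from voxel patterns to per-pixel class scores; everything below (shapes, model choice, data) is a stand-in for illustration, not Naselaris's pipeline:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Stand-in shapes: voxel patterns per viewed image, and one-hot
    # per-pixel labels (human / animal / artificial / natural).
    n_images, n_voxels, n_pixels, n_classes = 120, 500, 64, 4
    scans = np.random.rand(n_images, n_voxels)
    labels = np.eye(n_classes)[np.random.randint(0, n_classes,
                                                 (n_images, n_pixels))]

    model = Ridge().fit(scans, labels.reshape(n_images, -1))

    def class_map(new_scan):
        # For each pixel of the viewed image, the most likely object class.
        scores = model.predict(new_scan[None]).reshape(n_pixels, n_classes)
        return scores.argmax(axis=1)

    print(class_map(np.random.rand(n_voxels)))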
Movies and pictures aren't the only things that can be discerned from brain activity, however. A team led by Eleanor Maguire and Martin Chadwick at University College London presented results at the Chicago meeting showing that our memory isn't beyond the reach of brain scanners.
A brain structure called the hippocampus is critical for forming memories, so Maguire's team focused its scanner on this area while 10 volunteers recalled videos they had watched of different women performing three banal tasks, such as throwing away a cup of coffee or posting a letter. When Maguire's team got the volunteers to recall one of these three memories, the researchers could tell which one a volunteer was recalling with an accuracy of about 50 per cent.
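The analysis behind a claim like that is typically a cross-validated classifier trained on labelled recall trials. The sketch below uses random stand-in data, so it will score near the 33 per cent chance level rather than the 50 per cent reported; trial counts and voxel numbers are invented:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in data: one hippocampal voxel pattern per recall trial,
    # labelled 0/1/2 for which filmed event was being recalled.
    X = np.random.rand(90, 300)          # 90 trials x 300 voxels
    y = np.tile([0, 1, 2], 30)           # three memories, chance = 33%

    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("decoding accuracy: %.0f%%" % (100 * scores.mean()))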
That's well above chance, says Maguire, but it is not mind reading because the program can't decode memories that it hasn't already been trained on.
"You can't stick somebody in a scanner and know what they're thinking." Rather, she sees neural decoding as a way to understand how the hippocampus and other brain regions form and recall a memory.
Maguire could tackle this by varying key aspects of the clips - the location or the identity of the protagonist, for instance - and seeing how those changes affect the team's ability to decode the memory.
She is also keen to determine how memory encoding changes over the weeks, months or years after memories are first formed.
Meanwhile, decoding how people plan for the future is the hot topic for John-Dylan Haynes at the Bernstein Centre for Computational Neuroscience in Berlin, Germany. In work presented at the conference, he and colleague Ida Momennejad found they could use brain scans to predict intentions in subjects planning and performing simple tasks. What's more, by showing people, including some with eating disorders, images of food, Haynes's team could determine which suffered from anorexia or bulimia via brain activity in one of the brain's "reward centres".
Another focus of neural decoding is language. Marcel Just at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his colleague Tom Mitchell reported last year that they could predict which of two nouns - such as "celery" and "airplane" - a subject is thinking of, at rates well above chance. They are now working on two-word phrases.
Their ultimate goal of turning brain scans into short sentences is distant, perhaps impossible. But as with the other decoding work, it's an idea that's as tantalising as it is creepy.

Magazine "NewScientist" 31 October 2009

"NewScientist 31 October 2009

Smart walls control the room

WHO says wallflowers don't grab people's attention? A new type of electronically enhanced wallpaper promises not only eye-pleasing designs, but also the ability to activate lamps and heaters - and even control music systems.
Interactive walls are nothing new, but most designs rely on expensive sensors and power-hungry projectors to make a wall come alive. Now the Living Wall project, led by Leah Buechley at the Massachusetts Institute of Technology's Media Lab, offers an alternative by using magnetic and conductive paints to create circuitry in attractive designs.
When combined with cheap temperature, brightness and touch sensors, LEDs and Bluetooth, the wall becomes a control hub able to talk to nearby devices. Touch a flower to turn on a lamp, for example, or set heaters to fire up when the room gets cold.
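As a rough sketch, the hub's behaviour amounts to a lookup from sensed wall events to device commands. The sensor and device names below are invented for illustration, not part of the Living Wall system:

    # Map sensed wall events to the command the hub would send to a
    # nearby device, e.g. over Bluetooth.
    RULES = {
        ("touch", "flower_motif"): ("lamp", "on"),
        ("cold", "living_room"):   ("heater", "on"),
    }

    def handle(event, source):
        action = RULES.get((event, source))
        if action:
            device, command = action
            print("send '%s' to %s" % (command, device))

    handle("touch", "flower_motif")   # turns on the lamp
    handle("cold", "living_room")     # fires up the heater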
"Our goal is to make technologies that users can build on and change without needing a lot of technical skill," says Buechley.
To create the wallpaper, the team start with wafer-thin steel foil sandwiched between layers of paper which are coated with magnetic paint - acrylic infused with iron particles. On top of this base they paint motifs such as flowers and vines using conductive paint, which uses copper particles rather than iron. The designs form circuitry onto which sensors, lights and other elements can be attached.

Magazine "NewScientist" 3 October 2009

"NewScientist" 3 October 2009

Virtual cities get real bustle

WHILE virtual globes such as Google Earth or Microsoft Virtual Earth provide great bird's-eye views of urban landscapes, they show ghost towns - empty streets free of traffic or people.
Now a system that can draw on real-time video from traffic and surveillance cameras, and weather sensors, is set to change that. It fills virtual towns with cars and people and could even let online spectators zoom in on live sports events.
Computer scientists at Georgia Institute of Technology in Atlanta use video feeds from cameras around their city. Their augmented version of Google Earth incorporates sports scenes, traffic flows, the march of pedestrians and weather.
The system looks out for specific categories of moving objects in a video feed. Any vehicle moving along a street is classified as a car and replaced with a randomly chosen 3D car model. Pedestrians are replaced with human figures animated with stock motion-capture data to make them walk.
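A toy version of that substitution step might look like this (model file names and the tracking input are invented; the real system works from live camera feeds):

    import random

    CAR_MODELS = ["sedan.obj", "hatchback.obj", "suv.obj"]   # invented names
    WALK_CYCLE = "stock_walk.bvh"                            # stock mo-cap clip

    def replace_object(track):
        # track: a detected moving object with its position and the kind
        # of path it follows; return the symbolic stand-in to render.
        if track["surface"] == "road":
            return {"model": random.choice(CAR_MODELS), "pos": track["pos"]}
        return {"model": "pedestrian.obj", "animation": WALK_CYCLE,
                "pos": track["pos"]}

    print(replace_object({"surface": "road", "pos": (12.0, 4.5)}))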
Although surveillance cameras are used, no one's privacy is at stake because the models obscure identifying details such as a car's colour and licence plates, says Kihwan Kim, who led the research.
"Every moving object is rendered symbolically,"says Kim.
Sports action can be recreated with less regard to privacy, using multiple camera views to create 3D models of the players.

Magazine "NewScientist" 3 October 2009

"NewScientist" 3 October 2009

Second Life gets a reality check

With eye-popping sums at stake in the virtual economy, intellectual property disputes are heading for the courts

WHILE global economies have endured a torrid time of late, business is booming in the virtual economies of Second Life, Facebook and Everquest. As the economic boundaries between virtual and real worlds continue to blur, the supposedly liberated virtual worlds are now running up against some very real-world legal problems.
Financial analysts at Piper Jaffray estimate that US citizens will spend $621 million in virtual worlds in 2009; estimates of the Asian market are even larger. Research firm Plus Eight Star puts spending at $5 billion in the last year.
Over in Second Life, trade remains robust. The value of transactions between residents in the second quarter of this year was $144 million, a year-on-year increase of 94 per cent. With its users swapping virtual goods and services worth around $600 million per year, Second Life has the largest economy of any virtual world - which exceeds the GDP of 19 countries, including Samoa.
Thousands of users make money selling virtual goods from clothing and furniture to art and gazebos, as well as services such as virtual wedding planning, translation or architecture. Several hundred make thousands of dollars from the trade; the most successful have become millionaires.
Yet all is not rosy in the virtual Garden of Eden (see "Trouble breaks out in paradise"). Just as the digital revolution has facilitated piracy and copyright theft in other spheres, those who make a living running businesses in Second Life have seen their profits eroded by users who have found ways to copy their intellectual property (IP).
The Second Life case is believed to be the first time residents of a large virtual world have sued its owner for alleged IP rights violations by other users. But as the dollar value of virtual economies climbs, it seems likely others will head to real-world courts to settle disputes, says James Grimmelmann, associate professor at the New York Law School. "As virtual worlds are becoming more and more important, and sites and games become more immersive, these kinds of cases are going to matter more," he says.
The case will also test the US Digital Millennium Copyright Act (DMCA), which grants the providers of online services some degree of immunity from prosecution for copyright infringements perpetrated by their users. Similar exemptions are provided in Europe under the Electronic Commerce Directive.
"The law in this area is pretty good and should be protecting people who've got (intellectual property) or who are writing unique code, but the problem is policing it,"says Mark Stephens, a partner at London-based law firm Finers Stephens Innocent. "So increasingly people are trying to pin liability on the gatekeepers.
The lawsuit forms part of a group of related cases in which those who host online content are being targeted for the misdemeanours of their users.
Last month a US federal district court dismissed a complaint filed by record company giant Universal Music Group, ruling that the DMCA did provide video site Veoh with immunity from liability for copyright violations committed by its users.
Online service providers such as Second Life's parent company Linden Lab are likely to argue they have little control over, or knowledge about, users' activities, says Grimmelmann. "My general expectation is that they probably do have immunity under the act."
Linden Lab has already taken some steps towards protecting the IP of its users. In August it issued a "content management roadmap", including plans for improvements to the Second Life IP complaints process, new industry-standard tools to limit the copying of content and prevent IP infringement, a trusted seller scheme and more IP outreach work.
Speaking in a panel discussion at last week's Virtual Goods conference in San Jose, California, Tom Hale, chief product officer at Linden Lab, said: "Rest assured we feel very strongly about the rights of our IP creators and holders and want to protect them as much as we can in the virtual world. We have a challenge between our desire to have an open platform, and also our obligation to our residents, whether they be merchants or consumers, or creating for their own interest."
Only time will tell whether Linden will implement enough changes to placate its critics or whether the issue will be settled in court.
What is clear is that with so much money at stake, the case will be watched very carefully by a great many people.