08-01-2010
DECODE V&A
Digital technologies are providing new tools for artists and designers. Innovative, often interactive, displays use generative software, animation and other responsive technologies to instill a “live” element into contemporary artworks. Some works exist in a state of perpetual evolution; others are altered by the behaviour of the spectator.
From designs that draw on the barest fundamentals of code – the 1s and 0s of computational language made by a single programmer – to art that encompasses a global collective of online creativity, many of the exhibits here defy traditional design categories. They blur the boundaries between practices, between programming and performance, creator and participant.
Decode looks at three current themes within digital design. Code shows how computer code – whether bespoke and tailored, or hacked and shared – has become a new design tool. Interactivity presents works that respond to our physical presence. Network charts or reworks the traces we leave behind.
Wefeelfine.org
Mission
We Feel Fine is an exploration of human emotion on a global scale.
Since August 2005, We Feel Fine has been harvesting human feelings from a large number of weblogs. Every few minutes, the system searches the world's newly posted blog entries for occurrences of the phrases "I feel" and "I am feeling". When it finds such a phrase, it records the full sentence, up to the period, and identifies the "feeling" expressed in that sentence (e.g. sad, happy, depressed, etc.). Because blogs are structured in largely standard ways, the age, gender, and geographical location of the author can often be extracted and saved along with the sentence, as can the local weather conditions at the time the sentence was written. All of this information is saved.
The result is a database of several million human feelings, increasing by 15,000 - 20,000 new feelings per day. Using a series of playful interfaces, the feelings can be searched and sorted across a number of demographic slices, offering responses to specific questions like: do Europeans feel sad more often than Americans? Do women feel fat more often than men? Does rainy weather affect how we feel? What are the most representative feelings of female New Yorkers in their 20s? What do people feel right now in Baghdad? What were people feeling on Valentine's Day? Which are the happiest cities in the world? The saddest? And so on.
The interface to this data is a self-organizing particle system, where each particle represents a single feeling posted by a single individual. The particles' properties – color, size, shape, opacity – indicate the nature of the feeling inside, and any particle can be clicked to reveal the full sentence or photograph it contains. The particles careen wildly around the screen until asked to self-organize along any number of axes, expressing various pictures of human emotion. We Feel Fine paints these pictures in six formal movements titled: Madness, Murmurs, Montage, Mobs, Metrics, and Mounds.
At its core, We Feel Fine is an artwork authored by everyone. It will grow and change as we grow and change, reflecting what's on our blogs, what's in our hearts, what's in our minds. We hope it makes the world seem a little smaller, and we hope it helps people see beauty in the everyday ups and downs of life.
Data Collection
At the core of We Feel Fine is a data collection engine that automatically scours the Internet every ten minutes, harvesting human feelings from a large number of blogs. Blog data comes from a variety of online sources, including LiveJournal, MSN Spaces, MySpace, Blogger, Flickr, Technorati, Feedster, Ice Rocket, and Google.
We Feel Fine scans blog posts for occurrences of the phrases "I feel" and "I am feeling". This is an approach that was inspired by techniques used in Listening Post, a wonderful project by Ben Rubin and Mark Hansen.
Once a sentence containing "I feel" or "I am feeling" is found, the system looks backward to the beginning of the sentence, and forward to the end of the sentence, and then saves the full sentence in a database.
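As a rough illustration of this step, the Python sketch below splits a post into sentences and keeps those containing the target phrases. The regex and function name are illustrative assumptions, not the project's actual code.

```python
import re

# The phrases named above; case-insensitive so "i feel" also matches.
FEELING_PHRASE = re.compile(r"\bI (feel|am feeling)\b", re.IGNORECASE)

def extract_feeling_sentences(post_text: str) -> list[str]:
    """Return every full sentence in the post containing "I feel" or "I am feeling"."""
    # Split naively at sentence-ending punctuation; a real crawler would need to
    # handle abbreviations, ellipses, and markup more carefully.
    sentences = re.split(r"(?<=[.!?])\s+", post_text)
    return [s.strip() for s in sentences if FEELING_PHRASE.search(s)]

print(extract_feeling_sentences(
    "Rainy morning here. I feel strangely calm about the move! More later."
))
# -> ['I feel strangely calm about the move!']
```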
Once saved, the sentence is scanned to see if it includes one of about 5,000 pre-identified "feelings". This list of valid feelings was constructed by hand, but basically consists of adjectives and some adverbs. The full list of valid feelings, along with the total count of each feeling, and the color assigned to each feeling, is here.
If a valid feeling is found, the sentence is said to represent one person who feels that way.
If an image is found in the post, the image is saved along with the sentence, and the image is said to represent one person who feels the feeling expressed in the sentence.
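A minimal sketch of the feeling-identification step, assuming a simple word lookup against the hand-built list; `VALID_FEELINGS` here is a tiny stand-in for the roughly 5,000-entry list, and the helper name is hypothetical.

```python
import re

# Tiny stand-in for the ~5,000-entry hand-built list of valid feelings.
VALID_FEELINGS = {"sad", "happy", "depressed", "better", "cold", "fine"}

def identify_feeling(sentence: str) -> str | None:
    """Return the first valid feeling word found in the sentence, or None."""
    for word in re.findall(r"[a-z]+", sentence.lower()):
        if word in VALID_FEELINGS:
            return word
    return None  # no valid feeling: the sentence is not counted

print(identify_feeling("I feel sad and a little cold today."))  # -> 'sad'
```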
Because a high percentage of all blogs are hosted by one of several large blogging companies (Blogger, MySpace, MSN Spaces, LiveJournal, etc), the URL format of many blog posts can be used to extract the username of the post's author. Given the author's username, we can automatically traverse the given blogging site to find that user's profile page. From the profile page, we can often extract the age, gender, country, state, and city of the blog's owner. Given the country, state, and city, we can then retrieve the local weather conditions for that city at the time the post was written. We extract and save as much of this information as we can, along with the post.
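The sketch below illustrates the username-extraction idea for a few large hosts. The hostname patterns are assumptions about common blog URL formats, not We Feel Fine's real rules, and the profile-scraping and weather-lookup steps are only described in comments.

```python
from urllib.parse import urlparse

def username_from_post_url(post_url: str) -> str | None:
    """Guess the author's username from a post URL on a known blogging host."""
    parsed = urlparse(post_url)
    host, path = parsed.netloc.lower(), parsed.path.strip("/")
    if host.endswith(".livejournal.com"):      # username.livejournal.com/1234.html
        return host.split(".")[0]
    if host.endswith(".blogspot.com"):         # username.blogspot.com/2010/01/post.html
        return host.split(".")[0]
    if host in ("myspace.com", "www.myspace.com") and path:  # myspace.com/username/...
        return path.split("/")[0]
    return None

print(username_from_post_url("http://someuser.livejournal.com/1234.html"))  # -> 'someuser'
# From the username, the crawler can fetch the user's public profile page and
# scrape age, gender, and location, then look up the local weather for that
# location at posting time (those steps are omitted here).
```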
This process is repeated automatically every ten minutes, generally identifying and saving between 15,000 and 20,000 feelings per day.
Statistical Computations
We Feel Fine's data is stored in a database, and can be queried in any number of ways by people using the We Feel Fine applet.
When the applet is first opened, the initial dataset consists of the most recent 1,500 feelings collected by our system. The applet's Panel can then be used to arbitrarily specify different populations, constrained by any combination of the following (a small filtering sketch follows the list):
- Feeling (happy, sad, depressed, etc.)
- Age (in ten year increments - 20s, 30s, etc.)
- Gender (male or female)
- Weather (sunny, cloudy, rainy, or snowy)
- Location (country, state, and/or city)
- Date (year, month, and/or day)
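A minimal sketch of this kind of combinable filtering, assuming each feeling is stored as a flat record; the field names and the `filter_population` helper are hypothetical, not the applet's actual API.

```python
from typing import Iterable

def filter_population(feelings: Iterable[dict], **criteria) -> list[dict]:
    """Keep only the feelings whose fields match every supplied criterion."""
    return [f for f in feelings
            if all(f.get(key) == value for key, value in criteria.items())]

data = [
    {"feeling": "sad", "gender": "female", "age": "20s", "city": "new york", "weather": "rainy"},
    {"feeling": "happy", "gender": "male", "age": "30s", "city": "baghdad", "weather": "sunny"},
]
print(filter_population(data, feeling="sad", city="new york"))
# -> the first record only
```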
Obviously, the more specific the population, the fewer feelings it will contain, and the less significant any associated statistical computations will be. For example, asking for feelings from "20 year old males in Baghdad, Iraq when it's rainy" might yield few or no feelings, whereas asking for feelings from "20 year olds in New York City" would result in a larger number of feelings.
For any given population, the applet presents a number of different statistical views, offering insights into the traits of the specified population.
The "Mobs" movement of the piece shows distribution breakdowns of the chosen population along: feeling, gender, age, weather, and location. Mobs expresses the notion of "Most Common".
The "Metrics" movement of the piece shows the most representative traits of the chosen population along: feeling, gender, age, weather, and location. Metrics expresses the notion of "Most Salient".
"Most Common" is different from "Most Salient" in the following way:
- "Most Common" will be more or less the same across different populations. For example, :better" is the most common feeling overall, so in most populations, "better" will be the most common feeling.
- "Most Salient" expresses the ways in which a given population differs from the global average. For example, if most people feel "cold" 02% of the time, but Canadians feel "cold" 1.2% of the time, we claim "cold" to be especially salient among Canadians, because "cold" occurs among Canadians al 6 times the normal rate.
In making our salience computations, we are careful to avoid falsely claiming statistical significance. Salience computations count one individual blogger once and only once. For example, if there is one blogger in North Dakota who feels "magnificent" over and over again, it would be misleading to conclude that North Dakota as a state feels particularly "magnificent", just because of a single prolific blogger who happens to feel magnificent. So our magnificent North Dakotan would only be counted once. Similarly, we impose a threshold of at least four occurrences for a given trait to be considered salient. For example, in a population of 100 feelings, say a given very obscure feeling like "downtrodden" occurs twice, representing 2% of the total feelings in that population. Say "downtrodden" usually occurs only .0003% of the time. It would be misleading to claim that this population feels particularly "downtrodden", just because two people out of 100 happened to feel that way. So we impose a minimum of four occurrences in a given population for a trait to be considered for salience.
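A hedged sketch of the salience idea as described above, combining the rate comparison with the de-duplication and minimum-count rules; the data layout (author/feeling pairs and a global rate table) is an assumption for illustration, not the real schema.

```python
from collections import Counter

MIN_OCCURRENCES = 4  # the threshold described above

def salient_feelings(population, global_rates):
    """population: iterable of (author_id, feeling) pairs for the chosen slice;
    global_rates: mapping of feeling -> overall fraction across all feelings."""
    # Count each blogger at most once per feeling, as described above.
    unique_pairs = {(author, feeling) for author, feeling in population}
    counts = Counter(feeling for _, feeling in unique_pairs)
    total = sum(counts.values())
    scores = {}
    for feeling, n in counts.items():
        if n < MIN_OCCURRENCES:
            continue  # too few occurrences to support a salience claim
        local_rate = n / total
        # Treat a feeling missing from the global table as exactly average.
        scores[feeling] = local_rate / global_rates.get(feeling, local_rate)
    # Higher ratio = more salient, e.g. "cold" at 1.2% locally vs 0.2% globally -> 6x.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```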
Furthermore, whenever possible the applet makes clear exactly what data, and how much, was used in making any salience claims, so viewers can discern for themselves how statistically significant the findings are.
The "Mounds" movement of the piece displays every valid feeling in our system, ordered and scaled to represent each feeling's frequency. This list is independent of the selected population, and is updated periodically as our database grows.
Privacy
We Feel Fine only collects and displays data that was already posted publicly on the World Wide Web. We Feel Fine never associates individual human names with the feelings it displays, though it always provides a link to the blog from which any displayed sentence or picture was collected. Also, bloggers may make a blog post invisible to the We Feel Fine crawler by including a special exclusion code somewhere in the post.
Feeling Colors
The top 200 feelings were manually assigned colors that loosely correspond to the tone of the feeling. Happy positive feelings are bright yellow. Sad negative feelings are dark blue. Angry feelings are red. Calm feelings are green. And so on. A full list of all valid feelings, along with their counts and colors, is here.
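A tiny illustration of such a mapping; the hex values below are invented stand-ins for the scheme described (bright yellow for happy, dark blue for sad, red for angry, green for calm), not the project's actual palette.

```python
FEELING_COLORS = {
    "happy": "#ffd24a",   # bright yellow: happy, positive feelings
    "sad": "#1b2a6b",     # dark blue: sad, negative feelings
    "angry": "#c62828",   # red: angry feelings
    "calm": "#3a9e5f",    # green: calm feelings
}

def color_for(feeling: str, default: str = "#9e9e9e") -> str:
    """Look up a feeling's display color, falling back to a neutral grey."""
    return FEELING_COLORS.get(feeling, default)
```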
Human Involvement
Aside from the manual construction of the list of valid feelings and their assigned colors, there is no human involvement in We Feel Fine. The system runs autonomously, collecting and presenting data about human feelings.
DECODE Science Museum
“Listening Post” by Mark Hansen & Ben Rubin
Listening Post is a ‘dynamic portrait’ of online communication, displaying uncensored fragments of text, sampled in real-time, from public internet chatrooms and bulletin boards. Artists Mark Hansen and Ben Rubin have divided their work into seven separate ‘scenes’ akin to movements in a symphony. Each scene has its own ‘internal logic’, sifting, filtering and ordering the text fragments in different ways.
By pulling text quotes from thousands of unwitting contributors' postings, Listening Post allows you to experience an extraordinary snapshot of the internet and gain a great sense of the humanity behind the data. The artwork is world renowned as a masterpiece of electronic and contemporary art and a monument to the ways we find to connect with each other and express our identities online.