Thursday 25 February 2010

Emoji



Emoji (絵文字) is the Japanese term for the picture characters or emoticons used in Japanese wireless messages and webpages. Originally meaning pictograph, the word literally means e "picture" + moji "letter". The characters are used much like emoticons elsewhere, but a wider range is provided, and the icons are standardized and built into the handsets. The three main Japanese operators, NTT DoCoMo, au and SoftBank Mobile (formerly Vodafone), have each defined their own variants of emoji.

Emoji appearing on a Japanese iPhone.

Although emoji are typically only available in Japan, the characters and the code needed to display them are, due to the nature of software development, often physically present in handsets sold elsewhere, and some phones, including the Apple iPhone, allow access to the symbols. They have also started appearing in email services such as Gmail.


Encoding of Emoji

For NTT DoCoMo's i-mode, each emoji symbol is drawn on a 12×12 pixel grid. When transmitted, emoji symbols are specified as a two-byte sequence, in the private-use range E63E through E757 in the Unicode character space, or F89F through F9FC in Shift-JIS. The basic specification has 176 symbols, with 76 more added in phones that support C-HTML 4.0.
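As a rough sketch of how these ranges line up, the snippet below walks the start of DoCoMo's private-use range. Treating the Shift-JIS values as a simple linear offset from the Unicode code points is an illustrative assumption; the real carrier mapping tables contain gaps and exceptions.

```python
# Minimal sketch of the DoCoMo i-mode emoji ranges described above.
# The linear Unicode-to-Shift-JIS offset is an assumption for
# illustration only; actual carrier tables are not one contiguous run.

UNICODE_START, UNICODE_END = 0xE63E, 0xE757   # Unicode private-use range
SJIS_START = 0xF89F                            # start of the Shift-JIS range

for codepoint in range(UNICODE_START, UNICODE_START + 5):
    offset = codepoint - UNICODE_START
    sjis = SJIS_START + offset                 # hypothetical linear mapping
    # chr(codepoint) only renders as an emoji on a handset whose font
    # populates this private-use area; elsewhere it shows a placeholder.
    print(f"U+{codepoint:04X} -> Shift-JIS 0x{sjis:04X}")
```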

au's emoji pictograms are specified using the IMG tag. SoftBank Mobile emoji are wrapped between SI/SO escape sequences and support colors and animation. DoCoMo's emoji are the most compact to transmit, while au's version is more flexible and based on open standards.
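As a toy illustration of the escape-sequence idea (not SoftBank's actual wire format, which is more involved), emoji code bytes can be bracketed by the ASCII Shift-Out and Shift-In control characters so the receiver knows to switch character sets:

```python
# Toy illustration only: bracket raw emoji code bytes between the
# Shift-Out (0x0E) and Shift-In (0x0F) control characters so they can
# be embedded in an otherwise ordinary byte stream. The code bytes
# below are invented for the example.
SO, SI = b"\x0e", b"\x0f"

def wrap_emoji(code_bytes: bytes) -> bytes:
    """Mark a run of emoji codes with SO/SI so a decoder can spot it."""
    return SO + code_bytes + SI

payload = b"Hello " + wrap_emoji(b"\x47\x21") + b"!"
print(payload)   # b'Hello \x0eG!\x0f!'
```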

Two proposals exist for encoding emoji in Unicode and ISO/IEC 10646. One, by Google and Apple Inc., calls for adding 674 characters to the standards so that the full set can be represented. A revision and extension of this proposal has been submitted jointly by the German and Irish national bodies.


Projects: Marc Owens




Avatar Machine

The virtual communities created by online games have provided us with a new medium for social interaction and communication. Avatar Machine is a system which replicates the aesthetics and visuals of third person gaming, allowing the user to view themselves as a virtual character in real space via a head mounted interface. The system potentially allows for a diminished sense of social responsibility, and could lead the user to demonstrate behaviors normally reserved for the gaming environment.




Virtual Transgender Suit

Silicone Rubber / Paper Card

2008

Online virtual worlds are increasingly becoming a platform for gender exploration. Research shows that 50% of the female characters in virtual spaces are actually played by men. Virtual Transgender Suit replicates the aesthetics of the typical virtual female form and reproduces them within a real-world context. The piece was specifically designed for men to wear in the real world, creating a bridge between real and virtual.




Sabre & Mace

Digitally produced - Second Life

2008

Sabre & Mace is a company that offers a unique service created for the online environment of Second Life. The service gives virtual characters the opportunity to experience death as a way to close their user account permanently. The project examines the notion of feeling sentimental toward a virtual character and the link between sentimentality and tangibility. Our research showed that a great many Second Life residents have multiple avatars. One man we spoke to had 14. He said that he used many of them as platforms for different sides of his real-life personality, and for others he invented entirely new fantasy personalities. However, he admitted that some of his avatars had fallen by the wayside and he no longer used them.



Grayson Perry




Grayson Perry (born 1960) is an English artist, known mainly for his ceramic vases and cross-dressing. He works in several media. Perry's vases have classical forms and are decorated in bright colours, depicting subjects at odds with their attractive appearance, e.g., child abuse and sado-masochism. There is a strong autobiographical element in his work, in which images of Perry as "Claire", his female alter-ego, often appear. He was awarded the Turner Prize in 2003 for his ceramics, receiving the prize dressed as Claire.

Grayson had unconventional sexual desires and fantasies. He describes his first sexual experience at the age of seven when he tied himself up in his pyjamas. From an early age he liked to dress in women's clothes and in his teens realized that he was a transvestite. At the age of 15 he moved in with his father's family at Chelmsford, where he began to go out dressed as a woman. When he was discovered by his father he said he would stop, but his stepmother told everyone about it and a few months later threw him out. He returned to his mother and stepfather at Great Bardfield.




Kinetica Art Fair 2010




06-02-2010


The Kinetica Art Fair provides collectors, curators, museums and the public with a unique opportunity to view and purchase artworks from leading international galleries, artists' collectives, curatorial groups and organisations specialising in kinetic, electronic and new media art.

Kinetica’s aim through the fair is to popularise artists and organisations working in these genres and to provide a new platform for the commercial enterprise of this field.

Alongside the fair there will be special events, screenings, tours, talks, workshops and performances. These events will involve some of the world's most eminent leaders in the fields of kinetic, electronic and new media art.

Cinimod Studio

'Flutter' is a new interactive artwork by Cinimod Studio that explores the viewer's encounter with a flutter of virtual butterflies. Set within a striking architectural framework and making use of cutting-edge technologies developed with White Wing Logic engineers, the artwork is a product of our ongoing fascination with the motion of a butterfly's flight and the iridescent reflections and scattering of light by the scales on a butterfly's wing.

Flutter consists of a linear array of 100 vertical double-sided video fins projecting from a mirrored surface. Butterflies flash through these screens on virtual flight paths, visible for fleeting moments as the light iridesces off their wings.

The sequenced form of the installation references animation ideas first developed in the zoetrope and its later successor, the praxinoscope. However, in a developmental move away from the linear, time-sliced nature of these devices, the introduction of interactive control in 'Flutter' makes the ephemerality of the encounter influence its semiotics.

The butterfly has different meanings in different cultures, as a symbol of life or death, luck or tragedy, nervousness or happiness. 'Flutter' explores these notions through cutting-edge interactivity: the movement of the viewer around the piece determines the scale of the butterflies' flutter. By reaching out a hand you can prompt a flying butterfly to respond, either by landing near you or by being scared away. Playing with the butterflies uncovers more intricate behaviours.

About Cinimod Studio:

Cinimod Studio is a cross-discipline practice based in London specializing in the fusion of art, architecture, lighting and interaction design. It was started by the architect Dominic Harris, whose passion for interactive art and lighting design has produced built projects now found across the international art and architecture scene.

color="black">www.cinimodstudio.com

Roseline de Thelin

Roseline de Thelin works with light as a medium and as a subject.

Over the past 10 years she has created light sculptures and light installations that explore the epiphenomena of light: reflection, refraction, fragmentation, conduction and transparency. She uses a range of materials including fibre optics, quartz crystal, mirrors, Perspex, wires and chains, metals, photographic prints and video. She designs modern lighting installations for public spaces and private homes internationally, and she exhibits regularly in Spain, where she is based, and abroad.

Finding inspiration in astronomy, scientific theories and quantum physics, her latest work focuses on organic forms such as spirals, ellipses, waves, volutes and veils, to create large light pieces.

Her recent holographic light sculptures will be presented this year at the Kinetica Art Fair. These pieces, made of edged fibre optic, are a reflection on life and illusion. The first series features a family of light beings surrounded with spirals and ellipses of light. Her next project is to bring these characters into different life situations and illusory light decors.

TINT

TINT is an interdisciplinary media arts organisation dedicated to the display of art that derives from, and reflects upon, the intersections of technology and culture. As an artist-run organisation, our core intentions concern the support of artistic collaboration, acting as a point of juncture for artists working within the fields of science and technology. We assist in pursuing and establishing collaborations with scientists, theorists, artists and other practitioners. Our programme of exhibitions and events supports experimentation in media and interactive arts, encouraging audiences to participate, explore and create!

TINT Presents Memory by Parag K Mital and Agelos Papadakis

Parag K. Mital is a cross-disciplinary researcher interested in how computer vision and human perception are intrinsically related, questioning what stimulates our attention, how a computer can learn this, and how we react through reason, emotion and liminal processes.

Agelos Papadakis' work is an investigation of human nature, with an emphasis on the study of the individual and social parameters that shape us psychologically. A skilled glass blower, Papadakis combines traditional techniques with new media technologies.

Memory consists of a structural network of glass neurons, linked together by chains in an amorphous neuron cloud. Through mapping and facial recognition, cameras track and record the faces of audience members; these images are then projected back into the sculpture. Recorded clips of the audience members play as a neural network of disparate memories. As new faces are learnt, old memories fade, and the sculpture reorganises its entangled network of neurons.
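The installation's actual software is not described, but a rough sketch of such a face-tracking "memory" loop, using OpenCV's stock face detector, might look like the following. The buffer size and the fading policy (old faces simply fall out of a fixed-length queue) are illustrative assumptions.

```python
# Rough sketch, not the installation's actual software: detect faces
# from a camera, "learn" each new face into a bounded memory, and let
# the oldest memories fade as the buffer fills.
from collections import deque
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
memories = deque(maxlen=32)        # old faces fade as new ones arrive

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        memories.append(frame[y:y + h, x:x + w].copy())   # learn a new face
    # The installation would project `memories` into the sculpture;
    # here we just display the most recently learnt face, if any.
    if memories:
        cv2.imshow("memory", memories[-1])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```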

Lecture

11.00 Robots and Avatars Collaborative Futures Panel

Panel members include Professor Noel Sharkey (University of Sheffield), Ron Edwards (Ambient Performance), Ghislaine Boddington (body>data>space), Peter McGowan (Queen Mary University of London), Anna Hill (Space Synapse) and Michael Takeo Magruder (King's Visualisation Lab, King's College London).

Key points:

- Ghislaine Boddington: body>data>space

- NESTA

- Tele-presence (cyborg – humanoid – avatar)

- Telematics (Skype; whole body involved / remotely with 2-3 spaces)

Development: virtual touch / tele-presence / intimacy / tele-intuition

Avatars:

- Motion capture system

- Mii avatars: Nintendo Wii (making your own avatar)

- Multi-identity (Second Life)

- Motion-capture suit: physical-self avatar connected to you in the digital world

- Milo, the virtual child (voice recognition / motion capture)

- EveR-1 can express emotions

- Optical illusion: reality constructed in our heads

- LIREC project: child & robot play chess (robot expressions / relationship)

- Spacesynapse.com

- Future: mobility vs. immersion (connected society – high-tech facilities)

- Emotion AI: automating avatar emotions (biosensors, MIO doll)

Robots and teleporting:

- Roxxxy: sex robot

- Huggable: a teddy bear to hug

- Movie “Surrogates”



Brain-computer interface




A brain–computer interface (BCI), sometimes called a direct neural interface or a brain–machine interface, is a direct communication pathway between a brain and an external device. BCIs are often aimed at assisting, augmenting or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in the scientific literature.


The field of BCI has since blossomed spectacularly, mostly toward neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-nineties.





Artikel "Second Life Avatars controlles by the Human Brain" by Aimee Weber

Second Life Avatars controlles by the Human Brain by…… by Aimee Weber

The Biomedical Engineering Laboratory at Keio University has been up to some cool stuff. They recently announced that they were able to control a Second Life avatar using an electrode-filled headset that monitors the motor cortex and translates the data into control inputs for a Second Life avatar. You can see this technology in action in this video.

So how would this all work and what would it mean for Second Life? I'm going to take a stab at it and say ... this is huge.

Brain-Computer Interface (BCI) isn't a new technology; scientists have been researching BCI to help the physically paralyzed for years. But most BCI experiments have relied on invasive implants that target specific areas of the brain with better signal resolution. Not surprisingly, asking the user base of a virtual world to accept a brain implant poses some difficult marketing challenges. Fortunately, using the electroencephalogram (commonly known as the EEG) as a non-invasive method of getting brain inputs may eventually create a marketable input device for the masses, albeit with some challenges; for example, a BCI using an EEG requires training.
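As a toy sketch of what such trained control might reduce to: imagined movement suppresses the mu rhythm over the opposite motor cortex, so comparing band power between the hemispheres yields a crude movement command. The channel choice, frequency band and thresholds below are illustrative assumptions, not the Keio laboratory's published method.

```python
# Toy EEG-to-avatar pipeline: band-power features over two motor-cortex
# channels mapped to movement commands. Sample rate, band and thresholds
# are invented for illustration.
import numpy as np

FS = 256                      # sample rate in Hz (assumed)
MU_BAND = (8, 12)             # mu rhythm over the motor cortex

def band_power(signal, fs, band):
    """Mean spectral power of `signal` inside `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def classify(left_chan, right_chan):
    """Imagined movement suppresses mu power contralaterally, so the
    hemisphere with lower mu power picks the turn direction."""
    left = band_power(left_chan, FS, MU_BAND)
    right = band_power(right_chan, FS, MU_BAND)
    if abs(left - right) < 0.1 * (left + right):   # roughly balanced
        return "walk_forward"
    return "turn_left" if right < left else "turn_right"

# One second of fake EEG from electrodes C3 (left) and C4 (right):
rng = np.random.default_rng(0)
print(classify(rng.standard_normal(FS), rng.standard_normal(FS)))
```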

An interesting thing about newborn infants is that they're not born with complete control over their motor functions, but rather they go through a slow process of motor development. Initially their brains will burst with semi-random thoughts that result in the child's body parts reacting in a similarly semi-random way. Eventually the child will build a core of positive and negative experiences that encourage it to move in a more meaningful way. For example, grabbing the bottle means yummy food is coming while wildly flailing a limb may mean pain is coming when it hits the side of the crib. This process of self-discovery starts with the torso and neck with gross motor functions and slowly works its way outward to the extremities along with fine motor control such as articulating your fingers.

This whole process basically boils down to learning what kinds of brain impulses produce what kinds of physical-world results. It's interesting to note that this process of self-discovery was even simulated by a spider-like robot created by Hod Lipson, which learned about its own robotic limbs and how to use them to move. This fascinating experiment was demonstrated at TED this March and is well worth a watch.

So if you were to strap on your mind-reading headpiece and fire up your Second Life enabled electroencephalogram, you would likely begin by flailing about in the virtual world like a newborn infant. After a great deal of practice syncing up your brain activity with your avatar's motion, you will begin to develop extremely crude gross motor skills such as moving forward and back, side to side, stopping, and asking complete strangers for spare Linden dollars. (Kidding!)

But eventually every precious angelic newborn becomes a teen, gets lots of body piercings, and becomes a ripper that does diamond kickflips on the half-pipe. I don't know what any of that skateboarding lingo means, but it sounds like it takes a supreme level of motor control to achieve, the kind one gets after years of practice. Well, when the novelty phase of using an EEG to control a virtual avatar ends, the technology becomes more sensitive, and people spend hours, days, months, or even years refining their virtual mind-control skills ... we may not have to settle for GROSS motor control.

At this time the state of Second Life avatar motion is abysmal (no fault of Linden Lab's). Avatar motion simply consists of playing pre-scripted animations painstakingly created in Poser or recorded from a human subject with a motion capture (mocap) device. This allows avatars to exhibit greater bodily expression than rigid models, but falls far short of the real-time physical reactions we take for granted in the real world. Ideally, avatars should be skeletal ragdolls (http://en.wikipedia.org/wiki/Ragdoll_physics) that not only react to forces imparted upon them, but also respond to simulated muscle inputs by the user. According to the Second Life wiki, "Puppeteering" is indeed in the works, but the fact is that a keyboard and mouse are a woefully inadequate method for inputting the multitude of subtle, realistic human bodily controls in real time.

But if the avatar has a direct feed into the motor cortex, an experienced user may actually one day be able to control more than just forward and back. They might be able to cause a ragdoll avatar to wave, hug another resident, wiggle their fingers, or perform complex and beautiful interpretive dance motions complete with facial expressions showing love, fear, doubt, and passion ... all simply by thinking about it.

Of course this technology goes beyond Second Life. It could be used to remotely control anthropomorphic robots (actually, nothing says they have to be anthropomorphic!) to perform tasks that are dangerous or hazardous for humans. The physically impaired, such as Stephen Hawking, could enjoy new degrees of freedom, not just in exploring virtual worlds but in the real world, using mind-controlled robotic assists. And as long as I'm drunk with optimism, perhaps in the very long term all humans will be outfitted with devices that make our fleeting desires a reality (Coq au vin for dinner! I was just thinking I was in the mood for a nice coq au vin!)

When enabling the human brain to input data directly into a computer becomes a mature, well established science, the next challenge will take us in the other direction ... feeding computer information directly into our minds. Sure, we may be able to make our avatars hug a friend in the virtual world, but when will we feel the warmth of their loving embrace?

Secret Language




CODE: A system which substitutes certain symbols, words, or groups of letters for the words or phrases or whole messages of plaintext.
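A minimal sketch of that definition, with an invented codebook that substitutes symbols for whole words:

```python
# Illustration of the "code" definition above: whole words are replaced
# by arbitrary symbols from a shared codebook (entries invented here).
codebook = {"attack": "EAGLE", "at": "RIVER", "dawn": "CANDLE"}
decode = {v: k for k, v in codebook.items()}

plaintext = "attack at dawn"
encoded = " ".join(codebook[w] for w in plaintext.split())
print(encoded)                                       # EAGLE RIVER CANDLE
print(" ".join(decode[w] for w in encoded.split()))  # attack at dawn
```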


Secret writing in lemon juice

Squeeze a lemon into a cup or egg cup. With a toothpick, write out your message "between the lines". To read the message, heat the paper over a 150-watt globe.




In chemical ink:

Mix a quarter-teaspoon of iron sulphate with a quarter-cup of water. Write the message as before. Mix a quarter-teaspoon of washing soda with a quarter-cup of water. Dab it onto the message with cotton wool and wait for the message to reveal itself.



The St. Cyr cipher

Based on the simple principle of sliding alphabets and developed by the famous French military college of the same name, this is one of the most effective simple ciphers. To make one you need two strips of white cartridge paper, one about 3 cm wide and 20 cm long, and the other 1 cm wide and 40 cm long. Cut a slot in the shorter strip long enough to fit a normal alphabet above it. Write in the alphabet.
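The slide is mechanically equivalent to shifting the second alphabet by a fixed offset; a short sketch of the same lookup in code:

```python
# The sliding-alphabet principle behind the St. Cyr slide: the moving
# strip offsets the lower alphabet by `shift` places (a Caesar-style
# substitution). The paper device performs this lookup mechanically.
import string

ALPHABET = string.ascii_uppercase

def st_cyr(text, shift, decrypt=False):
    """Encrypt (or decrypt) by sliding the lower alphabet `shift` places."""
    if decrypt:
        shift = -shift
    slid = {a: ALPHABET[(i + shift) % 26] for i, a in enumerate(ALPHABET)}
    return "".join(slid.get(c, c) for c in text.upper())

msg = st_cyr("MEET AT NOON", 5)
print(msg)                           # RJJY FY STTS
print(st_cyr(msg, 5, decrypt=True))  # MEET AT NOON
```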




Other examples of secret languages: