Final Project: Ziyun Peng

Assignment,Final Project,Uncategorized — ziyunpeng @ 4:16 pm

 

FACE  YOGA

FACE_YOGA

 

 

Idea

There’s something interesting about the unattractiveness one goes through on the path of pursuing beauty. You put on a facial mask to moisturize and tone up your skin, while it makes you look like a ghost. You do face yoga exercises to get rid of certain lines on your face, but in the meantime you have to make many awkward faces that you definitely wouldn’t want others to see. The Face Yoga Game aims to amplify the funniness and the paradox of beauty by making a game out of one’s face.

 

Face Yoga Game from kaikai on Vimeo.

Setup

schematic

 

setup

Learnings

– Machine learning tools: Gesture Follower by IRCAM

This is a very handy tool and very easy to use. There are more features worth digging into and playing with in the future, such as channel weighting and expected speed. I’m glad I got to apply some basics of machine learning in this project, and I’m certain it’ll be helpful for my future projects too.

– Conductive fabrics

This is another thing that I’ve been interested in but never had an excuse to play with. The disappointment in this project was that I had to apply water to the fabric every time I wanted to use it, though that might be specific to the myoelectric sensor I was using. The performance was also not as good as with the medical electrodes, possibly due to the contact surface, and since fabric is non-sticky, it moves around while you’re using it.

Obstacles & Potential Improvements

– Unstable performance with the sensors

Although part of this project was to experiment with the possibility of detecting facial movements without computer vision, the performance wasn’t as good as expected, so a combination of both approaches might be a better solution in the future. One alternative I’ve been imagining is to use a transparent mask instead of the current one, so that people can see their facial expressions through it, with colored marker points stuck on for computer vision to track. Better lighting would be required, but the vanity light still fits this setting.

– User experience and calibration

My ultimate goal is to get everyone involved in the fun; however, opening the game to all players means the gestures I trained on myself beforehand may not work for everyone, and this was proven on show day. It was suggested that I run a calibration at the start of each game session, which I think is a very good idea.
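A per-player calibration could be as simple as recording the sensor while the player relaxes and while they hold their strongest gesture, then rescaling every later reading into that player’s own range. Here is a hedged Python sketch of the idea (the real patch lives in Max/MSP; these function names are mine, not part of the project):

```python
def calibrate(samples):
    """Derive this player's personal range from a short recording that
    includes both a relaxed face and their strongest gesture."""
    return min(samples), max(samples)

def normalize(reading, rest, effort):
    """Map a raw sensor reading into 0..1 relative to the calibrated range."""
    span = max(effort - rest, 1e-9)   # avoid division by zero
    return min(max((reading - rest) / span, 0.0), 1.0)
```

With per-player `rest` and `effort` values, the same gesture thresholds can then be reused across players without retraining.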

– Vanity light bar

 

 

Final Project Presentation – Ziyun Peng

Assignment,Final Project,Max,Sensors — ziyunpeng @ 10:20 pm

Face Yoga Game

Idea

There’s something interesting about the unattractiveness one goes through on the path of pursuing beauty. You put on a facial mask to moisturize and tone up your skin, while it makes you look like a ghost. You do face yoga exercises to get rid of certain lines on your face, but in the meantime you have to make many awkward faces that you definitely wouldn’t want others to see. The Face Yoga Game aims to amplify the funniness and the paradox of beauty by making a game out of one’s face.

Set-up

Myoelectric sensors —> Arduino —Maxuino—> Max/MSP (gesture recognition) —OSC—> Processing (game)

The myoelectric sensor electrodes are replaced with conductive fabric so they can be sewn onto a mask that the player wears. The face gestures that correspond to the face yoga video are pre-trained in Max/MSP using the Gesture Follower external developed at IRCAM. When the player makes facial expressions under the mask, they are detected in Max/MSP, and the corresponding gesture number is sent to Processing to determine whether the player is performing the right gesture.
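As a rough illustration of what travels over that last link: an OSC message wrapping the recognized gesture number is just a padded address string, a type-tag string, and a big-endian integer. This is a minimal Python sketch of the OSC 1.0 encoding; the `/gesture` address is a hypothetical name, not necessarily what the patch uses.

```python
import struct

def osc_message(address, value):
    """Encode a one-integer OSC 1.0 message: null-terminated address
    padded to 4 bytes, the type-tag string ",i", then a big-endian int32."""
    def pad(raw):
        raw += b"\x00"                          # OSC strings are null-terminated
        return raw + b"\x00" * (-len(raw) % 4)  # pad to a 4-byte boundary
    return pad(address.encode()) + pad(b",i") + struct.pack(">i", value)

# e.g. gesture number 3, recognized in Max/MSP, on its way to Processing
packet = osc_message("/gesture", 3)
```

In practice Max’s `udpsend` and an OSC library on the Processing side handle this encoding; the sketch just shows why a single gesture number is cheap to ship around.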

How does the game work?

Face_Yoga

 

The game is in the scenario of “daily beauty care” where you have a mirror, a moisturizer and a screen for game play.

Step 1: Look at the mirror and put on the mask

Step 2: Apply the moisturizer (for conductivity)

Step 3: Start practicing with the game!

The mechanism is simple: the player is supposed to make the same gesture as the instructor in order to move the object displayed on the screen to the target place.
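A minimal sketch of that mechanic in Python (the real game runs in Processing; the names and numbers here are placeholder assumptions, e.g. 150 frames ≈ 5 seconds at 30 fps):

```python
def play_step(frames, expected, target=150):
    """One game step: the object advances toward the target only while
    the recognized gesture matches the expected one; a dropped gesture
    resets progress. 150 frames is roughly 5 seconds at 30 fps."""
    position = 0
    for gesture in frames:
        if gesture == expected:
            position += 1        # keep holding: object moves closer
        else:
            position = 0         # gesture lost: object falls back
        if position >= target:
            return True          # object reached the target place
    return False
```

Resetting on a dropped gesture is one possible design choice; it forces the player to actually hold the pose rather than flicker in and out of it.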

The final presentation is in a semi-performative form.

Final Project Milestone 3 – Ziyun Peng

Assignment,Final Project,Max,Software — ziyunpeng @ 10:05 pm

Since my project has switched from a musical instrument to a beauty-practice instrument used to play the face yoga game that I’m designing, my third milestone is to make the visuals and the game mechanics.

The first step is to prepare the video content. I split the video into 5 parts according to the beauty steps. After watching each clip, the player is supposed to follow the instructor’s lead and hold the gesture for 5 seconds – translated into the language of the game, that means moving the object on the screen to the target place by holding the corresponding gesture.

The game is made in Processing, and it receives the gesture results, detected in Max/MSP from the sensors in the wearable mask, via the OSC protocol.

max_patch

 

Examples are shown as follows:

game_step_1

 

game_step_2

Video credit: the wonderful face yoga master Fumiko Takatsu.

 

 

Final Project Milestone 2 – Ziyun Peng

Assignment,Final Project — ziyunpeng @ 10:56 pm

My second milestone is to make a stable, ready-to-use system.

Mask
After several tries, I finally decided where the sensing points should be on the face and replaced the electrodes with conductive fabric, following this tutorial by the sensor kit provider. It took me a couple of Amazon trips to find the right-sized snap buttons, and I finally got the right ones from Lo Ann. The right size is 7/16 inch (1.1 cm), as shown in the picture below, in case any of you need it in the future.

2013-11-22 22.58.06

mask

Center Board
Since there won’t be any change to the circuit, it’s time to solder! What you can see on the board are two muscle sensor breakouts, an Arduino Nano, and two 9V batteries. The wires coming out will be connected to the snap buttons on the mask for collecting data from the face.

2013-11-22 21.37.16

Final Project Milestone 1 – Ziyun Peng

Assignment,Final Project,Max,Sensors — ziyunpeng @ 10:45 am

My first milestone is to test out the sensors I’m interested in using.

Breath Sensor: for now I’m still unable to find any off-the-shelf sensor that can differentiate between inhaling and exhaling. Hence I took the approach of using a homemade microphone to capture audio, analyzed with Tristan Jehan‘s analyzer object, with which I found that the main difference between exhaling and inhaling is the brightness. A nice thing about using a microphone is that I can also easily get the amplitude of the breath, which indicates its velocity. The problem with this method is that it needs threshold calibration every time you change the environment, and the homemade mic seems to be moody – unstable performance on battery power.
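For reference, “brightness” here is essentially the spectral centroid – the magnitude-weighted mean frequency of a short audio frame. A rough Python sketch of the measure (a naive DFT for clarity; the real analysis happens inside the analyzer object in Max, which is far more efficient):

```python
import math

def spectral_centroid(signal, sample_rate):
    """Brightness measure: magnitude-weighted mean frequency of a frame,
    computed with a naive DFT over the positive-frequency bins."""
    n = len(signal)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)              # magnitude of bin k
        num += (k * sample_rate / n) * mag    # frequency weighted by magnitude
        den += mag
    return num / den if den else 0.0
```

A frame dominated by high-frequency energy yields a higher centroid than one dominated by low-frequency energy, which is the kind of difference a fixed threshold can then separate – and why that threshold needs recalibrating whenever the acoustic environment changes.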

mini-mic

breath

Breath Sensor from kaikai on Vimeo.

Muscle Sensor: it works great on the bicep (although I don’t have big ones), but the readings on the face are subtle – it’s only effective for big facial expressions, so face yoga worked. It also works for smiling, opening the mouth, and frowning (if you place the electrodes just right). During the experimentation I found that having sticky electrodes on your face is not a very comfortable experience, but alternatives like stretchy fabric + conductive fabric could solve that problem, which is my next step. Also, the readings don’t really differentiate gestures from each other, meaning you won’t know whether the player is opening their mouth or smiling just by looking at a single reading. Either more collection points need to be added, or it could be coupled with FaceOSC, which I think is the more likely approach.
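One way to act on the “more collection points” idea: with several channels, each gesture becomes a pattern across channels rather than a single level, and a nearest-template match can tell patterns apart even when individual readings overlap. A Python sketch with made-up channel values (the channel placements and numbers are purely illustrative):

```python
def classify(reading, templates):
    """Nearest-template match over several EMG channels: a single channel
    can't separate smiling from opening the mouth, but the pattern across
    channels can. `templates` maps gesture name -> per-channel levels."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(reading, templates[name]))

# hypothetical averaged readings from three channels (cheek, jaw, brow)
templates = {
    "smile":      (0.8, 0.2, 0.1),
    "open_mouth": (0.3, 0.9, 0.1),
    "frown":      (0.1, 0.2, 0.7),
}
```

The Gesture Follower generalizes this idea to time-varying multichannel signals, which is why it suits the held face yoga poses.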

EMG_faces

FaceOSC: I recorded several facial expressions and compared the value arrays. The results show that the mouth width, jaw, and nostrils turned out to be the most reactive variables. It doesn’t perform very well with the face yoga faces, but it does a better job of differentiating expressions since it offers more parameters to reference.

faceOSC_faces

My next step is to keep playing around with the sensors, figure out a more stable sensing solution (an organic combination of them), and put everything together into one compact system.

Final Project Proposal – Ziyun Peng

Assignment,Audio,Final Project,Hardware,Sensors — ziyunpeng @ 1:11 am

Assignment 2: “Comfort Noise” by Haochuan Liu & Ziyun Peng (2013)

Arduino,Assignment,Submission — ziyunpeng @ 10:40 pm

fini_500

Idea

People who don’t usually pay attention to noise often take it for granted as disturbing sound, omitting the musical part of it – the rhythm, the melody, and the harmonics. We hear it, and we want to translate and amplify the beauty of noise for people who haven’t noticed it.

Why pillow?

The pillow is a metaphor for comfort – this is what we want people to perceive when hearing noise through our instrument, contrary to the impression noise usually makes on people.

When you place your head on a pillow, it’s almost as if you’re in a semi-isolated space – your head is surrounded by cotton, and visual signals are largely reduced since you’re now looking upward and there’s not much happening in the air. We believe that by minimizing visual content, one’s hearing becomes more sensitive.

Make

We use computational tools (Pure Data & SPEAR) and our musical ears to extract the musical information in the noise, then map it to musical sounds (drum and synth) that people are familiar with.

The Pduino object (for reading the Arduino in Pure Data) and PonyoMixer (a multi-channel mixer) helped us a lot.

pd

Inside the pillow, there’s a PS2 joystick used to track the user’s head motions. It’s a five-direction joystick, but in this project we’re only using left and right. We had a lot of fun making this.

sensor_500

Here’s the mix box we made for users to adjust the volume and the balance between the pure noise and the musical noise-extraction sounds.

mixBox

The more detailed technical setting is as listed below:

Raspberry Pi – runs Pure Data

Pure Data – reads sensor values from the Arduino and sends controls to the sounds

Arduino Nano – connected to the sensors and the Raspberry Pi

Joystick – tracks head motion

Pots – mix and volume control

Switch – ON/OFF

LED – ON/OFF indicator
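For the mix pot, one common way to map a single knob to the balance between the pure noise and the musical extraction is an equal-power crossfade, so the overall loudness stays roughly constant across the sweep. A small Python sketch (this illustrates the mapping in general, not the exact curve our Pd patch uses):

```python
import math

def crossfade(mix):
    """Equal-power crossfade for a mix pot normalized to 0..1:
    0.0 is all pure noise, 1.0 is all musical extraction.
    The squared gains sum to 1 everywhere along the sweep."""
    noise_gain = math.cos(mix * math.pi / 2)
    musical_gain = math.sin(mix * math.pi / 2)
    return noise_gain, musical_gain
```

A plain linear crossfade dips in perceived loudness at the midpoint; the cosine/sine pair avoids that.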

 

 

Instrument: “Face Instrument” by Daito Manabe ( 2008 )

Instrument,Reference — ziyunpeng @ 10:19 pm

Face_DaitoManabe

More…

Instrument: “Quantum Parallelograph” by Patrick Stevenson Keating ( 2011 )

Instrument,Reference — ziyunpeng @ 10:18 pm

QuantumParallelograph

More…

Instrument: “Seamoons” by Maywa Denki ( 2004 )

Instrument,Reference — ziyunpeng @ 10:16 pm

seamoons

Video    More..

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2017 Hybrid Instrument Building 2014 | powered by WordPress with Barecity