Final Project Documentation: The Wobble Box 2.0

Arduino,Assignment,Audio,Final Project,Instrument,Max,Sensors,Software — Jake Berntsen @ 9:46 pm

Presenting my original “wobble box” to the class and Ali’s guests was a valuable experience.  The criticisms I received were relatively consistent, and I have summarized them to the best of my ability below:

  • The box cannot be used to create music as an independent object.  When I performed for the class at the critique, I was using an Akai APC40 alongside the wobble box.  I was using the APC to launch musical ideas that would then be altered using the wobble box, which I had synced to a variety of audio effects.  The complaint here was that it was unclear exactly how much I was doing to create what the audience was hearing in real time, and very clear that I wasn’t controlling 100% of the noises coming out of my computer.  At any rate, it was impossible to trigger MIDI notes using the wobble box, which meant the melody had to come from an external source.
  • The box only has one axis to play with.  At the time of the critique, the wobble box only had one working distance sensor attached to the Teensy, which meant I could only control one parameter at a time with my hand.  Many spectators commented that it seemed logical to have at least two, allowing me to get more sounds out of various hand motions, or even to use two hands at once.
  • The box doesn’t look any particular way, and isn’t built particularly well.  The wobble box was much bigger than it needed to be to fit the parts inside it, and little to no thought went into the design and placement of the sensors.  It was sometimes difficult to know whether it was working, and some of the connections weren’t very stable.  Furthermore, the mini-USB plug on the side of the device sometimes moved around when you tried to plug in the cord.

In the interest of addressing the concerns above, I completely redesigned the wobble box, abandoning the old prototype for a new model.

IMG_1636

The most obviously improved element of the new box is the design.  Now that I knew exactly what the necessary electronic parts were, I removed all the extra space in the box.  The new design saves about three square inches of space, and the holes cut for the distance sensors are much neater.

IMG_1643

I applied three layers of surface treatment: a green primer, a metallic overcoat, and a clear glaze.  The result is a luminescent coloring and a rubber-esque texture that prevents the box from sliding around when placed on a wooden surface.  In my opinion, it looks nice.

IMG_1639 IMG_1645

A strong LED light was placed exactly between the two distance sensors, illuminating the ideal place for the user to put his/her hand.  This also provides a clue for the audience, making it clearer exactly what the functionality of the box is by illuminating the hand of the user.  The effect can be rather eerie in dark rooms.  Perhaps most importantly, it indicates that the Teensy microcontroller has been recognized by Max, a feature lacking in the last prototype.  This saved me many headaches the second time around.

IMG_1640

IMG_1644

The new box has two new distance sensors with differing ranges.  One transmits very fine values between about 2 and 10 inches, the other larger values between about 4 and 18 inches.  Staggering the ranges like this allows a whole new world of control for the user, such as tilting the hand from front to back, using two hands with complete independence, etc.

IMG_1642

Finally, I moved the entire USB connection to the interior of the device, electing instead to just create a hole for the cord to come out.  After I secured the Teensy within the box, the connection was much stronger than in the previous prototype.

In addition to fixing the hardware, I created a few new software environments between Max and Ableton that allow for more expressive use of the box.  The first utilized both Max and Ableton Live to create an interactive art piece: as the user stimulated the two distance sensors, a video captured by the laptop camera would be distorted along with an audio track of the user talking into the computer microphone.

Moving forward, my goal was to extend the ability to use the box as a true instrument by granting a way to trigger pitches using only the box and a computer.  To achieve this, I wrote a Max for Live patch that pairs a note step-sequencer with a microphone.  Every time the volume of the signal picked up by the microphone exceeds a certain threshold, the melody advances by one step.  Using this, the user can simply snap or clap to progress the melody, while using the box to control the timbre of the sound.  I then randomized the melody so that it selected random notes from specific scales, so as to allow for improvisation.

The final software environment I wrote, shown below, allows the user to trigger notes using a MIDI keyboard and affect the sounds in a variety of ways using the box.  For the sake of exhibiting how this method can be combined with any hardware the user desires, I create a few sounds on an APC40 that I then manipulate with the box.
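The clap-to-advance logic can be sketched in Python. This is a hypothetical illustration of the idea, not the actual patch (which lives in Max for Live); the scale, threshold, and seed are made-up values:

```python
import random

class ThresholdStepper:
    """Advance a randomized melody one step each time the input
    level crosses a threshold (sketch of the Max for Live patch)."""

    def __init__(self, scale, threshold=0.5, seed=None):
        self.scale = scale          # MIDI notes to choose from
        self.threshold = threshold  # normalized amplitude, 0..1
        self._above = False         # rising-edge detector state
        self._rng = random.Random(seed)

    def process(self, level):
        """Feed one amplitude reading; return a random note from the
        scale on a rising threshold crossing (a snap/clap), else None."""
        fired = level >= self.threshold and not self._above
        self._above = level >= self.threshold
        if fired:
            return self._rng.choice(self.scale)
        return None

# hypothetical example: C minor pentatonic, threshold 0.6
seq = ThresholdStepper([60, 63, 65, 67, 70], threshold=0.6, seed=1)
notes = [seq.process(v) for v in [0.1, 0.9, 0.8, 0.2, 0.7]]
```

The rising-edge check is the important detail: a sustained loud signal crosses the threshold once and fires once, so a single clap advances the melody by exactly one step.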

Final Project Presentation – Ziyun Peng

Assignment,Final Project,Max,Sensors — ziyunpeng @ 10:20 pm

Face Yoga Game

Idea

There’s something interesting about the unattractiveness one goes through on the path of pursuing beauty. You put on a facial mask to moisturize and tone up your skin, even though it makes you look like a ghost. You do face yoga exercises in order to get rid of certain lines on your face, but in the meantime you have to make many awkward faces which you definitely wouldn’t want others to see. The Face Yoga Game aims at amplifying the funniness and the paradox of beauty by making a game out of one’s face.

Set-up

Myoelectric sensors —> Arduino —Maxuino—> Max/MSP (gesture recognition) —OSC—> Processing (game)

The myoelectric sensor electrodes are replaced with conductive fabric so they can be sewn onto a mask that the player wears. The face gestures that correspond to the face yoga video are pre-learnt in Max/MSP using the Gesture Follower external developed at IRCAM. When the player makes a facial expression under the mask, it is detected in Max/MSP, and the corresponding gesture number is sent to Processing to determine whether the player is performing the right gesture.
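The recognition step can be sketched in Python. The real Gesture Follower is a much more sophisticated, HMM-based follower; this hypothetical stand-in just picks the pre-learnt template whose mean sensor frame is closest to the live frame, and the template values below are invented for illustration:

```python
def classify_gesture(frame, templates):
    """Toy stand-in for the Gesture Follower: return the number of
    the template closest (Euclidean) to the live sensor frame.
    That gesture number is what gets sent on to the game over OSC."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda g: dist(frame, templates[g]))

# hypothetical mean EMG frames for two pre-learnt face gestures
templates = {1: [0.8, 0.1, 0.2],   # e.g. "cheek lift"
             2: [0.1, 0.9, 0.7]}   # e.g. "jaw open"

gesture = classify_gesture([0.7, 0.2, 0.1], templates)
```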

How does the game work?

Face_Yoga

 

The game is in the scenario of “daily beauty care” where you have a mirror, a moisturizer and a screen for game play.

Step 1: Look at the mirror and put on the mask

Step 2: Apply the moisturizer (for conductivity)

Step 3: Start practicing with the game!

The mechanism is simple: the player is supposed to do the same gesture as the instructor does in order to move the object displayed on the screen to the target place.

The final presentation is in a semi-performative form to tell

Final Project Milestone 3 – Ziyun Peng

Assignment,Final Project,Max,Software — ziyunpeng @ 10:05 pm

Since my project has switched from a musical instrument to a beauty-practice instrument used to play the face yoga game that I’m designing, my third milestone is to make the visuals and the game mechanics.

The first step was to prepare the video content. I split the videos into five parts according to the beauty steps. After watching each clip, the player is supposed to follow the lady’s instruction and hold the gesture for 5 seconds; translated into the language of the game, that means moving the object on the screen to the target place by holding the corresponding gesture.
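The hold-for-5-seconds mechanic can be sketched in Python (a hypothetical illustration; the actual game is written in Processing, and the gesture numbers here are made up):

```python
class HoldGauge:
    """Track how long the target gesture has been held; the game can
    move the on-screen object in proportion to the returned progress."""

    def __init__(self, target, hold_time=5.0):
        self.target = target        # gesture number to hold
        self.hold_time = hold_time  # seconds required (5 s in the game)
        self.held = 0.0

    def update(self, gesture, dt):
        """Call once per frame with the detected gesture number and
        the frame time in seconds; returns progress in 0..1."""
        if gesture == self.target:
            self.held = min(self.held + dt, self.hold_time)
        else:
            self.held = 0.0  # the hold breaks if the face relaxes
        return self.held / self.hold_time

gauge = HoldGauge(target=3, hold_time=5.0)
for _ in range(150):                  # 150 frames at ~30 fps = 5 s
    progress = gauge.update(3, 1 / 30)
```

Resetting the timer on any wrong gesture is a design choice: it forces the player to actually sustain the pose rather than accumulate it in bursts.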

The game is made in Processing, and it gets gesture results from the sensors in the wearable mask, sent from Max/MSP via the OSC protocol.

max_patch

 

Examples are shown below:

game_step_1

 

game_step_2

Video credits to the wonderful face yoga master Fumiko Takatsu.

 

 

Final Project Documentation: The Wobble Box

Assignment,Audio,Final Project,Laser Cutter,Max,Sensors — Jake Berntsen @ 5:16 pm

After taking time to consider exactly what I hope to accomplish with my device, the aim of my project has somewhat shifted. Rather than attempt to build a sound-controller of some kind that includes everything I like about current models while implementing a few improvements, I’ve decided to focus only on the improvements I’d like to see. Specifically, the improvements I’ve been striving for are simplicity and interesting sensors, so I’ve been spending all of my time trying to make small devices with very specific intentions. My first success has been the creation of what I’m calling the “Wobble Box.”

IMG_1522

IMG_1524

Simply stated, the box contains two distance sensors, each plugged into a Teensy 2.0.  I receive data from the sensors in Max, where I scale it and “normalize” it to remove peaks, making it friendlier for sound modulation.  While running Max, I can open Ableton Live and map certain audio effects to parameters in Max.  Using this technique, I assigned the distance from the box to the cutoff of a low-pass filter, as well as a slight frequency modulation and resonance shift.  These are the core elements of the traditional Jamaican/dubstep sound of a “wobble bass,” hence the name of the box.  While I chose this particular sound, the data from the sensors can be used to control any parameters within Ableton.
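The scale-and-smooth step can be sketched in Python. This is a hypothetical reconstruction of what the Max patch does, using a clamped linear scaler plus a one-pole low-pass filter to tame spikes; the raw sensor range and filter coefficient are invented values:

```python
def scale(value, in_min, in_max):
    """Map a raw sensor reading into 0..1, clamped at both ends."""
    x = (value - in_min) / (in_max - in_min)
    return max(0.0, min(1.0, x))

class Smoother:
    """One-pole low-pass filter to 'normalize' away sensor spikes;
    smaller alpha means heavier smoothing."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.y = None
    def step(self, x):
        self.y = x if self.y is None else self.y + self.alpha * (x - self.y)
        return self.y

# hypothetical raw readings from one distance sensor, with one spike
sm = Smoother(alpha=0.25)
out = [sm.step(scale(v, 40, 600)) for v in [100, 105, 580, 110, 102]]
```

The smoothed output stays in 0..1 and flattens the momentary spike at 580 instead of passing it straight to the filter cutoff, which is what makes the control friendly for sound modulation.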

IMG_1536

IMG_1535

IMG_1532

Designing this box was a challenge for me because of my limited experience with hardware; soldering the distance sensors to the board was difficult to say the least, and operating a laser-cutter was a first for me.  However, it forced me to learn a lot about the basics of electronics and I now feel confident in my ability to design a better prototype that is smaller, sleeker, and more compatible with similar devices.  I’ve already begun working on a similar box with joysticks, and a third with light sensors.  I plan to make the boxes connectible with magnets.

IMG_1528

For my presentation in class, I will be using my device as well as a standard Akai APC40.  The wobble box is not capable of producing its own melodies, nor is it meant to; rather, it changes effects on existing melodies.  Because of this, I will be using a live clip-launching method to perform with it, making a secondary piece of hardware necessary.

 

Final Project Milestone 2 – Jake Marsico

Assignment,Final Project,Max,Uncategorized — jmarsico @ 1:21 pm

_MG_2036

The Shoot

This past weekend I finished the video shoot with The Moon Baby. Over the course of three and a half hours, we shot over 80 clips. A key part of the project was to build a portrait rig that would allow the subject to register her face at the beginning of every clip. The first prototype of this rig consisted of a two way mirror that had registration marks on it. The mirror prototype proved to be inaccurate.

The second prototype, which we used for the shoot, relied on a direct video feed from the video camera, a projector and a projection surface with a hole cut out for the camera to look through.

 

 

At the center of this rig was a max/msp/jitter patch that overlaid a live feed from the video camera on top of a still “register image.”  This way, the subject was able to see her face as the camera saw it, and line up her eyes, nose, mouth, and makeup with a constant still image.  See an image of the patch below:

max_screenshot

 

The patch relied on Blair Neal’s Canon2Syphon application, which pulls video from the Canon DSLR’s USB cable and places it into a Syphon stream.  That stream is then picked up by the max/msp/jitter patch.

Here is a diagram of the entire projection rig:

Woo portrait setup

Soon into the shoot, we realized a flaw with the system: the Canon camera isn’t able to record video to its CF card while its video feed is being sent to the computer.  As a result, we had to unplug the camera after the subject registered her face, record the clip, then plug the camera back in.  We also had to close and reopen Canon2Syphon after each clip was recorded.

SONY DSC

Wide shot of the entire setup.

 

To light the subject, I used a combination of DMX-controlled fluorescent and LED lights along with several flags, reflectors and diffusers.

 

 

Project Milestone 1 – Jake Marsico

Assignment,Final Project,Max — jmarsico @ 12:06 pm

Portrait Jig Prototype

photo 8-2 copy

One of the primary challenges of delivering fluid non-linear video is to make each clip transition as seamless as possible. To do this, I’m working on a jig that will allow the actor to ‘register’ the position of his face at the end of each short clip. As you can see in the image below, the jig revolves around a two-way mirror that sits between the camera and the actor.  This will allow the actor to mark off eye/nose/mouth registers on the mirror and adjust his face into those registers at the end of every movement.

 

photo 4-2

The camera will be located directly against the mirror on the other side to minimize any glare.  As you can see below, the actor will not see the camera.  Likewise, video shot from the camera will look as though the mirror is invisible. This can be seen in the test video used in the max/openTSPS demo below.

OpenTSPS-controlled Video Sequencer

 

The software for this project is broken up into two sections: a Markov chain state machine and a video queueing system. The video above demonstrates the first iteration of the video queueing system, which is controlled by OSC messages coming from openTSPS, an open-source people-tracking application built on openCV and openFrameworks.  In short, the max/msp application queues up a video based on how many objects the openTSPS application is tracking.
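The count-to-clip mapping can be sketched in Python (a hypothetical illustration of the queueing logic; the clip names are made up and the real patch is in max/msp):

```python
def clip_for_count(count, clips):
    """Pick a clip from the number of objects openTSPS reports;
    counts past the end of the list clamp to the last clip."""
    return clips[min(max(count, 0), len(clips) - 1)]

# hypothetical clip list, indexed by how many people are tracked
clips = ["empty_room.mov", "one_person.mov", "two_people.mov", "crowd.mov"]
```

Clamping at both ends keeps the sequencer from crashing when openTSPS briefly reports a spurious count.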

Screen Shot 2013-10-29 at 12.00.41 PM

The second part of the application is a probability-based state transition machine that will be responsible for selecting which emotional state will be presented to the audience.  At the core of the state transition machine is an external object called ‘markov’, written by Nao Tokui. Mapping out the probability of each possible state transition based on a history of previous state transitions will require much thought and time.
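The core of such a state transition machine can be sketched in Python. This is a hypothetical stand-in for the ‘markov’ external, and the emotional states and probabilities below are invented for illustration (the real table is what will require the thought and time mentioned above):

```python
import random

def next_state(current, transitions, rng=random):
    """Sample the next emotional state from the transition-probability
    row for the current state (sketch of the 'markov' object)."""
    states, probs = zip(*transitions[current].items())
    return rng.choices(states, weights=probs, k=1)[0]

# hypothetical first-order transition table; each row sums to 1.0
transitions = {
    "calm":    {"calm": 0.6, "vain": 0.3, "fragile": 0.1},
    "vain":    {"calm": 0.2, "vain": 0.5, "fragile": 0.3},
    "fragile": {"calm": 0.5, "vain": 0.1, "fragile": 0.4},
}
```

Conditioning on a longer history of previous states, as the post describes, would mean keying the table on tuples of recent states rather than on the single current state.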

Sound Design

Along with queueing up different video clips, the max patch will be responsible for controlling the different filters and effects that the actor’s voice will be passed through. For the most part, this section of the patch will rely on different groups of signal degradation objects and resonators~.

 

Actor Confirmed

Screen Shot 2013-10-29 at 11.38.57 AM

Sam Perry has confirmed that he’d like to collaborate on this project. His character The Moon Baby is fragile, grotesque, self-obsessed and vain.  These characteristics mesh well with the emotional state changes shown in the project.

Final Project Milestone 1 – Ziyun Peng

Assignment,Final Project,Max,Sensors — ziyunpeng @ 10:45 am

My first milestone is to test out the sensors I’m interested in using.

Breath Sensor: for now I’m still unable to find any off-the-shelf sensor that can differentiate between inhale and exhale. Hence I took the approach of using a homemade microphone to capture audio, analyzing it with Tristan Jehan‘s analyzer object, with which I found that the main difference between exhale and inhale is brightness. A nice thing about using a microphone is that I can also easily get the amplitude of the breath, which indicates its velocity.  The problem with this method is that it needs threshold calibration each time you change the environment, and the homemade mic seems to be moody – unstable performance on battery power.
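Assuming the “brightness” reported by the analyzer object corresponds to the spectral centroid (a common definition of brightness, though that is an assumption here), the measurement can be sketched in Python with a naive DFT; the sample rate and test tones are made-up values:

```python
import cmath, math

def spectral_centroid(signal, sample_rate):
    """Brightness measure: magnitude-weighted mean frequency over a
    naive DFT of the frame. Brighter (exhale-like) frames score higher."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    freqs = [k * sample_rate / n for k in range(n // 2)]
    total = sum(mags) or 1.0
    return sum(f * m for f, m in zip(freqs, mags)) / total

# two hypothetical test frames: a dull tone and a bright tone
sr = 8000
low  = [math.sin(2 * math.pi * 200  * t / sr) for t in range(256)]
high = [math.sin(2 * math.pi * 1500 * t / sr) for t in range(256)]
```

Classifying inhale vs. exhale would then reduce to thresholding the centroid, which is exactly why the threshold has to be recalibrated whenever the room changes.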

mini-mic

breath

Breath Sensor from kaikai on Vimeo.

Muscle Sensor: it works great on the bicep (although I don’t have big ones), but the readings on the face are subtle – it’s only effective for big facial expressions, so the face yoga faces worked. It also works for smiling, opening the mouth, and frowning (if you place the electrodes just right). During the experimentation, I found it’s not a very comfortable experience having sticky electrodes on your face, but alternatives like stretch fabric + conductive fabric could possibly solve the problem, which is my next step. Also, the readings don’t really differentiate from one another, meaning you can’t tell whether it’s an open mouth or a smile just by looking at a single reading. Either more collection points need to be added, or it could be coupled with FaceOSC, which I think is the more likely approach for me.

EMG_faces

FaceOSC: I recorded several facial expressions and compared the value arrays. The results show that the mouth width, jaw, and nostrils turned out to be the most reactive variables. It doesn’t perform very well with the face yoga faces, but it does a better job of differentiating ordinary expressions, since it offers more parameters to reference.

faceOSC_faces

My next step is to keep playing around with the sensors, try to figure out a more stable sensor solution (an organic combination of them), and put them together into one compact system.

HISS Impulse Response Toolbox for Max

Max,Reference,Software,Theory — Usmani @ 12:04 am

Slides and patches from the HISS workshop are available here.

Final Project Proposal – Can Ozbay

 

glassPresent

www.glissglass.com

 

Final Project Proposal – Jakob Marsico

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2024 Hybrid Instrument Building 2014 | powered by WordPress with Barecity