Final Project Documentation: The Wobble Box 2.0

Arduino,Assignment,Audio,Final Project,Instrument,Max,Sensors,Software — Jake Berntsen @ 9:46 pm

Presenting my original “wobble box” to the class and Ali’s guests was a valuable experience.  The criticisms I received were relatively consistent, and I have summarized them to the best of my ability below:

  • The box cannot be used to create music as an independent object.  When I performed for the class at the critique, I was using an Akai APC40 alongside the wobble box.  I was using the APC to launch musical ideas that would then be altered using the wobble box, which I had synced to a variety of audio effects.  The complaint here was that it was unclear exactly how much I was doing to create what the audience was hearing in real time, and very clear that I wasn’t controlling 100% of the noises coming out of my computer.  At any rate, it was impossible to trigger MIDI notes using the wobble box, which meant the melody had to come from an external source.
  • The box only has one axis to play with.  At the time of the critique, the wobble box only had one working distance sensor attached to the Teensy, which meant I could only control one parameter at a time with my hand.  Many spectators commented that it seemed logical to have at least two, allowing me to get more sounds out of various hand motions, or even using two hands at once.
  • The box doesn’t look any particular way, and isn’t built particularly well.  The wobble box was much bigger than it needed to be to fit the parts inside it, and little to no thought went into the design and placement of the sensors.  It was sometimes difficult to know exactly when it was working or not, and some of the connections weren’t very stable.  Furthermore, the mini-USB plug on the side of the device sometimes moved around when you tried to plug in the cord.

In the interest of addressing the concerns above, I completely redesigned the wobble box, abandoning the old prototype for a new model.

IMG_1636

The most obviously improved element of the new box is the design.  Now that I knew exactly which electronic parts were necessary, I removed all the extra space in the box.  The new design saves about three square inches of space, and the holes cut for the distance sensors are much neater.

IMG_1643

I applied three layers of surface treatment: a green primer, a metallic overcoat, and a clear glaze.  The result is a luminescent coloring and a rubber-like texture that prevents the box from sliding around when placed on a wooden surface.  In my opinion, it looks nice.

IMG_1639 IMG_1645

A strong LED was placed exactly between the two distance sensors, illuminating the ideal place for the user to put his/her hand.  This also provides a clue for the audience, making it clearer what the box does by lighting up the user’s hand.  The effect can be rather eerie in dark rooms.  Perhaps most importantly, the LED indicates that the Teensy micro-controller has been recognized by Max, a feature the last prototype lacked.  This saved me many headaches the second time around.

IMG_1640

IMG_1644

The new box has two new distance sensors with differing ranges.  One reports very fine values between roughly 2 and 10 inches; the other reports coarser values between roughly 4 and 18 inches.  Staggering the ranges like this opens up a whole new world of control for the user, such as tilting the hand from front to back or using two hands with complete independence.
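The post does not include the firmware, but the sensing side of this setup is simple. Below is a hypothetical Teensy sketch, assuming analog-output distance sensors; the pin assignments and the plain-text serial format are my assumptions, not Jake's actual code.

```cpp
// Hypothetical Teensy sketch: read two analog distance sensors with different
// ranges and stream their values over USB serial for Max to parse.
// Pin numbers and serial format are assumptions for illustration.

const int NEAR_SENSOR_PIN = A0;  // fine-grained sensor (~2-10 in)
const int FAR_SENSOR_PIN  = A1;  // coarse sensor (~4-18 in)
const int LED_PIN         = 13;  // the LED mounted between the sensors

void setup() {
  Serial.begin(115200);
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(LED_PIN, HIGH);   // lit once the board is running, so the user knows it's alive
}

void loop() {
  int nearRaw = analogRead(NEAR_SENSOR_PIN);  // 0-1023
  int farRaw  = analogRead(FAR_SENSOR_PIN);

  // Send both readings as one line, "near far\n", which is easy to split in Max.
  Serial.print(nearRaw);
  Serial.print(" ");
  Serial.println(farRaw);

  delay(10);  // ~100 Hz is plenty for controlling audio effect parameters
}
```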

IMG_1642

Finally, I moved the entire USB connection to the interior of the device, instead creating a simple hole for the cord to exit.  After securing the Teensy within the box, the connection was much more stable than in the previous prototype.

In addition to fixing the hardware, I created a few new software environments between Max and Ableton that allow for more expressive use of the box.  The first environment used both Max and Ableton Live to create an interactive art piece: as the user activated the two distance sensors, video captured by the laptop camera was distorted along with an audio track of the user talking into the computer microphone.  Moving forward, my goal was to make the box usable as a true instrument by providing a way to trigger pitches using only the box and a computer.  To achieve this, I wrote a Max for Live patch that pairs a note step-sequencer with a microphone.  Every time the volume of the signal picked up by the microphone exceeds a certain threshold, the melody advances by one step.  Using this, the user can simply snap or clap to progress the melody while using the box to control the timbre of the sound.  I then randomized the melody so that it selects random notes from specific scales, so as to allow for improvisation.  The final software environment I wrote, shown below, allows the user to trigger notes using a MIDI keyboard and affect the sounds in a variety of ways using the box.  To show how this method can be combined with any hardware the user desires, I create a few sounds on an APC40 that I then manipulate with the box.
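The actual implementation is a Max for Live patch, but the clap-to-advance logic is easy to spell out in code. Here is a minimal C++ sketch of that logic only; the threshold, scale, and debounce values are made up for illustration and are not from the patch itself.

```cpp
// Sketch of the threshold-triggered melody stepper: when the mic level crosses
// a threshold, pick a random note from a scale and advance one step.
#include <cstdlib>
#include <vector>

struct MelodyStepper {
  std::vector<int> scale = {60, 62, 64, 67, 69, 72};  // C major pentatonic, MIDI notes (assumed)
  float threshold = 0.3f;      // normalized input level that counts as a clap/snap
  int debounceSamples = 4410;  // ignore re-triggers for ~100 ms at 44.1 kHz
  int samplesSinceTrigger = 0;
  bool wasAbove = false;

  // Call once per audio sample with the mic level; returns a MIDI note to
  // trigger, or -1 if nothing happens on this sample.
  int process(float micLevel) {
    samplesSinceTrigger++;
    bool isAbove = micLevel > threshold;
    int note = -1;
    if (isAbove && !wasAbove && samplesSinceTrigger > debounceSamples) {
      note = scale[std::rand() % scale.size()];  // randomized step within the scale
      samplesSinceTrigger = 0;
    }
    wasAbove = isAbove;
    return note;
  }
};
```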

Final Project “ImSound”: Ding

Audio,Final Project,OpenCV — Ding Xu @ 9:31 pm

ImSound: Record/Find your sounds in images

We run into many sounds in our lives, and we often naturally associate certain colors with them. We may even form memories of our city or living environment through particular sounds and colors. For me, fast, happy music evokes a dark red, while soft music feels green or blue. Different people associate different colors with different sounds. ImSound is therefore a device that encourages people to collect the unremarkable sounds around them, all kinds of noise for example, convert them to colors based on their own associations, and then play back a mix of similar sounds whenever they encounter a new image. The process goes from sound to image and then back to sound.

For the user, the device can turn useless or even annoying sounds into interesting, playful ones and reveal new information in them. For everyone else, the device acts like a business card of that user’s particular way of hearing the world, one that can be shared with others.

Hardware improvement:

Based on the feedback from the last critique, people were not aware of what they were focusing on when capturing an image. In the final prototype, I therefore attached a camera and a mic to a magnifier, making a portable capture device that lets people aim at both the sound and the image they want to capture, with the metaphor of finding sounds in our lives. Instead of several buttons to control recording and image capture, a single push button in the magnifier’s handle triggers a photo and then automatically records three seconds of sound.

Final1

Final3

Final2

Software improvement:

Instead of using the whole histogram of each image, I convert the image from RGB to HSV and build a histogram over the H channel with 12 bins (this number is adjustable). In other words, images are divided into 12 clusters based on their dominant color. Each image is classified, and its recorded sound is appended to the corresponding track, building up a sound library for that color; every color cluster ends up with its own soundtrack. A granular analysis then divides that sound into small grains and remixes them into a new sound for the class. When switching to play mode, the H histogram of the new image is computed and the corresponding sound is played.
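For reference, here is a minimal OpenCV sketch of the color-clustering step described above. The 12-bin count matches the post; the function name, variable names, and the exact pipeline details are my assumptions, not the project's code.

```cpp
// Convert to HSV, build a 12-bin histogram over the hue channel, and pick the
// dominant bin as the image's color class.
#include <opencv2/opencv.hpp>

int dominantHueCluster(const cv::Mat& bgrImage, int numBins = 12) {
  cv::Mat hsv;
  cv::cvtColor(bgrImage, hsv, cv::COLOR_BGR2HSV);

  // Histogram over the H channel only (OpenCV hue range is 0-179).
  int channels[] = {0};
  int histSize[] = {numBins};
  float hueRange[] = {0, 180};
  const float* ranges[] = {hueRange};
  cv::Mat hist;
  cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);

  // The bin with the largest count is the image's color cluster;
  // the recorded sound would be appended to that cluster's track.
  cv::Point maxLoc;
  cv::minMaxLoc(hist, nullptr, nullptr, nullptr, &maxLoc);
  return maxLoc.y;  // cluster index in [0, numBins)
}
```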

I used the openFrameworks addon ofxMaxim with FFT processing for the granular analysis, but the resulting sound is not very good: the playback speed changes, but similar grains are not really connected together. This is the main aspect to improve in the next step of this project.
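As a point of comparison, here is a plain C++ sketch of a simple time-domain granular remix (Hann-windowed grains overlap-added from random source positions). This is not the ofxMaxim/FFT code used in the project; grain size, hop, and gain are guesses.

```cpp
// Time-domain granular remix sketch: cut the recorded buffer into short
// windowed grains and overlap-add them at random source positions.
#include <cmath>
#include <cstdlib>
#include <vector>

std::vector<float> granularRemix(const std::vector<float>& input,
                                 int grainSize = 2048, int hop = 512) {
  std::vector<float> output(input.size(), 0.0f);
  if (input.size() < static_cast<size_t>(grainSize)) return output;

  for (size_t writePos = 0; writePos + grainSize < output.size(); writePos += hop) {
    // Pick a random grain start anywhere in the source material.
    size_t readPos = std::rand() % (input.size() - grainSize);
    for (int i = 0; i < grainSize; ++i) {
      float hann = 0.5f * (1.0f - std::cos(2.0f * 3.14159265f * i / (grainSize - 1)));
      output[writePos + i] += input[readPos + i] * hann * 0.5f;  // 0.5 to tame overlap gain
    }
  }
  return output;
}
```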

 Demo Video:

Future Plan:

1. Most importantly, do a more in-depth granular analysis to remix the sounds. My current idea is to combine grains according to their similarity to one another. The fun part is that as the number of recorded sounds grows, the output keeps changing dynamically and forms new sounds.

2. Capture more real images and sounds to test the whole process. Aiming at a specific type of sound, such as city noise, may be a good choice.

Conclusion/Acknowledgements:

Although this project is far from complete, I learned a lot in the process: not only technologies such as the RPi, openFrameworks, and Linux, but, more importantly, input/output design, mapping, and storytelling (a point I did not do well). It taught me to think about why we should design a device, and inspired me to consider who will use a device and where, in my future projects. Many thanks to Ali Momeni for his suggestions and all our conversations throughout the project, and to all the reviewers and classmates who helped me improve my ideas and project.

 

The Spatianator: Final Presentation – Robert Kotcher & M. Haris Usmani

Assignment,Final Project — rkotcher @ 8:32 pm

Final Project: Spatianator – Robert Kotcher & M. Haris Usmani

Assignment,Final Project — rkotcher @ 8:15 pm

The Spatianator – Description

The Spatianator is a network of (currently) three semi-autonomous robots called crickets that, along with input from human “performers”, collaboratively explore and enhance the behavior of a space. The Spatianator performs a probabilistic composition that is managed by a central controller, which supervises the actions of the crickets through a state machine where each cricket is in one of three possible states at any given time:

  1. Recording mode
  2. Playback mode
  3. Perform mode

The performers (anyone present in the space with the Spatianator) are encouraged to interact with the crickets. The crickets record sounds that are occurring in the room and pass them around to one another, to the point where the room’s filter (behavior in the presence of excitation) becomes exaggerated. The composition aims to enhance the performer’s experience of resonant properties of the space they are in. Below is an example of some of the sounds created during the CFA exhibition last week.
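The central controller's supervision can be pictured as a small per-cricket state machine. Below is a toy C++ sketch of that bookkeeping; the transition probabilities, structure, and names are illustrative stand-ins for the "probabilistic composition" described above, not the actual Spatianator code.

```cpp
// Toy sketch of the supervisor: each cricket is in one of three states, and
// the controller advances them probabilistically on each step.
#include <cstdlib>
#include <vector>

enum class CricketState { Recording, Playback, Perform };

struct Controller {
  std::vector<CricketState> crickets =
      std::vector<CricketState>(3, CricketState::Recording);

  void step() {
    for (auto& state : crickets) {
      float r = static_cast<float>(std::rand()) / RAND_MAX;
      switch (state) {
        case CricketState::Recording:
          // After recording, usually hand the material off for playback.
          if (r < 0.7f) state = CricketState::Playback;
          break;
        case CricketState::Playback:
          if (r < 0.3f) state = CricketState::Perform;
          else if (r < 0.5f) state = CricketState::Recording;
          break;
        case CricketState::Perform:
          if (r < 0.6f) state = CricketState::Recording;
          break;
      }
    }
  }
};
```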

The Spatianator – Acknowledgements

We would like to specially thank Ali Momeni for his guidance over the course of the semester, and for the privilege of being able to work in the ArtFab.

Final Project: Ziyun Peng

Assignment,Final Project,Uncategorized — ziyunpeng @ 4:16 pm

 

FACE  YOGA

FACE_YOGA

 

 

Idea

There’s something interesting about the unattractiveness one goes through in the pursuit of beauty. You put on a facial mask to moisturize and tone your skin, while it makes you look like a ghost. You do face yoga exercises to get rid of certain lines on your face, but in the meantime you have to make many awkward faces that you definitely wouldn’t want others to see. The Face Yoga Game aims to amplify the funniness and the paradox of beauty by making a game out of one’s face.

 

Face Yoga Game from kaikai on Vimeo.

Setup

schematic

 

setup

Learnings

– Machine learning tools: Gesture Follower by IRCAM

This is a very handy tool and very easy to use. There are more features worth digging into and playing with in the future, such as channel weighting and expected speed. I’m glad I got to apply some basics of machine learning in this project, and I’m certain it will be helpful for my future projects too.

– Conductive fabrics

This is another thing I’ve been interested in but never had an excuse to play with. The disappointment in this project is that I had to wet the fabric every time I wanted to use it, though that might be specific to the myoelectric sensor I was using. The performance was also not as good as with the medical electrodes, possibly because of the contact surface, and since the fabric is non-sticky, it moves around while you’re using it.

Obstacles & Potential Improvements

– Unstable performance with the sensors

Although part of this project was to experiment with detecting facial movements without computer vision, given that the performance wasn’t as good as expected, a combination of both might be the better solution going forward. One alternative I’ve been imagining is using a transparent mask instead of the current one, so that people can see their facial expressions through it, and sticking colored marker points on it for computer vision to track. Better lighting would be required, but the vanity lights still work for this setup.

– User experience and calibration

My ultimate goal is to get everyone involved in the fun; however, opening the game up to all players means the gestures I trained on myself beforehand may not work for everyone, which was proven on the show day. It was suggested that I run a calibration at the start of each game, which I think is a very good idea.
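A per-player calibration pass could be as simple as recording each sensor channel's range for a few seconds and normalizing live readings against it. The hypothetical C++ sketch below assumes the myoelectric signal arrives as a float per channel; everything here is an illustration of the suggested calibration, not the project's code.

```cpp
// Hypothetical per-player calibration: observe the range of each channel
// during a short calibration window, then normalize gameplay readings to 0-1
// before handing them to the gesture follower.
#include <algorithm>

struct ChannelCalibration {
  float minSeen = 1e9f, maxSeen = -1e9f;

  void observe(float raw) {                 // call during the calibration window
    minSeen = std::min(minSeen, raw);
    maxSeen = std::max(maxSeen, raw);
  }

  float normalize(float raw) const {        // call during gameplay
    if (maxSeen <= minSeen) return 0.0f;    // no range captured yet
    float v = (raw - minSeen) / (maxSeen - minSeen);
    return std::max(0.0f, std::min(1.0f, v));
  }
};
```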

– Vanity light bar

 

 

Final Project “TAPO”: Liang

TAPO: Speak Rhythms Everywhere

Idea Evolution:

This project comes from the original idea that people can make rhythms through the resonant properties and materials of cups by interacting with them. However, as the project progressed, it became more interesting and more appropriate for people to input rhythms by speaking rather than by making gestures on cups. The context also extended from cups to any surface, since every object has its own resonant properties and material. So the final design and function of TAPO changed significantly from the very raw initial idea. The new story is:

“Physical objects have resonant properties and specific materials. Tapping an object gives different sound feedback and a different percussion experience. People are used to making rhythms by beating objects. So why not provide a tangible way that not only lets people make rhythms with the physical objects around them, but also enriches the experience with computational methods? The ultimate goal of this project is for ordinary people to make and play rhythms with everyday objects, even to give a piece of percussion performance.”

Design & Key Features:

TAPO is an autonomous device that generates rhythms according to people’s input (speech, tapping, making noise). TAPO can be placed on different surfaces, such as a desk, paper, the ground, a wall, or a window. Depending on the material and the object’s resonant properties, it creates sounds of different qualities. People’s input provides the pattern of the rhythm.

System diagram

a) Voice, noise, oral rhythms, beats, kicks, knocks, and other oral expressions can be the user input

b) A photoresistor is used to trigger recording

c) The accelerometer was removed; an LED was added to indicate the state of recording and rhythm playback

Hardware

It is composed of several hardware components: a solenoid, an electret microphone, a transistor, a step-up voltage regulator, a Trinket board, a color LED, a photocell, a switch, and a battery.
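To make the interaction loop concrete, here is a simplified Arduino-style sketch of one plausible TAPO cycle: covering the photocell starts a short recording window, mic onsets are timestamped, and the solenoid then replays that tap pattern on the surface. The pins, thresholds, and timings are assumptions, not the actual Trinket firmware.

```cpp
// Simplified sketch of the TAPO loop: photocell triggers recording, the
// electret mic provides onsets, and the solenoid replays the rhythm.
const int PHOTOCELL_PIN = A1;   // light level; a hand over it starts recording
const int MIC_PIN       = A2;   // electret mic through an amplifier
const int SOLENOID_PIN  = 0;    // drives the transistor + solenoid
const int LED_PIN       = 1;    // recording / playback indicator

const int MAX_TAPS = 32;
unsigned long tapTimes[MAX_TAPS];  // onset times relative to recording start
int tapCount = 0;

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  if (analogRead(PHOTOCELL_PIN) < 200) {        // photocell covered: record
    digitalWrite(LED_PIN, HIGH);
    tapCount = 0;
    unsigned long start = millis();
    while (millis() - start < 3000 && tapCount < MAX_TAPS) {
      if (analogRead(MIC_PIN) > 600) {          // crude onset detection
        tapTimes[tapCount++] = millis() - start;
        delay(80);                              // debounce so one tap = one onset
      }
    }
    digitalWrite(LED_PIN, LOW);

    // Playback: fire the solenoid with the same relative timing.
    unsigned long playStart = millis();
    for (int i = 0; i < tapCount; i++) {
      while (millis() - playStart < tapTimes[i]) { /* wait for the next onset */ }
      digitalWrite(SOLENOID_PIN, HIGH);
      delay(20);                                // short pulse so the coil doesn't overheat
      digitalWrite(SOLENOID_PIN, LOW);
    }
  }
}
```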

photo1

 

photo2

 

Fabrication

I used a 3D-printed enclosure to package all the parts together. The different-sized holes on the bottom serve different purposes: people can mount a hook or a suction cup, and with these extra attachments the device can be placed on almost any surface. The other big hole lets the solenoid beat the surface. The two holes on the top expose the microphone and the LED, and on each side there is a hole for the photoresistor and the switch.

photo3 photo4

TAPO finally looks like this:

photo6 photo5 photo7 photo8 photo9

Demonstration:

Final introduction video:

Conclusion & Future Work:

This project gave me a lot more than technology. I learned how to design and develop something from a very raw idea while continually thinking about its value, target users, and possible scenarios in a quick, iterative process. I really enjoyed the critique sessions, even though they were tough and sometimes disappointing. The constructive suggestions were always right and pushed me toward a higher level and a more correct direction. Through these conversations I recognized my problems with motivation, design, and storytelling. Fortunately, the project became much more reasonable as it moved from design thinking to demonstrating its value, and I felt better whenever something more valuable and sensible came to mind. It also taught me the importance of demonstrating my work when it is hard to describe and explain. At the public show on Dec. 6th, I found that people wanted to play with TAPO and try different inputs; they were curious about what kind of rhythm TAPO could generate. In the following weeks, I will refine the hardware design and enrich the output (adding some control and digital outputs).

Acknowledgements:

I would like to thank Ali Momeni very much for his advice and support on technology and idea development, and all the guest reviewers who gave me many constructive suggestions.

Final Project: Drawable Stompbox – Haochuan Liu

Assignment,Audio,Final Project — haochuan @ 12:32 pm

Drawable stompbox

Drawable Stompbox offers a more interesting and interactive way for guitarists to explore the variety of parameters in the world of guitar effects. With this instrument, you select the guitar effect you want, then draw the effect’s parameters with your finger. Much like a time-domain or frequency-domain diagram, the instrument maps what you’ve drawn to a specific set of numbers representing amplitude and frequency information, which changes the parameters of a pre-written guitar effect. It’s a lot of fun trying to figure out the relationship between your drawing and the sound you hear.

Screenshot and video demo

Here is the screenshot of the Drawable Stompbox running in my iPad:

ipad

When you are drawing on the iPad, you cannot see the lines or patterns. I made the canvas always blank so that people hear what they have drawn instead of seeing it.

Here is a video demo:

drawable stompbox final video from Haochuan Liu on Vimeo.

Previous version

The project changed significantly along the way. At the beginning, Drawable Stompbox was essentially a selector for guitar effects: after you wrote the names of effects on a piece of paper, a webcam mounted above the paper captured what you had written into software written in openFrameworks. The software analyzed the words using optical character recognition (OCR). When you wrote the right words, the software told Pure Data over OSC to turn on the corresponding effect, and you would finally hear what you had written when you played your guitar.

Technical Details

Here is the diagram of Drawable Stompbox:

 Screenshot 2013-12-09 11.35.57

Buttons and coordinates

I use very simple functions in openFrameworks to draw the buttons and to get the x/y coordinates as a finger moves across the iPad’s screen.
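For context, this is roughly what that pattern looks like in an openFrameworks iOS app: rectangles for the effect buttons and the touch callbacks collecting finger positions. The member names and layout here are illustrative, not the project's actual code.

```cpp
// Minimal openFrameworks (iOS) sketch: effect-select buttons drawn as
// rectangles, finger positions collected from the touch callbacks.
#include "ofxiOS.h"

class ofApp : public ofxiOSApp {
public:
    std::vector<ofRectangle> buttons;
    std::vector<ofPoint> stroke;   // the current drawing, as x/y samples

    void setup() override {
        for (int i = 0; i < 4; i++)                       // four effect buttons (assumed)
            buttons.push_back(ofRectangle(20 + i * 120, 20, 100, 60));
    }

    void draw() override {
        for (auto& b : buttons) ofDrawRectangle(b);
        // The stroke itself is intentionally not drawn: the canvas stays blank.
    }

    void touchDown(ofTouchEventArgs& touch) override {
        stroke.clear();
        stroke.push_back(ofPoint(touch.x, touch.y));
    }

    void touchMoved(ofTouchEventArgs& touch) override {
        stroke.push_back(ofPoint(touch.x, touch.y));      // raw x/y coordinates
    }
};
```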

Mapping

The blue axes, which are invisible in the actual software, represent amplitude (x coordinate) and time (y coordinate).

IMG_0007

When you draw something on the canvas, the peak will determine the volume of the sound you will hear. The length of your drawing will determine the frequency parameter.

IMG_0008

IMG_0009

IMG_0010

Communication

The software on the iPad uses OSC to communicate with Pure Data running on the laptop, so Pure Data always knows which effect is selected as well as the amplitude and frequency values.
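A sketch of what that link could look like with the ofxOsc addon is below. The OSC addresses, host IP, and port are placeholders; Pure Data would listen on the matching UDP port.

```cpp
// Sketch of the iPad-to-laptop OSC link using ofxOsc. Addresses and port are
// assumptions, not the project's actual message layout.
#include "ofxOsc.h"

ofxOscSender sender;

void setupOsc() {
    sender.setup("192.168.1.10", 9000);    // laptop running Pure Data (assumed address)
}

void sendEffectSelection(int effectIndex) {
    ofxOscMessage m;
    m.setAddress("/stompbox/effect");
    m.addIntArg(effectIndex);
    sender.sendMessage(m, false);
}

void sendDrawingParams(float amplitude, float frequency) {
    ofxOscMessage m;
    m.setAddress("/stompbox/params");
    m.addFloatArg(amplitude);              // from the peak of the drawing
    m.addFloatArg(frequency);              // from the length of the drawing
    sender.sendMessage(m, false);
}
```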

Future Work

Currently, when you play guitar with the Drawable Stompbox you still need a partner to draw on the iPad’s canvas to change the effect parameters, so for now it is just a prototype, a toy for practice rather than performance. One improvement would be to switch from drawing with a finger to drawing with a foot, so you could play the guitar and draw the effect parameters at the same time.

Final Project: Through the Lens – Patra Virasathienpornkul

Assignment,Final Project — Patt @ 3:01 am

Through the Lens

Through the Lens is a hybrid instrument that involves a piece of paper, a pen, and an OLED transparent display. My goal for this project is to understand the possibilities and the limitations of the device, and to come up with applications that are interesting, educational, and entertaining.

Bouncing Ball from Patt Vira

Steps: 

  • Draw inside the pre-calibrated section on a piece of paper (that is placed on top of a Wacom tablet) using a Wacom Inkling Sketch Pen.
  • Place the OLED transparent display on top of the paper.
  • Watch the graphics on the display interact with the drawings (a sketch of this interaction follows below).
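In the Bouncing Ball demo, the drawn strokes act as walls for the virtual ball on the display. The core of that interaction is reflecting the ball's velocity about a stroke's normal; the minimal C++ sketch below shows only that geometry, with made-up types, and is not the project's display code.

```cpp
// Reflect a ball's velocity off a drawn line segment: r = v - 2 (v . n) n
#include <cmath>

struct Vec2 { float x, y; };

Vec2 reflect(Vec2 velocity, Vec2 strokeStart, Vec2 strokeEnd) {
    // Unit normal of the drawn segment.
    float dx = strokeEnd.x - strokeStart.x;
    float dy = strokeEnd.y - strokeStart.y;
    float len = std::sqrt(dx * dx + dy * dy);
    Vec2 n = {-dy / len, dx / len};

    float dot = velocity.x * n.x + velocity.y * n.y;
    return {velocity.x - 2.0f * dot * n.x, velocity.y - 2.0f * dot * n.y};
}
```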

What did I learn? 

From this project, I realized that I spent the majority of my time trying to understand the basic use of the transparent display and how to get all the technology to work properly. Even though I wish I could have created more applications and presented my project beyond a proof of concept, I am now at a comfortable point where I can use the knowledge I have to create interesting applications based on my own imagination and the feedback I received from outside perspectives. The comments I received during both the final presentation and the show are invaluable. One important point I took away is that no one cares about the technology; what matters is what you do with it.

How can the project be improved? 

The 4D Systems transparent display has a lot of potential, and I believe I have only explored a small fraction of the possibilities. The feedback I received during the final presentation and the show was very helpful and widened the scope of project ideas I can pursue with the knowledge I currently have. Here are the two directions I would like to explore further.

1) Increase the area on a piece of paper to allow a bigger space for people to draw.

2) Use the display as a lens (think Google Glass)

I’d also like to get rid of the graphics tablet and make the display portable by exploring alternative ways of acquiring the pen strokes.

Acknowledgement: 

  • This project is inspired by Glassified by the Fluid Interfaces Group at the MIT Media Lab.
  • Special thanks to  Ali Momeni and Anirudh Sharma.
  • Thanks to Golan Levin and the Frank-Ratchye STUDIO for Creative Inquiry for the grant that allowed me to purchase the 4D Systems OLED transparent display.

Final Project: Jake Marsico

Final Project,Submission,Uncategorized — jmarsico @ 11:45 pm

The final deliverable of these two instruments (video portrait register and reactive video sequencer) was a series of two installations on the CMU campus.

 Learnings:

The version shown in both installations had major flaws.  The installation was meant to show a range of clips that varied in emotion and flowed seamlessly together. Because I shot the footage before completing the software, it wasn’t clear exactly what I needed from the actor (exact length of each clip, precision of face registration, number of clips for each emotion).  After finishing the playback software, it became clear that the footage on hand didn’t work as well as it could.  Most importantly, the majority of the clips lasted for more than 9 seconds. In order to really nail the fluid transitions, I had to play each clip forward and then in reverse, so as to ensure each clip finished in the same position in which it started. To do that with each 9-second clip would have meant that each clip lasted a total of 18 seconds (9 forward, 9 backward). These 18-second clips would eliminate any responsiveness to the movements of viewers.

As a result, I chose to only use the first quarter of each clip and play that forward and back. Although this made the program more responsive to viewers, it cut off the majority of the subject’s motions and emotions, rendering the entire piece almost emotionless.
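For illustration, here is an openFrameworks snippet of the forward-then-reverse ("ping-pong") playback described above, stepping frames manually with ofVideoPlayer so a clip always ends on the frame where it started, using only the first quarter of the clip as in the installation. The class and approach are my sketch, not the installation's code.

```cpp
// Ping-pong playback of the first quarter of a clip using ofVideoPlayer.
#include "ofMain.h"

class PalindromeClip {
public:
    ofVideoPlayer player;
    int frame = 0;
    int direction = 1;      // +1 forward, -1 backward
    int lastUsableFrame;    // e.g. first quarter of the clip

    void setup(const std::string& path) {
        player.load(path);
        player.setPaused(true);                       // we drive frames ourselves
        lastUsableFrame = player.getTotalNumFrames() / 4;
    }

    void update() {
        frame += direction;
        if (frame >= lastUsableFrame) { frame = lastUsableFrame; direction = -1; }
        if (frame <= 0)               { frame = 0;               direction = +1; }
        player.setFrame(frame);
        player.update();
    }

    void draw(float x, float y) { player.draw(x, y); }
};
```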

Another major flaw is that the transitions between clips were very noticeable as a result of imperfect face registration. In hindsight, it would require an actor or actress with extreme dedication and patience to perfectly register their face at the beginning of each clip. It might also require some sort of physical body-registration hardware. A guest critic suggested that a better solution might be to pair the current face-registration tool with a face-tracking and frame re-alignment application in post-production.

If this piece were to be shown outside the classroom, I would want to re-shoot the video with a more explicit “script” and look into building a software face-aligning tool using existing face-tracking tools such as ofxFaceTracker for openFrameworks.

Code:

github.com/jmarsico/Woo/tree/master

 

Final Project: Note Cubes – Wanfang Diao

Assignment,Final Project,Submission — Wanfang Diao @ 3:45 pm

Idea

How do we learn about our physical world? How do we learn about light? About sound? How do we form the basic concepts of space and time?

As for me, I learned from experience: from stacking toy bricks and tearing them down, from tapping a stainless steel plate with a wooden spoon. I learn by doing, and by trial. Once I grasp the rules of the game, I begin to create.

In this project, I want to make musical notes more tangible and touchable, so they can be experienced in a more intuitive way. I aim to build a very straightforward mapping between “time/sound” and “space/light (or color)”, which not only gives children a sense of the structure of a melody but also gives them a way to create a piece of music.

Therefore, I designed Note Cubes, a set of tangible cubes for kids to explore sound, notes, and rhythm. By putting the cubes in a line or stacking them (just like playing with toy bricks), kids can let each cube trigger its neighbors with colorful LEDs to play notes, and after a few trials arrive at a piece of sound or melody.

Note Cubes from Wanfang Diao on Vimeo.

 

This project was shown at Assemble ( assemblepgh.org/ ) in Dec. 2013. Here is a video of how kids play with Note Cubes! What I learned from the show is that there should be more obvious signs of each cube’s trigger direction, and that more shapes could be explored.

 

Public Show for Note Cubes2 from Wanfang Diao on Vimeo.

About tech:
Each cube contains a microcontroller (Trinket), photosensors, a speaker, and LEDs. Under the microcontroller’s control, the speaker plays a note whenever the cube is triggered by the LED light of another cube.
The cube’s shell is made of laser-cut hardboard.
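Below is an Arduino-style sketch of one cube's behavior as described above: when the photosensor sees a neighbor's LED flash, the cube plays its note and flashes its own LED to pass the trigger along. The pins, light threshold, and note frequency are assumptions, and the tone is generated with a simple square-wave loop rather than the actual Trinket firmware's method.

```cpp
// Sketch of one Note Cube: photosensor trigger -> play note -> flash LED.
const int PHOTO_PIN   = 1;    // analog input watching the neighboring cube
const int SPEAKER_PIN = 0;
const int LED_PIN     = 2;
const int THRESHOLD   = 700;  // light level that counts as "neighbor flashed"
const int NOTE_HZ     = 440;  // this cube's pitch

void playNote(int freqHz, int durationMs) {
  long halfPeriodUs = 500000L / freqHz;
  long cycles = (long)durationMs * 1000 / (2 * halfPeriodUs);
  for (long i = 0; i < cycles; i++) {          // simple square wave on the speaker
    digitalWrite(SPEAKER_PIN, HIGH);
    delayMicroseconds(halfPeriodUs);
    digitalWrite(SPEAKER_PIN, LOW);
    delayMicroseconds(halfPeriodUs);
  }
}

void setup() {
  pinMode(SPEAKER_PIN, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  if (analogRead(PHOTO_PIN) > THRESHOLD) {     // neighbor's LED lit up
    digitalWrite(LED_PIN, HIGH);               // pass the trigger on to the next cube
    playNote(NOTE_HZ, 200);
    digitalWrite(LED_PIN, LOW);
    delay(100);                                // avoid immediate re-trigger
  }
}
```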
Acknowledgements
Thanks to Ali Momeni, Dale Clifford, Zack Jacobson-Weaver, Madeline Gannon, and my friends in CoDelab for their help!

 
