Hello and welcome to Building Hybrid Instruments.
Links to relevant resources for the course as well as student contributions and discussions will be posted on this blog.
Presenting my original “wobble box” to the class and Ali’s guests was a valuable experience. The criticisms I received were relatively consistent, and I have summarized them to the best of my ability below:
In the interest of addressing the concerns above, I completely redesigned the wobble box, abandoning the old prototype for a new model.
The most obviously improved element of the new box is the design. Now that I knew exactly which electronic parts were necessary, I removed all the extra space in the box. The new design saves about three square inches of space, and the holes cut for the distance sensors are much neater.
I applied three layers of surface treatment: a green primer, a metallic overcoat, and a clear glaze. The result is a luminescent coloring and a rubber-like texture that keeps the box from sliding around when placed on a wooden surface. In my opinion, it looks nice.
A strong LED light was placed exactly between the two distance sensors, illuminating the ideal place for the user to put his or her hand. This also provides a cue for the audience, making the functionality of the box clearer by lighting up the user's hand; the effect can be rather eerie in dark rooms. Perhaps most importantly, it indicates that the Teensy microcontroller has been recognized by Max, a feature the last prototype lacked. This saved me many headaches the second time around.
The new box has two new distance sensors with differing ranges. One reports fine-grained values between about 2 and 10 inches; the other reports coarser values between about 4 and 18 inches. Staggering the ranges like this opens up a whole new world of control for the user, such as tilting the hand from front to back, using two hands completely independently, and so on.
Finally, I moved the entire USB connection to the interior of the device, instead just cutting a hole for the cord to come out. After securing the Teensy inside the box, the connection is much stronger than it was in the previous prototype.
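For context on how the sensor data reaches the computer, here is a minimal Teensy-style sketch; it assumes analog distance sensors and made-up pin assignments, rather than reflecting the actual firmware:

```cpp
// Minimal Teensy-style sketch (an assumption, not the actual firmware):
// read two analog distance sensors and stream their values over USB serial
// so that Max can pick them up. Pin numbers are made up.
const int NEAR_SENSOR_PIN = A0;   // finer range, roughly 2-10 inches
const int FAR_SENSOR_PIN  = A1;   // coarser range, roughly 4-18 inches

void setup() {
  Serial.begin(9600);             // USB serial to the laptop running Max
}

void loop() {
  int nearValue = analogRead(NEAR_SENSOR_PIN);
  int farValue  = analogRead(FAR_SENSOR_PIN);

  // One line per reading pair, e.g. "512 301", which is easy to parse in Max
  Serial.print(nearValue);
  Serial.print(" ");
  Serial.println(farValue);

  delay(20);                      // about 50 readings per second
}
```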
In addition to fixing the hardware, I created a few new software environments between Max and Ableton that allow for more expressive use of the box.

The first environment used both Max and Ableton Live to create an interactive art piece: as the user stimulated the two distance sensors, a video captured by the laptop camera was distorted along with an audio track of the user talking into the computer microphone.

Moving forward, my goal was to extend the box into a true instrument by providing a way to trigger pitches using only the box and a computer. To achieve this, I wrote a Max for Live patch that pairs a note step-sequencer with the microphone: every time the volume of the signal picked up by the microphone exceeds a certain threshold, the melody advances by one step. Using this, the user can simply snap or clap to progress the melody while using the box to control the timbre of the sound. I then randomized the melody so that it selects random notes from specific scales, to allow for improvisation.

The final software environment I wrote, shown below, lets the user trigger notes with a MIDI keyboard and affect the sounds in a variety of ways using the box. To show how this method can be combined with whatever hardware the user prefers, I created a few sounds on an APC40 and then manipulated them with the box.
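To make the clap-triggered sequencer concrete: the patch itself is built visually in Max for Live, but a minimal C++ sketch of the equivalent threshold-stepping logic (the scale, threshold value, and names are illustrative assumptions) could look like this:

```cpp
// Illustrative sketch of the threshold-stepping logic described above
// (the actual patch is built in Max for Live; names and values are made up).
#include <cstdlib>
#include <vector>

struct StepSequencer {
    std::vector<int> scale = {60, 62, 64, 67, 69};   // MIDI notes of a scale
    float threshold = 0.3f;   // microphone level that counts as a clap/snap
    bool above = false;       // so one clap only advances one step

    // Call with the current microphone level; returns a note to play when
    // the melody advances, or -1 otherwise.
    int process(float micLevel) {
        if (micLevel > threshold && !above) {
            above = true;
            // pick a random note from the scale, allowing improvisation
            return scale[std::rand() % scale.size()];
        }
        if (micLevel < threshold) above = false;
        return -1;
    }
};
```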
We encounter many sounds in our lives, and we sometimes naturally associate certain colors with them. We may even form a memory of our city or living environment through some interesting sounds and colors. For me, fast, happy music evokes a dark red, while soft music feels green or blue. Different people have different feelings about different sounds. ImSound is therefore a device that encourages people to collect the useless sounds in their lives, all kinds of noise for example, convert them to colors based on their own understanding, and play back a similar mixed sound when they encounter a new image. The process goes from sound to image and then back to sound.
For the users themselves, this device may help them convert useless or even annoying sounds into interesting, playful sounds and find new information in them. For others, the device is like a business card of the user's particular understanding of the world's sounds, which can be shared.
Based on feedback from the last review, people were not aware of where to focus when capturing an image. So in the final prototype I attach a camera and a microphone to a magnifier, making a portable capture device that lets people focus on where the sound is and where they will capture an image, with a metaphor of finding sounds in our lives. Instead of several buttons to control recording and image capture, a single push button in the handle of the magnifier triggers a photo and then automatically records a 3-second sound.
Instead of using the full RGB histogram of each image, I convert the image from RGB to HSV and build a histogram of the H (hue) channel with 12 bins (the number is adjustable). In other words, images are divided into 12 clusters according to their dominant color. Each image is classified, and its recorded sound is added to the corresponding track, contributing to the library of that color; every color thus has a soundtrack built from the recordings that belong to its cluster. A granular analysis then divides the sounds into small grains and remixes them into a new sound for that class. In play mode, the H histogram of the new image is computed and the corresponding sound is played.
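As a rough illustration of this classification step (not the project's actual code), the dominant-hue bin of an image can be computed with OpenCV along these lines; the function name and 12-bin default are assumptions:

```cpp
// Minimal sketch: classify an image by its dominant hue using OpenCV,
// assuming a 12-bin hue histogram as described above.
#include <opencv2/opencv.hpp>

int dominantHueBin(const cv::Mat& bgrImage, int numBins = 12) {
    cv::Mat hsv;
    cv::cvtColor(bgrImage, hsv, cv::COLOR_BGR2HSV);   // RGB/BGR -> HSV

    // Histogram over the H channel only (OpenCV's hue range is 0..180)
    int channels[] = {0};
    int histSize[] = {numBins};
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);

    // The bin with the largest count is the image's color cluster
    cv::Point maxLoc;
    cv::minMaxLoc(hist, nullptr, nullptr, nullptr, &maxLoc);
    return maxLoc.y;   // cluster index in [0, numBins)
}
```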
I used the openFrameworks addon ofxMaxim with FFT processing for the granular analysis, but the resulting sound is not that good: the playback speed changes, but similar grains are not connected together. This is the main aspect I should improve in the next step of this project.
1. Most importantly, do a more in-depth granular analysis to remix the sounds. My current thought is to combine grains according to their similarity to one another (see the sketch after this list). The fun part is that, as the number of recorded sounds grows, the output sound keeps changing dynamically and forms new sounds.
2. Capture more real images and sounds to test the whole process. Aiming at a specific type of sound, such as city noise, may be a good choice.
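One possible direction for the similarity-based remixing mentioned in point 1, sketched here with deliberately simple features (RMS energy and zero-crossing rate) rather than the FFT analysis used so far; the names and the greedy ordering are assumptions, not a finished design:

```cpp
// One possible direction (not the current implementation): describe each
// grain with simple features and chain grains by nearest-neighbour similarity.
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

struct Grain { std::vector<float> samples; };

// Two cheap descriptors: RMS energy and zero-crossing rate.
static std::pair<float, float> describe(const Grain& g) {
    if (g.samples.empty()) return {0.0f, 0.0f};
    float energy = 0.0f; int crossings = 0;
    for (size_t i = 0; i < g.samples.size(); ++i) {
        energy += g.samples[i] * g.samples[i];
        if (i > 0 && (g.samples[i - 1] < 0) != (g.samples[i] < 0)) ++crossings;
    }
    return {std::sqrt(energy / g.samples.size()),
            (float)crossings / g.samples.size()};
}

// Greedy chain: always jump to the most similar unused grain.
std::vector<int> similarityOrder(const std::vector<Grain>& grains) {
    if (grains.empty()) return {};
    std::vector<std::pair<float, float>> feats;
    for (const auto& g : grains) feats.push_back(describe(g));

    std::vector<bool> used(grains.size(), false);
    std::vector<int> order;
    int current = 0; used[0] = true; order.push_back(0);
    for (size_t step = 1; step < grains.size(); ++step) {
        int best = -1; float bestDist = std::numeric_limits<float>::max();
        for (size_t j = 0; j < grains.size(); ++j) {
            if (used[j]) continue;
            float d = std::hypot(feats[current].first - feats[j].first,
                                 feats[current].second - feats[j].second);
            if (d < bestDist) { bestDist = d; best = (int)j; }
        }
        used[best] = true; order.push_back(best); current = best;
    }
    return order;   // concatenate grains in this order for the remix
}
```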
Although this project is far from complete, I learned a lot in the process, not only technologies such as the Raspberry Pi, openFrameworks, and Linux, but more importantly about input/output design, mapping, and storytelling (a point I did not do well). It taught me to ask why we should design a device, and it inspired me to think about whom a device is for and where it will be used in my future projects. Many thanks to Ali Momeni for his suggestions and all our conversations throughout this project, and to all the reviewers and classmates who helped me improve my ideas and my project.
The Spatianator – Description
The Spatianator is a network of (currently) three semi-autonomous robots called crickets that, along with input from human “performers”, collaboratively explore and enhance the behavior of a space. The Spatianator performs a probabilistic composition that is managed by a central controller, which supervises the actions of the crickets through a state machine in which each cricket is in one of three possible states at any given time.
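The three states themselves are not listed here, so the sketch below uses purely hypothetical state names; it only illustrates the idea of a central controller probabilistically advancing each cricket through a small state machine:

```cpp
// Illustrative sketch only: the state names (Listening, Playing, Resting)
// are hypothetical, not the Spatianator's actual states.
#include <cstdlib>
#include <vector>

enum class CricketState { Listening, Playing, Resting };

struct Cricket { CricketState state = CricketState::Resting; };

// Central controller: probabilistically advance each cricket's state.
void step(std::vector<Cricket>& crickets) {
    for (auto& c : crickets) {
        float r = (float)std::rand() / RAND_MAX;
        switch (c.state) {
            case CricketState::Resting:
                if (r < 0.3f) c.state = CricketState::Listening;  // start listening to the room
                break;
            case CricketState::Listening:
                if (r < 0.5f) c.state = CricketState::Playing;    // pass the recorded sound on
                break;
            case CricketState::Playing:
                c.state = CricketState::Resting;                  // rest after playback
                break;
        }
    }
}
```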
The performers (anyone present in the space with the Spatianator) are encouraged to interact with the crickets. The crickets record sounds occurring in the room and pass them around to one another, to the point where the room’s filter (its behavior in the presence of excitation) becomes exaggerated. The composition aims to enhance the performers’ experience of the resonant properties of the space they are in. Below is an example of some of the sounds created during the CFA exhibition last week.
The Spatianator – Acknowledgements
We would especially like to thank Ali Momeni for his guidance over the course of the semester, and for the privilege of being able to work in the ArtFab.
There’s something interesting about the unattractiveness one goes through on the path to beauty. You put on a facial mask to moisturize and tone your skin while it makes you look like a ghost. You do face yoga exercises to get rid of certain lines on your face, but in the meantime you have to make many awkward faces that you definitely wouldn’t want others to see. The Face Yoga Game aims to amplify the funniness and the paradox of beauty by making a game out of one’s face.
– Machine learning tools: Gesture Follower by IRCAM
This is a very handy tool and very easy to use. There are more features worth digging into and playing with in the future, such as channel weighting and expected speed. I’m glad that I got to apply some basics of machine learning in this project, and I’m certain it will be helpful for my future projects too.
– Conductive fabrics
This is another thing I’ve been interested in but never had an excuse to play with. The disappointment in this project is that I had to apply water to the fabric every time I wanted to use it, though that might be specific to the myoelectric sensor I was using. The performance was also not as good as with the medical electrodes, possibly because of the contact surface, and since it’s fabric (non-sticky), it moves around while you’re using it.
Obstacles & Potential Improvements
– Unstable performance with the sensors
Although part of this project was to experiment with detecting facial movements without computer vision, the performance was not as good as expected, so combining the two might be the better solution in the future. One alternative I’ve been imagining is a transparent mask instead of the current one, so that people can see their facial expressions through it, with colored marker points stuck on it for the computer vision to track. Better lighting would be required, but vanity lights would still work for this setting.
– User experience and calibration
My ultimate goal is to get everyone involved in the fun; however, opening the game to all players means that the gestures I trained on myself beforehand may not work for everyone, and this was borne out on the show day. It was suggested that I run a calibration at the start of every game, which I think is a very good idea.
– Vanity light bar
First I had to find a way to attach the glasses more securely to the spinners, so I designed a small screw-cap-lock mechanism to keep them in position.
I’ve reduced the number of pipes; now there is only one, used both for sucking and pumping.
Finally, my custom-made Arduino Due shields came back from fabrication, and I’m now cable-free.
This project comes from the original idea that people can make rhythms through the resonant properties and materials of cups by interacting with them. However, as the project progressed, it turned out to be more interesting and more appropriate for people to input rhythms by speaking than by making gestures on cups. It also extends the context from cups to any surface, since every object has its own resonance and material. So the final design and function of TAPO changed significantly from the very raw idea. The new story is:
“Physical objects have their own resonance and material. Tapping an object gives different sound feedback and a different percussion experience. People are used to making rhythms by striking objects. So why not provide a tangible way that not only lets people make rhythms with the physical objects around them, but also enriches the experience through computational methods? The ultimate goal of this project is that ordinary people can make and play rhythms with everyday objects, and even give a piece of percussion performance.”
TAPO is an autonomous device that generates rhythms according to people’s input (speech, tapping, making noise). TAPO can be placed on different surfaces, like a desk, paper, the ground, a wall, or a window… Depending on the material and the object’s resonance, it creates sounds of different qualities, while the person’s input provides the rhythm pattern.
a) Voice, noise, spoken rhythms, beats, kicks, knocks, and other vocal expressions… can all be user input
b) A photoresistor is used to trigger recording
c) The accelerometer is removed, and an LED indicates the state of recording and rhythm playback
It is composed of several hardware components: a solenoid, an electret microphone, a transistor, a step-up voltage regulator, a Trinket board, a color LED, a photocell, a switch, and a battery.
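A minimal Arduino-style sketch of how these parts could work together; the pin assignments, thresholds, and timings below are assumptions for illustration, not the actual Trinket firmware:

```cpp
// Minimal Arduino-style sketch (assumptions, not the project firmware):
// pin numbers and thresholds are made up for illustration.
const int MIC_PIN      = A1;   // electret microphone (analog)
const int PHOTO_PIN    = A2;   // photocell: covering it starts recording
const int SOLENOID_PIN = 0;    // transistor driving the solenoid
const int LED_PIN      = 1;    // status LED

const int MAX_HITS = 16;
unsigned long pattern[MAX_HITS];   // recorded onset times (ms)
int numHits = 0;

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // Covering the photocell (low light) enters record mode
  if (analogRead(PHOTO_PIN) < 200) {
    digitalWrite(LED_PIN, HIGH);             // LED on = recording
    numHits = 0;
    unsigned long recordStart = millis();
    while (millis() - recordStart < 3000 && numHits < MAX_HITS) {
      if (analogRead(MIC_PIN) > 600) {       // loud enough = one onset
        pattern[numHits++] = millis() - recordStart;
        delay(80);                           // crude debounce
      }
    }
    digitalWrite(LED_PIN, LOW);
  }

  // Play the recorded pattern back by tapping the surface with the solenoid
  unsigned long playStart = millis();
  for (int i = 0; i < numHits; i++) {
    while (millis() - playStart < pattern[i]) {}  // wait for this onset's time
    digitalWrite(SOLENOID_PIN, HIGH);
    delay(20);                                    // short tap
    digitalWrite(SOLENOID_PIN, LOW);
  }
}
```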
I used a 3D-printed enclosure to package all the parts together. The different-sized holes on the bottom serve different purposes: people can mount a hook or a suction cup, and with these extra fittings TAPO can be placed on almost any surface. The other large hole lets the solenoid strike the surface. The two holes on the top expose the microphone and the LED, and on each side there is a hole for the photoresistor and the switch.
TAPO finally looks like this:
Final introduction video:
This project gave me a lot more than technology. I learned how to design and develop something from a very raw idea while continually thinking about its value, target users, and possible scenarios in a quick, iterative process. I really enjoyed the critique sessions, even though they were tough and sometimes disappointing. The constructive suggestions were always right and pushed me toward a higher level and a better direction. Through these conversations I recognized my problems with motivation, design, and storytelling. Fortunately, the project became much more reasonable as it moved from design thinking to demonstrating its value, and I felt better whenever something more valuable and sensible came to mind. It also taught me the importance of demonstrating my work when it is hard to describe and explain. At the public show on Dec. 6th, I found that people wanted to play with TAPO and try different inputs; they were curious about what kind of rhythm TAPO would generate. In the following weeks, I will refine the hardware design and enrich the output (adding some control and digital outputs).
I would like to thank Ali Momeni very much for his advice and support on technology and idea development, and all the guest reviewers who gave me many constructive suggestions.
Drawable Stompbox offers a more interesting and interactive way for guitarists to explore the wide range of parameters in the world of guitar effects. With this instrument, you can select the guitar effect you want, then use a finger to draw its parameters. Much like a time-domain or frequency-domain diagram, the instrument maps what you’ve drawn to a set of numbers representing amplitude and frequency information, which in turn changes the parameters of a pre-written guitar effect. It is a lot of fun to figure out the relationship between your drawing and the sound you hear.
Here is the screenshot of the Drawable Stompbox running in my iPad:
When you draw on the iPad, you cannot see the lines or patterns. I kept the canvas blank on purpose, so that people hear what they have drawn instead of seeing it.
Here is a video demo:
The project changed a lot along the way. At the beginning, Drawable Stompbox was just a selector for guitar effects: after you wrote the name of an effect on a piece of paper, a webcam above the paper captured what you had written into software written in openFrameworks. The software analyzed the words using optical character recognition (OCR), and when you wrote the right words, it told Pure Data over OSC to turn on that specific effect, so you would finally hear what you had written when you played your guitar.
Here is the diagram of Drawable Stompbox:
Buttons and coordinates
I use very simple openFrameworks functions to draw the buttons and to get the x/y coordinates as a finger moves on the iPad screen.
The blue axes, which are invisible in the actual software, represent amplitude (x coordinate) and time (y coordinate).
When you draw something on the canvas, its peak determines the volume of the sound you will hear, and the length of the drawing determines the frequency parameter.
The software on the iPad uses OSC to communicate with Pure Data running on the laptop, so Pure Data always knows which effect is selected, along with the amplitude and frequency values.
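A minimal openFrameworks sketch of this flow, with an assumed OSC address, host, and port (the real app differs in its details):

```cpp
// Illustrative openFrameworks sketch (not the actual app): collect touch
// points and send derived amplitude/frequency values to Pure Data via OSC.
// The OSC address, host, and port below are assumptions.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {   // on iOS this would derive from ofxiOSApp
public:
    ofxOscSender sender;
    std::vector<ofVec2f> stroke;

    void setup() {
        sender.setup("192.168.1.10", 9000);   // laptop running Pure Data
    }

    void touchMoved(ofTouchEventArgs& touch) {
        stroke.push_back(ofVec2f(touch.x, touch.y));
    }

    void touchUp(ofTouchEventArgs& touch) {
        if (stroke.empty()) return;

        // Peak of the drawing -> volume, length of the drawing -> frequency
        float peak = 0, length = 0;
        for (size_t i = 0; i < stroke.size(); ++i) {
            peak = std::max(peak, stroke[i].y);
            if (i > 0) length += ofDist(stroke[i - 1].x, stroke[i - 1].y,
                                        stroke[i].x, stroke[i].y);
        }

        ofxOscMessage m;
        m.setAddress("/stompbox/params");     // hypothetical OSC address
        m.addFloatArg(peak);
        m.addFloatArg(length);
        sender.sendMessage(m, false);

        stroke.clear();
    }
};
```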
Currently, when you play the guitar with the Drawable Stompbox, you still need a partner to draw on the iPad canvas to change the effect parameters, so for now it is just a prototype, a toy to practice with rather than a performance tool. One improvement would be to switch from drawing with a finger to drawing with a foot, so that you can play the guitar and draw the effect parameters at the same time.