Group Project: Multi-Channel Sound System (part 1)

Audio,Hardware,Instrument,Reference,Software — rkotcher @ 4:23 pm

 

Introduction:
The multi-channel sound system group is implementing a spatial instrument that lets us interactively experience sound in space. The system includes software that controls a set of (currently) eight speakers positioned in space. The experiences will depend on the specific hardware setup and mechanics, which are still in the works. Our group has listed five possible hardware/software setups, and we are brainstorming the many experiences we could create with each. These ideas are listed in the section “Categories”.

 

Categories:
Each section corresponds to a specific hardware/software setup. For each, we include a few ideas that we have come up with so far.
  • A mobile disk that can be worn (as a hat, etc) – “Ambiance Capture Headset/Scenes from a Memory”

Every day, we move from place to place as our motivations drive us: home, road/car, office/school, library, park, cafeteria, bar, nightclub, a friend’s place, a quiet night in. We experience a different ambiance in each, and a change of environment is usually a good thing. It may soothe us, or trigger a certain personal mode (a work mode, a social mode, a party mode). What if we could capture this ambiance in a personalized way and recreate it around us whenever we want? Introducing the Ambiance Capture Headset. The headset carries a microphone array and records all the audio around you; it may catalog this audio using GPS data. You come home, connect the headset to your laptop and an 8-speaker circle, and after the audio is processed (extracting the ambiance only, using differences in amplitude, correlation in time, etc.), the system lets you choose the ambiance you want. You can quickly recall your day by sweeping through and re-experiencing where you’ve been.
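A rough sketch of the ambiance-extraction step, in Python: here the “ambiance” is approximated simply as the quiet, steady frames of the recording, dropping loud transient events. The frame length, percentile threshold, and function name are illustrative assumptions, not the group’s actual algorithm.

```python
import numpy as np

def extract_ambiance(signal, frame_len=1024, percentile=50):
    """Keep only the quiet frames of a recording as a rough 'ambiance'
    estimate; loud transient events (speech, clatter) are dropped.
    A much-simplified stand-in for the amplitude/correlation analysis
    described above."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))        # loudness per frame
    threshold = np.percentile(rms, percentile)       # keep the quieter half
    return frames[rms <= threshold].reshape(-1)
```

A real version would also compare channels of the microphone array (the correlation-in-time idea) rather than look at amplitude alone.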

  • Head-sized disk with speakers positioned evenly around the disk, facing inward – “Circle of Confusion”

In this section, our ideas tend to fall into two categories: using the setup either to confuse a listener’s perception of the world around them, or to enhance it in some way. In the first category, one idea is to amplify each sound from the direction 180 degrees opposite its actual source; in other words, the listener experiences a sonic environment that is essentially reversed from reality.
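The “reversed reality” idea can be sketched as rotating the channel gains by half the circle, assuming 8 speakers indexed 0..7 evenly around the listener (an assumption, since the final layout is still in the works):

```python
def reverse_field(channel_gains):
    """Rotate a channel-gain vector by 180 degrees (half the circle),
    so a sound arriving from the front is played back from behind.
    Assumes speakers are indexed 0..n-1 evenly around the circle."""
    n = len(channel_gains)
    half = n // 2
    return [channel_gains[(i + half) % n] for i in range(n)]
```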

  • Speakers hanging from the ceiling in arbitrary shapes – “EARS”

Sometimes you just need someone to listen to you, a few ears to hear you out: a secret, a desire, an idea, a confession. This is a setup that connects people and lets them express what they want. It’s a room you walk into with speakers suspended from the ceiling. You raise your hand toward one, and when that speaker senses you coming near, it descends to mouth level so you can talk or whisper into it (speakers can act as microphones as well, or we may attach a mic to each). You may tell different things to different speakers, and once you’ve said all you want, you hear what the speakers have heard before; which recordings play is chosen by the current positions of the speakers, since all of them begin to descend as you reach out. All voices are coded, as through a vocoder, to protect people’s identities. Having heard some more wishes, problems, inspirations, and hopes, you probably feel lighter than you did before.

  • A 3D setup (perhaps a globe-shaped setup) – “World Cut”

You step into an 8-speaker circle with a globe in front of you. You spin the globe and input a particular planar intersection of the world; this plane ‘cuts’ through a number of countries and locations. Each intersected location maps to a corresponding direction in the circle, so you hear music/voices/languages from the whole ‘cut’ at once in our 8-speaker circle. You can spin the globe and explore the world in the most peculiar of ways.
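A minimal sketch of the location-to-speaker mapping this would need, assuming 8 evenly spaced speakers with speaker 0 at 0 degrees (both assumptions, not part of the group’s spec):

```python
def speaker_for_longitude(lon_deg, n_speakers=8):
    """Map a longitude along the chosen great-circle 'cut' to the
    nearest of n_speakers evenly spaced around the listener.
    Speaker 0 is assumed to sit at 0 degrees."""
    sector = 360.0 / n_speakers
    return int(round((lon_deg % 360.0) / sector)) % n_speakers
```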

Current system setup
The diagram shows the current hardware setup. As we explain in the section below, it is subject to minor modifications.

 


 

Implementing a STABLE and ROBUST system for practical use:
The current hardware implementation is not yet practical: the wiring obstructs the experience, and the system itself is difficult to transport. The items in the “Categories” section describe completely new hardware setups that would fix this problem. Shielded speaker wires or PCBs may be part of a more stable design. Making the project more robust could include building acrylic enclosures for the speakers. Finally, we’re looking into using larger speakers to improve the experience.

 

Initial Group Members:
Sean Lee
David E. Lu
Jake Jae Wook Lee
M Haris Usmani

 

Current Group Members:
M Haris Usmani (Persistent Member)
Robert Kotcher
Haochuan Liu
Liang He
Meng Shi
Wanfang Diao
Jake Berntsen

Assignment 2 “Musical Painting” by Wanfang & Meng (2013)

Arduino,Assignment,Max,Sensors,Submission — meng shi @ 1:28 am


 

What:

This idea came from the translation between music and painting: when somebody draws a picture, the music changes at the same time. So it looks like you are drawing music. :)

How:

We use a photosensor to measure the light, read the analog input, then transform it into sound. I think the musical painting is a translation from the visible to the invisible, from seeing to listening.
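The light-to-sound mapping could be sketched like this, assuming a 10-bit Arduino analog reading and a MIDI-style pitch range (the range and function name are illustrative, not the ones used in the piece):

```python
def light_to_midi(reading, low_note=48, high_note=84):
    """Map a 10-bit analog light reading (0-1023, as from an Arduino
    photosensor) onto a MIDI note number in [low_note, high_note]."""
    reading = max(0, min(1023, reading))          # clamp to ADC range
    span = high_note - low_note
    return low_note + round(reading * span / 1023)
```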

Why:

It is fun to break the rules between different senses. On the question of the differences between noise, sound, and music: personally, I think sound is what people normally hear, noise is what may disturb people, and music is sound beyond people’s expectations. So as the environment and people’s ideas change, we can create more suitable music.

Assignment 2: “Comfort Noise” by Haochuan Liu & Ziyun Peng (2013)

Arduino,Assignment,Submission — ziyunpeng @ 10:40 pm


Idea

People who don’t usually pay attention to noise often take it for granted as disturbing sound, missing the musical part in it: the rhythm, the melody, and the harmonics. We hear it, and we want to translate and amplify the beauty of noise for people who haven’t noticed it.

Why pillow?

The pillow is a metaphor for comfort: this is what we aim for people to perceive when hearing noise through our instrument, contrary to the impression noise usually leaves.

When you place your head on a pillow, it’s almost like being in a semi-isolated space: your head is surrounded by cotton, and the visual signals are largely reduced since you’re now looking upward and there’s not much happening in the air. We believe that by minimizing the visual content, one’s hearing becomes more sensitive.

Make

We use computational tools (Pure Data & SPEAR) and our musical ears to extract the musical information in the noise, then map it to musical sounds (drum and synth) that people are familiar with.

The Pduino object (for reading an Arduino in Pd) and PonyoMixer (a multi-channel mixer) helped us a lot.


Inside the pillow there’s a PS2 joystick used to track the user’s head motions. It’s a five-direction joystick, but in this project we’re using just the left and right. We had a lot of fun making this.


Here’s the mix box we made for users to adjust the volume and the balance between the pure noise and the musical noise-extraction sounds.


The detailed technical setup is listed below:

  • Raspberry Pi – running Pure Data
  • Pure Data – reading sensor values from the Arduino and sending control messages to the sounds
  • Arduino Nano – connected to the sensors and the Raspberry Pi
  • Joystick – tracks head motion
  • Pots – mix and volume control
  • Switch – ON/OFF
  • LED – ON/OFF indicator
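The balance pot’s role can be sketched as an equal-power crossfade between the raw noise and its musical extraction. This Python sketch is a stand-in for the actual Pure Data patch; the function and parameter names are illustrative.

```python
import math

def mix(noise_sample, musical_sample, balance):
    """Equal-power crossfade between the pure noise and the musical
    extraction, as a balance pot would control it:
    balance = 0.0 -> all noise, balance = 1.0 -> all musical."""
    g_noise = math.cos(balance * math.pi / 2)
    g_music = math.sin(balance * math.pi / 2)
    return g_noise * noise_sample + g_music * musical_sample
```

Equal-power (cosine/sine) gains keep the perceived loudness roughly constant across the sweep, which a plain linear crossfade does not.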

 

 

Assignment 2 “Musical Painting” by Wanfang & Meng (2013)

Our idea begins from traditional Chinese painting, whose painted lines concisely express a feeling of strength and rhythm. Our work tries to transfer the beauty of painting to music. However, I found that making a harmonious “sound” from people’s casual input to 9 photosensors is not as easy as I thought. I tried some “chords”, like C, Am, D, and so on. It helps a little bit… The final result is in the video on Vimeo.

I know there is plenty of work left to improve the “algorithm”. Although it does not really sound like music yet, I think the value of the work is that we try to break the line between sound and light and see what happens.

Looking for a way to improve it, I observed several other interactive musical instrument works. I think playing some basic rhythmic pieces repeatedly, and changing only the chords as the interactive element, may be a way to improve this work.

The technologies we used are photosensors, Arduino, and Max.

assignment1 1 from Wanfang Diao on Vimeo.

Assignment 2: “DialToneMadness” by Mauricio Contreras (2013)

Arduino,Assignment,Audio,Hardware,Sensors,Software,Submission — mauricio.contreras @ 9:38 pm

DialToneMadness is an instrument that generates different audio tones whose frequency and period of repetition can be altered by proximity. It is based on both the Android and Arduino development platforms. It uses an ultrasonic proximity sensor to measure the distance to the “triggering” object, whatever that may be. The sensor is triggered and read from within the Arduino, and the reading is then sent to the smartphone running Android. The latter reads this information and produces a tone based on it. In keeping with the aesthetic of the piece, “retro” DTMF tones were used despite the high-end smartphone. This gives the audio output an interesting turn, since these sounds contain multiple tones and are not simply sorted in a higher/lower pitch fashion. The smartphone responds with a single tone per message sent by the Arduino, so the period of the repetitions is controlled by the Arduino and is a linear function of the distance.
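The standard DTMF frequency pairs and a distance-to-period mapping might look like this in Python. The frequency table is the standard keypad layout; the period constants are illustrative assumptions, and only the “linear function of distance” comes from the piece.

```python
# Standard DTMF keypad: digit -> (low Hz, high Hz) frequency pair.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def repetition_period_ms(distance_cm, min_ms=100, max_ms=1000, max_cm=200):
    """Linear map from sensed distance to the period between tones:
    closer objects trigger faster repetitions. Constants are illustrative."""
    d = max(0, min(distance_cm, max_cm))
    return min_ms + (max_ms - min_ms) * d / max_cm
```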
 

 

Assignment 2: “Footprints” by Rob Kotcher & Spencer Barton (2013)

Assignment,Software,Submission — spencer barton @ 9:20 pm

youtu.be/yClIkoxqSas

As we traverse our world we leave a trail. These paths tell a story about where we came from and where we are headed. As much as these paths are of our own free will, they are also a product of the intentions of those around us. With Footprints we explore these individual paths and in turn manipulate them through subtle actions. Our intentions force the creation of an all-seeing eye of which only we, the controllers and observers, are aware.

Our camera watches from above, recording motion with a tracking algorithm while those below walk unaware. We record motion by looking at frame differences, which then translate to activity. As a space becomes more active, it goes from blue to red.
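The frame-differencing and blue-to-red mapping can be sketched with NumPy. The decay and gain constants here are illustrative, not the values in the repo.

```python
import numpy as np

def update_activity(activity, prev_frame, frame, decay=0.95, gain=0.05):
    """Accumulate per-pixel motion from frame differences, with a slow
    decay so old activity fades. Frames are grayscale arrays."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return np.clip(decay * activity + gain * diff, 0.0, 1.0)

def activity_to_rgb(activity):
    """Map activity in [0, 1] to a blue-to-red color per pixel."""
    r = activity
    b = 1.0 - activity
    g = np.zeros_like(activity)
    return np.stack([r, g, b], axis=-1)
```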

Clone our git repo:
github.com/sbarton272/footprints.git

Assignment 2: “Suspended Motion” by M Haris Usmani (2013)

Assignment,Max,Software,Submission — Usmani @ 8:28 pm


Suspended Motion is a setup that leads the user to believe they are in motion on a spinning chair, while in fact, for most of the experience, the user remains stationary. It is based on a philosophical theme revolving around scientism.

Today we all live in the age of science, and we embrace everything science brings with it. Just look around and you will find we are surrounded by technology that was science fiction only decades ago. But this sometimes leads us to believe that science is the most authoritative worldview: that it has all the answers to our questions, that it alone can explain the true inner workings of the universe, and that only science can answer how the universe came about, how we evolved, or what our purpose in this world is. Suspended Motion gives a different perspective on the topic.

Suspended Motion consists of a rotating chair (in fact, any rotating chair). The user sits on the chair, wears headphones (preferably wireless, or holds a laptop while spinning), and follows the instructions in the sound clip (link below). The user is first instructed to close their eyes, spin the chair, and observe how the sound field exactly matches their current position. This is done with angular position data sent to the laptop via OSC from an iPhone compass attached to the chair. After about 40 seconds, the user is instructed to give a final push and set off in a decelerating rotation. The user focuses on the sound and experiences Suspended Motion for the last 25 seconds of the spin.
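The heading-to-gain mapping could be sketched as simple cosine panning over the 8 speakers. The actual piece uses Ambisonics for MAX, so this Python stand-in is only illustrative, and the speaker layout is an assumption.

```python
import math

def circle_gains(heading_deg, n_speakers=8):
    """Gains for n speakers in a circle so the sound field stays fixed
    in the room as the listener spins: each speaker's gain falls off
    with its angular distance from the compass heading. Speaker 0 is
    assumed to sit at 0 degrees."""
    gains = []
    for i in range(n_speakers):
        speaker_angle = i * 360.0 / n_speakers
        delta = math.radians(heading_deg - speaker_angle)
        gains.append(max(0.0, math.cos(delta)))  # rear speakers muted
    return gains
```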


This is more of an ‘experience-based’ instrument, so I would urge you to try it yourself using the following setup. You can always listen to the audio clip just to get a sense of how things go.
Audio Clip (Disclaimer: Lock Howl-Storm Corrosion from the 2012 release “Storm Corrosion” is the copyrighted property of its owner(s). )
MAX Sketch
Ambisonics for MAX
Compass Data via GyroOSC (iPhone App)

Max Extensions: Part 1

Arduino,Hardware,Max,OpenCV,Sensors — Ali Momeni @ 12:17 am

In order to add extra functionality to Max, you can download and install “3rd party externals”. These are binaries that you download, unzip, and place within your MAX SEARCH PATH (i.e. in Max, go to Options > File Preferences… and add the folder where you’ll put your 3rd party extensions; I recommend a folder called “_for-Max” in your “Documents” folder).

Some helpful examples:

  • Physical Computing: Maxuino
    • how to connect motors/lights/solenoids/leds to Max with Maxuino
  • Machine Vision: OpenCV for Max (cv.jit)
  • Audio Analysis for Max: Zsa objects (Emmanuel Jourdan) and analyzer~ (Tristan Jehan)

Instrument: “Mouse glove” by Marco Ramilli (2010)

Arduino,Instrument,Reference,Software — rkotcher @ 10:17 pm



Sound sensing and Arduino

Arduino,Audio,Sensors — Ali Momeni @ 3:45 pm