Group Project: Multi-Channel Sound System (part 1)

Audio,Hardware,Instrument,Reference,Software — rkotcher @ 4:23 pm

 

Introduction:
The multi-channel sound system group is implementing a spatial instrument that lets us interactively experience sound in space. The system includes software that controls a set of (currently) eight speakers positioned in space. The experiences will depend on the specific hardware setup and mechanics, which are still in the works. Our group has listed five possible hardware/software setups, and we are brainstorming the many experiences we can create with each setup. These ideas are listed in the section “Categories”.

 

Categories:
Each section corresponds to a specific hardware/software setup. For each, we include a few ideas that we have come up with so far.
  • A mobile disk that can be worn (as a hat, etc) – “Ambiance Capture Headset/Scenes from a Memory”

Every day, we move from place to place, spending our time as our motivations drive us: home, road/car, office/school, library, park, cafeteria, bar, nightclub, a friend’s place, a quiet night in. We experience a different ambiance in each, and a change of environment is usually a good thing. It may soothe us, or trigger a certain personal mode (like a work mode, a social mode, or a party mode). What if we could capture this ambiance in a ‘personalized’ way and recreate it around us when we want? Introducing the Ambiance Capture Headset. This headset has a microphone array around it that records all the audio around you, and it may catalog this audio using GPS data. You come home, connect the headset to your laptop and an 8-speaker circle, and after the audio is processed (extracting ambiance only, using differences in amplitude, correlation in time, etc.), the system lets you choose the ambiance you want. You can quickly recall your day by sweeping through and re-experiencing where you’ve been.
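One naive way to sketch the “extracting ambiance only” step (our illustration, not the group’s actual DSP): compute the level of each short frame of audio and keep only the frames that stay near the noise floor, treating those as background ambiance.

```python
# Illustrative ambiance-extraction sketch: frames whose RMS level stays
# below a threshold are treated as background ambiance; louder frames
# (speech, events) are left out. Frame length and threshold are assumptions.

def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def ambience_frames(samples, frame_len=4, threshold=0.2):
    """Return indices of frames whose RMS is below `threshold`."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [i for i, f in enumerate(frames) if rms(f) < threshold]
```

A real system would work on overlapping windows of actual recordings and also use the correlation cues mentioned above; this only shows the level-based selection idea.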

  • Head-sized disk with speakers positioned evenly around the disk, facing inward – “Circle of Confusion”

In this section, our ideas tend to fall into two categories: using the setup either to confuse a listener’s perception of the world around them, or to enhance it in some way. In the first scenario, one idea is to have each speaker amplify the sounds occurring 180 degrees away from it; in other words, the listener experiences a sonic environment that is essentially reversed from reality.
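The reversed-world scenario reduces to a tiny index mapping, assuming the speakers are evenly spaced in a circle (our sketch, not the group’s code):

```python
# "Circle of Confusion" reversal sketch: a sound localized in the
# direction of speaker i is played back from the speaker diametrically
# opposite (180 degrees away). Assumes an even number of speakers.

def opposite_speaker(i, n_speakers=8):
    """Index of the speaker 180 degrees across the circle from speaker i."""
    return (i + n_speakers // 2) % n_speakers
```

With 8 speakers, sounds arriving from the front would play behind the listener, and vice versa.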

  • Speakers hanging from the ceiling in arbitrary shapes – “EARS”

Sometimes you just need someone to listen to you, like a few ears to hear you out maybe? A secret, a desire, an idea, a confession. This is a setup that connects with people, and let’s them express what they want. It’s a room you walk into which has speakers suspended from the ceiling. You raise your hand towards one, and when that speaker senses you coming near, it descends to your mouth level so you may talk/whisper into it (speakers can act as microphones as well! or we may attach a mic to each). You may tell different things to the different speakers, and once you’ve said all you want, you hear what the speakers have heard before. This is chosen by the current position of speakers, as all speakers start to descend if you try to touch them. All voices are coded, like through a vocoder to protect identities of people. Hearing some more wishes, problems, inspirations, hopes you probably feel lighter than you did before.

  • A 3D setup (perhaps a globe-shaped setup) – “World Cut”

You enter an 8-speaker circle with a globe in front of you. You spin the globe and input a particular planar intersection of the world; this plane ‘cuts’ through a number of countries and locations. Each intersected location maps to a corresponding direction in our circle, so you hear music, voices, and languages from the whole ‘cut’ at once in our 8-speaker circle. You can spin the globe and explore the world in the most peculiar of ways.
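The location-to-direction mapping could be sketched as quantizing each location’s angle around the circle of intersection to the nearest of the 8 speakers (our illustration; the actual mapping is still being designed):

```python
# "World Cut" direction sketch: each location on the planar cut sits at
# some angle around the circle of intersection; quantize that angle to
# the nearest speaker. Speaker 0 is assumed to sit at 0 degrees, with
# speakers every 360/n degrees.

def speaker_for_angle(angle_deg, n_speakers=8):
    """Index of the speaker nearest to the given angle (degrees)."""
    step = 360 / n_speakers
    return round((angle_deg % 360) / step) % n_speakers
```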

Current system setup
The diagram shows the current hardware setup. As we explain in the section below, it is subject to minor modifications.

 



 

Implementing a STABLE and ROBUST system for practical use:
The current hardware implementation is not yet practical: the wiring obstructs the experience, and the system itself is difficult to transport. The items in the section called “Categories” describe completely new hardware setups that will fix this problem. Shielded speaker wires or PCBs may be part of a more stable design. Making the project more robust could include building acrylic enclosures for the speakers. Finally, we’re looking into using larger speakers to improve the experience.

 

Initial Group Members:
Sean Lee
David E. Lu
Jake Jae Wook Lee
M Haris Usmani

 

Current Group Members:
M Haris Usmani (Persistent Member)
Robert Kotcher
Haochuan Liu
Liang He
Meng Shi
Wanfang Diao
Jake Berntsen

Assignment 2 “Musical Painting” by Wanfang & Meng (2013)

Arduino,Assignment,Max,Sensors,Submission — meng shi @ 1:28 am


 

What:

This idea came from the translation between music and painting: as somebody draws a picture, the music changes at the same time. So it looks like you are drawing music :)

How:

We use photosensors to measure the light, read the analog input, and transform it into sound. I think the musical painting is a translation from the visible to the invisible, from seeing to listening.
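The light-to-sound step might look like the following sketch (our illustration; the actual mapping lives in the Max patch): a 10-bit analog reading from a photosensor is scaled linearly onto a MIDI note range, so covering or uncovering the sensor while painting bends the pitch.

```python
# Light-to-pitch sketch: Arduino analog reads are 0-1023; map that
# range onto MIDI notes. The note range (C3-C6) is our assumption.

def light_to_midi(reading, lo_note=48, hi_note=84):
    """Scale a 10-bit sensor reading onto a MIDI note range."""
    reading = max(0, min(1023, reading))          # clamp sensor value
    return lo_note + round(reading / 1023 * (hi_note - lo_note))

def midi_to_freq(note):
    """Equal-tempered frequency for a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)
```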

Why:

It is fun to break the rules between the different senses. On the question of the differences between noise, sound, and music: personally, I think sound is ordinary listening for people, noise may disturb people, and music is sound beyond people’s expectations. So, as the environment and people’s ideas change, we can create more suitable music.

Assignment 2: “Comfort Noise” by Haochuan Liu & Ziyun Peng (2013)

Arduino,Assignment,Submission — ziyunpeng @ 10:40 pm


Idea

People who don’t usually pay attention to noise often take it for granted as a disturbance, overlooking the musical parts of it: the rhythm, the melody, and the harmonics. We hear them, and we want to translate and amplify the beauty of noise for people who haven’t noticed it.

Why pillow?

The pillow is a metaphor for comfort: this is what we want people to perceive when hearing noise through our instrument, contrary to the impression noise usually leaves.

When you place your head on a pillow, it’s almost like being in a semi-isolated space: your head is surrounded by cotton, and visual signals are largely reduced since you’re now looking upward, where not much is happening. We believe that by minimizing visual content, one’s hearing becomes more sensitive.

Make

We use computational tools (Pure Data & SPEAR) and our musical ears to extract the musical information in the noise, then map it to musical sounds (drum and synth) that people are familiar with.
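One piece of that extraction can be sketched as snapping each prominent frequency found in the noise to the nearest equal-tempered pitch, so it can trigger familiar drum and synth sounds (our simplification; the real analysis was done by ear with Pure Data and SPEAR):

```python
import math

# Noise-to-music sketch: quantize analyzed partial frequencies to the
# nearest MIDI note. The example partial frequencies are made up.

def nearest_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

partials = [262.0, 330.5, 441.0]        # example frequencies from noise
notes = [nearest_midi(f) for f in partials]
```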

Pduino (for reading an Arduino in PD) and PonyoMixer (a multi-channel mixer) helped us a lot.


Inside the pillow there’s a PS2 joystick used to track the user’s head motions. It’s a five-direction joystick, but in this project we’re only using left and right. We had a lot of fun making this.


Here’s the mix box we made for users to adjust the volume and the balance between the pure noise and the musical noise-extraction sounds.


The more detailed technical setup is listed below:

  • Raspberry Pi – runs Pure Data
  • Pure Data – reads sensor values from the Arduino and sends controls to the sounds
  • Arduino Nano – connected to the sensors and the Raspberry Pi
  • Joystick – tracks head motion
  • Pots – mix and volume control
  • Switch – ON/OFF
  • LED – ON/OFF indicator
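The control mapping in the chain above can be sketched like this (our illustration, not the actual PD patch): the two pots become normalized mix/volume controls, and a single mix value splits into levels for the pure noise and the musical extraction.

```python
# Control-mapping sketch for the mix box: normalize 10-bit pot readings
# and split one mix value into two channel levels.

def pot_to_unit(raw, max_raw=1023):
    """Normalize a 10-bit pot reading to 0.0-1.0."""
    return max(0, min(max_raw, raw)) / max_raw

def mix_levels(mix):
    """Split one mix value (0.0-1.0) into (noise_level, music_level)."""
    return (1.0 - mix, mix)
```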

 

 

Assignment 2: “Drip Hack” by David Lu (2013)

Assignment,Submission — David Lu @ 10:00 pm

Drip Hack is a hack that drips. Inspired by my past experiences cobbling together random pieces of cardboard, this piece aims to explore our childlike curiosity about water dripping in the kitchen sink. The rate of dripping can be controlled by the pair of valves on top, and the water drops hit the metal bowl and plastic container below. The resulting vibrations are picked up by piezoelectric mics, which are amplified by ArtFab’s pre-amp. Since there are two drippers, interesting cross-rhythms can be generated.

Assignment 2: “Be Still” by Jakob Marsico (2013)

Assignment,Submission — jmarsico @ 9:57 pm

 

 

“Be Still” mimics rhythmic patterns that can only be heard when one stops to listen. The piece focuses on our need to pay attention to nature. Using motion sensors, solenoid valves, and a microcontroller, “Be Still” forces the visitor to stop for an exaggerated amount of time before it grants them the pleasure of hearing its song.
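The waiting logic can be sketched as a simple stillness counter (names and the hold period are our assumptions): any motion resets the wait, and the song is granted only after enough consecutive still readings.

```python
# "Be Still" stillness-timer sketch: the piece plays only after the
# motion sensor has reported no motion for `hold` consecutive ticks.

def song_granted(motion_samples, hold=5):
    """motion_samples: booleans, one per tick (True = motion detected).
    Returns True once `hold` consecutive still ticks have elapsed."""
    still = 0
    for moved in motion_samples:
        still = 0 if moved else still + 1
        if still >= hold:
            return True
    return False
```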

Assignment 2 “Musical Painting” by Wanfang & Meng (2013)

Our idea begins with traditional Chinese painting, where painted lines concisely express a feeling of strength and rhythm. Our work tries to transfer the beauty of painting to music. However, I found that making a harmonious “sound” from people’s casual input to 9 photosensors is not as easy as I thought. I tried some chords, like C, Am, D, and so on. It helps a little bit. The final result is in the video on Vimeo.

I know there is plenty of work left to improve the “algorithm”. Although it doesn’t really sound like music yet, I think the value of the work is that we try to break the line between sound and light and see what happens.

Looking for ways to improve it, I observed several other interactive musical instrument works. I think playing some basic rhythm pieces repeatedly, with only the chords changing as the interactive element, could be one way to improve this work.

The technologies we used are photosensors, an Arduino, and Max.

assignment1 1 from Wanfang Diao on Vimeo.

Assignment 2: “ProSound” by Liang He and Ding Xu (2013)

Assignment,Submission — lianghe @ 9:52 pm


ProSound is an instrument that explains “proxemics” theory by altering lights and audio for the audience. It is composed of a 3D-printed enclosure, infrared proximity sensors, an LED, and an Arduino. It measures 4 inches (height) by 3.5 inches (width) and looks like a semitransparent bottle with three “eyes”. Proxemics introduces four types of interaction spaces: intimate space, personal space, social space, and public space. In ProSound, we detect three of them: personal space, social space, and public space. Through the central proximity sensor, it detects the distance between the user and the bottle, indicating which space the user is in. It changes the LED’s color and the loudness of the sound according to the distance. In addition, when the user stays in the social space, which is the normal interaction space, ProSound records the user’s speech and repeats it again and again. A MIDI clip plays while the user interacts with the bottle. The user can control the speech’s pitch and the interval of the MIDI clip by moving a hand toward the proximity sensors on either side. Our project aims to deliver the concept of “interaction space” through the user’s interaction with ProSound. We hope users come to understand the principles of proxemics by playing with ProSound, and the magical things they can make with space.
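The distance-to-space mapping can be sketched as simple thresholding (the threshold distances here are our assumptions based on Hall’s proxemics, not ProSound’s exact values), with loudness falling off as the user moves away:

```python
# Proxemics sketch: classify the central sensor's distance reading into
# an interaction space, and scale loudness with distance.

def classify_space(distance_cm):
    """Assumed thresholds: personal < 120 cm, social < 370 cm, else public."""
    if distance_cm < 120:
        return "personal"
    elif distance_cm < 370:
        return "social"
    return "public"

def loudness(distance_cm, max_cm=500):
    """Linear fall-off from 1.0 (touching) to 0.0 (assumed max range)."""
    return max(0.0, 1.0 - min(distance_cm, max_cm) / max_cm)
```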

Assignment 2: “FootNotes” by Job Bedford(2012)

Assignment,Submission — jbedford @ 9:50 pm

Dance is an art form, and normally it needs a beat or rhythm to orchestrate its motions and movement. But what if one could create music through dance? Utilizing the articulation and versatility of the footstep, FootNotes enables the user to command rhythms with their own steps. Simple limit switches at the base of the feet transfer signals via wireless XBee to music-playing software running in Max. FootNotes is a rough prototype, but a start toward something potentially greater.

Assignment 2: “Lights Within Live” by Jake Berntsen (2013)

Assignment,Submission — Jake Berntsen @ 9:50 pm

My instrument controls various audio effects in Ableton with a light sensor. The electrical components are very simple: just a light sensor going into a Teensy 2. Within Max, I have a patch that converts the information from the light sensor into information that Ableton can recognize. Then within Ableton, I can route that information to any effect I want. In the video above, I designed a basic oscillating patch and made the light sensor control the rate, resonance, and frequency of a low-pass filter. The sound can then be controlled by letting more or less light reach the sensor. Ideally, future drafts of the project will have a larger variety of sensors and a plethora of associated audio effects.
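A sensor-to-cutoff mapping like the one described could be sketched as follows (our illustration; the real routing happens in the Max patch and Ableton): scale a 10-bit light reading onto a low-pass cutoff frequency along an exponential curve, which tracks how we hear pitch better than a linear scale would.

```python
# Light-to-cutoff sketch: exponential sweep from f_lo to f_hi as the
# normalized sensor reading goes from 0 to 1. The frequency range and
# 10-bit reading are assumptions.

def light_to_cutoff(reading, f_lo=100.0, f_hi=8000.0, max_raw=1023):
    """Map a raw sensor reading to a low-pass cutoff in Hz."""
    t = max(0, min(max_raw, reading)) / max_raw    # normalize to 0-1
    return f_lo * (f_hi / f_lo) ** t               # exponential sweep
```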

Assignment 2: “DialToneMadness” by Mauricio Contreras (2013)

Arduino,Assignment,Audio,Hardware,Sensors,Software,Submission — mauricio.contreras @ 9:38 pm

DialToneMadness is an instrument that generates different audio tones whose frequency and period of repetition can be altered by proximity. It is based on both the Android and Arduino development platforms. It uses an ultrasonic proximity sensor to measure the distance to the “triggering” object, whatever that may be. The sensor is triggered and read from within the Arduino, and the reading is then sent to the smartphone running Android, which produces a tone based on it. In keeping with the aesthetic of the piece, “retro” DTMF tones were used even on a high-end smartphone. This gives the audio output of the piece an interesting turn, since these sounds contain multiple tones and are not simply sorted in a higher/lower-pitch fashion. The smartphone responds with a single tone per message sent by the Arduino, so the period of the repetitions is controlled by the Arduino and is a linear function of the distance.
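The multi-tone character comes from DTMF itself: each key sounds as the sum of one low and one high tone. A sketch of the tone table and the linear distance-to-period mapping (the frequencies are the standard DTMF pairs; the timing constants are our assumptions):

```python
# DTMF sketch: standard keypad tone pairs, plus a linear map from
# sensed distance to the time between repeated tones.

DTMF = {                      # key -> (low Hz, high Hz)
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def repetition_period_ms(distance_cm, base_ms=100, ms_per_cm=5):
    """Linear map from sensed distance to milliseconds between tones."""
    return base_ms + ms_per_cm * distance_cm
```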
 

 

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2024 Hybrid Instrument Building 2014 | powered by WordPress with Barecity