Project Summary + Reflection
Our Design Approach
For the presentations, Daniel asked every group to explain the design approach behind their AI instrument, what actually happened, and what they could have done differently. So in this cluster I will be reflecting on exactly that.
This week, as I mentioned in an earlier entry, is quite a hectic one for many. Some of us have re-exams to complete and submit by Monday, the same day as our pitch for the second module. On top of all that, we have to submit our up-to-date journal by Friday, which is also our last day of presentations. So, to make our lives easier, I thought it best to aim to finish everything earlier in the week, leaving more time at the end to journal and think about the pitch.
That led to us not having a proper ideation session. Instead, we tinkered with the possibilities of Processing, Wekinator, and Arduino code, as well as the different serial inputs the Arduino is capable of. Beyond saving time, I actually thought this was the better way to go: since we didn't yet know what the code was capable of, it made more sense to tinker with the technology first rather than set a goal and work towards it.
How did we approach this brief?
We established communication between the Arduino and Wekinator by reading the serial data sent from the Arduino into Processing, which forwarded it to Wekinator listening on port 6448. We tested different input components, like potentiometers, buttons, and finally light sensors. We also explored the interaction of the instrument quite a bit, going from one light sensor to three, because we wanted a more complex and perhaps even collaborative experience. Then we recorded examples, trained the program, and so on.
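To make that pipeline concrete, here is a minimal sketch of what such a relay might look like in Processing. This is a reconstruction rather than our actual code: it assumes the Arduino prints comma-separated analog readings one line at a time, and the serial port index, baud rate, and 0–1023 normalization are illustrative. The `/wek/inputs` address and port 6448 are Wekinator's defaults.

```processing
import processing.serial.*;
import oscP5.*;
import netP5.*;

Serial arduino;
OscP5 osc;
NetAddress wekinator;

void setup() {
  // Open the first serial port at the Arduino's baud rate (both assumptions)
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n');                      // fire serialEvent once per line
  osc = new OscP5(this, 9000);                    // local listening port, arbitrary
  wekinator = new NetAddress("127.0.0.1", 6448);  // Wekinator's default input port
}

void draw() { }  // nothing to draw; the sketch only relays data

// Called for every full line from the Arduino,
// e.g. "512,300,87" for three light-sensor readings
void serialEvent(Serial port) {
  String line = port.readStringUntil('\n');
  if (line == null) return;
  line = trim(line);
  if (line.length() == 0) return;

  OscMessage msg = new OscMessage("/wek/inputs"); // Wekinator's default address
  for (String reading : split(line, ',')) {
    msg.add(float(reading) / 1023.0f);            // normalize 10-bit reads to 0..1
  }
  osc.send(msg, wekinator);
}
```

On the Wekinator side, the number of inputs then just has to match the number of values per line (three, for our three light sensors) before recording and training as usual.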

Looking Ahead
What could we have done differently? If we had more time, or if we decide to come back and push this concept to another level, we would change the look of the physical instrument to make the experience more collaborative; we talked about having one big jar representing a whole choir, for example. We also discussed training the program a little more, or figuring out ways to add nuance to the sound. As people walked up and tried our instrument during the week, we noticed they would let in only a little light, then lift the lid slowly to see if anything would change. But we had only trained it so that the sound starts playing as soon as any light shines through.
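As a side note on what that nuance might look like technically: one option would be to send Wekinator a smoothed, continuous light level rather than an on/off trigger, and train a continuous (regression) output on several lid positions instead of just "open" and "closed". The toy Processing sketch below is purely illustrative and not from our project; it fakes a sensor with the mouse position and shows how an exponential moving average turns a jumpy value into the kind of gradual one that could drive volume or timbre.

```processing
float raw = 0;            // latest (noisy) "sensor" value, 0..1
float smoothed = 0;       // low-pass filtered value you would send to Wekinator
final float ALPHA = 0.08; // smaller = smoother but slower response

void setup() {
  size(400, 200);
}

void draw() {
  // Stand-in for a real light reading: mouse height plus some noise
  raw = constrain(1 - mouseY / float(height) + random(-0.05, 0.05), 0, 1);

  // Exponential moving average: ease the smoothed value toward the raw one
  smoothed += ALPHA * (raw - smoothed);

  background(0);
  fill(255, 80, 80);
  rect(80, height * (1 - raw), 60, height * raw);             // raw bar
  fill(80, 160, 255);
  rect(220, height * (1 - smoothed), 60, height * smoothed);  // smoothed bar
}
```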
We also discussed adding a whole orchestra of other instruments, but that was difficult with the code we were using, which required a constant sound. We tried using a timpani as a bass, for example, but it didn't sound good at all: the rhythm was too quick, so the drum kept getting cut off 0.3 seconds in.
We could also add different inputs, like pressure sensors, to manipulate other aspects of the sound, such as pitch or rhythm. Not only could more aspects be manipulated, but the overall interaction would also be more complex and dynamic.

AI Noise Ensemble
What a crazy concoction of music! I'm not sure this was meant to be pleasant to the ears at all: five groups with identical drum-machine outputs and another with a loud brainwave-oscillator output were never going to create a nice, harmonious ensemble. Which, I guess, is why Daniel named it the Noise Ensemble instead.
We also got initial feedback from the external guests. Many were curious about how our jars functioned, peeking under the lid, and we always had to explain, “haha, it's light sensors!” One girl commented that our instrument was her favorite, because the music it produced was actually pleasant to listen to.
Presentations + Feedback
Many groups played with the technology offered by the Leap Motion, but one group stood out to me. Most simply played with different hand gestures without thinking about the meaning behind them. Manuel et al.'s group, however, really explored the scratching gestures that DJs perform at gigs. They used the same drum-machine output we did, but swapped the samples for DJ sounds and trained multiple gestures.
Felix et al.'s group used the same technology as we did, the Arduino, which is why I also found their project intriguing. They designed a miniature violin that used both light sensors and pressure sensors to manipulate sound. I liked their project because, if we were to continue ours, we would essentially have added extra sensors to bring more nuance to our sound, and we had already contemplated adding pressure sensors.
Our presentation didn't receive feedback from anyone other than Daniel himself. Over the week, Daniel had shown real enthusiasm for our project: he was constantly checking in with our group, asking how we were progressing, and showering us with compliments about how much he enjoyed the direction we took with his brief. For the presentation, he agreed with our own reflection that it's indeed an option to make the instrument more collaborative by increasing the space for interaction: the jars could be larger, spaced out, and more! Daniel also talked about adding a smell dimension, which could increase the playfulness of our instruments. Maybe a scent could be released when the lid is lifted, adding another dimension to our musical ambience? Overall, Daniel was quite happy with our project and really liked the approach we took to actually make a “nice” sound.