Digital Foundations II: Max 7 Final

This exercise in Max 7 relates to my initial “Big Idea” (see https://liamtwall.wordpress.com/2017/04/19/max-7-big-idea-sensory-audio-experience/ ), in which I want to create an audiovisual experience using Max 7. The initial idea involved taking the input from an audio source, such as an instrument or a MIDI file, and representing the sound visually with colors and physically with a haptic feedback suit. The challenge was to create some kind of draft or prototype that would explain, on a more basic level, how elements of the idea work, so that it could be used, for example, to pitch the idea to a company or an organization.

Initially, as a building block, I used the BEAP Keyboard object as a way to have control over the audio input. From there, however, I had to build a simple synth. I accomplished this with a series of oscillator objects, each of which feeds an audio mixer, which in turn is ported into a VCA. Simultaneously, the gate of the Keyboard object goes into an ADSR (Attack, Decay, Sustain, Release) object, which allows modification of those aspects of the signal; the ADSR's output is ported into the VCAs as well. From here the path splits into a few different routes. First, the output goes into a Stereo object that sends the signal to the computer's sound output. The output is also ported to AUDIO2VIZZIE converters, which, as the name suggests, translate the audio into Vizzie data. This allows the data to be fed into PRIMR objects, which assign it to a color value and serve as useful indicators of oscillation, since one can see the color picker vibrate up and down. From there, the three feeds are ported into VIEWR objects, which represent the oscillation as flickering as well.
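
To make the signal chain concrete, here is a minimal sketch in Python (with NumPy) of what the patch computes: an oscillator scaled by a piecewise-linear ADSR envelope, which is the same multiplication the VCA performs on the audio signal. The envelope values, note length, and sample rate are illustrative assumptions, not settings taken from the patch.

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def adsr(attack, decay, sustain, release, note_len):
    """Piecewise-linear ADSR envelope, one value per sample."""
    a = np.linspace(0.0, 1.0, int(SR * attack), endpoint=False)
    d = np.linspace(1.0, sustain, int(SR * decay), endpoint=False)
    hold = int(SR * note_len) - len(a) - len(d)
    s = np.full(max(hold, 0), sustain)        # sustain while the gate is held
    r = np.linspace(sustain, 0.0, int(SR * release))
    return np.concatenate([a, d, s, r])

def voice(freq, note_len=0.5, attack=0.01, decay=0.1, sustain=0.7, release=0.2):
    """One synth voice: an oscillator multiplied by the ADSR envelope (the VCA)."""
    env = adsr(attack, decay, sustain, release, note_len)
    t = np.arange(len(env)) / SR
    osc = np.sin(2 * np.pi * freq * t)        # the oscillator object
    return osc * env                          # the VCA: envelope scales amplitude

signal = voice(440.0)  # an A4 note
```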

What this does is represent the frequency of the note being played visually, via the oscillation shown in the PRIMR objects. This is the first step towards realizing my initial “Big Idea”. However, the final product is still a long way off; the next step would most likely be to make the visualization smoother and more immediately understandable, perhaps through a direct color change as the frequency of the sound wave changes. After that, the next step would be to make other audio sources besides the simple keyboard work just as effectively in the same setup.
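
As a hypothetical sketch of that "direct color change" idea, one could map frequency to hue on a logarithmic scale, so each octave sweeps an equal span of the color wheel. The piano-range bounds and the mapping itself are assumptions for illustration, not part of the current patch.

```python
import colorsys
import math

def freq_to_rgb(freq, low=27.5, high=4186.0):
    """Map a frequency (Hz) to an RGB color: the frequency's log position
    within the piano range (A0..C8) picks a hue. The mapping is arbitrary."""
    pos = (math.log2(freq) - math.log2(low)) / (math.log2(high) - math.log2(low))
    hue = max(0.0, min(1.0, pos))             # clamp out-of-range frequencies
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(freq_to_rgb(440.0))  # A4 lands a little past mid-spectrum
```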

The Big Idea has room to evolve as well. I think the next evolution, in terms of helping create an association between audio and visuals, would be to reverse the process: a similar setup in which a user arranges colors that correlate with pitch, composing an audio piece solely from combinations of colors. That might help solidify in viewers' minds the connection between color and sound, and perhaps even get them to associate colors with sounds in day-to-day life.
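
The reversed process could start from the inverse of that mapping: read a color's hue back out as a frequency. Again, this assumes the same hypothetical log-scale hue mapping sketched above, not anything built in the patch yet.

```python
import colorsys

def rgb_to_freq(r, g, b, low=27.5, high=4186.0):
    """Invert the hue-to-pitch sketch: recover a frequency from a color's hue,
    assuming the same log-scale mapping over the piano range."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return low * (high / low) ** hue          # hue 0 -> low Hz, hue 1 -> high Hz

# A sequence of colors becomes a sequence of pitches (a simple "color melody"):
palette = [(255, 0, 0), (0, 255, 255), (128, 0, 255)]
melody = [rgb_to_freq(*c) for c in palette]
print([round(f, 1) for f in melody])
```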

[Video: the Max patch working.]
