The Glimptool is the result of research into the possibilities of creating a drawing tool that can be used as an instrument by a performer reacting to live audio input. How do you create a responsive graphical tool to improvise together with two musicians?
A question posed to the MAPLAB1 by three artists (an illustrator, a bass player and a drummer) initiated this research. The three wanted to explore how they could improvise together using video projection in a music theatre performance for children of two years and older.
As a starting point we organised a first lab session during which the two musicians played their instruments and the illustrator drew in reaction to them. These drawings were then projected on the stage where the two musicians played. This allowed the three to improvise together; however, the musicians had no way to directly influence the visual content with their music. Once a line was drawn, it stayed static.
While experimenting with different setups and software tools during the first lab, it became apparent that it was a challenge for all three artists to play an equal part in the improvisation. While drawing, the illustrator listened to the musicians and interpreted what she heard, but the musicians had no way to directly influence the visual character with different musical styles.
The question arose whether it would be possible to create a tool controlled not only by the illustrator but also by the music of the two musicians: a drawing tool where auditory aspects could be used as input to change different attributes of the visual elements, and one that enabled the illustrator to draw with different images she could design beforehand.
The research was split into three phases. The first phase consisted of a one-day improvisation session at the MAPLAB with different hardware and software, to test different scenarios and observe the interaction. For this phase a first version of an audio analysis tool, able to create generative graphics based on audio input, was developed beforehand.
The first version of the audio analysis tool
The second phase consisted of a three-day lab improvising with the first prototype of the tool. This session was also used by the illustrator to master her new instrument. Feedback and insights from this session were used to improve the tool.
The third phase consisted of improvising with the improved tool. Between the second and third phase the illustrator also had time to further master the tool on her own.
The necessity of a specific drawing tool, operated by the illustrator and receiving input from the audio analysis tool, first became apparent after the first lab. We concluded that the audio analysis and the automated generative graphics prepared for the first lab had visual potential, but excluded the illustrator since she could not control them. From this came the desire to develop a custom tool to stimulate the interaction between the three artists. The goal was to create an instrument for the illustrator to play on the same level as the two musicians.
This also triggered research into how the Glimptool could create a visual representation of the character of the audio being created at that moment.
For the development of the Glimptool, the period directly after the first lab was the most important. It was then that we realised we needed some kind of tool, started to sketch out ideas, and made a list of requirements the tool should meet.
Some of these requirements were:
Quite quickly it became apparent that there were two distinct challenges:
To meet these challenges we first brainstormed and sketched how the interface should look and function. Then we started to build the structure and ran technical tests to see if we could implement all the features. We also optimised the audio analysis to further meet the demands we had set for the drawing tool.
For the audio analysis we created a tool that could analyse two different audio sources and send the audio information to the graphical tool using OSC. For each of the audio inputs we sent the following parameters to the Glimptool:
The audio analysis tool final version
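The exact parameter names used in the performance setup are specific to the tool, but the messages themselves follow the standard OSC encoding: a NUL-padded address string, a type tag string, and big-endian binary arguments. As a minimal sketch (the address /audio/1/amplitude, the value, and the UDP port are illustrative assumptions, not the actual names used), a single-float message could be packed and sent like this:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one float32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # type tag: one float
            + struct.pack(">f", value))        # big-endian float32

# Hypothetical example: send the analysed amplitude of audio input 1
# to the graphical tool over UDP (port 9000 is an assumption).
msg = osc_message("/audio/1/amplitude", 0.73)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

In practice an OSC library would handle this encoding; the sketch only illustrates what travels over the wire between the analysis tool and the Glimptool.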
This gave us enough flexibility to couple different visual properties (size, colour, location, rotation) to the different audio properties.
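Such a coupling amounts to mapping an incoming audio value onto the range of a visual property. As an illustration (the feature names, ranges and couplings below are assumptions for the sketch, not the Glimptool's actual parameters), a clamped linear mapping might look like:

```python
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical couplings: louder bass -> larger object,
# higher pitch -> more rotation.
bass_amplitude = 0.6   # normalised 0..1, from the analysis tool
pitch = 440.0          # Hz

size = map_range(bass_amplitude, 0.0, 1.0, 10.0, 200.0)  # pixels
rotation = map_range(pitch, 80.0, 880.0, 0.0, 360.0)     # degrees
```

Because each coupling is just a choice of input feature and output range, the same mechanism lets the operator reassign which audio property drives size, colour, location or rotation.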
We had limited time between the first and second lab, so based on our initial sketches and ideas we started working on the tool as quickly as possible. Our intention was to work closely with the illustrator and test what we were making at each step. However, due to time constraints we ended up developing the tool without much feedback from her. This meant that the first prototype we presented at the beginning of the second lab was completely new to her.
At the beginning of the second lab we were quite curious how the illustrator would use the Glimptool. We thought it was an easy-to-use tool with a clear interface and a wealth of possibilities, but would it function in a performance context? During the lab we explained the tool to the illustrator and worked together, playing with it during the improvisations. The illustrator needed the first day just to get acquainted with her new instrument. She could work with it and create visual compositions, but it was not until the second day that she had enough routine with the tool to start exploring it.
We had built the tool as a toolkit with many possibilities for the illustrator to play with, but while making it we had not actually played with it much ourselves. So when she began to explore the possibilities, she created all kinds of visual compositions and combinations we had never thought of. This showed the enormous potential of the tool.
The learning curve for the illustrator was steep in the beginning. However, she could immediately improvise with it and play on an equal footing with the musicians. We also noticed a number of flawed design and functional decisions; some were not suited to the way she worked. During the second lab we spent quite some time changing small details in the interface and adding some essential functionality. For example, the layout of the buttons for selecting the colour of an object was not logical to her, and the icons for the different ways to place objects on the stage were not immediately clear to her.
Some of the changes we made after feedback during the second lab were:
At the end of the second lab we had developed the Glimptool into a usable prototype. We collected all the feedback and continued to work on the Glimptool between the second and third lab. The most important change was the layout of the interface. We had originally designed the interface so that you started at the top, first choosing the display mode of the image, then the image, and then the different properties to connect to the audio. While working with the tool, the illustrator noticed it was more natural for her to work from left to right, so we changed the interface to consist of several columns to accommodate that.
Other changes to the Glimptool were new features such as:
Final version of the interface for the Glimptool
After all these changes were implemented we organised a third lab with the second iteration of the Glimptool. During this lab we noticed that the illustrator needed some time to adjust to the new features. Once she had mastered them, she could quickly focus more in depth on creating different “sets” of visuals instead of struggling to master her instrument.
The Glimptool developed during this research has been very successful, and it will be used in the artists' upcoming live “Glimp” performances.
It is a great challenge to design simplicity into a complex system. Implementing the requested functionality is only part of making the application; a suitable interface that clearly communicates the functionality is essential. The Glimptool is a success in the sense that it enables the illustrator to freely improvise with the musicians, using the music as input for the visual properties of the different elements that are drawn. A tool like the Glimptool is not as fast and responsive as direct drawing, e.g. using a Wacom tablet, but it offers many other possibilities, creating a graphically richer environment.
When developing a tool like the one in this research, an iterative development process is essential: it is not possible to design for all needs in one go. Especially in a creative making process, the usability requirements tend to differ from those of regular use. In these cases it is best to follow strict Human-Computer Interaction guidelines; however, acceptance from the operator is essential.
Photo taken during the second Glimp lab
Photo taken during the second Glimp lab
Theatre group Oorkaan will use the tool as an integral part of the music theatre performance “Glimp”, for which this research was done. Furthermore, we will use the audio analysis tool and graphic tool in future labs and workshops for the MAPLAB. The tool as it is will not be further developed, but it will serve as an inspiration for future projects in the same realm.
1Media and Performance Laboratory http://maplab.nl, retrieved 2014-09-11
2Open Sound Control http://opensoundcontrol.org/, retrieved 2014-09-11
3Oorkaan http://oorkaan.nl, retrieved 2014-09-11
This research was made possible by the MAPLAB and the enthusiastic collaboration with the artists of Glimp: Lotte, Rob, Tony and Bram.