I shifted from my original sample editing plan (to use my software Wave Exchange to cut and process the samples) and instead continued to use Audition. Although Wave Exchange is much better for dynamic effects processing, I quickly realized that I could save most of that for the laptop improvisation phase, as simply editing the mixdowns into usable fragments was already a huge task. To give an example, a ten-minute recording yielded about 50 separate samples, equating to hours of post-production work of the stifling non-performative variety.
I have since realized that a system could easily be devised that would allow me to process recordings into separate samples on the fly in the studio, while the performer was playing:
1) I custom-design software that provides as many separate audio buffers as I need (creatable at the press of a button), each of which can be recorded to individually, with or without effects; the effects can be manipulated by me (or potentially by the performer) in real time as the performer plays
2) I sit in the control room and signal to the performer in the studio to start, stop, fade in, fade out, etc., in order to synchronize my recording with their separate gestures
3) At the end of the session, each buffer can be saved individually, and each of the resulting samples requires minimal editing.
4) Thus musical gestures are immediately captured as individual samples, and the recording session becomes a collaborative improvisation.
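The buffer system described above could be sketched roughly as follows. This is a minimal illustrative sketch in Python, not the actual software: the class and method names (`BufferSession`, `new_buffer`, `record`, `save_all`) are all hypothetical, audio input is represented as plain lists of float samples rather than a live soundcard stream, and the "effect" is just any callable applied per sample.

```python
import struct
import wave

class BufferSession:
    """Hypothetical sketch: one buffer per musical gesture, each
    recordable through an optional input effect, each saved as a
    separate sample file at the end of the session."""

    def __init__(self, sample_rate=44100):
        self.sample_rate = sample_rate
        self.buffers = []   # one list of float samples per gesture
        self.active = None  # index of the currently armed buffer

    def new_buffer(self):
        """'Press of a button': create a fresh buffer and arm it."""
        self.buffers.append([])
        self.active = len(self.buffers) - 1
        return self.active

    def record(self, samples, effect=None):
        """Append incoming audio to the armed buffer, optionally
        through a real-time effect (any per-sample callable)."""
        if self.active is None:
            raise RuntimeError("no buffer armed")
        if effect is not None:
            samples = [effect(s) for s in samples]
        self.buffers[self.active].extend(samples)

    def save_all(self, prefix="gesture"):
        """Write each buffer out as its own 16-bit mono WAV file,
        so every gesture becomes an individual sample."""
        names = []
        for i, buf in enumerate(self.buffers):
            name = f"{prefix}_{i}.wav"
            with wave.open(name, "wb") as w:
                w.setnchannels(1)
                w.setsampwidth(2)
                w.setframerate(self.sample_rate)
                w.writeframes(b"".join(
                    struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                    for s in buf))
            names.append(name)
        return names
```

In use, each signalled gesture would map to one `new_buffer()` call, with `record()` fed from the live input; `save_all()` then replaces the hours of after-the-fact editing with a batch export of already-separated samples.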
So later that day I modified one of my software instruments, “Strange Creatures”, to suit this purpose. With it, recording separate gestures to different buffers with input effects can be done with minimal setup, so the performative flow is not overly interrupted.
Although this software won’t be used in this Uni phase of the project, I plan on using it for future collaborative Streamland studio sessions.