I see three areas that this project could evolve towards:
1) Group laptop improvisations - where the samples generated in the studio are shared among all performers, either in free improvisation or with each laptop performer designated a role (e.g. beats, melody, bass/drone, effects processing). The collective output of these performances could then easily be recorded as polished album pieces. A system for synchronizing recording across all laptops could be implemented over a wireless network (the resulting files would then have to be mixed down; see the sketch after this list), or an instant master recording could be made through the PA mixer.

2) Laptops with shared samples (as above) blended with live processing of acoustic instruments via microphones. In this case the recording space and the quality of the microphones would become important factors in the quality of the final product - a limiting factor that a purely laptop ensemble would not have.

3) The final logical extension of this is not only processing the live audio stream, but also capturing and manipulating fragments of it in real-time during, and as part of, the performance. With software custom-designed for this purpose, both the individual sample fragments and the master output of each laptop could be saved at the end of the recording session - resulting in instant polished pieces of music with little or no post-production necessary, as well as processed collections of samples requiring little or no editing. This third option would ideally be undertaken by ensembles that play together regularly, so that a feedback loop of samples from previous sessions could inform the development of future sessions.
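To make the synchronization idea in option 1 concrete, here is a minimal sketch of a record-start broadcast over the local network, assuming the python-osc package; the /record/start address, port number and arm_recorder callback are hypothetical placeholders, not part of any existing Streamland software:

```python
# Hypothetical OSC-based record-start broadcast: one laptop acts as leader,
# and the others arm their recorders when the message arrives. On a local
# network the arrival spread is typically a few milliseconds, and any
# residual offset can be nudged during the mixdown.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

PORT = 9000  # arbitrary port choice

def leader(peer_ips, take_name):
    """Tell every peer to begin recording the named take."""
    for ip in peer_ips:
        SimpleUDPClient(ip, PORT).send_message("/record/start", take_name)

def follower(arm_recorder):
    """Listen for the leader's start message and arm the local recorder."""
    dispatcher = Dispatcher()
    dispatcher.map("/record/start", lambda addr, take: arm_recorder(take))
    BlockingOSCUDPServer(("0.0.0.0", PORT), dispatcher).serve_forever()
```

A more robust version would exchange timestamps to estimate per-laptop clock offsets, but for takes that are destined for a mixdown anyway, a simple broadcast like this would likely suffice.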
For some time now I've had the long-term goal of creating a recording and sample-production setup that I could travel with - to be able to go anywhere in the world and record, sample and produce music, and to experience a variety of cultures, atmospheres, people, instruments, musical styles, etc. What interests me most about this idea is not just the recording interactions, but creating the music while in those places, and getting a performance/sampling feedback loop happening. Although this idea is not related to Streamland as it currently exists, Streamland has nonetheless laid the groundwork for at least the traveling production setup: laptop, headphones and NanoKontrol. A few compact portable mics and a compact mic stand are all that would be needed to fulfill the sampling aspect, and using USB mics could eliminate the need for an audio interface. Ideally, this is what I'd love to guide the Streamland concept towards.

As the process has evolved, I've explored the idea of a fixed improvisation setup - i.e. having several sample slots and effects pre-loaded and pre-MIDI-mapped in the Streamland instrument. This way, at least a core set of controls could get into my muscle memory, and I could develop a more immediate and focused auditory-tactile relationship, relying less on my eyes.
What eventually evolved was not to have anything pre-mapped, but rather to develop a flexible consistency. This generally involved:

- sample on/off triggers and sample-reverse buttons in the same place on the QWERTY keyboard each time
- volume faders mapped to the same or similar places (depending on which samples need a fader, and which can be grouped together on a single fader)
- core effects mappings for the basic effects setup that has been used almost every time, sometimes augmented with additional effects: dials mapped to delay time, delay feedback and reverb room size

This would typically result in the entire control setup consisting of approximately:

- 20 QWERTY keys
- 8 faders
- 4 dials
- the laptop track-pad

This setup is also completely portable - MIDI mappings were done with an Icon iControl, and I've now switched to a Korg NanoKontrol. Apart from this, just a laptop and headphones.
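To make the 'flexible consistency' idea concrete, here is a minimal sketch of a rebindable CC-to-parameter layer, assuming the mido library; the CC numbers and parameter names are hypothetical placeholders rather than actual NanoKontrol assignments or Streamland internals:

```python
# Hypothetical flexible MIDI mapping: the dispatch logic stays fixed between
# sessions, while the CC-to-parameter bindings can be redone per session.
import mido

# Per-session bindings (placeholder CC numbers and parameter names).
CC_MAP = {
    0: "volume_group_1", 1: "volume_group_2",  # faders
    16: "delay_time", 17: "delay_feedback",    # dials
    18: "reverb_room_size",
}

def handle(msg, params):
    """Route an incoming control-change message to its named parameter."""
    if msg.type == "control_change" and msg.control in CC_MAP:
        params[CC_MAP[msg.control]] = msg.value / 127.0  # normalize to 0..1

params = {}
with mido.open_input() as port:  # default MIDI input, e.g. a NanoKontrol
    for msg in port:
        handle(msg, params)
```

Keeping the handler generic and the bindings in one small table is what lets the same muscle-memory layout survive from session to session even as the underlying samples change.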
"The bird's ritournelle (ritornello) not only defends a terrain, it also establishes continuously the terrain by encircling it with song. It is the bird's "song of the earth," linking place and color with sound, weaving together ornamented terrain (patterns of leaves, etc.) with ornamented rhythm. The ritournelle is simultaneously pure expression and a returning "fold" that wards off death and other threats and becomes also a song of courtship." The relevance of this for Streamland is simply in the aesthetics - I get a sense of this 'ornamented terrain' often when I make music. Patterns of leaves, the light in the spaces between and the colour of bird-songs are recurring inspirations and atmospheres that I sense myself within. "If one reverses a retrogradable series as follows: 1,2,3,4, 4,3,2,1, then the entire unit of eight elements is itself non-retrogradable, palindromic… these palindromic devices were associated with magical conjuring formulae and Messiaen spoke of a "good magic" that concerns the ability of music to invoke the cosmos in a new way and to evoke and alter human emotion." This idea of the palindrome relates to the technique in the Streamland instrument of sample reversing - flipping the playspeed of a sample backwards and forwards, effectively mirroring fragments. Pickstock, C. 2007, “God and Meaning in Music: Messiaen, Deleuze, and the Musico-Theological Critique of Moderism and Postmodernism” in Sacred Music, 134.4, pp40-62 Notes for refinement based on the improvisation process so far:
Notes for refinement based on the improvisation process so far:

- often leaving loops playing for too long
- most of the first 6 recordings did not feature much prominent performative FX tweaking, sample micro-looping or granular suspension
- pieces tended to be either spacey and fragmented, or totally reliant on loop sync; I would like a balance of both
- need to do some pre-preparation of 'sections' of musical material, so I can easily change from one to the next and stay in flow
- often turning up high-pitched samples too loud for too long; they generally don't need to be as loud as the rest
- volume dynamics are either slow or immediate cuts; explore other speeds, and techniques like swells and fluctuations
- not using any panning techniques - static or dynamic - on samples or FX
- not being bold with FX; not experimenting with new techniques and combinations
- apart from glitch sounds, everything is otherwise high fidelity; explore low fidelity in other aspects - perhaps process samples in Audition, or take some of the dirtier mixdowns and explore the textures of noise floors, etc.
- reveal the inner workings of the technology through the music; travel further away from sounding like acoustic recordings
- lack of rhythmic samples and combined samples in the collection; do a few Audition sessions of editing together rhythms and combining rhythms of different instruments

I began my laptop improvisations in an unfinished prototype Max instrument, which was an evolution of one of my older instruments, Square Bender. As I zeroed in on the improvisation phase, I decided that this would be the only instrument I'd use, so as not to dilute my performative focus. I named this instrument Streamland, and it has now undergone a series of transformations as the improvisations unfolded (a sketch of the loop-sync ideas follows this list):

- sample muting that retains loop position - samples can be muted without going out of sync or being retriggered
- phase offset of synced loops - shifts the position at which synced loops intersect; this retains uniform loop lengths, but can offset the start position of each sample
- loop syncing in reverse, and loop syncing for selections of sample fragments (i.e. keeping the loop synced when only a part of the sample is selected, not the whole thing) - this was essentially just a bug that needed fixing; the grain-stretched loop syncing is not very tight, but it has an organic glitchy charm
- added MIDI and QWERTY keyboard pitch control (for non-granular playback only) - this is old-school pitch shifting without time-stretching; although I could alter playback speed before, there was no way to land accurately on a pitch
- added portamento to pitch control - slide between sample playspeeds/pitches; a fine addition to the record-player-style repertoire of performative controls
- added synchronization for the non-grain player (non-pitch-correcting) - mainly for syncing non-pitched material (e.g. percussive loops or noise) without granular distortion; the granular syncing wasn't tight enough for some purposes, and the non-pitch correction in this addition often adds richness to the sound-world
- added reversing to MIDI note control of samples - so a MIDI note can set playspeed forwards or in reverse
- added MIDI button mapping for sample triggering and reversing - this frees up QWERTY keys for pitch control (MIDI note input)

[note: this instrument is not available to the public yet, but should be within the next 6 months]
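As promised above, here is a minimal sketch of how sync-preserving muting and phase offset can work, assuming loop position is always derived from a shared transport clock; this is a hypothetical Python rendering, not the actual Max patch:

```python
# Hypothetical synced-loop player. Because the read position is a pure
# function of the shared transport time, muting never loses sync, a phase
# offset simply shifts where the loop intersects the others, and reversal
# stays locked to the same clock.
class SyncedLoop:
    def __init__(self, loop_len, offset=0.0):
        self.loop_len = loop_len  # shared loop length in seconds
        self.offset = offset      # phase offset within the loop
        self.muted = False
        self.reversed = False

    def position(self, transport_time):
        """Read position depends only on the transport, never on mute state."""
        pos = (transport_time + self.offset) % self.loop_len
        return self.loop_len - pos if self.reversed else pos

    def gain(self):
        """Muting is just zero gain; the read head keeps moving regardless."""
        return 0.0 if self.muted else 1.0
```

Unmuting therefore drops a sample back in at exactly the position it would have reached had it kept playing, which is the behaviour described in the first list item.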
I mentioned earlier that I was considering a way of classifying and finding the samples in future iterations as this project grew. In fact, after only two or three improvisations I found that I needed something immediately - there were too many samples, and navigating through them was a case of random hit and miss.
So I discovered Audio Finder, which has an amazingly efficient way of tagging samples by multiple categories. This has become an essential element of my improvisations - I can search by instrument type, technique, rhythm, etc. in Audio Finder, then drag and drop straight from there into my Max performance instrument.
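Under the hood, this kind of multi-category tagging amounts to a simple inverted index; here is a minimal sketch of the lookup behaviour I rely on (hypothetical - not Audio Finder's actual implementation, and the filenames are made up):

```python
# Hypothetical multi-tag sample index: each sample carries several tags,
# and a query returns only the samples matching every requested tag.
from collections import defaultdict

index = defaultdict(set)  # tag -> set of sample filenames

def tag(sample, *tags):
    for t in tags:
        index[t].add(sample)

def find(*tags):
    """Intersect the tag sets, e.g. find('cello', 'rhythmic')."""
    sets = [index[t] for t in tags]
    return set.intersection(*sets) if sets else set()

tag("cello_pluck_03.wav", "cello", "rhythmic", "short")
tag("cello_bow_12.wav", "cello", "drone")
print(find("cello", "rhythmic"))  # {'cello_pluck_03.wav'}
```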
A few things regarding the scope of the project that I've been considering and/or adjusting along the way. These are as much for me to keep track of future iterations of this project as anything else.

1) I had the idea early on to create visual artworks that I'd bring to the studio for the performers to look into while improvising. I'd then use these same artworks when I did my laptop improvisations - to explore a kind of atmospheric perceptual consistency. I didn't explore this due to the time and expense involved, but I'd still like to in the future.

2) Whether or not to include my own samples (i.e. those that I had created/performed myself) in the sample bank has been an ongoing question. At first I saw no problem with it, but after beginning my improvisations I discovered how far outside my comfort zone I was forced by not doing so. I recorded one improvisation with a sampled beat (a Native Instruments sample, so not really in the spirit of this project), and although the piece still resonated with me, it didn't feel as liberating as the others, where non-reliance on the grounding of a sampled beat necessitated much more structural ingenuity and spontaneity.

3) I've been grounding this research in the concept of "group mind" improvisation, but in an asynchronous sense - we are improvising together, but at different points in time. What this lacks is feedback from me to the performers - it's only a one-way dialogue, me responding to them. Some ideas were proposed by my peers, such as playing my improvisations to a second round of studio performers and getting their musical responses to that, etc. This kind of feedback mechanism may also be developed in future iterations.

The sample editing process was finally completed after many hours and days of intricate work.
Final count: 381 individual samples, gleaned from just under an hour of raw studio recordings - and that's just the first pass. If I went over those recordings again, I could easily double that number. It's also interesting to consider what would happen if someone else edited samples out of those same recordings, and how their interpretation would differ.

In any case, 381 samples is huge - that could keep me going for years. But in the interests of diversity, as I continue to develop this project in the future I aim to expand the array of instruments and personalities used.

My consideration now is categorizing the samples for quick and easy discovery before and during performance. For now I just use the Mac Finder, where my samples are arranged by instrument type, but it is not at all ideal. I have heard rumour of someone developing a visual system where samples appear as nodes that float around and can be clumped together based on metadata or audio analysis - for example instrument type, melodic/rhythmic character, sample length, loudness, timbral characteristics, etc. You select one of these elements from a menu, and a cluster of relevant samples forms. Ideally, I'd then just drag and drop these sample-nodes into my instrument and play (a sketch of the analysis side of such a system follows at the end of this entry).

I shifted from my original sample editing plan - to use my software Wave Exchange to cut and process the samples - and instead continued to use Audition. Although Wave Exchange is much better for dynamic effects processing, I quickly realized that I could save most of that for the laptop improvisation phase, as simply editing the mixdowns into useable fragments was already a huge task. To give an example: a 10-minute recording yielded about 50 separate samples - equaling hours of post-production work of the stifling, non-performative variety.
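For reference, the clumping-by-analysis idea mentioned above could plausibly be built from off-the-shelf pieces; here is a minimal hypothetical sketch using librosa for feature extraction and scikit-learn for clustering (my own guess at an approach, not the rumoured system):

```python
# Hypothetical sketch: reduce each sample to a small feature vector
# (duration, loudness, timbre) and cluster similar-sounding samples,
# roughly as the floating-node idea suggests.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def describe(path):
    """Summarize one sample file as a feature vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr).mean(axis=1)  # timbral summary
    rms = float(np.mean(librosa.feature.rms(y=y)))        # rough loudness
    return np.concatenate([[len(y) / sr, rms], mfcc])

def cluster(paths, k=4):
    """Group sample files into k clusters of similar material."""
    features = np.stack([describe(p) for p in paths])
    return KMeans(n_clusters=k, random_state=0).fit_predict(features)
```

Each cluster would then become one "clump" of nodes in the imagined interface, ready to be dragged into the instrument.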
I have since realized that a system could easily be devised to allow me to process recordings into separate samples on the fly in the studio, at the same time as the performer is playing:

1) I custom-design software that essentially provides me with as many separate audio buffers as I need (creatable at the press of a button), each of which can be individually recorded to - with or without effects - and the effects can be manipulated by me (or potentially by the performer) in real-time as the performer plays.

2) I sit in the control room and signal to the performer in the studio to start, stop, fade in, fade out, etc., in order to synchronize my recording with their separate gestures.

3) At the end of the session, each separate buffer can be saved individually, and each of these samples requires minimal editing.

4) Thus musical gestures are immediately captured as individual samples, and the recording session becomes a collaborative improvisation.

So later that day I modified one of my software instruments, Strange Creatures, to suit this purpose. With this, recording separate gestures to different buffers with input effects can be done with minimal setup, thus not overly interrupting the performative flow. Although this software won't be used in this Uni phase of the project, I plan on using it for future collaborative Streamland studio sessions.
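As a rough outline of the buffer-per-gesture idea, here is a minimal hypothetical sketch assuming the sounddevice and soundfile packages; it is not the actual Strange Creatures patch, just the shape of the workflow:

```python
# Hypothetical gesture recorder: each keypress opens a fresh buffer, and
# every buffer is written out as its own sample file when the session ends.
import numpy as np
import sounddevice as sd
import soundfile as sf

SR = 48000
buffers = []       # one list of audio blocks per recorded gesture
recording = False

def audio_callback(indata, frames, time_info, status):
    """Append incoming audio to the current buffer while recording."""
    if recording:
        buffers[-1].append(indata.copy())

with sd.InputStream(samplerate=SR, channels=2, callback=audio_callback):
    while True:
        cmd = input("[n]ew gesture, [s]top gesture, [q]uit and save: ")
        if cmd == "n":     # performer begins a new gesture
            buffers.append([])
            recording = True
        elif cmd == "s":   # gesture finished
            recording = False
        elif cmd == "q":
            recording = False
            break

# Save each gesture as an individual sample, ready for minimal editing.
for i, blocks in enumerate(buffers):
    if blocks:
        sf.write(f"gesture_{i:02d}.wav", np.concatenate(blocks), SR)
```

Input effects and real-time manipulation (step 1 above) would sit between the input stream and the buffer write - the part that the Strange Creatures modification handles.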