San Francisco Electronic Music Festival (SFEMF) Night 3

The 19th annual San Francisco Electronic Music Festival concluded yesterday, and we at CatSynth were on hand for the final concert. There were three sets, each showcasing different currents within electronic music, but they all shared a minimalist approach to their musical expression and presentation.

The evening opened with a set by Andy Puls, a composer, performer, and designer of audio/visual instruments based in Richmond, California. We had seen one of his latest inventions, the Melody Oracle, at Outsound’s Touch the Gear (you can see him demonstrating the instrument in our video from the event). For this concert, he brought the Melody Oracle into full force with additional sound and visuals that filled the stage with ever-changing light and sound.

Andy Puls

The performance started off very sparse and minimal, with simple tones corresponding to lights. Combining tones resulted in combining lights and the creation of colors from the original RGB sources. As the music grew increasingly complex, the light alternated between the solid colors and moving patterns.

Andy Puls

I liked how the sound and light truly seemed to go together: separate lines in a single musical phrase, and a glimpse of what music would be if it were done with light rather than sound.

OMMO, the duo of Julie Moon and Adria Otte, brought an entirely different sound and presence to the stage.


The performance explored the “complexities and histories of the Korean diaspora and their places within it.” And indeed, words and music moved freely back and forth between traditional and abstract sounds, and between Korean and English. Moon’s voice was powerful, evocative, and quite versatile in range as she moved through these different ideas. The processing on her voice, including delays and more complex effects, was crisp and sounded like an extension of her presence. Otte performed on laptop and analog electronics, delivering a solid foundation and complex interplay. A truly dynamic and captivating performance.

The final set featured a solo performance by Paris-based Kassel Jaeger, who recently became director of the prestigious Groupe de Recherches Musicales (GRM). Sitting behind a table on a darkened stage, with a laptop, guitar and additional electronics, he brought forth an eerie soundscape.

Kassel Jaeger

The music featured drone sounds, with bits of recognizable recorded material, as well as chords and sharp accents. The musique concrète influence was abundant but at times subtle, as the source material was often submerged in complex pads and clouds over which Jaeger improvised.

It is sometimes difficult to describe these performances in words, though we at CatSynth try our best to do so. Fortunately, our friends at SFEMF shared some clips of each set in this Instagram post.

Much was also made of the fact that this was the 19th year of the festival. That is quite an achievement! And we look forward to what they bring forth for the 20th next year…

Outsound Music Summit: Vibration Hackers

The second concert of this year’s Outsound Music Summit, entitled “Vibration Hackers”, featured electronic musical experimentations from Stanford’s CCRMA and beyond. It was a sharp contrast to the previous night in both tone and medium, but had quite a bit to offer.

The concert opened with #MAX, a collaboration by Caitlin Denny on visuals, Nicole Ginelli on audio, and Dmitri Svistula on software development. It was based on the ubiquitous concept of the hashtag as popularized by Twitter. Audience members typed in suggested terms on a terminal set up in the hall. The terms were then projected on the screen and used to search online for videos, audio, and textual materials to inform the unfolding performance. Denny used found videos as part of her projection, while Ginelli interpreted results with processed vocals.


The idea was intriguing. I would have liked to see a more explicit connection between the source terms and the audio/video output – perhaps it was a result of the projection onto the distorting curtain instead of a flat surface, but the connection wasn’t always clear. It would have also been fun to allow audience members to input terms from their mobile phones via Twitter. But I applaud the effort to experiment artistically with social networking infrastructure and look forward to seeing future versions of the piece.

Next was a set of fixed-media pieces by Fernando Lopez-Lezcano, collectively called Knock Knock…anybody there? Lopez-Lezcano is a master of composition that uses advanced sound spatialization as an integral element, and these pieces presented a “journey through a 3D soundscape”.


The result was a captivating, immersive, and otherworldly experience, with moving sounds based on voices, sometimes quite intelligible, sometimes manipulated into abstract wiggling sounds that spun around the space. There was also a section of pop piano that was appropriately jarring in the context, which gave way to a thicker enveloping sound and then faded to a series of whispers scattered in the far corners of the space. The team from CCRMA brought an advanced multichannel system to realize this and other pieces, and the technology plus the expert calibration made a big difference in the experience. Even from the side of the hall, I was able to get much of the surround effect.

The next performance featured Ritwik Banerji and Joe Lasquo with “Improvising Agents”, artificial-intelligence software entities that listen to, interpret, and then produce their own music in response. Banerji and Lasquo each brought their own backgrounds to the development of their unique agents, with Banerji “attempting to decolonize musician-computer interaction based on the possibility that a computer is already intelligent” and Lasquo applying his expertise in AI and natural language processing to musical improvisation. They were joined by Warren Stringer, who provided a visual background to the performance.

Joe Lasquo and Ritwik Banerji

As a humorous demonstration of their technology, the performance opened with a demo of two chatbots attempting to converse with one another, with rather absurd results. This served as the point of departure for the first piece, which combined manipulation of the chatbot audio with other sounds while Banerji and Lasquo provided counterpoint on saxophone and piano, respectively. The next two pieces, which used more abstract material, were stronger, with deep sounds set against the human performances and undulating geometric video elements. The final piece was even more organic, with subtle timbres and changes that came in waves, and more abstract video.

This was followed by Understatements (2009-2010), a fixed-media piece by Ilya Rostovtsev. The piece was based on acoustic instruments that Rostovtsev recorded and then manipulated electronically.

Ilya Rostovtsev

It began with the familiar sound of pizzicato strings that gave way to scrapes and then longer pad-like sounds. Other moments were more otherworldly, including extremely low tones that gradually increased in volume. The final section featured bell sounds that seemingly came out of nowhere but coalesced into something quite serene.

The final performance featured the CCRMA Ensemble, which included Roberto Morales-Manzanares on flute, voice, and his “Escamol” interactive system, Chris Chafe on celletto, John Granzow on daxophone, and Rob Hamilton on resonance guitar. Custom instruments were a major part of this set. Chris Chafe’s celletto is essentially a cello stripped down to its essential structure and augmented for electro-acoustic performance. The daxophone is based on a bowed wooden element where the sound is generated from friction. The Escamol system employed a variety of controllers, including at one point a Wii.

CCRMA Ensemble

The set unfolded as a single long improvisation. It began with bell sounds, followed by other sustained tones mixed with percussive sounds and long guitar tones. The texture became more dense with guitar and shaker sounds circling the room. The celletto and daxophone joined in, adding scraping textures, and then bowing sounds against whistles. In addition to the effects, there were more idiomatic moments with bowed celletto and traditional flute techniques. This was a truly experimental yet virtuosic performance, with strong phrasing, textural changes, and a balance of musical surprises.

I was happy to see such a strong presence of experimental electronic technologies in this year’s Summit. And there were more electronics to come the following evening, with a very different feel.

RIP Max Mathews (1926-2011)

Yesterday morning I received the sad news that Max Mathews, considered by many of us to be the “father of computer music”, passed away.

Not only was he among the first to use general-purpose computers to make music, but his work spanned many of the disciplines within the field that we know today, including sound synthesis, music-programming languages, real-time performance and musical-instrument interfaces.

He studied electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely-used program for sound generation, in 1957. For the rest of the century, he continued as a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance. [Wikipedia]

The “MUSIC-N” languages have influenced much of how we still program computers to make music. They have direct descendants such as Csound, and have also influenced many other languages for composers, perhaps most notably Max (later Max/MSP), which was named in his honor.

His rendition of “Daisy Bell” (aka “Bicycle Built for Two”) is one of the early examples of physical modeling synthesis. Sections of the vocal tract were modeled as tubes, and sound was generated directly from physics equations. His work inspired the version of “Daisy Bell” sung by HAL 9000 in the film 2001: A Space Odyssey (though he did reveal at a talk in 2010 that the version in the film was not his recording).

Mathews continued to expand and innovate throughout his career, moving into different areas of technology. In the 1970s his focus shifted to real-time performance, with languages such as GROOVE, and then later with the Radio Baton interface, which can be seen in the video below.

I had the opportunity to see and hear Mathews at the ICMC 2006 conference and MaxFest in 2007, both events that honored his 80th birthday and five decades in music technology. At 80, it would be relatively easy and quite understandable to eschew the latest technologies in favor of the earlier ones on which he did much of his work, but there he was, working with the latest MacBooks and drawing upon new research in connection to his own work.

[Max Mathews at GAFFTA in 2010. (Photo by Vlad Spears.)]

More recently, I saw him give a talk at the Gray Area Foundation For the Arts in 2010, where he introduced the work of young artists and researchers, something he continued to do all the way to the end. He was at the Jean-Claude Risset concert at CCRMA (and, I later found out, gave the introduction, which I had missed). I have also heard comments over the past day that he was still involved in email and discussions over current projects up through this week, a testament to his character and his love for this field and for the work that he pioneered.

Jean-Claude Risset at CCRMA

A few weeks ago I had the opportunity to see composer and computer-music pioneer Jean-Claude Risset present a concert of his work at CCRMA at Stanford. Risset has made numerous contributions to sound analysis and synthesis, notably his extension of Shepard tones to continuously shifting pitches. In the resulting “Shepard-Risset glissando,” pitches ascend or descend and are replaced to give the illusion of a sound that rises or falls forever. You can hear an example here, or via the video below.
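
For readers curious how the illusion works, here is a minimal sketch in Python (NumPy assumed; the parameter values are my own illustrative choices, not Risset’s): several sine voices spaced an octave apart sweep upward together, wrap around at the top of the range, and follow a bell-shaped loudness curve so that no voice is ever heard entering or leaving.

```python
import numpy as np

def shepard_risset(duration=10.0, sr=22050, n_voices=8,
                   base=27.5, cycle=10.0):
    """Ever-ascending glissando: n_voices sine sweeps, an octave apart."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k in range(n_voices):
        # each voice rises one octave per `cycle` seconds, wrapping
        # around within an n_voices-octave span
        pos = (k + t / cycle) % n_voices          # position in octaves
        freq = base * 2.0 ** pos                  # instantaneous frequency
        phase = 2 * np.pi * np.cumsum(freq) / sr  # integrate freq -> phase
        # bell-shaped loudness: silent at the extremes of the range,
        # loudest in the middle, hiding each voice's entry and exit
        amp = np.sin(np.pi * pos / n_voices) ** 2
        out += amp * np.sin(phase)
    return out / n_voices
```

Because a voice fades to silence exactly where it wraps around, the ear hears only a texture that seems to climb forever.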

Sadly, I arrived slightly late and missed much of the first piece, Duo for one pianist (1989-1992), featuring Risset himself on a Yamaha Disklavier piano. The duo comes from computer control of the piano simultaneous with the live human performer. It’s not a simple computer-based accompaniment part, but rather a duo in which the actions of the live performer are interpreted by a program (written in an early version of Max) and inform the computer’s response in real time.

The remainder of the concert featured works for multichannel tape. The first of these pieces, Nuit (2010), from the tape of Otro (L’autre), featured eight channels with meticulous sound design and spatialization. The ethereal sounds at the start of the piece sounded like either frequency-modulation (FM) or very inharmonic additive synthesis (actually, FM can be represented as inharmonic partials in additive synthesis, so hearing both techniques makes sense). Amidst these sounds there emerged the deep voice of Nicholas Isherwood speaking in French, and then later in English as well – I specifically recalled the phrase “a shadow of magnitude.” Surrounding the vocal part was a diverse palette of sounds including low machine noise, hits of percussion and wind tones, a saxophone trill, tubular bells and piano glissandi. There were examples of Shepard-Risset glissandi towards the end of the piece.
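
The parenthetical equivalence can be made concrete. A standard identity expands a frequency-modulated sinusoid into a sum of Bessel-weighted partials at fc + n·fm, which is exactly an additive-synthesis representation (and the partials are inharmonic whenever fc is not an integer multiple of fm). A quick numerical check in Python (NumPy and SciPy assumed; the frequencies here are arbitrary illustrative values):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

sr = 8000
t = np.arange(sr) / sr
fc, fm, I = 1000.0, 150.0, 2.0   # carrier, modulator, modulation index

# direct FM synthesis
fm_sig = np.sin(2 * np.pi * fc * t + I * np.sin(2 * np.pi * fm * t))

# equivalent additive synthesis: Bessel-weighted partials at fc + n*fm
# (truncated to |n| <= 20, which is plenty for I = 2)
additive = sum(jv(n, I) * np.sin(2 * np.pi * (fc + n * fm) * t)
               for n in range(-20, 21))

# the difference is down at floating-point noise level
print(np.max(np.abs(fm_sig - additive)))
```

The two signals agree sample for sample, which is why a bank of inharmonic partials and an FM patch can be perceptually indistinguishable.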

The next piece, Kaleidophone (2010) for 16-channel tape, began with similar glissandi, providing an interesting sense of continuity. In this instance, they were ascending, disappearing at the top of the range and re-emerging as low tones. Above this pattern a series of high harmonics emerged, like wispy clouds. The glissandi eventually switched to both upward and downward motions, and were subsequently followed by a series of more metallic tones. At one point a loud swell emerged, reminiscent of the distinctive THX announcement at the start of most movies, followed by a series of percussive tones with discrete hits but continuous pitch changes, getting slower and slower. There was a series of piano-like sounds with odd intonations played more like a harp, followed by gong-like sounds reminiscent of gamelan music but with very artificial pitches and speeds. Industrial metallic sounds gave way to a section of tense orchestral music, and then long tones that subtly and gradually became more noisy and inharmonic. A sound like crackling fire seemed to channel the early electronic pieces of Iannis Xenakis. Heavily comb-filtered environmental sounds gave way to eerie harmonies. The constantly changing sounds lulled the listener into a calm state before startling him or her with a burst of loud noise (perhaps the most intense moment in the entire concert). This was followed by machine noises set against a sparse pattern of wind pipes, and a large cloud of inharmonic partials concluded the piece. I had actually not looked in advance at the subtitle in the program, “Up, Keyboards, Percussion I, Percussion II, Winds, Water, Fire, Chorus, Eole” – but my experience of the piece clearly reflected the section titles from perception alone.

The final piece, Five Resonant Sound Spaces for 8-channel tape, began with orchestral sounds: bells, low brass, gongs (or tam-tam), and timpani. The sounds seemed acoustic at first, but gradually more hints of electronics emerged: filtering, stretching and timbral decomposition. A low drone overlaid with shakers and tone swells actually reminded me eerily of one of my own pieces, Edge 0316, which was based on manipulations of ocean-wave recordings and a rainstick. This image was broken by a trombone swell and the emergence of higher-pitched instruments. The overall texture moved between more orchestral music and dream-like water electronics. A series of fast flute runs narrowed to a single pure-tone whistle, which then turned into something metallic and faded to silence. All at once, loud shakers and granular manipulations of piano sounds emerged – more specifically, prepared piano with manual plucking of strings inside the body and objects used to modify the sound. The sound of a large hall, perhaps a train station, with its long echoes of footsteps and bits of conversation, was “swept away” by complex electronic sounds and then melded together. A series of high ethereal sounds seemed to almost but not quite be ghostly voices, but eventually resolved into clear singing voices, both male and female. The voices gave way to dark sounds like gunfire, trains and a cacophony of bells – once again channeling the early electronic work of Xenakis. A breath sound from a flute was set against a diversity of synthesized sounds that covered a wide ground, before finally resolving to a guitar-like tone.

The concert was immediately followed by a presentation and discussion by Risset about his music. His presentation, which included material from a documentary film as well as live discussion, covered a range of topics, including using Max and the Disklavier to perform humanly impossible music with multiple tempi, and marrying pure sound synthesis with the tradition of musique concrète, with nods to pioneers in electronic music including Thaddeus Cahill, Leon Theremin, Edgard Varèse, and Max Mathews (who was present at the concert and talk). He also talked about the inspiration he draws from the sea and landscape near his home in Marseilles. The rocky shoreline and sounds from the water in the video did remind me a lot of coastal California and made it even less surprising that we could come up with pieces with very similar sounds. He went on to describe his 1985 piece SUD in more detail, which used recordings of the sea as a germinal motive that was copied and shifted in various ways. Percussion lines were drawn from the contours, and he also made use of sounds of birds and insects, including the observation that crickets in Marseilles seem to sing on F sharp. I did have a chance to talk briefly with Risset after the reception about our common experience of composing music inspired by coastal landscapes.

Overall, this was an event I am glad I did not miss.

In Tandem with Max Mathews, Aaron Koblin, and Daniel Massey

Last Friday I attended a talk featuring Max Mathews and a new conceptual work by local artists at the Gray Area Foundation for the Arts (GAFFTA). GAFFTA is an intriguing new organization and space for the intersection of art, design, sound, and technology. They are “dedicated to building social consciousness through digital culture.”

Max Mathews. Photo by Vlad Spears

I had last seen Max Mathews, considered by many to be the “father” of computer music, at an 80th birthday tribute at the Computer History Museum in 2007, and before that delivering the keynote address at ICMC 2006. It is great to see him still going strong, engaged with technology and supporting others’ creative work. His talk primarily focused on his work at Bell Labs, and in particular the history and technologies surrounding his 1962 computer rendition of the song “Daisy Bell” (aka “Bicycle Built for Two”). It was an early example of physical modeling synthesis, where sections of the vocal tract were modeled as tubes, and sound was generated directly from physics equations. His version of the song was popularized in the film 2001: A Space Odyssey, although Mathews revealed that the version Kubrick used in the film was not his recording. He also presented another famous example of computer-generated vocals, a performance of the “Queen of the Night” aria from Mozart’s The Magic Flute. This piece used formant synthesis, which focuses on recreating the spectral characteristics of sounds (i.e., the formants that characterize vowel sounds) without necessarily modeling the physical processes that allow humans to create them. The voice is quite compelling (if a bit dated), and demonstrates that the most realistic sounds are not necessarily those generated from physical models.
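
As a rough sketch of the formant-synthesis idea (not Mathews’s actual method or parameters, which I don’t have): a pulse train stands in for the glottal source and is passed through a bank of two-pole resonators tuned to textbook formant frequencies for the vowel /a/. Python with NumPy and SciPy assumed.

```python
import numpy as np
from scipy.signal import lfilter

sr = 16000
f0 = 110.0                       # glottal pitch in Hz

# impulse train as a crude glottal source
n = int(0.5 * sr)
src = np.zeros(n)
src[::int(sr / f0)] = 1.0

# approximate formant frequencies and bandwidths (Hz) for the vowel /a/
formants = [(700, 130), (1220, 70), (2600, 160)]

out = np.zeros(n)
for f, bw in formants:
    # two-pole resonator centered on each formant
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * f / sr
    b = [1 - r]                           # rough gain normalization
    a = [1, -2 * r * np.cos(theta), r * r]
    out += lfilter(b, a, src)
```

Only the spectral envelope is modeled; nothing in the code knows about tubes or airflow, which is exactly the contrast with the physical-modeling approach used for “Daisy Bell.”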

[Mathews, Koblin and Massey.  Photo by Vlad Spears.]

Mathews’ presentation of “Daisy Bell” served as an introduction for a new project “Bicycle Built for 2000” by Aaron Koblin and Daniel Massey. Koblin has been working on a series of conceptual pieces that utilize the Amazon Mechanical Turk, a framework for harnessing human intelligence to solve large problems. There are some things that humans are quite efficient at and computers are very poor at, such as recognizing distorted text (think of the CAPTCHA codes that we all deal with on websites, including here at CatSynth). The Amazon Mechanical Turk, which derives its name from a 19th-century hoax in which a supposedly mechanical chess-playing machine turned out to have a human hidden inside a box, provides a framework and API for defining tasks to be solved by humans, recruiting people to work on them, and then compensating them for their efforts according to a fixed budget. One of Koblin’s pieces provides the instruction to “draw a sheep” for a few cents. He then collected the resulting drawings of various sheep (and non-sheep) from around the world and compiled them into a larger mosaic work, The Sheep Market. You can see the overall mosaic as well as click on individual sheep and even see an animation of how they were drawn by the individual contributors.

[Screenshot of Bicycle Built for 2000, by Aaron Koblin and Daniel Massey.  Click to enlarge.]

For “Bicycle Built for 2000”, Koblin collaborated with Daniel Massey on a work that focused on sound and music. The Max Mathews rendition of “Daisy Bell” was decomposed into sound segments no longer than one syllable, and for each segment an Amazon Mechanical Turk task was created for someone to sing back the sound. Koblin and Massey then reassembled the sounds to re-create the song as sung by the participants. The initial version, in which only one singer was used for each component, was almost unrecognizable, though quite interesting. A larger version, which included a chorus of voices on each segment, better represented the original qualities of the song and was clearly recognizable. You can hear the full version at the project website, along with an interesting visualization of the assembled recordings. The fact that the larger ensemble produced a more recognizable result, essentially averaging out the various sung renditions, is perhaps an example of the oft-touted “wisdom of crowds”.

There is also a slightly more ominous set of questions from these works concerning exploitation of individuals who don’t know the overall purpose of their tasks, or about hive mentality and such, but I still find it quite interesting and am inspired to try out the Mechanical Turk for a future art project.

Luxe at Hotel Biron, SF Electronic Music Festival, and “The Company”

I have been remiss in writing about the many art and music events from this past month, especially those of the first week. I found myself attending events every night between September 4 and September 7, each of which had at least some personal connection. This was a coincidence, but it was also a great antidote to the just-concluded McCain-fest and the parade of speeches proclaiming “Small Town Good, City Bad.” What better response than to step outside for an evening walk in search of friends, art, music, food and drink.

The night of the 4th was the opening of a photo exhibition by Luxe at Hotel Biron. It is not in fact a hotel, but a wine bar in the Hayes Valley neighborhood that features monthly art exhibits. It is a small, darkly lit, and intimate space, with dark wines in huge glasses and brick walls that provided quite a contrast to the photographs on display.

The exhibition was titled “Her Being and Nothingness” and featured a series of self portraits. In each image, the focus is on “the body.” The face is either absent or obscured, and the poses and attire vary in each. We of course know they are self portraits (itself an interesting concept in photography), but without the usual cues for identity. In this case, we draw the conclusion directly from the bodies.

Of course, the recognition is easier if the artist happens to also be a personal friend. Several of Luxe’s prints are on display at CatSynth HQ, so I can definitely be considered a “fan.” A more in-depth review can be found

On Friday, I attended the second night of the San Francisco Electronic Music Festival at the Project Artaud Theatre.

The performance opened with two works by Richard Teitelbaum, professor of composition and electronic music at Bard College. His first piece, Serenissima, featured two wind performers and a laptop computer running Max/MSP. The computer performed spectral processing on samples and the live instruments, which could themselves control the sound. The wind instruments included several clarinets, among them a contra-bass clarinet (which one does not see every day), performed by Matt Ingalls. The second piece was Piano Tree, for piano and computer, and was in part a tribute to Teitelbaum’s father, David, and to “some musical forbears whose work has influenced me greatly.” The piano part, which included many extended and “prepared piano” techniques (a nod to John Cage), was performed by Hiroko Sakurazawa.

The next set was from Myrmyr, the local duo of Agnes Szelag and Marielle Jakobson. They combine experimental recording and live computer-based processing with a variety of acoustic instruments, including cello, violin and voice. The result is still very much “electronic music,” but it has a more traditional sound as well, especially in the parts that feature voice and songs. Myrmyr was accompanied by members of the sfSound ensemble during part of their performance, primarily with undulating long notes and “drones”. Again, the effect was both experimental and more “familiar” at the same time.

The final set was from Ata “Sote” Ebtekar. He calls his music “a new form of Persian Art Music,” which I was very interested in hearing. However, the performance was so overpoweringly loud that I really was not able to appreciate it. I wish more electronic musicians would take care not to do that. Certainly, some music will be quite loud – I have come to expect that – but it should not remain so for an extended period of time.

The following night was my performance with Polly Moller and Company at El Mundo Bueno Studios in Oakland. We had a great set that combined elements from different past performances. And, as Polly relates, it was a “good crowd of nice people most of whom had not heard us before.” It was an interesting contrast to the other performances, which included folk music, traditional Celtic singing, and belly dancing.

On Sunday, it was back to SFEMF for the final night. This performance featured a collaboration of Pauline Oliveros and Carl Stone. Oliveros is of course one of the giants of modern American music, the founder of the music practice Deep Listening and one of the founders of the original San Francisco Tape Music Center. History aside, this performance was quite contemporary, laptop-based, and very much in keeping with the other performances of the festival.

The second performance, Barpieces, was a duo of Charles Engstrom and Christopher Fleeger. However, to those of us in the audience it appeared as a solo performance, even though it was actually a “remote duo.” This was a bit of logistical improvisation in the wake of Hurricane Gustav.

The final performance of the festival was by Hans Fjellestad, a Los Angeles-based musician and filmmaker, whom some readers of CatSynth may know from his documentary Moog. His performance featured analog electronics and custom instruments that were a contrast to the previous performances of the evening, both sonically and visually:

In addition to the custom electronics in the transparent boxes with blue lights, he also had a Moogerfooger and one of the infamous tube-effects boxes from Metasonix. The performance consisted of long evolving analog sounds, noise bursts and other effects. And it provided a conclusion to the festival by adding another variety of “electronic music” to the mix.

Weekend Cat Blogging #110: Preparing for tomorrow's performance, part 1

With my performance for the 7th Annual Skronkathon nearly upon us, I am spending more time in the studio trying to prepare musically but mostly dealing with technical glitches.

Of course, Luna wants to participate:

But it's really about what she wants, which as always is a little love and attention:

So while we continue to get some actual prep-work done, you can go visit the WCB roundup, hosted by our friend Dragonheart, who celebrates his first birthday next week (just one day before we at CatSynth celebrate our first anniversary).

And for more feline-blogging fun, check out the Friday Ark (where we are always late), and Carnival of the Cats. And lest we forget, this weekend is the first Bad Kitty Chaos Festival (not that we would ever consider Luna or any of our feline friends to be “bad”).

And if you have a chance, Luna has been writing a few journal entries of her own over at Catster, including the much dreaded trip to the V-E-T this past week.

Guess the Electronics

Inspired by a discussion on the Bay Area New Music mailing list about electronic-acoustic music as well as different electronic tools/technologies (e.g., Max, Csound, etc.), I present the Guess the Electronics game.

Simply listen to each of the challenges below and leave a comment on how you think each one was created. You can also take a guess as to which examples include acoustic material.

It's fun for the whole family!

challenge 1
challenge 2
challenge 3
challenge 4
challenge 5

After radio performance

It seems like the radio performance went well, and it was a good experience. In addition to my live performance of The Wooden Fish, we played two selections from the CD, and I participated in an interview with regular WTUL electronic-music host Conner Richardson and guest-host-for-the-day Kristine Burns. The show actually took place outside in a courtyard, with our audio feed relayed to the studio via the internet:

A small number of people from the conference, as well as one or two other curious individuals, did stop in the courtyard to watch for a bit.

The setup and soundcheck went smoothly, and as can be seen below, the rig was nicely laid out:

Thanks to Kathryn Hobgood of Tulane University Communications for these photos.

Overall, I was fairly happy with the live performance – nothing went particularly wrong – and Conner noted it was quite an unusual piece. The interview, conducted mostly by Kristine Burns, focused on the pieces themselves, both the musical ideas behind them and the technology used to realize them. Of course we talked about the CD, and also about my musical background, including my having studied with Ruth Schonthal. Because the next participant was a no-show, we had more time for both discussion and music.

Hopefully I will get a recording of the event, and if so will make a link available here.

Preparing for tomorrow's radio performance and more ICMC

In order to prepare for tomorrow’s performance on WTUL 91.5FM, I have set up a “compact” system here in a corner of the hotel room with laptop, audio interface, mic, small tablet and keyboard:

The only thing I wasn’t able to get running simultaneously was the Evolver, which is only used for a small part of the performance – the problem was not enough nearby outlets.

Performing live on radio presents some additional challenges because of the time constraints, constant sound-level requirements, and of course the fact that any flaws in the technology or my performance will be part of the broadcast and what everyone listening hears…and remembers. So I have been spending extra time preparing and practicing.

Musically, The Wooden Fish is not a difficult piece. Basically, it is a guided improvisation based on a few short rhythm patterns in 3/4 and a graphical score. The technologies for this performance are straightforward as well. The initial delay/loop section and a tablet-controlled loop are programmed in OSW, and other variations of the patterns are done in Emulator X2, including one variation using the Twistaloop feature. Both applications are running simultaneously, allowing me to easily switch between them at any given point. So far this seems to be working fine – I am just a little wary after a nasty crash at a performance a few weeks ago.
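
The OSW patch itself isn’t shown here, but the delay/loop building block is simple to sketch. Here is a hypothetical equivalent in Python (NumPy assumed; the parameter values are arbitrary, not those of the actual patch): a single delay line whose output is fed back on itself and mixed with the dry signal.

```python
import numpy as np

def feedback_delay(x, sr, delay_s=0.35, feedback=0.6, mix=0.5):
    """Single feedback delay line: each echo recirculates, scaled by
    `feedback`, producing a decaying train of repeats."""
    d = int(delay_s * sr)                # delay length in samples
    y = np.copy(x).astype(float)
    for i in range(d, len(x)):
        y[i] += feedback * y[i - d]      # recirculate the delayed signal
    return (1 - mix) * x + mix * y       # blend dry and wet
```

Feeding an impulse through it yields echoes at multiples of the delay time, each `feedback` times quieter than the last; with feedback near 1.0 the same structure behaves like a sustaining loop.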

I did take some time out of today’s preparations to return to the conference for the SEAMUS concert and Max Mathews’ keynote address. Mathews, who turns 80 next week, is considered the “father of computer music” and was received very warmly by everyone. While it is inspiring to hear from giants in one’s field, I couldn’t help feeling a little demoralized during his relating of past accomplishments and interactions with others – it’s hard to live up to those kinds of standards, or even see how one could try, given the way the computer-music community has evolved. But on balance, it was inspiring – at the very least I would like to explore some of the books and records he recommended. It was also great to see someone who at 80 can talk at length not only about theories and foundational work on mainframes, but also about the latest laptop technologies (like the Mac Core Duo) and sensor technologies for interactive music performance.

I also found myself more aesthetically in tune with what the SEAMUS musicians were doing than many of the pieces from the ICMC.

Those who are interested in tuning in to the radio performance tomorrow can click here for the live internet stream.