San Francisco Electronic Music Festival (SFEMF), Part 2

Last week, we presented a review of the opening-night show of the San Francisco Electronic Music Festival at SFMOMA. Today we look back at the September 10 installment of SFEMF, which took place at the Brava Theater.

It was a busy Saturday evening of art and music, but after a trip through three neighborhoods on our illustrious public transportation system and chatting with several friends on the way in, I was still able to get a perfect seat in the center of the theater for the full immersive experience. As I often do these days, I was live-tweeting between sets with hashtag #SFEMF to share with a wider community both in the theater and beyond.

The concert opened with a tribute to Max Mathews presented by Marielle Jakobsons. Mathews is considered to be the “father of computer music”; his career spanned more than five decades and continued up until the final days before he passed away earlier this year. The tribute brought together the technologies that Mathews pioneered and his love of classical music. It began with a recording of his 1971 piece Improvisations for Olympiad, set against images of Mathews’ long career and time with family and friends. In the piece, one can hear how far computer-music technology had advanced since the 1950s, in large part due to his own work (though it is still hard to fathom that the piece was done using punch cards). The photos demonstrated how much he was loved by the community around him – many featured familiar faces from CCRMA at Stanford, where he had most recently worked.


[Diane Douglass and Marielle Jakobsons. Photo: PeterBKaars.com.]

Jakobsons then presented a personal tribute in the form of a new piece, Theme and Variations on Beethoven’s String Quartet No. 4 For Violin and Phaser Filters. Jakobsons had worked with Mathews on his Phaser Filters, a technology for live performance based on tuned resonances. With Diane Douglass on computer, Jakobsons performed on violin, with the familiar classical sounds blending seamlessly with the rich sounds from the filter technology.

Next up was Area C, a project of Erik J. Carlson. Carlson’s performance featured live looping of electric guitar and a variety of analog and digital effects, which were output via two guitar amps.

[Area C. Photo: PeterBKaars.com.]

Although the piece unfolded as a series of loops of small melodic and rhythmic figures on the guitar that were processed and re-looped, the overall texture of the music gave the impression of an ever-evolving drone, not unlike something we might do at the Droneshift but with less strict rules and more opportunity for bits of texture to emerge.

After an intermission, the concert resumed with 0th, a “collective of four female artists, Jacqueline Gordon, Amanda Warner, Canner Mefe, and Caryl Kientz” presenting the live-performance piece Deep Blue Space: Factories and Forests. The performers were scattered at the edges of the stage, with a large lit hemisphere in the center and an array of bass drums in front. Behind them, a large video was projected. Additional unnamed performers beyond the quartet contributed to the dance elements. Costuming was also an important part of the piece, with interesting outfits and one performer sporting a pyramid-shaped hat.


[Setting up for 0th. Big bubble in the middle stage. And bass drums in front. #sfemf ]

Their performance was based on a fictional story that followed the exploits of the chess-playing supercomputer Deep Blue on a satellite that leaves Earth orbit and heads to the asteroid belt. The performance unfolded with a series of very punctuated sounds set against very deliberate motions with frequent pauses. The overall effect was mechanized and robotic, enhanced by the industrial imagery in the video. This was of course appropriate given the theme of machines in the underlying story.


[0th. Photo: PeterBKaars.com.]

Towards the end of the piece, several of the performers moved into place at the front of the stage, each behind one of the bass drums, and began to strike the pedals in unison, a loud stream of slow rhythmic thumps against the electronic sounds spread in the background.

The final performance featured a collaboration by Yoshi Wada and Tashi Wada on a piece entitled Frequency Responses: 2011. The piece explored the interactions of the timbres of a variety of instruments and devices that can sustain long tones, such as a bagpipe, sirens, and old analog oscillators. It began with the jarring sound of an alarm bell but quickly settled into a steady state with an ever-changing combination of sounds and instruments. Yoshi Wada, a veteran of Fluxus, frequently played the bagpipe during the piece. Tashi Wada remained behind the main table, focused on a variety of electronic elements.


[Yoshi Wada and Tashi Wada. Photo: PeterBKaars.com.]

The equipment and overall texture of the piece evoked the early experiments in electronic music, and brought the concert full circle from its starting point with the tribute to Max Mathews. Although the interaction of the timbres could sometimes be rather intense, the focus on this element and listening for beating patterns and other details was quite meditative.

I think my live tweet “An exploration of very long tones ends in a major harmony #sfemf” is a fitting end for this review. Overall another strong concert.

San Francisco Electronic Music Festival, Part 1

Today we look back at the San Francisco Electronic Music Festival that took place earlier this month. Specifically, we review the opening concert which took place for the first time at SFMOMA. Appropriately for a collaboration with an institution focused on the visual arts, many of the pieces combined electronic music with graphics, video, or dance.

SFEMF is often a coming-together of people from the Bay Area electronic-music and new-music communities, and the audience was filled with familiar faces. Some even joined me in live tweeting with hashtag #sfemf during the concerts.

The concert opened with a solo performance by Sarah Howe entitled Peephole, for live electronic music and video.


[Sarah Howe. Photo: PeterBKaars.com.]

Howe describes her video work as “beautifully messy textures of low fidelity source material”. The result was quite mesmerizing, with ever-changing pixelated patterns on the large screen that pulsated and radiated, sometimes converging on seemingly recognizable images, sometimes completely abstract. The music featured highly processed electronic sounds taken from acoustic sources.

Next was Interminacy, a performance by Tom Djll and Tim Perkis based on “lost” John Cage stories, as “rescued from a Bay Area public-radio vault” (they did not say which public radio station). We heard Cage’s distinctive voice and speaking style, recognizable from his recorded interviews – see our post on John Cage’s 99th birthday for an example – with Djll and Perkis providing music between the words, supposedly derived from the I Ching. The music covered a variety of synthesized electronic sounds, recorded samples, and other elements, leaving plenty of silence as well.


[Tom Djll and Tim Perkis channel John Cage. Photo: PeterBKaars.com.]

It started out straightforward enough, but the narrations took a bit of a darker turn, which audience members may have received with either amusement or horror. I personally fell into the former category, and considered this one of the more brilliant and well-crafted tributes I have heard in a long time. You can hear an excerpt from an earlier performance below (or here).

Interminacy (excerpt) by Tom Djll/Tim Perkis: http://djll.bandcamp.com/track/interminacy-excerpt

The following performance featured Kadet Kuhne performing live with a video by Barcelona-based artist Alba G. Corral in a piece entitled STORA BJÖRN. Corral created visuals using the programming environment Processing that generated complex graphical patterns based on the constellation The Great Bear.


[Photo: PeterBKaars.com.]

Kuhne’s music wove in and out with the visuals in undulating but ever-changing textures and timbres. The result of the combined music and visuals was quite meditative – at the same time, the visuals retained a certain analytical quality, perhaps because all of the elements were based on connected lines. Glitchy elements in the music fed back into the lines and spaces.

Plane, a collaboration between Les Stuck and Sonsherée Giles, featured dance and visuals together with music. Stuck’s musical performance began against a video of Giles’ dancing that was created using a special camera technique and a limited palette of colors and effects to produce a low-resolution image with no sense of perspective. It did look a bit like a heat image of a moving body.


[Les Stuck. Photo: PeterBKaars.com.]

At some point during the performance, Giles herself appeared on the stage and the performance transitioned to live dance. Her movement was slow and organic, and she often stayed close to the ground, as if to make herself two-dimensional like the images on the screen. Stuck’s music combined with the dance had a greater intensity than the previous music-and-visual performances on the concert, particularly in contrast to the far more delicate STORA BJÖRN that preceded it.

The concert concluded with a performance of Milton Babbitt’s Philomel, performed by Dina Emerson. We lost both Milton Babbitt and Max Mathews this year, and both were recognized with tribute performances during the festival. Philomel is perhaps the best known of Babbitt’s famously complex compositions. You can hear an early recording of the piece in a tribute post here at CatSynth, as sung by soprano Bethany Beardslee. Emerson certainly had her work cut out for her in taking on this piece, but she came through with a beautiful and energetic performance.


[Dina Emerson performs Milton Babbitt’s Philomel. Photo: PeterBKaars.com.]

The piece combines electronic sounds, live voice, and processed recorded vocals woven together in a fast-moving texture that preserves a narrative structure. One can alternately listen to the words as disjoint musical events or as part of the larger story. At some point, even while focused directly on Emerson’s presence, the live and recorded sounds began to merge. The electronics often seemed to match the timbre and pitch register of the voice, which aided in the illusion of a single musical source.

Overall, I thought it was a strong concert with a particularly strong finish. It was also somewhat shorter and faster-paced, with no intermission or long pauses between sets, which I thought was quite effective.

I also attended the Saturday concert and will review that in an upcoming article.

Myrmyr and Tiny Owl, Luggage Store Gallery

June began with a particularly strong electroacoustic and noise performance at the Luggage Store Gallery in San Francisco with Myrmyr and Tiny Owl.

Myrmyr is the electroacoustic duo of Agnes Szelag and Marielle Jakobsons, and their performance was in anticipation of the release of their new album Fire Star. Their work incorporates strings (in this case, electric violin and cello along with other instruments) and advanced electronics. I have heard and reviewed Myrmyr before, but this set was perhaps the most beautiful I have heard from them. Set amongst a dizzying array of electronics and wires, it opened with a series of struck string sounds that evoked the sounds of strings in South Asian or East Asian music. Szelag’s voice emerged over a series of rich arpeggios and became part of the texture via live looping. The complex harmony resolved to a long major-seventh chord, after which the strings became harsher and more percussive. Amidst pitch and delay effects, a plucked cello entered in counterpoint to the voice and other instruments. The overall effect was quite tonal and dream-like, and gave me the impression of glass objects.


[Myrmyr. Photo by Michael Zelner.]

The next piece started with strings, both plucked and tapped and used as a live-looping source. A rhythmic pattern formed from the loops, which built up in complexity and volume with lots of distortion. Over time, the distorted sounds became clearer and more ethereal as the strings cut out and left only the bells and electronic effects. These were in turn displaced by more liquidy sounds and the return of cello and violin, this time bowed. The piece featured interesting harmonies and vocals.

The final piece was from the soon-to-be-released album. It began with a drone, with harmonium sounds and voice building up into a rich texture. As they faded out, a plucked string instrument (possibly a guzheng, after reviewing Myrmyr’s website) entered on a minor pattern. The sound was accompanied by bells and distortion effects. The music built up to a big recognizable chord that remained unresolved. Another build-up followed, this time with voice, which turned into a rich harmony with a particularly plaintive violin line.


[Tiny Owl. Photo by Michael Zelner.]

Myrmyr was followed by Tiny Owl, a band consisting of Matt Davignon (drum machines and synthesizers), Lance Grabmiller (computer and synthesizers), Suki O’Kane (percussion), and Sebastian Krawczuk (double bass and objects). Their performance consisted of one long, constantly evolving piece. It opened with an impromptu round of “Happy Birthday” for Matt Davignon (it was indeed his birthday) that appropriately elided into a series of glitchy noise sounds. Soon the bass drum, cymbals, and string bass entered. The overall undulating timbre seemed very insect-like, but there were also bits of melody that came and went in opposition to the overall swells and dips in the sound. One gesture that I particularly liked involved drum-machine “gurgling” set against bass. The gurgling sounds, which formed a complex timbre, were gradually slowed down to the point where they became a series of rhythmic elements – moments like this always make me think of Stockhausen’s Kontakte II. Eventually, they merged back into the overall ambient sound.

Over time, the overall texture became busier, but also more drone-like, with high pitches and even some screeches eventually emerging. Pitched noises moving up and down like factory machinery were set against a drum rhythm reminiscent of “Wipe Out” (that very insistent sixteenth-note rhythm that every young percussionist attempts to play). As the percussion (drums and objects) grew richer, the electronics became more intense, with bursts of machine noise and longer notes with strange harmonics. The section of louder sound and more complex rhythm grew to a climactic point and suddenly faded out, leaving just a low rumble and a sparse texture of percussive sounds. This part of the performance was drier, with more punctuated elements and scratching sounds. During a gentle rise in pitch and volume near the end of the performance, the sound seemed to merge with a passing siren on Market Street. (It wouldn’t be a Luggage Store Gallery performance without at least one siren incorporated into the music.)

The show concluded with both groups uniting for a short jam. It was fun to hear the combined sounds: noise drones punctuated by strings, and at least one more siren from the street.

Regents Lecturer Concert, CNMAT (March 2011)

Today we look back on my solo concert at the Center for New Music and Audio Technologies (CNMAT) at U.C. Berkeley back in early March. It was part of my U.C. Regents’ Lecturer appointment this year, which also included technical talks and guest lectures for classes.

This was one of the more elaborate concerts I have done. Not only did I have an entire program to fill on my own, but I specifically wanted to showcase various technologies related to my past research at CNMAT and some of their current work, such as advanced multi-channel speaker systems. I spent a fair amount of time onsite earlier in the week to do some programming, and arrived early on the day of the show to get things set up. Here is the iPad with CNMAT’s dodecahedron speaker – each face of the dodecahedron is a separate speaker driven by its own audio channel.


[click image for larger view.]

Here is the Wicks Looper (which I had recently acquired) along with the dotara, an Indian string instrument often used in folk music.


[click image for larger view.]

I organized the concert such that the first half was more focused on showcasing music technologies, and the second half on more theatrical live performance. This does not imply that there wasn’t strong musicality in the first half or a lack of technological sophistication in the second, but rather reflects which theme was central to the particular pieces.

After a very generous introduction by David Wessel, I launched into one of my standard improvisational pieces. Each one is different, but I do incorporate a set of elements that get reused. This one began with the Count Basie “Big Band Remote” recording and made use of various looping and resampling techniques with the Indian and Chinese instruments (controlled by monome), the Dave Smith Instruments Evolver, and various iPad apps.

Electroacoustic Improvisation – Regents Lecturer Concert (CNMAT) from CatSynth on Vimeo.

The concert included the premiere of a new piece that was specifically composed for CNMAT’s impressive loudspeaker resources, the dodecahedron as well as the 8-channel surround system. In the main surround speakers, I created complex “clouds” of partials in an additive synthesizer that could be panned between different speakers for a rich immersive sound. I had short percussive sounds emitted from various speakers on the dodecahedron. I thought the effect was quite strong, with the point sounds very localized and spatially separated from the more ambient sounds. In the video, it is hard to get the full effect, but here it is nonetheless:

Realignments – Regents Lecturer Concert, CNMAT from CatSynth on Vimeo.
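For readers curious about the additive “cloud” idea, here is a minimal sketch in Python with NumPy – my own illustration for this post, not the Open Sound World patch used in the performance. It builds a cloud of sine partials with random frequencies and lets each one drift slowly around a ring of eight output channels, with the channel count, frequency ranges, and pan speed chosen arbitrarily for the example:

# Minimal sketch of a "cloud" of partials panned around 8 channels.
# Illustrative only; not the OSW patch used in the concert.
import numpy as np

SR = 44100            # sample rate
DUR = 4.0             # duration in seconds
N_CH = 8              # surround channels, assumed arranged in a ring
N_PARTIALS = 40       # size of the partial cloud

t = np.arange(int(SR * DUR)) / SR
rng = np.random.default_rng(0)
out = np.zeros((len(t), N_CH))

for _ in range(N_PARTIALS):
    freq = rng.uniform(100.0, 4000.0)       # random partial frequency
    amp = rng.uniform(0.005, 0.02)          # keep the summed level modest
    partial = amp * np.sin(2 * np.pi * freq * t)
    # pan position drifts slowly around the ring of speakers
    pan = (rng.uniform(0, N_CH) + 0.2 * t) % N_CH
    for ch in range(N_CH):
        # simple linear panning between adjacent channels (circular distance)
        dist = np.minimum(np.abs(pan - ch), N_CH - np.abs(pan - ch))
        gain = np.clip(1.0 - dist, 0.0, 1.0)
        out[:, ch] += gain * partial

# 'out' is a (samples x 8) array that could be written to a multichannel file,
# e.g. with soundfile.write("cloud.wav", out, SR).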

The piece was implemented in Open Sound World – the new version that primarily uses Python scripts (or any OSC-enabled scripting language) instead of the old graphical user interface. I used TouchOSC on the iPad for real-time control.
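For those who have not worked with OSC before, the control path looks roughly like the sketch below – again my own illustration using the python-osc package, not the actual OSW scripts from the concert. The address “/1/fader1” comes from TouchOSC’s default layout, and the port number is just an example:

# Minimal sketch of receiving TouchOSC control messages over OSC in Python.
# Illustrative only; assumes the python-osc package is installed.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def fader_handler(address, value):
    # In a real patch this value would drive a synthesis parameter,
    # e.g. the density or spread of the partial cloud.
    print(f"{address}: {value:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", fader_handler)

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()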

I then moved from rather complex experimental technology to a simple and very self-contained instrument, the Wicks Looper, in this improvised piece. It had a very different sound from the software-based pieces in this part of the concert, and I liked the contrast.

The first half of the concert also featured two pieces from my CD Aquatic: Neptune Prelude to Xi and Charmer:Firmament. The original live versions of these pieces used a Wacom graphics tablet controlling OSW patches. I reimplemented them to use TouchOSC on the iPad.

The second half of the concert opened with a duo of myself and Polly Moller on concert and bass flutes. We used one of my graphical score sets – here we went in order from one symbol to the next and interpreted each one.

The cat symbol was particularly fun, as Polly emulated the sound of a cat purring. It was a great piece, but unfortunately I do not have a video of this one to share. So we will have to perform it again sometime.

I performed the piece 月伸1 featuring the video of Luna. Each of the previous performances, at the Quickening Moon concert and Omega Sound Fix last year, used different electronic instruments. This time I performed the musical accompaniment exclusively on acoustic grand piano. In some ways, I think it is the strongest of the three performances, with more emotion and musicality. The humor came through as well, though a bit more subtle than in the original Quickening Moon performance.

月伸1 – Video of Luna with Acoustic Grand Piano Improvisation from CatSynth on Vimeo.

The one unfortunate part of the evening came in the final piece. I had originally done Spin Cycle / Control Freak at a series of exchange concerts between CNMAT and CCRMA at Stanford in 2000. I redid the programming for this performance to use the latest version of OSW and TouchOSC on the iPad as the control surface. However, at this point in the evening I could not get the iPad and the MacBook to lock onto a single network together. The iPad could not find the MacBook’s private wireless network, even after multiple reboots of both devices. In my mind, this is actually the biggest problem with using an iPad as a control surface – it requires wireless networking, which seems to be very shaky at times on Apple hardware. It would be nice if they allowed one to use a wired connection via the USB cable. I suppose I should be grateful that this problem did not occur until the final piece, but it was still a bit of an embarrassment and gives me pause about using iPad/TouchOSC until I know how to make it more reliable.

On balance, it was a great evening of music even with the misfire at the end. I was quite happy with the audience turnout and the warm reception and feedback afterwards. It was a chance to look back on solo work from the past ten years, and look forward to new musical and technological adventures in the future.

CCRMA Modulations at SOMArts

A few weeks ago, I attended CCRMA Modulations 2011, an evening of live electronic music and sound installations by CCRMA (the Center for Computer Research in Music and Acoustics at Stanford) and special guests at SOMArts in San Francisco. The event was an eight-hour marathon, though I only stayed for about half the time, seeing many of the installations and most of the live-music performances.

The first part of the evening featured sound sculptures from Trimpin and his students at CCRMA. This particular project, the “Boom Boom Record Player” by Jiffer Harriman, stuck with me.

The output from the record player is used to drive the electromechanical instruments on the right. The instruments were well crafted – and I thought it was particularly fitting to have a classic Earth, Wind & Fire LP on the record player.


[Click image to enlarge.]

Trimpin’s offering featured coin-operated robotic percussion where the drums included just about every model of Apple notebook computer going back to an early PowerBook (and even earlier as I think I espied an Apple IIc).


[Click image to enlarge.]


The live-music portion of the evening began with Tweet Dreams by Luke Dahl and Carr Wilkerson. Audience members with Twitter access were encouraged to live-tweet messages to a specific hashtag #modulations. The messages were then analyzed in real time and the data used to affect the music. As I was planning to live-tweet from this event anyway via iPhone, I was ready to participate. Of course, inviting audience participation like this is a risky proposition for the artists, as one cannot control what people may say. I will freely admit I can be a bit snarky at times, and it came out in some of my tweets. The music was relatively benign, with very harmonic runs of notes – and I exhorted them to “give me something harsh and noisy”. Inspired by another participant, I also quoted lines from the infamous “More Cowbell!” skit from Saturday Night Live, much to the delight of some in the audience. The main changes in the music seemed to be in density, rhythm, and some melodic structure, but all within boundaries that kept the sound relatively harmonic and “pleasant.” I would have personally liked to see (as I suggested via Twitter) more complex music, with some noisy elements and more dramatic changes. But the interaction between the music and the audience was a lot of fun.

The next piece, Sferic by Katharine Hawthorne, featured dance and electronics. It was described as “using radio and movement improvisation to explore the body as an antenna.” The dancers, dressed in black outfits with painted patterns, began the movement to a stream of radio static. The motions were relatively minimalist, and sometimes seemed strained. Gestures included outstretched arms and fingers pointing, with Hawthorne walking slowly as her dance partner Luke Taylor ran more quickly. Rich, harmonic music entered from the rear channels of the hall, and the dancers moved to lying flat on the ground. The static noise returned, but more crackly, with other radio-tuning sounds, then became a low rumble. The dancers seemed to be trying very hard to get up. Then they started pointing. The music became more anxious, with low percussive elements. The dance became more energetic and active as the piece came to a close.

This was followed by Fernando Lopez-Lezcano performing Dinosaur Skin (Piel de Dinosaurio), a piece for multi-channel sound diffusion, analog synthesizer, and custom computer software. The centerpiece was a custom analog synthesizer, “El Dinosaur,” that Lopez-Lezcano built from scratch in 1981.

The instrument is monophonic (but, like many analog synthesizers, a very rich monophonic sound), multiplied for the purposes of the performance by audio processing in external software and hardware. The music started very subtly, with sounds like galloping in the distance. The sounds grew high in pitch, then descended and moved across the room – the sense of space in the multichannel presentation was quite strong. More lines of sound emerged, with extreme variations in pitch, low and high. The timbre, continually changing, grew more liquidy over time, with more complex motion and rotation of elements in the sound space. Then it became drier and more machine-like. There was an exceptionally loud burst of sound followed by a series of loud whistles on top of low buzzing. The sounds slowed down and became more percussive (I was reminded, as I often am with sounds like this, of Stockhausen’s Kontakte (II)). Then came another series of harsher whistles and bursts of sound. One sound in particular started to resonate quite strongly in the room. Overall, the sound became steady but inharmonic – the timbre becoming more filtered and “analog-like”.

The final performance in this section of the evening featured Wobbly (aka Jon Leidecker) as a guest artist presenting More Animals, a “hybrid electronic / concrete work” that combined manipulated field recordings of animals with synthesized sounds. As a result, the piece was filled with sounds that either were actual animals or were reminiscent of animal sounds, freely mixed. The piece opened with pizzicato glissandi on strings, which became more wailing and plaintive over time. I heard sounds that either were whales and cats, or models of whales and cats. Behind these sounds, pure sine tones emerged and then watery synthesized tones. A series of granular sounds emerged, some of which reminded me of human moaning. The eerie and watery soundscape that grew from these elements was rich and immersive. After a while, there was a sudden abrupt change followed by violent ripping sounds, followed by more natural elements, such as water and bird whistles. These natural elements were blended with AM modulation, which sounded a bit like a helicopter. Another abrupt change led to more animal sounds with eerie howling and wind, a strange resonant forest. Gradually the sound moved from natural to more technological, with “sci-fi” elements such as descending electrical noises. Another sudden change brought a rhythmic percussion pattern, slow and steady, a Latin “3+2+2” with electronic flourishes. Then it stopped, restarted, and grew, with previous elements from the piece becoming part of the rhythm.


After an intermission, the seats were cleared from the hall and the music resumed in a more techno dance-club style and atmosphere, with beat-based electronic music and visuals. Guest artists Sutekh and Nate Boyce opened with Bands of Noise in Four Directions & All Combinations (after Sol LeWitt). Glitchy bursts of noise resounded from the speakers while the screens showed mesmerizing geometric animations that did indeed remind me a bit of Sol LeWitt (you can see some examples of his work in previous posts).

Later in the evening Luke Dahl returned for a solo electronic set. It began calmly with minor chords processed through rhythmic delays, backed by very urban, poster-like graphics. Behind this rhythmic motif, filtered percussion and bass sounds emerged, coalescing into a steady house pattern, with stable harmony and undulating filtered timbres. At times the music seemed to reach back beyond house and invoke late-1970s and early-1980s disco elements. Just as it was easy to get lost listening to Wobbly’s environmentally-inspired soundscapes, I was able to become immersed in the rhythms and timbres of this particular style. The graphics showed close-ups of analog synthesizers – I am pretty sure at least some of the images were of a Minimoog. I did find out that these images were independent of the musical performance, and thus we were not looking at the instruments being used. I liked hearing Luke’s set in the context of the pieces earlier in the evening: the transition from the multi-channel soundscapes to the glitchy noise and then to the house-music and dance elements.

I was unfortunately not able to stay for the remaining sets. But overall it was a good and very full evening of music and technology.

RIP Max Mathews (1926-2011)

Yesterday morning I received the sad news that Max Mathews, considered by many of us to be the “father of computer music”, passed away.

Not only was he among the first to use general-purpose computers to make music, but his work spanned many of the disciplines within the field that we know today, including sound synthesis, music-programming languages, real-time performance, and musical-instrument interfaces.

He studied electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely-used program for sound generation, in 1957. For the rest of the century, he continued as a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance. [Wikipedia]

The “MUSIC-N” languages have influenced much of how we still program computers to make music. They have direct descendants such as Csound, and have also influenced many of the other languages for composers, perhaps most notably Max (later Max/MSP), which was named in his honor.

His rendition of “Daisy Bell” (aka “Bicycle Built for Two”) is one of the early examples of physical modeling synthesis. Sections of the vocal tract were modeled as tubes, and sound was generated directly from the physics equations. His work inspired the version of “Daisy Bell” sung by HAL 9000 in the film 2001: A Space Odyssey (though he did reveal at a talk in 2010 that the version in the film was not his recording).

Mathews continued to expand and innovate throughout his career, moving into different areas of technology. In the 1970s his focus shifted to real-time performance, with languages such as GROOVE, and then later with the Radio Baton interface, which can be seen in this video below.

I had the opportunity to see and hear Mathews at the ICMC 2006 conference and MaxFest in 2007, both events that honored his 80th birthday and five decades in music technology. At 80, it would be relatively easy and quite understandable to eschew the latest technologies in favor of earlier technologies on which he did much of his work, but there he was working with the latest MacBooks and drawing upon new research in connection to his own work.


[Max Mathews at GAFFTA in 2010. (Photo by Vlad Spears.)]

More recently, I saw him give a talk at the Gray Area Foundation For the Arts in 2010, where he introduced the work of young artists and researchers, something he continued to do all the way to the end. He was at the Jean-Claude Risset concert at CCRMA (and, I later found out, gave the introduction, which I had missed). I have also heard comments over the past day that he was still involved in email and discussions about current projects up through this week, a testament to his character and his love for this field and for the work that he pioneered.

Jean-Claude Risset at CCRMA

A few weeks ago I had the opportunity to see composer and computer-music pioneer Jean-Claude Risset present a concert of his work at CCRMA at Stanford. Risset has made numerous contributions to sound analysis and synthesis, notably his extension of Shepard tones to continuously shifting pitches. In the “Shepard-Risset glissando,” pitches ascend or descend and are continuously replaced, giving the illusion of a sound that ascends or descends forever. You can hear an example here, or via the video below.
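For readers curious how the illusion can be constructed, here is a rough sketch in Python with NumPy – my own illustration, not Risset’s implementation. Octave-spaced sine components glide downward together while a fixed bell-shaped spectral envelope fades each one in at the top of the range and out at the bottom, so the descent never ends:

# Rough sketch of a descending Shepard-Risset glissando (illustrative only).
import numpy as np

SR = 44100
DUR = 10.0
N_COMPONENTS = 8          # number of octave-spaced components
F_LOW = 20.0              # frequency at the bottom of the range
OCTAVES_PER_SEC = 0.5     # speed of the descent

t = np.arange(int(SR * DUR)) / SR
out = np.zeros_like(t)

for k in range(N_COMPONENTS):
    # each component starts one octave apart and drifts down, wrapping around
    pos = (k - OCTAVES_PER_SEC * t) % N_COMPONENTS   # position in octaves
    freq = F_LOW * 2 ** pos
    # bell-shaped amplitude envelope centered in the middle of the range
    amp = np.exp(-0.5 * ((pos - N_COMPONENTS / 2) / (N_COMPONENTS / 4)) ** 2)
    # integrate the time-varying frequency to get the phase
    phase = 2 * np.pi * np.cumsum(freq) / SR
    out += amp * np.sin(phase)

out /= np.max(np.abs(out))   # normalize
# write with e.g. soundfile.write("shepard_risset.wav", out, SR)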

Sadly, I arrived slightly late and missed much of the first piece, Duo for one pianist (1989-1992), featuring Risset himself on a Yamaha Disklavier piano. The duo arises from computer control of the piano simultaneous with the live human performer. It’s not a simple computer-based accompaniment part, but rather a duo in which the actions of the live performer are interpreted by a program (written in an early version of Max) and inform the computer’s response in real time.

The remainder of the concert featured works for multichannel tape. The first of these pieces, Nuit (2010): from the tape of Otro (L’autre), featured eight channels with meticulous sound design and spatialization. The ethereal sounds at the start of the piece sounded like either frequency modulation (FM) or very inharmonic additive synthesis (actually, FM can be represented as inharmonic partials in additive synthesis, so hearing both techniques makes sense). Amidst these sounds there emerged the deep voice of Nicholas Isherwood speaking in French, and then later in English as well – I specifically recalled the phrase “a shadow of magnitude.” Surrounding the vocal part was a diverse palette of sounds including low machine noise, hits of percussion and wind tones, a saxophone trill, tubular bells, and piano glissandi. There were examples of Shepard-Risset glissandi towards the end of the piece.

The next piece, Kaleidophone (2010) for 16-channel tape, began with similar glissandi, providing an interesting sense of continuity. In this instance, they were ascending, disappearing at the top of the range and re-emerging as low tones. Above this pattern a series of high harmonics emerged, like wispy clouds. The glissandi eventually switched to both upward and downward motions, and were subsequently followed by a series of more metallic tones. At one point, a loud swell emerged, reminiscent of the distinctive THX announcement at the start of most movies, and then a series of percussive tones with discrete hits but continuous pitch changes, getting slower and slower. There was a series of piano-like sounds with odd intonations played more like a harp, followed by gong-like sounds reminiscent of gamelan music but with very artificial pitches and speeds. Industrial metallic sounds gave way to a section of tense orchestral music, and then long tones that subtly and gradually became more noisy and inharmonic. A sound like crackling fire seemed to channel the early electronic pieces of Iannis Xenakis. Highly comb-filtered environmental sounds gave way to eerie harmonies. The constantly changing sounds lulled the listener into a calm state before startling him or her with a burst of loud noise (perhaps the most intense moment in the entire concert). This was followed by machine noises set against a sparse pattern of wind pipes, and a large cloud of inharmonic partials concluded the piece. I had not looked in advance at the subtitle in the program, “Up, Keyboards, Percussion I, Percussion II, Winds, Water, Fire, Chorus, Eole” – but my experience of the piece clearly reflected the section titles from perception alone.

The final piece, Five Resonant Sound Spaces for 8-channel tape, began with orchestral sounds: bells and low brass, gongs (or tam-tam), timpani. The sounds seemed acoustic at first, but gradually more hints of electronics emerged: filtering, stretching, and timbral decomposition. A low drone overlaid with shakers and tone swells actually reminded me eerily of one of my own pieces, Edge 0316, which was based on manipulations of ocean-wave recordings and a rainstick. This image was broken by a trombone swell and the emergence of higher-pitched instruments. The overall texture moved between more orchestral music and dream-like water electronics. A series of fast flute runs narrowed to a single pure-tone whistle, which then turned into something metallic and faded to silence. All at once, loud shakers emerged, along with granular manipulations of piano sounds – more specifically, prepared piano with manual plucking of strings inside the body and objects used to modify the sound. The sound of a large hall, perhaps a train station, with its long echoes of footsteps and bits of conversation, was “swept away” by complex electronic sounds and then melded together. A series of high ethereal sounds seemed to be almost but not quite ghostly voices, but eventually resolved into clear singing voices, both male and female. The voices gave way to dark sounds like gunfire, trains, and a cacophony of bells – once again channeling the early electronic work of Xenakis. A breath sound from a flute was set against a diversity of synthesized sounds that covered a wide ground, before finally resolving to a guitar-like tone.


The concert was immediately followed by a presentation and discussion by Risset about his music. His presentation, which included material from a documentary film as well as live discussion, covered a range of topics, including using Max and the Disklavier to perform humanly impossible music with multiple tempi, and marrying pure sound synthesis with the tradition of musique concrète, with nods to pioneers in electronic music including Thaddeus Cahill, Leon Theremin, Edgard Varèse, and Max Mathews (who was present at the concert and talk). He also talked about the inspiration he draws from the sea and landscape near his home in Marseilles. The rocky shoreline and sounds from the water in the video reminded me a lot of coastal California, and made it even less surprising that we could come up with pieces with very similar sounds. He went on to describe his 1985 piece SUD in more detail, which used recordings of the sea as a germinal motive that was copied and shifted in various ways. Percussion lines were drawn from the contours, and he also made use of the sounds of birds and insects, including the observation that crickets in Marseilles seem to sing on F sharp. I did have a chance to talk briefly with Risset after the reception about our common experience of composing music inspired by coastal landscapes.

Overall, this was an event I am glad I did not miss.

room: GLASS NOODLE (Pamela Z and Carl Stone)

On March 16 I attended the latest installment of Pamela Z’s ::ROOM:: series at the Royce Gallery in San Francisco. This concert, entitled room: GLASS NOODLE, featured Pamela Z and Carl Stone in solo performances, and then together as a duo.

The performance opened with a series of solo works by Pamela Z. I had heard several of these before, at the San Francisco Electronic Music Festival and at earlier ::ROOM:: performances. As in previous performances, she began with a piece that featured live looping of melodic singing turned into harmonies, along with extended vocal techniques, “street textures” against sung lines, and bubble wrap. This was followed by a humorous piece that featured the sounds and gestures of a manual typewriter, both key clicks and the carriage return – the narrative at the beginning of the piece is that the performer is writing to her penpal on a typewriter because her MacBook is broken. In the background, the video featured transformations of old QWERTY typewriter keys. The round mechanical keys lent themselves to playful rolling animations. Over time, the music shifted to short voice loops and sample glitches, and gradually became darker. One piece that featured the experience of going through airport security (including an operatic singing of the familiar “did you pack your own bags” inquiry) seemed familiar from SFEMF, although in that performance I recall a longer section in which people spoke about the contents of their suitcases. Pamela Z concluded her solo performances with sketches from new pieces. There were eerie loops of pure tones, whispers, and stop-motion video of the artist on a wooded path – bits of sound that resembled prepared piano were followed by several voices talking about memory.

Carl Stone’s performance was an electro-acoustic tour de force. His continuously changing samples and other electronic sounds wove together a complex structure with both energy and a sense of direction. It started off subtly, with a build-up of granular synthesis and complex harmonics that quickly became enveloping. Some of the sonic elements evoked a sense of relaxation, even as they were metallic and machine-like. A section of rhythmic percussive sounds and plucked strings seemed to suggest a rock influence, which gradually morphed into something more South Asian featuring tabla and other drums. As the sounds further transitioned from percussion to vocals with rolling watery lines, it seemed we were traveling further east towards Southeast Asia. The music settled into an undulating six-eight rhythm that every so often would pause abruptly and resume. String instruments provided both the harmonies and the rhythm, while the vocals grew more tonally complex. Bell sounds emerged into the mix, at first part of the overall Asian sound but then becoming a more abstract element. It seemed that the bells were growing, with a soundscape of large metallic sounds and constant harmonies against an ethereal background. The overall sound grew in intensity and sounded “choral.” After a period of time in this pattern, the Asian-influenced percussion and voice fragments re-emerged, although at times the voice seemed to be in a more classical Western style. Towards the end of the continuously evolving piece, there was at least one false ending where the sound disappeared before returning, until it drew to a true close.


After a brief intermission, Carl Stone and Pamela Z returned to perform as a duo called Glass Noodle. The set started out very quietly with low granular sounds and low pitches that seemed like machinery winding down. Slowly, the sounds became a little higher and faster. Videos of glass noodles were projected on the background as Pamela Z began reciting noodle recipes. (For anyone reading this article who is not familiar with glass noodles, they are quite tasty and I highly recommend trying them.) The short vocal samples, which were looped and granulated, built up and became more complex over time, and were eventually joined by percussion and melodic bell-like sounds. As the voice and electronic sounds again became more subdued, the video became more glitchy, and I heard a recipe for fish sauce thrown in amongst the noodles, as well as vocal sound effects that evoked “deliciousness.” Minor harmonies emerged against the recipe recitations, along with references to red chili peppers and pickled garlic. If I had not already eaten before this performance, I am sure I would have been quite hungry (glass noodles and the other foods described are all quite tasty). The order and complex counterpoint of the music eventually decayed into a series of asynchronous loops.

The next section began with classical piano and granular sounds, sparse vocals and bird calls. The loops, pitch bends and other effects were quite playful, and evoked the sound experiments of the late 1960s (think “Revolution 9”). The sounds of the birds and voice gave way to strange percussive sound effects, squeaking and rubbing, before the voice returned in the distance. Over time the texture became more complex, with short hits of metal and glass sounds and a glitchy voice loop. The noodles being projected at this point seemed more brittle than the sinuous textures from the earlier part of the set – then all of a sudden they “melted” as the sounds grew more extended. As the sound once again grew glitchier and noisier, the piece drew to a close.

Pi Day, 2011 (with Music)

Every year, we at CatSynth join numerous other mathematics enthusiasts, geeks and otherwise eccentric characters in celebrating Pi Day on March 14.

March 14 is notated in the U.S. and some other countries as “3-14”, which evokes the opening digits of π (pi). Although the date representation is a very arbitrary connection to the number, we also recognize that the representation of π in decimal digits is arbitrary, an accident of human beings having ten fingers. So this year we are exploring the representations in binary and other related bases.

To represent an integer in binary, one of course expresses it as a sum of powers of two, e.g., 11 = 8 + 2 + 1, or 1011 in binary. But one can also represent fractional numbers in binary. Digits to the right of the (binary) point represent negative powers of two: halves, quarters, eighths, and so on. So the binary number 0.11 is 1/2 + 1/4, or 3/4. Fractions like 1/3 can be represented with repeating digits as 0.010101…, much as in base ten. And this concept can be extended to irrational numbers like π.
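As a small illustration (a hypothetical helper written for this post, not part of the calculation linked below), the fractional binary digits can be read off by repeatedly doubling the value and noting whether it crosses 1:

# Illustrative helper: binary expansion of a fraction by repeated doubling.
from fractions import Fraction

def binary_fraction(x, digits):
    """Return the first `digits` binary digits of a value in [0, 1)."""
    bits = []
    for _ in range(digits):
        x *= 2
        bit = int(x)        # 1 if the doubled value crossed 1, else 0
        bits.append(str(bit))
        x -= bit            # keep only the fractional part
    return "0." + "".join(bits)

print(binary_fraction(Fraction(3, 4), 8))   # 0.11000000
print(binary_fraction(Fraction(1, 3), 8))   # 0.01010101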

The author of this website has calculated 32768 digits of pi in binary. We reprint the first 258 below:

11.
00100100 00111111 01101010 10001000 10000101 10100011 00001000 11010011 
00010011 00011001 10001010 00101110 00000011 01110000 01110011 01000100 
10100100 00001001 00111000 00100010 00101001 10011111 00110001 11010000  
00001000 00101110 11111010 10011000 11101100 01001110 01101100 10001001 

The initial “11” represents the 3 in π, and the remaining digits begin the non-integral portion. As in the decimal representation, the binary representation continues forever with no particular pattern. While not as iconic or memorable as the decimal representation 3.14159…, there is something about the binary representation that makes it seem more universal, i.e., based on fundamental mathematical truths rather than a quirk of human anatomy. For me, the binary representation also lends itself to musical ideas. And for the occasion, I have created a couple of short synthesized pieces representing the 32768 binary digits of pi. In the first example, each binary digit represents a sample: a “1” represents full amplitude and a “0” represents silence. The result, which at a 44.1kHz sample rate is less than one second long, can be heard below.

The random configuration of digits sounds like noise, and more specifically like white noise, suggesting something approaching uniform randomness, at least to human hearing. I also made an example slowed down to the point where the individual samples became musical events. I find this one quite interesting.
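For those who want to try this at home, here is a rough sketch of the process (not the exact script I used). It computes the binary digits with mpmath rather than using the precomputed digits linked above, maps each bit to one sample, and also writes a slowed-down version by holding each bit for many samples; the file names, hold length, and the soundfile package are my own choices for the example:

# Rough sketch: binary digits of pi as audio samples (illustrative only).
# Assumes the mpmath, numpy, and soundfile packages are installed.
import numpy as np
import soundfile as sf
from mpmath import mp

N_BITS = 32768
mp.prec = N_BITS + 16            # binary precision, with a little margin

# Extract the fractional binary digits of pi by repeated doubling.
frac = mp.pi - 3
bits = []
for _ in range(N_BITS):
    frac *= 2
    bit = int(frac)
    bits.append(bit)
    frac -= bit

# Each bit becomes one sample: 1 = full amplitude, 0 = silence.
samples = np.array(bits, dtype=np.float32)
sf.write("pi_binary.wav", samples, 44100)        # roughly 0.74 seconds

# Slowed-down version: hold each bit for 2000 samples (about 45 ms each),
# so the individual digits become audible events.
slow = np.repeat(samples, 2000)
sf.write("pi_binary_slow.wav", slow, 44100)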

With some additional refinement (and maybe some more digits to extend the length), it could certainly stand alone as a composition.

One interesting counterpoint to the notion that the digits of pi form white noise is a conjecture related to its representation in hexadecimal (base 16), which as a power of two is “closer” to binary and seemingly less arbitrary than decimal. From Wolfram MathWorld, we find a remarkable recursive formula conjectured to give the nth hexadecimal digit of π – 3: d_n = floor(16·x_n), where floor rounds down to the nearest integer, x_0 = 0, x_n = (16·x_(n−1) + r_n) mod 1, and r_n = (120n^2 – 89n + 16) / (512n^4 – 1024n^3 + 712n^2 – 206n + 21).

The formula is attributed to Borwein and Bailey (2003, Ch. 4) and Bailey et al. (2007, pp. 22-23). If true, it would add some sense of order to the digits, and thus additional musical possibilities.
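Out of curiosity, here is a small sketch that carries out the conjectured recursion exactly, using Python’s Fraction type – my own illustration, not code from the cited references:

# Sketch of the conjectured recursion above, computed with exact rational
# arithmetic (illustrative only).
from fractions import Fraction

def conjectured_hex_digits(count):
    """Return the first `count` hexadecimal digits of pi - 3 per the conjecture."""
    digits = []
    x = Fraction(0)
    for n in range(1, count + 1):
        r = Fraction(120 * n**2 - 89 * n + 16,
                     512 * n**4 - 1024 * n**3 + 712 * n**2 - 206 * n + 21)
        x = (16 * x + r) % 1          # keep only the fractional part
        digits.append(int(16 * x))    # floor of 16 * x
    return digits

print(conjectured_hex_digits(8))
# [2, 4, 3, 15, 6, 10, 8, 8] -> 0x.243F6A88..., matching pi - 3 in hex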

Preparing for March 4 Concert, Part 2

On Tuesday, I went to the Center for New Music and Audio Technologies (CNMAT) in order to continue preparing for the Regents’ Lecturer concert on March 4. I brought most of the setup with me, at least the electronic gear:

Several pieces are going to feature the iPad (yes, the old pre-March 2 version) running TouchOSC and controlling Open Sound World on the MacBook. I worked on several new control configurations after trying out some of the sound elements I will be working with. Of course, I have the monome as well, mostly to control sample-looping sections of various pieces.

One of the main reasons for spending time on site is to work directly with the sound system, which features an 8-channel surround speaker configuration.  Below are five of the eight speakers.


One of the new pieces is designed specifically for this space – and also utilizes a 12-channel dodecahedron speaker developed at CNMAT. I will also be adapting older pieces and performance elements for the space, including a multichannel version of Charmer:Firmament. In addition to the multichannel work, I made changes to the iPad control based on the experience from last Saturday’s performance at Rooz Cafe in Oakland. It is now far more expressive and closer to the original.

I also broke out the newly acquired Wicks Looper on the sound system.  It sounded great!

The performance information (yet again) is below.


Friday, March 4, 8PM
Center For New Music and Audio Technologies (CNMAT)
1750 Arch St., Berkeley, CA

CNMAT and the UC Berkeley Regents’ Lecturer program present an evening of music by Amar Chaudhary.

The concert will feature a variety of new and existing pieces based on Amar’s deep experience and dual identity in technology and the arts. He draws upon such diverse sources as jazz standards, Indian music, film scores, and his past research work, notably the Open Sound World environment for real-time music applications. The program includes performances with instruments on laptop, iPhone and iPad, acoustic grand piano, do-it-yourself analog electronics, and Indian and Chinese folk instruments. He will also premiere a new piece that utilizes CNMAT’s unique sound spatialization resources.

The concert will include a guest appearance by my friend and frequent collaborator Polly Moller. We will be doing a duo with Polly on flutes and myself on Smule Ocarina and other wind-inspired software instruments – I call it “Real Flutes Versus Fake Flutes.”

The Regents’ Lecturer series features several research and technical talks in addition to this concert. Visit http://www.cnmat.berkeley.edu for more information.