The second concert of this year’s Outsound Music Summit, entitled “Vibration Hackers”, featured electronic musical experimentations from Stanford’s CCRMA and beyond. It was a sharp contrast to the previous night in both tone and medium, but had quite a bit to offer.
The concert opened with #MAX, a collaboration by Caitlin Denny on visuals, Nicole Ginelli on audio, and Dmitri Svistula on software development. It was based on the ubiquitous concept of the hashtag as popularized by Twitter. Audience members typed in suggested terms on a terminal set up in the hall. The terms were then projected on the screen and used to search online for videos, audio, and textual materials to inform the unfolding performance. Denny used found videos as part of her projection, while Ginelli interpreted results with processed vocals.
The idea was intriguing. I would have liked to see more explicit connection between the source terms and audio/video output – perhaps it was a result of the projection onto the distorting curtain instead of a flat surface, but the connection wasn’t always clear. It would have also been fun to allow audience members to input terms from their mobile phones via Twitter. But I applaud the effort to experiment artistically with social networking infrastructure and look forward to seeing future versions of the piece.
Next was a set of fixed-media pieces by Fernando Lopez-Lezcano, collectively called Knock Knock…anybody there? Lopez-Lezcano is a master of composition that uses advanced sound spatialization as an integral element, and these pieces presented a “journey through a 3D soundscape”.
[Photo: PeterBKaars.com.]
The result was a captivating, immersive, and otherworldly experience with moving sounds based on voices, sometimes quite intelligible, sometimes manipulated into abstract wiggling sounds that spun around the space. There was also a section of pop piano that was appropriately jarring in context; it gave way to a thicker enveloping sound and then faded to a series of whispers scattered in the far corners of the space. The team from CCRMA brought an advanced multichannel system to realize this and other pieces, and the technology plus the expert calibration made a big difference in the experience. Even from the side of the hall, I was able to get much of the surround effect.
The next performance featured Ritwik Banerji and Joe Lasquo with “Improvising Agents”, artificial-intelligence software entities that listen to, interpret, and then produce their own music in response. Banerji and Lasquo each brought their own backgrounds to the development of their unique agents, with Banerji “attempting to decolonize musician-computer interaction based on the possibilities that a computer is already intelligent” and Lasquo applying his expertise in AI and natural language processing to musical improvisation. They were joined by Warren Stringer, who provided a visual background to the performance.
[Photo: PeterBKaars.com.]
As a humorous demonstration of their technology, the performance opened with a demo of two chatbots attempting to converse with one another, with rather absurd results. This served as the point of departure for the first piece, which combined manipulation of the chatbot audio with other sounds while Banerji and Lasquo provided counterpoint on saxophone and piano, respectively. The next two pieces, which used more abstract material, were stronger, with deep sounds set against the human performances and undulating geometric video elements. The final piece was even more organic, with subtle timbres and changes that came in waves, and more abstract video.
This was followed by Understatements (2009-2010), a fixed-media piece by Ilya Rostovtsev. The piece was based on acoustic instruments that Rostovtsev recorded and then manipulated electronically.
[Photo: PeterBKaars.com.]
It began with the familiar sound of pizzicato strings that gave way to scrapes and then longer pad-like sounds. Other moments were more otherworldly, including extremely low tones that gradually increased in volume. The final section featured bell sounds that seemingly came out of nowhere but coalesced into something quite serene.
The final performance featured the CCRMA Ensemble, which included Roberto Morales-Manzanares on flute, voice and his “Escamol” interactive system, Chris Chafe on celletto, John Granzow on daxophone and Rob Hamilton on resonance guitar. Newly created instruments were a major part of this set. Chris Chafe’s celletto is essentially a cello stripped down to its essential structure and augmented for electro-acoustic performance. The daxophone is based on a bowed wooden element where the sound is generated from friction. The Escamol system employed a variety of controllers, including at one point a Wii.
[Photo: PeterBKaars.com.]
The set unfolded as a single long improvisation. It began with bell sounds, followed by other sustained tones mixed with percussive sounds and long guitar tones. The texture became denser with guitar and shaker sounds circling the room. The celletto and daxophone joined in, adding scraping textures, and then bowing sounds against whistles. In addition to the effects, there were more idiomatic moments with bowed celletto and traditional flute techniques. This was a truly experimental virtuosic performance, with strong phrasing, textural changes and a balance of musical surprises.
I was happy to see such a strong presence for experimental electronic technologies in this year’s Summit. And there were more electronics to come the following evening, with a very different feel.