Opera Toolkit is a collection of audiovisual tools for performance, specifically dramatic musical narratives, or opera. The toolkit will consist of basic, self-contained software modules for audio and visual effects, ranging from straightforward ones like filters and sequencers to more novel procedures and analytical devices. The components will interconnect much like a modular synthesizer, encouraging rapid prototyping and giving imaginative users the ability to arrange modules into sequences and networks of arbitrary complexity. This accelerates the discovery of unexpected and serendipitous results and reduces the tension between technological and aesthetic impulses.
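To make the modular idea concrete, the following is a minimal, hypothetical sketch (not the toolkit's actual API) of how self-contained effect modules might be chained into a patch that processes a block of audio; the Module, Gain, LowPass, and Patch names are illustrative assumptions only.

```cpp
#include <cmath>
#include <memory>
#include <vector>

// Sketch: each module exposes a process() step; a Patch runs an ordered
// chain of modules over an audio buffer, modular-synth style.
struct Module {
    virtual ~Module() = default;
    virtual void process(std::vector<float>& buffer) = 0;
};

// A simple gain module, standing in for a "straightforward" effect.
struct Gain : Module {
    float amount;
    explicit Gain(float a) : amount(a) {}
    void process(std::vector<float>& buffer) override {
        for (float& s : buffer) s *= amount;
    }
};

// A naive one-pole low-pass filter module.
struct LowPass : Module {
    float alpha;
    float state = 0.0f;
    explicit LowPass(float a) : alpha(a) {}
    void process(std::vector<float>& buffer) override {
        for (float& s : buffer) {
            state += alpha * (s - state);
            s = state;
        }
    }
};

// A patch is just a sequence of modules applied in order.
struct Patch {
    std::vector<std::unique_ptr<Module>> modules;
    void add(std::unique_ptr<Module> m) { modules.push_back(std::move(m)); }
    void process(std::vector<float>& buffer) {
        for (auto& m : modules) m->process(buffer);
    }
};

int main() {
    Patch patch;
    patch.add(std::make_unique<Gain>(0.5f));
    patch.add(std::make_unique<LowPass>(0.2f));

    // One block of test samples (a sine wave).
    std::vector<float> buffer(64);
    for (std::size_t i = 0; i < buffer.size(); ++i)
        buffer[i] = std::sin(0.25f * static_cast<float>(i));

    patch.process(buffer);  // run the whole chain over the block
    return 0;
}
```

The same chaining pattern generalizes to visual modules or to networks rather than linear sequences; the point is only that each module stays self-contained behind a single processing interface.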
The motivation for the toolkit is to reduce the knowledge gap between technologists and performance artists that persists in contemporary new media production. Just as knowledge of electromagnetism is not required to play an electric guitar, we take the position that technical barriers should be lowered for tasteful integration of technology within the performing arts. We hope the toolkit will be accessible to artists and creatives who do not necessarily specialize in technology. We also hope it will facilitate collaboration between parties who use different software and hardware setups to produce work, including but not limited to openFrameworks, Max/MSP, Ableton Live, Processing, and various DAWs.
Gene Kogan is an artist and programmer based in New York. He integrates emerging technologies into performance contexts including live music, dance, and theatre. His artistic output is characterized by inquiries into the grey areas of computational intelligence and the application of machine learning to controlling generative and parametric systems. He is a contributor to openFrameworks, Processing, and other free and open-source creative software tools.
Lisa Kori Chung is an artist, creative producer, and researcher working in the realms of sound art, performance, and the future of fashion. As a 2010-2011 Watson Fellow, she documented various communities that formed around technologically based art practices. This interest in collaboration and community building, as well as bridging different forms of knowledge, has continued throughout her projects. These include Open Fit (with Kyle McDonald), an open-source clothing workflow that brings pattern-making knowledge into the Processing environment; Pianokosmos (with Tal Isaac Hadad and Gawid Gorny), a reactive system that illuminates nuances of a performer's gestures; and Sway (with Caitlin Morris), an immersive sound installation that aims to connect physical and sonic textures. She is currently a freelance creative producer and an artist-in-residence at Eyebeam Art + Technology Center.
Colin Self is an artist currently based in New York. He composes and choreographs music, performance, and environments for expanding consciousness, troubling binaries and boundaries of perception and communication. Working with communities across disciplines and practices, he uses voice, bodies, and computers to interface with biological and technological software. He is a Milton Avery MFA candidate in Music/Sound at Bard College.
This project is supported by a residency at Eyebeam Art + Technology Center, Brooklyn, NY.