michael dot gogins at gmail dot com
I now have my own Web site, please check it out!
There is some other music below, with discussion and papers…
I am an (almost) exclusively algorithmic composer. I find that algorithmic composition amplifies my musical imagination and contributes to formal unity. It opens up worlds of musical possibility for me that are beyond the power of my unassisted imagination. I am particularly interested in parametric and evolutionary composition. I am also trying to develop more efficient and recursive representations of music that encode musical craft without at the same time imposing a style.
I am interested (almost) solely in absolute music: instrumental music designed for undistracted listening. If I could find software that can sing passably, I would try “vocal” music since I have a great love of poetry.
I use Csound (almost) exclusively for rendering my pieces. Needless to say, I do not work in real time nor do I improvise, even though my interest in musical composition arose out of free improvisation on the flute.
I have become a contributor to the development of Csound 5, the next version of Csound, in order to improve Csound’s support for my approach to composition. In particular, I have added Python scripting to Csound, as well as the CsoundAC classes for various techniques of algorithmic composition.
These include imported MIDI sequences, loops and hockets, Lindenmayer systems, chaotic dynamical systems, iterated function systems, the translation of images into both sounds (using additive synthesis) and scores (by extracting features), and, most recently, facilities for mathematically generating and controlling voice-leadings, chord progressions, and voicings.
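To give a flavor of one of these techniques, here is a minimal sketch of a Lindenmayer system interpreted as notes. It is illustrative only: the rewriting rule and the note interpretation below are invented for the example, and this is not the CsoundAC Lindenmayer implementation.

```python
def rewrite(axiom, rules, generations):
    """Repeatedly replace each symbol by its production rule."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

def interpret(s, base_key=60, base_duration=0.25):
    """Turn the symbol string into (time, key, duration) tuples:
    'N' emits a note; '+'/'-' move the pitch up or down a semitone."""
    notes, time, key = [], 0.0, base_key
    for c in s:
        if c == "N":
            notes.append((time, key, base_duration))
            time += base_duration
        elif c == "+":
            key += 1
        elif c == "-":
            key -= 1
    return notes

# One invented rule, rewritten three times, then interpreted as a score.
production = rewrite("N", {"N": "N+N-N"}, 3)
score = interpret(production)
print(len(score), "notes, first:", score[0])
```

Because each `N` spawns three, three generations yield 27 notes; the musical interest comes from choosing rules whose self-similarity is audible.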
Here is a paper that I presented at the International Computer Music Conference in 1998, describing some of my ideas about algorithmic composition using scene graphs: http://ruccas.org/pub/Gogins/MusicGraphs.pdf. This article concerns an earlier version of my software written in Java. However, I have implemented the same concepts in CsoundAC, which is currently available as part of the Csound 5 distribution at http://www.sourceforge.net/projects/csound.
I recently completed a blind test of audible differences in music rendered with the single-precision versus the double-precision version of Csound: http://ruccas.org/pub/Gogins/csoundabx.pdf.
I am currently investigating score generation in voice-leading orbifolds. I recently finished my first paper in this field: http://ruccas.org/pub/Gogins/score_generation_in_voiceleading_orbifolds.pdf. The following studies were generated using only the algorithm described in this paper:
I presented my paper Score Generation in Voice-Leading and Chord Spaces at the 2006 International Computer Music Conference. This paper presents a more finished conception of the above ideas. You can hear the sample piece as a MIDI file or an MP3 – and here is the code for the piece.
I am now working to achieve a more closely integrated representation of the principles of both voice-leading and harmonic progression. My new paper Atomic Operations for Algorithmic Composition presents the current state of my work. The operations and arithmetic described in the paper have been implemented in the Score, Voicelead, and VoiceleadingNode classes of the Silence composition system, which is available in CsoundAC in the Windows and Linux distributions of Csound 5 at SourceForge.
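The basic idea of smooth voice-leading can be sketched in a few lines: given the current voicing, choose the register of each target chord tone so that the total semitone motion is minimized. This is only the underlying concept, not the algorithm of the paper or of the Voicelead and VoiceleadingNode classes; the brute-force search and the octave range below are choices made for the example.

```python
from itertools import product

def closest_voicing(current, target_pcs):
    """Brute-force the octave placement of each target pitch class
    that minimizes summed absolute motion from the current voicing."""
    current = sorted(current)
    best, best_cost = None, float("inf")
    # Try each pitch class in MIDI octaves 3..7 (keys 36..95).
    for octs in product(range(3, 8), repeat=len(target_pcs)):
        voicing = sorted(pc + 12 * o for pc, o in zip(target_pcs, octs))
        cost = sum(abs(a - b) for a, b in zip(current, voicing))
        if cost < best_cost:
            best, best_cost = voicing, cost
    return best

# From a C major triad to the nearest voicing of F major:
# C stays, E moves to F, G moves to A -- total motion of 3 semitones.
print(closest_voicing([60, 64, 67], [5, 9, 0]))
```

Minimizing this motion is what makes a progression sound "voice-led" rather than jumped; the paper's operations treat such moves algebraically rather than by search.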
Chaotic Squares, 1991. I translated the measure on a 2-dimensional iterated function system into a Csound score. This is an early iterated function system piece, and one of the first of my algorithmic compositions to be performed (at a Woof concert at Columbia University in 1991). The original piece was realized by a Korg M-1 MIDI synthesizer; this version was realized by a Csound frequency modulation instrument with dynamic index and comb post-filtering.
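The general procedure of turning an iterated function system into a Csound score can be sketched as follows. The affine maps, the time/pitch mapping, and the event count here are invented for illustration; the actual maps and mappings of the 1991 piece are not given above.

```python
import random

# Two invented affine contractions on the unit square.
maps = [
    lambda x, y: (0.5 * x,       0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y + 0.5),
]

random.seed(1)
x, y = 0.0, 0.0
events = []
for n in range(220):
    x, y = random.choice(maps)(x, y)   # the "chaos game" iteration
    if n >= 20:                        # discard the transient points
        start = 30.0 * x               # x -> onset time in seconds
        key = int(round(36 + 48 * y))  # y -> MIDI key 36..84
        events.append((start, key))

# Emit Csound score "i" statements: instrument 1, onset, duration, key.
for start, key in sorted(events)[:5]:
    print(f"i 1 {start:.3f} 0.25 {key}")
```

The points sampled this way distribute themselves over the attractor (the invariant measure) of the system, so the score inherits the attractor's self-similar clustering in time and pitch.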
Cloud Strata, 1991. A Lindenmayer system piece originally composed using my program LinMuse. The Csound instrument was created by Michael Jude Bergeman for his piece Face On Mars, and modified by myself. This piece was performed at the 1998 International Computer Music Conference in Ann Arbor.
csound_2005-03-06_03.38.19.py, 2005. I wrote this piece in Python using Csound 5. I took the measures of a musical dice game first published in 1787 (sometimes attributed to Mozart), and constructed a score using Terry Riley’s technique of playing each measure in sequence a randomly chosen number of times. I added small offsets to some of the lines to make the rhythm more complex. The instruments include SoundFonts and Csound instruments adapted from Internet sources. Effects are adapted from J.L. Diaz and include chorusing adapted in turn from Lee Zakian, a multiple waveguide delay line reverb adapted in turn from Sean Costello, bass boost, and compression.
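The repetition technique described above can be sketched in a few lines of Python: walk through the measures in order, and play each one a randomly chosen number of times before moving on. The measure contents and the repeat range below are placeholders, not the actual dice-game measures or the parameters of the piece.

```python
import random

# Placeholder measures, each a list of MIDI keys; the actual 1787
# dice-game measures are not reproduced here.
measures = [[60, 64, 67], [62, 65, 69], [64, 67, 71], [65, 69, 72]]

random.seed(7)
sequence = []
for measure in measures:
    repeats = random.randint(1, 4)  # play each measure 1 to 4 times
    sequence.extend(measure * repeats)

print(len(sequence), "notes")
```

Because every measure is still heard in its original order, the harmonic progression of the source survives, while the random repeat counts stretch its phrasing unpredictably.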
f--2002-01-28--17-37-42.042.mml, 2002. This piece was made using an earlier Java version of my Silence algorithmic composition system. It originated as a fractal image generated with FRACTINT.EXE, which was translated to a score and massaged in various ways, then rendered with Csound. The image itself can be found here: http://ruccas.org/pub/Gogins/FRACT186.GIF.
tags: artist audio mp3 algorithmic csound