TODO: links to references / definitions here.
The initial design hard-coded four techniques for generating melodies, but allowed the built-in (probabilistic) expansion generator to be overridden. Why is that? Why not allow a melody generator to be specified the same way the expansion is?
One dead-easy generator to implement right away is 1/f noise. Anna Tracy has a method for generating melodies, along with producing straight variations on the resulting melody, as does the algorithmic composition page.
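As a sketch of the 1/f-noise idea, here is the Voss-McCartney scheme (several random sources, each re-rolled at half the rate of the previous) mapped onto a scale. The scale table and the dice range are my assumptions, not anything from the notes above:

```python
import random

def voss_pink(n_steps, n_sources=4, seed=None):
    """Voss-McCartney 1/f-ish sequence: sum several random sources,
    where source i is re-rolled only every 2**i steps."""
    rng = random.Random(seed)
    sources = [rng.randint(0, 5) for _ in range(n_sources)]
    out = []
    for step in range(n_steps):
        for i in range(n_sources):
            if step % (2 ** i) == 0:  # slower sources change less often
                sources[i] = rng.randint(0, 5)
        out.append(sum(sources))
    return out

# Assumed mapping: quantize the summed values onto a C-major octave.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def melody(values):
    lo, hi = min(values), max(values)
    span = max(hi - lo, 1)
    return [SCALE[(v - lo) * (len(SCALE) - 1) // span] for v in values]

print(melody(voss_pink(16, seed=1)))
```

Because low-index sources change every step while high-index ones change rarely, the melody wanders locally but drifts slowly overall, which is the 1/f character we want.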
An IFS produces a sequence of values, called an orbit, that can be mapped to musical parameters. A survey of the (essentially infinite) space of IFS systems is needed to find values that map well, possibly within some constraining system. I've been reviewing the Dynamical Systems and Chaos courses at Complexity Explorer, which get a start on this. Some papers (e.g., in CMJ) have also covered IFS systems.
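A minimal orbit-to-pitch sketch, assuming a toy IFS of two contractive affine maps on [0, 1] (the "chaos game": apply a randomly chosen map each step and record the point). The coefficient pairs and the MIDI range below are illustrative placeholders for whatever the survey turns up:

```python
import random

# Two hypothetical contractive affine maps x -> a*x + b on [0, 1].
MAPS = [(0.5, 0.0), (0.4, 0.6)]

def ifs_orbit(n, seed=None):
    """Chaos game: repeatedly apply a randomly chosen affine map,
    recording the orbit of the point."""
    rng = random.Random(seed)
    x, orbit = 0.5, []
    for _ in range(n):
        a, b = rng.choice(MAPS)
        x = a * x + b
        orbit.append(x)
    return orbit

def to_pitches(orbit, low=48, high=84):
    """Quantize orbit values in [0, 1] to MIDI note numbers."""
    return [low + int(v * (high - low)) for v in orbit]

print(to_pitches(ifs_orbit(12, seed=2)))
```

Since both maps send [0, 1] into itself, the orbit stays bounded and the pitch mapping never leaves the chosen range; swapping in other coefficient sets is how one would survey which systems "map well."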
When looking at the grid, it's easy to think it is perfect for experimenting with CAs. But this actually requires repurposing the function of the grid within the GridMusic system: some techniques map reasonably well to the grid (stochastic, chaotic, and, with a bit of work, ANNs or GAs for a junior-level student), while others, most notably CAs, require a rethink. So it seemed best to back off from GridMusic, experiment with generating sounds using existing CA programs, and decide later how best to integrate the right ideas into the GridMusic framework.
First, try sonification of the emergent behaviors of CAs, starting with the one we know best: the Game of Life. Pearson has reference implementations in Generative Art (a book I use in CSci 101) not only of GoL but also of the Vichniac Vote, Brian's Brain, and 256 Shades of Grey. I've made initial attempts at sonification of each of these.
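For reference, the GoL update rule itself fits in a few lines; this is a generic sparse-set implementation, not Pearson's version from the book:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) pairs.
    A dead cell with 3 live neighbors is born; a live cell with 2 or 3
    live neighbors survives; everything else dies."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A blinker oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # -> [(1, 0), (1, 1), (1, 2)]
```

The sparse-set form makes it easy to pull out whatever feature the sonification mapping needs (live-cell count, row occupancy, births per generation) without committing to a fixed grid size.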
One must come up with a mapping, then have the CA program broadcast over the network the values we'll use as pitches. Another program, developed in Pure Data, listens for these values and produces pitches from them. [Note: try using an oboe sample with phasors as the synth sound.] The reason for separating them is that these programs consume a lot of computing resources (there is a fair amount of computation going on), so it helps to have a different computer on the network handle the audio part.
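A sketch of the broadcast side, assuming the Pd patch listens with [netreceive] in UDP mode, which accepts FUDI-formatted messages (space-separated atoms, terminated by a semicolon). The host, port, and pentatonic mapping below are all assumptions for illustration:

```python
import socket

# Hypothetical address of the machine running Pd with [netreceive -u 9000].
PD_ADDR = ("192.168.1.20", 9000)

def fudi(pitches):
    """Format a pitch list as a FUDI message, e.g. b'60 64 67;\n'."""
    return (" ".join(str(p) for p in pitches) + ";\n").encode("ascii")

def send_pitches(pitches, addr=PD_ADDR):
    """Fire one UDP datagram per CA generation at the Pd listener."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(fudi(pitches), addr)

# Assumed mapping: live-cell x-coordinates onto a pentatonic scale.
PENTA = [60, 62, 65, 67, 69]
cells = {(0, 1), (1, 1), (2, 1)}
pitches = sorted({PENTA[x % len(PENTA)] for (x, _) in cells})
print(fudi(pitches))
```

UDP keeps the CA side fire-and-forget, so a slow audio machine never stalls the simulation; dropped generations just become brief silences.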
The seed melody comes from a sound file played through Pd (via [sigmund~]) and harmonized using the Expansion. Can they sync so that the expansion plays alongside the Pd output? (Rebecca van de Ven).
WHAT IF we had the computer figure out when and where to play the snippets? Instead of the players deciding, something funky (a CA rule, say, or a Markov model) would decide when to kick off a particular snippet and how long it would play. Mappings: 5-6 bits determine which snippet fires; a value from 1-16 determines how long it runs. Can this map to GridMusic? It kinda can.
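A sketch of the Markov-model variant of this scheduler, under assumptions of my own: 6 bits index up to 64 snippets, durations are uniform over 1-16 (in unspecified units), and the transition rule favors staying near the current snippet with an arbitrary 0.7 probability:

```python
import random

def schedule(n_events, n_snippets=64, seed=None):
    """Markov-style scheduler: with probability 0.7 the next snippet is a
    neighbor of the current one; otherwise it jumps anywhere. Each event
    gets a duration drawn uniformly from 1-16."""
    rng = random.Random(seed)
    current = rng.randrange(n_snippets)
    events = []
    for _ in range(n_events):
        if rng.random() < 0.7:                        # local move
            current = (current + rng.choice([-1, 1])) % n_snippets
        else:                                         # random jump
            current = rng.randrange(n_snippets)
        events.append((current, rng.randint(1, 16)))  # (snippet id, duration)
    return events

for snippet, dur in schedule(8, seed=3):
    print(f"snippet {snippet:2d} for {dur:2d} units")
```

Swapping the transition rule for a CA row (snippet fires when its cell is alive) would give the other behavior mentioned above, with the same (id, duration) event interface.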