
Rebels Rest Fac Research Talk

Resources:

I envision a talk that begins with a walk-through of the key ideas as I've explored them, seeking a definition of A.C. using Cope (CMoMC) or Gerhard (AC) (or even Doornbusch). One question that would have to come up is WHY?, as in: why does it make philosophical sense to use the computer, and these mathematical models (and not others), to generate art? And then my puny attempts to figure it out.

Stephen Miller suggests that 20th-c. composers were interested in constructivist schemes, obviously things like Schoenberg's 12-tone serialism and Xenakis' formalized music (whither Ligeti?), but to the more mathematically minded, serialism was “child's play.”

Some reasonable things to do are to show genart examples from both Pearson and Cope, then GridMusic. I think Doornbusch's distinction between style imitation and genuine composition is useful, and Cope and Anna Tracy both serve as examples of style imitation: using Markov models (etc.) to analyze a corpus of compositions and then generate new works in the modeled style (fife tunes, Stephen Foster, Chopin). As background, I'd hit Xenakis' gas molecules, maybe the Illiac Suite, and Miranda's Chaosynth.
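The style-imitation approach mentioned above can be sketched with a first-order Markov chain: count pitch-to-pitch transitions in a corpus, then random-walk the table to generate a new melody in the modeled style. The toy corpus and note names here are made up for illustration; they stand in for an analyzed body of fife tunes or Foster songs.

```python
import random

def build_transitions(corpus):
    """Count first-order pitch transitions across a corpus of melodies."""
    table = {}
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng=random):
    """Random-walk the transition table to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:        # dead end: fall back to the seed pitch
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

# Toy "corpus" of note names standing in for analyzed tunes.
corpus = [["C", "D", "E", "D", "C"], ["C", "E", "G", "E", "C"]]
table = build_transitions(corpus)
print(generate(table, "C", 8))
```

Higher-order chains (conditioning on the last two or three pitches) imitate a style more closely at the cost of more verbatim quotation from the corpus.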

Here's another tack: sitting outside on a moonlit evening, what do we hear? Nothing orderly, nothing conducted. In my backyard, I hear a more or less constant drone of insect or amphibian noises as the background, which almost form a complete chord to my ears. I hear what seems to be an intermittent call-and-response between two groups of what sounds like cicadas, from right and left (Robbe Delcamp would call this “antiphonal”), which come in and out of phase. And there are periodic, seemingly random piercings from loud but less gregarious animals: the hooting of our neighborhood owl, a truck braking down a hill on the highway a mile or so away, and from time to time motorbikes gunning it down stretches of Dayton Blvd. Dogs bark, cars go by.

There is a visual to this as well, the moonlight filtering through the leaves of the big tree on the left side of the yard, then growing brighter as the moon moves past the tree into the open, then darker as it is interrupted by the large tree on the right. “Casting giant shadows of vast penetrating force” as Jon Anderson put it. It's multimedia.

So how might we model all this? The drone is an example of emergent behavior, which can be formalized using algorithms. We have two visual examples of emergent behavior (the audio examples are harder to construct, but here's a shot at it). The call-and-response is stochastic; there is likely a statistical distribution, such as the Poisson, that would model it. The less common sounds are aleatoric, at least in my model.
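The Poisson idea can be made concrete: in a Poisson process the gaps between events are exponentially distributed, so sampling exponential inter-arrival times gives plausible event times for the intermittent sounds. The rate and duration below are arbitrary illustrations, not measurements from the backyard.

```python
import random

def poisson_events(rate, duration, rng=random):
    """Event times (seconds) from a Poisson process: exponentially
    distributed gaps at `rate` events per second, up to `duration`."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return events
        events.append(t)

random.seed(1)
# Sparse, irregular "owl hoots" over a minute, averaging one every 2 s.
print(poisson_events(rate=0.5, duration=60.0))
```

Each sound source (owl, truck, motorbike) could run as its own Poisson process with its own rate, which already captures the "periodic, seemingly random" feel.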

We have a growing Bibliography!

I see the talk then settling on my experiments using chaotic systems and CAs (and possibly L-systems for grammar buffs) as the basis for algorithmic composition systems.
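As a minimal example of a chaotic system driving pitch material, the logistic map is the standard choice (this is an illustrative sketch, not necessarily the system used in my experiments): iterate x → r·x·(1−x) in the chaotic regime and quantize each value to a MIDI pitch.

```python
def logistic_pitches(x0, r, n, low=48, high=84):
    """Iterate the logistic map x -> r*x*(1-x) and quantize each
    value in (0, 1) to a MIDI pitch in [low, high)."""
    x, pitches = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        pitches.append(low + int(x * (high - low)))
    return pitches

# r = 3.9 sits in the chaotic regime; r < 3.57 would give periodic orbits.
print(logistic_pitches(0.4, 3.9, 16))
```

Sweeping r from periodic to chaotic values makes a nice demo of order dissolving into chaos, which is exactly the musical tension the talk is after.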

Walking through these ideas, describe the first cut at CMT, explain (and listen to?) the difficulties of approaching such ideas through musicians (albeit geeky, math-savvy musicians) and the trek through the physicists and empirical scientists, ending (?) with Hofstadter and Melanie Mitchell. “Well, in his Ebey Lecture Mark Guzdial of Ga Tech said something that has stayed with me. He said that computer scientists don't do enough reading in the history of their own field. So, by going back all the way through CS and math to Stanislaw Ulam I managed to build up a better understanding of the subject. After all, it was physicists and mathematicians who designed and built the first computers.”

And now to re-apply it to music.

“Well, when looking at the grid, it's easy to think that it is perfect for experimenting with CAs. But this actually requires repurposing the function of the grid within the GridMusic system; some techniques map reasonably well to the grid (stochastic, chaotic, with a bit of work ANN or GAs for a junior-level student), while others, most notably CAs, require a rethink. So it seemed best to back off from GridMusic and experiment with generating sounds using existing CA programs, and then decide later how best to integrate the right ideas into the GridMusic framework.

“The first step is to try sonification of the emergent behaviors of CAs, starting with the one we know best: the Game of Life. Pearson has some reference implementations in Generative Art (a book I use in CSci 101) not only of GoL but also the Vichniac Vote, Brian's Brain, and 256 Shades of Grey. I've made initial attempts at sonification of each of these.
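For reference, the whole Game of Life rule fits in a few lines: a dead cell with exactly three live neighbors is born, a live cell with two or three survives, everything else dies. This sketch uses a sparse set-of-cells representation rather than Pearson's grid-based Processing code.

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation; `live` is a set of (row, col) cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker: a horizontal bar flips to a vertical bar and back (period 2).
blinker = {(1, 0), (1, 1), (1, 2)}
print(life_step(blinker))
```

The other rules (Vote, Brian's Brain) differ only in the birth/survival condition and the number of cell states, so the same skeleton serves for all the sonification experiments.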

“This works by coming up with a mapping, and then having the CA program broadcast the values we'll use for pitches over the network. Another program, developed using Pure Data, listens for these values and produces pitches from them. [Note here - try using an oboe sample with phasors as your synth sound] The reason for separating them is that these programs consume a lot of computing resources (there is a fair amount of computation going on), so it helps to have a different computer on the network handle the audio part.”
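The mapping half of that pipeline might look like the sketch below; the network broadcast to the Pure Data listener is omitted, and the pentatonic scale and octave-folding choices are my own illustrations, not necessarily the mapping used in the actual demos.

```python
def cells_to_pitches(live, scale=(0, 2, 4, 7, 9), base=48):
    """Map live CA cells to MIDI pitches: the column picks a scale
    degree (pentatonic here), the row picks one of three octaves.
    Scale, base pitch, and octave-folding are illustrative choices."""
    pitches = set()
    for row, col in live:
        degree = scale[col % len(scale)]
        octave = row % 3
        pitches.add(base + 12 * octave + degree)
    return sorted(pitches)

print(cells_to_pitches({(0, 0), (1, 2), (2, 4)}))  # → [48, 64, 81]
```

Each generation of the CA would yield one such pitch set, which the separate audio machine turns into sound; constraining columns to a scale keeps even chaotic generations from sounding like pure noise.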

DEMOS ENSUE»»to great consternation. GoL will be chaotic and dissolve into patterns. VicVote will follow the movement, otherwise it's just a great gush of sound. Brian's Brain will quickly go to repetition, but might sound cool sped up. 256 Shades might work well as counterpoint of two pitches, depending on whether the movement is horizontal or vertical.««EUSNE SOSED

GridMusic idea #2: get the seed melody from a sound file played through Pd (via [sigmund]) and harmonize it using the Expansion. Can they sync so that the expansion plays alongside the Pd output? (Rebecca van de Ven).

Cage calls his work “purposeless music,” but Stockhausen and others coined (it appears) the term aleatoric music, where aleatoric means dependent on chance, luck, or an uncertain outcome; applied to music, it could mean music whose general form/structure is known but some details are left to chance (e.g., In C by Terry Riley). So I hashed out some terms with Douglas Drinen, and we decided that stochastic processes in music are straight random processes, aleatoric is as defined above, and algorithmic / formalized can be viewed as synonyms. Not that we have to talk about all that, but it might help classify some composition techniques.

Speaking of In C, WHAT IF we had the computer figure out when and where to play the snippets? Instead of the players deciding, something funky, like a CA rule or a Markov model, would decide when to kick off a particular snippet and how long it would play. Mappings: 5-6 bits determine which snippet fires; a value from 1-16 determines how long it runs. Can this map to GridMusic? It kinda can.
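The bit-mapping just described can be sketched directly. In C has 53 figures, so 6 bits (folded mod 53) cover the snippet choice and 4 bits give a repeat count from 1 to 16. Plain random bits stand in here for whatever CA rule or Markov model would actually supply them; the function name is mine.

```python
import random

def schedule_snippet(rng=random, n_snippets=53):
    """Draw 6 bits for the snippet index (mod 53, since In C has 53
    figures, so the fold is slightly biased) and 4 bits (+1) for how
    many times it repeats. A CA or Markov model could supply the bits
    instead of a plain RNG."""
    snippet = rng.getrandbits(6) % n_snippets
    repeats = rng.getrandbits(4) + 1
    return snippet, repeats

random.seed(7)
for player in range(4):          # four simulated performers
    print(player, schedule_snippet())
```

A per-player constraint (never jump backward more than a couple of snippets) would keep the piece's gradual drift, which is what makes In C work in the first place.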

algocomp/fac_research_talk.txt · Last modified: 2013/10/04 11:00 by scarl