This needs a better name.
Hidden Markov Models in CommonMusic are used for algorithmic composition. One fun and hopefully interesting way of presenting HMMs would be to generate a bass line for a 12-bar B-flat blues. Use the first three (possibly four) orders of the model to show increasing fidelity to the "ideal." Well, my idea of a blues line probably isn't to be considered "ideal," but you get the idea.
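As a sketch of the simplest case, here is a first-order Markov chain over bass notes (the transition component of an HMM). Everything here is an illustrative assumption, not a tuned model: the pitch set, the transition table, and the weights are made up around a Bb7 chord just to show the shape of the idea.

```python
import random

# Hypothetical first-order model for a walking bass over Bb7.
# Pitches are MIDI note numbers: Bb=46, D=50, F=53, G=55, Ab=56.
# The transition probabilities below are invented for illustration.
TRANSITIONS = {
    46: [(50, 0.5), (53, 0.3), (46, 0.2)],   # Bb -> D / F / Bb
    50: [(53, 0.6), (55, 0.4)],              # D  -> F / G
    53: [(56, 0.4), (55, 0.3), (46, 0.3)],   # F  -> Ab / G / Bb
    55: [(53, 0.5), (46, 0.5)],              # G  -> F / Bb
    56: [(55, 0.6), (53, 0.4)],              # Ab -> G / F
}

def generate(start, length):
    """Walk the chain from `start`, returning `length` MIDI pitches."""
    line = [start]
    for _ in range(length - 1):
        choices = TRANSITIONS[line[-1]]
        pitches = [p for p, _ in choices]
        weights = [w for _, w in choices]
        line.append(random.choices(pitches, weights=weights)[0])
    return line

bar = generate(46, 4)   # four quarter notes starting on Bb
```

A second-order version would key the table on the last *two* pitches, a third-order on the last three, and so on, which is where the "increasing fidelity" demo would come from.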
It would be nice to show both note generation and rhythmic variation. Wondering out loud how hard that would be. How does one quantify "feel"? Well, it's a good idea for talking about future avenues of research. Don't forget the Human-like Synthesis / Performance AI paper in the stacks.
The HMM presentation could presage a paper on "an example of how to develop HMMs in an AI class." And you don't even have to be a musician yourself. Ideas for adapting it to non-musical domains might also need to go into a paper. CCSC would be a "fersure," but SIGCSE and Inroads might be worth trying as well.
Implement the HMM generally enough in Scheme and you might get an ICFP or Scheme Workshop paper out of it. Implement it in Pd and get a Pd convention / ICMC / IDMAA paper out of it.