Disclaimer:

This audio test uses FMOD audio middleware, which doesn't work well with WebGL builds. You may need to double-click the button, and you will most likely encounter some stuttering audio. I have created a video demonstration so that it can be viewed at full quality. This is an issue that FMOD's developers are aware of. Here's a link to a thread involving Brett Paterson, the CEO of Firelight Technologies and original developer of FMOD, that describes the issue in more detail: https://qa.fmod.com/t/stuttering-during-scene-load-in-webgl/16160/4


Explanation and origins:

This is a prototype of a procedurally generated, never-ending lo-fi song. The current version procedurally generates percussion and the background SFX that you would typically hear in lo-fi (vinyl crackling, nature sounds, etc.). The idea for this actually originates from an old musical dice game and sight-reading exercise popular in the mid-to-late 1700s called Musikalisches Würfelspiel. It is often attributed to Mozart, but that attribution has never been officially confirmed. The procedure of the game goes as follows:

1. Take a written melody, cut it into multiple segments, and number each segment.

2. Roll dice to determine which segments to use and in what order to arrange them.

3. Assemble the segments in the rolled order and play the result.
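The steps above can be sketched in a few lines of code. This is just an illustration of the procedure, using placeholder segment labels in place of real sheet music (the two-dice roll indexing a numbered table of 11 segments mirrors the historical game):

```python
import random

def musical_dice_game(segments, length):
    """Pick `length` segments by dice roll and return them in rolled order."""
    arrangement = []
    for _ in range(length):
        # Each two-dice roll (a total of 2-12) indexes the numbered segment table.
        roll = random.randint(1, 6) + random.randint(1, 6)
        arrangement.append(segments[roll])
    return arrangement

# Numbered segment table: one placeholder melody fragment per dice total (2-12).
segments = {total: f"segment_{total}" for total in range(2, 13)}
print(musical_dice_game(segments, 8))
```

Every run produces a different eight-segment arrangement, which is the whole point of the game: a small table of pre-written fragments yields a huge number of possible pieces.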

This is an example of something called indeterminate music, or aleatoric music, but as far as I have found, it's also the earliest example of procedural music, not unlike what I wanted to do for video games. The only difference is that now we have much better tools than paper and dice.

What I've essentially done is take the dice and, instead of rolling them to rearrange a musical structure as large as a page of sheet music, roll them on a micro scale. First I roll for the tempo (although there is so far only one option for tempo; adding multiple potential tempos will likely be the last thing I do, since I have to rerecord all audio files separately for each tempo), then for variations of each instrument, then, depending on the instrument, for the melodic rhythm, harmonic rhythm, or, for percussion, just the rhythm, and finally for the actual note choice for each instrument. This is all organized as a decision tree, assembled in FMOD, then controlled with a script in Unity.
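That cascade of rolls can be sketched as a nested series of random choices. Note that all of the names and option lists below are illustrative stand-ins, not the actual FMOD event structure:

```python
import random

def roll(options):
    """One 'dice roll': pick uniformly from a list of options."""
    return random.choice(options)

def generate_bar():
    """Roll down the decision tree for a single bar of music."""
    tempo = roll([72])                                   # only one tempo recorded so far
    variation = roll(["kit_a", "kit_b", "kit_c"])        # instrument variation
    rhythm = roll(["straight", "swung", "sparse"])       # percussion rhythm pattern
    notes = [roll(["C", "D", "Eb", "F", "G"]) for _ in range(4)]  # note choices
    return {"tempo": tempo, "variation": variation,
            "rhythm": rhythm, "notes": notes}

print(generate_bar())
```

In the actual project these decisions live in the FMOD event graph and the Unity script only drives the parameters, but the branching logic is the same: each level of the tree narrows the next roll's options.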

The Goal:

Apart from having the absolute coolest 24-hour lo-fi stream on YouTube, I think this technology could be useful in a number of ways, both in game development and beyond. For example, nobody said that the elements of the music have to be random: we could instead tie the different elements of the music to variables in Unity, giving us a hybrid between procedural and adaptive sound. If, for example, a game had a character creator, the different choices could each be given a numerical value and sent to FMOD to create a unique melody for a seemingly countless number of variations. It could be used to create a wholly original melody that reflects the custom character's traits, background, species/race, etc.
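One way that mapping could work, as a rough sketch: normalize each character-creator choice to a 0-1 value and hand those values to FMOD as parameters (in the Unity integration that would go through something like EventInstance.setParameterByName). The trait tables and parameter names here are hypothetical examples, not real game data:

```python
def traits_to_parameters(traits):
    """Map discrete character-creator choices to 0-1 parameter values.

    The trait tables below are hypothetical examples, not real game data.
    """
    species = ["human", "elf", "dwarf", "orc"]
    background = ["noble", "outlaw", "scholar", "wanderer"]
    return {
        "Species":    species.index(traits["species"]) / (len(species) - 1),
        "Background": background.index(traits["background"]) / (len(background) - 1),
        "Age":        traits["age"] / 100.0,  # normalize age to the 0-1 range
    }

params = traits_to_parameters({"species": "elf", "background": "scholar", "age": 40})
print(params)  # every value lands in the 0-1 range FMOD parameters expect
```

Because each distinct combination of traits maps to a distinct point in parameter space, each custom character would deterministically get their own melody rather than a random one.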