Synthesis ToolKit (STK) and procedural generation of music

Now, perhaps this is something that has been explored previously, but with STK it is fairly easy to create synthesizer programs, ones that could, with a little work, produce something resembling what the environment holds in store for your survivor.

A hulk approaches, and drums beat faster and faster as it closes in. Your blood runs cold as it smashes through the wall, producing a flourish of brass. It knows where you are now. As do its all too many friends; other instruments (which I can’t be bothered to pick out in particular) spring up as they charge (stumble) towards you.

A lower beat starts as you switch to running away: the sound of your ragged breathing as you run out of breath, the percussion of something that might be compared (with a bit of creative freedom) to your feet hammering across the ground.

Now, I don’t have much musical insight, experience, or otherwise helpful knowledge, but I’ve written things with STK before, and I’ve found it to be a generally easy API to learn. If anyone can help with the music, I’d be willing to try to encapsulate it into the game.

I don’t know whether the devs currently running the project have an opinion on adding extra libraries to an optional module of the game, but the license is very permissive.

It is my understanding that this library can synthesize fairly complex things in real time, but I think the synthesis would have to run in its own thread.
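
To give an idea of what that would look like, here’s a minimal sketch of real-time synthesis running on its own thread, adapted from STK’s introductory sine-wave example. The thread handling and the fixed tone are placeholders of mine, not anything resembling actual game code.

```cpp
#include "SineWave.h"   // stk::SineWave oscillator
#include "RtWvOut.h"    // stk::RtWvOut, real-time audio output
#include <atomic>
#include <chrono>
#include <thread>

static std::atomic<bool> keep_playing{ true };

static void synth_loop()
{
    // Set the global sample rate before creating STK instances.
    stk::Stk::setSampleRate( 44100.0 );

    stk::SineWave sine;
    sine.setFrequency( 220.0 );

    // Open the default real-time output device with one channel.
    stk::RtWvOut dac( 1 );

    // Compute and push one sample at a time; a real backend would vary
    // pitch and tempo from game state inside this loop.
    while( keep_playing ) {
        dac.tick( sine.tick() );
    }
}

int main()
{
    // Run the synthesizer on its own thread so the game loop isn’t
    // blocked by audio output.
    std::thread audio( synth_loop );
    std::this_thread::sleep_for( std::chrono::seconds( 2 ) );
    keep_playing = false;
    audio.join();
}
```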

So does this sound good to anyone out there? I’d like to try to implement this, but I’d like some kind of musical support (not SOUND=1) prior to starting, so that there are tunes or melodies I can start out with.

Anyway, it would also require me to edit the makefile, and that’s a somewhat hairy proposition at the best of times.

Any thoughts, discouragements, or otherwise helpful musings to bring up?

The ambient sounds PR just landed. It supports a large number of environmental cues triggering various sound effects and changes to background music, and it does some of what you outline manually, with tempo shifting and volume adjustment.
We don’t have a soundset merged for it because the original soundset didn’t do any management of license information.
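
For anyone wondering what “environmental cues driving tempo and volume” might amount to, here’s a purely illustrative sketch; the struct, the function, and the specific numbers are assumptions of mine, not code from the PR.

```cpp
#include <algorithm>
#include <iostream>

// Playback parameters a (hypothetical) mixer would be told to use.
struct music_state {
    float tempo_bpm;   // tempo of the background track
    float volume;      // 0.0 .. 1.0
};

// Map a couple of simple game-state cues to playback parameters: more
// nearby hostiles means faster and louder music, clamped to sane bounds.
static music_state music_for_cues( int hostiles_in_sight, bool indoors )
{
    music_state s;
    s.tempo_bpm = std::min( 80.0f + 10.0f * hostiles_in_sight, 180.0f );
    s.volume    = std::min( 0.4f + 0.1f * hostiles_in_sight, 1.0f );
    if( indoors ) {
        s.volume *= 0.8f;  // muffle things a little indoors
    }
    return s;
}

int main()
{
    const music_state calm  = music_for_cues( 0, false );
    const music_state horde = music_for_cues( 7, true );
    std::cout << calm.tempo_bpm << " bpm quiet vs "
              << horde.tempo_bpm << " bpm loud\n";
}
```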

The next steps are building a soundset with permissively-licensed sound samples (compatible with CC-BY-SA 3.0) and figuring out how we’re going to distribute it.

As for library inclusion, the main thing that worries me is that it seems to want to talk directly to the sound hardware, so this would most likely need to replace SDL audio rather than add to it. What I’d probably want to do is get some test functionality working as a compile-time option and make sure it works on a bunch of people’s systems before committing to making the switch.
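
For the compile-time-option part, here’s a minimal sketch of what that gate might look like, in the same spirit as the existing SOUND=1 flag. The STK_SOUND macro and the stubbed init functions are hypothetical names for illustration, not existing build options or game code.

```cpp
#include <iostream>

static void init_sdl_audio()
{
    // Existing path: hand audio off to SDL (stubbed here).
    std::cout << "SDL audio backend selected\n";
}

#ifdef STK_SOUND
static void init_stk_audio()
{
    // Experimental path: let STK talk to the hardware directly (stubbed here).
    std::cout << "STK audio backend selected\n";
}
#endif

// Pick the backend at compile time so the experimental code can be tried
// out on a bunch of systems without replacing SDL audio for everyone.
void init_music_backend()
{
#ifdef STK_SOUND
    init_stk_audio();
#else
    init_sdl_audio();
#endif
}

int main()
{
    init_music_backend();
}
```

Building with something like -DSTK_SOUND (or a make variable that maps to it) would flip the switch, presumably much like the existing SOUND=1 flag maps onto a preprocessor define.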

Nah, that part is optional. For example, we aren’t using MIDI input or output, so it’ll be fine…
No, wait, your concern might be valid. Oh dear.

I don’t compile with SOUND=1, so I hadn’t thought of this. In any case, I think it plays nicely with the drivers, but Windows support looks rather problematic, as it appears to require DirectX headers. It doesn’t look suitable at all for that.

In any case, I was talking about procedural generation of music that is in some way representative of the environment around the player: not just playing sound effects, but generating music out of the game world.

STK doesn’t seem to fit as of now, unfortunately. I don’t know of any other generative sound libraries like it, so I guess this feature won’t work properly in this context.

@jaked122 > That’s not a new idea. In fact, it would amaze you how much the procedurally generated approach drove early chip-music programming into what it is today. I mean, remember (if you can) Lands of Lore and its countless solos.

Here, you want to create a type of “on-the-fly” mix to support the “unique experience” idea you’ve got. And I’ve gotta tell ya, the up-to-date tech finally supports that mindset, meaning there are but a few boundaries left. Streams and threading are where you want them to be today. Even the game itself serves the purpose perfectly: worrying about latency when looking into a (turn-based) roguelike game isn’t really the thing you’re expected to do, is it?

The biggest deal with wave (non-MIDI, soundfont, or otherwise) processing is that you have to worry about every single instance of every sound. I’d compare that experience to the joy of learning another language, both spoken and written, besides your native tongue. Imagine it, and remember how you’re bound to have a lot of doubts and questions before you’ve managed to gain some skill beyond nominal. That’s the deal with a live mix: you’ve gotta know what you’re dealing with and be absolutely sure about what you’re up to, and procedurally generated mixing is no different.

As far as composition is concerned, I can see how performing one (single, MIDI) arrangement could be a matter of _RANDOM. Even though I’m concerned this may be a very slippery slope to stand tall on whilst juggling different cues and beats into the current track, it is more sensible than pretending you can code a brute-RND sequencer meant solely for CataDDA purposes. This article has many peers, and is as good a starting point as any.

Good luck!
Vultures.

Thanks. That’s actually really neat. I’m aware that this is where things are. However, I know of one library I can use, one that I’m comfortable using.

That’s STK. It doesn’t work on Windows well enough for me to want to add it. It would add a sound dependency on DirectX, and I don’t want to deal with that. I was originally thinking of adding something to monsters that would output a note for each movement, do so based on speed, and maybe write an ABC file, as it’s a lot easier to write an output module for that than for MIDI.

I have done that, but I don’t like keeping track of as many details as is necessary to write MIDI. It’s such a big file format. Also, keeping track of the delta-time for each MIDI event seems really cumbersome.

So instead of having a track developed on the fly and played back in real time, you end up with a file for your playthrough that is in some way representative of your game experience. I think that’s neat, and I might try to implement it for my own use.
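
If it helps make the idea concrete, here’s a hypothetical sketch of the “note per monster move, written out as ABC” approach; the speed-to-pitch mapping, the file name, and the function are all made up for illustration, not existing game code.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Pick pitches from a pentatonic scale so arbitrary speeds still sound
// vaguely musical (an assumption about what "based on speed" could mean).
static const char *const k_scale[] = { "C", "D", "E", "G", "A" };

// Map a monster's speed to a single ABC note: slower monsters get longer
// notes, faster monsters get shorter ones.
static std::string note_for_speed( int speed )
{
    const std::string pitch = k_scale[ ( speed / 20 ) % 5 ];
    if( speed < 70 ) {
        return pitch + "2";   // double the default note length
    }
    if( speed < 120 ) {
        return pitch;         // default length
    }
    return pitch + "/2";      // half the default length
}

int main()
{
    // Minimal ABC header, then one note per recorded movement.
    std::ofstream abc( "playthrough.abc" );
    abc << "X:1\nT:Playthrough\nM:4/4\nL:1/8\nK:Am\n";

    // Stand-in for speeds logged as monsters moved during a game.
    const std::vector<int> recorded_speeds = { 60, 100, 100, 130, 70, 100, 130, 90 };

    int count = 0;
    for( int speed : recorded_speeds ) {
        abc << note_for_speed( speed ) << ' ';
        if( ++count % 8 == 0 ) {
            abc << "|\n";     // bar line every eight notes
        }
    }
    abc << "|]\n";            // final bar line
    return 0;
}
```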