Moonwalker

The system works now, still without animations, but working nonetheless. I can see the numbers clicking along, and since I know what they mean, I can visualize what they are doing and what they will be doing soon. I’m happy with that.
Since I need animations to build the combat collision code, I worked on one of the enemies’ behaviours, and it turned out to be a giant pain in the ass. All I wanted was for the enemy, if it was close, to back up. That doesn’t seem too terribly hard, but in the context of a system designed for chains of attacks, it devolved quickly. Basically, since the “animation” had no presence in the “Attack Animation” table, it was assumed that the “animation” was always finished. That meant I got 1 game frame of the enemy backing up before it got tired of that. I bashed on it for 30 minutes before finally breaking the Enemy Combat AI into two pieces – Attacking and Everything The Hell Else. Of course, the real trick is figuring out what constitutes an Attack and what should be in the ETHE column.
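In case the split is clearer in code, here’s a minimal sketch of the idea. All the names and distances (ATTACK_ANIMS, EnemyCombatAI, “back_up”, the Up Close threshold) are stand-ins for illustration, not the real tables in the engine: attacks keep going through the attack-animation logic, while everything else decides for itself when it’s done.
```python
ATTACK_ANIMS = {"jab", "slash", "lunge"}   # moves the Attack Animation table knows about
UP_CLOSE = 1.5                             # placeholder "Up Close" distance

class EnemyCombatAI:
    def __init__(self):
        self.current_move = None

    def update(self, dt, distance_to_player):
        # The split: attack moves use the chain-of-attacks / animation-table
        # logic, everything else gets its own branch, so a move with no attack
        # animation no longer counts as "finished" after one frame.
        if self.current_move in ATTACK_ANIMS:
            self.update_attacking(dt)
        else:
            self.update_everything_else(dt, distance_to_player)

    def update_attacking(self, dt):
        pass  # chain-of-attacks logic lives here

    def update_everything_else(self, dt, distance_to_player):
        if self.current_move == "back_up":
            # Current behaviour: stop as soon as we leave the Up Close zone.
            if distance_to_player >= UP_CLOSE:
                self.current_move = None
        elif distance_to_player < UP_CLOSE:
            self.current_move = "back_up"
```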
So it works now; I just need to tweak it to allow a greater backing-up distance. Right now it leaves the “Up Close” zone and stops backing up. I would like it to get further into the “Further Away” combat zone before it wants to do something else, but the Moving Back action is considered a Close behaviour. I think I’ll have the backing up happen no matter what, but only trigger it when Up Close. I think that will work.
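Roughly, the change would be that the condition that starts the move is stricter than the one that ends it. A quick sketch of that, again with placeholder distances standing in for the real zones:
```python
UP_CLOSE = 1.5       # placeholder: only this zone triggers backing up
FURTHER_AWAY = 4.0   # placeholder: keep backing up until we get out here

def should_back_up(already_backing_up, distance_to_player):
    if already_backing_up:
        # Keep going through the middle zone; only stop at Further Away.
        return distance_to_player < FURTHER_AWAY
    # Not moving back yet: only being Up Close starts the move.
    return distance_to_player < UP_CLOSE
```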
Oh, the title: new move means no animation yet, so the walking-forward animation plays while the enemy is walking backwards. If you were around in/remember/heard of the ’80s, you’ll know exactly what I’m talking about.

I’m finding that AI is damned hard. Rather, Dynamic AI with Dynamic Input is hard. For example, the Pathfinding Algorithms are Dynamic in that they consider and take action based on external factors. These factors, however (the level geometry), are static. They do not change, so I can build a system that can easily take every possibility into account and deal with them consistently. Then, like good little automatons, they just go.
Static AI with Dynamic Input is more or less easy too. The Character collision is an example. The Input can change a lot, but the outcomes are all known in advance. There are lots of ways to do something, but they fall into a variety of groups that can be categorized and dealt with. Again, consistent, observable and repeatable results are what you get.
What I’m into now, especially with giving enemies ways to be defensive, is needing them to respond in a dynamic way that also looks natural. So I can’t give straight-up “if this, do this, every time” instructions, since that is unnatural. The stuff needs to be contextual.
Take the new Moonwalking function as an example. An enemy “decides” to back up and move away, then “decides,” since it is now far away, to move back in again. If left to its own devices, it can do this lots of times, which is crap. Yet if I tell it, “if you back up, always lunge back in,” it’s unnatural. The option to act stupid should be allowed from time to time, if only to provide variety.
I’m thinking that judicious use of random number generation and statistical tweaking will get me okay results, but I fear that in the end, the AI will just be lots and lots of specific circumstances. A giant list of exceptions. That’s a terrible way to design a system, any system really. Yet the more I dig into it, the more I’m finding that it’s possibly the only way to do it.
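For the random-number side of it, the rough shape I have in mind is weighting the options by context and rolling, so the sensible choice usually wins but the stupid one still shows up for variety. The move names and weights below are invented for the sketch:
```python
import random

def pick_reaction(just_backed_up):
    # Context-dependent weights: after backing up, lunging back in is the
    # most likely choice, but never the only one.
    options = {
        "lunge_in": 0.6 if just_backed_up else 0.3,
        "hold":     0.3,
        "back_up":  0.1 if just_backed_up else 0.3,  # the occasional "stupid" second retreat
    }
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

# Most rolls after a retreat come back "lunge_in", but "hold" and "back_up"
# still happen often enough to break the pattern.
print(pick_reaction(just_backed_up=True))
```
Statistical tweaking would then mean adjusting those weights per situation rather than piling on another hard rule, though whether that avoids the giant list of exceptions remains to be seen.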
