The executive office (as well as some additional assets) is now done, so I do a bit of NPC scripting.
In this video, I mostly try to teach guards not to be stumped by a locked door. Will I be successful? Watch the video to find out! (Spoiler: I was)
Longer video than usual as I’m trying out a new format. In this video I implement low melee attacks/kicking. If you have any questions, comments or suggestions for stuff you’d like for me to cover in future videos, please let me know!
As teased in my screenshot for last Saturday’s #screenshotsaturday, the big news in the world of Hidden Asset is that the game is no longer in pre-alpha. There are quite a few different opinions on what pre-alpha, alpha and beta mean in the world of videogames, but here’s how I use these terms:
- Pre-alpha: Missing core features and functionality.
- Alpha: Core features and functionality done – missing content.
- Beta: All content done – missing bugfixes and balancing.
So this means that all the core features and functionality of Hidden Asset are done. This is basically all the hardcore programming – a huge milestone! I can now focus almost all my dev time on designing levels, making assets and writing the full script for the game. As a result, the pace of development should also pick up a lot going forward.
There’s just one thing I need to change code-wise before I can leave the heavy coding behind: switching from SDL 1.2 to SDL 2. This shouldn’t be that huge of a deal for Hidden Asset, since I only use SDL as a wrapper for OpenGL and for processing sound and inputs, which I believe hasn’t changed much from SDL 1.2 to SDL 2. I’m also switching Ascii Sector over to SDL 2, though, which should fix some problems with running that game on Mac OS X. Ascii Sector uses SDL for graphics rendering, which seems to have changed a lot from 1.2 to 2, so that may take up some dev time. We’ll see.
When I’ve switched everything over to SDL 2, I’ll build the last of the two introductory tutorial missions for Hidden Asset, and then this alpha version will be ready for closed testing, barring a few bugfixes and polishing. I’ll make an announcement when I’m looking for testers.
In addition to fixing and improving the AI, I’m currently working on adding the ability to carry corpses. Beyond the programming involved, it also entails a new set of animations and a bunch of character sprites for these animations.
When I first started working on Hidden Asset, making these character sprites took a looooong time. At least a couple of weeks were spent making the thousands of sprites just for the walking animation. Since then, I’ve automated a lot of this process so it now only takes a few days. I thought I’d run through the process of making the sprites for a single animation set and how I’ve automated as much as I could to speed it all up…
Every character animation I make starts in Poser. Here, I manually animate the default male Poser character. When I’m happy with his animation, I apply it to the female Poser character and correct any issues that crop up because the two characters aren’t the same size and proportions, so the male animation doesn’t fit her 100%.
For the ‘carrying a corpse’ animation I had to animate two characters at once: the player character and the poor guy/gal that’s dead and being carried. When this was done, I rendered out my template animation frames. For this specific animation, it resulted in these 10 frames of animation (here shown in 1 of the 8 movement directions):
These template frames are then animated in the actual game engine, meaning that each frame gets a set of coordinates that tell the game where the sprite should be drawn relative to the tile the character is on. I’m now ready to start rendering the actual sprites for use in game, since these template sprites are never used directly in the game — only for figuring out the relative coordinates for each frame.
Thankfully, Poser offers something called Poser Script – short scripts written in Python that can be used to automate basically anything you do in Poser. For my needs, these scripts automatically load and apply clothes, hair and the like to the Poser characters, then render out each of these items separately for each frame and direction of the animation. It doesn’t take much time to write such a script, especially now that I’ve been through this process a bunch of times and can just reuse old scripts. So I set a script running in Poser and can usually leave the computer for half an hour or so – or work on something else while the sprites render in the background.
When this has been done for all the body parts, clothes, shoes, hairstyles and everything else for the characters, I usually have a few thousand separate sprite images in my folder. For this particular animation, I actually first rendered out the player character, since he’s just a single sprite:
When rendering the sprites for the character being carried, I applied a flat grey material to anything that wasn’t the specific sprite being rendered to mask out anything that’s obscured by the character itself or the player character carrying it. Let’s use a pants sprite as an example. Here’s one of the pants styles for the first frame of the animation:
Before I got clever about automating as much of this as possible, I only rendered the pants themselves, so I had to manually erase everything that was supposed to be obscured. If you have to do that for X thousands of sprites, you can probably imagine why this took so long for the first set of animations I made for the game. Now I’ve just written a small program that automatically reads all the image files in a folder and erases anything in the image with the exact gray color used for the mask. So I just copy this program into the folder containing the image files and run it. It chugs along for a couple of minutes and then everything has been erased just the way I want it:
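The eraser tool itself is one of my own little Pascal programs, but the core idea is simple enough to sketch in a few lines of Python (the exact mask color here is an assumption; any pixel matching it exactly is made transparent):

```python
# Sketch of the mask-erase step: any pixel that exactly matches the flat
# mask gray becomes fully transparent. Pixels are (R, G, B, A) tuples.
MASK_GRAY = (128, 128, 128)  # assumed mask color

def erase_mask(pixels):
    """Return a copy of the pixel list with mask-gray pixels made transparent."""
    return [(0, 0, 0, 0) if p[:3] == MASK_GRAY else p for p in pixels]

sprite = [(200, 50, 50, 255), (128, 128, 128, 255), (200, 50, 50, 255)]
print(erase_mask(sprite))  # the middle (masked) pixel is now transparent
```

Because the match is exact, legitimate pixels that happen to share the mask color get erased too, which is exactly the kind of manual fix-up described below.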
There are sometimes still a few issues that need to be corrected manually. Sometimes the gray color was also present in another part of the image, so I have to restore the part that shouldn’t have been erased. Or parts of the character were clipping through the sprite and covering it, so I have to patch that area. But it’s negligible compared to the work needed before. I can check and fix all the thousands of separate sprites over a day or two.
I do this in GIMP. But I do more than just fix any problem sprites in GIMP — I also shrink the sprites to half size. Whether drawing by hand or modelling and rendering through a 3D program such as Poser, Blender or 3ds Max, it’s always a good idea to make the original artwork in higher resolution than needed and then shrink it. You’ll get more detail in the final artwork/sprite like that. Also, in my particular case, the jagged edges left behind when the gray areas were deleted become smoother when the image is shrunk:
In keeping with my attempt to automate as much as possible, I’ve written a small GIMP script that automatically shrinks all open images to half size and saves them. So I usually have 20-30 images open at a time in GIMP, then quickly check them for issues that need fixing and then just run my script. Quick and easy!
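The actual shrinking happens inside GIMP, but the effect is essentially a 2×2 box filter. Here’s a hedged Python sketch of the idea on a plain 2D grayscale grid (function name and even-dimension assumption are mine):

```python
def shrink_half(img):
    """Downscale a 2D grayscale image to half size by averaging each
    2x2 block of pixels. Assumes even dimensions; values are 0-255 ints."""
    height, width = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, width, 2)]
            for y in range(0, height, 2)]

big = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
print(shrink_half(big))  # each 2x2 block collapses to one averaged pixel
```

The averaging is also why the jagged edges mentioned above end up smoother: hard transparent/opaque steps get blended into intermediate values.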
For each of these sprites to be drawn in the right place on screen when running the game, the engine needs to know the sprites’ coordinates relative to the template sprites. So when the game is running, these ‘sub-sprites’ (that are each a separate body part or clothing item) are drawn to screen relative to the original template sprites, which were drawn relative to the ingame tile.
To get the coordinates relative to the template sprites, I use another program that I wrote for this purpose. Just as with the program that erased all the masked parts, I copy this program into the image folder and run it. It automatically crops each image and then writes the coordinates that the cropped image’s top left corner had on the uncropped image to a text file. This leaves me with thousands of cropped sprite images that are as small as they can possibly be — and a long list with the coordinates for each sprite:
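The crop-and-record step boils down to finding the tight bounding box of the non-transparent pixels. A minimal Python sketch, operating on a 2D grid of alpha values (the real tool is one of my Pascal programs, and these names are illustrative):

```python
def crop_coords(img):
    """Find the tight bounding box of non-transparent pixels in a 2D alpha
    grid and return (left, top, cropped_rows). The (left, top) pair is what
    would get written to the coordinates text file."""
    rows = [y for y, row in enumerate(img) if any(row)]
    cols = [x for x in range(len(img[0])) if any(row[x] for row in img)]
    left, top = min(cols), min(rows)
    cropped = [row[min(cols):max(cols) + 1] for row in img[min(rows):max(rows) + 1]]
    return left, top, cropped

alpha = [[0,   0,   0, 0],
         [0, 255, 255, 0],
         [0,   0, 255, 0],
         [0,   0,   0, 0]]
print(crop_coords(alpha))  # offset (1, 1) plus the 2x2 cropped region
```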
Using a third program I wrote for this specific purpose, these coordinates are then converted into a format that I can copy/paste directly into the game’s code. Voila!
All I need to do now is gather all these thousands of sprites into a few sprite sheets and feed the coordinate of each sprite on the sprite sheet into the game’s code. This is also done with my own program that spits out a text file with coordinates that are formatted so they can be copy/pasted directly into code.
And there you have it. There are quite a few steps involved, but automating as much as possible has made what was once the biggest bottleneck when developing Hidden Asset almost a breeze!
It started when I wanted to optimize frightened AIs searching for cover to hide behind. Before, they’d check all tiles in a given radius and see if they provided cover by doing line of sight checks for each and every tile. That was quite wasteful. A simple way to improve that system was to have each tile already hold information about the cover it provided — and just fill in this information when loading a map and update it accordingly when doors are opened or closed, and so on.
It also improved performance by not having to do as many line of sight checks in the rest of the code. Because with this improved system, if I’m on a tile that is designated as providing cover in direction X and the enemy I’m checking line of sight for is also in direction X, he doesn’t have line of sight and there’s no need to go through the more performance-heavy process of tracing his eyeline and checking if it intersects with any objects.
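The shortcut above is just a cheap lookup that runs before the expensive geometric trace. A hedged Python sketch (the names and the set-of-directions representation are mine, not from the actual Pascal codebase):

```python
# Each tile stores the directions in which it provides cover; the expensive
# eyeline trace only runs when the cheap lookup can't rule out line of sight.

def has_line_of_sight(tile_cover, enemy_direction, trace_eyeline):
    """Cheap check first: cover in the enemy's direction means no line of
    sight. Otherwise fall back to the expensive geometric trace."""
    if enemy_direction in tile_cover:
        return False
    return trace_eyeline()

cover = {"north", "east"}  # this tile has cover to the north and east
print(has_line_of_sight(cover, "north", lambda: True))  # blocked by cover
print(has_line_of_sight(cover, "south", lambda: True))  # needs the full trace
```

The win is that the precomputed flags are updated only when the map changes (doors opening and closing, and so on), while line of sight gets queried constantly.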
And now that I had information for a tile’s cover on hand, it was easy to visualize this information by borrowing XCOM’s shield indicators. The small difference being that in XCOM, your soldiers automatically crouch behind low cover, while in Hidden Asset you have to do that manually, so there’s a crouch icon that shows if you have to crouch behind the cover:
Combined with the shadowcasting from all characters, these cover indicators in Tactical Mode make it a lot easier to sneak around and avoid being spotted. So not only did I improve the AI cover finding, I also made it easier for the player to understand the cover system. Two birds with one stone!
Until now, NPCs in Hidden Asset didn’t care whether they walked on the road or sidewalk. When finding a path from one spot to another, they’d always choose the shortest path. But having NPCs walking in the middle of the road without a care in the world is pretty immersion-breaking. So I had to implement some kind of way of having NPCs prefer walking on the sidewalk. This turned out to be much simpler than I had anticipated.
A* pathfinding works by checking potential paths between nodes until the destination is reached. For each node, you store how far you’ve traveled to reach that node. When the destination is reached, you just backtrack through the nodes with the smallest ‘distance traveled’ value. This gives you the shortest path to the destination. For my purposes, each tile is a node.
The distance traveled can just be 1 for each tile. So if you move through 4 tiles to reach the destination, the distance traveled will of course be 4. But because the pathfinding will choose the shortest path, changing this 1 to some other value for specific tiles or situations can have huge effects on the final path found.
One example of this is that I don’t use the same value when moving diagonally as when moving straight. While moving straight increases the ‘distance traveled’ by 1, moving diagonally increases it by 1.4. Quite simply, you’re moving further when cutting diagonally across a tile than when moving straight across it – about 1.4 (√2) times as far for a square tile.
A Pascal code example:
IF MovingDirection IN Diagonal
THEN Tile[NextLayer, NextX, NextY].G := Tile[ThisLayer, ThisX, ThisY].G + 14
ELSE Tile[NextLayer, NextX, NextY].G := Tile[ThisLayer, ThisX, ThisY].G + 10;
This code snippet takes the ‘distance traveled’ value (G) of the current tile and applies it to the next tile while increasing it by either 14 or 10, depending on whether or not it’s a diagonal movement (I use 14 and 10 instead of 1.4 and 1 in order to stick to integers — the individual values don’t matter, they’re only important in relation to each other).
However, I can also say that moving across a road tile will increase the ‘distance traveled’ value by an additional 10. The result of this is that an NPC will only choose to move across a road tile if sticking to the sidewalk means that he’ll have to walk at least an extra 10 tiles. Furthermore, I can tell the pathfinding that if there’s a pedestrian crossing on the road tile, don’t add the road ‘penalty’ to the pathfinding.
So I add a function call to the above code. This function returns the ‘penalty value’ for a given tile.
IF MovingDirection IN Diagonal
THEN Tile[NextLayer, NextX, NextY].G := Tile[ThisLayer, ThisX, ThisY].G + 14
ELSE Tile[NextLayer, NextX, NextY].G := Tile[ThisLayer, ThisX, ThisY].G + 10;
IF WeightTiles THEN Tile[NextLayer, NextX, NextY].G := Tile[NextLayer, NextX, NextY].G + TilePathingWeight(NextLayer, NextX, NextY, MovingDirection IN Diagonal);
Note that I only add these tile ‘penalties’ if the pathfinding function was called with WeightTiles defined as TRUE. I don’t add this tile weighting when pathfinding for the player character, for example, as it should be up to the player to decide whether or not to use the sidewalk and pedestrian crossings.
The result of all this is that characters will stick to sidewalks and pedestrian crossings, but if they really need to cross the road and they’re far from a pedestrian crossing, they’ll still choose to cross the road instead of taking a long detour. Here’s my NPCs demonstrating safe traffic behavior (though they don’t care about red lights):
Another use for this would be to have characters walk around corpses instead of over them. If there’s a corpse on a tile, just add 100 (or some other large value) to that tile’s penalty.
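To show the whole weighting idea end to end, here’s a minimal Python sketch of weighted pathfinding on a tile grid (the game’s version is Pascal; the tile types, penalty values and Chebyshev heuristic here are my illustrative choices):

```python
import heapq

# 0 = sidewalk, 1 = road (+10 penalty), 2 = pedestrian crossing (no penalty).
PENALTY = {0: 0, 1: 10, 2: 0}

def find_path(grid, start, goal):
    """Weighted A* on a tile grid: straight moves cost 10, diagonals 14,
    plus the destination tile's penalty. Returns the cheapest path."""
    height, width = len(grid), len(grid[0])
    open_set = [(0, 0, start, [start])]       # (f, g, tile, path so far)
    best_g = {start: 0}
    while open_set:
        _, g, (x, y), path = heapq.heappop(open_set)
        if (x, y) == goal:
            return path
        if g > best_g[(x, y)]:
            continue                          # stale heap entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < width and 0 <= ny < height):
                continue
            step = 14 if dx and dy else 10    # diagonal vs straight
            ng = g + step + PENALTY[grid[ny][nx]]
            if ng < best_g.get((nx, ny), float("inf")):
                best_g[(nx, ny)] = ng
                # Admissible heuristic: 10 * Chebyshev distance to the goal.
                f = ng + 10 * max(abs(goal[0] - nx), abs(goal[1] - ny))
                heapq.heappush(open_set, (f, ng, (nx, ny), path + [(nx, ny)]))
    return None

# A road running vertically through the middle, with a crossing in row 1:
city = [[0, 1, 0],
        [0, 2, 0],
        [0, 1, 0]]
print(find_path(city, (0, 0), (2, 0)))  # detours through the crossing
```

With the road penalty in place, the path dips down through the crossing at (1, 1) instead of cutting straight across the road tile at (1, 0), even though the straight route covers fewer tiles.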
There have been a bunch of improvements to Hostile Takeover‘s visuals in the past year. I’ve upped the graphics from 24-bit to 32-bit, since it seemed more and more silly to sacrifice visuals for better performance on low-end computers. And I also abandoned the older, more retro UI in favor of a somewhat more modern and less obtrusive design.
But one of the changes that took the longest to implement was the new and improved lighting system. Not because it’s all that complex, but because it took some tinkering and experimentation to figure out how to do it right. So that’s what I want to talk about in this blog post.
In the below image, you can see the 3 levels of lighting quality that you can set in the game’s options menu. These 3 levels represent the lighting system as it has progressed since I first started working on the game. Instead of just using the highest quality in the game and scrapping the older versions of the system, I decided to allow players to set the lighting quality in the game, since on older computers, lowering the quality can improve frame rate slightly.
The quality 1 lighting just considers the lighting level of a single tile. The floor sprites have the level of lighting of the tile they’re on, and the wall sprites the tile they’re facing.
With quality 2 (which is what was available in the old releases of the game), the lighting is smoothed between adjacent floor and wall sprites. To achieve this smoothed effect, I use vertex coloring. In OpenGL, vertices are basically the corners of the shape you’re drawing to screen. Floor and wall sprites are drawn as rectangles (‘quads’ in OpenGL) and therefore each have 4 vertices:
When drawing one of these sprites to screen, I can tell OpenGL how light or dark each corner of the sprite should be, and OpenGL then smoothes out the light level across the entire sprite and between the corners. To make adjacent floor and wall sprites blend more into each other, I set the light levels of the corners to be an average of the sprites that share that corner.
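The corner averaging above is simple enough to sketch in Python (the grid representation and function name are mine; in the game this feeds directly into OpenGL vertex colors):

```python
def corner_light(tile_lights, x, y):
    """Light level of the vertex at grid corner (x, y): the average of the
    up-to-four tiles that share that corner. tile_lights[ty][tx] is 0.0-1.0."""
    shared = [tile_lights[ty][tx]
              for ty in (y - 1, y) for tx in (x - 1, x)
              if 0 <= ty < len(tile_lights) and 0 <= tx < len(tile_lights[0])]
    return sum(shared) / len(shared)

lights = [[1.0, 0.5],
          [0.5, 0.0]]
print(corner_light(lights, 1, 1))  # interior corner shared by all four tiles
print(corner_light(lights, 0, 0))  # map-edge corner touched by one tile
```

Because two adjacent quads share the same corner value, the interpolated gradients meet at the seam instead of jumping, which is what makes the blend work.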
This works fairly well and certainly gives smoother lighting than for lighting quality 1. It’s not perfect, however, and as you can see in the comparison image, there are still some hard edges between the individual floor and wall sprites. The reason for this is that the corners of the actual floor and wall images don’t line up with the corners of the sprite. The floor is diamond shaped, while the texture is a rectangle. And because of the isometric perspective, the corners of the actual walls don’t line up with the corners of the sprites either. To fix this, I split the sprites into sub-sprites when drawing them to screen:
This gives me sprite corners that line up with (some) of the floor and wall corners. And since the floor and wall corners now line up with the sprite’s corners, the vertex coloring makes the smooth lighting match up better between adjacent floors and walls. As a result, the transition between adjacent floors and walls is practically seamless. Here’s how the corners of a wall’s sub-sprites match up perfectly with the corners of an adjacent wall’s sub-sprites:
I’m really pleased with this system — not least because I’m now getting almost perfectly smooth lighting without any use of shaders or other more demanding methods.
I was hoping to have v0.1.4 out by now, but I’m constantly running into new issues and bugs that need fixing. One of the big causes for all this is allowing the player to shoot at characters in an elevator and being shot at while in an elevator. The actual shooting part isn’t a problem – it’s having a dead body in the elevator along with any dropped weapons.
For the dropped weapons, I just had to make sure that they’re ‘teleported’ to the new level whenever the elevator moves from one level to another. Not such a big issue, but it took some time to get working. With the actual dead bodies, I found that the bodies would sometimes come back to life and chase down the player that originally killed them. The NPC would basically get snapped out of his dead state for some reason and continue his gunfight with the player (the weirdness of which was exacerbated by the fact that the NPC had dropped his gun when dying and would therefore just be firing his empty hand at the player!).
It took a few hours of staring at code and trying different things before I figured out what was going on: When an elevator moves from one level to another, the code runs through a loop and moves all characters in the elevator to this new level. In doing so, the tiles on the new level are set to being occupied by these characters. Except dead bodies shouldn’t occupy (and thus block) a tile. This meant that whenever another NPC tried to move onto this tile, it would tell the dead NPC to get out of the way – and voila, the dead guards would get back on their feet and move out of the way. Since they were now in a ‘walk’ animation instead of the ‘dead’ animation, they were quite literally reanimated.
Also, if you’ve ever wondered why killed characters in the original X-COM always dropped dead by pretty much just collapsing in on themselves, it’s because corpses that stretch across multiple tiles can create all sorts of graphical issues. Such as corpses clipping through walls or doors. This will only become more apparent in Hostile Takeover with corpses in elevators, as the elevators aren’t large enough for the corpse sprites. I will, however, be making more corpse sprites in the future to remedy this, as well as running various checks to make sure corpses aren’t clipping through walls. For now, just know that this isn’t a bug but just something further down on my to-do list.
I spent the entirety of yesterday tracking down a tough bug. Through casual testing, I had discovered that if characters were trying to run through a door, but their path was blocked, they’d run through the wall next to the door instead. And seeing as those characters were neither ghosts nor X-Men, that wouldn’t do.
At first, I thought their paths were getting out of whack. The pathfinding works by instantaneously finding a path from start to end, then having the character store the directions of this path (move forward, move left, move left, move down – you get the picture). But if for some reason these directions went out of sync with where the character actually was, that would account for weird pathfinding, such as running through walls. So I spent a great deal of time searching for places in the code that could potentially push a character out of sync with his pathfinding. I wasn’t able to find any.
So with that theory abandoned, I started running through the actual pathfinding code. Outputting some debug info revealed that the pathfinding did indeed path its way through a wall. So the issue arose from the pathfinding itself. And that’s when the fact that this was always happening next to a door lit a lightbulb above my head (I quickly turned off that lightbulb, as the glare on the screen was distracting). There was a problem with how doors were handled in pathfinding. Pathfinding is a three-step process in my code. First, a path is attempted while respecting locked doors. If no path is found, a path is attempted while ignoring doors completely (since the character will be stopped anyway when trying to move through a locked door). If a path still can’t be found, the code will find the closest reachable spot to the destination and find a path to this spot instead.
The issue was with the second step: ignoring locked doors. The code didn’t check properly that the door to ignore was actually in the direction the character was moving, just that the door was associated with the tile the character was currently on. So if a character was trying to run left, through a wall, and there was a door to the right that was associated with this tile, the game would think that the wall obstructing the character on the left was a door as well – and since the second pathfinding step ignores doors, the wall would be ignored. The fix was just to make sure that the door was actually in the direction the character was moving from this tile. 6 hours spent finding a bug, 2 minutes spent fixing it once found. That’s programming, I guess.
Anyway, there’s currently only one AI thing left to do: characters reacting to finding a corpse. After that, I need to polish some minor things, do a bunch of testing, fixing the bugs that are bound to reveal themselves from this testing, and then packing up everything I’ve got so far for release. This is the home stretch!
I got two big AI algorithms implemented this week. Originally, I had planned to have the searching and chasing algorithms combined in one, but it turned out to be easier to do if split into two separate algorithms.
With the chasing algorithm, the AI will remember where the target was last seen and which direction it was moving in. The AI will then move to this tile and continue chasing in the direction the target was last seen escaping in. If no new visual contact is made with the target, the AI can start checking doors or side corridors it moves past, while continuing to chase in the direction it suspects the target ran off in. (If the target could have escaped in two different directions, for example, and one of these directions was outside of the AI’s line of sight at the last ‘waypoint’, the AI will continue in that direction, since the target would likely already have been spotted if it had escaped in the direction that wasn’t hidden from the AI.)
The searching algorithm is a bit simpler: the AI will just choose some random waypoints within a certain distance from the central search tile, then go to these in turn, stand there for a moment and look in different directions before moving onto the next waypoint.
With both of these algorithms, I ran into an issue when the AI’s path was blocked by a locked door. The AI character would just stand there indefinitely. So I also implemented a procedure for checking if an AI’s path is blocked by a locked door or non-moving character – and then abandoning the chase or moving to the next search waypoint if blocked.
The overall structure that governs each AI character’s behavior is made up of separate orders. A character will usually start off with just 1 order, for example patrolling an area, standing at a counter or just walking across the map. Certain events can cause a character to get a new order, such as when a patrolling guard spots the player. Instead of overwriting the old order, this new order is just stacked on top so the character now has 2 orders (the orders for each character are just stored in a dynamic array). When executing orders, the code always executes the last order in a character’s order stack. And when the current order is done (if the guard has lost track of the target he was chasing, for example), this order is just removed and the previous order becomes top of the stack again – meaning that the guard just returns to his previous order and goes back to patrolling.
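The order stack described above maps almost one-to-one onto a list used as a stack. A hedged Python sketch (class and order names are mine; the game stores orders in a Pascal dynamic array):

```python
class Character:
    """Sketch of the stacked-orders idea: new orders are pushed on top,
    the last order is always the one being executed, and finishing an
    order falls back to the previous one."""

    def __init__(self, base_order):
        self.orders = [base_order]   # the order stack

    def push_order(self, order):     # e.g. a patrolling guard spots the player
        self.orders.append(order)

    def current_order(self):         # always execute the last order
        return self.orders[-1]

    def finish_order(self):          # order done: previous order takes over
        if len(self.orders) > 1:
            self.orders.pop()

guard = Character("patrol")
guard.push_order("chase player")
print(guard.current_order())  # chasing takes priority over patrolling
guard.finish_order()
print(guard.current_order())  # chase over: back to patrolling
```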
It works really well and isn’t all that complex coding-wise, so I’m pretty happy with this solution. I haven’t really read up on AI programming theory, so this may all be pretty basic or incredibly stupid, but hey, it works for me!
I’ll probably be making a new development video demonstrating the new AI stuff in a week or so. Stay tuned for that.
This past week I finally managed to squash a bug that would sometimes corrupt map files when I removed world objects in the map editor. I was basically just setting the object type to zero when saving the map, but when loading it, the code wasn’t expecting zero’ed object types, so it would try to place these non-existent objects. It was of course just a matter of not writing these objects to the map file at all. Seems simple, but still took a bit of time to figure out.
And, as always, work continues on the AI. This week I fiddled a bit more with AI shooting – primarily tweaking the checks so that there’s a pretty big chance of a hit when firing at a target right next to or very close to the shooter. My algorithms were working fine for shots at a distance, but the AI would often miss a target right in front of it, which seemed kinda silly at times.
I’ve also started work on characters detecting you from hearing you. My current framework is that characters will always react to gunfire (if they’re a guard that’s within range of hearing it) by going to the spot they think the sound came from and starting a search routine with that spot as the center. If you’re making noise in a yellow area, this can cause characters to turn to face you – and react with suspicion if you’re sneaking or crouching. And any noise made in a red area can cause a guard to start searching for you and attacking you if you don’t leave quickly enough (or if the guard is already hostile towards you).
That’s the basic framework, which I’ll then tweak so that civilians for example react differently than guards. For one, they shouldn’t always come running to investigate the sound of gunfire, but should instead most of the time run away from it in a panic. After that, I’ll start working on NPCs reacting to finding a corpse.
I finally got the last big optimization working last week – well halfway. Explanation follows.
The way I do the x-ray effect in Hostile Takeover is to draw the area around the player character without drawing walls in front of him nor any other objects that you should be able to see through. This is drawn to the back buffer and never actually shown on screen. I then read this from the back buffer into a 128×128 pixel image and change the alpha value of the pixels so that the further they are from the image center, the more transparent they are. I then draw the screen as usual and finally draw this 128×128 image back on top of this.
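The radial transparency part of that pipeline is just a distance-based falloff. A Python sketch of the per-pixel alpha (the linear curve is an assumption on my part; the game may shape the falloff differently):

```python
import math

SIZE = 128  # the x-ray overlay is a 128x128 image

def xray_alpha(x, y):
    """Alpha for pixel (x, y) of the x-ray overlay: fully opaque at the
    center, fading linearly to fully transparent at the edge."""
    cx = cy = SIZE / 2
    dist = math.hypot(x - cx, y - cy)
    return max(0.0, 1.0 - dist / cx)

print(xray_alpha(64, 64))  # center of the overlay: fully opaque
print(xray_alpha(0, 64))   # edge of the overlay: fully transparent
```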
The problem with this is the part where I read the pixels from the back buffer into the 128×128 texture. The whole rendering pipeline is optimized for writing/uploading information to the GPU and framebuffer, not reading/downloading. So on older systems, this can create a bit of a bottleneck. The solution is to use a framebuffer object (FBO) instead. FBOs were an extension to OpenGL that I believe now has become a core feature. Instead of writing to the back buffer and then reading back the information in order to apply the gradual transparency and writing the whole thing to a texture, I can use FBOs so that a texture can be set up to work like a framebuffer, meaning that I can draw everything directly to the texture – cutting out the bottleneck of reading pixel information from the back buffer.
It all works fine and dandy and quite a bit faster – at least when running in a window. I’m having an issue with the 16-bit depth of the game messing up the FBO in fullscreen. So for now, it’ll just fall back to the slower method when running in fullscreen, and I probably won’t be spending more time on this for the pre-alpha release. But if you’re running the game on a computer that isn’t more than 5 years old, you probably won’t have any issues with this anyway. And if you do, you can just run in window mode for now.
So that’s off my to-do list for now. Which means that I’m now finally fully into AI. The first thing I did was extending the pathfinding code to work across multiple layers, meaning that NPCs now know how to use elevators. Following that, I’m focusing on patrolling guards and how they’ll react to spotting you.
There are three types of areas, color coded green, yellow and red on the map. In a green area, your stealth level can’t decrease. In a yellow area, it’ll decrease if you get spotted doing anything you shouldn’t be doing. And in a red area, you’ll lose stealth as soon as you’re spotted – and if you don’t leave fast enough (or if the guards are already suspicious of you), you’ll get attacked. All that is working, except for the attacking part. So I’m currently teaching my little NPC dudes how to shoot guns. They’re slowly getting the hang of it.
Last week I managed to complete the saving and loading stuff. I also figured out how to use zlib compression, so map files are only about 10 kB in size (compared to more than 1 MB uncompressed) and save files just over a kilobyte. Right now, the entirety of Hostile Takeover is under 20 MB in size, in case you’re wondering. But that will of course grow as more maps and assets get added.
After the saving and loading stuff, I was planning on doing the last big optimization, but plans are made to be broken, so I instead added a ‘current objective’ button (that also provides tutorial hints), as well as a ‘mission restart’ button (won’t yet be fully implemented in the first pre-alpha release) and a timer.
Currently, there’s only one use for the timer: If your stealth level drops all the way down to zero, you’ll have a few minutes to complete your assassination and escape the map or you’ll fail the mission. In the finished game, I’d like for the mission to not just fail automatically when the timer runs out, but instead have extra police/security guards drive onto the map and surround the building. So you’ll still have a chance of completing your mission – getting out alive will just be very hard.
I’m now diving into that optimization thing. For real this time!
Last week was off to a crazy start when I suddenly found Hostile Takeover adorning the front page of Rock, Paper, Shotgun. As you can probably imagine, this brought in a whole bunch of traffic, so a quick welcome to the game’s many new followers is in order! It was also quite the motivational boost – not that my motivation to work on the game really needed boosting, though, but it’s always nice when people seem excited about the thing you’re working on.
Anyway, I’ve mostly been working on saving and loading during the past week. Until now, everything would just be saved in the map file itself (and some stuff, like your inventory, wouldn’t be saved at all) whenever I quit the game. This was fine for developing and testing maps and such, but of course wouldn’t at all be suitable for the actual game. So I’ve had to decide which information should go in the map file and which should be saved in the savegame file. It’s basically boiled down to all static information (such as tiles, walls and most world objects) being in the map file, and everything else being in the savegame file.
This means, however, that the map files are 1+ MB in size, while the savegame files are only 10+ kB. The latter is fine, the former really isn’t. So I’m reading up on the Free Pascal zlib unit to be able to compress the map files with zlib – with WinZip, a 1+ MB map file gets shrunk to about the same size as a savegame file. I’m not sure if zlib can get the same results, but the more important side effect of compressing the map and savegame files is that they won’t just be plain text that’s easy to mess up or screw around with.
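For illustration, here’s what that round trip looks like with Python’s zlib module (the same algorithm the Free Pascal unit wraps); the sample text just stands in for a real map file:

```python
import zlib

# Stand-in for a plain-text map file: highly repetitive data like tile and
# wall listings compresses extremely well.
map_text = ("tile 1 1 grass\nwall 1 2 north\n" * 5000).encode("utf-8")

compressed = zlib.compress(map_text, level=9)  # level 9 = best compression
restored = zlib.decompress(compressed)

assert restored == map_text  # lossless round trip
print(len(map_text), "->", len(compressed), "bytes")
```

As a bonus, the compressed file is no longer human-readable, which matches the tamper-resistance point above.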
I also changed the load/save menu ever so slightly. When you save a game, typing in a savegame description is now optional. If you don’t, the save will just state the location you’re at in the game as well as the real-world time at which the savegame was created.
When I’m done with the save/load stuff, I’ll probably be moving on to an important optimization that I’ve been putting off for some time now. It has to do with how the x-ray effect is created. But I’ll save that for a later post. And after that, it’ll probably be all about AI and level building.
For the past week, I’ve been doing a lot of behind-the-scenes work on Hostile Takeover. Some of this work has gone into refactoring the code to make it easier to add environmental sprites and data in the future. And a bunch of work has gone into making the map editor as usable and powerful as possible.
Until now, the information for each object making up the game’s environment has been spread out across multiple files, procedures and units. To draw a chair, for example, the code needed to know which specific sprite to use (based on the map and/or object’s rotation), which texture atlas that sprite could be found on, where it should be drawn on the screen relative to the tile it’s on, whether the sprite should be flipped vertically or horizontally, and how the object blocks line-of-sight – and all this information was scattered around in the code, with some of it depending on whether the code was running in-game or in the map editor. Not only was this highly inefficient from a coding point of view, as information would be duplicated across procedures and files, but adding new stuff to the game would’ve been a nightmare. I’ve now split everything into three highly structured files, ensuring that this process will be a lot simpler and quicker.
Adding the individual sprites and objects to the game is only the first part, though. I then have to build a map with them. Combining ground, walls, interactive objects, light sources and characters into a coherent whole can quickly become a messy process, so having a powerful and easy-to-use map editor is essential. I believe it takes a lot of iterations to make a good and challenging map – and the easier it is to build and modify a map, the more willing I’ll probably be to scrap something that simply isn’t working, or refine something that isn’t quite there yet.
So while my map editor may not look very flashy – and is probably very fiddly to work with if you don’t happen to be the guy that programmed it – it does everything I need it to do to build a map as comfortably and easily as possible.
When a map has been built, I’ll move to the scripting language to program the effect of the player’s interaction with the map, as well as dialogues (between the player and NPCs) and conversations (between NPCs).
I was considering just writing an “I’m still alive” blog post to let everyone know that I’m, well, still alive and still very much working on Hostile Takeover. But that’s a pretty short and boring topic, so here’s a completely different blog post instead…
I’ve been working on the AI for Hostile Takeover. I like having a clear goal with the stuff I’m doing, so right now I’m putting the final touches on having a guard patrolling an area (by passing through designated waypoints) and being able to spot and attack you. To determine whether or not the guard can see the player, three factors have to be taken into account: field of view, distance and line-of-sight.
Field of view is simple to implement. You just calculate the angle from the guard to the player and check if it’s within the guard’s field of view (95° to either side). And distance is applied by having the chance of the guard spotting you decrease as the distance increases. But line-of-sight can be a bit more difficult to implement – especially for a 2D game. With a 3D game, you’ve already got the geometry set up in the game world, so you can “just” send off a ray from the guard to the player and check if it intersects with any world geometry. But the geometry of my 2D isometric world is a bit more abstracted.
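As a sketch of the first two checks in Python: the 95° half-angle matches the post, but the linear distance falloff and its cutoff are illustrative guesses, not the game’s actual numbers.

```python
import math

def in_field_of_view(guard_x, guard_y, facing_deg, player_x, player_y,
                     half_fov_deg=95.0):
    """True if the player falls within the guard's field of view.
    facing_deg is the direction the guard is looking, in degrees."""
    angle_to_player = math.degrees(
        math.atan2(player_y - guard_y, player_x - guard_x))
    # Wrap the angular difference into [-180, 180] before comparing.
    diff = (angle_to_player - facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_fov_deg

def spot_chance(distance, max_distance=20.0):
    """Chance of being spotted falls off linearly with distance
    (the falloff curve and cutoff are assumptions for illustration)."""
    return max(0.0, 1.0 - distance / max_distance)
```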
Walls don’t automatically have volume, for example. Everything plays out on a flat map with just two dimensions, X and Y. There’s no height. No Z-axis. So when a character stops in front of a wall, it’s not because that character’s hitbox was intersecting with the wall’s geometry, as would probably be the case in a 3D game. Instead, the 2D map tells the character that he can’t move from tile A to tile B because there’s a wall in between.
At first, I tried doing line-of-sight by drawing a Bresenham line from the guard to the player with each point on the line being a whole tile. If this line didn’t intersect any walls, there was line-of-sight. The problem with this solution was that it wasn’t very precise. Characters are often moving between tiles and are sometimes only partially obscured by a wall, and using an entire tile as a point on the Bresenham line was too low resolution:
So the solution was quite simply to increase the resolution by dividing each tile into subtiles and occupying these subtiles with whatever could break line-of-sight. I decided that a 10x resolution would be sufficient (primarily because characters have 10 frames of animation when moving from one tile to another, so they can occupy 10 different spots in between tiles, which can be represented by dividing each tile into 10×10 subtiles). I’m now just drawing the Bresenham line using these subtiles as points on the line. Whenever a subtile is plotted, the game checks whether or not it’s empty. If it isn’t, line-of-sight has been broken:
I don’t have to create subtiles for the entire map, just the single tile that the Bresenham line is currently passing through, so it’s neither memory nor processor intensive. To occupy the subtiles, I have a procedure that starts off by setting all 10×10 subtiles as empty, then checks for characters, walls and other world objects on that tile. If any are there, the relevant subtiles are filled in.
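Here’s a much-simplified Python sketch of the whole subtile line-of-sight idea. The `blocked` set is illustrative; as described above, the game only fills in subtiles for the tile the line is currently passing through:

```python
# Each tile is split into 10x10 subtiles; the Bresenham line runs over
# subtile coordinates, and sight is broken the moment it hits a filled one.

def bresenham(x0, y0, x1, y1):
    """Yield the integer points of a Bresenham line from (x0,y0) to (x1,y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def has_line_of_sight(guard, player, blocked, subtiles_per_tile=10):
    """guard/player are (tile_x, tile_y) positions as floats; blocked is a
    set of subtile coordinates occupied by walls or other sight blockers."""
    gx, gy = (int(c * subtiles_per_tile) for c in guard)
    px, py = (int(c * subtiles_per_tile) for c in player)
    return all((x, y) not in blocked for x, y in bresenham(gx, gy, px, py))
```

For example, a wall segment occupying the subtile column at x = 15 blocks a guard at tile (0.5, 1.5) from seeing a player at (2.5, 1.5), but not one at (0.5, 0.5).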
I believe good ol’ X-COM did something similar, but also had a Z-axis for individual tiles, so it in essence had subcubes that the ray was passing through. This, for example, allowed a bullet to pass through an open window. I’m considering expanding my own system to that as well, since I’ll most likely be adding functionality for shooting at characters on different levels of the map. But I’ll wait and see if it actually ends up being necessary.
As mentioned briefly in the last blog post, I’ve been working on smooth lighting. That’s now complete and working as well as I think I’ll get it to. Check out these comparison screenshots:
As I also mentioned in the last blog post, this was somewhat inspired by the Project Zomboid programmer adding smooth lighting to that game (and now I notice that they also posted some comparison screenshots just yesterday!). I believe he accomplished this by overlaying a multiplied and untextured polygon that completely matches the shape of each ground tile, with the vertices of that polygon set to color values that create a smooth transition. I tried this as well, but wasn’t quite able to get the same results, and switching the OpenGL blending between normal mode and the multiply mode each frame also added a bit of overhead, so I went for a simpler method where I just supply the smoothing color values to the vertices of the sprite itself, skipping the overlay entirely.
The Project Zomboid method results in much smoother lighting on the ground (there really aren’t any hard edges at all), while my method still retains some of the per tile lighting, which I kinda like. It’s sort of halfway between the old X-COM hard lighting and the completely smooth lighting of Project Zomboid. I’m adding a toggle to the options menu to turn smooth lighting off if you prefer the old school hard edges.
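As an illustration of the vertex-color approach in Python: one plausible way to compute the smoothing values is to give each tile corner the average light level of the tiles sharing that corner. This averaging scheme is my assumption for the sketch, not necessarily what the game does:

```python
# `light` is a hypothetical 2D grid of per-tile light levels (0.0 - 1.0).

def corner_light(light, cx, cy):
    """Light value at grid corner (cx, cy): the average of the up-to-four
    tiles that share that corner."""
    tiles = [(cx - 1, cy - 1), (cx, cy - 1), (cx - 1, cy), (cx, cy)]
    values = [light[ty][tx] for tx, ty in tiles
              if 0 <= ty < len(light) and 0 <= tx < len(light[0])]
    return sum(values) / len(values)

def tile_vertex_lights(light, tx, ty):
    """The four vertex light values to hand to the tile sprite's vertices,
    which the GPU then interpolates across the tile."""
    return [corner_light(light, tx, ty),          # top-left corner
            corner_light(light, tx + 1, ty),      # top-right
            corner_light(light, tx, ty + 1),      # bottom-left
            corner_light(light, tx + 1, ty + 1)]  # bottom-right
```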
I’m currently optimizing the map drawing code for Hostile Takeover’s isometric engine. It’s basically all about minimizing the number of tiles and layers that the code has to loop through each frame to check for stuff to draw – and also trying to minimize the number of texture bindings for OpenGL. Anyway, as part of this, I tried disabling the wall shadows (which I originally added to create an ambient occlusion effect) to see if they had any substantial impact on the time it takes to draw a frame. They don’t really, which didn’t surprise me much, but the test has left me with doubt as to whether or not wall shadows should even be in the game.
The image on the left is with wall shadows, the image on the right is without:
On one hand, I think the wall shadows make the structures pop a bit more and make everything feel less flat. On the other, I can’t shake the feeling that the wall shadows make it seem as if the walls are floating ever so slightly above the ground. So I can’t decide if I should keep them in or discard them. What do you think?
By the way, the shadow for the “Hensley International” sign is also missing in the right-hand image. That’s just because I completely disabled environment shadows for this test. Stuff like that sign will have shadows – that much I’ve decided on. Of course, it might just be weird if walls don’t have shadows when everything else does…
The world of Hostile Takeover has been pretty flat for a while. There’s been no z-layer, meaning that everything has been on the same level. That’s changed now. In the below images, you can see a building that’s currently 2 stories high (it’ll probably be 4 stories when done, but there’s really no limit to how many z-layers I can add). When you enter such a building, none of the layers above will be drawn, giving you an unhindered view of the level you’re on.
I’ve also added doors that automatically open and close when you need to pass through them, as well as windows. I’m currently working on adding elevators to transport you between the levels of a building (I really don’t want to make an animation for walking up/down stairs — at least not yet). You can already see the elevator doors on the first level of the building.
One final thing I’ve been working on is the map editor. It runs completely in the game engine, so I just press ‘D’ (for ‘developer mode’) when playing to move walls and stuff around. I’ll probably include the map editor with the game to allow players to make their own maps. Along with the scripting language, this will let players create missions for the game, so there will probably be two modes: the campaign with the full story and individual player made missions.
The map editor is still a bit buggy. It works fine for my own purposes as I’m aware of its limitations (such as placement of walls and stuff only working as intended when the map isn’t rotated), but I need to clean it up a bit before it can be used by other people. Look for it in the next development video, which I’ll probably make when I’ve got elevators working.
It’s usually a bad idea to start optimizing your program too early, but I’d been running into some performance issues with my Hostile Takeover engine. The game would slow down considerably when there were many sprites to draw to the screen. The issue was most noticeable when there were many characters on the screen at once, since each character consists of about 14-16 subsprites that are all individually drawn to the screen:
Until now, the subsprites for a single character frame were mostly gathered on a single texture atlas, but there were still some instances where the subsprites were spread out over several atlases – the weapon subsprites, for instance. When a character carrying a weapon was drawn to the screen, the code would first draw all the subsprites “behind” the weapon subsprite, then switch over to the texture atlas with the weapon and draw that, and then switch back to the original texture atlas to draw the remaining character subsprites “in front of” the weapon subsprite. Switching texture atlases like that is expensive, as the GPU has to upload the new texture atlas data.
I had originally just planned on gathering all the subsprites that make up a character frame on one texture atlas, so that the code wouldn’t need to switch back and forth between different atlases when drawing a single character. I did that, and I did get a slight performance increase, but it wasn’t enough. So, what I’ve done now is decrease the image quality of the sprites themselves.
Until now, the sprites were all RGBA8888. This means that each of the three color channels (RGB) plus the alpha channel (A) uses 8 bits of memory, making the images 32-bit (8×4). This allows for 16,777,216 different colors. It also takes up a lot of memory, though, which again means that the GPU has to push a lot of data for each sprite that needs drawing.
The sprites are now instead 16-bit or RGBA4444. This immediately halves the amount of data and memory used, and provides a pretty significant performance boost. 16-bit “only” allows for 4,096 different colors, though. Compared to the 16,777,216 in 32-bit, you’d think that the difference in quality would be pretty severe. I don’t think it is, though. See for yourself:
At a quick glance, I can hardly tell the difference. But if you look closely, you’ll notice that there’s a slight grain to the RGBA4444 image. This is because dithering is used to simulate a higher number of colors. I actually kinda like this grain effect. It adds a texture and feel to the images that I can’t quite put my finger on. So I think I’ll stick with this. Increased performance, lower memory use and a grain effect that I kinda like. It’s a win-win!
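For the curious, the core of the 32-bit to 16-bit conversion is just bit shifting. Here’s a Python sketch of the quantization (without the dithering step, which is what produces the grain):

```python
def to_rgba4444(r, g, b, a):
    """Pack an 8-bit-per-channel pixel into a 16-bit RGBA4444 value by
    keeping only the top 4 bits of each channel."""
    return ((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4)

def from_rgba4444(pixel):
    """Expand back to 8 bits per channel. Each 4-bit value is replicated
    into both nibbles so that 0xF maps back to 0xFF, not 0xF0."""
    def expand(v):
        return (v << 4) | v
    return (expand((pixel >> 12) & 0xF), expand((pixel >> 8) & 0xF),
            expand((pixel >> 4) & 0xF), expand(pixel & 0xF))
```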
What do you think?
The continuation of last month’s Part 1 is long overdue, but here it finally is. In Part 1, I described the process for creating the many subsprites that go into creating the full character sprites. In this blog post, I will focus on how the multiple subsprites are actually drawn to the screen to correctly form the complete character.
Two things need to happen for these subsprites to be drawn correctly. 1) The program needs to know where to bind the subsprite from, and 2) it needs to know where to blit it on the screen.
All these subsprites are pretty small in size, and storing them all as individual images would not only result in a mess of thousands of image files, it would also present a problem for the game engine. The engine uses OpenGL for drawing stuff to the screen, and OpenGL prefers its textures (the images to draw) to be power-of-2 in size, meaning that the height and width of the image must be 16, 32, 64, 128, 256, 512 or 1024 pixels. An OpenGL extension exists that allows textures of any size, but not all video cards support it, and there’s really no reason for using non-power-of-2 textures. Why? Well, because even though the subsprites are all very small, they can be packed together in a so-called texture atlas, and this texture atlas can just be power-of-2 sized:
This means that the extension for non-power-of-2 textures isn’t needed, but there’s also another benefit of packing subsprites together in one large image file. Whenever OpenGL has to bind (prepare for drawing to the screen) a texture, it takes time, as the texture data must be moved to the GPU. If all the subsprites were in separate image files, I’d have to constantly bind new textures, but when related subsprites are packed together on a texture atlas, I only have to bind this texture atlas once and can then just tell OpenGL which part of the texture atlas to blit to the screen.
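Blitting a region of an atlas boils down to converting the subsprite’s pixel rectangle into the normalized texture coordinates OpenGL expects. A small Python sketch (the 512×512 atlas size is just an example):

```python
ATLAS_SIZE = 512  # illustrative power-of-2 atlas, 512x512 pixels

def to_uv(x, y, width, height, atlas_size=ATLAS_SIZE):
    """Return (u0, v0, u1, v1) texture coordinates for the subsprite
    occupying the pixel rectangle at (x, y) on the atlas.
    OpenGL texture coordinates run from 0.0 to 1.0."""
    return (x / atlas_size, y / atlas_size,
            (x + width) / atlas_size, (y + height) / atlas_size)
```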
Telling the program where to draw from
And that’s where we get to the first requirement: Telling the program on which texture atlas each subsprite can be found and what the individual subsprite’s coordinates are on this texture atlas. Thankfully, I don’t have to manually create a texture atlas and paste the subsprites onto it. A lot of free programs exist for doing this automatically. You just set the size of the texture atlas and select which files should be packed in it, and the program does it all for you – packing the subsprites together in the most efficient manner. Furthermore, these programs can then output the coordinates of each subsprite on the texture atlas in various formats. It’s then just a matter of getting this coordinate information into the program and creating a procedure for grabbing the correct information for each subsprite.
In code terms, I’ve created a procedure that gets called with the subsprite’s name and then returns the number of the texture atlas and the coordinates where the subsprite can be found. Here’s a snippet of the Pascal code that returns the subsprite positions of all the sprites that go into making the ten frames of a male character walking south:
PROCEDURE GetWalk1SpritePositionValues(SpriteName : String);
BEGIN
  IF SpriteName = 'walk1-1malehair-1' THEN
    SetSpritePositionValues(walkmale1, 334, 257, 12, 11)
  ELSE IF SpriteName = 'walk1-1malehair-2' THEN
    SetSpritePositionValues(walkmale1, 442, 257, 12, 11)
  ELSE IF SpriteName = 'walk1-1malehat-1' THEN
    SetSpritePositionValues(walkmale1, 394, 257, 12, 11)
  ELSE IF SpriteName = 'walk1-1malehat-2' THEN
    SetSpritePositionValues(walkmale1, 409, 209, 12, 16)
  ELSE IF SpriteName = 'walk1-1malehat-3' THEN
    SetSpritePositionValues(walkmale1, 406, 232, 12, 14)
  ELSE IF SpriteName = 'walk1-1malehat-4' THEN
    SetSpritePositionValues(walkmale1, 337, 209, 12, 16)
  ELSE IF SpriteName = 'walk1-1malehat-5' THEN
    SetSpritePositionValues(walkmale1, 167, 392, 22, 17)
  ELSE IF SpriteName = 'walk1-1malehat-6' THEN
    SetSpritePositionValues(walkmale1, 345, 162, 18, 15)
  ELSE IF SpriteName = 'walk1-1malehead-1' THEN
    SetSpritePositionValues(walkmale1, 497, 162, 12, 17)
  ELSE IF SpriteName = 'walk1-1maleleftarm-1' THEN
    SetSpritePositionValues(walkmale1, 327, 270, 8, 10)
  ELSE IF SpriteName = 'walk1-1maleleftarm-2' THEN
    SetSpritePositionValues(walkmale1, 232, 488, 9, 24)
  ELSE IF SpriteName = 'walk1-1maleleftarm-3' THEN
    SetSpritePositionValues(walkmale1, 286, 232, 9, 24)
  *snip*
  ELSE IF SpriteName = 'walk1-10malerightfoot' THEN
    SetSpritePositionValues(walkmale1, 351, 270, 7, 10)
  ELSE IF SpriteName = 'walk1-10malerighthand' THEN
    SetSpritePositionValues(walkmale1, 202, 245, 9, 36)
  ELSE IF SpriteName = 'walk1-10malerightshoe-1' THEN
    SetSpritePositionValues(walkmale1, 202, 499, 9, 13)
  ELSE IF SpriteName = 'walk1-10malerightshoe-2' THEN
    SetSpritePositionValues(walkmale1, 494, 179, 11, 27)
  ELSE IF SpriteName = 'walk1-10malerightshoe-3' THEN
    SetSpritePositionValues(walkmale1, 259, 484, 9, 13)
  ELSE IF SpriteName = 'walk1-10maletorso-1' THEN
    SetSpritePositionValues(walkmale1, 410, 0, 35, 37)
  ELSE IF SpriteName = 'walk1-10maletorso-2' THEN
    SetSpritePositionValues(walkmale1, 48, 409, 35, 36)
  ELSE IF SpriteName = 'walk1-10maletorso-3' THEN
    SetSpritePositionValues(walkmale1, 460, 59, 35, 33)
  ELSE IF SpriteName = 'walk1-10maletorso-4' THEN
    SetSpritePositionValues(walkmale1, 425, 59, 35, 33)
  ELSE IF SpriteName = 'walk1-10maletorso-5' THEN
    SetSpritePositionValues(walkmale1, 390, 59, 35, 33);
END;
The SetSpritePositionValues procedure that gets called is just a simple procedure for passing on the values to a set of global variables that I then use when calling the OpenGL blitting procedure:
PROCEDURE SetSpritePositionValues(TextureMap, X, Y, Width, Height : Word);
BEGIN
  GetSpriteMap := TextureReference[TextureMap];
  GetSpriteXPosition := X;
  GetSpriteYPosition := Y;
  GetSpriteWidth := Width;
  GetSpriteHeight := Height;
END;
So the first value that gets passed is the number of the texture atlas (in this case, it is a constant value called ‘walkmale1’), then the X and Y coordinates, and then the width and height.
All these procedures fill up a lot of lines of code, but they are quickly made since it’s just a matter of formatting the coordinate information given from the image packing program. This is the quick and easy part. The program now knows where to draw from. It’s the second requirement that takes a lot of time.
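As an aside, a table-driven alternative to hand-formatted IF/ELSE chains would be to parse the packer’s coordinate output directly into a lookup table. A Python sketch, using a made-up plain-text listing format purely for illustration (real packers emit XML, JSON or similar):

```python
# Sample packer output in a hypothetical "name x y width height" format,
# using coordinates from the snippet above.
SAMPLE_OUTPUT = """\
walk1-1malehair-1 334 257 12 11
walk1-1malehair-2 442 257 12 11
walk1-1malehat-1 394 257 12 11
"""

def parse_atlas_listing(text, atlas_name="walkmale1"):
    """Build a name -> (atlas, x, y, width, height) lookup table."""
    positions = {}
    for line in text.splitlines():
        name, x, y, w, h = line.split()
        positions[name] = (atlas_name, int(x), int(y), int(w), int(h))
    return positions
```

Looking a subsprite up is then a single dictionary access instead of a walk through a chain of string comparisons.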
Telling the program where to draw to
For everything related to drawing and placing of characters, it all starts with the character template sprite. This is just the basic character frame with no clothes on:
This sprite is actually never used in the game, since the character sprites are all drawn by using the subsprites to form a complete character sprite. But the height and width of this template sprite is what’s used for drawing the subsprites to the screen. The upper left corner of this template sprite has the coordinates (0,0). When I for example need to figure out what the coordinates of the character’s pants are in relation to this, I just go into GIMP, paste the pants onto this template sprite, move them to the upper left corner to ‘reset’ the coordinates, and then drag the pants to the correct position on the sprite and read how far the pants have been dragged along the X and Y axis:
This gives me the pants’ relative coordinates (relative to the template sprite’s (0,0) coordinates): (6,43). When the character is drawn to the screen, I already know what the character’s overall coordinates are. Let’s say the character’s coordinates are (201,750). To figure out where the pants subsprite should be drawn, I just add the pants’ relative coordinates to the character’s overall coordinates:
(201,750) + (6,43) = (207,793)
So I just tell OpenGL to blit from the coordinates of the texture atlas previously found and to the screen coordinates (207,793). This will draw the pants in the correct spot. It’s quick and easy when everything’s programmed correctly. What takes time is manually determining each subsprite’s coordinates relative to the template sprite. That takes a lot of patience and use of GIMP. So, again, I get to listen to a lot of podcasts while doing this stuff!
There’s still one thing left to do before the character sprite is ready to be drawn. As it consists of multiple subsprites, these all have to be drawn in the correct order or you risk having a character with his legs outside his pants, or his head over his hat. How this is managed will be the subject of Part 3.
When I started work on Hostile Takeover, I had only admired existing isometric games and never thought about programming one myself. So when I decided that isometric would be the graphical style I’d go for, I had to do a lot of reading up on the subject and wrap my head around this way of “faking” a 3D presentation with 2D graphics. I thought I might as well share some of the basics I’ve come to understand – maybe you’re gearing up for your own isometric game, but if not, I hope you still find this somewhat interesting.
There were two main issues I had to figure out to get the most basic stuff up and running: drawing the tile sprites in an isometric grid and calculating which tile the mouse cursor is over. As with pretty much everything in programming, the answers to both issues were some surprisingly simple mathematical formulas.
Drawing an isometric grid
An isometric tile is basically a square tile rotated 45 degrees around the z-axis and tilted around the x-axis so it’s twice as wide as it is high:
Drawing a grid made up of non-isometric tiles is pretty easy, as all the tiles on the same row have the same y-value, but with an isometric grid all tiles are offset half a tile both vertically and horizontally:
Image borrowed from this article
This is where basic math came to the rescue. The algorithm I use for drawing tiles is:
FOR Y := 1 TO SizeY DO
BEGIN
  FOR X := 1 TO SizeX DO
  BEGIN
    DrawX := ((X - Y) * 38) + 400;
    DrawY := (X + Y) * 19;
  END;
END;
My tiles are 76 pixels wide and 38 pixels high, which is why I multiply by 38 (half of 76) and 19 (half of 38). I add 400 to the x-value to keep the left half of the grid from being drawn at negative x-coordinates (i.e. beyond the left edge of my computer screen). This gives me the x and y coordinates for blitting the tile sprite to the screen. Simple!
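The same formula, transcribed to Python for a quick sanity check:

```python
def tile_to_screen(x, y, offset=400):
    """Screen coordinates for isometric tile (x, y), using 76x38 tiles
    and the same +400 horizontal offset as above."""
    draw_x = (x - y) * 38 + offset
    draw_y = (x + y) * 19
    return draw_x, draw_y
```

Note how tiles on the same diagonal (x + y constant) share the same draw_y, which is exactly the half-tile vertical stagger of an isometric grid.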
Calculating tile mouse over
Figuring out which tile the mouse is over (so that the game will know which tile the player character should move to when the player clicks the mouse button!) was slightly harder, but, again, some simple math came to the rescue.
I already knew the screen coordinates of the mouse cursor, since I used those for actually drawing the mouse cursor. The screen coordinate is just which pixel on the screen the mouse cursor is currently at. My screen resolution is 1280×800, so the mouse cursor’s x-coordinate will be 1 – 1280 and the y-coordinate 1 – 800. I also already know where my tiles are on the screen, since I just figured out the algorithm for drawing them. Combining this, I can make an algorithm for translating the mouse cursor’s screen coordinates to an isometric tile coordinate:
ScreenX := ScreenX - 438;
TileX := Trunc((ScreenY / 38) + (ScreenX / 76));
TileY := Trunc((ScreenY / 38) - (ScreenX / 76));
So if I know that my mouse is at, say, [410, 235] on the screen, calculating which tile the mouse is over is as easy as:
TileX = (235 / 38) + (-28 / 76) = 5 (rounded down)
TileY = (235 / 38) - (-28 / 76) = 6 (rounded down)
You’ll notice that I start with subtracting 438 from the mouse cursor’s x-coordinate. This is because I added 400 to the x-coordinate when drawing the tiles, so I have to subtract this again (as well as half a tile width) before calculating which tile the mouse is over.
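The same translation in Python, checked against the worked example above (`math.trunc` mirrors Pascal’s Trunc, which rounds toward zero):

```python
import math

def screen_to_tile(screen_x, screen_y):
    """Translate a screen pixel position to an isometric tile coordinate,
    using the same 76x38 tiles and -438 correction as above."""
    sx = screen_x - 438  # undo the +400 draw offset plus half a tile width
    tile_x = math.trunc(screen_y / 38 + sx / 76)
    tile_y = math.trunc(screen_y / 38 - sx / 76)
    return tile_x, tile_y
```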
Now, this is all well and good for a map that isn’t larger than the computer monitor’s resolution. What do you do if you want a map that’s larger than that and want to allow the player to scroll the map? Easy, you just add an offset x and y value to the algorithms:
FOR Y := 1 TO SizeY DO
BEGIN
  FOR X := 1 TO SizeX DO
  BEGIN
    DrawX := (((X - Y) * 38) + 400) - OffsetX;
    DrawY := ((X + Y) * 19) - OffsetY;
  END;
END;
ScreenX := (ScreenX - 438) + OffsetX;
ScreenY := ScreenY + OffsetY;
TileX := Trunc((ScreenY / 38) + (ScreenX / 76));
TileY := Trunc((ScreenY / 38) - (ScreenX / 76));
When the player scrolls the map horizontally, just increase or decrease the OffsetX value accordingly, and do the same with the OffsetY value when the player scrolls vertically.
And there you have it. The basics of drawing and interacting with an isometric map. I had to play around with the values a bit, though, so when I for example add 400 to the DrawX value, that number is just something I arrived at with trial and error. It would differ for different monitor resolutions and depending on where you want the default map position to be at (i.e. before the player does any scrolling).
If you want to read more about isometric programming theory, there’s a bunch of enlightening articles over at GameDev.Net.
Oh, and in case you’re wondering, the code snippets are Pascal. I program in FreePascal with SDL and OpenGL for cross-platform development.