Scheduling is an essential part of game development. It is how the game decides which subsystems and tasks to run, when, and in what order. The three most common game subsystems are physics, graphics rendering, and input handling. These three are what define a video game, but they do not have to be everything: many games also have an audio subsystem and a networking subsystem. Some of these, like the physics engine, must run at least once per game loop. Some have specific performance requirements to avoid problems, like the input handling and networking systems. Some are less important but still need to be managed well to avoid annoyances, like the audio system. Managing all of these is called scheduling.
Subsystems
The most important subsystem of a game is the physics engine. It handles things like movement of game entities, state changes, and pretty much everything that happens in the game world. It is sometimes split into multiple pieces to make management easier, but generally all parts of the physics engine should run at the same speed, and there is usually some order-of-operations requirement for a good gameplay experience. The physics engine does not have to run at a perfectly consistent rate, but it should never be allowed to get more than a few milliseconds behind or ahead, otherwise the game will run too fast or lag.
The second most important subsystem is input handling. It is often grouped under event handling, which covers input as well as intersystem communication within the game. The important thing with input handling is that it happens often enough to keep the game highly responsive. When a player clicks the mouse or presses a key or button, the effect on the physics engine should appear to be immediate. Players can perceive very small amounts of input delay, and laggy input can easily make a great game unplayable. The input handling system never needs to run more frequently than the physics engine, but it should generally run at least as often as the renderer.
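As a rough illustration (not a requirement of any particular library), an input handler built on Pygame, which this post leans on later for its Clock object, might simply drain the event queue every loop and record what the physics update should react to. The pressed dictionary and the key binding here are purely hypothetical:

import pygame

# Hypothetical flag dictionary read (and cleared) by the physics update.
pressed = {"jump": False, "quit": False}

def input_handler():
    # Drain every event that has arrived since the last call, so no input
    # waits in the queue for more than one loop iteration.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pressed["quit"] = True
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            pressed["jump"] = True

The point is simply that the handler itself does very little work; it records intent and lets the physics engine act on it.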
The third most important subsystem is rendering, often called the graphics engine. The graphics engine shows the player what is happening within the game world. The only reason it is less important than physics and input is that its performance requirements are much more flexible. A game that renders slowly can still be playable if the physics engine and the input handling are running at an acceptable rate. If the physics engine is too slow, though, the game lags, and if the input handling is slow, the player loses the ability to respond quickly. With a technique called frame skipping, it is acceptable to skip rendering to let the physics engine and input handling catch up if they get behind. There are two good rules of thumb for choosing a default frame rate. The first is to avoid going above roughly 60 to 75 frames per second. Above about 50 frames per second, most players cannot perceive any difference, and most monitors have a refresh rate of 60Hz or 75Hz, which means that rendering faster than the refresh rate literally cannot show up on screen. The other rule of thumb is to avoid frame rates lower than around 30 frames per second; if the game drops much below 20 frames per second, players will assume their computer simply cannot handle it.
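As a minimal sketch of capping the frame rate, assuming Pygame and the same input_handler, update, and render functions used later in this post, the naive approach is to let a Clock limit the whole loop. The downside, which the rest of this post addresses, is that this ties the physics rate directly to the display rate:

import pygame

pygame.init()
clock = pygame.time.Clock()
run = True

while run:
    input_handler()
    update()
    render()
    clock.tick(60)  # Waits just long enough to cap the loop at 60 iterations per second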
Other subsystems depend on a lot of factors. Networking generally needs to be very responsive to keep things synchronized. If the audio subsystem does not run frequently enough, sound can fall out of sync with the game, but it can always skip playing less important sounds if necessary. If you are doing something like dynamic terrain generation in the background, you will want it to run fast enough that nothing important reaches an ungenerated area, but not so fast that the physics engine starts to lag. In general, other subsystems are not as important as the top three, but when any of the top three rely on them, it may be necessary to give them higher priority.
Game Loop
The game loop is where most of the scheduling occurs. Unless you are multithreading to put graphics or networking in another thread, the game loop decides when to call all of the various subsystems. The order in which subsystems are called does matter, though it typically matters most when you have more than the main three. Input handling should happen directly before the physics, so that the physics is working with the most recent input; this is important for minimizing input delay. The graphics engine should generally render directly after the physics engine runs, to display the new game state to the user as soon as possible. If the game is networked, networking may need to happen either right before or right after the input handling or the physics, depending on the details of the protocol.
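Put together, one possible ordering for a networked game might look like the sketch below; network_receive and network_send are hypothetical stand-ins for whatever the protocol actually requires:

while run:
    network_receive()  # Pull in remote state before acting on it
    input_handler()    # Input directly before physics minimizes input delay
    update()           # Physics works with the freshest input and network data
    network_send()     # Push the new state out as soon as it exists
    render()           # Show the new state to the player right away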
The game loop is also responsible for timing. That may mean making sure the physics engine stays above a minimum rate, or it may mean keeping it within a certain range. It should make sure that input is handled regularly and that rendering happens without getting in the way of anything more important. It also needs to run the other subsystems regularly. The most important thing is managing priorities. In some ways, a game loop is like the scheduling system in an operating system, though with far less complexity.
Scheduling Algorithms
The scheduling algorithm is the strategy used by the game loop to keep everything running smoothly. There are two popular scheduling algorithms that can be adjusted to fit most situations.
The first popular scheduling algorithm is time scaling. Long ago, game loops did not do any scheduling at all; they relied on the speed of the processor to maintain the appropriate game speed. This eventually led to a problem. When Intel's 80386 processor came out, a lot of people noticed that their games ran far too fast. The games had been designed on much slower 80286 or older processors, which were slow enough that limiting speed had never had any value. On the 80386, though, these games ran so fast that they were unplayable. The initial fix was that most computer manufacturers added a "turbo" button to their 80386 and 80486 machines that reduced the clock rate to make older games playable. Of course, new games would have the same problem if developers did nothing about it, and relying on the continued existence of the turbo button as even faster computers appeared was clearly not a wise option. One solution to this problem was to scale the speed of the physics engine by the time passing.
The time scaling scheduling algorithm is simple. Any time any state change occurs in the game world, the magnitude of the change is scaled by how much time has passed since the previous game loop. If a game entity was set to move at 10 pixels per second, and one tenth of a second had passed, the entity would move 1 pixel. This provides a major advantage in two ways. If the game is running extremely fast, the game world still runs at the expected rate, just with finer precision. If the game is running very slow though, it still seems to run at the correct rate, and the slowness is only apparent in rendering and input delay. In other words, for any given computer, the game physics appear to run at the same rate, and only the visual representation and responsiveness suffer. There are also several costs to this scheduling strategy. The first is that it requires many physics variables to be floating point variables, to deal with the very fine granularity of the game running on a very fast system. This can degrade performance on slower systems. The second is that floating point math is inherently inaccurate. While these inaccuracies generally only occur on an extremely small scale, which will not be noticed by a player, they add up enough to make it quite difficult to keep networked games synchronized. The problem is that floating point error adds up faster on fast machines than slow machines, which means that two networked machines doing the same math will probably not end up with the same cumulative results. This problem can be mitigated, but another scheduling strategy exists that makes it unnecessary.
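A minimal sketch of time scaling, using the 10 pixels per second example above and assuming a render function like the ones later in this post, looks something like this:

import time

PIXELS_PER_SECOND = 10.0
x = 0.0                 # Positions must be floating point under time scaling
last_t = time.time()
run = True

while run:
    now = time.time()
    dt = now - last_t   # Seconds elapsed since the previous loop
    last_t = now
    x += PIXELS_PER_SECOND * dt  # A tenth of a second elapsed moves the entity 1 pixel
    render()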
The fixed rate scheduling algorithm runs the physics engine at a fixed rate. The physics engine does not need to do any time math, because it can assume that each time it is called, the same amount of time has passed. The game loop typically handles this by keeping track of how much time a given loop has taken and then sleeping for the remaining time at the end. One problem with fixed rate scheduling is that if the operations in the loop take longer than the allotted time, the game will fall behind and start to lag, which makes it not very resilient on slower computers. It can also make collision detection more difficult, as game entities generally move in larger fixed steps than they would under time scaling. The problem arises when a game entity is moving fast enough that it crosses the full width of another entity in a single step, effectively passing right through it. The benefits of fixed rate scheduling include the fact that it does not waste CPU cycles unnecessarily, and that as long as both ends of a networked game are on the same physics frame, their state should be identical.
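The bare-bones version of that idea, using the same subsystem function names as the example below, just measures how long the subsystems took and sleeps off the rest of the 20ms step:

import time
from time import sleep

STEP = 0.020  # 20ms per physics step, i.e. 50 updates per second
run = True

while run:
    start = time.time()
    input_handler()
    update()    # Assumes exactly STEP seconds pass per call; no time math inside
    render()
    elapsed = time.time() - start
    sleep(max(0.0, STEP - elapsed))  # Sleep away whatever is left of the step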
In modern games, fixed rate scheduling is generally preferred, because so many games are networked, and it is the best algorithm for networked games. Its downsides are serious, though. It does not scale down well, which can limit the game to faster systems. The length of each interval cannot be guaranteed, because no sleep function promises to sleep for exactly as long as requested. These problems mean that the game can fall behind or get ahead at the whim of the operating system, and that slow systems are stuck with chronically laggy games (which does not work at all for networked games). Fixed rate scheduling comes with more serious problems than time scaled scheduling, but it turns out they are also easier to deal with.
The typical fixed rate game loop does more than just run all of the subsystems and then sleep for the remaining time. It also prioritizes subsystems, ensuring that the most important ones always get as much time as they need, at the cost of the less important ones. In addition, it keeps track of how much time passes during sleeps, so that it can compensate when a sleep takes longer than expected. A fixed rate game loop in Python might look like this:
import time
from time import sleep

t = time.time()      # Time at the start of the previous loop
pending_time = 0.0   # Elapsed time that has not yet been simulated
run = True

while run:
    new_t = time.time()
    pending_time = pending_time + new_t - t  # Add time elapsed
    t = new_t
    input_handler()
    while pending_time > 0.020:  # 20ms is 50 times per second
        update()                 # A common function name for physics
        pending_time = pending_time - 0.020
    render()                     # A common function name for graphics
    sleep(max(0.0, 0.020 - (time.time() - t) - pending_time))
Here are some things to note:
We only get the current time once per loop. There is no reason to do it more often, and in fact, if we do, we may end up frequently losing small amounts of time. An important note is that time.time() is not a high precision timer, so it should not be used for real games. Pygame has a Clock object that is much better suited to this purpose, and sleep() is likewise too low resolution for games, but Pygame's Clock can handle that side as well; a version of this loop using the Clock is sketched after these notes.
Instead of keeping track of only the time elapsed since the last loop, we are keeping track of all time elapsed, and then we are subtracting the expected time per loop from that for each time we run the physics engine. This ensures that if the OS sleeps for too long, we eventually get that time back when it adds up to enough to matter.
The physics engine runs in a while loop. This lets us catch up if we get behind: if enough overlong sleeps add up to a full loop's worth of time, we simply run the physics engine an extra time during the next loop, which catches us right up.
The render and input handling are outside of the while loop. This essentially skips frames if the game is behind, and since rendering tends to take a long time, skipping a frame now and then can allow the game to run smoothly even on slower systems. If the game has to skip a lot of frames, this may result in significant input delay though, so it may be better to put the input handler inside the loop as well, if you expect a lot of customers to be running your game on computers that can only barely handle it.
The while loop subtracts 20ms from pending_time each time it runs the physics engine. This is the second part of what allows us to keep track of smaller amounts of leftover time.
The sleep call only sleeps for 20ms minus the time elapsed since we began the current game loop and minus the pending time. The first term is there because we only want to sleep for the remaining time; it accounts for the part of the 20ms already spent running the various subsystems. The second term keeps the time intervals as consistent as possible: ideally, if there is any time pending, we sleep only long enough that exactly 20ms have accumulated. In practice this will rarely work out perfectly, but it helps avoid skipped frames by reducing the chance that pending_time ever accumulates enough to run the physics more than once in a loop. The max() guard simply prevents asking sleep() for a negative duration when the loop has already overrun its 20ms.
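Since the notes above point to Pygame's Clock as the better timer, here is a rough sketch (not drop-in code) of the same accumulator loop built on it; clock.tick(50) both caps the loop at 50 iterations per second and returns the milliseconds that actually passed:

import pygame

pygame.init()
clock = pygame.time.Clock()
pending_ms = 0
run = True

while run:
    pending_ms += clock.tick(50)  # Milliseconds since the previous tick
    input_handler()
    while pending_ms >= 20:       # One 20ms physics step per 20ms accumulated
        update()
        pending_ms -= 20
    render()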
This example is fairly simple, and it does not include any extra subsystems. Most subsystems will need to be placed at the same priority level as the physics engine or the renderer, but lower priority subsystems may need a priority level of their own. One way to do this is to check how much of the 20ms (or whatever period you give your game loop) remains after the first and second priority subsystems have run, and if there is not enough time left, skip the low priority ones for that pass. Even more complex priority systems can be built, ensuring that the top priority subsystems run every loop and then rotating through the lower priority subsystems, running or skipping a few each loop depending on timing; a rough sketch of that idea follows. Keep in mind, though, that the game loop is the top level bottleneck for the game, and a complex scheduling system can end up costing more time than it saves. You may need one if you have many subsystems and each needs a guaranteed level of reliability, but if you do not actually need it, don't build it. The scheduling system's job is to make sure the game runs smoothly and responsively, and if the scheduling system itself becomes too heavy, it will get in its own way.
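One way such a rotation might look, with purely hypothetical low priority subsystems and a rough reserve heuristic, is sketched below; run_low_priority would be called once per loop, after the physics and the renderer have run:

import time

# Hypothetical low priority subsystems; a real game would plug in its own.
def audio_update(): pass
def background_terrain(): pass

low_priority = [audio_update, background_terrain]
next_low = 0

def run_low_priority(loop_start, budget=0.020, reserve=0.005):
    # Run at most one low priority subsystem per loop, and only if at least
    # `reserve` seconds of the budget look likely to remain afterwards.
    global next_low
    if time.time() - loop_start < budget - reserve:
        low_priority[next_low]()
        next_low = (next_low + 1) % len(low_priority)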
Good scheduling is an essential element in the development of a good game. The scheduler budgets time, ideally making sure that everything gets as much time as it needs, but rationing time effectively when there is not enough to go around. Choosing or developing a good scheduling algorithm for your game can be just as important as any other part of development, because it really does not matter how awesome your game is if it is unplayable. In addition, a good scheduler can allow your game to run smoothly on a much wider range of systems, dramatically increasing the size of your market.