Multi-threading is a complex enough subject to warrant an entire course on its own, so we are not going to go into extensive detail here. Even so, multi-threading has some very valuable benefits in game development.
Multi-threading allows different parts of a program to run at the same time. In modern multi-core computer systems, this can mean managing the game physics on one core, rendering graphics on another core, and handling audio on a third. It could also mean updating one game unit on one core and another on a different core, though doing this is significantly more complex.
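To make this concrete, here is a minimal sketch in C++ of two different tasks running on separate threads. The functions physicsTask and audioTask are hypothetical stand-ins for real engine systems, not code from any actual engine.

```cpp
// A minimal sketch of two different tasks running at the same time.
// physicsTask and audioTask are illustrative placeholders.
#include <chrono>
#include <iostream>
#include <thread>

void physicsTask() {
    // Stand-in for a physics step.
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    std::cout << "physics step done\n";
}

void audioTask() {
    // Stand-in for mixing an audio buffer.
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    std::cout << "audio buffer mixed\n";
}

int main() {
    std::thread physics(physicsTask);  // different tasks...
    std::thread audio(audioTask);      // ...running at the same time
    physics.join();
    audio.join();
    return 0;
}
```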
There are two concepts that are essential to understand when discussing any kind of parallel programming. The first is concurrency: the program is doing different tasks at the same time. The second is parallelism: the program is doing the same thing to different sets of data at the same time.

An example of concurrency in a video game is rendering graphics in one thread while processing physics in another. The two tasks are different, but they happen at the same time. This can be beneficial even on systems without multiple processor cores, because graphics rendering spends a lot of time waiting for the video card to be ready for more data. During that wait, other processing, like game physics, can take place without slowing down the rendering.

Parallelism requires hardware designed for multi-threading, like multi-core processors. With parallelism, a set of data is broken up into smaller sets, and those sets are worked on at the same time, using the same algorithm. For example, if my game has 100 game entities that need to be updated, instead of updating them one at a time, I could break the list into two lists of 50 each and process one list in each of two threads, at the same time. This could reduce the time required to update all of the entities by up to 50% (in practice the benefit is somewhat smaller, due to the overhead of multi-threading). A sketch of this is shown below.
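Here is a minimal sketch in C++ of that kind of data parallelism, splitting an entity list in half and updating each half on its own thread. The Entity type and updateRange function are hypothetical stand-ins, not part of any real engine.

```cpp
// A minimal sketch of data parallelism: two threads each update half the entities.
#include <cstddef>
#include <thread>
#include <vector>

struct Entity {
    float x = 0.0f;
    float velocity = 1.0f;
    void update(float dt) { x += velocity * dt; }  // simple per-entity work
};

// Update the half-open range [begin, end) of the entity list.
void updateRange(std::vector<Entity>& entities, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        entities[i].update(dt);
    }
}

int main() {
    std::vector<Entity> entities(100);
    const float dt = 1.0f / 60.0f;
    const std::size_t mid = entities.size() / 2;

    // One half runs on a worker thread, the other half on this thread.
    std::thread worker([&] { updateRange(entities, 0, mid, dt); });
    updateRange(entities, mid, entities.size(), dt);

    worker.join();  // wait for the worker before using the results
    return 0;
}
```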
In games, we generally stick to concurrency, because of the problems that come with sharing data between threads. When multi-threading, we have to share memory between threads to allow communication. In a game, we might be sharing animation frame data between the rendering thread and the physics thread. When an entity updates, the physics engine might also update that entity's animation frame. For example, a character in the game might be walking, and when the character steps forward, the physics engine sets the animation frame to show the step being taken. The rendering thread also needs that data when it actually draws the animation frame on the screen. So what happens if the rendering thread starts to render the animation frame, but halfway through, the physics thread changes it? The result is that the top half of the frame shows one image and the bottom half another, and we can probably see a horizontal seam in the middle where the change happened (this is called tearing, though it does not generally happen in quite this way).

You can also run into problems where two threads load something from memory at about the same time, both change it, and then both write it back. This results in what is called a "race condition," because whoever writes second wins. An example is a variable that is incremented each game loop by two different threads. If the variable is an 8, and both threads load it (both getting 8), both increment it (both now have 9), and both write it back, the result is 9, even though it should be 10, because it was incremented twice. These bugs can be incredibly hard to find, and there is no easy way to prevent them short of not sharing any data between threads, which defeats the point. The sketch below shows this kind of lost update.
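Here is a minimal sketch in C++ of the lost-update race described above. Two threads each increment an unsynchronized shared counter; this is deliberately a data race, so the final total usually comes out lower than expected and varies from run to run.

```cpp
// A minimal sketch of a race condition: the increment is a read, an add, and a
// write, and the two threads' operations can interleave and overwrite each other.
// Note: an unsynchronized shared write like this is a data race (undefined
// behavior); it is shown only to illustrate the problem.
#include <iostream>
#include <thread>

int counter = 0;  // shared between threads, deliberately unsynchronized

void incrementMany() {
    for (int i = 0; i < 100000; ++i) {
        ++counter;  // load, add one, store: not atomic
    }
}

int main() {
    std::thread a(incrementMany);
    std::thread b(incrementMany);
    a.join();
    b.join();

    // Expected 200000, but the result is usually smaller and changes every run.
    std::cout << "counter = " << counter << '\n';
    return 0;
}
```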
There are a lot of complications with multi-threading, and they all come down to making sure that memory accesses are carefully managed to avoid unexpected problems. This is usually done by "locking" memory. A function that needs to modify some shared memory will lock it, read it, modify it, write it, and then unlock it; while it is locked, nothing else can use it. These locks take time to acquire and release, though, which reduces the efficiency of the program and can even eliminate the benefits of multi-threading if too many have to be used. The best strategy is to avoid the need for locks in the first place and to minimize their use when they are absolutely necessary. Even with locks, you can still run into issues. A sketch that uses a lock to fix the race above is shown below.
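Here is the same counter sketch with a lock added, using std::mutex and std::lock_guard so only one thread can modify the counter at a time.

```cpp
// A minimal sketch of fixing the race with a lock: each increment happens
// while the mutex is held, so the read-modify-write can no longer interleave.
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counterMutex;  // guards all access to counter

void incrementMany() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counterMutex);  // lock...
        ++counter;                                       // ...modify...
    }                                                    // ...unlock when lock goes out of scope
}

int main() {
    std::thread a(incrementMany);
    std::thread b(incrementMany);
    a.join();
    b.join();
    std::cout << "counter = " << counter << '\n';  // now reliably 200000
    return 0;
}
```

The trade-off is visible here too: every increment now pays the cost of acquiring and releasing the lock, which is exactly the overhead described above.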
This is a very complicated topic, and more information can be found on Wikipedia and other resources on the internet. The takeaway is that multi-threading can be extremely valuable in optimizing video game performance, but it requires a great deal of care and a thorough understanding to avoid running into very difficult problems.