Cheating at Online Video Games and What It Can Teach Us About AppSec (Part 1)
For as long as video games have existed, people have been competing to see who is the best. And wherever there is competition, someone will find a way to cheat. This is especially true with modern video games. Esports are big business now, and even regular gaming is firmly within the realm of pop culture. But how does cheating at online video games work, and what can we learn from this as an appsec (application security) community?
Looking at how people cheat can be very educational; it shows you the limits of what is possible and forces you to consider the overall design of a system. Over the next series of blog posts, let’s take a look at some classic examples of video game cheats and explore the computer security lessons these cheats reveal.
REVEALING HIDDEN INFORMATION
Many games are games of partial information, which means that the whole state of the game is not revealed to each player; instead, each player has to deal with a limited view of the world. A great example of this is the real-time strategy (RTS) game StarCraft II.
In a game like chess, both players look at the same board. No information is withheld from either player (it is a game of perfect information in game theory parlance). In fact, it’s not unheard of for chess players to stand up and walk around a bit to see the board from new perspectives. This isn’t considered cheating because no new information is actually gained. But in StarCraft II, a player cannot see past the fog of war. This asymmetry plays a huge strategic role that leads to deep and fascinating information warfare, where players try to manage not only the in-game resources but also the information available to their opponent about their own strategies.
In fact, competitive IRL StarCraft games are typically played in soundproof booths to prevent the live audience from giving away information to the players.
If one player could see the entire map, they would gain a significant competitive advantage, effectively shutting down any opposing strategy before it could take shape. Playing against such an opponent would be wildly unfair. It should therefore be no surprise that StarCraft II has had many illicit map-hack programs created for it, which can reveal the full state of the game to an underhanded player. But how does that work?
HOW THE CHEAT WORKS
This is where application security starts to come into the picture. You see, the way StarCraft II is designed, a player's computer always has full knowledge of the entire game state. The game's interface doesn't show it, but the full view of the map and the opponent's unit locations are right there in your computer's memory. So to cheat, you simply need to find out where that information is stored (in memory and/or on disk) and display it to the user.
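To make the flaw concrete, here is a minimal toy sketch (the names and data layout are illustrative, not StarCraft II's actual internals): the client holds the full game state, and only the rendering layer applies the fog of war. A "map hack" just reads the same in-memory state and skips the filter.

```python
# Toy model: the client holds the FULL state; fog of war is only a display filter.
FOG_RADIUS = 2

def visible_units(full_state, my_position):
    """What the honest UI draws: only units within fog-of-war range."""
    x0, y0 = my_position
    return [u for u in full_state
            if abs(u["x"] - x0) + abs(u["y"] - y0) <= FOG_RADIUS]

def map_hack(full_state):
    """The 'cheat' is trivial: read the same in-memory state, skip the filter."""
    return full_state

state = [
    {"name": "my_worker", "x": 0, "y": 0},
    {"name": "enemy_army", "x": 9, "y": 9},  # hidden by fog, but still in memory
]

print(len(visible_units(state, (0, 0))))  # 1 -- fog hides the enemy
print(len(map_hack(state)))               # 2 -- the cheat sees everything
```

A real map hack does the same thing one layer down, by locating this data in the game process's memory rather than calling a convenient function.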
The flaw in this design isn’t hard to spot. It will always be possible to map hack in StarCraft II as long as the game is designed to have the full map present on the player’s machine. There is no magical way of storing information so that a person’s computer can read the data, but the person can’t. The only mechanism that the game has to prevent map hacks is obfuscation and spyware:
- Obfuscation makes the location and format of the hidden information as confusing as possible.
- Spyware finds common map-hack programs and bans users that run them.
But this is obviously a losing strategy. Motivated adversaries will always be clever enough to bypass the obfuscation, and the spyware is at best a deterrent to unsophisticated attackers. Such spyware programs also have a history of going haywire and breaking users’ computers.
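The obfuscation half of that losing strategy can be sketched in a few lines. Assume, purely for illustration, that hidden unit coordinates are XOR-masked with a fixed key (a stand-in for any reversible obfuscation): because the game itself must be able to decode the data, the decoding logic necessarily ships to the attacker.

```python
# Why obfuscation alone fails: if the client can decode the data, so can the cheater.
KEY = 0x5A  # illustrative fixed mask

def obfuscate(values):
    return [v ^ KEY for v in values]

def deobfuscate(values):
    # The game ships this logic, so an attacker can recover it by
    # reverse engineering -- or simply call it themselves.
    return [v ^ KEY for v in values]

hidden = obfuscate([12, 34, 56])
print(deobfuscate(hidden))  # [12, 34, 56] -- the data is trivially recovered
```

Stronger obfuscation only raises the reverse-engineering cost; it never removes the fundamental property that the secret and the means to read it live on the same machine.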
COMMON GAMING NETWORK ARCHITECTURE
In order to truly prevent cheating, the system would have to be designed to not send game information to players that they should not possess. This sounds like an obvious statement, but it has profound implications for system design. Can you imagine a bank application that stored other users’ financial information on your computer and installed spyware to keep you from reading it? There’s no way to make such a system secure without a complete overhaul.
So let’s take a look at how online game engines are typically architected to get a sense of how they work and why. The two primary ways to architect a networked game are client/server and decentralized.
CLIENT / SERVER
The way the client/server system works is that one machine is the host (typically one of the players in the game). This machine is running the authoritative version of the game, and all other players connect as clients and receive game-state information. This has two major drawbacks:
- The host player has complete control over all aspects of the game and can easily cheat.
- Because players have to communicate indirectly through the host, there is an added delay.
Game cheats in the old days often involved the host player simply modifying the rules of the game at their whim. Host feels like flying? Done. Host wants to be invincible? Easy. It’s a bit like having the umpire of a baseball game play for one of the teams. This problem can be solved by having a neutral third party, such as the game’s developer, run the central server, but this is a big expense.
DECENTRALIZED
The other main way to design an online game is in a decentralized fashion.
The way this works is that all players separately run their local games and communicate to each other through inputs. Player A sends raw inputs to player B, and vice versa. (An input can be different things for different games but usually indicates button presses and clicks.) All players then process each other’s inputs through their local game engine and, because the game engine is designed to be deterministic, come up with matching game states. This method has a number of improvements:
- There is no host that can modify the game rules. Game rules are self-enforcing. Anyone can change the game rules locally, but it will cause the game to desynchronize.
- Communications are direct, which lowers latency between players.
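The deterministic-lockstep idea above can be sketched as follows (the engine and inputs are invented for illustration): each player runs an identical, deterministic simulation and exchanges only inputs, so applying the same inputs in the same order keeps the local states in sync.

```python
# Minimal lockstep sketch: two local simulations, driven only by shared inputs.
def step(state, inputs):
    """Deterministic engine tick: apply every player's input in a fixed order."""
    for player, (dx, dy) in sorted(inputs.items()):
        x, y = state[player]
        state[player] = (x + dx, y + dy)
    return state

inputs_per_tick = [
    {"A": (1, 0), "B": (0, 1)},
    {"A": (1, 0), "B": (0, -1)},
]

state_a = {"A": (0, 0), "B": (5, 5)}  # player A's local simulation
state_b = {"A": (0, 0), "B": (5, 5)}  # player B's local simulation

for tick in inputs_per_tick:
    step(state_a, tick)
    step(state_b, tick)

print(state_a == state_b)  # True -- both simulations agree
```

If player B locally changed the movement rules in `step`, the two states would diverge on the very next tick, and the game would desynchronize rather than let the cheat stand.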
This kind of architecture works very well when the game in question is a two-player game in the perfect-information format. Since players are sending their raw inputs to each other, they necessarily have full game-state information. There’s no way to hide information from players when their raw inputs are being sent back and forth. For example, fighting games tend to be excellent candidates for this architecture.
However, this model tends not to scale especially well for larger groups of players. Every player needs the inputs from every other player. So if you’re in a 10-person online game, you have to send your inputs to the nine other players and receive inputs from all of them in return, and all those other players need to do the same. The total number of messages needed to sustain the game grows quadratically with the number of players (n × (n − 1) per tick). The game also can’t proceed (very far, anyway) without all of its players: if even a single player drops from the game or lags behind, everything breaks. So a fast-paced two-person fighting game works very well with a decentralized architecture, but a 100-person battle royale would not.
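The scaling math is easy to check:

```python
# Total input messages per tick in a fully connected peer-to-peer game:
# each of n players sends to the other n - 1.
def messages_per_tick(n):
    return n * (n - 1)

print(messages_per_tick(2))    # 2 -- a fighting game: one message each way
print(messages_per_tick(10))   # 90
print(messages_per_tick(100))  # 9900 -- a battle royale quickly becomes impractical
```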
To play games with hidden information or that scale far beyond a two-player scenario, a trusted third party in the form of a client/server architecture is required.
A SECURE REDESIGN
The original StarCraft: Brood War was designed to be decentralized. This is fairly easy to verify by just playing a game and firing up Wireshark: you’ll notice that the game communicates directly with the other players. Brood War had all of the drawbacks described above, both the scaling problems and the inability to prevent map hacking. So when StarCraft II came around, it was time for an upgrade!
StarCraft II uses a client/server architecture, not a peer-to-peer one. Again, this is simple to verify on your own by looking at some packets in Wireshark.
But wait… If StarCraft II uses a client/server architecture, why is it possible to map hack?! Shouldn’t this new architecture have solved the problem? Well, not so fast. It appears that early in StarCraft II’s development, the game used a decentralized architecture; in fact, during the game’s public beta test, clients communicated directly with each other without a central server. At some point in development, this structure shifted to client/server, but only superficially. The server appears to act as a simple hub for communications, but the actual content of the messages being relayed still reveals information that ought to be hidden. So basically, StarCraft II sends all of its information through an intermediary central server, but the core logic of the game still demands that raw inputs be transmitted.
And now it should start to become clear why StarCraft II has not fixed the map-hack problem after all these years: redesigning the system in this way just isn’t feasible after it’s already in place. Fixing it isn’t a simple patch; it would mean totally revamping how almost every aspect of the game works. Meanwhile, League of Legends (a game in a similar genre) does not suffer from map hacking because it was programmed to not inform players about information they shouldn’t know.
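The fix that League of Legends embodies can be sketched like so (the function names and unit format are hypothetical, not from either game): an authoritative server keeps the full state and sends each client only a pre-filtered view, so the hidden data never reaches the player's machine in the first place.

```python
# Server-side visibility filtering: the client never receives what it shouldn't see.
FOG_RADIUS = 2

def snapshot_for(player, full_state, positions):
    """Authoritative server filter: include only units within the player's vision."""
    x0, y0 = positions[player]
    return [u for u in full_state
            if abs(u["x"] - x0) + abs(u["y"] - y0) <= FOG_RADIUS]

full_state = [
    {"owner": "A", "x": 0, "y": 0},
    {"owner": "B", "x": 9, "y": 9},
]
positions = {"A": (0, 0), "B": (9, 9)}

# Each client receives a different, pre-filtered snapshot:
print(len(snapshot_for("A", full_state, positions)))  # 1 -- A cannot see B's unit
print(len(snapshot_for("B", full_state, positions)))  # 1 -- B cannot see A's unit
```

Under this design, a memory-reading cheat can only reveal what the server already chose to send, which is exactly what the honest UI shows.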
WHAT WE CAN LEARN FROM THE CHEAT
This should serve as a cautionary tale: having a secure design from the start can save you a lot of trouble in the future. Once your product or service is launched, fundamentally changing how core functionality works may not be feasible.
It also teaches us an important lesson about setting and enforcing trust boundaries in applications. Be explicit about where the trust boundaries of your application lie; doing so forces you to confront otherwise implicit design flaws, such as trusting users not to tamper with data. Many defenses against video game cheats fail because the trust boundary has been extended to include essentially every aspect of the system.
In our next segment, we’ll be exploring how video games and web applications deal with bots and anti-automation.