Cheating at Online Video Games and What It Can Teach Us About AppSec (Part 2)
In our last installment, we talked about networking design and trust boundaries in video games and web applications. We pick back up with a story about bots and the struggle to find anti-automation defenses that actually work in our systems.
AUTOMATING ACTIONS WITH BOTS
Competitive games are a test of a person’s skill, often including a degree of physical ability. For as much strategy as a well-thought-out play in football can entail, being able to just run faster than everyone else is a big advantage! So too in video games. There is an immense amount of technical skill involved in any serious esport: fast reactions, precise timing, and accurate movement, all things that computers excel at.
One common way of cheating at video games is to have a computer program (or bot) automate technically demanding tasks for the player. A bot can press buttons faster than any human, with perfect precision and lightning-fast reactions. For example, a real-life Dota 2 team was banned from The International, a tournament with a $25 million prize pool, in 2018 for using a mouse with an illegal macro capability that gave its user superhuman control.
Similarly, in the fighting game community, it’s standard for players to bring their own fight sticks, which can be complex pieces of hardware, to competitive tournaments:
[Image: fight stick controller's electrical components]
All that circuitry can potentially include a lot of logic for playing games in a semi-automated fashion. Some fight sticks even have the ability to record and play back macros of button combinations.
If the game in question involves a test of technical skill, then this kind of computer-aided automation system can give a player a significant advantage.
ANTI-AUTOMATION
The struggle to ensure that human beings, rather than bots, are playing games might sound familiar to you, because web applications face the same problem. When bots automate some sensitive part of a web application’s workflow, bad things can happen. For example, a web application may allow users to store small files on the backend, perhaps an image to use as an avatar. Left unchecked, bots may abuse this feature and turn it into unlimited free cloud storage. In this case, the feature is not being hacked in the ordinary sense; the bots are simply abusing it at a scale it was never intended for.
The video game community’s approach to anti-automation has taken two major forms:
- Spyware — What’s with all the spyware, video game industry? Stop it.
- Anomaly Detection — Through results and key metrics
The second approach is far more interesting and doesn’t get talked about enough in the information security community as a way of detecting and preventing automation. Essentially, it comes down to defining known expected behaviors and flagging outcomes that deviate too far from them, because those outcomes are likely cheats. For example, if you start winning too much money over time at a casino blackjack table, you will be escorted out of the building. The casino doesn’t need to know how you performed your cheat; the fact that you’re consistently winning is evidence enough.
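To make the casino example concrete, here is a minimal sketch in Python of results-based detection. The house edge, per-hand variance, and cutoff are assumed values for illustration, not real casino figures:

```python
import math

# Sketch: flag players whose cumulative winnings are statistically
# implausible. The edge and variance constants below are assumptions
# chosen for illustration.
EXPECTED_EDGE = -0.005   # assumed expected value per unit wagered, per hand
STDDEV_PER_HAND = 1.15   # assumed standard deviation of one hand's outcome

def is_suspicious(net_units_won: float, hands_played: int,
                  z_cutoff: float = 4.0) -> bool:
    """Return True when results sit too far above expectation to be luck."""
    if hands_played == 0:
        return False
    expected = EXPECTED_EDGE * hands_played
    stddev = STDDEV_PER_HAND * math.sqrt(hands_played)
    return (net_units_won - expected) / stddev > z_cutoff

# A player up 300 units after 2,000 hands is roughly six standard
# deviations above expectation: no mechanism analysis required.
print(is_suspicious(net_units_won=300, hands_played=2000))  # True
```

Notice that the check never asks how the player is winning; it only asks whether the results are plausible.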
Another way of thinking about it is to stop focusing so much on the bot and focus more on the adverse behavior you’re trying to prevent. Whether the user is made of silicon or carbon is outside the scope of your security.
As an application developer, set explicit security goals around anti-automation. “Disallowing bots” is not sufficient; be specific. It turns out that depending on what you want to accomplish, there may be very different solutions:
- Do you want to prevent DDoS attacks? There are good strategies for this that don’t involve CAPTCHAs.
- Do you want to keep bots from filling up your disk space? Some simple heuristics may be enough.
- Do you want to keep users from running up your AWS bill by repeatedly exercising certain functionality? Perhaps some basic rate limiting is the answer (see the sketch after this list).
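For the rate-limiting case, a token bucket is one common approach. The sketch below is a minimal single-process version in Python; the capacity and refill rate are placeholder values you would tune to your own traffic, and a production deployment would typically keep bucket state in a shared cache rather than an in-memory dict:

```python
import time

class TokenBucket:
    """Allow short bursts, but cap sustained request rates."""

    def __init__(self, capacity: float = 10.0, refill_per_sec: float = 1.0):
        self.capacity = capacity          # burst budget (placeholder value)
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec  # sustained rate (placeholder)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refuse the request otherwise."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user (or per API key or IP) keeps a bot from hammering
# an expensive endpoint without slowing everyone else down.
buckets: dict[str, TokenBucket] = {}

def handle_request(user_id: str) -> str:
    bucket = buckets.setdefault(user_id, TokenBucket())
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

for _ in range(12):
    status = handle_request("user-123")
print(status)  # "429 Too Many Requests" once the burst budget is spent
```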
Slapping a CAPTCHA on an application and calling it a day used to work back in the early 2000s, but not in the 2020s.
ANOMALY DETECTION
Sometimes the behavior you want to prevent doesn’t fit nicely into an existing category with known solutions. For example, if your application runs a marketplace and you want to prevent arbitrage, there is no standard solution to lean on. One appealing possibility is anomaly detection: define a set of expected behaviors and then flag users who deviate from them.
Video games have similarly well-defined expected behaviors, such as actions per minute (APM), a measure of how fast someone is playing. It’s not hard to play “Spot the Bot” in a game of StarCraft II when strong human players average a few hundred APM while bots like DeepMind’s AlphaStar can peak in the thousands.
As CAPTCHAs continue to lose the war against automation, this fascinating bend-but-don’t-break style of defense becomes enticing. Don’t try to make your system impossible to ever cheat at; instead, make it impractical to cheat repeatedly and get away with it. This is a very different approach to anti-automation than what we typically see in web application security, and it’s one that requires customization to the particular business logic of your application.
The first ingredient you’ll need, however, is a good sense of what normal behavior looks like for your application’s key metrics. This could be the number of concurrent logins, HTTP request frequency, the number of distinct IP addresses associated with an account, or dollar amounts. Once you have a good sense of what’s normal, you can set some boundaries around cases that stray too far from the norm.
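As a rough illustration of that idea, the following Python sketch learns a baseline from historical per-account observations and flags values that stray too far from it. The example metric (distinct IPs per account) and the three-standard-deviation cutoff are assumptions; real systems often prefer more robust statistics such as percentiles:

```python
import statistics

# Sketch: learn what "normal" looks like for a metric from historical,
# presumed-legitimate observations, then flag values that stray too far.

def baseline(observations: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of historical data for one metric."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_outlier(value: float, mean: float, stdev: float,
               cutoff: float = 3.0) -> bool:
    return abs(value - mean) > cutoff * stdev

# e.g., distinct IP addresses seen per account in a week (made-up data)
historical_ips_per_account = [1, 1, 2, 1, 3, 2, 1, 1, 2, 4]
mean, stdev = baseline(historical_ips_per_account)

print(is_outlier(2, mean, stdev))   # False: plausible roaming user
print(is_outlier(40, mean, stdev))  # True: likely shared or automated account
```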
To tell whether it’s a bot playing a video game or using a web application, we can validate key metrics like the following (a sketch follows the list):
- How quickly is the user acting? APM will tell you if the player is acting superhumanly fast, much like requests per second for a web application.
- How diversely is the user acting? Bots built to abuse websites typically exercise only a small portion of the site’s total functionality, unlike a regular user. Similarly, bots in video games will often perform just one action, such as gold mining.
- How perfectly is the user acting? For web applications, this could mean many things, such as the timing of requests (e.g., making an API call exactly once per second). In games, it can mean any number of performance-related metrics such as accuracy or effective APM (a measure of wasted actions).
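Here is that sketch: a minimal Python version of all three checks applied to a hypothetical request log. Every threshold is an assumed placeholder; real values should come from your application’s observed baselines:

```python
import statistics

def looks_automated(requests: list[tuple[float, str]]) -> list[str]:
    """Flag a request log of (timestamp_seconds, endpoint) pairs using
    the three heuristics above. All thresholds are illustrative."""
    flags = []
    if len(requests) < 2:
        return flags
    timestamps = [t for t, _ in requests]
    endpoints = [e for _, e in requests]
    duration = max(timestamps) - min(timestamps)

    # Speed: a superhuman request rate, the web analogue of APM.
    if duration > 0 and len(requests) / duration > 20:
        flags.append("acting too fast")

    # Diversity: many requests that touch only a sliver of the site.
    if len(requests) >= 100 and len(set(endpoints)) <= 2:
        flags.append("acting too narrowly")

    # Perfection: inter-request timing too regular to be human.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) >= 10 and statistics.stdev(gaps) < 0.01:
        flags.append("acting too precisely")

    return flags

# A bot polling one endpoint exactly once per second trips two checks.
bot_log = [(float(i), "/api/prices") for i in range(120)]
print(looks_automated(bot_log))  # ['acting too narrowly', 'acting too precisely']
```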
None of these defenses is ironclad on its own, but enforcing them together means an attacker must not only build a bot but also make it appear human according to heuristics that are arbitrary and unknown to them. In practice, this defensive method can be very effective at preventing cheating.
THE LAST CHAPTER
Coming up in our third and last installment, we’ll be diving into race conditions and how they affect both video games and web application security. Stay tuned!