In Vegas I bought Herman “Exploiting Online Games” by Greg Hoglund and Gary McGraw. Being the saint that I am, I looked at the book thoroughly on the plane on the way home. Fortunately I was able to verify that most of the pages were there and intact and that there were no blatant spelling or grammatical errors – it wouldn’t do to give Herman a broken book.
Whilst I was checking Herman’s gift *anyway*, I figured it wouldn’t hurt to also read and absorb some of the content – just to make sure I wasn’t giving him nonsense (with all due respect to Greg and Gary). In particular, what interested me was whether their thinking on online games held any lessons for the work we more traditionally do on online financial and e-commerce systems. I thought the book was fascinating, particularly in this context. What follows is a mind dump of some of the thoughts I had as I was reading.
All of my descriptions of the in-game attacks are horrible kludges as I’ve never gamed for one hour in my adult life and I only speed read Greg and Gary’s book. My point is to make the comparison. For a detailed description of the gaming exploits, I’m afraid you’ll have to read Herman’s book yourself.
– Hacking *inside* the game: This is a trend we’ve been aware of for some time in application hacking. Rather than try to gain access to the host or underlying data, we attack the logic of the application to gain some advantage within the logical constraints of the application. The better we understand the logic, rules, etc. of the application, the more chance we have of succeeding at this kind of attack. In some applications this is easy. For example, within Internet Banking an obvious attack objective is to make an unauthorized transfer of funds. Within WoW, it may be more subtle. Here an attack objective would be to copy (or ‘dupe’) resources like gold or weapons, to play without paying, or to move to parts of the game you’re not supposed to reach, etc. To really understand the attack objectives you’d probably have to really understand the game objectives (Jeremy is loving this). This introduces an interesting hidden cost into an application assessment. Remember phase 1 of our methodology – ‘Understand the business’? If you’re assessing a poker application you probably have to have a pretty good understanding of poker. In some cases, such an understanding may take years of play to obtain. The same is true for more complex applications in other sectors, like the financial industry. Can you really assess an online trading site if you don’t understand the stock market? Hmmm. This is a tricky problem given that we probably can’t afford to go to business school for each new security assessment we conduct. This challenge is probably best addressed by Threat Modeling. Specifically, I’m referring to the component of our Threat Modeling methodology that brings developers, business owners and security people together to hypothesize about the threats a system or application might face. The whole idea behind this phase is that the owners of the system, who already have a deep understanding of play rules and objectives, are coaxed into thinking about possible attacker objectives.
This process of ‘coaxing’ is a field of study that possibly warrants some more attention.
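The Internet Banking example above can be sketched in a few lines. This is a hypothetical toy, not any real banking code – all the names and data structures are invented – but it shows the shape of a pure logic flaw: the application is working “correctly” at the technical level, it just never asks the right question.

```python
# Hypothetical sketch of a logic flaw: a transfer function that
# validates the balance but never checks that the source account
# belongs to the logged-in user. All names here are invented.

ACCOUNTS = {"alice-001": 500, "bob-002": 200}
OWNERS = {"alice-001": "alice", "bob-002": "bob"}

def transfer(session_user, src, dst, amount):
    # BUG: no check that OWNERS[src] == session_user, so any
    # authenticated user can move money out of any account.
    if ACCOUNTS[src] >= amount:
        ACCOUNTS[src] -= amount
        ACCOUNTS[dst] += amount
        return True
    return False

# Bob, logged in as himself, drains Alice's account:
transfer("bob", "alice-001", "bob-002", 500)
print(ACCOUNTS)  # Alice's balance is now 0
```

Note that nothing here is ‘broken’ in the traditional sense – no overflow, no injection. The attack only makes sense once you understand what the application is *for*, which is exactly the point about understanding the business first.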
– Collusion: When two players collaborate on a gambling game it’s almost impossible to detect them. Are there parallels for this in the financial world, e.g. in trading? Collaboration could also be used to artificially drive prices up in an online auction, for example.
– Bots: CAPTCHA and similar mechanisms are used to detect bots in games. It’s interesting how pervasive this trend is – bot detection is seen everywhere from Google to SMS notifications, and it remains a big problem.
– DoS: In an online auction DoS is used to remove a competitor from the bidding.
– ‘Edge’ cases: Haroon and I have both often argued that security breaks down at the ‘edges’. In a complex system the ‘edge’ is typically where information is handed over from one component to another. In online gaming this seems to occur most frequently where, for some reason, the player moves from one server to another. In WoW, for example, there are exploits whereby player A makes a payment to player B, then quickly jumps to another location (and therefore server) and drops his connection. On reconnecting, player A’s state is returned to what it was before he jumped (when he still had the money), whilst player B retains the money he received (or something like that) – a collaborative ‘duping’ attack. A common example of this in the web application world is with SSO. You log in to server A, which somehow has to inform servers B, C and D of your state. The same happens when you log out, get locked out, etc. This handover is tricky and can often be made to break.
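The cross-server ‘duping’ pattern can be sketched as follows. This is a minimal toy with invented names – real game servers obviously work nothing like two dictionaries – but it captures the essential flaw: the two sides of the transaction are persisted at different times, and the disconnect lands in the gap.

```python
# Sketch (invented names) of the cross-server duping attack: the
# recipient's state is saved immediately, but the payer hops servers
# and drops his connection before his own state is persisted, so an
# old snapshot is restored and the gold exists twice.

snapshots = {}                              # last persisted state per player
live = {"A": {"gold": 100}, "B": {"gold": 0}}

def persist(player):
    snapshots[player] = dict(live[player])

def pay(src, dst, amount):
    live[src]["gold"] -= amount
    live[dst]["gold"] += amount
    persist(dst)            # recipient's server saves straight away...

def crash_and_reconnect(player):
    # ...but the payer disconnects mid-handover, so the server
    # falls back to the last snapshot it has for him.
    live[player] = dict(snapshots[player])

persist("A"); persist("B")                  # both start from a saved state
pay("A", "B", 100)
crash_and_reconnect("A")
print(live["A"]["gold"], live["B"]["gold"])  # 100 100 – gold duplicated
```

The SSO version of this is the same picture with ‘gold’ replaced by ‘session state’: server A knows you’ve logged out, but if the handover to B, C and D is interruptible, somebody still thinks you’re logged in.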
– Race conditions: Related to the edge cases, Greg and co. describe a series of exploits against race conditions. In fact, the attack described above actually involves artificially loading a server so as to create an exploitable race condition. However, the example they give that’s the most interesting to me involves a race between automated and manual processes – in their example the race is between account creation and billing. They describe an attack whereby an account is created in the game before the billing is processed, thereby allowing free game play for some period. Carefully exploited, this free window can apparently be extended indefinitely. Now this I find interesting, because I imagine there are race conditions like this (between process and technology) in some of the environments that we assess that might also be exploitable. I don’t think it’s something we’ve ever really considered.
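A hedged sketch of that process-vs-technology race, with all names invented: the account becomes usable the moment it’s created, while billing only runs as a later batch job. The gap between the two events is free play, and re-registering before each run extends the window.

```python
# Toy model (invented names) of the account-creation/billing race:
# creation is instant and automated, billing is a slow batch process.

accounts = {}

def create_account(name):
    # Account is active immediately; billing happens "later".
    accounts[name] = {"active": True, "billed": False}

def nightly_billing_run():
    for acct in accounts.values():
        if not acct["billed"]:
            acct["active"] = False   # suspend until payment clears
            acct["billed"] = True

create_account("mallory")
print(accounts["mallory"]["active"])      # True: playing, never paid
nightly_billing_run()
create_account("mallory-2")               # fresh account before next run
print(accounts["mallory-2"]["active"])    # True again – window extended
```

The race isn’t between two threads here; it’s between an automated system and a human/batch process, which is exactly why it’s easy to miss in a technology-focused assessment.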
– Holding state on the client side: Apparently some of the games maintain some state on the client. This could include (bizarrely to my mind) the X, Y and Z coordinates of the player on the 3D map. Obviously, by manipulating these values within the client a player can achieve all kinds of funkiness, including walking underground, etc. Can anyone think of an example where holding state on the client side is a bad idea? Hmmm. There’s a more complex example that I also found interesting. Apparently, in WoW, a monster sometimes ‘drops’ valuable items when it’s killed. So imagine you’re a WoW developer. Items held by a monster are attributes of that monster’s data structure, along with other attributes like its life, powers, energy, etc. You need to send this information to a player’s client when the monster first appears on his screen, so why not send the list of ‘drop’ items along at the same time? Only one query to the DB, only one packet to the client. All in all a better idea. However, that means a savvy player can determine what a monster will drop *before* it is killed, by examining the relevant data structure within the client. This information is useful for something called ‘drop farming’. This reminds me of a web application vulnerability Haroon discovered ages ago. It involved a simple password reset feature upon failed login. The logic worked like this: “If the user provides a valid user name but an incorrect password, then we provide him with a ‘password reset’ button. Clicking this button causes the user’s password to be mailed to his registered email address.” Imagine you’re the website developer. You have to make a query to the database anyway, to retrieve the username and password, so why not fetch the email address at the same time? That way, if the login fails, we already have the email address to which the password should be sent. Only one query to the DB, only one packet to the client.
In Haroon’s example the developers stored the email address in the client also – as a hidden field in the HTML form. The attack? Attempt to login as a legitimate user, then replace the ‘hidden’ email address in the ensuing form with your own and have the system mail the password to you. I find the similarity between these two attacks uncanny.
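The hidden-field flaw can be sketched like this. Again, this is an invented toy, not the actual site Haroon found – but the structure is the same: the server embeds the registered address in the form it sends out, then trusts whatever comes back.

```python
# Sketch (invented names) of the password-reset hidden-field flaw:
# the email address travels to the client as a hidden form field and
# is trusted when the form is submitted back.

USERS = {"victim": {"password": "s3cret", "email": "victim@example.com"}}
outbox = []   # stand-in for the outgoing mail queue

def failed_login_form(username):
    # One DB query fetched password *and* email, so the email is
    # conveniently echoed back to the client as a hidden field.
    email = USERS[username]["email"]
    return f'<input type="hidden" name="email" value="{email}">'

def handle_reset(username, email_from_form):
    # BUG: trusts the client-supplied field instead of re-reading
    # the address from the database.
    outbox.append((email_from_form, USERS[username]["password"]))

form = failed_login_form("victim")          # contains victim@example.com
handle_reset("victim", "attacker@evil.example")  # attacker edited the field
print(outbox[0])  # ('attacker@evil.example', 's3cret')
```

In both the WoW case and this one, the ‘optimization’ of bundling extra data into the client round-trip is what creates the hole – the client sees (or controls) state it was never meant to.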
Unfortunately I had to give Herman his present before I could finish examining it fully. However, when he’s done with it I’ll borrow it back and maybe blog some more interestingness….