Security Watch

Can SQL Injections Be Blocked?

A cool trick with a Cisco router, plus why job hunters might be the hunted.

Reports indicate that a botnet is using search engines to discover sites that use .ASP code to generate Web pages. The botnet then attempts to inject SQL into HTTP GET requests so that it modifies the SQL command executed by the site. In the case of ASPROX, it attempts to install an IFrame, which downloads a Trojan to a visitor's system.

Blocking botnets via your Cisco router works fine when you know precisely what you're looking for. You can construct a class-map that matches the bad component of a GET request you want to catch, paired with a policy that drops the request.
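Exact class-map syntax varies by IOS version, so rather than a router config, here is the same signature-matching idea as a Python sketch. The patterns are hypothetical stand-ins for ASPROX-style payloads (which typically carried DECLARE/CAST/EXEC sequences in the query string), not a complete signature set:

```python
import re

# Illustrative signatures only -- real deployments would tune these
# to the exact payloads observed in their logs.
BAD_PATTERNS = [
    re.compile(r"declare\s+@\w+", re.IGNORECASE),
    re.compile(r"cast\s*\(", re.IGNORECASE),
    re.compile(r";\s*exec\b", re.IGNORECASE),
]

def should_drop(request_uri: str) -> bool:
    """Return True if the GET request URI matches a known-bad signature."""
    return any(p.search(request_uri) for p in BAD_PATTERNS)

print(should_drop("/products.asp?id=42"))                            # False
print(should_drop("/products.asp?id=42;DECLARE @s VARCHAR(4000)"))   # True
```

As with the router approach, this catches only what you already know to look for; anything outside the signature list sails through.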

The better way is what we've recommended for years -- vet your code to ensure unsanitized parameters never reach your SQL commands. Microsoft has written up basic instructions for one approach, which is to pass user-supplied data as parameters to queries, in a paper on its site.
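The parameterized approach looks like the following minimal sketch, shown here with Python's built-in sqlite3 module rather than ASP (the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# UNSAFE pattern: concatenation lets a crafted value rewrite the query.
# query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# SAFE pattern: the driver passes user_input as data, never as SQL text.
user_input = "alice' OR '1'='1"
row = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None -- the injection string matched no user
```

Because the driver binds the value separately from the SQL text, the embedded quote is just another character in the string, not a command delimiter.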

However, actually testing user-supplied data to validate that it conforms to what is expected is a more complete method. In the examples Microsoft supplies, the criminally crafted input assumes your code wraps what you expect to be a user-supplied string in single quotes. The criminal embeds a single quote and appends further SQL commands. If your code simply searched the supplied data for a quote (in any format) and truncated the data at that point, none of the additional SQL commands would be considered.
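A minimal sketch of that truncation idea, in Python for illustration (the set of quote characters checked here -- straight and curly -- is an assumption about what "in any format" covers):

```python
QUOTE_CHARS = ("'", "\u2018", "\u2019")  # straight and curly single quotes

def truncate_at_quote(value: str) -> str:
    """Cut user-supplied data at the first single quote, discarding
    any SQL commands an attacker appended after it."""
    for i, ch in enumerate(value):
        if ch in QUOTE_CHARS:
            return value[:i]
    return value

print(truncate_at_quote("Smith"))                     # Smith
print(truncate_at_quote("x'; DROP TABLE users; --"))  # x
```

A real routine would also handle URL-encoded quotes (%27) before this check, since the attacker controls the encoding of the request.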

Even better: Finding a single quote in the user-supplied data should indicate the presence of a criminal attempting to exploit your site. Why not fire off an e-mail to yourself (or your SOC) with the entire request in it so it can be inspected and appropriate action taken? Pre-parsing -- vetting data to ensure it conforms to expectations -- may seem to add processing overhead and is generally believed to be a lot of work. However, a carefully constructed routine need not waste time on good requests, and since the vast majority will be good requests, it won't bog down your site. When you consider the benefits, hopefully you'll come to accept it as a necessity for any well-managed site.
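The fast-path-plus-alert pattern described above might be sketched as follows. The addresses and SMTP relay are hypothetical, and the actual send is left commented out so the sketch stands alone:

```python
from email.message import EmailMessage

SOC_ADDRESS = "soc@example.com"  # hypothetical alert recipient
SMTP_HOST = "localhost"          # hypothetical mail relay

def vet(value: str) -> bool:
    """Fast path: most requests contain no quote and pass immediately,
    so the common case costs a single substring scan."""
    return "'" not in value

def alert(raw_request: str) -> EmailMessage:
    """Build (and, in production, send) an alert carrying the full
    request so the SOC can inspect it."""
    msg = EmailMessage()
    msg["From"] = "webapp@example.com"
    msg["To"] = SOC_ADDRESS
    msg["Subject"] = "Possible SQL injection attempt"
    msg.set_content(raw_request)
    # import smtplib
    # smtplib.SMTP(SMTP_HOST).send_message(msg)  # uncomment to deliver
    return msg

if not vet("id=42'; DROP TABLE orders; --"):
    alert("GET /products.asp?id=42'; DROP TABLE orders; -- HTTP/1.1")
```

Because `vet` rejects-or-passes in one scan, the overhead on legitimate traffic is negligible; only the rare suspicious request pays the cost of building and sending the alert.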

Harvesting Recruits with Trojans
According to PrevX, a Russian gang has crafted a tool that targets a wide variety of online job-hunting sites and uses stolen but legitimate employer credentials to collect CVs. It can then extract pertinent personal information from those CVs, including name, physical and e-mail addresses, and current and former employers.

When it comes to privacy risk, this one is far more than the typical data loss. It's fairly easy to get a new credit card number and not that difficult to get a new email address. Getting a new physical address, employer, or phone number is far more difficult. No doubt the criminals looking for this information are the scam artists who call up and tell you that you've won something, in the hopes you'll send them some money to be able to collect a much more valuable prize. Such information also provides the basis for a far more successful identity theft, be it opening a new bank account or obtaining a mortgage in your name.

Consider limiting the information on your CV that isn't easily changed, and use a CV-specific (or job-hunting-site-specific) e-mail address to help you identify when your information has been leaked.

About the Author

Russ Cooper is a senior information security analyst with Verizon Business, Inc. He's also founder and editor of NTBugtraq, one of the industry's most influential mailing lists dedicated to Microsoft security. One of the world's most-recognized security experts, he's often quoted by major media outlets on security issues.
