Security Watch

.NET Web Apps: Bad Input Can Lead to Bad Security

Plus: a look at cross-site request forgery and an examination of how malware spreads through peer-to-peer networks.

Despite numerous articles from Microsoft warning that .NET's own user input validation classes are not sufficient on their own, it has been suggested that many sites rely solely upon those classes. That leaves those sites vulnerable to a variety of attacks their developers may have thought they were protected against. Here's a whole list of resources that point to this problem:

There is no vulnerability here, per se, other than the fact that a programmer may think they have protected themselves when they actually haven't done so completely. As the links above show, the issue is well documented, especially by Microsoft. Microsoft supplied classes intended to provide basic vetting of user input. Those classes are not, however, guaranteed to prevent all invalid or potentially harmful client data from reaching the routines the programmer intends to protect.

ValidateRequest uses pattern matching to identify a set of known bad input. It cannot detect all potentially harmful input, because much of what is harmful is site-specific. As we have always said about vendor-supplied Web pages, every site owner needs to take the steps necessary to verify that input routines accept only the input that's expected. More often than not, this requires validation routines beyond those supplied by .NET's ValidateRequest.
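
To make that concrete, here's a minimal sketch in C# of the kind of allow-list validation a site might layer on top of ValidateRequest. The field name and the rule are entirely hypothetical; the point is that instead of trying to enumerate bad input, the site rejects anything that doesn't match what it actually expects:

    using System;
    using System.Text.RegularExpressions;

    // Illustrative allow-list validation. The field and rule below are hypothetical;
    // each site must define rules for the input it actually expects.
    static class InputValidator
    {
        // Example rule: an account nickname may contain only letters, digits,
        // spaces and hyphens, and must be 1 to 32 characters long.
        private static readonly Regex NicknamePattern =
            new Regex(@"^[A-Za-z0-9 \-]{1,32}$", RegexOptions.Compiled);

        public static bool IsValidNickname(string input) =>
            input != null && NicknamePattern.IsMatch(input);
    }

    static class Demo
    {
        static void Main()
        {
            Console.WriteLine(InputValidator.IsValidNickname("Savings-01"));            // True
            Console.WriteLine(InputValidator.IsValidNickname("<script>alert(1)"));      // False
        }
    }

Note that the second call is rejected not because the input looks like a known attack, but simply because it isn't a valid nickname -- which is exactly the posture ValidateRequest alone can't give you.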

Cross-Site Request Forgery: Scarier Than Cross-Site Scripting
A hacker group has suggested that cross-site request forgery (CSRF) is the next big thing in hacking on the Internet. CSRF relies upon a number of factors:

  1. An unsuspecting victim visits a malicious site.
  2. The malicious site has coded requests to a "good" service site, which the victim has an account with.
  3. The "good" service site allows a visitor to specify everything within an HTTP request.

Here's a good candidate for a CSRF attack: Your bank’s site allows you to specify a money transfer -- including how much and to whom -- entirely within a single HTTP request.

Guess what? Too many sites allow this type of request, of which MySpace and NetFlix are but two examples. To prevent such abuses, a site would have to allow the creation of, say, a money transfer only via a series of HTTP exchanges, with each one separated by some sort of confirmation by the client. It also means maintaining state with the client in order to ensure that the separate pieces of a request are not sent simultaneously.
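
As a rough illustration of that multi-step approach, here is a minimal sketch in C# (assuming a recent compiler). Everything in it is hypothetical -- the class names, the token scheme, the in-memory store -- and it omits authentication entirely; it only shows the general shape of splitting a transfer into an initiation step and a separate confirmation step tied to server-side state, so that a single forged request can't complete the action on its own:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: a transfer cannot be completed by one request alone.
    // Step 1 records the requested transfer server-side and hands back a random
    // confirmation token; step 2 must present that token before anything moves.
    class TransferService
    {
        private sealed record PendingTransfer(string ToAccount, decimal Amount);

        // Server-side state, keyed by an unguessable confirmation token.
        private readonly Dictionary<string, PendingTransfer> _pending = new();

        // Step 1: initiate. Nothing is transferred yet; the caller receives a
        // token that must be echoed back in a separate, user-confirmed request.
        public string InitiateTransfer(string toAccount, decimal amount)
        {
            var token = Guid.NewGuid().ToString("N");
            _pending[token] = new PendingTransfer(toAccount, amount);
            return token;
        }

        // Step 2: confirm. Only a request carrying a valid, previously issued
        // token completes the transfer; a lone forged request has none to present.
        public bool ConfirmTransfer(string token)
        {
            if (!_pending.Remove(token, out var transfer))
                return false;  // unknown or already-used token: reject

            Console.WriteLine($"Transferring {transfer.Amount:C} to {transfer.ToAccount}");
            return true;
        }
    }

A real site would also bind the token to the authenticated session and expire it; the point here is only that a malicious page, which can forge the first request but never sees the token, can't supply the confirmation.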

As is pointed out in the article, the CSRF problem is not something that can trivially be fixed. It does not involve a vulnerability or compromise on either the client or the server. Suggested methods of preventing such an attack involve ensuring that you are completely logged out of any sites that might be abused, such as your banking Web site. This way, should another site attempt to cause you to make an unseen request, you’d be prompted to log into your bank’s site in the process -- making the action visible to you.

Two Papers on Peer-to-Peer Networks
Andrew Kalafut, Abhinav Acharya and Minaxi Gupta studied the Limewire and OpenFT peer-to-peer networks in April 2006. Using the same criteria for both networks, the authors determined that 68 percent of downloadable responses on Limewire contained malware, compared with only 3 percent on OpenFT.

Anyone who’s surprised the number is so high, raise your hand. Anyone who thinks it’s surprisingly low gets a pat on the back.

In another study, Seungwon Shin, Jaeyeon Jung and Hari Balakrishnan examined the KaZaA network during two separate timeframes, February 2006 and May 2006. They came up with some interesting statistics:

  • 22 percent of files trolled in February were infected, while only 15 percent in May were infected.
  • 12 percent of KaZaA clients appeared to be infected during both observations.
  • Only 4.8 percent of clients infected in February were still or again infected in May.

The most interesting part of this study was the 4.8 percent of re-infectees. This number is considerably lower than we might have expected. It shows that the vast majority of people who become infected do get themselves cleaned, and only a few become infected again. Interestingly, 70 percent of those re-infectees were listed in one or more DNS-based realtime blacklists, strongly suggesting that malware is being used to relay spam through those particular systems.

About the Author

Russ Cooper is a senior information security analyst with Verizon Business, Inc. He's also founder and editor of NTBugtraq, www.ntbugtraq.com, one of the industry's most influential mailing lists dedicated to Microsoft security. One of the world's most-recognized security experts, he's often quoted by major media outlets on security issues.
