
How to prevent automated AJAX attacks

How to prevent USER from doing automated posts/spam?

Here is my way of doing it: a new PHP session token for each page request. This has its own limitations, such as no multi-tabbing.

I used a new token for each page as a defense against both CSRF and automated attacks. Let's say we have a forum that uses AJAX to post threads, and each post is validated against the PHP session.

add_answer.php?id=123

<?php
session_start();
if (!is_ajax()) { // helper that determines whether the request came via AJAX (HTTP header stuff)
    $_SESSION['token'] = md5(rand());
}
// the page then fires an AJAX request to ajax.php?id=123&token=<token>
?>

ajax.php?id=123

<?php
session_start();
if (isset($_GET['token']) && $_GET['token'] === $_SESSION['token']) {
    echo 'MYSQL INSERT stuff';
} else {
    echo 'Invalid Request';
}
?>

Everything works fine until the user opens page.php?id=456 in another tab; the AJAX call on ajax.php?id=123 then returns 'Invalid Request'. This is related to another question I asked. There it was suggested to keep one session token the whole time and regenerate it only when the user logs out. But if the token never changes, the USER could simply bypass it and run automated attacks. Any ideas on that?

Anyhow, what's your way of preventing automated AJAX attacks?

PS:

  1. Don't torture users with CAPTCHAs.
  2. Google failed to show me anything useful on this.
  3. Take this as a challenge.
  4. Or at least upvote the answers from the experts that you think are brilliant ways of doing this.


It sounds like your objection to letting the session token stay the same for as long as the browser is open is the issue of automated attacks. Unfortunately, refreshing the token on each page load only deters the most amateur attackers.

First, I assume we're talking about attacks specifically targeted at your site. (If we're talking about the bots that just roam around and submit various forms, not only would this not stop them, but there are far better and easier ways to do so.) If that's the case, and I were targeting your site, here's what my bot would do:

  1. Load form page.
  2. Read token on form page.
  3. Submit automated request with that token.
  4. Go to step 1.

(Or, if I investigated your system enough, I'd realize that if I included the "this is AJAX" header on each request, I could keep one token forever. Or I'd realize that the token is my session ID, and send my own PHPSESSID cookie.)
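To make that loop concrete, here is a minimal sketch of such a bot using PHP's cURL extension. The target URLs, the assumption that the token appears in the markup as token=<32 hex chars>, and the regex are all illustrative, not taken from the question:

<?php
// Hypothetical bot: fetch the form, scrape the token, replay it, repeat.
$jar = tempnam(sys_get_temp_dir(), 'cookies'); // persists the PHPSESSID cookie
$ch  = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEJAR      => $jar,
    CURLOPT_COOKIEFILE     => $jar,
));
while (true) {
    curl_setopt($ch, CURLOPT_URL, 'http://victim.example/add_answer.php?id=123');
    $html = curl_exec($ch);                               // step 1: load form page
    if (preg_match('/token=([a-f0-9]{32})/', $html, $m)) { // step 2: read the token
        curl_setopt($ch, CURLOPT_URL,
            'http://victim.example/ajax.php?id=123&token=' . $m[1]);
        curl_exec($ch);                                   // step 3: submit with token
    }
}                                                         // step 4: go to step 1
?>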

This method of changing the token on each page load would do absolutely nothing to stop someone who actually wanted to attack you all that badly. Therefore, since the token has no effect on automation, focus on its effects on CSRF.

From the perspective of blocking CSRF, creating one token and maintaining it until the user closes the browser seems to accomplish all goals. Simple CSRF attacks are defeated, and the user is able to open multiple tabs.

TL;DR: Refreshing the token on each request doesn't boost security. Go for usability instead and use one token per session.


However! If you're extremely concerned about duplicate form submissions, accidental or otherwise, this issue can still easily be resolved. The answer is simple: use two tokens for two different jobs.

The first token will stay the same until the browser session ends. This token exists to prevent CSRF attacks. Any submission from this user with this token will be accepted.

The second token will be uniquely generated for each form loaded and stored in a list of open form tokens in the user's session data. This token is unique and is invalidated once it is used. Submissions from this user with this token will be accepted once and only once.

This way, if I open a tab to Form A and a tab to Form B, each one has my personal anti-CSRF token (CSRF taken care of), and my one-time form token (form resubmission taken care of). Both issues are resolved without any ill effect on the user experience.
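A minimal sketch of the two-token scheme, assuming plain session storage; the helper names (issue_form_token, validate_tokens) and session keys are made up for illustration:

<?php
session_start();

// Per-session anti-CSRF token: created once, kept until the browser session ends.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = md5(rand());
}

// Per-form one-time token: generated fresh on every form render and
// remembered in a list of open form tokens in the session.
function issue_form_token() {
    $token = md5(rand());
    $_SESSION['form_tokens'][$token] = true;
    return $token;
}

// On submission: both tokens must check out, and the form token is spent.
function validate_tokens($csrf, $form) {
    if ($csrf !== $_SESSION['csrf_token']) return false;      // CSRF check
    if (empty($_SESSION['form_tokens'][$form])) return false; // unknown or reused
    unset($_SESSION['form_tokens'][$form]);                   // once and only once
    return true;
}
?>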

Of course, you may decide that that's too much to implement for such a simple feature. I think it is, anyway. Regardless, a solid solution exists if you want it.


If you're trying to prevent having one client DoS you, an uncommon but workable strategy would be to include a hashcash token in the request (there are already PHP and JavaScript implementations).

In order to prevent breaking tabbed browsing and back buttoning, ideally you'd want the hashcash token's challenge to contain both a per-session anti-forgery token and a uniqueness portion newly generated for each request. In order to minimize the impact on usability if you have a large token cost, start precomputing the next token in your page as soon as you've expended the previous one.

Doing this limits the rate at which a client can produce valid requests, because each hashcash token can only be used once (which means you'll need to keep a cache of valid, already-spent hashcash tokens attached to the session to prevent endless reuse of a single token), they can't be computed in advance of the session start (because of the random anti-forgery value), and it costs nontrivial amounts of CPU time to generate each new valid token but only a trivial amount of time to validate one.
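For illustration, here is a rough server-side validator for a hashcash-style stamp of the form <session-token>:<nonce>:<counter>. The 20-bit difficulty, the field layout, and the use of sha1 are assumptions; a real deployment would lean on one of the existing hashcash libraries mentioned above:

<?php
session_start();

// Accept a stamp only if it binds to this session, has not been spent,
// and its sha1 hash starts with $bits zero bits (i.e. $bits/4 hex zeros).
function validate_hashcash($stamp, $bits = 20) {
    $parts = explode(':', $stamp);
    if (count($parts) !== 3) return false;
    if ($parts[0] !== $_SESSION['token']) return false;  // anti-forgery binding
    if (isset($_SESSION['spent'][$stamp])) return false; // no token reuse
    $zeros = (int) ($bits / 4);                          // assumes $bits % 4 == 0
    if (substr(sha1($stamp), 0, $zeros) !== str_repeat('0', $zeros)) {
        return false;                                    // not enough work done
    }
    $_SESSION['spent'][$stamp] = time();                 // cache as already-spent
    return true;
}
?>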

While this doesn't prevent automation of your AJAX API per se, it does constrain high-volume hammering of the API.


How to prevent USER from doing automated posts/spam?

This could likely be solved in the same manner as regular requests. A token per page load and stopping new tabs may be overkill. Certainly a time-sensitive token per form may mitigate CSRF attacks to some degree. But otherwise, instead of restricting the user experience, it may be best to define and implement a submission policy engine.

At the risk of sounding pompous or demeaning to everyone: Often sites use a points-based reward system, such as "karma" or "badges". Such systems actually add to the user experience as submissions then become a sort of game for users. They may often restrict the ability to post submissions to only trusted users or by a max number during a given time-frame. Take a look at SO's system for a good use case.

A very basic answer just demonstrating some common site policies:

  • If the user has exceeded a count of x posts in the past y minutes, deny the DB insert and display a "Sorry, too soon since your last post" warning. This can be achieved by querying the DB for a count of the user's posts over a given recent time period before allowing the new record insert (see the sketch after this list).
  • If the user doesn't meet a certain karma threshold - for example, new users or those repeatedly marked as spammers - deny the DB write and display a "Sorry, you haven't been here long enough" or a "Sorry, you spam too much" warning. This can be achieved by querying the DB for a total of the user's "karma", which is managed in a separate table or site module, before allowing the new record insert.
  • If the site is small and manageable enough to be moderated by just one or two users, have all new user requests and posts reviewed and approved first. This can be achieved by holding new entries in a separate table for review before moving to the live table, or by having an "approved" flag column on the main table.
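As a sketch of the first policy, here is a pre-insert check using PDO against MySQL. The table and column names (posts, user_id, created_at) and the limits are assumptions for illustration:

<?php
// Deny the insert if the user already made $max posts in the past $minutes.
function too_many_recent_posts(PDO $db, $userId, $max = 5, $minutes = 10) {
    $stmt = $db->prepare(
        'SELECT COUNT(*) FROM posts
         WHERE user_id = ?
           AND created_at > NOW() - INTERVAL ? MINUTE'
    );
    $stmt->execute(array($userId, $minutes));
    return (int) $stmt->fetchColumn() >= $max;
}

// Usage, before the INSERT:
// if (too_many_recent_posts($db, $currentUserId)) {
//     exit('Sorry, too soon since your last post');
// }
?>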

Furthermore, a count of policy violations can be kept for each user, and if it exceeds a certain point over a given time period, you may opt to have them automatically banned for a certain time period. The ban can be put into effect by denying all DB writes related to that user if you wish.

On the note about "HTTP header stuff": headers only offer a best guess, provided as a courtesy, at what the client is requesting. They are no more difficult to forge than cookies, and forging cookies takes only a click of the mouse. And honestly, I personally wouldn't have it any other way.

