How to prevent XSS (Cross Site Scripting) whilst allowing HTML input

I have a website that allows users to enter HTML through a TinyMCE rich-text editor control. Its purpose is to let users format text using HTML.

This user-entered content is then output to other users of the system.

However, this means someone could insert JavaScript into the HTML in order to perform an XSS attack on other users of the system.

What is the best way to filter JavaScript code out of an HTML string?

Performing a regular-expression check for <SCRIPT> tags is a good start, but an evildoer could still attach JavaScript to the onclick attribute of a tag.

Is there a fool-proof way to strip out all JavaScript code, whilst leaving the rest of the HTML untouched?

For my particular implementation, I'm using C#.


Microsoft have produced their own anti-XSS library, Microsoft Anti-Cross Site Scripting Library V4.0:

The Microsoft Anti-Cross Site Scripting Library V4.0 (AntiXSS V4.0) is an encoding library designed to help developers protect their ASP.NET web-based applications from XSS attacks. It differs from most encoding libraries in that it uses the white-listing technique -- sometimes referred to as the principle of inclusions -- to provide protection against XSS attacks. This approach works by first defining a valid or allowable set of characters, and encodes anything outside this set (invalid characters or potential attacks). The white-listing approach provides several advantages over other encoding schemes. New features in this version of the Microsoft Anti-Cross Site Scripting Library include:

  • A customizable safe list for HTML and XML encoding
  • Performance improvements
  • Support for Medium Trust ASP.NET applications
  • HTML Named Entity Support
  • Invalid Unicode detection
  • Improved Surrogate Character Support for HTML and XML encoding
  • LDAP Encoding Improvements
  • application/x-www-form-urlencoded encoding support

It uses a whitelist approach to strip out potential XSS content.
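
For a sense of the API, here is a minimal usage sketch of mine (assuming the AntiXSS V4.x assembly is referenced; Sanitizer.GetSafeHtmlFragment is its whitelist-based HTML-fragment sanitizer):

using Microsoft.Security.Application; // assumption: AntiXSS V4.x assembly is referenced

public class AntiXssExample
{
    public static string CleanFragment(string untrustedHtml)
    {
        // GetSafeHtmlFragment keeps markup on the library's safe list
        // and strips the rest, including <script> elements and event handlers.
        return Sanitizer.GetSafeHtmlFragment(untrustedHtml);
    }
}

// Usage:
//   string safe = AntiXssExample.CleanFragment("<b>hi</b><script>alert('XSS')</script>");
//   // the <script> element is stripped from the result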

Here are some relevant links related to AntiXSS:

  • Anti-Cross Site Scripting Library
  • Microsoft Anti-Cross Site Scripting Library V4.2 (AntiXSS V4.2)
  • Microsoft Web Protection Library


Peter, I'd like to introduce you to two concepts in security:

Blacklisting - Disallow things you know are bad.

Whitelisting - Allow things you know are good.

While both have their uses, blacklisting is insecure by design.

What you are asking for is, in fact, blacklisting. If there is an alternative to <script> that you haven't anticipated (such as <img src="bad" onerror="hack()"/>), you won't be able to avoid the issue.

Whitelisting, on the other hand, allows you to specify the exact conditions you are allowing.

For example, you would have the following rules:

  • allow only these tags: b, i, u, img
  • allow only these attributes: src, href, style

That is just the theory. In practice, you must parse the HTML accordingly, hence the need for a proper HTML parser.
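
As a rough sketch of that idea in C#, using the HtmlAgilityPack parser (my choice of parser, not something this answer prescribes; the tag and attribute lists are just the example rules above, and a real policy would also validate src/href URLs):

using System;
using System.Collections.Generic;
using System.Linq;
using HtmlAgilityPack; // assumption: the HtmlAgilityPack package is referenced

public class WhitelistSanitizer
{
    // Example rules from above; a production policy would be stricter.
    private static readonly HashSet<string> AllowedTags =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "b", "i", "u", "img" };
    private static readonly HashSet<string> AllowedAttributes =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "src", "href", "style" };

    public static string Sanitize(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // ToList() lets us modify the tree while walking it.
        foreach (var node in doc.DocumentNode.Descendants().ToList())
        {
            if (node.NodeType != HtmlNodeType.Element)
                continue;

            if (!AllowedTags.Contains(node.Name))
            {
                // Remove the disallowed tag itself but keep its children
                // (the second argument re-attaches grandchildren).
                node.ParentNode.RemoveChild(node, true);
                continue;
            }

            // Strip every attribute that is not explicitly whitelisted.
            foreach (var attr in node.Attributes.ToList())
            {
                if (!AllowedAttributes.Contains(attr.Name))
                    attr.Remove();
            }
        }

        return doc.DocumentNode.OuterHtml;
    }
}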


If you want to allow some HTML but not all, you should use something like OWASP AntiSamy, which allows you to build a whitelist policy governing which tags and attributes you allow.

HTMLPurifier might also be an alternative.

It's of key importance that you use a whitelist approach: new attributes and events are added to HTML5 all the time, so any blacklist would fail within a short time, and knowing all the "bad" attributes is difficult anyway.

Edit: Oh, and regex is a bit hard to do here. HTML can take lots of different forms: tags can be unclosed, attributes can start with or without quotes (single or double), and you can have line breaks and all kinds of whitespace within the tags, to name a few issues. I would rely on a well-tested library like the ones I mentioned above.


Regular expressions are the wrong tool for the job; you need a real HTML parser, or things will turn bad. You need to parse the HTML string and then remove all elements and attributes except the allowed ones (a whitelist approach -- blacklists are inherently insecure). You can take the lists used by Mozilla as a starting point. There you also have a list of attributes that take URL values; you need to verify that these are either relative URLs or use an allowed protocol (typically only http:, https:, or ftp:, and in particular not javascript: or data:). Once you've removed everything that isn't allowed, you serialize your data back to HTML -- now you have something that is safe to insert on your web page.
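
For the URL check specifically, here is a minimal sketch of my own (assuming the attribute value has already been entity-decoded by the HTML parser):

using System;

public class UrlChecker
{
    // Assumption: only these schemes are considered safe for src/href values.
    private static readonly string[] AllowedSchemes = { "http", "https", "ftp" };

    public static bool IsSafeUrl(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            return false;

        // Absolute URL: accept only if the scheme is on the allow list.
        // This rejects javascript: and data: URIs.
        if (Uri.TryCreate(value, UriKind.Absolute, out Uri uri))
            return Array.Exists(AllowedSchemes,
                s => string.Equals(s, uri.Scheme, StringComparison.OrdinalIgnoreCase));

        // Otherwise accept it only if it parses as a relative URL.
        return Uri.TryCreate(value, UriKind.Relative, out _);
    }
}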


My approach is to break the tag format by rewriting it, like this:

using System.Text.RegularExpressions;

public class Utility
{
    public static string PreventXSS(string sInput)
    {
        if (sInput == null)
            return string.Empty;

        // Insert a space after every '<' so tags no longer parse as tags,
        // then collapse any whitespace after '<' into a single space.
        string sResult = Regex.Replace(sInput, "<", "< ");
        sResult = Regex.Replace(sResult, @"<\s*", "< ");
        return sResult;
    }
}

Usage before saving to the database:

    string sResultNoXSS = Utility.PreventXSS(varName);

I have tested it with input data like:

<script>alert('hello XSS')</script>

Without the filter, that would run in the browser. After I apply the anti-XSS method above, the code becomes:

< script>alert('hello XSS')< /script>

(Note the space after each <.)

As a result, the script won't run in the browser.
