microcosm-cc / bluemonday

bluemonday: a fast golang HTML sanitizer (inspired by the OWASP Java HTML Sanitizer) to scrub user generated content of XSS

Home Page: https://github.com/microcosm-cc/bluemonday

License: Other

Go 99.75% Makefile 0.25%
sanitization html security xss go owasp allowlist golang

bluemonday's Introduction

bluemonday

bluemonday is an HTML sanitizer implemented in Go. It is fast and highly configurable.

bluemonday takes untrusted user generated content as an input, and will return HTML that has been sanitised against an allowlist of approved HTML elements and attributes so that you can safely include the content in your web page.

If you accept user generated content, and your server uses Go, you need bluemonday.

The default policy for user generated content (bluemonday.UGCPolicy().Sanitize()) turns this:

Hello <STYLE>.XSS{background-image:url("javascript:alert('XSS')");}</STYLE><A CLASS=XSS></A>World

Into a harmless:

Hello World

And it turns this:

<a href="javascript:alert('XSS1')" onmouseover="alert('XSS2')">XSS<a>

Into this:

XSS

Whilst still allowing this:

<a href="http://www.google.com/">
  <img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>

To pass through mostly unaltered (it gained a rel="nofollow", which is a good thing for user generated content):

<a href="http://www.google.com/" rel="nofollow">
  <img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/>
</a>

It protects sites from XSS attacks. There are many vectors for an XSS attack and the best way to mitigate the risk is to sanitize user input against a known safe list of HTML elements and attributes.

You should always run bluemonday after any other processing.

If you use blackfriday or Pandoc then bluemonday should be run after these steps. This ensures that no insecure HTML is introduced later in your process.

bluemonday is heavily inspired by both the OWASP Java HTML Sanitizer and the HTML Purifier.

Technical Summary

Allowlist based: you need to either build a policy describing the HTML elements and attributes to permit (including regexp patterns for attribute values), or use one of the supplied policies representing good defaults.

The policy containing the allowlist is applied using a fast non-validating, forward-only, token-based parser from the golang.org/x/net/html package, maintained by the core Go team.

We expect to be supplied with well-formatted HTML (closing elements for every applicable open element, nested correctly), and so we do not focus on repairing badly nested or incomplete HTML. We focus on ensuring that whatever elements do exist are described in the policy allowlist, and that attributes and links are safe for use on your web page. GIGO applies: if you feed it bad HTML, bluemonday is not tasked with figuring out how to make it good again.

Supported Go Versions

bluemonday is tested on all versions since Go 1.2 including tip.

We do not support Go 1.0 as we depend on golang.org/x/net/html which includes a reference to io.ErrNoProgress which did not exist in Go 1.0.

We support Go 1.1 but Travis no longer tests against it.

Is it production ready?

Yes

We are using bluemonday in production having migrated from the widely used and heavily field tested OWASP Java HTML Sanitizer.

We are passing our extensive test suite (including AntiSamy tests as well as tests for any issues raised). Check for any unresolved issues to see whether anything may be a blocker for you.

We invite pull requests and issues to help us ensure we are offering comprehensive protection against various attacks via user generated content.

Usage

Install in your ${GOPATH} using go get -u github.com/microcosm-cc/bluemonday

Then call it:

package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	// Do this once for each unique policy, and use the policy for the life of the program
	// Policy creation/editing is not safe to use in multiple goroutines
	p := bluemonday.UGCPolicy()

	// The policy can then be used to sanitize lots of input and it is safe to use the policy in multiple goroutines
	html := p.Sanitize(
		`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
	)

	// Output:
	// <a href="http://www.google.com" rel="nofollow">Google</a>
	fmt.Println(html)
}

We offer three ways to call Sanitize:

p.Sanitize(string) string
p.SanitizeBytes([]byte) []byte
p.SanitizeReader(io.Reader) bytes.Buffer

If you are obsessed with performance, p.SanitizeReader(r).Bytes() will return a []byte without performing any unnecessary casting of the inputs or outputs. Though the difference is so negligible you should never need to care.

You can build your own policies:

package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.NewPolicy()

	// Require URLs to be parseable by net/url.Parse and either:
	//   mailto:, http:// or https://
	p.AllowStandardURLs()

	// We only allow <p> and <a href="">
	p.AllowAttrs("href").OnElements("a")
	p.AllowElements("p")

	html := p.Sanitize(
		`<a onblur="alert(secret)" href="http://www.google.com">Google</a>`,
	)

	// Output:
	// <a href="http://www.google.com">Google</a>
	fmt.Println(html)
}

We ship two default policies:

  1. bluemonday.StrictPolicy() which can be thought of as equivalent to stripping all HTML elements and their attributes as it has nothing on its allowlist. An example usage scenario would be blog post titles where HTML tags are not expected at all and if they are then the elements and the content of the elements should be stripped. This is a very strict policy.
  2. bluemonday.UGCPolicy() which allows a broad selection of HTML elements and attributes that are safe for user generated content. Note that this policy does not allow iframes, object, embed, styles, script, etc. An example usage scenario would be blog post bodies where a variety of formatting is expected along with the potential for TABLEs and IMGs.

Policy Building

The essence of building a policy is to determine which HTML elements and attributes are considered safe for your scenario. OWASP provide an XSS prevention cheat sheet to help explain the risks, but essentially:

  1. Avoid anything other than the standard HTML elements
  2. Avoid script, style, iframe, object, embed, base elements that allow code to be executed by the client or third party content to be included that can execute code
  3. Avoid anything other than plain HTML attributes with values matched to a regexp

Basically, you should be able to describe what HTML is fine for your scenario. If you do not have confidence that you can describe your policy please consider using one of the shipped policies such as bluemonday.UGCPolicy().

To create a new policy:

p := bluemonday.NewPolicy()

To add elements to a policy either add just the elements:

p.AllowElements("b", "strong")

Or using a regex:

Note: if an element is added by name as shown above, any matching regex will be ignored

It is also recommended to ensure multiple patterns don't overlap, as order of execution is not guaranteed and overlapping patterns can result in some rules being missed.

p.AllowElementsMatching(regexp.MustCompile(`^my-element-`))

Or add elements as a virtue of adding an attribute:

// Note the recommended pattern, see the recommendation on using .Matching() below
p.AllowAttrs("nowrap").OnElements("td", "th")

Again, this also supports a regex pattern match alternative:

p.AllowAttrs("nowrap").OnElementsMatching(regexp.MustCompile(`^my-element-`))

Attributes can either be added to all elements:

p.AllowAttrs("dir").Matching(regexp.MustCompile("(?i)rtl|ltr")).Globally()

Or attributes can be added to specific elements:

// Not the recommended pattern, see the recommendation on using .Matching() below
p.AllowAttrs("value").OnElements("li")

It is always recommended that an attribute be made to match a pattern. XSS in HTML attributes is very easy otherwise:

// \p{L} matches unicode letters, \p{N} matches unicode numbers
p.AllowAttrs("title").Matching(regexp.MustCompile(`[\p{L}\p{N}\s\-_',:\[\]!\./\\\(\)&]*`)).Globally()

You can stop at any time and call .Sanitize():

// string htmlIn passed in from a HTTP POST
htmlOut := p.Sanitize(htmlIn)

And you can take any existing policy and extend it:

p := bluemonday.UGCPolicy()
p.AllowElements("fieldset", "select", "option")

Inline CSS

Although it's possible to handle inline CSS using AllowAttrs with a Matching rule, writing a single monolithic regular expression to safely process all inline CSS which you wish to allow is not a trivial task. Instead of attempting to do so, you can allow the style attribute on whichever element(s) you desire and use style policies to control and sanitize inline styles.

It is strongly recommended that you use Matching (with a suitable regular expression), MatchingEnum, or MatchingHandler to ensure each style matches your needs, but default handlers are supplied for most widely used styles.

Similar to attributes, you can allow specific CSS properties to be set inline:

p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'color' property with valid RGB(A) hex values only (on any element allowed a 'style' attribute)
p.AllowStyles("color").Matching(regexp.MustCompile("(?i)^#([0-9a-f]{3,4}|[0-9a-f]{6}|[0-9a-f]{8})$")).Globally()

Additionally, you can allow a CSS property to be set only to an allowed value:

p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'text-decoration' property to be set to 'underline', 'line-through' or 'none'
// on 'span' elements only
p.AllowStyles("text-decoration").MatchingEnum("underline", "line-through", "none").OnElements("span")

Or you can specify elements based on a regex pattern match:

p.AllowAttrs("style").OnElementsMatching(regexp.MustCompile(`^my-element-`))
// Allow the 'text-decoration' property to be set to 'underline', 'line-through' or 'none'
// on elements matching the pattern only
p.AllowStyles("text-decoration").MatchingEnum("underline", "line-through", "none").OnElementsMatching(regexp.MustCompile(`^my-element-`))

If you need more specific checking, you can create a handler that takes in a string and returns a bool to validate the values for a given property. The string parameter has been converted to lowercase and unicode code points have been converted.

myHandler := func(value string) bool {
	// Validate your input here
	return true
}
p.AllowAttrs("style").OnElements("span", "p")
// Allow the 'color' property with values validated by the handler (on any element allowed a 'style' attribute)
p.AllowStyles("color").MatchingHandler(myHandler).Globally()

Links

Links are difficult beasts to sanitise safely and also one of the biggest attack vectors for malicious content.

It is possible to do this:

p.AllowAttrs("href").Matching(regexp.MustCompile(`(?i)mailto|https?`)).OnElements("a")

But that will not protect you: the regular expression is insufficient to prevent a malformed value from doing something unexpected.

We provide some additional global options for safely working with links.

RequireParseableURLs will ensure that URLs are parseable by Go's net/url package:

p.RequireParseableURLs(true)

If you have enabled parseable URLs then the following option will allow relative URLs. By default this is disabled (bluemonday is an allowlist tool: you need to explicitly tell us to permit things), and when disabled it will prevent all local and scheme-relative URLs (i.e. href="localpage.html", href="../home.html" and even href="//www.google.com" are relative):

p.AllowRelativeURLs(true)

If you have enabled parseable URLs then you can allow the schemes (commonly called protocol when thinking of http and https) that are permitted. Bear in mind that allowing relative URLs in the above option will allow for a blank scheme:

p.AllowURLSchemes("mailto", "http", "https")

Regardless of whether you have enabled parseable URLs, you can force all URLs to have a rel="nofollow" attribute. This will be added if it does not exist, but only when the href is valid:

// This applies to "a" "area" "link" elements that have a "href" attribute
p.RequireNoFollowOnLinks(true)

Similarly, you can force all URLs to have "noreferrer" in their rel attribute.

// This applies to "a" "area" "link" elements that have a "href" attribute
p.RequireNoReferrerOnLinks(true)

We provide a convenience method that applies all of the above, but you will still need to allow the linkable elements for the URL rules to be applied to:

p.AllowStandardURLs()
p.AllowAttrs("cite").OnElements("blockquote", "q")
p.AllowAttrs("href").OnElements("a", "area")
p.AllowAttrs("src").OnElements("img")

An additional complexity regarding links is the data URI as defined in RFC 2397. The data URI scheme allows images to be served inline in this format:

<img src="data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAAAAAAfQ//73v/+BiOh/AAA=">

We provide a helper that verifies the mimetype and base64-encoded content of data URI links:

p.AllowDataURIImages()

That helper will enable GIF, JPEG, PNG and WEBP images.

It should be noted that there is a potential security risk with the use of data URI links. You should only enable data URI links if you already trust the content.

We also have some features to help deal with user generated content:

p.AddTargetBlankToFullyQualifiedLinks(true)

This will ensure that anchor <a href="" /> links that are fully qualified (the href destination includes a host name) will get target="_blank" added to them.

Additionally, any link that has target="_blank" after the policy has been applied will also have its rel attribute adjusted to add noopener. This means a link may start as <a href="//host/path"/> and end up as <a href="//host/path" rel="noopener" target="_blank">. The addition of noopener is a security feature, not an issue: browsers allow a window opened via target="_blank" to control the opener (your web page), and rel="noopener" protects against that. The background to this can be found here: https://dev.to/ben/the-targetblank-vulnerability-by-example

Policy Building Helpers

We also bundle some helpers to simplify policy building:

// Permits the "dir", "id", "lang", "title" attributes globally
p.AllowStandardAttributes()

// Permits the "img" element and its standard attributes
p.AllowImages()

// Permits ordered and unordered lists, and also definition lists
p.AllowLists()

// Permits HTML tables and all applicable elements and non-styling attributes
p.AllowTables()

Invalid Instructions

The following are invalid:

// This does not say where the attributes are allowed, you need to add
// .Globally() or .OnElements(...)
// This will be ignored without error.
p.AllowAttrs("value")

// This does not say where the attributes are allowed, you need to add
// .Globally() or .OnElements(...)
// This will be ignored without error.
p.AllowAttrs(
	"type",
).Matching(
	regexp.MustCompile("(?i)^(circle|disc|square|a|A|i|I|1)$"),
)

Both examples exhibit the same issue, they declare attributes but do not then specify whether they are allowed globally or only on specific elements (and which elements). Attributes belong to one or more elements, and the policy needs to declare this.

Limitations

Beyond the style policies for inline style attributes described above, we do not include tools to sanitize CSS in <style> elements or external stylesheets. Unless you build a style policy for each property you need (or wish to do the heavy lifting in a single regular expression, which is inadvisable), you should not allow the "style" attribute anywhere.

In the same theme, both <script> and <style> are considered harmful. These elements (and their content) will not be rendered by default, and allowing them requires you to explicitly set p.AllowUnsafe(true). Be aware that allowing these elements defeats the purpose of using an HTML sanitizer: you would be explicitly allowing either JavaScript (and any plainly written XSS) or CSS (which can modify the DOM to insert JS). Additionally, limitations in this library mean it is not aware of whether the HTML is validly structured, which can allow these elements to bypass some of the safety mechanisms built into the WHATWG HTML parser standard.

It is not the job of bluemonday to fix your bad HTML, it is merely the job of bluemonday to prevent malicious HTML getting through. If you have mismatched HTML elements, or non-conforming nesting of elements, those will remain. But if you have well-structured HTML bluemonday will not break it.

TODO

  • Investigate whether devs want to blacklist elements and attributes. This would allow devs to take an existing policy (such as the bluemonday.UGCPolicy() ) that encapsulates 90% of what they're looking for but does more than they need, and to remove the extra things they do not want to make it 100% what they want
  • Investigate whether devs want a validating HTML mode, in which the HTML elements are not just transformed into a balanced tree (every start tag has a closing tag at the correct depth) but also that elements and character data appear only in their allowed context (i.e. that a table element isn't a descendent of a caption, that colgroup, thead, tbody, tfoot and tr are permitted, and that character data is not permitted)

Development

If you have cloned this repo you will probably need the dependency:

go get golang.org/x/net/html

Gophers can use their familiar tools:

go build

go test

I personally use a Makefile as it spares typing the same args over and over, whilst providing consistency for those of us who jump from language to language and enjoy just typing make in a project directory and watching magic happen.

make will build, vet, test and install the library.

make clean will remove the library from a single ${GOPATH}/pkg directory tree

make test will run the tests

make cover will run the tests and open a browser window with the coverage report

make lint will run golint (install via go get github.com/golang/lint/golint)

Long term goals

  1. Open the code to adversarial peer review similar to the Attack Review Ground Rules
  2. Raise funds and pay for an external security review

bluemonday's People

Contributors

6543, arp242, bigshika, buddhamagnet, buro9, dependabot[bot], dmitshur, fewstera, gufran, gusted, hochhaus, jgrahamc, kiwiz, kn4ck3r, lunny, mungrel, pauln, platinummonkey, riking, samwhited, sergeyfedotov, stevengutzwiller, twpayne, yardenshoham, yyewolf, zeripath


bluemonday's Issues

Allow `RequireParsableURL` method to be applied selectively on tags.

Hi, currently it looks like RequireParsableURL cannot be applied to tags selectively.

i.e. something like -> policy.AllowParsableURL(true).OnElements(<tag>).

I want to allow any url in src attr on img tag and nowhere else. Please update if you think this is a feature worth having.

Thanks.

Some AllowAttrs.Matching regexp are not anchored

It looks like this was already fixed in #3. Here are some examples, but my point is, all regexps should be reviewed and anchored unless there is a really good reason to not do this:

func (p *Policy) AllowStandardAttributes() {
	p.AllowAttrs(
		"lang",
	).Matching(regexp.MustCompile(`[a-zA-Z]{2,20}`)).Globally()
	p.AllowAttrs("id").Matching(
		regexp.MustCompile(`[a-zA-Z0-9\:\-_\.]+`),
	).Globally()

func (p *Policy) AllowTables() {
	p.AllowAttrs(
		"scope",
	).Matching(
		regexp.MustCompile(`(?i)(?:row|col)(?:group)?`),
	).OnElements("td", "th")
	p.AllowAttrs("nowrap").Matching(
		regexp.MustCompile(`(?i)|nowrap`),
	).OnElements("td", "th")

Also it's probably good idea to fix examples in README in same way.

Use of strings.ToLower() incorrectly escapes chars and allows for insertion of scripts

Reported by email:

package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func TestEncoding() {
	p := bluemonday.NewPolicy()

	original := "<scr\u0130pt>&lt;script>alert(/XSS/)&lt;/script>"
	html := p.Sanitize(original)

	// Output:
	// Original: <scrİpt>&lt;script>alert(/XSS/)&lt;/script>
	// Sanitized: <script>alert(/XSS/)</script>
	fmt.Printf("Original: %s\nSanitized: %s\n",
		original, html)
}

func main() {
	TestEncoding()
}

Note that this is more severe than even the original reporter realised as this works on the NewPolicy which is a blank policy.

An explanation was provided:

This one occurs because you don't escape script/style tag contents:
https://github.com/microcosm-cc/bluemonday/blob/master/sanitize.go#L219-L228
The trick is using the symbol \u0130 (İ), which strings.ToLower converts
to i; how to find it: https://play.golang.org/p/jDMRCSNigR7

Investigation reveals that strings.ToLower() was not even required, and could be omitted which results in the expected (safe) behaviour.

A change is coming in a moment.

Credit to Yandex and @buglloc for reporting this.

Turn disallowed tags into html entities

It would be useful to have the option to turn stripped tags into entities instead of completely removing them. So

<script></script>

would be turned into

&lt;script&gt;&lt;/script&gt;

Was Sanitized Flag

Is there a way to check if the input was sanitized? Maybe something like the code below?

// . . .
sanitized := p.Sanitize(unSanitized)
if p.WasSanitized {
    fmt.Println("Needed to be Sanitized")
}
// . . .

If there isn't a way to do this, could someone add it? Or would you recommend comparing the unsanitized input to the sanitized input?

Thanks! :)

Force html attribute to specific values

It would be useful to be able to forcibly add an attribute to elements. You have similar functionality with the rel="nofollow" on links. Something like:

Policy.AllowAttrs("rel").Force("nofollow").OnElements("a")

or:

Policy.ForceAttrs("rel").Value("nofollow").OnElements("a")

apostrophes get turned into HTML entities - take 2

I realize this was asked before, but I would like to renew the discussion for my use case.

I am getting data that a user enters into an HTML form; that data will eventually be saved in a database and redisplayed later. Obviously such data needs to be sanitized, so I turned to bluemonday. If a user enters an apostrophe, it comes through to my application in the form data as an apostrophe, so Go is not converting it. When I run it through bluemonday, it gets converted to an HTML entity.

bluemonday states that it is designed for sanitizing html that will be displayed as html, so I understand why this is correct behavior from bluemonday's perspective.

Soo..., I am asking for a new sanitizer policy that allows bluemonday to be used as a basic text sanitizer for my scenario, where I am not trying to have the end result be html, but still I want the goal that XSS attacks are cleaned out. I imagine it could be a simple process of running the result through UnescapeString, which I will do for now, but you guys are the experts and might have other thoughts as to why this may or may not be adequate.

Feature request: blacklist elements from policy

Hi there,

Forgive me if this feels like an inappropriate way to cast a vote, and feel free to close if so!

I noticed in the TODO that the Blacklist concept is mentioned, and that you were interested in investigating whether this would be useful to devs. So I wanted to cast a vote in favor of this request. I'd love to be able to use the UGCPolicy and then simply take out the parts that aren't applicable to my use case, such as UGCPolicy without Blockquotes.

Thanks for your work!

An undocumented difference between the three Sanitize* funcs.

There are now 3 funcs to perform sanitization.

Their signatures and documentation suggest they differ only in the types for input/output.

However, func (p *Policy) Sanitize(s string) string differs from the other two in behavior:

$ goe 'bluemonday.UGCPolicy().Sanitize("Hi.\n")'
(string)("Hi.")
$ goe 'string(bluemonday.UGCPolicy().SanitizeBytes([]byte("Hi.\n")))'
(string)("Hi.\n")
$ goe 'bluemonday.UGCPolicy().SanitizeReader(strings.NewReader("Hi.\n")).String()'
(string)("Hi.\n")

This is because Sanitize performs s = strings.TrimSpace(s), while the other two don't do the equivalent action.

Is this difference intended? I am guessing it is (and I like the current behavior, it seems appropriate). But then the documentation should reflect it.

Add functionality to set rel="noreferrer" on a,area,link

Right now we have the capability to set rel="nofollow", and we add rel="noopener" if target="_blank". Another potential useful piece of functionality would be to add rel="noreferrer" if the user makes that part of the policy.

I would be willing to write the functionality. I was thinking of the API consisting of two functions:
(p *Policy) RequireNoReferrerOnLinks() *policy
(p *Policy) RequireNoReferrerOnFullyQualifiedLinks() *policy

Is Sanitize* safe to use by multiple goroutines

At a glance it looks like there is no reason to create a new policy per sanitized value (well, per goroutine, but in practice this often means per value); after initial policy setup it should be safe to use from multiple goroutines. But the doc doesn't mention this, which implies the default meaning: not safe. So, if it's safe, please document this.

AllowElements iframe doesn't work

Hey

I've tried to add iframe to the whitelist, but it still sanitizes iframes.

Example code:

package main

import (
	"github.com/microcosm-cc/bluemonday"
	"fmt"
)

func main() {
	raw := "<iframe></iframe>"
	p := bluemonday.NewPolicy()
	p.AllowElements("iframe")
	 res := p.Sanitize(raw)
	 if res != raw {
	 	fmt.Printf("got: %s\n", res)
	 } else {
	 	fmt.Println("happy, happy, joy, joy!")
	}
}

Any help will be awesome.

Thanks,
Jonathan

Closing anchor and font tags mixed up

With the following sample program, in the output, the closing anchor tag is missing and in its place is an erroneous closing font tag:

package main

import (
    "fmt"

    "github.com/microcosm-cc/bluemonday"
)

func main() {
    in := `<font face="Arial">No link here. <a href="http://link.com">link here</a>.</font> Should not be linked here.`
    p := bluemonday.UGCPolicy()
    p.AllowAttrs("color").OnElements("font")
    fmt.Printf("'%s'\n", p.Sanitize(in))
}

Am I doing something wrong, or is this a bug?

`make test` fails on master

I just checked out master and ran make test and it fails.

$ make test
...
=== RUN   TestXSS
--- FAIL: TestXSS (0.00s)
	sanitize_test.go:1138: test 74 failed;
		input   : <IMG SRC=`javascript:alert("RSnake says, 'XSS'")`>
		output  : 
		expected: <img src="%60javascript:alert%28%22RSnake">
	sanitize_test.go:1138: test 61 failed;
		input   : <IMG SRC="jav&#x0D;ascript:alert('XSS');">
		output  : 
		expected: <img src="jav%0Dascript:alert%28%27XSS%27%29;">
...

make lint also fails:

$ make lint
example_test.go is in package bluemonday_test, not bluemonday

I'm running go1.8.3.

$ go version
go version go1.8.3 darwin/amd64

Url with ascii char encoded to hex

Hi,

thank you for this awesome package. It helped me a lot on my project.
There is just one issue that I try to solve. When sanitizing the following string:
http://my-server.com/index.php?name=<script>window.onload = function() {var link=document.getElementsByTagName("a");link[0].href="http://not-real-xssattackexamples.com/";}</script>

it results in:

http://my-server.com/index.php?name=

which is fine for me.

But, when the evil part of the URL is encoded in hex like:
http://your-server/index.php?name=%3c%73%63%72%69%70%74%3e%77%69%6e%64%6f%77%2e%6f%6e%6c%6f%61%64%20%3d%20%66%75%6e%63%74%69%6f%6e%28%29%20%7b%76%61%72%20%6c%69%6e%6b%3d%64%6f%63%75%6d%65%6e%74%2e%67%65%74%45%6c%65%6d%65%6e%74%73%42%79%54%61%67%4e%61%6d%65%28%22%61%22%29%3b%6c%69%6e%6b%5b%30%5d%2e%68%72%65%66%3d%22%68%74%74%70%3a%2f%2f%61%74%74%61%63%6b%65%72%2d%73%69%74%65%2e%63%6f%6d%2f%22%3b%7d%3c%2f%73%63%72%69%70%74%3e

the sanitizer doesn't work.

Am I missing something in the way I use the sanitizer? Is there a way to detect such a situation and sanitize the string?

Thank you.

Inline Images get stripped

Using this content:

<img src="data:image/gif;base64,R0lGODdhEAAQAMwAAPj7+FmhUYjNfGuxYY
    DJdYTIeanOpT+DOTuANXi/bGOrWj6CONzv2sPjv2CmV1unU4zPgISg6DJnJ3ImTh8Mtbs00aNP1CZSGy0YqLEn47RgXW8amasW
    7XWsmmvX2iuXiwAAAAAEAAQAAAFVyAgjmRpnihqGCkpDQPbGkNUOFk6DZqgHCNGg2T4QAQBoIiRSAwBE4VA4FACKgkB5NGReAS
    FZEmxsQ0whPDi9BiACYQAInXhwOUtgCUQoORFCGt/g4QAIQA7">

and this policy:

	// Define a policy, we are using the UGC policy as a base.
	p := bluemonday.UGCPolicy()

	// Allow images to be embedded via data-uri
	p.AllowDataURIImages()

I get an empty string back... what's wrong with my policy?

cheers max

css sanitization in style attributes

Per the bluemonday docs:

We are not yet including any tools to help whitelist and sanitize CSS. Which means that unless you wish to do the heavy lifting in a single regular expression (inadvisable), you should not allow the "style" attribute anywhere.

We use bluemonday and would like allow a limited subset of CSS (essentially limit to a handful of property names) to be allowed within the style attribute.

I can dedicate a few cycles to enhancing bluemonday with this functionality, but per the contributing guidelines, it asks that I create an issue first.

Since we have the issue, I figure it's worth proposing an API and an approach.

api

For the API, I'd propose something similar to allowAttrs(), for example allowStyles('font-size', 'text-align'). We have no need for tag-specific styles, so I'd propose foregoing the complexity of the builder initially and just allowing the styles globally. This would cause an API change down the line though, should such functionality be necessary.

approach

gorilla's css tokenizer is a useful starting point - https://github.com/gorilla/css and it is a reputable and community accepted source

unfortunately, you really need a parser built on top of the tokenizer to do this work. building a simple one is fairly trivial, but only for this use case (declarations in style attributes). If you want to sanitize inline css in <style> tags, or external css, then the task becomes much more difficult.

https://github.com/aymerick/douceur has a parser built on top of gorilla's tokenizer that purports to accomplish this, but it's unclear how well supported it is or how high the quality is… one issue points out an infinite loop

fortunately, it has an acceptable implementation of declaration parsing for our use case: https://github.com/aymerick/douceur/blob/master/parser/parser.go#L163-L201

given that, it seems like it's worth using for now. if the scope of css sanitization increases, then it may be worth re-evaluating options on whether starting fresh, forking, or contributing back upstream are better alternatives

SkipElementsContent nested elements bug

Hi,

p := bluemonday.NewPolicy()
p.SkipElementsContent("tag1", "tag2")
res := p.Sanitize(`<tag1>cut<tag2></tag2>harm</tag1><tag1>123</tag1><tag2>234</tag2>`)
fmt.Println(res)

The output is: harm
But the result should be an empty string.
I will submit a pull request for this issue soon.

My question is:

p := bluemonday.NewPolicy()
p.SkipElementsContent("tag")
p.AllowElements("p")
res := p.Sanitize(`<tag>234<p>asd</p></tag>`)
fmt.Println(res)

The output is: <p></p>.
Is it correct to skip the nested tag's content but keep the tag itself? Should the result be an empty string, or <p>asd</p>?

Add support for dataset attributes

Hey :)

In my use of the library I need to support arbitrary data attributes. However, I do not know which data attributes will be sent, so I cannot whitelist them.

Can you please add support for arbitrary dataset attributes, which all start with "data-"?
For a reference on data attributes you can refer to MDN.
They have no meaning for the browser; they only pass data to certain elements.

Love to hear your thoughts

Thanks,
Jonathan

skip nested tags by attrs bug

Example:

p := bluemonday.NewPolicy()
p.AllowElements("tag1", "tag2")
res := p.Sanitize("<tag1><tag2>abc</tag2></tag1><p>test</p>")
fmt.Println(res)

Output:
abc</tag1>test
Correct output:
abctest

rel="noopener" should be added if target="_blank" is on a link

Background:
https://dev.to/ben/the-targetblank-vulnerability-by-example
https://lists.w3.org/Archives/Public/public-whatwg-archive/2015Jan/0002.html

If rel="noopener" is not present on a link that has target="_blank" then we should add it.

We already have an option to force adding target="_blank" via AddTargetBlankToFullyQualifiedLinks, and we should extend that option to also add rel="noopener" to mitigate the risk of this attack.

fatal error: concurrent map writes

We use bluemonday in our Go service, and under high load and traffic we are noticing the error below:

fatal error: concurrent map writes

goroutine 239204 [running]:
runtime.throw(0xbea50a, 0x15)
#011/usr/lib/go-1.9/src/runtime/panic.go:605 +0x95 fp=0xc42157a508 sp=0xc42157a4e8 pc=0x42bd85
runtime.mapassign_faststr(0xaf4580, 0xc4201c3380, 0xbdc0da, 0x5, 0xc42025e168)
#011/usr/lib/go-1.9/src/runtime/hashmap_fast.go:861 +0x4da fp=0xc42157a588 sp=0xc42157a508 pc=0x40d41a
git.corp.adobe.com/service/vendor/github.com/microcosm-cc/bluemonday.(*attrPolicyBuilder).OnElements(0xc420321e60, 0xc42157a650, 0x1, 0x1, 0xc420321e60)
#011/go/src/git.corp.adobe.com/service/vendor/github.com/microcosm-cc/bluemonday/policy.go:176 +0x139 fp=0xc42157a620 sp=0xc42157a588 pc=0x9be669
git.corp.adobe.com/service/jobs.sanitizeComment(0xc4203edec0, 0xc, 0x32, 0xc421041e80)

README says supported versions 1.1–1.9, but .travis.yml tests 1.1–1.11

Compare:

bluemonday/README.md

Lines 59 to 63 in 506f3da

### Supported Go Versions
bluemonday is tested against Go 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, and tip.
We do not support Go 1.0 as we depend on `golang.org/x/net/html` which includes a reference to `io.ErrNoProgress` which did not exist in Go 1.0.

bluemonday/.travis.yml

Lines 3 to 13 in 506f3da

- 1.1.x
- 1.2.x
- 1.3.x
- 1.4.x
- 1.5.x
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x

Also, Go 1.12 is out now. Is it supported?

Add OmitSkipElements method

Recently, I needed to create a policy in my project that should allow iframe, embed and script, but I could not get the desired results. I suspect the reason is this.

Can you suggest any other workaround for this? Or we could add a function like OmitSkipElements, which removes the given elements (like 'iframe'), if present, from the setOfElementsToSkipContent map.

How to allow emojis?

Using the sanitization rule below converts emojis like 😀 to ?:

func StripHtml(s string) string {
	p := bluemonday.UGCPolicy()
	p.AllowAttrs("class").OnElements("img")
	return p.Sanitize(s)
}

I'm wondering how I can allow emojis to be saved without lowering the sanitization bar too much.

p.AllowDocType() opens a vector for inserting unsanitized HTML

Reported via email:

package main

import (
	"fmt"

	"github.com/microcosm-cc/bluemonday"
)

func TestDoctype() {
	p := bluemonday.UGCPolicy()
	p.AllowDocType(true)

	original := `<!DOCTYPE html="&#34;&#62;<img src=x onerror=&#34;alert(/XSS/)">`
	html := p.Sanitize(original)

	// Output:
	// Original: <!DOCTYPE html="&#34;&#62;<img src=x onerror=&#34;alert(/XSS/)">
	// Sanitized: <!DOCTYPE html=""><img src=x onerror="alert(/XSS/)">
	fmt.Printf("Original: %s\nSanitized: %s\n",
		original, html)
}

func main() {
	TestDoctype()
}

Allow all body/head/title and only do xss removal

What's the easiest way to allow all default/basic HTML tags? In particular, body/head/title are always removed, even if I allow them:

    p.AllowElements("html", "head", "title")

I'm looking for a quick & dirty XSS remover.

tag with semver for vgo

Hi,

Maybe it's time to tag a release to make the project friendly with vgo and to make our go.mod more readable?

Logo proposal

Hey @buro9. I designed a logo for you. I put the initials of the project together and chose a blue color. WDYT? Please tell me your thoughts.

bm

Custom handlers

What about adding custom handlers for HTML elements? Without them, the user is forced to parse the HTML a second time. Custom handlers would also allow more complicated, content-dependent sanitizing.

For example, emails: emails are stored in their original form but displayed sanitized, and to display embedded images the content-id ("cid:url") needs to be replaced with a real URL.

p.SetCustomElementHandler(
    func(token html.Token) bluemonday.HandlerResult {
        for i := range token.Attr {
            // possible image locations
            if token.Attr[i].Key == "src" || token.Attr[i].Key == "background" {
                cid := token.Attr[i].Val // get content-id
                url := GetUrlFromCid(cid)
                token.Attr[i].Val = url
            }
        }

        return bluemonday.HandlerResult{
            Token:         token,
            SkipContent:   false,
            SkipTag:       false,
            DoNotSanitize: false,
        }
    },
)

Or blog posts: an editor can produce HTML with special values in custom attributes (e.g. x-my-attribute), and while saving to the database the backend can process these attributes during sanitizing, without needing to parse the HTML again, which is a speedup.

I suggest this syntax for custom handlers: the handler receives an html.Token and returns a struct with the modified token and flags:

  • SkipContent - skip the content of the current HTML element only
  • SkipTag - skip the closing tag of the current HTML element only
  • DoNotSanitize - do not apply sanitizing rules to this HTML element only

These flags allow sanitizing with more complex rules than a regexp can handle; for example, Go's regexp does not support lookahead.

AllowElements without attributes

AllowElements doesn't work with elements that don't have any attributes (such as <big>) if those elements are not listed in policy.go.

p := bluemonday.Policy{}
p.AllowElements("big")
output := p.Sanitize("<big>test</big>")

output is "test"

It is not stated that bluemonday is only an HTML5 sanitizer, and it would be good to allow users to customize their policies further.

Suggestion: Insert white space when stripping tags

I'm using StrictPolicy() to strip tags from text in order to feed MongoDB full-text search. The text content of adjacent elements may be visually separated by HTML rendering even though there is no whitespace in the text, so stripping the tags merges words, potentially altering search results. Here's an example:

package main

import (
    "fmt"
    "github.com/microcosm-cc/bluemonday"
)

func main() {
    userInput := "<p>Why oh why</p><p>she swallowed a fly</p>"
    searchableText := bluemonday.StrictPolicy().Sanitize(userInput)

    fmt.Println(searchableText) // Why oh whyshe swallowed a fly
}

I can easily solve this in my own code, e.g. by inserting a space before or after every block-level html element before stripping the tags.

I wondered whether this would be a generally useful feature. A general case might need configuration given that even adjacent inline elements can be visually separated through CSS.

Request: DisallowElements function

I'd love to be able to do something like:

p := bluemonday.UGCPolicy().DisallowElements("h1", "h2")

Would this be a plausible PR, and if so would you be interested in it? I completely get it if you don't want to support it so asking here first.

Feature Request: Ability to filter URLs on a finer grained level.

Suppose I would like to allow using data URI scheme for image urls. For example:

<img src="data:image/png;base64,iVBORw0KGgoAAAANS...K5CYII=">

Currently, I can achieve that by doing:

p := bluemonday.UGCPolicy()
p.AllowURLSchemes("data")

However, that will allow all kinds of things, including "data:text/javascript;charset=utf-8,alert('hi');" or other unexpected values.

What I'd like to do is be able to filter on a finer level, similarly to what's possible with attributes and elements.

For example, I would imagine an API something like this:

p.RequireBase64().AllowMimeTypes("image/png", "image/jpeg").OnURLSchemes("data")

And it would make sure to filter out anything that is not one of those two mime types, is not base64, or contains a charset, and to check that the payload is valid base64 encoding (i.e., it doesn't contain other characters and has no query and no fragment).

What are your thoughts on this proposal?

Stripping non-tags

bluemonday.StrictPolicy().Sanitize("a<b")

returns "a".

Is there a reason it's not looking for an actual tag, or is this a mistake?

Feature Request: Ability for external packages to perform custom URL filtering.

I would like to consider/try out using data URI scheme for image urls for my own Markdown files (not user generated content). After the discussion in #5, I understand it's not appropriate to increase the API size of this package significantly trying to add support for data URIs, as it's not something people would want to use when sanitizing user generated content and it's not possible for bluemonday to do a sufficient job of sanitizing it anyway.

However, in the spirit of keeping this package highly configurable and usable for custom needs, I propose a very simple single API addition that would allow external packages to bring in their own validation logic for URLs into bluemonday.

// Given a correctly parsed URL, this func should return true if the URL is to be allowed,
// and return false if this URL should not be allowed.
type UrlFilter func(*url.URL) bool

func (p *Policy) AllowURLSchemeWithCustomFiltering(scheme string, urlFilter UrlFilter)  *Policy

(Rough draft of the API, naming could use improvement and I'm very open to other changes.)

It would imply/automatically set RequireParseableURLs to true.

Next, before being allowed, each URL would be passed through the filter func, which can return true if the URL is to be allowed, and false if it should be filtered out.

That will allow me to use my own logic to decide which URLs I want to keep (implemented outside of this package), and not bloat the API of bluemonday, allowing it to remain secure and high quality.

Links are stripped when they shouldn't

package main

import (
	"fmt"
	"github.com/microcosm-cc/bluemonday"
)

func main() {
	policy := bluemonday.NewPolicy()
	policy.AllowElements("div", "a")
	fmt.Println(policy.Sanitize(`<div><a href="/">link</a></div>`))
}

Output: <div>link</div>

I expect it to output <div><a>link</a></div>

Prevent escaping special characters

Hello,

We have this snippet:

package main

import (
        "fmt"

        "github.com/microcosm-cc/bluemonday"
)

func main() {
        p := bluemonday.UGCPolicy()
        html := p.Sanitize(
                `"Hello world!"  <script>alert(document.cookie)</script>`,
        )

        // Output:
        // &#34;Hello world!&#34;
        fmt.Println(html)
}

Which produces the following output:

&#34;Hello world!&#34;  

We'd like to prevent bluemonday from escaping special characters like ". Is there any way we can tell bluemonday not to escape special characters by default?

"javascript" in sanitize.go.

Hi, I was using the bluemonday library and noticed that in the sanitize method you expect the tag name to be "javascript" and not "script". I was unable to understand why that is.

For my case I thought of changing it to "script", but after I do that I get some failing test cases.

--- FAIL: TestAntiSamy (0.00s)
    sanitize_test.go:769: test 59 failed;
        input   : <SCRIPT>document.write("<SCRI");</SCRIPT>PT SRC="http://ha.ckers.org/xss.js"></SCRIPT>
        output  : PT SRC="http://ha.ckers.org/xss.js">
        expected: PT SRC=&#34;http://ha.ckers.org/xss.js&#34;&gt;
=== RUN   TestXSS
--- FAIL: TestXSS (0.00s)
    sanitize_test.go:1138: test 2 failed;
        input   : <SCRIPT>document.write("<SCRI");</SCRIPT>PT SRC="http://ha.ckers.org/xss.js"></SCRIPT>
        output  : PT SRC="http://ha.ckers.org/xss.js">
        expected: PT SRC=&#34;http://ha.ckers.org/xss.js&#34;&gt;
    sanitize_test.go:1138: test 73 failed;
        input   : <IMG """><SCRIPT>alert("XSS")</SCRIPT>">
        output  : ">
        expected: &#34;&gt;

This should not happen as far as I understand. Can you help me understand this behaviour and, if required, investigate it further?

I apologise if it's something obvious that I have missed here.

URLs with multiple query parameters escape the `&` delimiter incorrectly

For example, the following test case exists, which tests one query parameter:

{
    in:       `<a href="?q=1">`,
    expected: `<a href="?q=1" rel="nofollow">`,
},

Adding the following test case produces a failure with more than one query parameter:

{
    in:       `<a href="?q=1&r=2">`,
    expected: `<a href="?q=1&r=2" rel="nofollow">`,
},

The result would be <a href="?q=1&amp;r=2" rel="nofollow">, which breaks the query string.

Attribute Filter/Transform Callbacks

It would be helpful to have a hook to allow custom attribute filtering. I propose something much simpler than #24 that would integrate with the existing builder syntax:

// AttrTransform is a user provided function to manipulate the value
// of an attribute
type AttrTransform func(v string) string

func main() {
  tf := func(v string) string {
    return strings.ToUpper(v)
  }

  p := bluemonday.NewPolicy()
  p.TransformAttrs(tf, "style").OnElements("div")
}

This would provide a convenient way to extend bluemonday without requiring the user to parse the HTML before or after calling Sanitize().

My goal is to implement CSS style attribute filtering for my project without forking bluemonday.

Matching() does not do what's expected.

Hi @buro9,

I think there's a bug in bluemonday. According to your comment,

So class="foo bar bash" is fine and would be allowed, but class="javascript:alert('XSS')" would fail and be stripped, and class="><script src='http://hackers.org/XSS.js'></script>" would also fail and be stripped.

I've added tests for that and noticed they failed.

I tried using bluemonday directly, and saw behavior counter to what you suggested. All three class="foo bar bash", class="javascript:alert(123)", and class="><script src='http://hackers.org/XSS.js'></script>" were passed through, without failing and being stripped.

$ goe --quiet 'p:=bluemonday.UGCPolicy(); p.AllowAttrs("class").Matching(bluemonday.SpaceSeparatedTokens).OnElements("span"); println(p.Sanitize(`Hello <span class="foo bar bash">there</span> world.`))'
Hello <span class="foo bar bash">there</span> world.

$ goe --quiet 'p:=bluemonday.UGCPolicy(); p.AllowAttrs("class").Matching(bluemonday.SpaceSeparatedTokens).OnElements("span"); println(p.Sanitize(`Hello <span class="javascript:alert(123)">there</span> world.`))'
Hello <span class="javascript:alert(123)">there</span> world.

$ goe --quiet 'p:=bluemonday.UGCPolicy(); p.AllowAttrs("class").Matching(bluemonday.SpaceSeparatedTokens).OnElements("span"); println(p.Sanitize(`Hello <span class="><script src='"'http://hackers.org/XSS.js'"'></script>">there</span> world.`))'
Hello <span class="&gt;&lt;script src=&#39;http://hackers.org/XSS.js&#39;&gt;&lt;/script&gt;">there</span> world.

Looking at the code, the reason is clear.

SpaceSeparatedTokens = regexp.MustCompile(`[\s\p{L}\p{N}_-]+`)

if ap.regexp.MatchString(htmlAttr.Val) {
    cleanAttrs = append(cleanAttrs, htmlAttr)
}

The regex doesn't have ^ at the front nor $ at the end, so it matches any substring. That's why <span class="javascript:alert(123)">there</span> is not stripped, but <span class=":::::">there</span> is stripped as expected.

How to fix that... I leave to you (there's more than one way, and I'm not a fan of regexes).

<a> tag abnormal output

	p := bluemonday.UGCPolicy()
	//p.AllowAttrs("src").Matching(regexp.MustCompile(`(?i)mailto|https?`)).OnElements("img")
	contents := []string{
		`<script>alert(/xss/);</script>;`,
		`<script src="http://xxx.xx/xx.js"></script>`,
		`<body onload=alert('test1')>`,
		`<b onmouseover=alert('Wufff!')>click me</b>;`,
		`<b onmouseover=alert('Wufff!')>click me!</b>`,
		`<a href="http://www.google.com/"><img src="https://ssl.gstatic.com/accounts/ui/logo_2x.png"/></a>`,
		`<a href="javascript:alert('XSS1')" onmouseover="alert('XSS2')">XSS<a>`,
		`<a href="http://www.google.com/" onmouseover="alert('XSS2')">XSS<a>`,
	}

	for _, v := range contents {
		t.Log(p.Sanitize(v))
	}

Input: <a href="http://www.google.com/" onmouseover="alert('XSS2')">XSS<a>
Output: <a href="http://www.google.com/" rel="nofollow">XSS
