
Comments (22)

presidentbeef commented on May 16, 2024

There is no system (yet).

First, where are false positives tracked?

You mention adding comments to the source code of the project. Assuming people are willing to modify their code to hide some warnings, at some point do you end up with a bunch of meaningless and outdated comments just to manage the use of an external tool?

Alternatively, there could be a database of marked false positives. Perhaps it could be stored in a hidden file in the application's root directory. This would allow it to be checked into source control without changing the code of the project.

Then there needs to be a way of getting the false positives into the database (or creating the comments to ignore them in the source code). What would be the interface for this?

Lastly, there needs to be a reliable way of identifying and matching the reported false positives when doing the scans. This takes some work, because you don't want simple things like changing line numbers or whitespace to undo your work hiding the false positive.

Anyhow, that is kind of why there is no system yet for dealing with false positives.

If you think you have a false positive that is more general (for example, an attribute of a model that would never be an XSS problem), then I would be happy to work towards eliminating those entirely from Brakeman's warnings.

philcox commented on May 16, 2024

A couple thoughts (from one of our developers, xegar - check out lint_fu) for consideration whenever you get to this ...

What I've learned from doing this stuff:

Annotating the source code to assist static analysis can be helpful, but agreed that simply adding comments at the site of the FP that say "this is not a bug" is ugly -- and in the long run leads to dangerous false negatives. More to say on annotations below.

Most commercial tools have a simple database to track false positives (or for that matter, false negatives). It may live parallel to the app being scanned, or it may be on a server somewhere. For Brakeman I'd probably use DataMapper in order to make my existing domain-model objects serializable into a SQLite DB which lives as a hidden file in the app root. Later on I would be free to allow for config files that point DataMapper at a real mysql/postgres/whatever server.
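
A rough sketch of that DataMapper idea, assuming the data_mapper gem with the SQLite adapter; the model and property names are illustrative, not anything Brakeman defines:

require 'data_mapper'  # dm-core plus dm-sqlite-adapter

# Hidden SQLite database in the application root, so it can be checked in
# alongside the code without touching the code itself.
DataMapper.setup(:default, "sqlite://#{Dir.pwd}/.brakeman.db")

# One row per issue a human has marked as a false positive.
class IgnoredWarning
  include DataMapper::Resource

  property :id,          Serial
  property :context,     String  # e.g. "UsersController#show"
  property :checker,     String  # e.g. "CheckCrossSiteScripting"
  property :fingerprint, Text    # erased s-expression of the flagged code
  property :note,        Text    # why it was judged safe
end

DataMapper.finalize
DataMapper.auto_upgrade!  # creates the table on first run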

Identifying an issue in a way that you'll be able to classify it as the same issue later is a classically tricky problem. I have had great success with storing the unique issue identifier as a tuple of (enclosing_class+method, checker_that_found_problem, erasure_of_source_code_fragment), where the "erasure" is a function that operates on the AST and replaces easily variable nodes of the AST with nils or a dummy value. For instance, all integer literals get translated to 0, and so forth. Serialize the AST to an s-expression, and you have a compact "fingerprint" of the issue.
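
A minimal sketch of that erasure idea, assuming the ruby_parser / sexp_processor Sexp AST that Brakeman already uses (the method names here are illustrative, not part of Brakeman or lint_fu):

require 'ruby_parser'

# Recursively replace volatile literals with dummy values so that trivial
# edits (renumbering an id, tweaking a string) do not change the identity
# of the issue.
def erase(node)
  return node unless node.is_a? Sexp

  case node.sexp_type
  when :lit then Sexp.new(:lit, 0)   # integers, floats, symbols -> 0
  when :str then Sexp.new(:str, "")  # string literals -> ""
  else
    Sexp.new(*node.map { |child| erase(child) })
  end
end

# Identify an issue as (enclosing class+method, checker, erased fragment).
def issue_id(klass, method, checker, code)
  ["#{klass}##{method}", checker, erase(RubyParser.new.parse(code)).inspect]
end

p issue_id("UsersController", "show", :CheckSQL,
           'User.where("id = #{params[:id]} AND role = 1")')

Two structurally identical fragments in the same method would collide under this scheme, but for the purpose of hiding a reviewed false positive that is usually an acceptable trade-off.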

What is the role of source code annotations? They're not a good way to suppress false positives, but they make an excellent mechanism for configuring scanners. For instance, I might decorate a method in a controller with:

@brakeman xss disable

or

@brakeman safe

This doesn't suppress false positives; rather, it keeps the false positives from being generated in the first place, because it informs the checker about "external" factors such as a controller that's part of a restricted internal admin interface, or is part of a back-end REST service and doesn't care about XSS, or whatever.
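
For concreteness, a hypothetical annotated controller using the @brakeman comments suggested above (this syntax is not implemented by Brakeman, and Report is a stand-in model name):

# Hypothetical example: an internal, admin-only JSON endpoint where XSS
# warnings are noise.
class InternalAdminReportsController < ApplicationController
  # @brakeman xss disable
  # Tells the XSS checker up front not to generate warnings for this action,
  # instead of suppressing them after the scan.
  def show
    @report = Report.find(params[:id])
    render json: @report
  end
end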

presidentbeef commented on May 16, 2024

Thank you for the great suggestions.

philcox commented on May 16, 2024

One other point that the person taking the brakeman output and fixing issues brought up for consideration:

There are many cases where we have this in a .rhtml (or .html.erb) file:

<%=  instance_of_some_object.method %>

And this in a .rb file:

class SomeObject
    def method
        ERB::Util.h( value )
    end
end

So, calling the "method" method in the .rhtml file is completely safe in this case. However, the tool marks it as a high-level warning... it doesn't seem to inspect the SomeObject.method code. This is a common issue with Ruby tools, since everything is dynamic and not typed. So I'm sure it's a very hard problem to solve. Many IDEs have yet to solve it.

Any thoughts?

presidentbeef commented on May 16, 2024

If it were obvious that instance_of_some_object was an instance of SomeObject and the method were really that simple, it would not be so hard, but that is rarely the case in Ruby :)

However, if instance_of_some_object.method does not involve user input or database values, then there should be no warning about it. Please let me know if you are seeing warnings for arbitrary method calls.

If user input or database values are used as arguments to a method whose result is output, there will be a warning for that, but it should be at weak confidence (unless it is a method known to not escape its output). If these warnings are not useful to you, --report-direct will only warn on instances of user input being output directly.

daveworth commented on May 16, 2024

My quick, half-baked idea for ignoring "known safe" false-positive scenarios is to have a .yml file somewhere in the app that simply denotes Object/Module, Method, and (optional) type of safety. I.e. I know that User#update does not actually have an unprotected mass-assignment bug, so the file could describe just that. It's really coarse, because there could be one non-bug and one real bug near each other in the same method (say, on different branches), and this approach would not alert on the real one, but it could be a first step. Annotating in the code goes against the eloquent principles of Ruby: the code should be obvious and not require comments. Comments themselves are often a code smell.

User:
  update:
    no-SQL-injection

presidentbeef commented on May 16, 2024

Hi Dave,

This is an interesting approach, but I agree that it needs to be a little more fine-grained. I would rather report false positives than potentially hide real problems.

I do like the idea of having a separate file for describing what warnings to ignore, though. I'm warming to the idea of having something in the app's directory for Brakeman to read.

cktricky commented on May 16, 2024

Any progression on this topic?

daveworth commented on May 16, 2024

Ken and Justin, I've lately been a little out of the bag in terms of time, but am hoping to get some time on this unless someone else has just wrangled it in the meantime. If not, definitely count me in and I will carve out some time to at least get an MVP done and ready in a pull request.

cktricky commented on May 16, 2024

@daveworth - Awesome. Well, I was asking because it's important enough to us to contribute to writing that code BUT, no need to do it if somebody else is biting that bullet :-). Besides, probably wouldn't be until after April 5 that it would get a lot of TLC from me.

So I guess the question is, when do you plan on working on it and is Justin already working on it (no need to duplicate efforts I'd imagine).

Cheers

daveworth commented on May 16, 2024

@cktricky Well, let's wait until we hear back from Justin and then make some decisions. By "us" do you mean the whole Carnale0wnage crew? If so, that's just kinda exciting... :-)

cktricky commented on May 16, 2024

hehe, no, I meant more in terms of my professional life/crew :-)

Agreed, let's see where Justin is at and if he needs us to pitch in, I'm up to helping out wherever it is needed.

Thanks!

presidentbeef commented on May 16, 2024

I'm not currently doing any work on a system for supporting this. However, what I will be working on is better support for "matching" warnings - that is, if only the line number of a warning's code has changed, it should not be reported as a new warning. I think this essentially means storing the "context" of the code (module/class/method) instead of just the line number.

My thoughts:

  • This should be a post-scan filtering step. It should only hide warnings after they are found.
  • I like the idea of a .brakeman_ignore file (or something similar) in the app's root directory; a strawman example follows this list.
  • How will users "mark" something as a false positive? What's the interface for this?
  • What are the chances that code marked as a false positive will somehow occlude vulnerable code in the future?
  • It needs to be obvious that Brakeman is ignoring warnings based on users' directions (i.e., there should be a message saying "x ignored warnings" or "Ignoring warnings based on .brakeman_ignore file").
  • There should be an option to report all warnings, even those that would be ignored.
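
A strawman for that file, borrowing the YAML idea from earlier in the thread; every field name here is illustrative rather than a committed format:

# .brakeman_ignore (strawman only; field names are not a committed format)
ignored_warnings:
  - warning_type: Cross Site Scripting
    context: SomeController#show      # class/method context, not a line number
    fingerprint: "..."                # unique hash generated by Brakeman
    note: Output is escaped in SomeObject#method before rendering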

cktricky commented on May 16, 2024

Q: What are the chances that code marked as a false positive will somehow occlude vulnerable code in the future?
A: Could there be a unique hash attached to the finding itself, generated by Brakeman?

I'm all for a .brakeman_ignore file. Maybe comma-separated? The values would be those unique values attached to the finding?
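
A minimal sketch of generating such a hash, building on the AST erasure idea earlier in the thread (the method name is illustrative, not Brakeman's API):

require 'digest/sha1'

# Hash the parts of a warning that survive whitespace changes and
# line-number shifts into one short, stable identifier.
def warning_fingerprint(warning_type, context, erased_code)
  Digest::SHA1.hexdigest([warning_type, context, erased_code].join('|'))
end

warning_fingerprint("Cross Site Scripting",
                    "SomeController#show",
                    "s(:call, s(:call, nil, :params), :[], s(:lit, 0))")
# => a 40-character hex string that could be listed in .brakeman_ignore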

presidentbeef commented on May 16, 2024

Could there be a unique hash attached to the finding itself, generated by Brakeman?

That's what I'm hoping to do.

daveworth commented on May 16, 2024

Justin,

These are all good points, and I've mentally addressed many of them when thinking about the problem at hand.

  • Post-filtering - absolutely correct.
  • .brakeman_ignore - I think the .brakeman_ignore file should be a YAML file so we can put rich data inside to address occlusion concerns.
  • Interface - If we're doing YAML, the interface would probably have to be ${EDITOR}.
  • Vulnerability occlusion/aliasing - This is hard. My thought for addressing this was to hash some amount of context, such that when a given function changes (potentially exposing more vulnerabilities, or patching a previously marked "ok" function), the annotation would be ignored. For example, suppose someone has a chunk of code marked as a SQL injection, but they believe they have sanitized the data sufficiently to prevent the vuln, and have annotated it as such. If they then wrap that code in something else that changes the context, it should be alerted against again. Does this make sense?
    • My big worry here is that the data needed for annotations is complicated. My feeling is that if Brakeman generates a file on every run that contains all the necessary information to annotate a vulnerability, then a user can copy/paste the correct block from the generated file into their .brakeman_ignore file. When they change context as described above, they will likely have to do the same once again. This is probably better than getting compromised later.
  • The last two points (stating that X warnings were ignored, an option to report all warnings, and maybe another option to report only ignored warnings) should go in with the above changes. The big meat of the problem is preventing vulnerability aliasing through annotation.

daveworth commented on May 16, 2024

Just a status update. Work is awesome and demands that we do FOSS contributions. As such, I have carved out some time. At this point I have hooks in place on my branch (locally, not on github yet) that produce annotations. I am biasing towards action and realizing that the first pass will not be amazingly simple, but in time I think it will produce good results. I hope to have an alpha branch soon.

cktricky commented on May 16, 2024

ping :-)

Justin, any forward progress on this front in terms of including Dave's work into the product? I'm going to ask for time so that our appsec team can help out if need be. Let me know if we can help.

Cheers,

Ken

oreoshake commented on May 16, 2024

We've had some discussion lately on the pull request, chime in? #73

cktricky commented on May 16, 2024

taking a look now, also, sending some info to link up on this

cmer commented on May 16, 2024

+1

presidentbeef commented on May 16, 2024

Added with #368.
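
The ignore support that landed follows the general shape discussed above: Brakeman computes a fingerprint per warning and reads an ignore file from the app (config/brakeman.ignore, typically managed with the interactive brakeman -I mode). A trimmed entry looks roughly like the following; treat the field names as approximate and check the current documentation for the exact format:

{
  "ignored_warnings": [
    {
      "warning_type": "Cross Site Scripting",
      "fingerprint": "...",
      "note": "Escaped in SomeObject#method before output"
    }
  ]
}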
