
rub-nds / rest-attacker


REST-Attacker is designed as a proof-of-concept for the feasibility of testing generic real-world REST implementations. Its goal is to provide a framework for REST security research.

License: GNU Lesser General Public License v3.0

Python 99.28% HTML 0.72%

rest-attacker's People

Contributors

heinezen, injcristianrojas, iphoneintosh


rest-attacker's Issues

Use Python generators for test generation

In the current implementation, the test generation logic puts all generated tests into a big list and passes this list to the engine for execution. This is fine for smaller test runs, but it could result in a heavy memory footprint if a run contains a large number of tests or if individual TestCase objects are large. Generating the test cases with generators could improve this situation, since tests would be generated on the fly (see the sketch below the task list).

It should still be possible to generate all tests at once, so that a run can be saved to file and executed later. The test engine should accept a list of tests or a list of generators.

Implementing Python generators would probably involve the following tasks:

  • Redesign all existing TestCase.generate methods as generators (should be trivial)
  • Store generators and pass them to the engine
  • Test IDs need to be generated on-the-fly during test execution (a simple counter should be enough).
  • Test results need to be stored after test execution, so that we can destroy the test object after execution to save memory.
  • Tests should not be destroyed immediately, since the engine may want to execute them a second time. This can happen if the engine detects that the rate limit of a service has been reached and the last X tests have to be redone.
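The following is a minimal sketch of this design; ExampleTestCase and execute() are hypothetical names, not part of the actual REST-Attacker code. It only illustrates yielding tests lazily, assigning IDs with a counter, and keeping results instead of test objects:

    # Hypothetical sketch: lazy test generation and a consuming engine loop.
    class ExampleTestCase:
        def __init__(self, endpoint):
            self.endpoint = endpoint

        @classmethod
        def generate(cls, endpoints):
            # Yield tests one at a time instead of building a full list up front.
            for endpoint in endpoints:
                yield cls(endpoint)

        def run(self):
            return {"endpoint": self.endpoint, "issue": None}

    def execute(test_sources):
        results = {}
        test_id = 0  # IDs assigned on the fly with a simple counter
        for source in test_sources:
            for test in source:
                # Store the result; the test object itself can then be dropped.
                results[test_id] = test.run()
                test_id += 1
        return results

    # The engine loop works with generators as well as with pre-built lists of tests.
    print(execute([ExampleTestCase.generate(["/users", "/orders"])]))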

You can use the Python module tracemalloc to monitor memory usage.
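For example, a quick comparison with tracemalloc (using placeholder objects rather than real test cases) shows the difference between eager and lazy generation:

    import tracemalloc

    # Eager generation: all placeholder "tests" are held in memory at once.
    tracemalloc.start()
    tests = [object() for _ in range(100_000)]
    current, peak = tracemalloc.get_traced_memory()
    print(f"list-based: current={current} B, peak={peak} B")
    tracemalloc.stop()

    # Lazy generation: the generator itself uses almost no memory until consumed.
    tracemalloc.start()
    tests = (object() for _ in range(100_000))
    current, peak = tracemalloc.get_traced_memory()
    print(f"generator-based: current={current} B, peak={peak} B")
    tracemalloc.stop()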

Pass Authorization Token/Cookie Headers.

Not all of the test cases mentioned in the documentation are present in the generated report, e.g. the ones starting with scopes and the ones related to OAuth.

Is there any way to check if the information provided inside the info.json and auth.json is correct and was used successfully to run appropriate test cases?

Can the dev environment be tested using an authorization token with cookies as headers? I need to run it like a simple curl command: curl -X GET [API_LINK] -H "Authorization: Bearer [token]" --cookie "Name=Value"
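As a manual sanity check outside of the tool, the same request can be sent with Python's requests library to confirm the credentials are accepted before configuring a run; the URL, token, and cookie values below are placeholders:

    import requests

    response = requests.get(
        "https://api.example.com/endpoint",            # stands in for [API_LINK]
        headers={"Authorization": "Bearer <token>"},   # stands in for [token]
        cookies={"Name": "Value"},
    )
    print(response.status_code, response.headers.get("Content-Type"))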

Browser GUI integration for docker mode

Currently, the most robust method for automated authentication (Browser Authentication) does not work well when running the tool inside a Docker container. The reason is that this method requires access to the browser GUI for manual login at the targeted service. There are workarounds for getting a Firefox/Chrome GUI to work in Docker, but they are currently not consistent enough to serve as a stable solution.

As an alternative, we could run a full Ubuntu image (including a GUI) inside the Docker container and start the browser inside its GUI. We could then access the browser GUI by connecting to the Ubuntu session over the network, e.g. by using noVNC.

Compare test results

TestCase should implement a compare() method that can be used to compare the results of two checks. Comparing results can be useful for the following scenarios:

  1. Reproducing a test run (e.g. for checking if a detected issue has been fixed)
  2. Diffing the results of two tests of the same type (e.g. for comparing the API's reactions to slightly different API requests)

compare() should be implemented as a classmethod that gets passed two report objects (see the sketch after the list below). It should then do the following:

  • Check for differences in the issue field.
  • Diff the value field. Complexity for this may vary depending on the test case, since there can be optional fields and nested values.
  • Return a comparison object (as a dict). The object should contain a flag that indicates whether the results are a match/mismatch and the created diff.
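A minimal sketch of such a method, assuming flat report dicts with issue and value fields; the real reports may contain nested or optional values that need per-test-case handling:

    class TestCase:
        @classmethod
        def compare(cls, report_a: dict, report_b: dict) -> dict:
            issue_match = report_a.get("issue") == report_b.get("issue")

            # Flat diff of the 'value' fields; nested/optional fields would need
            # test-case-specific handling.
            values_a = report_a.get("value", {})
            values_b = report_b.get("value", {})
            diff = {
                key: (values_a.get(key), values_b.get(key))
                for key in set(values_a) | set(values_b)
                if values_a.get(key) != values_b.get(key)
            }

            return {
                "match": issue_match and not diff,
                "issue_match": issue_match,
                "diff": diff,
            }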

Dry runs

The tool should have an option for "dry runs", as found in similar tools. In a dry run, the tool would only execute the configuration stage and/or the test generation; test execution is skipped. This can be useful for determining whether a given test or service configuration is valid before it is let loose on the targeted REST API.

Dry run functionality could also be used to generate test runs and then save them to file, rather than executing them directly. Essentially, running a dry run with test generation should create a run configuration file that can be passed to the tool at a later time.

Implementing dry runs would probably involve the following tasks (see the sketch after the list):

  • New CLI flag --dry-run
  • Skip engine.run() if a dry run is currently being executed
  • Output a run configuration file if test generation is used (with the --generate flag). Test configs can be retrieved from generated tests with the serialize() function.
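A rough sketch of how the flag could be wired up; the generate_tests() and Engine stubs below are placeholders, not the tool's actual interfaces:

    import argparse
    import json

    def generate_tests(config):
        # Placeholder for the configuration and test generation stages.
        return []

    class Engine:
        def run(self, tests):
            # Placeholder for actual test execution.
            pass

    parser = argparse.ArgumentParser()
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--generate", action="store_true")
    args = parser.parse_args()

    tests = generate_tests({})  # configuration + test generation always run

    if args.generate:
        # Write a run configuration file that can be passed back to the tool later.
        with open("run_config.json", "w") as out:
            json.dump([test.serialize() for test in tests], out, indent=2)

    if not args.dry_run:
        Engine().run(tests)  # skipped entirely for dry runs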
