liamg / scout

513.0 12.0 64.0 1.57 MB

🔭 Lightweight URL fuzzer and spider: Discover a web server's undisclosed files, directories and VHOSTs

License: The Unlicense

Makefile 0.40% Go 94.62% Shell 4.98%
Topics: fuzzer, url, url-fuzzer, security, pentesting, hackthebox

scout's Introduction

Scout


Scout is a URL fuzzer and spider for discovering undisclosed VHOSTs, files, and directories on a web server.

A full word list is included in the binary, meaning maximum portability and minimal configuration. Aim and fire!

Usage

Usage:
  scout [command]

Available Commands:
  help        Help about any command
  url         Discover URLs on a given web server.
  version     Display scout version.
  vhost       Discover VHOSTs on a given web server.

Flags:
  -d, --debug             Enable debug logging.
  -h, --help              help for scout
  -n, --no-colours        Disable coloured output.
  -p, --parallelism int   Parallel routines to use for sending requests. (default 10)
  -k, --skip-ssl-verify   Skip SSL certificate verification.
  -w, --wordlist string   Path to wordlist file. If this is not specified an internal wordlist will be used.

Discover URLs

Flags

-x, --extensions

File extensions to detect. (default php,htm,html,txt)

-f, --filename

Filename to seek in the directory being searched. Useful when all directories report 404 status.

-H, --header

Extra header to send with requests e.g. -H "Cookie: PHPSESSID=blah"

-c, --status-codes

HTTP status codes which indicate a positive find. (default 200,400,403,500,405,204,401,301,302)

-m, --method

HTTP method to use.

-s, --spider

Scan page content for links and confirm their existence.

Full example

$ scout url http://192.168.1.1
  
  [+] Target URL      http://192.168.1.1
  [+] Routines        10 
  [+] Extensions      php,htm,html 
  [+] Positive Codes  200,302,301,400,403,500,405,204,401,301,302
  
  [302] http://192.168.1.1/css
  [302] http://192.168.1.1/js
  [302] http://192.168.1.1/language
  [302] http://192.168.1.1/style
  [302] http://192.168.1.1/help
  [401] http://192.168.1.1/index.htm
  [302] http://192.168.1.1/image
  [200] http://192.168.1.1/log.htm
  [302] http://192.168.1.1/script
  [401] http://192.168.1.1/top.html
  [200] http://192.168.1.1/shares
  [200] http://192.168.1.1/shares.php
  [200] http://192.168.1.1/shares.htm
  [200] http://192.168.1.1/shares.html
  [401] http://192.168.1.1/traffic.htm
  [401] http://192.168.1.1/reboot.htm
  [302] http://192.168.1.1/debug
  [401] http://192.168.1.1/debug.htm
  [401] http://192.168.1.1/debug.html
  [401] http://192.168.1.1/start.htm
  
  Scan complete. 28 results found. 

Discover VHOSTs

$ scout vhost https://google.com
  
  [+] Base Domain     google.com
  [+] Routines        10 
  [+] IP              -
  [+] Port            - 
  [+] Using SSL       true
  
  account.google.com
  accounts.google.com
  blog.google.com
  code.google.com
  dev.google.com
  local.google.com
  m.google.com
  mail.google.com
  mobile.google.com
  www.google.com
  admin.google.com
  chat.google.com
  
  Scan complete. 12 results found.

Installation

curl -s "https://raw.githubusercontent.com/liamg/scout/master/scripts/install.sh" | bash

Note: the install script requires jq to be available on your system.

scout's People

Contributors

liamg, owenrumney


scout's Issues

License statement

To be able to create distribution packages, the source should include a license statement, and a LICENSE file should be part of the source tree.

Thanks in advance for adding it.

When scout gets to ".backup.html", it stops (using the default wordlist).

For example, when using the default wordlist to scan https://www.apple.com/, it gets stuck on:
Checking https://www.apple.com/.backup.html...

I've encountered this bug (I think it's a bug) with several other websites too, and it always stops on .backup.html. I suspect the cause is that ".backup.html" contains two periods, which makes it unique among the entries in the wordlist.

Output to file

First, I think this is a great tool. Installation and initial runs couldn't have been easier.

Is there a recommended way to output to a file that includes only valid results and omits attempts? I went with the typical:
scout url <url> >> out.txt

...which included every attempt, instead of only the valid results you'd get in the terminal. Thanks!

Example of out.txt contents:

\x1b[0mChecking url/41.html...\x1b[0m\x1b[2K
\x1b[0mChecking url/41.txt...\x1b[0m\x1b[2K
\x1b[0mChecking url/35...\x1b[0m\x1b[2K
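The garbage bytes in the redirected file are ANSI escape sequences (colour resets and erase-line controls) that scout emits for terminal display. Until there is a plain-output flag, one workaround is to pipe the output through a small filter. A minimal sketch in Go (the regexp is an assumption that covers only CSI sequences, which matches the bytes shown above):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// ansi matches CSI escape sequences such as "\x1b[0m" (reset colour)
// and "\x1b[2K" (erase line), the sequences visible in out.txt above.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*[A-Za-z]`)

// StripANSI removes terminal escape sequences from one line of output.
func StripANSI(s string) string {
	return ansi.ReplaceAllString(s, "")
}

func main() {
	// Read scout's output on stdin and write cleaned lines to stdout:
	//   scout url <url> | stripansi >> out.txt
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fmt.Println(StripANSI(sc.Text()))
	}
}
```

This strips the decoration but still keeps every "Checking ..." attempt line; filtering those out as well would need scout itself to distinguish progress output from results.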

Update readme

I tried to run the installation script and it told me that it needs jq.
You should mention this in the readme before the installation section.

Add option to display full redirect chain (http codes 301 and 302) as separate lines

Currently, Scout displays only the final HTTP code for URLs that redirect. It would be useful to know when a page redirects, and to where. I suggest that if the URL it redirects to is a positive match, it be added as a separate line. For example, the URL https://www.strangecode.com/wm responds with code 301 and redirects to https://webmail.strangecode.com/, which responds with code 200. When a page redirects successfully, you could display it separately, like this:

…
[301] https://www.strangecode.com/wm
[200] https://webmail.strangecode.com/
…

This might result in a chain of interesting redirects; in the following example, it might be useful to learn about the existence of the host oauth.example.com:

…
[301] https://example.com/admin
[302] https://oauth.example.com/interesting/url
[200] https://destination.example.com/
…

I would even enable this by default, but it could be hidden behind an option such as --expand-redirects, -e. :)

Problem installing assets

Hi there - Thanks for the work on this project.

I'm a bit of a noob and I have googled this issue many times. I can't get the installation to work, so I run into a "command not found" error. Can you help?

Thanks!
(screenshot attached: 2020-09-07_11-01-33)

If internet connection is lost while scanning, every URL has a code 403

While running a scan, if internet connectivity is lost, scout marks every URL as a code 403. Output will contain hundreds of lines like this:

…
[403] https://example.com/cheers
[403] https://example.com/cheesecake
[403] https://example.com/cheers.php
[403] https://example.com/cheers.html
[403] https://example.com/cheesecake.html
[403] https://example.com/cheers.htm
[403] https://example.com/cheesecake.htm
[403] https://example.com/cheddar
[403] https://example.com/cheddar.php
[403] https://example.com/checks.php
[403] https://example.com/checkov.php
[403] https://example.com/checks.html
[403] https://example.com/cheddar.html
[403] https://example.com/checks
[403] https://example.com/checkov.htm
[403] https://example.com/checks.htm
[403] https://example.com/cheddar.htm
[403] https://example.com/checkov
[403] https://example.com/checkov.html
[403] https://example.com/checkin
[403] https://example.com/checkin.php
…

A network timeout should be treated as a special case and not displayed as a positive match.

Delay on 429 status code

Most servers now enforce a request rate limit and return a 429 status code when it is exceeded. Despite this, the fuzzer keeps going at full speed, producing quite a few 500s. Does that make sense? I think there should be a time delay on 429.

goes forever?

This has been fuzzing a URL for about an hour and 46 minutes. It also shows all outputs (404, 503, 200), despite some of the 200s being blank pages. Also, in the Windows CLI version, a bunch of the found links end with "kkk", like /onlinekkk and /termskkk, which at the very least looks odd in the CLI.
