
projectdiscovery / httpx

httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryablehttp library.

Home Page: https://docs.projectdiscovery.io/tools/httpx

License: MIT License


httpx's Introduction

httpx

Features • Installation • Usage • Documentation • Notes • Join Discord

httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryablehttp library. It is designed to maintain result reliability with an increased number of threads.

Features


  • Simple and modular code base, making it easy to contribute.
  • Fast and fully configurable flags to probe multiple elements.
  • Supports multiple HTTP-based probes.
  • Smart automatic fallback from HTTPS to HTTP by default.
  • Supports hosts, URLs, and CIDRs as input.
  • Handles edge cases with retries, backoffs, etc., to deal with WAFs.

Supported probes

Probes            Default check    Probes            Default check
URL               true             IP                true
Title             true             CNAME             true
Status Code       true             Raw HTTP          false
Content Length    true             HTTP2             false
TLS Certificate   true             HTTP Pipeline     false
CSP Header        true             Virtual host      false
Line Count        true             Word Count        true
Location Header   true             CDN               false
Web Server        true             Paths             false
Web Socket        true             Ports             false
Response Time     true             Request Method    true
Favicon Hash      false            Probe Status      false
Body Hash         true             Header Hash       true
Redirect chain    false            URL Scheme        true
JARM Hash         false            ASN               false

Installation Instructions

httpx requires Go 1.21 to install successfully. Run the following command to install the latest version:

go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

To learn more about installing httpx, see https://docs.projectdiscovery.io/tools/httpx/install.

Disclaimer
This project is in active development. Expect breaking changes with releases. Review the changelog before updating.
This project was primarily built to be used as a standalone CLI tool. Running it as a service may pose security risks; use it with caution and with additional security measures.

Usage

httpx -h

This will display help for the tool. Here are all the switches it supports.

Usage:
  ./httpx [flags]

Flags:
INPUT:
   -l, -list string      input file containing list of hosts to process
   -rr, -request string  file containing raw request
   -u, -target string[]  input target host(s) to probe

PROBES:
   -sc, -status-code     display response status-code
   -cl, -content-length  display response content-length
   -ct, -content-type    display response content-type
   -location             display response redirect location
   -favicon              display mmh3 hash for '/favicon.ico' file
   -hash string          display response body hash (supported: md5,mmh3,simhash,sha1,sha256,sha512)
   -jarm                 display jarm fingerprint hash
   -rt, -response-time   display response time
   -lc, -line-count      display response body line count
   -wc, -word-count      display response body word count
   -title                display page title
   -bp, -body-preview    display first N characters of response body (default 100)
   -server, -web-server  display server name
   -td, -tech-detect     display technology in use based on wappalyzer dataset
   -method               display http request method
   -websocket            display server using websocket
   -ip                   display host ip
   -cname                display host cname
   -asn                  display host asn information
   -cdn                  display cdn/waf in use
   -probe                display probe status

HEADLESS:
   -ss, -screenshot                 enable saving screenshot of the page using headless browser
   -system-chrome                   enable using local installed chrome for screenshot
   -ho, -headless-options string[]  start headless chrome with additional options
   -esb, -exclude-screenshot-bytes  enable excluding screenshot bytes from json output
   -ehb, -exclude-headless-body     enable excluding headless header from json output
   -st, -screenshot-timeout int     set timeout for screenshot in seconds (default 10)

MATCHERS:
   -mc, -match-code string            match response with specified status code (-mc 200,302)
   -ml, -match-length string          match response with specified content length (-ml 100,102)
   -mlc, -match-line-count string     match response body with specified line count (-mlc 423,532)
   -mwc, -match-word-count string     match response body with specified word count (-mwc 43,55)
   -mfc, -match-favicon string[]      match response with specified favicon hash (-mfc 1494302000)
   -ms, -match-string string          match response with specified string (-ms admin)
   -mr, -match-regex string           match response with specified regex (-mr admin)
   -mcdn, -match-cdn string[]         match host with specified cdn provider (cloudfront, fastly, google, leaseweb, stackpath)
   -mrt, -match-response-time string  match response with specified response time in seconds (-mrt '< 1')
   -mdc, -match-condition string      match response with dsl expression condition

EXTRACTOR:
   -er, -extract-regex string[]   display response content with matched regex
   -ep, -extract-preset string[]  display response content matched by a pre-defined regex (url,ipv4,mail)

FILTERS:
   -fc, -filter-code string            filter response with specified status code (-fc 403,401)
   -fep, -filter-error-page            filter response with ML based error page detection
   -fl, -filter-length string          filter response with specified content length (-fl 23,33)
   -flc, -filter-line-count string     filter response body with specified line count (-flc 423,532)
   -fwc, -filter-word-count string     filter response body with specified word count (-fwc 423,532)
   -ffc, -filter-favicon string[]      filter response with specified favicon hash (-ffc 1494302000)
   -fs, -filter-string string          filter response with specified string (-fs admin)
   -fe, -filter-regex string           filter response with specified regex (-fe admin)
   -fcdn, -filter-cdn string[]         filter host with specified cdn provider (cloudfront, fastly, google, leaseweb, stackpath)
   -frt, -filter-response-time string  filter response with specified response time in seconds (-frt '> 1')
   -fdc, -filter-condition string      filter response with dsl expression condition
   -strip                              strips all tags in response. supported formats: html,xml (default html)

RATE-LIMIT:
   -t, -threads int              number of threads to use (default 50)
   -rl, -rate-limit int          maximum requests to send per second (default 150)
   -rlm, -rate-limit-minute int  maximum number of requests to send per minute

MISCELLANEOUS:
   -pa, -probe-all-ips        probe all the ips associated with same host
   -p, -ports string[]        ports to probe (nmap syntax: eg http:1,2-10,11,https:80)
   -path string               path or list of paths to probe (comma-separated, file)
   -tls-probe                 send http probes on the extracted TLS domains (dns_name)
   -csp-probe                 send http probes on the extracted CSP domains
   -tls-grab                  perform TLS(SSL) data grabbing
   -pipeline                  probe and display server supporting HTTP1.1 pipeline
   -http2                     probe and display server supporting HTTP2
   -vhost                     probe and display server supporting VHOST
   -ldv, -list-dsl-variables  list json output field keys name that support dsl matcher/filter

UPDATE:
   -up, -update                 update httpx to latest version
   -duc, -disable-update-check  disable automatic httpx update check

OUTPUT:
   -o, -output string                  file to write output results
   -oa, -output-all                    filename to write output results in all formats
   -sr, -store-response                store http response to output directory
   -srd, -store-response-dir string    store http response to custom directory
   -csv                                store output in csv format
   -csvo, -csv-output-encoding string  define output encoding
   -j, -json                           store output in JSONL(ines) format
   -irh, -include-response-header      include http response (headers) in JSON output (-json only)
   -irr, -include-response             include http request/response (headers + body) in JSON output (-json only)
   -irrb, -include-response-base64     include base64 encoded http request/response in JSON output (-json only)
   -include-chain                      include redirect http chain in JSON output (-json only)
   -store-chain                        include http redirect chain in responses (-sr only)
   -svrc, -store-vision-recon-cluster  include visual recon clusters (-ss and -sr only)

CONFIGURATIONS:
   -config string                path to the httpx configuration file (default $HOME/.config/httpx/config.yaml)
   -r, -resolvers string[]       list of custom resolver (file or comma separated)
   -allow string[]               allowed list of IP/CIDR's to process (file or comma separated)
   -deny string[]                denied list of IP/CIDR's to process (file or comma separated)
   -sni, -sni-name string        custom TLS SNI name
   -random-agent                 enable Random User-Agent to use (default true)
   -H, -header string[]          custom http headers to send with request
   -http-proxy, -proxy string    http proxy to use (eg http://127.0.0.1:8080)
   -unsafe                       send raw requests skipping golang normalization
   -resume                       resume scan using resume.cfg
   -fr, -follow-redirects        follow http redirects
   -maxr, -max-redirects int     max number of redirects to follow per host (default 10)
   -fhr, -follow-host-redirects  follow redirects on the same host
   -rhsts, -respect-hsts         respect HSTS response headers for redirect requests
   -vhost-input                  get a list of vhosts as input
   -x string                     request methods to probe, use 'all' to probe all HTTP methods
   -body string                  post body to include in http request
   -s, -stream                   stream mode - start elaborating input targets without sorting
   -sd, -skip-dedupe             disable dedupe input items (only used with stream mode)
   -ldp, -leave-default-ports    leave default http/https ports in host header (eg. http://host:80 - https://host:443)
   -ztls                         use ztls library with autofallback to standard one for tls13
   -no-decode                    avoid decoding body
   -tlsi, -tls-impersonate       enable experimental client hello (ja3) tls randomization
   -no-stdin                 disable stdin processing

DEBUG:
   -health-check, -hc        run diagnostic check up
   -debug                    display request/response content in cli
   -debug-req                display request content in cli
   -debug-resp               display response content in cli
   -version                  display httpx version
   -stats                    display scan statistic
   -profile-mem string       optional httpx memory profile dump file
   -silent                   silent mode
   -v, -verbose              verbose mode
   -si, -stats-interval int  number of seconds to wait between showing a statistics update (default: 5)
   -nc, -no-color            disable colors in cli output

OPTIMIZATIONS:
   -nf, -no-fallback                  display both probed protocol (HTTPS and HTTP)
   -nfs, -no-fallback-scheme          probe with protocol scheme specified in input 
   -maxhr, -max-host-error int        max error count per host before skipping remaining path/s (default 30)
   -e, -exclude string[]              exclude host matching specified filter ('cdn', 'private-ips', cidr, ip, regex)
   -retries int                       number of retries
   -timeout int                       timeout in seconds (default 10)
   -delay value                       duration between each http request (eg: 200ms, 1s) (default -1ns)
   -rsts, -response-size-to-save int  max response size to save in bytes (default 2147483647)
   -rstr, -response-size-to-read int  max response size to read in bytes (default 2147483647)

Running httpx

For details about running httpx, see https://docs.projectdiscovery.io/tools/httpx/running.

Using httpx as a library

httpx can be used as a library by creating an instance of the Options struct and populating it with the same options that would be specified via the CLI. Once validated, the struct is passed to a runner instance (to be closed at the end of the program), and the RunEnumeration method is called. A minimal example is available in the examples folder.

Notes

  • By default, httpx probes with the HTTPS scheme and falls back to HTTP only if HTTPS is not reachable.
  • The -no-fallback flag can be used to probe and display both HTTP and HTTPS results.
  • A custom scheme for ports can be defined, for example -ports http:443,http:80,https:8443
  • Custom resolvers support multiple protocols (doh|tcp|udp) in the form protocol:resolver:port (e.g. udp:127.0.0.1:53)
  • The following flags should be used for specific use cases instead of running them as default with other probes:
    • -ports
    • -path
    • -vhost
    • -screenshot
    • -csp-probe
    • -tls-probe
    • -favicon
    • -http2
    • -pipeline
    • -tls-impersonate
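The protocol:resolver:port syntax from the resolver note above can be illustrated with a small parser sketch. This is illustrative only: the defaults chosen here (udp, port 53) and the restriction to host:port-style resolvers are assumptions, not taken from the httpx documentation.

```python
def parse_resolver(spec: str):
    """Parse a custom-resolver spec of the form protocol:resolver:port
    (e.g. udp:127.0.0.1:53). Defaults to udp and port 53 when parts
    are omitted -- an assumption for this sketch, not httpx's rule."""
    parts = spec.split(":")
    if len(parts) == 3 and parts[0] in ("doh", "tcp", "udp"):
        return parts[0], parts[1], int(parts[2])
    if len(parts) == 2:
        # resolver:port with no protocol
        return "udp", parts[0], int(parts[1])
    # bare resolver address
    return "udp", spec, 53

print(parse_resolver("udp:127.0.0.1:53"))  # ('udp', '127.0.0.1', 53)
print(parse_resolver("1.1.1.1"))           # ('udp', '1.1.1.1', 53)
```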

Acknowledgement

The probing feature is inspired by @tomnomnom's httprobe work ❤️


httpx is made with 💙 by the projectdiscovery team and distributed under MIT License.

Join Discord

httpx's People

Contributors

0xdu, anhnmt, chenrui333, dependabot[bot], dogancanbakir, ehsandeep, forgedhallpass, foxcores, ice3man543, jimen0, joshuamart, jsav0, karelorigin, luitelsamikshya, m09ic, melardev, mzack9999, parrasajad, ramanareddy0m, secinto, seeyarh, shubhamrasal, skraxberger, st3rv04ka, tarunkoyalwar, timoles, vzamanillo, wux1an, xm1k3, zerodivisi0n


httpx's Issues

How can I scan http first?

Hello,

I am trying to scan some websites and detect their titles and status codes.

Those websites only serve content over HTTP; even though they have a valid SSL certificate (*.example.com), HTTPS just shows an Nginx default page.

I fed httpx a list of domains starting with http://, but currently httpx tries HTTPS first and falls back to HTTP only on failure.

How can I force httpx to scan the list over HTTP without auto-upgrading to HTTPS?

Thanks in advance.

saving http responses for same host on multiple ports makes the previous response override

If you scan http://example.com on multiple ports, say 80, 443, and 8443,

the final response in example.com.txt will be the one returned from port 8443; the previous responses from ports 80 and 443 are overwritten.

Moreover, it also lacks the ability to specify whether to run HTTPS or HTTP on port 8443. Maybe this functionality can be adopted from httprobe by tomnomnom; the only issue with httprobe is that it does not allow saving responses.
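One way to avoid the overwriting described in this report is to fold the scheme and port into the saved filename. A minimal sketch of that idea (this is not httpx's actual naming logic; response_filename is a hypothetical helper):

```python
from urllib.parse import urlparse

def response_filename(url: str) -> str:
    """Derive a per-URL output filename that keeps scheme and port
    distinct, so probing the same host on several ports does not
    overwrite earlier responses. Illustrative sketch only."""
    u = urlparse(url)
    # fall back to the scheme's default port when none is given
    port = u.port or (443 if u.scheme == "https" else 80)
    return f"{u.hostname}_{u.scheme}_{port}.txt"

print(response_filename("http://example.com"))        # example.com_http_80.txt
print(response_filename("https://example.com:8443"))  # example.com_https_8443.txt
```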

Improvement suggestion: flag to print redirection path

Currently, there are flags to tell httpx to follow redirects (-follow-redirects, -follow-host-redirects). However, it would sometimes be useful not to follow the redirect, but only to print the redirect URI.

Adding flags to support matcher and filters

Flag name:- mc = match status code (from ffuf)
Flag name:- ml = match content length

Flag name:- fc = filter status code
Flag name:- fl = filter content length

Example of matchers:-

> subfinder -d uber.com | httpx -status-code -mc 200,401

https://blogcdn.uber.com [200]
https://bliss-events.uber.com [200]
https://blackswan.uber.com [401]
https://developer.uber.com [200]

> subfinder -d uber.com | httpx -content-length -ml 146,0

https://cn-geo1.uber.com [146]
https://io.uber.com [146]
https://uber.com [0]
https://brand.uber.com [0]
https://lert.uber.com [146]

Example of Filters:-

> subfinder -d uber.com | httpx -status-code -fc 200,401

https://guest.uber.com [302]
https://businesses.uber.com [302]
https://love.uber.com [302]

> subfinder -d uber.com | httpx -content-length -fl 146,0

https://cn-geo1.uber.com [36]
https://io.uber.com [2783]
https://uber.com [166]
https://brand.uber.com [9]
https://lert.uber.com [5246]

Validation:-

  1. The mc and fc flags will only work when the status-code flag is used.
  2. The ml and fl flags will only work when the content-length flag is used.

Custom host header

Hi,

It could be awesome if I could overwrite the host header like that:

echo "http://example.com" | httpx -http-proxy http://127.0.0.1:8080 -H "Host: test"

Other headers are rewritten correctly (User-Agent, etc.), except this one.
I know you have a vhost option, but it would be much easier to use if I could check it like this.

Love your tool, thanks a lot.
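For reference, overriding the Host header while keeping the original target is a standard pattern in HTTP clients. A minimal Python sketch of the requested behaviour (it only constructs the request object; nothing is sent over the network):

```python
from urllib.request import Request

# The Host header is overridden while the request still targets the
# original URL -- the behaviour the reporter is asking httpx for.
req = Request("http://example.com/", headers={"Host": "test"})

print(req.full_url)            # http://example.com/
print(req.get_header("Host"))  # test
```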

Invalid URL passed through URL probing

> chaos -d tesla.com -silent | httpx -tls-probe -silent | grep -v tesla.com 

https://sipfed.online.lync.com
https://meet.lync.com
https://aaronsinc.gcs-web.com
https://akamaisecure3.qualtrics.com
https://api.toolbox.tb.tesla.services
https://sipfed.online.lync.com
https://akamaisecure3.qualtrics.com
https://sched.lync.com
https://a248.e.akamai.net
https://acuitybrands.gcs-web.com
https://a248.e.akamai.net
https://toolbox.tb.tesla.services
https://alaskaairgroupinc.gcs-web.com
https://Amazon
https://amd.gcs-web.com
https://annexonbiosciences.gcs-web.com
https://api.toolbox.tb.tesla.services
https://ataxfund.gcs-web.com

Notice:- https://Amazon

Porting TLSGrab in httpx as additional flag

Porting the TLSGrab code into httpx as an additional flag, so it can be used to extract subdomains from the SSL Certificates of the input list.

Flag:- -tlsgrab and -tlsprobe

  1. tlsgrab will list all the additional and unique subdomains from the SSL certificate's DNS names.
  2. tlsprobe will probe the subdomains found via tlsgrab.

Notes:-

  1. When tlsgrab is used, HTTP probing will be disabled.
  2. Both flags cannot be used at the same time; when both are provided, tlsprobe takes precedence.

Example:-

cat input.txt 

starbucks.com
tesla.com
hackerone.com
cat input.txt | ./httpx -tlsgrab

starbucks.com
beta.starbucks.com
app.starbucks.fr
www.starbucks.fr
app.starbucks.co.uk
starbucks.ie
www.starbucks.com
app.starbucks.com
fr.starbucks.ca
preview.starbucks.com
starbucks.fr
starbucks.ca
app.starbucks.com.br
www.starbucks.ca
www.starbucks.ie
app.starbucks.ie
starbucks.com.br
fr.app.starbucks.ca
starbucks.de
www.starbucks.co.uk
app.starbucks.de
www.starbucks.de
app.starbucks.ca
www.starbucks.com.br
starbucks.co.uk
tesla.com
hackerone.com
www.hackerone.com
api.hackerone.com
cat input.txt | ./httpx -tlsprobe 

https://starbucks.ca
https://preview.starbucks.com
https://starbucks.com
https://starbucks.fr
https://app.starbucks.ie
https://www.starbucks.com
https://hackerone.com
https://www.hackerone.com
https://starbucks.com.br
https://api.hackerone.com
https://www.starbucks.co.uk
https://www.starbucks.com.br
https://www.starbucks.ie
https://starbucks.ie
https://starbucks.co.uk
https://fr.starbucks.ca
https://www.starbucks.de
https://beta.starbucks.com
https://starbucks.de
https://tesla.com
https://www.starbucks.ca
https://app.starbucks.com
https://www.starbucks.fr
https://app.starbucks.ca

bug with output http title

Hi, I found a bug: httpx cannot correctly output non-ASCII strings, such as Chinese.

▶ echo https://tnw.qq.com  | httpx -title -silent 
https://tnw.qq.com [The Next Web����վ_��Ѷ�Ƽ�_��Ѷ��]

I tried to fix it myself, but I'm not good at Go.

the correct output is:

echo https://tnw.qq.com  | httpx -title -silent | iconv -f gbk
https://tnw.qq.com [The Next Web中文站_腾讯科技_腾讯网]
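The garbled title comes from printing GBK-encoded bytes as if they were UTF-8; decoding with the right charset (which is what the `iconv -f gbk` step does) recovers the text. A minimal Python illustration of the same round trip:

```python
# The title bytes served by the site are GBK-encoded; decoding them
# with the gbk codec recovers the text, while forcing UTF-8 produces
# mojibake similar to httpx's raw output.
raw = "The Next Web中文站_腾讯科技_腾讯网".encode("gbk")

print(raw.decode("utf-8", errors="replace"))  # garbled, replacement chars
print(raw.decode("gbk"))                      # The Next Web中文站_腾讯科技_腾讯网
```

A proper fix would sniff the charset from the Content-Type header or the HTML meta tag before decoding the title.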

adding CIDR support

CIDR input is possible via stdin, directly with the -l flag, or using a file.

Adding color support for more visibility

The default output in the terminal easily gets mixed with other data; for clear differentiation, a fixed color could be assigned to each data type: status-code, content-length, title.

[issue] method flag is enabled on default

Describe the bug
My command was -

subfinder -d redacted.com -silent | httpx -silent | nuclei -t /root/nuclei-templates/cves/ -o /root/Desktop/redacted/ResultsN/cvelikeharsh.txt -v

and I am getting output as -

[WRN] Could not execute step: could not build http request: parse "https://supporttest.service.redacted.com [\x1b[35mGET\x1b[0m]": net/url: invalid control character in URL
[WRN] Could not execute step: could not build http request: parse "https://supporttest.service.redacted.com [\x1b[35mGET\x1b[0m]": net/url: invalid control character in URL
[WRN] Could not execute step: could not build http request: parse "https://supporttest.service.redacted.com
so on.....
[INF] No results found. Happy hacking!

Nuclei version
Version is 2.1.0

bash script hanging

Hi,

I'm running httpx against a file with 310,383 lines on a VPS with 1 GB of RAM and 1 vCPU,
but the bash script hangs after a while when it reaches the httpx step.
Does this have something to do with Linux (Ubuntu 20.04) itself?
I'm watching htop and everything seems normal!

output is not grep-able using grep

When either using -o to store output, or piping the output through | tee blah,
the result appears normal in bash, but if you open it in a text editor you will find it
contains escape codes and is not grep-able later when filtering for a certain response code.

flag updates for storing response data

Updating store-response with sr

Removing store-response-dir as it can be handled with a single flag.

Updating the default output path from the current directory to output, a custom directory can be provided as well.

subfinder -d uber.com -silent | httpx -sr uber-output

Feature: Store only Response header

Hey,

thanks for your great tool. Would it be possible to implement a feature to store only the HTTP response header instead of the whole response like -store-response does?

Installation issue with go modules

Description

Hello Team

Thank you very much for making this tool; it has been very helpful for us.
As I was poking around some features and flags, I noticed an issue with the mc flag: it actually renders an error when you try to match a 200 or 302 status code.

Test

Let's say I want to check the JS files that att.com has in the archive. I came up with this one-liner:
echo "att.com" | ~/go/bin/gau | grep '\.js$' | httpx -mc 200

Expected behavior

Checking results that have a 200 status code and displaying them

Issue

(screenshot)

What I tried

I tried to copy paste the same command you are using in the example section
 httpx -status-code -mc 200,302

Ended up with the same message
(screenshot)

Config

Ubuntu 18.04
Golang1.14
GO111MODULE=on
httpx version 0.0.7

Could not write response

Hi, when using the -sr flag to save responses, it often fails because the file names, which are derived from the URLs, are too long:
Could not write response, at path 'output/target.com_tooloooooooongpath?loonparams=true
As a result, you should consider saving the responses under shorter names.
Thanks for your amazing tools,
kurohost
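A common fix for over-long URL-derived filenames is to truncate and append a short digest so distinct URLs still map to distinct files. A sketch of that idea (short_name is a hypothetical helper; this is not httpx's actual naming scheme):

```python
import hashlib

def short_name(url: str, max_len: int = 80) -> str:
    """Build a filesystem-safe filename from a URL, truncating
    over-long names and appending a short hash so distinct URLs
    still get distinct files. Illustrative sketch only."""
    safe = url.replace("://", "_").replace("/", "_")
    if len(safe) <= max_len:
        return safe + ".txt"
    # keep a readable prefix, disambiguate with a digest of the full URL
    digest = hashlib.sha1(url.encode()).hexdigest()[:10]
    return f"{safe[:max_len]}_{digest}.txt"

print(short_name("http://example.com/short"))       # http_example.com_short.txt
print(short_name("http://example.com/" + "a" * 300))  # truncated + 10-char hash
```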

rolling back newly added flags

in a recent change, newly added flags (v1.0.0) were updated/removed.

$ httpx -version

    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   / 
 / / / / /_/ /_/ /_/ /   |  
/_/ /_/\__/\__/ .___/_/|_|  
             /_/            

		projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
[INF] Current Version: 1.0.2
echo 54.150.66.59 | httpx -silent -unsafe
flag provided but not defined: -unsafe

[feature] Adding flag to probe HTTP verbs

so the expected behavior is like this, i give the HTTPX the list of urls
and then HTTPX fetches all given url with all available HTTP METHOD request refer to this

expected raw output :

linux$: cat givenurl | httpx <multiplemethodoptionargs>
[GET] https://crotcrot.com/aduhcrot
[POST] https://crotcrot.com/aduhcrot
[PUT] https://crotcrot.com/aduhcrot
[etc] .. . . . .
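The requested expansion is essentially a cross product of HTTP methods and input URLs. A minimal Python sketch of the expected output format (the method list here is an illustrative subset of standard HTTP methods):

```python
from itertools import product

# Expand every input URL into one probe per HTTP method -- the
# behaviour the feature request describes.
methods = ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"]
urls = ["https://crotcrot.com/aduhcrot"]

for method, url in product(methods, urls):
    print(f"[{method}] {url}")
```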

unable to use internet while running httpx

I ran httpx against 1000 URLs and concurrently tried to browse in a browser, but I was unable to browse anything. I faced the same issue with masscan; I think the tool uses too many threads. But when I used ffuf with many threads, I could still browse and access the internet. Please look into this. I didn't even set any thread options in httpx, but I'm still unable to browse.

Flag to ignore errors and keep running (or -sr bug fix)

Hello guys, could you add a flag to ignore errors like this and keep running httpx?

http://www.target.com:80/xxx
https://target.com/xx.xx
https://www.target.com/xxxxxx/?hl=nl/
http://www.target.com:80/xxxx
[FTL] Could not write response, at path 'output/target.com:80_%22:269092299880021,%224456808911257%22:117599551732038,%224456808791254%22:219099868222532,%224456808631250%22:117241395101609,%224456808471246%22:379290365485680,%224456808271241%22:217348818399031,%224456808071236%22:512726018751670,%224456807991234%22:291913630927125,%224456807831230%22:291001267685617,%224456807791229%22:378181278930281%7D.txt', to disc.

...and httpx quit.

This is a problem when we are processing a giant URL list: at some point, due to an illegal character or a very long URL, it cannot write to disk.

thanks!
awesome tool.

[issue] bug with http-proxy flag

> chaos -silent -d hackerone.com | httpx -silent -status-code -mc 200

https://docs.hackerone.com [200]
https://www.hackerone.com [200]
https://api.hackerone.com [200]

Results with http-proxy flag is not correct.

> chaos -silent -d hackerone.com | httpx -silent -status-code -mc 200 -http-proxy http://127.0.0.1:8080

http://info.hackerone.com [200]
http://email.hackerone.com [200]
http://ns.hackerone.com [200]
http://go.hackerone.com [200]
http://links.hackerone.com [200]
http://o3.email.hackerone.com [200]
http://o1.email.hackerone.com [200]
http://mta-sts.managed.hackerone.com [200]
http://o2.email.hackerone.com [200]
http://mta-sts.hackerone.com [200]
http://docs.hackerone.com [200]
https://www.hackerone.com [200]
https://api.hackerone.com [200]

Reproducible crash with core

Apologies, I don't have enough experience developing with Go to take this any further, but I'm happy to send you the core dump if needed.

twitter.com.probe.gz

$ uname -a
Linux juniper 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ GOTRACEBACK=crash httpx -silent -l twitter.com.probe
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x6ff90c]

goroutine 38 [running]:
panic(0x757340, 0xa79ed0)
/usr/local/go/src/runtime/panic.go:722 +0x2c2 fp=0xc000045c30 sp=0xc000045ba0 pc=0x42e6d2
runtime.panicmem(...)
/usr/local/go/src/runtime/panic.go:199
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:394 +0x3ec fp=0xc000045c60 sp=0xc000045c30 pc=0x442ffc
github.com/projectdiscovery/httpx/common/httpx.(*HTTPX).NewRequest(0xc000012c00, 0x7c08db, 0x3, 0xc000d342e0, 0x18, 0xc000d342e0, 0x18, 0x0)
/home/josh/go/pkg/mod/github.com/projectdiscovery/[email protected]/common/httpx/httpx.go:151 +0x7c fp=0xc000045d00 sp=0xc000045c60 pc=0x6ff90c
main.analyze(0xc000012c00, 0x7c1784, 0x5, 0xc00001c3f0, 0x10, 0x0, 0xc0000730e0, 0xc0000681e0)
/home/josh/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:165 +0x17c fp=0xc000045f18 sp=0xc000045d00 pc=0x704dcc
main.main.func2(0xc00000e100, 0xc000012c00, 0x7c1784, 0x5, 0xc0000730e0, 0xc0000681e0, 0xc00001c3f0, 0x10)
/home/josh/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:121 +0xbc fp=0xc000045fa0 sp=0xc000045f18 pc=0x706d1c
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000045fa8 sp=0xc000045fa0 pc=0x45ab51
created by main.main
/home/josh/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:119 +0x87a

Adding flag to append a custom path based on given url

I have this list of URLs:

> cat urls.txt
https://asdsadsa.com/yeye
https://halalala/yuya

so I want httpx to be able to append a custom path; the expected option is something like this:

> cat urls.txt | httpx -append-custom-path-after-authority-uri "/lola" 
https://asdsadsa.com/lola/yeye GET
https://halalala/lola/yuya GET

but for this flag it seems httpx should parse the given URL; refer to this:

         foo://example.com:8042/over/there?name=ferret#nose
         \_/   \______________/\_________/ \_________/ \__/
          |           |            |            |        |
       scheme     authority       path        query   fragment
          |   _____________________|__
         / \ /                        \
         urn:example:animal:ferret:nose
so with this flag we could customize the port, path, key, value, and fragment of the URI,

so the user can append a custom path following the URI scheme above.
Reference: https://tools.ietf.org/html/rfc3986#page-15
You can use another reference if this one is not relevant for adding this flag.

So there will be two flags here:

  1. append to the URI path
  2. replace the URI path

For replacing the URI path, the PoC is like this:

> cat urls.txt | httpx -replace-path "crot"

https://asdsadsa.com/crot [GET]
https://halalala/crot [GET]
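Both proposed flags boil down to simple surgery on the URL's path component. A Python sketch of the two behaviours (append_path and replace_path are illustrative helpers, not actual httpx flags):

```python
from urllib.parse import urlsplit, urlunsplit

def append_path(url: str, prefix: str) -> str:
    """Insert a path segment right after the authority, keeping the
    rest of the path -- the proposed append behaviour."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=prefix + parts.path))

def replace_path(url: str, new_path: str) -> str:
    """Replace the whole path -- the proposed replace behaviour."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(path=new_path))

print(append_path("https://asdsadsa.com/yeye", "/lola"))   # https://asdsadsa.com/lola/yeye
print(replace_path("https://asdsadsa.com/yeye", "/crot"))  # https://asdsadsa.com/crot
```

Using urlsplit keeps the query and fragment intact, matching the RFC 3986 component breakdown shown above.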

panic: runtime error: invalid memory address or nil pointer dereference

OS

Linux scan 5.3.0-1022-azure #23~18.04.1-Ubuntu SMP Mon May 11 11:55:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

cmd

wc -l subdomain.txt
1969 subdomain.txt

cat subdomain.txt | httpx -silent -title -json -o result.json

Get Error

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x6faccc]

goroutine 517 [running]:
github.com/projectdiscovery/httpx/common/httpx.(*HTTPX).NewRequest(0xc000022b80, 0x7a8ff1, 0x3, 0xc000eb4570, 0x2c, 0xc000eb4570, 0x2c, 0x0)
        /root/go/pkg/mod/github.com/projectdiscovery/[email protected]/common/httpx/httpx.go:151 +0x7c
main.analyze(0xc000022b80, 0x7a9e90, 0x5, 0xc000eb4510, 0x24, 0x0, 0xc00007d0e0, 0xc0000721e0)
        /root/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:165 +0x173
main.main.func2(0xc00000e0e0, 0xc000022b80, 0x7a9e90, 0x5, 0xc00007d0e0, 0xc0000721e0, 0xc000eb4510, 0x24)
        /root/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:121 +0xab
created by main.main
        /root/go/pkg/mod/github.com/projectdiscovery/[email protected]/cmd/httpx/httpx.go:119 +0x88a
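For context, a common cause of this kind of panic is dereferencing a request that was returned alongside a non-nil error. The sketch below shows the general guard pattern; `newRequest` is a hypothetical helper, not the actual httpx code at the line in the trace.

```go
package main

import (
	"fmt"
	"net/http"
)

// newRequest wraps http.NewRequest and never hands back a nil request
// without a matching error, so a caller that checks only one of the two
// return values cannot dereference nil.
func newRequest(method, targetURL string) (*http.Request, error) {
	req, err := http.NewRequest(method, targetURL, nil)
	if err != nil {
		// e.g. malformed entries from a subdomain list end up here
		return nil, fmt.Errorf("could not build request for %q: %w", targetURL, err)
	}
	return req, nil
}

func main() {
	if _, err := newRequest("GET", "http://%zz"); err != nil {
		fmt.Println("skipped invalid target:", err)
	}
}
```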

remove color code with -o flag

Hey guys, thank you for making such a useful tool!

Could you remove the color codes when the -o flag is used? When I run:

echo https://projectdiscovery.io | ./httpx -title -o 1.txt

the output file contains ANSI color codes:

https://projectdiscovery.io [\x1b[36mProjectDiscovery | Security Through Intelligent Automation\x1b[0m]

With the -no-color flag the file is fine:

echo https://projectdiscovery.io | ./httpx -title -no-color -o 1.txt
https://projectdiscovery.io [ProjectDiscovery | Security Through Intelligent Automation]

but then the terminal output has no color either. I just want the output file without color codes while keeping color in the terminal. Can that be done?
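One possible approach is to strip ANSI escape sequences only on the file-writer path while leaving terminal output untouched. A sketch in Go; `stripColors` is a hypothetical helper, not an httpx function.

```go
package main

import (
	"fmt"
	"regexp"
)

// ansi matches SGR color sequences such as \x1b[36m ... \x1b[0m.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// stripColors removes color codes so file output stays plain while
// terminal output can keep its coloring.
func stripColors(s string) string {
	return ansi.ReplaceAllString(s, "")
}

func main() {
	colored := "https://projectdiscovery.io [\x1b[36mProjectDiscovery\x1b[0m]"
	fmt.Println(stripColors(colored)) // https://projectdiscovery.io [ProjectDiscovery]
}
```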

[BUG] ports in combination with path

httpx returns the same result even when the port is not open.

echo 1.1.1.1 | httpx -status-code -title -ports 8081,1111,1,21,9090,1111111111 -path /test -silent
https://1.1.1.1:8081/test [301] [301 Moved Permanently]
https://1.1.1.1:1/test [301] [301 Moved Permanently]
https://1.1.1.1:9090/test [301] [301 Moved Permanently]
https://1.1.1.1:1111/test [301] [301 Moved Permanently]
https://1.1.1.1:1111111111/test [301] [301 Moved Permanently]
https://1.1.1.1:21/test [301] [301 Moved Permanently]

[BUG] CIDR in combination with -ports

To replicate the issue of a valid host missing from results with CIDR input:

With prips:

> prips 1.1.1.0/24 | httpx -title -content-length -status-code -ports 80,443 -silent | grep 1.1.1.1:80

http://1.1.1.1:80 [301] [186] [301 Moved Permanently]

With httpx's internal CIDR handler (nothing is returned):

> echo 1.1.1.0/24 | httpx -title -content-length -status-code -ports 80,443 -silent | grep 1.1.1.1:80

To replicate the duplication issue with CIDR input:

> echo 1.1.1.0/24 | httpx -title -content-length -status-code -ports 80,443 -silent | sort | grep http://1.1.1.24:80
http://1.1.1.24:80 [403] [16] []
echo 1.1.1.0/24 | httpx -title -content-length -status-code -ports 80,443 -silent | sort | grep http://1.1.1.24:80

http://1.1.1.24:80 [403] [16] []
http://1.1.1.24:80 [403] [16] []
http://1.1.1.24:80 [403] [16] []

I have opened ports 80 and 443 on 192.168.8.1

echo 192.168.8.1 | ./httpx  -title -content-length -status-code -ports 80,443

    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   / 
 / / / / /_/ /_/ /_/ /   |  
/_/ /_/\__/\__/ .___/_/|_|  
             /_/              v1           

		projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
https://192.168.8.1:443 [307] [13] []
http://192.168.8.1:80 [307] [13] []

But when I run a CIDR scan, httpx does not return any open ports on 192.168.8.1:

echo 192.168.8.0/24 | ./httpx  -title -content-length -status-code -ports 80,443

    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   / 
 / / / / /_/ /_/ /_/ /   |  
/_/ /_/\__/\__/ .___/_/|_|  
             /_/              v1           

		projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.

http-proxy still ignored, user-agent trimming issue

Hello,

I'm using httpx v0.8, the command:
cat urls.txt | httpx -http-proxy http://*.*.*.*:3128 -mc 200,302 -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0' -H 'Cookie: ...' -o result.txt

For some reason httpx still fetches the given URL list from my default IP, which results in 403 errors from the server. Interestingly, when I use ffuf with the same proxy setting and the same two headers, everything works as expected.

I tried to check the source code of cmd/httpx.go to find the issue myself, but I have no Go knowledge at all, so after a few comparisons of how it's done in ffuf and in httpx I couldn't spot why the proxy is still ignored in httpx.

Content type doesn't work when no-color is used.

chaos -d hackerone.com | httpx -content-type -no-color
    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   / 
 / / / / /_/ /_/ /_/ /   |  
/_/ /_/\__/\__/ .___/_/|_|  
             /_/              v1           

		projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
https://mta-sts.managed.hackerone.com []
https://docs.hackerone.com []
https://mta-sts.hackerone.com []
https://mta-sts.forwarding.hackerone.com []
https://www.hackerone.com []
https://api.hackerone.com []
https://support.hackerone.com []
https://resources.hackerone.com []

Bug: -store-response not always working

If the tool tries to write the response for a URL containing a slash, an error occurs.

Steps to replicate:

# (Tested on Ubuntu 20.04) 
echo "https://www.google.com/" > hosts.txt
httpx -l hosts.txt -json -store-response -store-response-dir ./tmp/
# This should write the file "./tmp/google.com.txt"
# > No file is created

Due to the trailing slash (or slashes within the path), the tool tries to write the file to "./tmp/google.com/.txt": google.com is interpreted as a directory, which does not exist. This leads to an uncaught error in the response-writing code.
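A sketch of one possible fix: flatten the URL into a safe, directory-free file name before writing. `responseFilename` is a hypothetical helper, not the actual httpx code.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// responseFilename turns a URL into a flat, filesystem-safe file name so
// https://www.google.com/ no longer produces "google.com/.txt".
func responseFilename(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	name := u.Host + u.Path
	name = strings.Trim(name, "/")
	name = strings.ReplaceAll(name, "/", "_")
	if name == "" {
		name = "index"
	}
	return name + ".txt", nil
}

func main() {
	f, _ := responseFilename("https://www.google.com/")
	fmt.Println(f) // www.google.com.txt
}
```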

Add output delimiter option (flag) for easy parsing

It would be helpful if we could specify a delimiter for the output columns (values).

For example:

  • Currently output looks like this
$ ~ httpx -l urls.txt -title -content-length -status-code -silent
https://example.com [200] [724] [Example Home Page]
  • Improved output
$ ~ httpx -l urls.txt -title -content-length -status-code -silent -delimiter ","
https://example.com,[200],[724],[Example Home Page]

OR without square brackets

$ ~ httpx -l urls.txt -title -content-length -status-code -silent -delimiter ","
https://example.com,200,724,Example Home Page

This will help parsing output with commands like cut, awk, sed, etc.
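The requested behavior amounts to joining the probe fields with a user-chosen delimiter instead of the bracketed default. A sketch; `formatLine` is illustrative, not httpx's output code.

```go
package main

import (
	"fmt"
	"strings"
)

// formatLine joins the URL and its probe values with a caller-chosen
// delimiter; with "," the output becomes CSV-friendly.
func formatLine(targetURL string, fields []string, delim string) string {
	return targetURL + delim + strings.Join(fields, delim)
}

func main() {
	fields := []string{"200", "724", "Example Home Page"}
	fmt.Println(formatLine("https://example.com", fields, ","))
	// https://example.com,200,724,Example Home Page
}
```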
