kpcyrd / sn0int

Semi-automatic OSINT framework and package manager

Home Page: https://sn0int.readthedocs.io/

License: GNU General Public License v3.0

Dockerfile 0.14% Makefile 0.10% Lua 2.23% CSS 0.14% PLpgSQL 0.35% Rust 95.76% Shell 0.16% Python 0.39% Perl 0.10% JavaScript 0.05% Handlebars 0.59%
osint intelligence osint-framework certificate-transparency rust security security-audit security-scanner investigation reconnaissance

sn0int's Introduction


sn0int (pronounced /snoɪnt/) is a semi-automatic OSINT framework and package manager. It's used by IT security professionals, bug bounty hunters, law enforcement agencies and in security awareness training to gather intelligence about a given target or about yourself. sn0int enumerates attack surface by semi-automatically processing public information and mapping the results into a unified format for follow-up investigations.

Among other things, sn0int is currently able to:

  • Harvest subdomains from certificate transparency logs and passive DNS
  • Mass-resolve collected subdomains and scan for HTTP or HTTPS services
  • Enrich IP addresses with ASN and GeoIP info
  • Harvest emails from PGP keyservers and whois
  • Discover compromised logins in breaches
  • Find somebody's profiles across the internet
  • Enumerate local networks with unique techniques like passive ARP
  • Gather information about phone numbers
  • Harvest activity and images from social media profiles
  • Basic image processing

sn0int is heavily inspired by recon-ng and maltego, but remains more flexible and is fully open source. None of the investigations listed above are hardcoded in the source; instead they are provided by modules that are executed in a sandbox. You can easily extend sn0int by writing your own modules and share them with other users by publishing them to the sn0int registry. This allows you to ship updates for your modules on your own instead of sending pull requests to the sn0int codebase.
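
To give a rough idea, here is a minimal Lua sketch of a module; the header fields and the run() entry point follow the module format from the documentation, while the logic itself is just a placeholder:

-- Description: Example module (sketch only)
-- Version: 0.1.0
-- Source: domains
-- License: GPL-3.0

function run(arg)
    -- placeholder logic: record one hard-coded subdomain for each domain in scope
    db_add('subdomain', {
        domain_id=arg['id'],
        value='www.' .. arg['value'],
    })
end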

For questions and support join us on IRC: irc.hackint.org:6697/#sn0int


Installation


Archlinux

pacman -S sn0int

macOS

brew install sn0int

Debian/Ubuntu/Kali

There are prebuilt packages signed by a Debian maintainer:

sudo apt install curl sq
curl -sSf https://apt.vulns.sexy/kpcyrd.pgp | sq dearmor | sudo tee /etc/apt/trusted.gpg.d/apt-vulns-sexy.gpg > /dev/null
echo deb http://apt.vulns.sexy stable main | sudo tee /etc/apt/sources.list.d/apt-vulns-sexy.list
sudo apt update
sudo apt install sn0int

Docker

docker run --rm --init -it -v "$PWD/.cache:/cache" -v "$PWD/.data:/data" ghcr.io/kpcyrd/sn0int

Alpine

apk add sn0int

OpenBSD

pkg_add sn0int

Gentoo

layman -a pentoo
emerge --ask net-analyzer/sn0int

NixOS

nix-env -i sn0int

For everything else please have a look at the detailed list.

Getting started

Rationale

This tool was written to help companies understand their attack surface from a blackbox point of view. It's often hard to grasp that something is easier to discover than assumed, which leaves people with a false sense of security.

It's also designed to be useful for red team assessments and bug bounties, which likewise help companies identify weaknesses that could result in a compromise.

Some functionality was written to do the same for individuals, to raise awareness about personal attack surface, privacy and how much data is publicly available. These issues are often out of scope in bug bounties, sometimes by design. We believe that blaming the user is the wrong approach; these issues should be addressed at the root cause by the people designing those systems.

License

GPLv3+

sn0int's People

Contributors

0x011011110, definitepotato, dependabot[bot], herrspace, hovman, kpcyrd, sebdufourcq, spriteovo, stoeckmann, weiznich, ysf


sn0int's Issues

https issues with aarch64 (BadRecordMac)

I've received a bug report from @steev over IRC about connection issues with HTTPS:

an error  occurred trying to connect: received fatal alert: BadRecordMac

I could reproduce this issue and it seems it has been resolved in rustls/rustls#149. I'm still pulling in an old version via chrootable-https (work in progress), but putting it all together turned out to be complicated because rocket_contrib still depends on the old ring version, which I can't get rid of.

I'm going to track the progress on the 0.9.1 bugfix release in this issue.

Delete command

Right now we can't delete anything (we can only noscope it). This should be fairly simple to add.

Support conditions in sources

It should be possible to add a condition to a source. We could also attempt to filter with lua, but for large workspaces this might become a performance bottleneck.
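
Until then, filtering can be approximated inside the script itself; a minimal sketch in Lua (the condition below is a made-up example):

function run(arg)
    -- made-up condition: only process values ending in .example.com;
    -- this runs once per source entity, which is the performance concern above
    if not arg['value']:match('%.example%.com$') then
        return
    end
    -- ... actual module logic ...
end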

Error message when using cargo install

Compiling libc v0.2.48
Compiling utf-8 v0.7.5
Compiling fnv v1.0.6
error: invalid format string: expected '}', found '?'
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/utf-8-0.7.5/src/read.rs:40:27
|
40 | write!(f, "invalid byte sequence: {:02x?}", bytes)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: invalid format string: expected '}', found '?'
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/utf-8-0.7.5/src/lib.rs:42:17
|
42 | / "found invalid byte sequence {invalid_sequence:02x?} after
43 | | {valid_byte_count} valid bytes, followed by {unprocessed_byte_count} more
44 | | unprocessed bytes",
| |___________________________________^

error: invalid format string: expected '}', found '?'
--> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/utf-8-0.7.5/src/lib.rs:54:17
|
54 | / "found incomplete byte sequence {incomplete_suffix:02x?} after
55 | | {valid_byte_count} bytes",
| |__________________________________________^

error: aborting due to 3 previous errors

error: Could not compile utf-8.
warning: build failed, waiting for other jobs to finish...
error: failed to compile sn0int v0.8.1 (file:///home/Desktop/Downloaded%20Tools/sn0int), intermediate artifacts can be found at /home/Desktop/Downloaded Tools/sn0int/target

Caused by:
build failed

Parsable output

Right now sn0int data is difficult to process and somebody would have to read the sqlite database directly. There should be two types of parsable output:

sn0int run

This subcommand should be extended to allow passing all parameters as arguments, with an additional flag to stream JSON data.

sn0int select

A select command should be able to output a JSON stream. The select command also needs to be exposed as a subcommand. Done: sn0int select --json

Automatic fields on insert

Right now we deserialize directly into NewOwnedUrl (for example). We should change this to something else that is either an alias to NewOwnedUrl or a new type that implements Into<NewOwnedUrl> (or something similar that returns a result).

This would allow some fields to be set automatically from other fields. For example, we could have a path field that's automatically kept in sync with value, or we could fill the family field in ipaddrs automatically.

Remote tunnels

Considering all the network discovery tools we have, it might make sense to add features to support use cases with multiple sensors that log to a central sn0int instance.

A naive solution is fairly trivial:

ncat -vlkp 1234 | sn0int run --stdin -v passive-arp -o network=example

There are some obvious disadvantages though:

  • An attacker can read the data
  • An attacker can replay or otherwise manipulate the data we receive

Setting up openssl is annoying; instead we should either use zmq or build something similar:

sn0int tunnel keygen                        # generate a keypair in
                                            # ~/.local/share/sn0int
sn0int tunnel connect <addr> <pubkeys ...>  # connect to addr and expect one
                                            # of these pubkeys, usually
                                            # only one
sn0int tunnel listen <addr> <pubkeys ...>   # bind to addr and wait for
                                            # connections, only accept
                                            # clients with the right pubkey

This would allow us to rewrite our previous approach to:

sn0int tunnel listen 0.0.0.0:1234 "$KEY" | \
    sn0int run --stdin -v passive-arp -o network=example

Note that listen could be replaced with connect; the connection direction doesn't indicate in which direction the data flows. In fact, the connection would be bidirectional, but only one side would actually expect data.

On the sending side we would want to be able to define a buffer (possibly with disk persistence). This means that sn0int tunnel keeps reading data from stdin into a ring buffer of configurable size even if the network is down, and then replays everything once the network is up again. The script might want to know if messages have been skipped/dropped.

To avoid the need for pipes we could eventually add this as an option for sn0int run so it's able to connect/bind on its own. Especially in listen mode we could allow one script child per connection to avoid interference if multiple clients write at the same time.

Identities/Profiles

There should be a table to add online profiles.

Off the top of my head, the following columns should exist:

  • service - the name of the service. This should be somewhat unified to make sure modules are compatible with each other.
  • username - the public username or login name
  • url (optional) - if profiles have public urls, the url to the profile
  • email (optional) - the profile email, if available
  • last_seen (optional) - the datetime the profile was last online, possibly monitored using a cronjob
  • last_location (optional) - the last location published by this profile
  • last_location_time (optional) - the datetime this location was published

If a service has no concept of usernames, the username column should duplicate the email column. The combination of service+username should be unique.
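
For illustration, a module might eventually record such a profile roughly like this; the entity name and the db_add call below are assumptions based on this proposal, not an existing API:

-- hypothetical sketch; the 'profile' entity and its columns are taken from
-- the proposal above
db_add('profile', {
    service='github.com',
    username='kpcyrd',
    url='https://github.com/kpcyrd',
})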

We probably also need a Person table to link profiles to an individual. A profile could be used by multiple individuals, so this might have to be a many-to-many relationship. If no person is known/linked, this should be displayed as Anonymous.

This might be tricky since the location data should be available on the Person object by populating the DetailedPerson from its children, but this would update the location of every person linked to a shared profile.

We probably also want to link the email table to the Person table and extend the pgp script to extract more info from the rfc4880 uid.

Allow devs to check if module changes need to be pushed

The workflow I want to support is:

sn0int publish *.lua

Running this twice currently fails because it would try to republish an existing version. Instead, it should report a successful publish if the upload is identical to the existing version.

Registry dependencies in docker aren't locked

I've noticed that, due to the way the sn0int-registry image is built, the Cargo.lock file is ignored and the latest semver-compatible version of everything is used during the build.

This should be resolved.

Regression tests for modules

There should be a way to easily verify that a module is still fully functional. This could be done by checking the module for a test function and, if it exists:

  • allow executing sn0int test foo, similar to sn0int run
  • automatically invoke this function prior to a sn0int publish (with a flag to skip this)

The test function would prepare a temporary database, invoke the run function with different arguments and then check the database for certain things, e.g. whether some entities have been added and the values are set correctly.
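
A rough sketch of what such a test function could look like; the test() hook and the error reporting are assumptions from this proposal, and the check uses the db_select function proposed further down in this tracker:

function test()
    -- invoke the regular entry point with a known input
    run({value='example.com'})

    -- then check the temporary database for the expected result, e.g. that a
    -- known subdomain was added; how a failed check is reported is part of
    -- this proposal
    if db_select('subdomain', 'www.example.com') == nil then
        return 'expected subdomain was not added'
    end
end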

error: cargo install: diesel: unable to get packages from source

Could this be a transient connectivity issue?

 Downloading diesel v1.4.1                                                                                                                
error: failed to compile `sn0int v0.9.0 (/[redacted]/sn0int)`, intermediate artifacts can be found at `[redacted]/sn0int/target`

Caused by:
  unable to get packages from source

Caused by:
  failed to parse manifest at `/home/[redacted]/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-1.4.1/Cargo.toml`

Caused by:
  feature `rename-dependency` is required

consider adding `cargo-features = ["rename-dependency"]` to the manifest

geoip lookups

There should be a way to do geoip lookups for ip addresses.

Update modules

Right now there's no command to "update all outdated modules"; instead, a user has to re-run the install command for each outdated module manually. There should be a command that checks each module and updates it if needed.

Download progress bar

The initial downloads can take a moment depending on the available bandwidth; we should display a progress indicator that shows what percentage has been downloaded so far.

Add Option<String> description field to ipaddrs

This field is reserved for the server/instance name configured at the hosting provider. Very often this field is going to be null; it is only used if we have access to the cloud provider, for example.

Benchmark tooling

It seems some modules perform somewhat poorly. I suspected this is due to db_insert, but a benchmark investigation that just inserts 1k subdomains finishes in 0.5s.

The easiest way to tune modules for speed would be a mode that hooks into the http client, returns static responses and writes into a temporary database that is deleted afterwards.

Maybe the proper way to avoid this is just #14 though.

Add geofences/zones

Since we're going to have geolocations we also want to have geofences and assign them a name.

I'm not sure about naming yet.
We might want to avoid the following because of ambiguity: zone, location or geolocation
We might want to use one of the following: geozone, geofence

I'm not sure if we're going to support full polygons or keep it simple:

  • id
  • name
  • latitude
  • longitude
  • radius

This might replace the geolocation columns in the network table.

Public documentation

There is currently very little documentation, to avoid drawing too much attention to a half-finished project.

We should slowly start adding documentation and put it on https://sn0int.readthedocs.io/en/latest/

The docs should feature:

  1. Intro
  2. Installation
  3. Running your first investigation
    3.1 Installing the default modules
    3.2 Adding something to scope
    3.3 Running a module
    3.4 Unscoping entities
    3.5 Running follow-up modules on the results
  4. Writing your own module

We should also generate a man page while we're at it.

Subcommand to create new module

It's a bit tedious to create a new module due to the headers needed. A subcommand to generate the basic boilerplate would be nice.

Cryptocurrency addresses

It would be useful if we could add cryptocurrency addresses to our scope, link them to a person and also check the balance.

This could be somewhat tricky because I'm not sure an address identifies an entity uniquely. We might have to set the unique constraint on currency + address instead of just the address.

db_add should return null for unscoped objects

There are some modules (mostly passive dns) that don't cascade the unscoped state properly:

If the domain example.com is unscoped, a module can still add subdomains to example.com, without even realizing that it is out of scope.

This could be resolved by making the return value of db_add nullable, so it might not return an object, and scripts need to handle that case.
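
A sketch of how a script would handle the proposed nullable return value (fragment from inside run()):

-- proposed behaviour: db_add returns nil if the parent domain is out of scope
local subdomain_id = db_add('subdomain', {
    domain_id=arg['id'],
    value='www.' .. arg['value'],
})
if subdomain_id == nil then
    -- the entity was not added because it is out of scope; stop processing
    return
end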

Workspace commands

Right now there's a switch_db command to select a different workspace; this should be changed to:

  • workspace - list all workspaces
  • workspace <foo> - switch to a different workspace

There should be tab completion for workspace selection to avoid typo'd workspaces filling up ~/.local/share/sn0int. We should also start enforcing a set of valid characters for workspace names.

Filtering with joins

I'd like to select all devices in a given network. It's currently not possible to do that in a single filter because we can only filter on fields in a row, without joins or subqueries.

  • devices
  • network_devices
  • network

This needs some more thought because it's a problem we're facing in more than one place.

Automatic transformation for redirects

I just wrote the second module that has a manual transformation written in Lua to prepare the redirect field.

Since we've had automatic transformations for a while, we should automatically try to populate that field if it is None.

Multithreading

It should be possible to run scripts concurrently. This has some implications:

  • the database either needs locking or the code should ensure only one operation runs at a time
  • we need to change the spinner, either stack them or show multiple targets in the label
  • the log lines would need more context; often it's not clear which source item produced a certain log line

Preview and filter targets

After running use <module> a user should be able to use a target command to preview or filter the targets.

[sn0int][default] > use ctlogs 
[sn0int][default][dev/ctlogs] > target
[list all domains]
[sn0int][default][dev/ctlogs] > target where value like %.com
[+] 12 entities selected
[sn0int][default][dev/ctlogs] > target
[list all domains that end in .com]
[sn0int][default][dev/ctlogs] > 
[sn0int][default][dev/ctlogs] > run

This possibly replaces #1

Verbose db_add

There should be a way to log entities that we found even if they already exist in our database. This would be very useful for module development. We might add this as a -v flag to run.

This could be extended with a debug function that behaves like the info function but is only enabled if run -vv is executed.
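
Inside a module this could look roughly like the following; info() is the existing logging function, while debug() is the proposed addition:

info('discovered: www.example.com')          -- shown during a normal run
debug('raw api response: {"entries": []}')   -- proposed: only shown with run -vv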

Allow disabling spinners

There are some cases in which we don't want progress indicators and instead want a simple log.

This should be enabled if:

  • stdout is not a tty
  • ui.no_spinners is set in the config
  • SN0INT_NO_SPINNERS= is set

Private modules

While it's possible to drop custom scripts into ~/.local/share/sn0int/modules/, we should explicitly support private modules. This could conflict with the registry and #8 though.

We could improve support by allowing the user to clone a repo into sn0int/modules/foo/. If we discover a git repo at sn0int/modules/foo/.git we would exclude it from a regular update and instead try to fetch+fast-forward the repo.

We could also invoke git clone for the user if we detect a url argument for sn0int install.

Browsing the registry

Right now it's difficult to discover new modules besides running quickstart or searching for a specific author.

There should be a way to browse through available modules even if you don't know what you're looking for yet. We might want to prioritize featured modules and modules with many downloads.

Release build hangs forever due to LTO

After some troubleshooting it seems the release build hangs forever during lto:

 INFO 2019-02-23T14:19:15Z: rustc_codegen_llvm::back::lto: 1470 symbols to preserve in this crate
 INFO 2019-02-23T14:19:15Z: rustc_codegen_llvm::back::lto: going for that thin, thin LTO
 INFO 2019-02-23T14:19:15Z: rustc_codegen_llvm::back::lto: local module: 0 - sn0int.15aj1zuq-cgu.5
 INFO 2019-02-23T14:19:15Z: rustc_codegen_llvm::back::lto: local module: 1 - sn0int.15aj1zuq-cgu.0
 INFO 2019-02-23T14:19:16Z: rustc_codegen_llvm::back::lto: local module: 2 - sn0int.15aj1zuq-cgu.15
 INFO 2019-02-23T14:19:16Z: rustc_codegen_llvm::back::lto: local module: 3 - sn0int.15aj1zuq-cgu.10
 INFO 2019-02-23T14:19:16Z: rustc_codegen_llvm::back::lto: local module: 4 - sn0int.15aj1zuq-cgu.3
 INFO 2019-02-23T14:19:16Z: rustc_codegen_llvm::back::lto: local module: 5 - sn0int.15aj1zuq-cgu.6
 INFO 2019-02-23T14:19:17Z: rustc_codegen_llvm::back::lto: local module: 6 - sn0int.15aj1zuq-cgu.1
 INFO 2019-02-23T14:19:17Z: rustc_codegen_llvm::back::lto: local module: 7 - sn0int.15aj1zuq-cgu.12
 INFO 2019-02-23T14:19:17Z: rustc_codegen_llvm::back::lto: local module: 8 - sn0int.15aj1zuq-cgu.14
 INFO 2019-02-23T14:19:17Z: rustc_codegen_llvm::back::lto: local module: 9 - sn0int.15aj1zuq-cgu.4
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 10 - sn0int.15aj1zuq-cgu.2
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 11 - sn0int.15aj1zuq-cgu.7
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 12 - sn0int.15aj1zuq-cgu.9
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 13 - sn0int.15aj1zuq-cgu.13
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 14 - sn0int.15aj1zuq-cgu.11
 INFO 2019-02-23T14:19:18Z: rustc_codegen_llvm::back::lto: local module: 15 - sn0int.15aj1zuq-cgu.8
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto: thin LTO data created
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto: thin LTO import map loaded
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto: checking which modules can be-reused and which have to be re-optimized.
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.5: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.0: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.15: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.10: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.3: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.6: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.1: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.12: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.14: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.4: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.2: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.7: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.9: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.13: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.11: re-compiled
 INFO 2019-02-23T14:19:19Z: rustc_codegen_llvm::back::lto:  - sn0int.15aj1zuq-cgu.8: re-compiled
 INFO 2019-02-23T14:19:20Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.3
 INFO 2019-02-23T14:19:20Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.2
 INFO 2019-02-23T14:19:20Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.0
 INFO 2019-02-23T14:19:20Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.1
 INFO 2019-02-23T14:19:42Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.6
 INFO 2019-02-23T14:19:48Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.5
 INFO 2019-02-23T14:20:02Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.4
 INFO 2019-02-23T14:20:03Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.8
 INFO 2019-02-23T14:20:09Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.10
 INFO 2019-02-23T14:20:18Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.14
 INFO 2019-02-23T14:20:19Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.15
 INFO 2019-02-23T14:20:28Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.9
 INFO 2019-02-23T14:20:32Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.13
 INFO 2019-02-23T14:20:34Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.11
 INFO 2019-02-23T14:20:38Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.12
 INFO 2019-02-23T14:20:43Z: rustc_codegen_llvm::back::lto: running thin lto passes over sn0int.15aj1zuq-cgu.7

The solution is building with lto disabled:

RUSTFLAGS="-C lto=no" cargo build --release

This issue has been reproduced with:

  • rustc 1.32.0 (9fda7c223 2019-01-16)
  • rustc 1.34.0-nightly (e1c6d0057 2019-02-22)

This seems to be a rust/llvm issue; trying to disable lto in the Cargo.toml didn't work though:

[profile.release]
lto = false

Record and log redirect headers

The redirect header is a special case that should be added to the urls table and added to the log output.

This would make the url-scan script more useful.

Add breaches

There was a discussion on IRC about whether it would be useful to have a breach struct and allow linking emails to a breach. I think this would be useful; for performance reasons we wouldn't store the full breach in our database, but would instead probably go with something like this:

# breaches
- id: i32
- value (name of the breach): String

# breach_emails
- email_id: i32
- breach_id: i32
- password: Option<String>

This would allow selecting all scoped emails that are involved in a breach and also listing all breaches an email has been involved in. The password is optional depending on whether we have a copy of the breach or not.

Note that due to breach compilations a breach can contain more than one password for an email.

Access token system

Some systems need access/api tokens, e.g. shodan or github. We should track them in a somewhat unified format and allow the user to add them to the database. A script may request access to those tokens, but there should be a permission system in place so that not every script has access to all tokens.

cidr_contains

There should be a function in the stdlib that tests if a given cidr subnet contains a specific ip address and returns a bool.
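
A minimal usage sketch, assuming the function ends up with the proposed name and a (subnet, address) argument order:

-- sketch of the proposed helper; name and signature are assumptions
if cidr_contains('192.168.0.0/24', '192.168.0.13') then
    info('address is inside the subnet')
end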

db_select

Behaves very similarly to db_insert, except that nil is returned if the object does not exist.

This is useful when we discover a subdomain and are not sure whether it belongs to a domain that is in scope.

db_select does not allow listing objects. If the object exists but is excluded from scope, nil is returned as well.
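
A sketch of the intended usage, assuming db_select takes the entity type and value and returns the matching id (or nil):

-- proposed behaviour: nil if the domain is unknown or out of scope
local domain_id = db_select('domain', 'example.com')
if domain_id ~= nil then
    -- the parent domain is known and in scope, so record the subdomain
    db_add('subdomain', {
        domain_id=domain_id,
        value='www.example.com',
    })
end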

Streamline logging

The following things should be fixed:

  • Inserts that trigger an upsert should be logged as an update
  • An update should log the identifier/value, the old value (or none) and the new value, e.g. column: old => new
  • Avoid duplicate code for scoped/unscoped that only differ in color codes

Also, consider:

  • Reuse the one-line representation in select as a base for detailed representation
  • Reuse the one-line representation in select for inserts

Untrusted tls certificates

Right now we are enforcing valid certificates for everything. This prevents the url-scan module from discovering https endpoints if we don't trust that cert.

It should be possible to disable TLS verification per request. The trust status should be available in the response object, and the url table should have a column that tracks the certificate status.
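
From a module's point of view this could look something like the sketch below, built on the existing http_mksession/http_request/http_send calls; the verify option and the trust status on the response are assumptions from this proposal:

-- hypothetical per-request opt-out of certificate verification
local session = http_mksession()
local req = http_request(session, 'GET', 'https://self-signed.example.com/', {
    verify=false,  -- assumed option name, part of this proposal
})
local resp = http_send(req)
-- the proposal: resp would also expose the trust status so it can be stored
-- in the url table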

Error: EOF while parsing a value at line 1 column 0

Hello,
Following your tutorial, I get the following error:
[sn0int][example][kpcyrd/ctlogs] > select domains
[sn0int][example][kpcyrd/ctlogs] > add domain
[?] Domain: example.com
[sn0int][example][kpcyrd/ctlogs] > run
[-] Failed "example.com": EOF while parsing a value at line 1 column 0
[+] Finished kpcyrd/ctlogs (1 errors)

Quickstart for users

We do not ship any modules by default, and a user currently needs to copy or symlink the contents of the modules/ folder to ~/.local/share/sn0int/modules/ or manually pick all relevant modules. If we detect an empty modules folder we should suggest running the quickstart command, which would query the API for featured modules and install them.


                   ___/           .
     ____ , __   .'  /\ ` , __   _/_
    (     |'  `. |  / | | |'  `.  |
    `--.  |    | |,'  | | |    |  |
   \___.' /    | /`---' / /    |  \__/

        osint | recon | security

[+] Connecting to database
[+] Loaded 0 modules
[*] No modules found, run quickstart to install default modules
[sn0int][default] > quickstart
[installing module]
[installing module]
[installing module]
[...]
[sn0int][default] > 

This command should be exposed as sn0int quickstart as well.

Allow exposing stdin to scripts

In some cases we want to pass a file to a script. Since we don't expose any functions to access the filesystem, this is currently not possible, but we could add an --stdin mode that enables some functions to access stdin. This would allow parsing some text-based files or the output of some tools:

cat dhcpd.leases | sn0int run --stdin -f ./dhcpd-parser.lua
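
Inside the script this could look roughly like the following; the stdin_readline() helper is an assumption used for illustration, not an existing function:

function run(arg)
    while true do
        local line = stdin_readline()  -- assumed helper enabled by --stdin
        if line == nil then
            break
        end
        -- parse the line (e.g. a dhcpd.leases entry) and db_add the result
    end
end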
