
Real Intelligence Threat Analytics (RITA) is a framework for detecting command and control communication through network traffic analysis.

License: GNU General Public License v3.0



RITA (Real Intelligence Threat Analytics)

RITA Logo

If you get value out of RITA and would like to go a step further with hunting automation, futuristic visualizations, and data enrichment, take a look at AC-Hunter.

Sponsored by Active Countermeasures.


RITA is an open source framework for network traffic analysis.

The framework ingests Zeek Logs in TSV format, and currently supports the following major features:

  • Beaconing Detection: Search for signs of beaconing behavior in and out of your network
  • DNS Tunneling Detection: Search for signs of DNS-based covert channels
  • Blacklist Checking: Query blacklists to search for suspicious domains and hosts

Install

Please see our recommended System Requirements document if you wish to use RITA in a production environment.

Automated Install

RITA provides an install script that works on Ubuntu 20.04 LTS, Debian 11, Security Onion, and CentOS 7.

Download the latest install.sh file here and make it executable: chmod +x ./install.sh

Then choose one of the following install methods:

  • sudo ./install.sh will install RITA as well as supported versions of Zeek and MongoDB. This is suitable if you want to get started as quickly as possible or you don't already have Zeek or MongoDB.

  • sudo ./install.sh --disable-zeek --disable-mongo will install RITA only, without Zeek or MongoDB. You may also use these flags individually.

Docker Install

See here.

Manual Installation

To install each component of RITA manually, see here.

Upgrading RITA

See this guide for upgrade instructions.

Getting Started

Configuration File

RITA's config file is located at /etc/rita/config.yaml, though you can specify a custom path on individual commands with the -c command line flag.

  • The Filtering: InternalSubnets section must be configured or you will not see any results in certain modules (e.g. beacons, long connections). If your network uses the standard RFC1918 internal IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) you don't need to do anything, as the default InternalSubnets section already includes these. Otherwise, adjust this section to match your environment. RITA's main purpose is to find signs of a compromised internal system talking to an external system, so it automatically excludes internal-to-internal and external-to-external connections from parts of the analysis.

You may also wish to change the defaults for the following option:

  • Filtering: AlwaysInclude - Ranges listed here are exempt from the filtering applied by the InternalSubnets setting. The main use for this is to include internal DNS servers so that you can see the source of any DNS queries made.

Note that any value listed in the Filtering section should be in CIDR format. So a single IP of 192.168.1.1 would be written as 192.168.1.1/32.
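For reference, the relevant part of /etc/rita/config.yaml might look like the following. This is a sketch; the exact key layout may differ between RITA versions, and the AlwaysInclude address is an illustrative example of an internal DNS server.

```yaml
Filtering:
    # RFC1918 ranges are the defaults; adjust to match your environment.
    InternalSubnets:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    # Exempt from InternalSubnets filtering; note the /32 for a single IP.
    AlwaysInclude:
        - 192.168.1.1/32   # example: an internal DNS server
```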

Obtaining Data (Generating Zeek Logs)

  • Option 1: Generate PCAPs outside of Zeek

    • Generate PCAP files with a packet sniffer (tcpdump, wireshark, etc.)
    • (Optional) Merge multiple PCAP files into one PCAP file
      • mergecap -w outFile.pcap inFile1.pcap inFile2.pcap
    • Generate Zeek logs from the PCAP files
      • zeek -r pcap_to_log.pcap local "Log::default_rotation_interval = 1 day"
  • Option 2: Install Zeek and let it monitor an interface directly [instructions]

    • You may wish to compile Zeek from source for performance reasons. This script can help automate the process.

    • The automated installer for RITA installs pre-compiled Zeek binaries by default

      • Provide the --disable-zeek flag when running the installer if you intend to compile Zeek from source
    • To take advantage of the feature for monitoring long-running, open connections (default is 1 hour or more), you will need to install our zeek-open-connections plugin. We recommend installing the package with Zeek's package manager, zkg. Newer versions of Zeek (4.0.0 or greater) come bundled with zkg; if you do not have zkg installed, you can install it manually. Once zkg is installed, run the following commands to install the package:

      • zkg refresh
      • zkg install zeek/activecm/zeek-open-connections

      Next, edit your site/local.zeek file so that it contains the following line

      • @load packages

      Finally, run the following

      • zeekctl deploy

Importing and Analyzing Data With RITA

After installing RITA, setting up the InternalSubnets section of the config file, and collecting some Zeek logs, you are ready to begin hunting.

RITA can process TSV, JSON, and JSON streaming Zeek log file formats. These logs can be either plaintext or gzip compressed.

One-Off Datasets

This is the simplest usage and is great for analyzing a collection of Zeek logs in a single directory. If you expect to have more logs to add to the same analysis later see the next section on Rolling Datasets.

rita import path/to/your/zeek_logs dataset_name

Every log file in the supplied directory will be imported into a dataset with the given name. However, files in nested directories will not be processed.

Note: RITA is designed to analyze 24-hour blocks of logs. RITA versions newer than 4.5.1 will analyze only the most recent 24 hours of data supplied.

Rolling Datasets

Rolling datasets allow you to progressively analyze log data over a period of time as it comes in.

rita import --rolling /path/to/your/zeek_logs dataset_name

You can make this call repeatedly as new logs are added to the same directory (e.g. every hour).

One common scenario is to have a rolling database that imports new logs every hour and always contains the last 24 hours' worth of logs. Typically, Zeek logs are placed in /opt/zeek/logs/<date>, which means the directory changes every day. To accommodate this, you can use the following command in a cron job or other task scheduler that runs once per hour.

rita import --rolling /opt/zeek/logs/$(date --date='-1 hour' +\%Y-\%m-\%d)/ dataset_name
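As a crontab entry (edit with crontab -e), the hourly import might look like the following. The backslash-escaped % signs are required because cron otherwise treats % specially; paths and the dataset name are examples.

```
0 * * * * rita import --rolling /opt/zeek/logs/$(date --date='-1 hour' +\%Y-\%m-\%d)/ dataset_name
```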

RITA cycles data into and out of rolling databases in "chunks". You can think of each chunk as one hour, with the default being 24 chunks in a dataset. This gives you the most recent 24 hours' worth of data at all times. Chunks are generic enough to accommodate non-default Zeek logging configurations or longer data retention times as well. See the Rolling Datasets documentation for advanced options.

Note: dataset_name is simply a name of your choosing. We recommend a descriptive name such as the hostname or location of where the data was captured. Stick with letters, numbers, and underscores. Periods and other special characters are not allowed.

Examining Data With RITA

  • Use the show-X commands
    • show-databases: Print the datasets currently stored
    • show-beacons: Print hosts which show signs of C2 software
    • show-bl-hostnames: Print blacklisted hostnames which received connections
    • show-bl-source-ips: Print blacklisted IPs which initiated connections
    • show-bl-dest-ips: Print blacklisted IPs which received connections
    • show-dns-fqdn-ips: Print IPs associated with a specified FQDN
    • show-exploded-dns: Print DNS analysis; exposes covert DNS channels
    • show-long-connections: Print long connections and relevant information
    • show-strobes: Print connections which occurred with excessive frequency
    • show-useragents: Print user agent information
  • By default, RITA displays data in CSV format
    • -d [DELIM] delimits the data by [DELIM] instead of a comma
      • Strings can be provided instead of single characters if desired, e.g. rita show-beacons -d "---" dataset_name
    • -H displays the data in a human readable format
      • This takes precedence over the -d option
    • Piping the human readable results through less -S prevents word wrapping
      • Ex: rita show-beacons dataset_name -H | less -S
  • Create an HTML report with html-report

Getting help

Please create an issue on GitHub if you have any questions or concerns.

Contributing to RITA

To contribute to RITA visit our Contributing Guide

License

GNU GPL V3 © Active Countermeasures ™

rita's People

Contributors

alextibbles, bglebrun, but-i-am-dominator, caffeinatedpixel, carrohan, droberson, edward-morgan, enbake-dev, ethack, fullmetalcache, hippi3hack3r, ianlee1521, joelillo, jonzeolla, joswr1ght, kaliregenold, kirkhauck, lawrencehoffman, lijantropique, lisasw, meljbruno, mrgleam, null-default, samuelcarroll, testwill, u5surf, vivek-26, william-stearns, zalgo2462, zaowen


rita's Issues

UserAgent output and Improvement

For version: 0.9.2

  1. The UserAgent module should have an output that allows a user to view the machines that had that UserAgent string.
  2. The analyzer should also look for machines with the most unique UserAgent strings and the least unique UserAgent strings.
  3. The analyzer should identify which UserAgent strings were associated with the most and least number of bytes transferred.

Unhandled Exception for Non-existent Database

Rita version: v1.0.0-alpha-126-gc13c02d

rita show-beacons -d db-does-not-exist
DEBU[0000] entering                                      function=isBuilt module=meta package=database
INFO[0000] Failed aggregation: (Source collection: beacon doesn't exist)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5c2e5a]

goroutine 1 [running]:
panic(0x8b3120, 0xc420010100)
        /usr/local/go/src/runtime/panic.go:500 +0x1a1
gopkg.in/mgo%2ev2.(*Iter).Next(0x0, 0x8710a0, 0xc4201ee180, 0x8710a0)
        /home/bhisht/go/src/gopkg.in/mgo.v2/session.go:3684 +0x3a
gopkg.in/mgo%2ev2.(*Iter).All(0x0, 0x86d1e0, 0xc4201ca740, 0x0, 0x11)
        /home/bhisht/go/src/gopkg.in/mgo.v2/session.go:3800 +0x27f
github.com/ocmdev/rita/commands.showBeacons(0xc4200c3540, 0x0, 0xc4200c3540)
        /home/bhisht/go/src/github.com/ocmdev/rita/commands/show-beacons.go:45 +0x26a
github.com/urfave/cli.HandleAction(0x89be20, 0x980560, 0xc4200c3540, 0xc42008c900, 0x0)
        /home/bhisht/go/src/github.com/urfave/cli/app.go:485 +0xd4
github.com/urfave/cli.Command.Run(0x93ce6e, 0xc, 0x0, 0x0, 0x0, 0x0, 0x0, 0x94c4f2, 0x28, 0x0, ...)
        /home/bhisht/go/src/github.com/urfave/cli/command.go:207 +0xb92
github.com/urfave/cli.(*App).Run(0xc4200b7860, 0xc42000c180, 0x4, 0x4, 0x0, 0x0)
        /home/bhisht/go/src/github.com/urfave/cli/app.go:250 +0x812
main.main()
        /home/bhisht/go/src/github.com/ocmdev/rita/rita.go:26 +0xf0

It looks like this exception (and possibly others) just needs to be caught and handled with a user-friendly error message.
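A sketch of the suggested fix: validate that the dataset exists and return an error, instead of calling Iter.All on a nil iterator. The names fetchBeacons and ErrNoDatabase are hypothetical stand-ins, and a map substitutes for MongoDB so the example is self-contained; it is not RITA's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrNoDatabase = errors.New("no such dataset; run 'rita show-databases' to list available datasets")

// fetchBeacons stands in for commands.showBeacons: rather than iterating a
// possibly-nil result set, it checks that the dataset exists first and
// surfaces a friendly error to the caller.
func fetchBeacons(datasets map[string][]string, name string) ([]string, error) {
	rows, ok := datasets[name]
	if !ok {
		return nil, ErrNoDatabase
	}
	return rows, nil
}

func main() {
	db := map[string][]string{"sample": {"10.0.0.5 -> 1.2.3.4"}}
	if _, err := fetchBeacons(db, "db-does-not-exist"); err != nil {
		fmt.Println("error:", err) // user-friendly message instead of a SIGSEGV
	}
}
```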

Scanning module improvement

For version: 0.9.2

The scanning module is quite broken. It will find port scanning, but the output is very noisy: anything that performs NAT, proxying, etc. will show up as a port scan. The module needs to be rewritten to use something like the port popularity list from Nmap to score the likelihood that a machine seen accessing multiple ports on another machine is actually scanning.

Cross Reference Output

We have code to do a cross reference analysis which alerts which hosts have appeared in multiple modules. However, we do not have the ability to display the results to the CLI.

Analysis hook

Allow hooking into the analysis command to run custom commands afterwards.

Option 1 - Config file option:

  • Create a new option in the configuration file that allows specifying a shell script
  • Modify the backend so that after the analysis command finishes it reads the config option and executes the script

Option 2 - Command line switch:

  • Add a new command line switch & argument to the analysis command that allows specifying a shell script via command line
  • Modify the backend so that after the analysis command finishes it executes the command line argument

Option 3 - Log file watcher:

  • Ensure that there is a log file entry that corresponds to a successful analysis run.
  • The next step would be to have a separate service that triggers when it detects the log entry. This wouldn't involve Rita source modification.

This feature is to support post processing of the analysis data. E.g. A script that uploads the analysis data to an off-site Mongo instance. This is useful for frontend deployments, but could also be utilized for alerting.
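A minimal sketch of Option 1, assuming a hypothetical config value (here passed in directly as a string) has already been read after analysis finishes; the function name and behavior are illustrative, not RITA's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runPostAnalysisHook executes a user-supplied shell script after analysis.
// An empty value means no hook is configured, so it is a no-op.
func runPostAnalysisHook(script string) error {
	if script == "" {
		return nil
	}
	out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
	fmt.Printf("hook output: %s", out)
	return err // non-zero exit codes surface here
}

func main() {
	_ = runPostAnalysisHook("echo analysis complete")
}
```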

Fix Mongo query timeout in beaconing portion of analysis

The query to get the first/last timestamps is probably timing out and causing there to be 0 beaconing results.

The following log output is characteristic of this issue.

[-] Running beacon analysis
INFO[3779] Building collection: beacon                  
INFO[3779] Running beacon hunt                          
DEBU[3779] Looking for first connection timestamp       
DEBU[3839] Looking for last connection timestamp        
DEBU[3839] First and last timestamps found               first=0 last=0 time_elapsed=1m0.000551352s

Consider Bro intel.log as a blacklist data source

Bro has an intel log documented here.

One notable service that integrates with this log is Critical Stack which acts as a sort of blacklist marketplace/aggregator.

The suggestion I have is reading intel.log as a blacklist source. All the matching will already be done and we will just have to report the entries in this file. Users would be responsible for configuring Bro to populate the intel.log as they see fit.

Implement Mongo bulk insert for data import

The current implementation of the import feature uses individual Mongo insert commands for each data point that goes into the database. This area is highly threaded, but it is worth researching using bulk inserts instead.

The current proposal that would require the least messing with the current threading code (a large undertaking) is to add in a buffer of some sort that sits between the import code and the Mongo database. Instead of inserting directly into Mongo, the threads would place the data into an in-memory buffer. When the buffer is full, additional code will take that data and perform a bulk insert into Mongo.
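The buffer described above can be sketched like this. It is a simplified, single-threaded stand-in: the real version would need a mutex around the buffer since the import code is highly threaded, and the flush callback would perform an actual Mongo bulk insert.

```go
package main

import "fmt"

// bulkBuffer sits between the import threads and Mongo: writers call Insert,
// and once the buffer fills, Flush performs one bulk write (simulated here
// with a callback) instead of one insert per document.
type bulkBuffer struct {
	buf   []interface{}
	size  int
	flush func(batch []interface{})
}

func (b *bulkBuffer) Insert(doc interface{}) {
	b.buf = append(b.buf, doc)
	if len(b.buf) >= b.size {
		b.Flush()
	}
}

// Flush writes any buffered documents as a single batch.
func (b *bulkBuffer) Flush() {
	if len(b.buf) == 0 {
		return
	}
	b.flush(b.buf)
	b.buf = nil
}

func main() {
	writes := 0
	b := &bulkBuffer{size: 3, flush: func(batch []interface{}) {
		writes++
		fmt.Printf("bulk insert of %d docs\n", len(batch))
	}}
	for i := 0; i < 7; i++ {
		b.Insert(i)
	}
	b.Flush() // final partial batch
	fmt.Println("total bulk writes:", writes) // 3 writes instead of 7
}
```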

Segfault in html-report

I can cause a segfault when running html-report when either:

  1. a database is imported but not analyzed
  2. a database does not exist

These could be from the same root cause but I wanted to make sure I mentioned both.

In the output below, the "rita" and "reporting" databases are imported but not analyzed, and the "doesnotexist" database, well, does not exist.

➜  rita git:(f5024db) ✗ ./rita show-databases
Empire
rita
reporting
➜  rita git:(f5024db) ✗ ./rita html-report
[-] Writing: $GOPATH/src/github.com/ocmdev/rita/rita-html-report/Empire
[-] Writing: $GOPATH/src/github.com/ocmdev/rita/rita-html-report/rita
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x11c4e4a]

goroutine 1 [running]:
gopkg.in/mgo%2ev2.(*Iter).Next(0x0, 0x14a7dc0, 0xc420076980, 0x14a7dc0)
        $GOPATH/src/gopkg.in/mgo.v2/session.go:3684 +0x3a
gopkg.in/mgo%2ev2.(*Iter).All(0x0, 0x14a3980, 0xc420198980, 0x0, 0x0)
        $GOPATH/src/gopkg.in/mgo.v2/session.go:3800 +0x2b6
github.com/ocmdev/rita/reporting.printBeacons(0xc4201f4234, 0x4, 0xc420198460, 0x0, 0x0)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report-beacons.go:29 +0x370
github.com/ocmdev/rita/reporting.writeDB(0xc4201f4234, 0x4, 0xc4201550c0, 0x3b, 0xc420198460, 0x0, 0x0)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report.go:154 +0x291
github.com/ocmdev/rita/reporting.PrintHTML(0xc420155080, 0x3, 0x4, 0xc420198460, 0x0, 0x0)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report.go:67 +0x4ab
github.com/ocmdev/rita/commands.init.4.func1(0xc42001b180, 0x0, 0xc42001b180)
        $GOPATH/src/github.com/ocmdev/rita/commands/reporting.go:31 +0x111
github.com/urfave/cli.HandleAction(0x14d0ca0, 0x1598c40, 0xc42001b180, 0xc42006c900, 0x0)
        $GOPATH/src/github.com/urfave/cli/app.go:485 +0xd4
github.com/urfave/cli.Command.Run(0x157c697, 0xb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x158d40a, 0x29, 0x0, ...)
        $GOPATH/src/github.com/urfave/cli/command.go:207 +0xb6e
github.com/urfave/cli.(*App).Run(0xc420098d00, 0xc42000c280, 0x2, 0x2, 0x0, 0x0)
        $GOPATH/src/github.com/urfave/cli/app.go:250 +0x7d0
main.main()
        $GOPATH/src/github.com/ocmdev/rita/rita.go:26 +0x110

➜  rita git:(f5024db) ✗ ./rita html-report -d doesnotexist
[-] Writing: $GOPATH/src/github.com/ocmdev/rita/doesnotexist/doesnotexist
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x11c4e4a]

goroutine 1 [running]:
gopkg.in/mgo%2ev2.(*Iter).Next(0x0, 0x14a7dc0, 0xc420182d00, 0x14a7dc0)
        $GOPATH/src/gopkg.in/mgo.v2/session.go:3684 +0x3a
gopkg.in/mgo%2ev2.(*Iter).All(0x0, 0x14a3980, 0xc420185540, 0x0, 0x0)
        $GOPATH/src/gopkg.in/mgo.v2/session.go:3800 +0x2b6
github.com/ocmdev/rita/reporting.printBeacons(0x7fff5fbffa06, 0xc, 0xc42015f540, 0x0, 0x0)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report-beacons.go:29 +0x370
github.com/ocmdev/rita/reporting.writeDB(0x7fff5fbffa06, 0xc, 0xc420153540, 0x37, 0xc42015f540, 0xc, 0xc420186c90)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report.go:154 +0x291
github.com/ocmdev/rita/reporting.PrintHTML(0xc420186c90, 0x1, 0x1, 0xc42015f540, 0x0, 0x0)
        $GOPATH/src/github.com/ocmdev/rita/reporting/report.go:67 +0x4ab
github.com/ocmdev/rita/commands.init.4.func1(0xc42001b180, 0x0, 0xc42001b180)
        $GOPATH/src/github.com/ocmdev/rita/commands/reporting.go:31 +0x111
github.com/urfave/cli.HandleAction(0x14d0ca0, 0x1598c40, 0xc42001b180, 0xc42006c900, 0x0)
        $GOPATH/src/github.com/urfave/cli/app.go:485 +0xd4
github.com/urfave/cli.Command.Run(0x157c697, 0xb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x158d40a, 0x29, 0x0, ...)
        $GOPATH/src/github.com/urfave/cli/command.go:207 +0xb6e
github.com/urfave/cli.(*App).Run(0xc4200989c0, 0xc420010300, 0x4, 0x4, 0x0, 0x0)
        $GOPATH/src/github.com/urfave/cli/app.go:250 +0x7d0
main.main()
        $GOPATH/src/github.com/ocmdev/rita/rita.go:26 +0x110

The file logger doesn't support all logrus log levels

Log statements can currently disappear if the logger is used with a level other than one of the four configured below:

log.DebugLevel: logPath + "/debug-" + time.Now().Format(util.TimeFormat) + ".log",
log.InfoLevel:  logPath + "/info-" + time.Now().Format(util.TimeFormat) + ".log",
log.WarnLevel:  logPath + "/warn-" + time.Now().Format(util.TimeFormat) + ".log",
log.ErrorLevel: logPath + "/error-" + time.Now().Format(util.TimeFormat) + ".log",

Adding additional levels, making sure time.Now() is consistent across the different files, and separating the logs into subfolders will help troubleshoot crashes with RITA.

Install script not working.

This line is also causing an error; is it also not needed?

What is supposed to be in the usr folder?

    cp -r usr $_RITADIR/usr
rita$ sudo ./install.sh
[sudo] password for rita:

 _ \ _ _| __ __|  \
   /   |     |   _ \
_|_\ ___|   _| _/  _\

Brought to you by the Offensive CounterMeasures

[+] Transferring files
cp: cannot stat ‘usr’: No such file or directory

Thank you,
Brian

Name change: TBD -> Beacon

A long time ago we added a new module called tbd... because we didn't know what to call it. It's now known as beacon (or beacons or beaconing...) analysis. We should rename every instance of the word TBD to beacon(ing/s).

Logging only enabled on import

Logs are only written to the RitaLogDirectory when the import command is run. They should be written during other commands as well except for show-* command output.

Elasticsearch pull of bro data?

Any chance this project will later have a feature to pull the Bro logs directly from Elasticsearch rather than from files? I thought a previous version of RITA did just this but I may be mistaken.

I would love to incorporate this in use cases where individuals are collecting bro logs with ELK and then use RITA to analyze them.

DNS Analysis

DNS has become a common protocol for use in covert channels. We would like to automate the discovery of these channels. This should be implemented using the research here: https://www.sans.org/reading-room/whitepapers/dns/detecting-dns-tunneling-34152

Currently, the parser needs a little cleanup for vector types, which the Bro DNS log uses heavily. In particular, we need to assign a type in the parser types file to the type strings currently being matched in the parser, and we need to split the input string into a slice of data.

Bro DNS log example:

1486944317.772209	C3KlvK1CLFn9utAI87	192.168.0.112	56556	8.8.8.8	53	udp	3054	0.040023	google.com	1	C_INTERNET	1	A	0	NOERROR	F	F	T	T	0	216.58.217.14	9.000000	F
1486944317.852245	CTiEoV2krojvWy6Xr	192.168.0.112	38979	8.8.8.8	53	udp	10063	0.055996	14.217.58.216.in-addr.arpa	1	C_INTERNET	12	PTR	0	NOERROR	F	F	T	T	0	den03s09-in-f14.1e100.net,den03s09-in-f14.1e100.net,den03s09-in-f14.1e100.net,den03s09-in-f14.1e100.net	82723.000000,82723.000000,82723.000000,82723.000000	F

After fixing the parser, we need to convert the hostnames table to use the dns table rather than urls/http.

Finally, we need to implement the logic as described in the research paper in order to identify domains with large amounts of subdomains. In the future, we want to be able to support whitelisting domains in this process such as ad companies.

Other analysis that could be done on DNS logs include query type stats, abnormality detection (for example the qclass and z fields), and long domain analysis.
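The core of identifying domains with large numbers of subdomains can be sketched as a rollup count over every domain suffix. This is a simplified stand-in for the eventual implementation: a tunneling domain stands out because an unusually large number of distinct subdomains roll up to one parent.

```go
package main

import (
	"fmt"
	"strings"
)

// explodeDNS counts how often each domain suffix appears across queries.
// Each query contributes one count to itself and to every parent domain.
func explodeDNS(queries []string) map[string]int {
	counts := make(map[string]int)
	for _, q := range queries {
		labels := strings.Split(strings.TrimSuffix(q, "."), ".")
		for i := range labels {
			counts[strings.Join(labels[i:], ".")]++
		}
	}
	return counts
}

func main() {
	c := explodeDNS([]string{"a1.tunnel.example.com", "b2.tunnel.example.com"})
	fmt.Println(c["tunnel.example.com"]) // 2: both queries roll up to the parent
}
```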

Helpful links:
https://en.wikipedia.org/wiki/Domain_Name_System
https://www.bro.org/sphinx/scripts/base/protocols/dns/main.bro.html
http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml
https://www.sans.org/reading-room/whitepapers/dns/detecting-dns-tunneling-34152

Panic Error when executing show-beacons

Not sure what is causing this error, everything else worked up until this point. Any help would be appreciated.

Rita1:~/go/bin$ ./rita show-beacons -d beacons
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5aa53a]

goroutine 1 [running]:
panic(0x8798e0, 0xc4200100f0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
gopkg.in/mgo%2ev2.(*Iter).Next(0x0, 0x83ac60, 0xc420062b80, 0x83ac60)
/home/XXXX/go/src/gopkg.in/mgo.v2/session.go:3684 +0x3a
gopkg.in/mgo%2ev2.(*Iter).All(0x0, 0x836ea0, 0xc4201301a0, 0x0, 0x7)
/home/XXXX/go/src/gopkg.in/mgo.v2/session.go:3800 +0x27f
github.com/ocmdev/rita/commands.showBeacons(0xc4200852c0, 0x0, 0xc4200852c0)
/home/XXXX/go/src/github.com/ocmdev/rita/commands/show-beacons.go:48 +0x22a
github.com/urfave/cli.HandleAction(0x863360, 0x938ed8, 0xc4200852c0, 0xc420054900, 0x0)
/home/XXXX/go/src/github.com/urfave/cli/app.go:485 +0xd4
github.com/urfave/cli.Command.Run(0x8fae36, 0xc, 0x0, 0x0, 0x0, 0x0, 0x0, 0x908ca2, 0x28, 0x0, ...)
/home/XXXX/go/src/github.com/urfave/cli/command.go:207 +0xb96
github.com/urfave/cli.(*App).Run(0xc42007f380, 0xc42000c340, 0x4, 0x4, 0x0, 0x0)
/home/XXXX/go/src/github.com/urfave/cli/app.go:250 +0x812
main.main()
/home/XXXX/go/src/github.com/ocmdev/rita/rita.go:26 +0xf0

Unable to remove files in meta database

We allow the creation of databases based on dates. In this mode, we do not know the name of the database we wish to parse a file into until we open the file. We would like to record which files were parsed into each database, and when, in the files collection in the meta database. We create this collection before parse time in the hopes of recovering from a failed parse. However, if we create an entry in the meta database for a file before the parse, we will not know which database the file was parsed into, because the time portion of the database name is determined at parse time.

The solution I have come up with is to create a separate collection for each parse attempt. Create the files collection before the parse, and if the parse succeeds update the parse attempt's table and copy the updated records into the main files collection.

RITA does not display column headers for CSV output

CSV can easily be imported into Microsoft Excel, LibreOffice, etc. However, column headers are necessary in order to take advantage of the most useful features in these products. RITA should display column headers in CSV format.
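A sketch of the fix using Go's standard encoding/csv package, with illustrative column names rather than RITA's exact schema:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// writeBeaconsCSV emits a header row before the data rows so the output
// imports cleanly into Excel or LibreOffice with named columns.
func writeBeaconsCSV(rows [][]string) string {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	w.Write([]string{"score", "source_ip", "dest_ip", "connections"})
	w.WriteAll(rows) // WriteAll flushes the writer when done
	return buf.String()
}

func main() {
	fmt.Print(writeBeaconsCSV([][]string{
		{"0.998", "10.0.0.5", "1.2.3.4", "108"},
	}))
}
```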

Confirm when resetting analysis

Add in a confirmation prompt (e.g. "Are you sure you want to reset analysis on 'X' database?") when using the rita reset-analysis command. The caveat here is that if you don't specify the database it will reset all databases and the prompt needs to reflect that (e.g. "Are you sure you want to reset analysis on ALL databases?")

Similar to issue #46. And the implementation will likely be similar.

Log parser fails to import files which contain more fields than standard bro logs.

When a bro log contains extra fields, the parser crashes with

ERRO[0000] the log contains a field with no candidate in the data structure  error="unmatched field in log" missing_field="FIELD" path=conn.log 

Extra fields may present themselves when users run additional bro scripts through bro such as:

redef record Conn::Info += {
  orig_cc: string &optional &log;
  resp_cc: string &optional &log;
};

event connection_state_remove(c: connection)
  {
    local orig_loc = lookup_location(c$id$orig_h);
    if ( orig_loc?$country_code )
      c$conn$orig_cc = orig_loc$country_code;
    local resp_loc = lookup_location(c$id$resp_h);
    if ( resp_loc?$country_code )
      c$conn$resp_cc = resp_loc$country_code;
  }

This happens because https://github.com/ocmdev/rita/blob/master/parser/parser.go#L379 enforces more restrictions than necessary while parsing. Having enough fields to fill the struct should be sufficient. Extra data should not cause a crash.
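A sketch of the proposed behavior: build a mapping from struct fields to log column indexes, so extra log columns with no candidate in the data structure are simply never referenced instead of causing an error. Names are illustrative, not the actual parser code.

```go
package main

import "fmt"

// mapFields matches each wanted struct field to its column index in the log
// header. Log columns absent from structFields (e.g. orig_cc/resp_cc added
// by custom Bro scripts) are silently ignored rather than treated as fatal.
func mapFields(logHeader, structFields []string) map[string]int {
	idx := make(map[string]int, len(logHeader))
	for i, name := range logHeader {
		idx[name] = i
	}
	mapping := make(map[string]int)
	for _, f := range structFields {
		if i, ok := idx[f]; ok {
			mapping[f] = i
		}
		// a struct field missing from this log is left unset, not an error
	}
	return mapping
}

func main() {
	header := []string{"ts", "uid", "orig_cc", "resp_cc"} // extra geo fields
	wanted := []string{"ts", "uid"}
	fmt.Println(mapFields(header, wanted)) // extra columns are skipped
}
```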

Include data size in beaconing analysis

Bro collects how many bytes were sent with each connection. We can use this to aid in our detection of beacons. A beacon should have a small, regular payload.

Dockerize Rita

To facilitate easier testing and evaluate performance for production deployments.

Current brainstormed options:

  • Add docker files to current rita repo.
  • Create new repo under ocmdev for files to live.
  • Deploy on Docker Hub

Also, create a readme or add to the readme pointing to the right place and giving the command(s) that sets everything up.

Separate internal mongo settings from config.yaml

Can we move the Mongo table settings to a separate configuration file (e.g. ~/.rita/database-structure.yaml)? This is more of an open question/discussion that needs to be decided before implementing.

# NOTE: DO NOT CHANGE THE SETTINGS BELOW UNLESS YOU ARE FAMILIAR WITH THE CODE #
Structure:
    ConnectionTable: conn
    HttpTable: http
    DnsTable: dns
    UniqueConnectionTable: uconn
    HostTable: host

BlackListed:
    ThreadCount: 2
    ChannelSize: 1000
    BlackListTable: blacklisted
    Database: rita-blacklist

Dns:
    ExplodedDnsTable: explodedDns
    HostnamesTable: hostnames

Crossref:
    InternalTable: internXREF
    ExternalTable: externXREF
    BeaconThreshold: .7

Scanning:
    ScanThreshold: 50
    ScanTable: scan

Beacon:
    DefaultConnectionThresh: 24
    BeaconTable: beacon

Urls:
    UrlsTable: urls

UserAgent:
    UserAgentTable: useragent

MetaTables:
    FilesTable: files
    DatabasesTable: databases


# Adjusting batchsize and prefetch may help speed up certain database queries
BatchSize: 300
Prefetch: 0.33

This would allow the following improvements:

  • When a RITA update adds or changes these fields, using an old config will cause errors. This forces the user to update their config file and either lose their modifications or manually copy them over. Having a separate file would allow changes to the database structure without forcing users to update their config.
  • This would declutter the user's config file and make it less confusing to edit.

External Resources in HTML Report

Currently, there is an externally referenced image in the HTML report.
https://github.com/ocmdev/rita/blob/dev/reporting/templates/templates.go#L21
https://github.com/ocmdev/rita/blob/dev/reporting/templates/templates.go#L50

This needs to be embedded in the generated file so that the HTML report is self-contained. These lines should be changed to:


Bro to MongoDB Direct Connect Plug In

Bro IDS has an extensible architecture that allows new code to be distributed and run alongside it. In particular, plugins can define new log writers. In theory, a plugin could be written that sends Bro data directly to MongoDB.

Unfortunately, there isn't great documentation on how to go about doing this.

http://supbrosup.blogspot.com/2014/09/bro-plugins.html describes the process of making a C++ function available to Bro via the plugin system.

https://github.com/0xxon/bro-postgresql is an example of a Bro plugin which writes to an SQL server.

https://github.com/bro/bro/blob/master/src/logging is the source code for the logging plugin framework. The code itself appears to be well documented.

Recommended Upgrade Path Documentation

We should document a recommended upgrade procedure. The following commands should be a reasonable attempt at updating a RITA installation.

# This removes a dependency issue caused by a third party
rm -rf $GOPATH/src/github.com/*irupsen/logrus
echo "Updating to the latest release of RITA."
# This fetches the latest code from Github
go get -u github.com/ocmdev/rita
# This will compile the latest code and install the binary into $GOPATH/bin
cd $GOPATH/src/github.com/ocmdev/rita
make install
# Using old config files in RITA will cause issues so this updates to the newest config and backs up the old one
echo "Backing up your RITA config to ~/.rita/config.yaml-orig"
mv ~/.rita/config.yaml ~/.rita/config.yaml-orig
cp $GOPATH/src/github.com/ocmdev/rita/etc/rita.yaml ~/.rita/config.yaml
echo "RITA has been updated to:" $(rita --version)
echo "You must copy any custom settings from ~/.rita/config.yaml-orig to ~/.rita/config.yaml"

The go get step is redundant with make install and could be replaced by changing into the repository directory and running git pull. The one thing go get -u does add, however, is updating dependencies, which may or may not be desirable. We should probably revisit vendoring to decide whether that is the right approach.
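The git pull alternative mentioned above would look something like this (a sketch only; paths follow the ocmdev GOPATH layout used in the script above):

```shell
# Alternative update path: refresh the code in place without touching
# dependencies. Unlike `go get -u`, `git pull` only updates RITA itself.
cd $GOPATH/src/github.com/ocmdev/rita
git pull
make install
```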

Long connection analysis

Connections which have been open for a very long time could be hosting command and control channels, so analyzing connection durations may be worthwhile.
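As a rough starting point, such an analysis could be sketched over an existing Bro conn.log in TSV form. The column index below is an assumption based on the default field layout; verify it against the #fields header of your own log before relying on it:

```shell
# Sketch: print the ten longest connections in a conn.log.
# Assumes the default TSV layout where "duration" is the 9th column.
grep -v '^#' conn.log | sort -t"$(printf '\t')" -k9 -rn | head
```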

URL analysis output

For version: 0.9.2

URL analysis should have an output mode that optionally shows which machines connected to a given URL. It should also optionally shorten each URL to a given length.
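A minimal sketch of the shortening behavior (the 50-character limit and the file name urls.txt are placeholders for illustration, not part of the proposal):

```shell
# Sketch: truncate each URL to at most 50 characters for display,
# marking truncated entries with "...".
awk '{ if (length($0) > 50) print substr($0, 1, 47) "..."; else print }' urls.txt
```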

Id10t error?

I am sure I am doing something wrong. Your instructions are great now. But where do I go for help after I have everything installed?

I tried running this:

[bin]$ ./rita import
INFO[0000] Starting run start_time=2016-10-06 22:55:41
ERRO[0000] aborting file parsing for this file error=Did not find a match in directory map file=/usr/local/bro/logs/2016-10-06/app_stats.22:04:24-22:53:38.log
ERRO[0000] aborting file parsing for this file error=Did not find a match in directory map file=/usr/local/bro/logs/2016-10-06/communication.21:49:24-22:00:00.log
ERRO[0000] aborting file parsing for this file error=Did not find a match in directory map file=/usr/local/bro/logs/2016-10-06/conn-summary.21:49:29-22:00:00.log
ERRO[0000] aborting file parsing for this file error=Did not find a match in directory map file=/usr/local/bro/logs/2016-10-06/conn-summary.22:00:00-22:53:38.log
ERRO[0000] aborting file parsing for this file error=Did not find a match in directory map file=/usr/local/bro/logs/2016-10-06/conn.21:49:29-22:00:00.log

These were compressed Bro files at first, but I uncompressed them to see if I got a different result.

Any video tutorials, besides John's old one?

Thank you,
Brian

Documentation of Bro local networks

The documentation needs to be updated to indicate that bro/networks.cfg must be configured with the user's IP address space.

Also, when importing pcaps into Bro, the command should be bro -r file.pcap local to enable marking local network connections in the Bro output. This command should be tested first to verify that it works as expected.
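For reference, a networks.cfg entry is a CIDR block followed by an optional description, one per line. The entries below are only an example (the RFC 1918 private ranges) and should be replaced with the user's actual address space:

```
# Networks Bro should treat as local, in CIDR notation.
10.0.0.0/8        Private IP space
172.16.0.0/12     Private IP space
192.168.0.0/16    Private IP space
```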

Is DirectoryMap required?

I'm doing a first time setup, and the "Importing Data, Option 2" section says "Say you have two sets of logs to analyze"

What if you don't? Can I comment out the DirectoryMap entries, or should I keep one entry and comment out the other?

Better output for blacklisted

For version 0.9.2

The blacklisted module should have an output mode that shows each blacklisted address, its score, and the local machines that connected to it.

Install script not working.

Line 60 of the install script fails:

[+] Transferring files
cp: cannot stat ‘bin’: No such file or directory

I commented that out, and then line 62 fails with this error:

[+] Transferring files
cp: cannot stat ‘usr’: No such file or directory

When I comment that out it seems to succeed, but I don't know if there are missing files that need to be copied over.

Thank you
