
ipt_NETFLOW Linux 2.6 kernel module by <[email protected]> -- 11 Feb 2008

============================
= OBTAINING LATEST VERSION =
============================

   $ git clone git://ipt-netflow.git.sourceforge.net/gitroot/ipt-netflow/ipt-netflow
   $ cd ipt-netflow


================
= INSTALLATION =
================

1. Besides the kernel source you will need the iptables/netfilter source
     matching your installation, or a fresh snapshot from: ftp://ftp.netfilter.org/pub/iptables/snapshot/
   I used this one: ftp://ftp.netfilter.org/pub/iptables/snapshot/iptables-1.3.7-20070329.tar.bz2
   Unpack it somewhere and build it with make.

2. Run the ./configure script; it will create the Makefile.

3. make all install; depmod
   This will install the kernel module and the iptables-specific library.
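
   Putting steps 1-3 together, a typical session looks like this (the
     iptables tarball is the one mentioned above; the directory layout is
     only an example, substitute your own paths):

   $ tar xjf iptables-1.3.7-20070329.tar.bz2
   $ cd iptables-1.3.7-20070329 && make && cd ..   # step 1: build iptables
   $ cd ipt-netflow
   $ ./configure                                   # step 2: creates Makefile
   $ make all
   # make install && depmod                        # step 3: install as root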

Troubleshooting:
   1) Sometimes you will want to pass CC=gcc-3.x to the make command.
   Example: make CC=gcc-3.3

   2) Compile the module against the kernel you actually run,
   i.e. first compile the kernel and boot into it, then compile the module.

   3) For autoloading the module after reboot: set net.netflow.destination
   (or load the module, if destination is set on load) after the interfaces
   are up, because the module needs the exporting interface (usually lo) to
   establish the export connection. See the sketch below.
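
   A minimal boot-time sketch, assuming an rc.local-style script that runs
   after the network interfaces are up (the destination is the example
   address used throughout this file):

   # load the module, then point the export at the collector
   modprobe ipt_NETFLOW
   sysctl -w net.netflow.destination=127.0.0.1:2055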

4. After this point you should be able to load the module
     and use the -j NETFLOW target in your iptables rules. See the next section.


===========
= RUNNING =
===========

1. You can load the module with insmod like this:
   # insmod ipt_NETFLOW.ko destination=127.0.0.1:2055 debug=1

   Or, if properly installed (make install; depmod), with modprobe:
   # modprobe ipt_NETFLOW destination=127.0.0.1:2055

   You may pass options on the insmod/modprobe command line, or add
     them to /etc/modules.conf or /etc/modprobe.conf like this:
   options ipt_NETFLOW destination=127.0.0.1:2055

2. Statistics are in /proc/net/stat/ipt_netflow
   To view slab statistics: grep ipt_netflow /proc/slabinfo

3. You can view parameters and control them via sysctl, for example:
   # sysctl -w net.netflow.hashsize=32768

4. Example of directing all traffic into module:
   # iptables -A FORWARD -j NETFLOW
   # iptables -A INPUT -j NETFLOW
   # iptables -A OUTPUT -j NETFLOW
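
   You can also account only part of the traffic by combining the NETFLOW
     target with ordinary iptables matches; a small sketch (the source
     subnet below is hypothetical):

   # iptables -A FORWARD -s 192.0.0.0/8 -j NETFLOW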


===========
= OPTIONS =
===========

   destination=127.0.0.1:2055
     - where to export NetFlow: the target IP address and port.
       You will see this connection in netstat like this:
       udp 0 0 127.0.0.1:32772 127.0.0.1:2055 ESTABLISHED

   destination=127.0.0.1:2055,192.0.0.1:2055
     - mirror flows to two (or more) addresses,
       separated by commas.
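
       The destination can also be changed at runtime through the
       net.netflow.destination sysctl variable (see the stat section):

       # sysctl -w net.netflow.destination=127.0.0.1:2055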

   inactive_timeout=15
     - export a flow after it has been inactive for 15 seconds. Default is 15.

   active_timeout=1800
     - export a flow after it has been active for 1800 seconds (30 minutes).
       Default is 1800.

   debug=0
     - debug level (none).

   sndbuf=number
     - size of the output socket buffer in bytes. Set a higher value
       if you experience NetFlow packet drops (visible in the statistics
       as the 'sock: fail' counter).
       Default is the system default.

   hashsize=number
     - hash table bucket count, used for performance tuning.
       As a rule of thumb it should be about twice the number of flows
       you usually have, but it does not have to be.
       The default is a small value that depends on system memory.

   maxflows=2000000
     - maximum number of flows to account; this exists to prevent DoS
       attacks. After the limit is reached, new flows are not accounted.
       Default is 2000000; zero means unlimited.

   aggregation=string
     - a list of aggregation rules.

     The buffer for the aggregation string is 1024 bytes, and sysctl
       limits it to ~700 bytes, so do not write too much there.
     Rules are applied in definition order for each packet, another
       reason to keep the list short.
     Rules are applied to both directions (dst and src).
     Rules are tried until the first match, but netmask and port
        aggregations are matched separately.
     Delimit rules with commas.

     Rules are of two kinds: netmask aggregation
        and port aggregation:

     a) Netmask aggregation example: 192.0.0.0/8=16
     This strips addresses matching subnet 192.0.0.0/8 down to /16.

     b) Port aggregation example: 80-89=80
     This replaces ports 80 through 89 with 80.

     Full example:
        aggregation=192.0.0.0/8=16,10.0.0.0/8=16,80-89=80,3128=80
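
     Aggregation can also be changed at runtime through the
       net.netflow.aggregation sysctl variable (see the stat section):

     # sysctl -w net.netflow.aggregation=192.0.0.0/8=16,80-89=80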
        
====================
= HOW TO READ STAT =
====================

  Statistics are your friend for fine-tuning and understanding the netflow module's performance.

  To see stat:
  # cat /proc/net/stat/ipt_netflow

  How to interpret the data:

> Flows: active 5187 (peak 83905 reached 0d0h1m ago, maxflows 2000000), mem 283K

  active X: currently active flows in the memory cache.
    - for optimum CPU performance it is recommended to set the hash table
      size to twice the average of this value, or higher.
  peak X reached Y ago: peak value of active flows.
  mem XK: how many kilobytes of memory are currently taken by active flows.
    - one active flow takes 56 bytes of memory.
    - there is a system limit on cache size too.

> Hash: size 8192 (mem 32K), metric 1.0, 1.0, 1.0, 1.0. MemTraf: 1420 pkt, 364 K (pdu 0, 0).

  Hash: size X: current hash size/limit.
    - you can control this via the sysctl net.netflow.hashsize variable.
    - increasing this value can significantly reduce CPU load.
    - the default value is not optimal for performance.
    - the optimal value is twice the average number of active flows.
  mem XK: how much memory is occupied by the hash table.
    - the hash table is fixed size by nature, taking 4 bytes per entry.
  metric X, X, X, X: how optimally your hash table is being used.
    - a lower value means more optimal hash table use; the minimum is 1.0.
    - this is a moving average (EWMA) of hash table accesses divided by
      matches (searches / matches) over 4 seconds, and 1, 5, and 15 minutes.
      A sort of hash table load average. For example, a metric of 2.0 means
      the table is accessed twice as often per match as in the ideal case;
      increasing hashsize should bring it back toward 1.0.
  MemTraf: X pkt, X K: how much traffic is accounted in flows that are in memory.
    - i.e. flows residing in the internal hash table.
  pdu X, X: how much traffic is in flows being prepared for export.
    - this is already included in the aforementioned MemTraf total.
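
  To watch these values while tuning hashsize, something like this works
    (assuming the Flows and Hash lines come first in the /proc file, as in
    the samples above; 4 seconds matches the shortest metric window):

  # watch -n 4 'head -n 2 /proc/net/stat/ipt_netflow'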

> Timeout: active 1800, inactive 15. Maxflows 2000000

  Timeout: active X: how many seconds to wait before exporting an active flow.
    - same as the sysctl net.netflow.active_timeout variable.
  inactive X: how many seconds to wait before exporting an inactive flow.
    - same as the sysctl net.netflow.inactive_timeout variable.
  Maxflows 2000000: the maxflows limit.
    - all flows above the maxflows limit are dropped.
    - you can control the maxflows limit via the sysctl net.netflow.maxflows variable.

> Rate: 202448 bits/sec, 83 packets/sec; 1 min: 668463 bps, 930 pps; 5 min: 329039 bps, 483 pps

  - Module throughput values for the last second, 1 minute, and 5 minutes.

> cpu#  stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
> cpu0  stat: 980540  10473 180600,    0    0    0    0, sock:   4983 928 0, 7124 K, traffic: 188765, 14 MB, drop: 27863, 1142 K

  cpu#: total and per-CPU statistics:
  stat: <search found new, trunc frag alloc maxflows>: internal stats:
  search found new: hash table searched, found, and not-found counters.
  trunc: how many truncated packets were ignored.
    - these possibly do not have a valid IP header.
    - accounted in the drop packets counter but not in drop bytes.
  frag: how many fragmented packets were seen.
    - the kernel always defragments the INPUT/OUTPUT chains for us.
    - these packets are not ignored, but not reassembled either, so:
    - if there is not enough data in a fragment (e.g. tcp ports) the missing
      fields are considered zero.
  alloc: how many cache memory allocations failed.
    - such packets are ignored and accounted in the drop stat.
    - consider adding system memory if this ever happens.
  maxflows: how many packets were ignored due to maxflows (maximum active
    flows reached).
    - such packets are ignored and accounted in the drop stat.
    - you can control the maxflows limit via the sysctl net.netflow.maxflows variable.

  sock: <ok fail cberr, bytes>: export statistics:
  ok: how many NetFlow PDUs were exported (i.e. UDP packets sent by the module).
  fail: how many socket errors occurred (i.e. packets failed to be sent).
    - such packets are dropped, and their contents are cumulatively
      accounted in the drop stat.
  cberr: how many connection refused ICMP errors we got from the export target.
    - probably you have not launched collector software on the destination,
    - or you specified a wrong destination address.
    - flows lost in this fashion cannot be accounted in the drop stat.
    - these are ICMP errors, and would look like this in tcpdump:
      05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
      05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
  bytes: how many kilobytes of export data were successfully sent by the module.

  traffic: <pkt, bytes>: how much traffic was accounted.
  pkt, bytes: sum of packets/megabytes accounted by the module.
    - flows that failed to be exported (on socket error) are accounted here too.

  drop: <pkt, bytes>: how much traffic was not accounted.
  pkt, bytes: sum of packets/kilobytes lost/dropped.
    - reasons they are dropped and accounted here:
      truncated/fragmented packets,
      a packet for a new flow when memory allocation failed,
      a packet for a new flow when maxflows was already reached,
      all flows in export packets that got a socket error.

> sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, other 0

  sockX: per-destination stats:
  X.X.X.X:Y: destination IP address and port.
    - controlled by the sysctl net.netflow.destination variable.
  sndbuf X: how much data the socket can hold in its buffers.
    - controlled by the sysctl net.netflow.sndbuf variable.
    - if you have packet drops due to sndbuf reached (error -11), increase
      this value.
  filled X: how much data is in the socket buffers right now.
  peak X: peak amount of data that has been in the socket buffers.
    - you will want to keep this below the sndbuf value.
  err: how many packets were dropped due to errors.
    - all flows from them are accounted in the drop stat.
  sndbuf reached X: how many packets were dropped because sndbuf was too
    small (error -11).
  other X: how many were dropped due to other possible errors.
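
  For example, if 'sndbuf reached' keeps growing, raise the buffer at
    runtime (the value below is only an illustration; pick one comfortably
    above the observed peak):

  # sysctl -w net.netflow.sndbuf=425984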

> aggr0: ...
  aggrX: aggregation rulesets.
    - controlled by the sysctl net.netflow.aggregation variable.

=========
= VOILA =
=========
