faucetsdn / networkml

Machine learning plugins for network traffic

License: Apache License 2.0

Python 57.86% Makefile 0.58% Dockerfile 0.43% Jupyter Notebook 40.81% Shell 0.32%
machine-learning machine-learning-algorithms ml network-analysis poseidon pcap network-traffic-classification network-traffic-identification hacktoberfest

networkml's People

Contributors

alshaboti, anarkiwi, cglewis, cstephenson970, dependabot[bot], gregs5, hax7, jseparovic, jspeed-meyers, krb1997, lilchurro, paulgowdy, pyup-bot, rashley-iqt, renovate-bot, renovate[bot], sneakyoctopus12, squeeve, toddstavish


networkml's Issues

Assign requires shapes of both tensors to match

Description

Experiencing tensorflow.python.framework.errors_impl.InvalidArgumentError when running make eval_onelayer from commit cc17326 (July 18th) onwards. I have confirmed that the error does not occur on 1fd5e3e (July 17th) or before.

Environment

  • Git commit hash cc17326
  • Docker version 18.03.1-ce
  • Ubuntu 18.04

Output

The following output is the result of a script that runs train_onelayer on a set of node-extracted IoT pcaps and then eval_onelayer on similar node-extracted pcaps. The InvalidArgumentError above happens in the evaluation step (second code snippet).

I have also included the training step (first snippet) because it emits the warning Warning: The least populated class in y has only 3 members, which is too few. The minimum number of groups for any class cannot be less than n_splits=5. I have gotten this warning a number of times before and assumed it was due to not having enough data. Do you think it is in any way related to this error?

Training

sh ../onelayer.sh

Sending build context to Docker daemon  578.4MB
Step 1/8 : FROM debian:stretch-slim
---> 3e235dbb0ba6
Step 2/8 : LABEL maintainer="Charlie Lewis <[email protected]>"
---> Using cache
---> 4c4a0b3a3e95
Step 3/8 : ENV BUILD_PACKAGES="        build-essential         linux-headers-4.9         python3-dev         cmake         tcl-dev         xz-utils         zlib1g-dev         git         curl"     APT_PACKAGES="        ca-certificates         openssl         python3         python3-pip         tcpdump"     PYTHON_VERSION=3.6.4     PATH=/usr/local/bin:$PATH     PYTHON_PIP_VERSION=9.0.1     LANG=C.UTF-8
---> Using cache
---> f21017329ec4
Step 4/8 : COPY requirements.txt requirements.txt
---> Using cache
---> 25674e6a26bd
Step 5/8 : RUN set -ex;     apt-get update -y;     apt-get upgrade -y;     apt-get install -y --no-install-recommends ${APT_PACKAGES};     apt-get install -y --no-install-recommends ${BUILD_PACKAGES};     ln -s /usr/bin/idle3 /usr/bin/idle;     ln -s /usr/bin/pydoc3 /usr/bin/pydoc;     ln -s /usr/bin/python3 /usr/bin/python;     ln -s /usr/bin/python3-config /usr/bin/python-config;     ln -s /usr/bin/pip3 /usr/bin/pip;     pip install -U -v setuptools wheel;     pip install -U -v -r requirements.txt;     apt-get remove --purge --auto-remove -y ${BUILD_PACKAGES};     apt-get clean;     apt-get autoclean;     apt-get autoremove;     rm -rf /tmp/* /var/tmp/*;     rm -rf /var/lib/apt/lists/*;     rm -f /var/cache/apt/archives/*.deb         /var/cache/apt/archives/partial/*.deb         /var/cache/apt/*.bin;     find /usr/lib/python3 -name __pycache__ | xargs rm -r;     rm -rf /root/.[acpw]*
---> Using cache
---> 71f74155053a
Step 6/8 : COPY . /poseidonml
---> e43665f19d7f
Step 7/8 : WORKDIR /poseidonml
Removing intermediate container 72a08abbe52a
---> e561ad0cb2fb
Step 8/8 : RUN pip uninstall -y poseidonml && pip install .
---> Running in 6fb4d2853c7a
Uninstalling poseidonml-0.1.4:
 Successfully uninstalled poseidonml-0.1.4
Processing /poseidonml
Requirement already satisfied: numpy==1.14.5 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: pika==0.12.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: redis==2.10.6 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: scikit-learn==0.18.2 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: scipy==1.1.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: tensorflow==1.9.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Collecting setuptools<=39.1.0 (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
 Downloading https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl (566kB)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: tensorboard<1.10.0,>=1.9.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.5/dist-packages (from tensorboard<1.10.0,>=1.9.0->tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: werkzeug>=0.11.10 in /usr/local/lib/python3.5/dist-packages (from tensorboard<1.10.0,>=1.9.0->tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Installing collected packages: poseidonml, setuptools
 Running setup.py install for poseidonml: started
   Running setup.py install for poseidonml: finished with status 'done'
 Found existing installation: setuptools 40.0.0
   Uninstalling setuptools-40.0.0:
     Successfully uninstalled setuptools-40.0.0
Successfully installed poseidonml-0.1.5.dev0 setuptools-39.1.0
Removing intermediate container 6fb4d2853c7a
---> aa5fbe6644c0
Successfully built aa5fbe6644c0
Successfully tagged cyberreboot/poseidonml:base
~/workspace/PoseidonML/DeviceClassifier/OneLayer ~/workspace/PoseidonML
Sending build context to Docker daemon  209.9kB
Step 1/6 : FROM cyberreboot/poseidonml:base
---> aa5fbe6644c0
Step 2/6 : LABEL maintainer="Charlie Lewis <[email protected]>"
---> Running in 0be06a51cf98
Removing intermediate container 0be06a51cf98
---> 608b9fd721dc
Step 3/6 : COPY . /OneLayer
---> f81ecbea53ab
Step 4/6 : COPY models /models
---> 23be2cd4b02f
Step 5/6 : WORKDIR /OneLayer
Removing intermediate container 361e5f27c872
---> 3df5d8c3ed66
Step 6/6 : ENTRYPOINT ["python3", "eval_OneLayer.py"]
---> Running in 18594406f9ca
Removing intermediate container 18594406f9ca
---> 8f3a2444b630
Successfully built 8f3a2444b630
Successfully tagged poseidonml:onelayer
~/workspace/PoseidonML
Running OneLayer Train on PCAP files ~/workspace/traffic/SanitizeNodes/Training/
Reading data
Reading /pcaps/TribySpeaker-160925-7-18b79e022044.pcap as TribySpeaker
Reading /pcaps/SmartBabyMonitor-161003-2-0024e41118a8.pcap as SmartBabyMonitor
Reading /pcaps/TPLinkRouterBridgeLAN-160923-5-14cc205133ea.pcap as TPLinkRouterBridgeLAN
...
Reading /pcaps/SamsungGalaxyTab-160924-4-0821ef3bfce3.pcap as SamsungGalaxyTab
Reading /pcaps/TPLinkRouterBridgeLAN-160929-7-14cc205133ea.pcap as TPLinkRouterBridgeLAN
Reading /pcaps/NESTProtectSmokeAlarm-161009-7-18b43025bee4.pcap as NESTProtectSmokeAlarm
Making data splits
Normalizing features
Doing feature selection
/usr/local/lib/python3.5/dist-packages/sklearn/utils/__init__.py:54: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
 if np.issubdtype(mask.dtype, np.int):
/usr/local/lib/python3.5/dist-packages/sklearn/model_selection/_split.py:581: Warning: The least populated class in y has only 3 members, which is too few. The minimum number of groups for any class cannot be less than n_splits=5.
 % (min_groups, self.n_splits)), Warning)
...
/usr/local/lib/python3.5/dist-packages/sklearn/model_selection/_split.py:581: Warning: The least populated class in y has only 3 members, which is too few. The minimum number of groups for any class cannot be less than n_splits=5.
 % (min_groups, self.n_splits)), Warning)
/usr/local/lib/python3.5/dist-packages/sklearn/model_selection/_split.py:581: Warning: The least populated class in y has only 3 members, which is too few. The minimum number of groups for any class cannot be less than n_splits=5.
 % (min_groups, self.n_splits)), Warning)
[0, 67, 68, 123, 443, 1024, 1046, 1077, 1091, 1092, 1104, 1147, 1467, 1489, 2017, 2048, 2070, 2101, 2115, 2116, 2128, 2185, 2491, 2595, 3041, 3072, 3125, 3139, 3140, 3152, 3195, 3209, 3515, 3537, 3618, 4065, 4096, 4097, 4098, 4099, 4100, 4101, 4102, 4103]
/usr/local/lib/python3.5/dist-packages/sklearn/metrics/classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
 'precision', 'predicted', average, warn_for)
F1 score: 0.8813119197002154

Evaluating

Testing on ~/workspace/traffic/SanitizeNodes/Testing/AmazonEcho-160926-9-44650d56ccd3.pcap
Sending build context to Docker daemon  578.3MB
Step 1/8 : FROM debian:stretch-slim
---> 3e235dbb0ba6
Step 2/8 : LABEL maintainer="Charlie Lewis <[email protected]>"
---> Using cache
---> 4c4a0b3a3e95
Step 3/8 : ENV BUILD_PACKAGES="        build-essential         linux-headers-4.9         python3-dev         cmake         tcl-dev         xz-utils         zlib1g-dev         git         curl"     APT_PACKAGES="        ca-certificates         openssl         python3         python3-pip         tcpdump"     PYTHON_VERSION=3.6.4     PATH=/usr/local/bin:$PATH     PYTHON_PIP_VERSION=9.0.1     LANG=C.UTF-8
---> Using cache
---> f21017329ec4
Step 4/8 : COPY requirements.txt requirements.txt
---> Using cache
---> 25674e6a26bd
Step 5/8 : RUN set -ex;     apt-get update -y;     apt-get upgrade -y;     apt-get install -y --no-install-recommends ${APT_PACKAGES};     apt-get install -y --no-install-recommends ${BUILD_PACKAGES};     ln -s /usr/bin/idle3 /usr/bin/idle;     ln -s /usr/bin/pydoc3 /usr/bin/pydoc;     ln -s /usr/bin/python3 /usr/bin/python;     ln -s /usr/bin/python3-config /usr/bin/python-config;     ln -s /usr/bin/pip3 /usr/bin/pip;     pip install -U -v setuptools wheel;     pip install -U -v -r requirements.txt;     apt-get remove --purge --auto-remove -y ${BUILD_PACKAGES};     apt-get clean;     apt-get autoclean;     apt-get autoremove;     rm -rf /tmp/* /var/tmp/*;     rm -rf /var/lib/apt/lists/*;     rm -f /var/cache/apt/archives/*.deb         /var/cache/apt/archives/partial/*.deb         /var/cache/apt/*.bin;     find /usr/lib/python3 -name __pycache__ | xargs rm -r;     rm -rf /root/.[acpw]*
---> Using cache
---> 71f74155053a
Step 6/8 : COPY . /poseidonml
---> ac11edbe1037
Step 7/8 : WORKDIR /poseidonml
Removing intermediate container 40cf1ceda5d3
---> 3efaafb9b8d7
Step 8/8 : RUN pip uninstall -y poseidonml && pip install .
---> Running in 4854b1d4813f
Uninstalling poseidonml-0.1.4:
 Successfully uninstalled poseidonml-0.1.4
Processing /poseidonml
Requirement already satisfied: numpy==1.14.5 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: pika==0.12.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: redis==2.10.6 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: scikit-learn==0.18.2 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: scipy==1.1.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: tensorflow==1.9.0 in /usr/local/lib/python3.5/dist-packages (from poseidonml==0.1.5.dev0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Collecting setuptools<=39.1.0 (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
 Downloading https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl (566kB)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: tensorboard<1.10.0,>=1.9.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.5/dist-packages (from tensorboard<1.10.0,>=1.9.0->tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Requirement already satisfied: werkzeug>=0.11.10 in /usr/local/lib/python3.5/dist-packages (from tensorboard<1.10.0,>=1.9.0->tensorflow==1.9.0->poseidonml==0.1.5.dev0)
Installing collected packages: poseidonml, setuptools
 Running setup.py install for poseidonml: started
   Running setup.py install for poseidonml: finished with status 'done'
 Found existing installation: setuptools 40.0.0
   Uninstalling setuptools-40.0.0:
     Successfully uninstalled setuptools-40.0.0
Successfully installed poseidonml-0.1.5.dev0 setuptools-39.1.0
Removing intermediate container 4854b1d4813f
---> ac5c5ef9eebc
Successfully built ac5c5ef9eebc
Successfully tagged cyberreboot/poseidonml:base
~/workspace/PoseidonML/DeviceClassifier/OneLayer ~/workspace/PoseidonML
Sending build context to Docker daemon  157.2kB
Step 1/6 : FROM cyberreboot/poseidonml:base
---> ac5c5ef9eebc
Step 2/6 : LABEL maintainer="Charlie Lewis <[email protected]>"
---> Running in 088e18cd7fb3
Removing intermediate container 088e18cd7fb3
---> 248e7e9bd59c
Step 3/6 : COPY . /OneLayer
---> b4fdff9c42e4
Step 4/6 : COPY models /models
---> 62902c460d8d
Step 5/6 : WORKDIR /OneLayer
Removing intermediate container 31d824807a86
---> 3dbbe25c2bec
Step 6/6 : ENTRYPOINT ["python3", "eval_OneLayer.py"]
---> Running in 3cd0fd77b21a
Removing intermediate container 3cd0fd77b21a
---> 53850224d110
Successfully built 53850224d110
Successfully tagged poseidonml:onelayer
~/workspace/PoseidonML
Running OneLayer Eval on PCAP file ~/workspace/traffic/SanitizeNodes/Testing/AmazonEcho-160926-9-44650d56ccd3.pcap
docker run -it -v "~/workspace/traffic/SanitizeNodes/Testing/AmazonEcho-160926-9-44650d56ccd3.pcap:/pcaps/eval.pcap" -e SKIP_RABBIT=true --entrypoint=python3 poseidonml:onelayer eval_OneLayer.py
Traceback (most recent call last):
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call
   return fn(*args)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
   options, feed_dict, fetch_list, target_list, run_metadata)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
   run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [169,512] rhs shape= [141,400]
        [[Node: save/Assign_13 = Assign[T=DT_FLOAT, _class=["loc:@network/session_rnn/rnn/basic_lstm_cell/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/network/session_rnn/rnn/basic_lstm_cell/kernel/Adam_1, save/RestoreV2:13)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "eval_OneLayer.py", line 431, in <module>
   abnormality = eval_pcap(pcap_path, conf_labels, time_const, label=labels[0], rnn_size=rnn_size)
 File "/usr/local/lib/python3.5/dist-packages/poseidonml/eval_SoSModel.py", line 29, in eval_pcap
   rnnmodel.load(os.path.join(working_set.find(Requirement.parse('poseidonml')).location, 'poseidonml/models/SoSmodel'))
 File "/usr/local/lib/python3.5/dist-packages/poseidonml/SoSmodel.py", line 204, in load
   self.saver.restore(self.sess, path)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1752, in restore
   {self.saver_def.filename_tensor_name: save_path})
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run
   run_metadata_ptr)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run
   feed_dict_tensor, options, run_metadata)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run
   run_metadata)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call
   raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [169,512] rhs shape= [141,400]
        [[Node: save/Assign_13 = Assign[T=DT_FLOAT, _class=["loc:@network/session_rnn/rnn/basic_lstm_cell/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/network/session_rnn/rnn/basic_lstm_cell/kernel/Adam_1, save/RestoreV2:13)]]

Caused by op 'save/Assign_13', defined at:
 File "eval_OneLayer.py", line 431, in <module>
   abnormality = eval_pcap(pcap_path, conf_labels, time_const, label=labels[0], rnn_size=rnn_size)
 File "/usr/local/lib/python3.5/dist-packages/poseidonml/eval_SoSModel.py", line 27, in eval_pcap
   rnnmodel = SoSModel(rnn_size=rnn_size)
 File "/usr/local/lib/python3.5/dist-packages/poseidonml/SoSmodel.py", line 76, in __init__
   self._build_model()
 File "/usr/local/lib/python3.5/dist-packages/poseidonml/SoSmodel.py", line 129, in _build_model
   self.saver = tf.train.Saver()
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1284, in __init__
   self.build()
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1296, in build
   self._build(self._filename, build_save=True, build_restore=True)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1333, in _build
   build_save=build_save, build_restore=build_restore)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 781, in _build_internal
   restore_sequentially, reshape)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 422, in _AddRestoreOps
   assign_ops.append(saveable.restore(saveable_tensors, shapes))
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 113, in restore
   self.op.get_shape().is_fully_defined())
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 219, in assign
   validate_shape=validate_shape)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 60, in assign
   use_locking=use_locking, name=name)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
   op_def=op_def)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
   op_def=op_def)
 File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
   self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [169,512] rhs shape= [141,400]
        [[Node: save/Assign_13 = Assign[T=DT_FLOAT, _class=["loc:@network/session_rnn/rnn/basic_lstm_cell/kernel"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](optimizer/network/session_rnn/rnn/basic_lstm_cell/kernel/Adam_1, save/RestoreV2:13)]]

Makefile:13: recipe for target 'eval_onelayer' failed
make: *** [eval_onelayer] Error 1

Thanks in advance.
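For what it's worth, the mismatched shapes in the traceback look consistent with a BasicLSTMCell kernel built with two different rnn_size values, i.e. the graph rebuilt at eval time differs from the graph that wrote the checkpoint. A small sketch, assuming TensorFlow's [input_dim + rnn_size, 4 * rnn_size] LSTM kernel layout:

```python
def lstm_kernel_shape(input_dim, rnn_size):
    """Shape of a BasicLSTMCell kernel: [input_dim + rnn_size, 4 * rnn_size].

    A Saver restore fails with "Assign requires shapes of both tensors to
    match" when this shape differs between the checkpoint and the graph it
    is being restored into (e.g. rnn_size changed between train and eval).
    """
    return (input_dim + rnn_size, 4 * rnn_size)

# Both shapes in the traceback fit input_dim=41 with different rnn_size:
# lstm_kernel_shape(41, 128) == (169, 512)   # graph built at eval time
# lstm_kernel_shape(41, 100) == (141, 400)   # checkpoint written at train time
```

If that reading is right, the fix would be ensuring eval uses the same rnn_size the SoSmodel checkpoint was trained with.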

Add pcap directory support.

The eval_[models] scripts currently only ingest individual pcap files; they should also be able to ingest directories of pcaps.
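A minimal sketch of the idea (illustrative helper, not the project's actual code): accept either a file or a directory and expand the latter into its pcap files.

```python
import os

def gather_pcaps(path):
    """Return a list of pcap paths, whether given a single file or a directory."""
    if os.path.isdir(path):
        return sorted(
            os.path.join(path, name)
            for name in os.listdir(path)
            if name.endswith(('.pcap', '.pcapng'))
        )
    return [path]
```

The eval scripts could then loop over gather_pcaps(args.path) instead of assuming a single file.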

Saving decision to Redis throws error

Description

Redis doesn't save the ML decision due to a type error.

Environment

  • All Device Classifier eval
  • Ubuntu 18.04 / python 3.6

Steps to reproduce

  • run make eval_[model.. either model will throw error]

Expected result

No error message between building the container and printing the result.

Actual result

ERROR:__main__:Failed to update keys in Redis because: Invalid input of type: 'dict'. 
Convert to a byte, string or number first.
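A hedged sketch of the likely fix, assuming the decision being stored is a Python dict: redis-py only accepts bytes, strings, and numbers as values, so anything else needs serializing first (the helper name is illustrative).

```python
import json

def redis_safe(value):
    """Redis values must be bytes, strings, or numbers; JSON-encode the rest."""
    if isinstance(value, (bytes, str, int, float)):
        return value
    return json.dumps(value)
```

The reader side would then json.loads the stored string back into a dict.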

Divide by Zero error when training the SoSModel

Traceback (most recent call last):
  File "train_SoSModel.py", line 47, in <module>
    data = create_dataset(data_dir, time_const)
  File "/SoSModel/session_sequence.py", line 122, in create_dataset
    session_info = featurize_session(key, value, source=source)
  File "/SoSModel/pcap_utils.py", line 406, in featurize_session
    freq_1 = num_sent_by_1/(last_time - first_time)
ZeroDivisionError: float division by zero
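The division fails when a session's first and last packets share the same timestamp. A sketch of a guard for featurize_session (names are illustrative, not the project's actual helpers):

```python
def session_freq(num_sent, first_time, last_time):
    """Packets per second for a session, guarding zero-length durations."""
    duration = last_time - first_time
    if duration <= 0:
        return 0.0  # single-timestamp session; define its frequency as 0
    return num_sent / duration
```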

How to process all ports as features

The current way that ports are processed as features is by limiting the port number to < config['max port'], which is 1024 by default.

Based on this paper, IoT devices use port numbers greater than 1024.

Ideally, all ports that are in use should be used as features. However, setting max port=65535 would create a huge feature vector in which most values are zeros.

What I am proposing is to use two lists during training (ports_list and ports_freq), such that ports_list contains only the ports witnessed in the dataset and ports_freq counts their frequencies.

What do you think?
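The two-list proposal could be sketched like this (a sparse representation of only the ports actually witnessed; names follow the proposal, the helper itself is illustrative):

```python
from collections import Counter

def build_port_features(observed_ports):
    """ports_list: ports witnessed in the dataset; ports_freq: their counts."""
    ports_freq = Counter(observed_ports)
    ports_list = sorted(ports_freq)
    return ports_list, ports_freq
```

This keeps the feature vector proportional to the number of distinct ports seen, rather than to max_port.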

When generating predictions, eval_onelayer crashes on a KeyError

Traceback (most recent call last):
  File "eval_OneLayer.py", line 210, in <module>
    instance.main()
  File "eval_OneLayer.py", line 116, in main
    source_mac, timestamps[0])
  File "/usr/local/lib/python3.5/dist-packages/poseidonml/common.py", line 159, in get_previous_state
    state[b'representation'].decode('ascii'))
KeyError: b'representation'
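A sketch of a defensive fix for get_previous_state: fall back to None when the stored Redis hash has no b'representation' field (e.g. the first time a MAC is seen). The helper name is illustrative.

```python
def previous_representation(state):
    """Return the decoded representation, or None if the hash lacks one."""
    raw = state.get(b'representation')
    return raw.decode('ascii') if raw is not None else None
```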

train_OneLayerModel.py script and training samples errors

Hi,
I am trying to create a model to be used in Poseidon #49. I captured traffic from a Raspberry Pi and a laptop into two pcap files, then:

1. I added "IOT" to the config.json labels list.
2. I added the two pcap files to the pcap dir and edited label_assignments.json as below.

root@e0e996977e0e:/app/NodeClassifier/pcap# ls
BusinessWS-60m-mylaptop.pcap  label_assignments.json
IOT-260m-ibmwatsoniot.pcap

root@e0e996977e0e:/app/NodeClassifier/pcap# cat label_assignments.json
{
    "IOT": "IOT",
    "BusinessWS": "Business workstation"
}

When I run the train script, I get "ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0".
What is wrong? I have already added two pcap files and labeled them in label_assignments.json.

root@e0e996977e0e:/app/NodeClassifier# python train_OneLayerModel.py pcap/ models
Reading data
Reading pcap/IOT-260m-ibmwatsoniot.pcap as IOT
Reading pcap/BusinessWS-60m-mylaptop.pcap as Business workstation
Making data splits
Normalizing features
Doing feature selection
Traceback (most recent call last):
  File "train_OneLayerModel.py", line 28, in <module>
    model.train(data_dir)
  File "/app/NodeClassifier/utils/OneLayer.py", line 153, in train
    self.feature_list = select_features(X_normed, y_train)
  File "/app/NodeClassifier/utils/training_utils.py", line 101, in select_features
    selection_model.fit(X, y)
  File "/usr/local/lib/python3.5/site-packages/sklearn/linear_model/randomized_l1.py", line 112, in fit
    sample_fraction=self.sample_fraction, **params)
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/memory.py", line 283, in __call__
    return self.func(*args, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/sklearn/linear_model/randomized_l1.py", line 54, in _resample_model
    for _ in range(n_resampling)):
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 758, in __call__
    while self.dispatch_one_batch(iterator):
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 608, in dispatch_one_batch
    self._dispatch(tasks)
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 571, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 109, in apply_async
    result = ImmediateResult(func)
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 326, in __init__
    self.results = batch()
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py", line 131, in <listcomp>
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.5/site-packages/sklearn/linear_model/randomized_l1.py", line 377, in _randomized_logistic
    clf.fit(X, y)
  File "/usr/local/lib/python3.5/site-packages/sklearn/linear_model/logistic.py", line 1186, in fit
    sample_weight=sample_weight)
  File "/usr/local/lib/python3.5/site-packages/sklearn/svm/base.py", line 875, in _fit_liblinear
    " class: %r" % classes_[0])
ValueError: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0

Overhauling featurizer

There are a number of features not currently used when training the models that we should probably consider. In thinking about which features to keep and add, we should take the opportunity to restructure the featurizer script so that it is easier to understand and gives subsequent users the flexibility to add to, or use only a subset of, the features when they train their own models.

GOALS:

  • Come up with a set of features to extract from traffic.
  • Reorganize featurizer's output so that users can modularly understand what features have been extracted/processed, and select the features they would like to use for training.
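One possible shape for that reorganization, sketched with hypothetical names: a registry of named feature extractors, so users can see exactly which features exist and select a subset at training time.

```python
FEATURIZERS = {}

def featurizer(name):
    """Register a named feature extractor under FEATURIZERS[name]."""
    def wrap(fn):
        FEATURIZERS[name] = fn
        return fn
    return wrap

@featurizer('packet_count')
def packet_count(session):
    """Number of packets in a session."""
    return len(session)

@featurizer('mean_size')
def mean_size(session):
    """Mean packet size, assuming a session is a list of packet sizes."""
    return sum(session) / len(session) if session else 0.0

def extract(session, names):
    """Run only the requested featurizers."""
    return {name: FEATURIZERS[name](session) for name in names}
```

Training code would then take a list of feature names from config rather than hard-coding the full set.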

Need sample "label_assignments.json" file

A sample label_assignments.json file is needed in the pcap training directory (it's currently expected, but missing). Ideally, also include something like a README file that explains the syntax of the file and the fact that its keys are used to parse the pcap filenames during training.

Perhaps a label_assignments.json.readme that says something like:


The label_assignments.json file must be configured to include the proper prefixes for the packet captures (pcaps) that you would like to train from. The key is the filename prefix (which expects a dash afterwards), and the value is the label you would like to use. For example, if you have a directory full of pcaps such as:

iphone7-b-Wed0945-5days.pcap
win-work-Wed0945-5days.pcap
iphone7-a-Wed0945-5days.pcap
iphone7-c-Wed0945-5days.pcap
win10homedesktop-Wed0945-5days.pcap

You might create a label_assignments.json file that looks like:

{
"win" : "Administrator workstation",
"win10homedestop" : "Business workstation",
"iphone7" : "Smartphone"
}

NOTE: the "-" in the filename is mandatory.
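The prefix-to-label lookup described above can be sketched as follows (illustrative, not the project's actual parsing code); the key is everything before the first dash in the filename:

```python
def label_for(filename, assignments):
    """Map a pcap filename to a label via its prefix (text before the first dash)."""
    prefix = filename.split('-', 1)[0]
    return assignments.get(prefix)
```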

Selection of either IP or MAC for training steps

In the model training step, it would be helpful to be able to specify whether the selection from the pcap is based on a single MAC or a single IP address. Depending on how the packet capture was acquired, we might want to select on MAC (to survive things like DHCP-based IP address rotation) or on IP address.
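A sketch of what that option might look like (the packet fields here are hypothetical dict keys, not the project's actual packet representation):

```python
def belongs_to(packet, key, key_type='mac'):
    """Select packets by a single MAC or a single IP address."""
    if key_type == 'mac':
        return key in (packet.get('src_mac'), packet.get('dst_mac'))
    if key_type == 'ip':
        return key in (packet.get('src_ip'), packet.get('dst_ip'))
    raise ValueError('key_type must be "mac" or "ip"')
```

Training would then filter the capture with belongs_to(pkt, key, key_type) using a key_type flag supplied by the user.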

Encoding error

Not sure what's causing this exactly yet, but it seems to be preventing results from being output for certain pcaps:

2018-07-10T16:00:06+00:00 172.17.0.1 quizzical_euclid/1ca4e34a5309/pcap[5341]: SyntaxError: Non-UTF-8 code starting with '\xd4' in file /opt/vent_files/tcprewrite-dot1q-2018-07-10-19_56_39.552490-UTC/pcap-node-splitter-2018-07-10-19_57_30.328932-UTC/servers/trace_534824e5ea82f9dac115f2a5254e5ee4d2f30169_2018-07-10_19_40_35-miscellaneous.pcap on line 1, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

On-line analysis

Hi,
Is it possible to use PoseidonML for on-line traffic analysis?
i.e., instead of using pcap(s) as input, use tcpdump output as input to PoseidonML.
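One workaround sketch, under the assumption that the eval scripts remain file-based: rotate live tcpdump output into pcap files and evaluate each completed file. The -w and -G flags are standard tcpdump (write to file, rotate every N seconds); the helper name is illustrative.

```python
def rotating_capture_cmd(interface, out_pattern, rotate_seconds=300):
    """Build a tcpdump invocation that writes a new pcap every rotate_seconds,
    so each completed file can be fed to the eval scripts."""
    return ['tcpdump', '-i', interface, '-w', out_pattern,
            '-G', str(rotate_seconds)]
```

With a strftime pattern in out_pattern (e.g. /pcaps/%Y%m%d%H%M.pcap), each rotation produces a uniquely named file.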

RandomizedLogisticRegression is being deprecated

/usr/local/lib/python3.5/dist-packages/sklearn/utils/deprecation.py:58: DeprecationWarning: Class RandomizedLogisticRegression is deprecated; The class RandomizedLogisticRegression is deprecated in 0.19 and will be removed in 0.21.
  warnings.warn(msg, category=DeprecationWarning)
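A hedged sketch of one replacement direction for when RandomizedLogisticRegression is removed (not the project's actual code): L1-penalized logistic regression via SelectFromModel keeps the features with nonzero coefficients.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

def select_features(X, y, C=1.0):
    """Indices of features with nonzero L1 logistic-regression coefficients."""
    selector = SelectFromModel(
        LogisticRegression(penalty='l1', solver='liblinear', C=C))
    selector.fit(X, y)
    return np.flatnonzero(selector.get_support()).tolist()
```

This loses the resampling-based stability of the randomized variant; wrapping the fit in a bootstrap loop would be a closer substitute.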

make eval_onelayer => ValueError: Cannot feed value of shape..

Hey guys,

While evaluating a few of my trained .pkl models, I noticed a recurring error for some of the pcaps I was evaluating against.

This error appears about 80% of the time; sometimes the evaluation works (I got a correct 93% classification yesterday for a certain pcap), but the majority of the time it shows the following:

Running OneLayer Eval on PCAP file /home/james/PoseidonML/DeviceClassifier/OneLayer/opts/16-10-10.pcap
docker run -it -v "/home/james/PoseidonML/DeviceClassifier/OneLayer/opts/16-10-10.pcap:/pcaps/eval.pcap" -e SKIP_RABBIT=true poseidonml:onelayer
Traceback (most recent call last):
  File "eval_OneLayer.py", line 419, in <module>
    abnormality = eval_pcap(pcap_path, label=labels[0])
  File "/usr/local/lib/python3.5/dist-packages/poseidonml/eval_SoSModel.py", line 51, in eval_pcap
    np.expand_dims(L, axis=0),
  File "/usr/local/lib/python3.5/dist-packages/poseidonml/SoSmodel.py", line 242, in get_output
    self.L: L
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 4) for Tensor 'Placeholder_2:0', which has shape '(?, 21)'
Makefile:13: recipe for target 'eval_onelayer' failed
make: *** [eval_onelayer] Error 1

The training works most of the time; however, occasionally it shows the following warning. I am not sure if it is relevant:

Doing feature selection
/usr/local/lib/python3.5/dist-packages/sklearn/utils/__init__.py:54: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
  if np.issubdtype(mask.dtype, np.int):
[0, 1024, 1077, 1104, 1147, 1467, 2017, 2048, 2128, 3072, 4096, 4097, 4098, 4099, 4101]
F1 score: 0.9501632191792376

It still gives a half-decent F1 score. However, trying to eval_onelayer using this trained .pkl with a pcap from the same selection it was trained on usually gives the stream of errors at the top of this comment.

Any indication why this may be?

Thanks
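For what it's worth, the traceback suggests the SoS model's label placeholder was built for 21 classes while the evaluated pcap produced a 4-element label vector, i.e. the trained .pkl and the SoS model likely came from different label sets. A sketch of a guard that would surface this mismatch before `session.run` (the function name and wording are illustrative, not the project's code):

```python
import numpy as np

def check_label_shape(L, expected_classes):
    """Raise a descriptive error if L's class dimension doesn't match the model."""
    L = np.expand_dims(np.asarray(L), axis=0)  # mirrors eval_SoSModel.py's expand_dims
    if L.shape[-1] != expected_classes:
        raise ValueError(
            'label vector has %d classes but the model expects %d; the .pkl '
            'classifier and the SoS model were likely trained on different '
            'label sets' % (L.shape[-1], expected_classes))
    return L
```

That would turn the opaque `Cannot feed value of shape (1, 4) ... which has shape '(?, 21)'` into an error that names the likely cause.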

Developing PoseidonML with venv modules

Hi there,

I am struggling to understand how I can develop this project with the poseidonml package installed in a venv. Surely this means that the changes I make locally are not applied.

I am currently adding files to my local PoseidonML directory, but I am getting import errors like the one below because the venv doesn't use my local copy:

Traceback (most recent call last):
 File "train_NewModel.py", line 9, in <module>
   from poseidonml.NewModel import NewModel
ImportError: No module named poseidonml.NewModel

I can manually edit files in ~/workspace/PoseidonML/venv/local/lib/python2.7/site-packages/poseidonml to make my imports work, but those changes are not included in our forked PoseidonML repo and will be overwritten any time the dependencies are refreshed.

What is the proper procedure for developing? How would you suggest we tackle these imports?

Thanks in advance.
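A common workflow for this situation (assumed, not project-official documentation) is pip's "editable" mode, which links the working tree into the venv so local edits take effect without reinstalling. The paths below are illustrative:

```shell
# From your fork's working tree, with the venv active:
cd ~/workspace/PoseidonML
. venv/bin/activate
pip uninstall -y poseidonml   # drop the site-packages copy
pip install -e .              # link the working tree into the venv
# Imports should now resolve to the repo, not site-packages:
python -c 'import poseidonml; print(poseidonml.__file__)'
```

After this, `from poseidonml.NewModel import NewModel` would pick up a NewModel.py added to the repo's package directory, assuming it lives alongside the other poseidonml modules.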

Initial Update

The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.

Bug found, presumably with a pcap without enough traffic

2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]: Traceback (most recent call last):
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:   File "eval_OneLayer.py", line 414, in <module>
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:     abnormality = eval_pcap(pcap_path, label=labels[0])
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:   File "/NodeClassifier/eval_SoSModel.py", line 29, in eval_pcap
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:     perturb_types=['random data']
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:   File "/NodeClassifier/utils/session_iterator.py", line 58, in __init__
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]:     self.train_length = self.X_train.shape[0]
2018-02-20T18:42:17+00:00 172.17.0.1 pcap[788]: AttributeError: 'list' object has no attribute 'shape'
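A minimal sketch of a defensive fix (names illustrative): the session iterator appears to receive a plain Python list when a pcap yields too few sessions, so coercing to an ndarray and failing early with a clear message would avoid the bare `AttributeError`.

```python
import numpy as np

def safe_train_length(X_train):
    """Return the number of training rows, tolerating list input."""
    X_train = np.asarray(X_train)  # lists lack .shape; ndarrays don't
    if X_train.size == 0:
        raise ValueError('pcap did not contain enough sessions to evaluate')
    return X_train.shape[0]
```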

currently the connection to rabbitmq is not checked for errors

Inside of AbnormalDetector/eval_classifier.py, the connection to RabbitMQ needs at minimum to be wrapped in a try/except. Ideally there would also be an environment variable to choose between sending to RabbitMQ and logging to stdout (in case a connection to RabbitMQ isn't available).
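A sketch of the suggested behavior (function and variable names are illustrative, not the project's code): attempt RabbitMQ only when the environment asks for it, and fall back to stdout when the pika import or the connection fails.

```python
import json
import os

def publish_result(message, queue='task_queue'):
    """Send `message` to RabbitMQ if configured and reachable, else stdout."""
    if os.getenv('SKIP_RABBIT', 'false').lower() != 'true':
        try:
            import pika  # treated as an optional dependency here
            conn = pika.BlockingConnection(pika.ConnectionParameters(
                host=os.getenv('RABBIT_HOST', 'rabbitmq')))
            channel = conn.channel()
            channel.queue_declare(queue=queue)
            channel.basic_publish(exchange='', routing_key=queue,
                                  body=json.dumps(message))
            conn.close()
            return 'rabbit'
        except Exception:  # broad on purpose: any failure falls through
            pass
    print(json.dumps(message))  # fallback path
    return 'stdout'
```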

non-int of type 'NoneType'

2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]: Traceback (most recent call last):
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:   File "eval_OneLayer.py", line 182, in <module>
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:     instance.main()
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:   File "eval_OneLayer.py", line 73, in main
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:     mean=False
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:   File "/usr/local/lib/python3.5/dist-packages/poseidonml/Model.py", line 234, in get_representation
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:     source_ip=source_ip,
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:   File "/usr/local/lib/python3.5/dist-packages/poseidonml/Model.py", line 103, in get_features
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:     session_dict
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:   File "/usr/local/lib/python3.5/dist-packages/poseidonml/featurizer.py", line 48, in extract_features
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]:     num_sport_init = [0]*max_port
2018-10-16T16:23:59+00:00 172.17.0.1 nostalgic_dijkstra[2512]: TypeError: can't multiply sequence by non-int of type 'NoneType'
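The traceback indicates `max_port` arrives as `None` when the featurizer can't size the port-count vectors from the session dict. A defensive sketch (the constant and names are illustrative assumptions, not networkml's actual values):

```python
DEFAULT_MAX_PORT = 1024  # illustrative fallback, not the project's real value

def init_port_counts(max_port):
    """Build the per-port counter list, tolerating a missing max_port."""
    if not isinstance(max_port, int) or max_port <= 0:
        max_port = DEFAULT_MAX_PORT  # avoids [0] * None -> TypeError
    return [0] * max_port
```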

Logger level won't change

Description

The logging level won't change in eval_OneLayer.py. I am trying to change it from INFO to DEBUG, but the output remains at INFO.

Environment

  • Testing on 1fd5e3e git commit hash because of eval issue
  • Evaluation on OneLayer
  • Docker version 18.03.1-ce
  • Ubuntu 16.04 and 18.04

Steps to reproduce

  • Change logger level from INFO to DEBUG here
  • Run eval_onelayer
  • Notice that the output below does not include any logger.debug statements
INFO:__main__:AmazonEcho : 0.863
INFO:__main__:Twelve : 0.123
INFO:__main__:InsteonCamera : 0.004
INFO:__main__:Message: {"44:65:0d:56:cc:d3": {"classification": {"labels": ["AmazonEcho", "Twelve", "InsteonCamera"], "confidences": [0.8628105019070437, 0.12311244235740644, 0.00389281380861247]}, "decisions": {"behavior": "normal", "investigate": false}, "valid": false, "timestamp": 1474812876.44121}}

Expected result

Based on the code, I would expect at least the following to precede the above output...
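One plausible cause, sketched below: `logging.basicConfig` only takes effect the first time it is called, so if another module configures logging at INFO first, changing the level in one file is silently ignored. Setting the level explicitly on the root logger and its handlers does take effect.

```python
import logging

logging.basicConfig(level=logging.INFO)   # first call wins
logging.basicConfig(level=logging.DEBUG)  # silently ignored: handlers exist

# Explicitly lowering both the logger and handler levels works:
root = logging.getLogger()
root.setLevel(logging.DEBUG)
for handler in root.handlers:
    handler.setLevel(logging.DEBUG)

logger = logging.getLogger(__name__)
logger.debug('this DEBUG line is now emitted')
```

Whether this is what eval_OneLayer.py hits depends on import order, so treat it as a hypothesis rather than a confirmed diagnosis.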

warning about invalid results when trying to evaluate a pcap

When evaluating against a PCAP with OneLayer, PoseidonML now warns that it might produce invalid results because of a scikit-learn version mismatch:

$ export PCAP=~/Desktop/small_pcaps/foo.pcap
$ make
Sending build context to Docker daemon  69.85MB
Step 1/8 : FROM debian:stretch-slim
 ---> e9e49a465deb
Step 2/8 : LABEL maintainer="Charlie Lewis <[email protected]>"
 ---> Using cache
 ---> a625cab1d3c9
Step 3/8 : ENV BUILD_PACKAGES="        build-essential         linux-headers-4.9         python3-dev         cmake         tcl-dev         xz-utils         zlib1g-dev         git         curl"     APT_PACKAGES="        ca-certificates         openssl         python3         python3-pip         tcpdump"     PYTHON_VERSION=3.6.4     PATH=/usr/local/bin:$PATH     PYTHON_PIP_VERSION=9.0.1     LANG=C.UTF-8
 ---> Using cache
 ---> 81e65f93bed3
Step 4/8 : COPY requirements.txt requirements.txt
 ---> Using cache
 ---> 81537ed22e58
Step 5/8 : RUN set -ex;     apt-get update -y;     apt-get upgrade -y;     apt-get install -y --no-install-recommends ${APT_PACKAGES};     apt-get install -y --no-install-recommends ${BUILD_PACKAGES};     ln -s /usr/bin/idle3 /usr/bin/idle;     ln -s /usr/bin/pydoc3 /usr/bin/pydoc;     ln -s /usr/bin/python3 /usr/bin/python;     ln -s /usr/bin/python3-config /usr/bin/python-config;     ln -s /usr/bin/pip3 /usr/bin/pip;     pip install -U -v setuptools wheel;     pip install -U -v -r requirements.txt;     apt-get remove --purge --auto-remove -y ${BUILD_PACKAGES};     apt-get clean;     apt-get autoclean;     apt-get autoremove;     rm -rf /tmp/* /var/tmp/*;     rm -rf /var/lib/apt/lists/*;     rm -f /var/cache/apt/archives/*.deb         /var/cache/apt/archives/partial/*.deb         /var/cache/apt/*.bin;     rm -rf /root/.[acpw]*
 ---> Using cache
 ---> 7b967670c768
Step 6/8 : COPY . /poseidonml
 ---> Using cache
 ---> 4b403ff49a7d
Step 7/8 : WORKDIR /poseidonml
 ---> Using cache
 ---> 32e2d2db3066
Step 8/8 : RUN pip uninstall -y poseidonml && pip install .
 ---> Using cache
 ---> 4ecab0e18563
Successfully built 4ecab0e18563
Successfully tagged cyberreboot/poseidonml:base
~/github/public/poseidonml/DeviceClassifier/OneLayer ~/github/public/poseidonml
Sending build context to Docker daemon  416.3kB
Step 1/6 : FROM cyberreboot/poseidonml:base
 ---> 4ecab0e18563
Step 2/6 : LABEL maintainer="Charlie Lewis <[email protected]>"
 ---> Using cache
 ---> 1c380a5d6227
Step 3/6 : COPY . /OneLayer
 ---> Using cache
 ---> 8df2635bc33a
Step 4/6 : COPY models /models
 ---> Using cache
 ---> c76fb56fa01f
Step 5/6 : WORKDIR /OneLayer
 ---> Using cache
 ---> 3e05bd617fb2
Step 6/6 : ENTRYPOINT ["python3", "eval_OneLayer.py"]
 ---> Using cache
 ---> 658f8d098558
Successfully built 658f8d098558
Successfully tagged poseidonml:onelayer
~/github/public/poseidonml
dec0828b31d077b07d89716bdc3172339d8e6df18d7ee6cf9130e1d626865e99

Running OneLayer Eval on PCAP file /Users/clewis/Desktop/small_pcaps/BusinessWorkstation-rlewis-Mon1017-n00.pcap
/usr/local/lib/python3.5/dist-packages/sklearn/base.py:251: UserWarning: Trying to unpickle estimator LabelBinarizer from version 0.20.0 when using version 0.20.1. This might lead to breaking code or invalid results. Use at your own risk.
  UserWarning)
/usr/local/lib/python3.5/dist-packages/sklearn/base.py:251: UserWarning: Trying to unpickle estimator MLPClassifier from version 0.20.0 when using version 0.20.1. This might lead to breaking code or invalid results. Use at your own risk.
  UserWarning)
INFO:__main__:Not enough sessions in pcap '/pcaps/eval.pcap'

AttributeError: 'PosixPath' object has no attribute 'rfind' when not supplying a PCAP file

Description

PoseidonML crashes if the PCAP environment variable is not set to a PCAP file, for example if it's set to a directory instead.

Environment

latest from master, on OSX

Steps to reproduce

  • $ export PCAP=~/tmp/pcaps
  • make

Expected result

It should either work, or inform the user that it's being set incorrectly in a graceful way.

Actual result

Running OneLayer Eval on PCAP file /tmp/pcaps
Traceback (most recent call last):
  File "eval_OneLayer.py", line 204, in <module>
    instance.main()
  File "eval_OneLayer.py", line 46, in main
    os.path.split(child)[-1].split('.')[-1] in { 'pcap','dump','cap'}:
  File "/usr/lib/python3.5/posixpath.py", line 103, in split
    i = p.rfind(sep) + 1
AttributeError: 'PosixPath' object has no attribute 'rfind'
make: *** [eval_onelayer_nobuild] Error 1
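The crash comes from passing a `pathlib.PosixPath` into `os.path.split`, which expects a string (on Python 3.5; later versions accept path objects). A sketch of a graceful fix that also handles `PCAP` pointing at a directory, using pathlib throughout (the function name is illustrative):

```python
from pathlib import Path

PCAP_SUFFIXES = {'.pcap', '.dump', '.cap'}  # extensions eval_OneLayer.py accepts

def pcap_files(path):
    """Return the pcap files under `path`, whether it is a file or a directory."""
    path = Path(path)
    if path.is_dir():
        # Directory: collect every child with a recognized extension.
        return sorted(p for p in path.iterdir() if p.suffix in PCAP_SUFFIXES)
    if path.suffix in PCAP_SUFFIXES:
        return [path]
    raise ValueError('PCAP must point at a pcap file or a directory: %s' % path)
```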

UnboundLocalError for 'pairs' - pcap_utils.py

Whilst trying to train a model I encountered the following error:

root@d3499990ed37:/app/NodeClassifier# python train_OneLayer.py ./Training firsttest.pickle
Reading data
Reading ./Training/WebTraffic-test.pcap as WebTraffic
Traceback (most recent call last):
  File "train_OneLayer.py", line 31, in <module>
    model.train(data_dir)
  File "/app/NodeClassifier/utils/OneLayer.py", line 133, in train
    labels=self.labels
  File "/app/NodeClassifier/utils/training_utils.py", line 67, in read_data
    capture_source = get_source(binned_sessions)
  File "/app/NodeClassifier/utils/pcap_utils.py", line 140, in get_source
    _, ip_mac_pairs = get_indiv_source(session_dict)
  File "/app/NodeClassifier/utils/pcap_utils.py", line 88, in get_indiv_source
    if is_private(source_address) or is_private(destination_address):
  File "/app/NodeClassifier/utils/pcap_utils.py", line 24, in is_private
    if pairs[0] == '10': private = True
UnboundLocalError: local variable 'pairs' referenced before assignment

Why would the pairs variable not be assigned? From what I gathered reading pcap_utils, the length of the address needs to be equal to or greater than 4 for 'pairs' to get set; what would cause the address argument to be shorter than that?
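An illustrative rewrite (not the project's actual code) that sidesteps the `UnboundLocalError` by assigning `pairs` unconditionally, so short or non-dotted source addresses, e.g. a raw MAC string or a malformed field, simply return False instead of crashing:

```python
def is_private(address):
    """Rough RFC 1918 check on a dotted-quad address string."""
    pairs = str(address).split('.')  # always assigned, unlike the original
    if len(pairs) != 4:
        return False  # not an IPv4 dotted quad at all (e.g. a MAC address)
    if pairs[0] == '10':
        return True
    if pairs[0] == '172' and pairs[1].isdigit() and 16 <= int(pairs[1]) <= 31:
        return True
    if pairs[0] == '192' and pairs[1] == '168':
        return True
    return False
```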
