
linwhitehat / et-bert

The repository of ET-BERT, a network traffic classification model for encrypted traffic. The work was accepted as a paper at The Web Conference (WWW) 2022.

License: MIT License

Python 100.00%
pre-training transformer-architecture burst-analysis mask-burst-modeling same-origin-burst-prediction pytorch encrypted-traffic-analysis

et-bert's Introduction

Greetings! 👋 I'm Xinjie LIN.

A Ph.D. student in cybersecurity at IIE-CAS/UCAS.

  • 🔭 I'm currently working on Network Security and Traffic Analysis;
  • 😄 Research topics: Encrypted Traffic Identification and Out-of-Distribution Generalization;
  • 💬 Ask me about anything here.
  • 📫 How to reach me: @Outlook


et-bert's Issues

Dataset issues

Hello, I would like to ask whether the public datasets you provide (ISCX-VPN, USTC-TFC) are processed at the flow level or at the packet level. Looking forward to your reply, thank you!

Poor model performance

Hi, we took a model we trained ourselves and ran it on a separate test set with the same distribution, but the performance is very poor and the model overfits easily. Have you observed this?

Hardware configuration problem

We encountered some problems in the process of reproducing the model.

Could you tell us your hardware configuration, for example how much memory, how many GPUs, and what type of GPUs? We found that we were running out of memory during reproduction.

Some questions about the datasets.

Hello, we want to use the preprocessed USTC-TFC, ISCX-VPN-Service and ISCX-VPN-App datasets directly, but we could not find a download link for the USTC-TFC dataset in your project. In addition, could you tell us which category each label index of these three datasets represents? We need this for a further, more specific split.
Thank you.

parameters

Hello, I'd like to ask about the parameter settings when training without pre-training.

【fine-tuning】RuntimeError: The size of tensor a (128) must match the size of tensor b (768) at non-singleton dimension 2

Thank you for your work! When we follow the steps to fine-tune the pretrained_model.bin, we hit the error "RuntimeError: The size of tensor a (128) must match the size of tensor b (768) at non-singleton dimension 2".
https://github.com/GuisengLiu/GitData/blob/master/1.png
The details of our experimental environment are as follows:
python 3.6
cudatoolkit 10.2.89
pytorch 1.10.2
Available GPUs: 4× Tesla K80
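
A minimal diagnostic sketch (not the authors' fix): compare the checkpoint's tensor shapes with those of the freshly built fine-tuning model, to pinpoint which parameter pair triggers the 128-vs-768 mismatch.

import torch

def report_shape_mismatches(model, ckpt_path="models/pretrained_model.bin"):
    """Print every parameter whose checkpoint shape disagrees with the model."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model_state = model.state_dict()
    for name, tensor in ckpt.items():
        if name in model_state and model_state[name].shape != tensor.shape:
            print(name, tuple(tensor.shape), "vs", tuple(model_state[name].shape))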

Accuracy problem

Hello Lin. At first I did not load the pre-trained parameters, and accuracy after 10 epochs reached 87. After loading the pre-trained parameters you provided, accuracy after 10 epochs is only about 82. (During loading, because your embedding is [60000, 768] while ours is only about [21100, 768], I simply skipped embedding.word_embedding.weight and loaded all the other parameters from your checkpoint; the results should have improved a lot. Why is this the case?) The entire process uses the CSTNet dataset.
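
A hedged sketch of the partial loading described above: copy every checkpoint tensor whose name and shape match, and skip the vocabulary-dependent word embedding ([60000, 768] vs roughly [21100, 768]). Parameter names follow UER-py conventions and may differ in your build.

import torch

def load_matching_parameters(model, ckpt_path="models/pretrained_model.bin"):
    """Copy shape-compatible tensors into the model; return the skipped names."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    own_state = model.state_dict()
    skipped = []
    for name, tensor in ckpt.items():
        if name in own_state and own_state[name].shape == tensor.shape:
            own_state[name].copy_(tensor)  # in-place copy updates the model
        else:
            skipped.append(name)  # e.g. embedding.word_embedding.weight
    return skipped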

Different results

We used your pre-trained model and vocabulary and followed your code to preprocess the ISCX-VPN-App fine-tuning data, then fine-tuned the model. Our flow-level result is 7 points below yours and our packet-level result 3 points below. Where do you think the problem might be? We downloaded your code last month.

Questions about the preprocessed data

Hello. In the app visualization at the end of the paper, the two FileTransfer apps are FTPS and SFTP, yet in the label figure you provide these two fall under the VPN channel. Did you use the FTPS and SFTP data from the VPN channel? If so, why?

data preprocessing

Hello author, I'm sorry to bother you again. The paper does not give detailed data preprocessing information: during preprocessing, is the sliced data 256 bytes, 784 bytes, or 900 bytes? Looking forward to your reply.

packages problem

Why does my conda say it needs Python 2.6 to install argparse? I searched on Google but didn't find a solution.

Some questions about data generation

Hello, I ran both the pre-training and the fine-tuning data-generation scripts, and the only npy files I get are the datagram and label ones. But the cstnet sample data also contains len, direction, message_type and time npy files, and when I searched the code for these file names I found no code that writes them. Are these npy files required? How do I generate them?

In addition, I have a batch of self-captured data in a non-pcap format, containing each packet's hexadecimal payload and 5-tuple. Can I construct the data files myself, and which format should they follow? If, as above, I construct only the datagram and label npy files, how do I let the algorithm know which packets belong to the same-origin BURST?
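
On the same-origin BURST question: a sketch under the paper's definition of a burst (consecutive packets travelling in one direction within a session); the record layout here is hypothetical, assuming time-ordered packets with 'flow' (the 5-tuple), 'direction', and 'hex' fields.

from collections import defaultdict
from itertools import groupby

def to_bursts(packets):
    """Group time-ordered packet records into same-origin bursts."""
    flows = defaultdict(list)
    for p in packets:
        flows[p["flow"]].append(p)  # keep per-flow arrival order
    bursts = []
    for flow_packets in flows.values():
        # consecutive packets in the same direction form one burst
        for _, grp in groupby(flow_packets, key=lambda p: p["direction"]):
            bursts.append([p["hex"] for p in grp])
    return bursts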

Results differ

Hello, I used your code to preprocess a local copy of the VPN-nonVPN dataset for the ISCX-VPN-Service experiment. My packet-level result is 94.7, four points below your 98.9, and I don't know where the problem is. My dataset layout is as follows:
./non-vpn:
Chat Email File Transfer P2P Streaming VoIP

./non-vpn/Chat:
AIMchat1.pcapng aim_chat_3a.pcap facebookchat1.pcapng facebookchat3.pcapng facebook_chat_4b.pcap hangouts_chat_4a.pcap ICQchat2.pcapng icq_chat_3b.pcap skype_chat1b.pcap
AIMchat2.pcapng aim_chat_3b.pcap facebookchat2.pcapng facebook_chat_4a.pcap hangout_chat_4b.pcap ICQchat1.pcapng icq_chat_3a.pcap skype_chat1a.pcap

./non-vpn/Email:
email1a.pcap email1b.pcap email2a.pcap email2b.pcap gmailchat1.pcapng gmailchat2.pcapng gmailchat3.pcapng

./non-vpn/File Transfer:
skype_file1.pcap skype_file2.pcap skype_file3.pcap skype_file4.pcap skype_file5.pcap skype_file6.pcap skype_file7.pcap skype_file8.pcap

./non-vpn/P2P:
Torrent01.pcap

./non-vpn/Streaming:
netflix1.pcap netflix3.pcap spotify1.pcap spotify3.pcap vimeo1.pcap vimeo3.pcap youtube1.pcap youtube3.pcap youtube5.pcap youtubeHTML5_1.pcap
netflix2.pcap netflix4.pcap spotify2.pcap spotify4.pcap vimeo2.pcap vimeo4.pcap youtube2.pcap youtube4.pcap youtube6.pcap

./non-vpn/VoIP:
hangouts_audio1a.pcap hangouts_audio1b.pcap hangouts_audio2a.pcap hangouts_audio2b.pcap hangouts_audio3.pcap hangouts_audio4.pcap

./vpn:
Chat Email File Transfer P2P Streaming VoIP

./vpn/Chat:
vpn_aim_chat1a.pcap vpn_chat.pcap vpn_facebook_chat1b.pcap vpn_hangouts_chat1b.pcap vpn_icq_chat1b.pcap vpn_skype_chat1b.pcap
vpn_aim_chat1b.pcap vpn_facebook_chat1a.pcap vpn_hangouts_chat1a.pcap vpn_icq_chat1a.pcap vpn_skype_chat1a.pcap

./vpn/Email:
vpn_email2a.pcap vpn_email2b.pcap

./vpn/File Transfer:
vpn_ftps_A.pcap vpn_ftps_B.pcap vpn_sftp_A.pcap vpn_sftp_B.pcap vpn_skype_files1a.pcap vpn_skype_files1b.pcap

./vpn/P2P:
vpn_bittorrent.pcap

./vpn/Streaming:
vpn_netflix_A.pcap vpn_spotify_A.pcap vpn_vimeo_A.pcap vpn_vimeo_B.pcap vpn_youtube_A.pcap

./vpn/VoIP:
vpn_facebook_audio2.pcap vpn_hangouts_audio1.pcap vpn_hangouts_audio2.pcap vpn_skype_audio1.pcap vpn_skype_audio2.pcap vpn_voipbuster1a.pcap vpn_voipbuster1b.pcap

About the pre-training dataset

The paper mentions that the pre-training dataset is 30 GB in total, 15 GB from public datasets and 15 GB from CSTNET. On the UNB website I found many datasets, such as ISCXVPN2016 and ISCXTor2016. Could you tell us exactly which public datasets make up the 15 GB, and what processing was applied? Also, is there any way to obtain the 15 GB of non-public CSTNET data? Thanks.

Some questions about equipment requirements

Hi, I'm trying to reproduce ET-BERT. I followed the steps in the README you provided, but when I run preprocess.py to preprocess the encrypted-traffic burst corpus, the program interrupts and exits partway through. I don't know whether my equipment falls short of the requirements. What hardware did you use in the experiments (CPU, GPU, memory size, etc.)? Looking forward to your reply. Thank you!

Which tensor is used in A.1 of your paper?

Hello, is the vector used for visualization in section A.1 of the paper the output of the embedding layer, or an internal vector of the classifier?
Could you share that part of the code? I would like to inspect the fine-tuned model the way A.1 does, but I could not find a way to output the embedding in the inference code run_classifier_infer.py.

No module named 'uer'

When I try to run pretrain.py to pre-train, an error occurs.
I really need your help.

Question about generating the tsv files for fine-tuning.

Hello, following your readme I seem unable to generate the tsv files needed for downstream-task fine-tuning. Could you give more detailed steps, or share the ISCX-VPN task tsv files you generated for the paper? Thanks.

Should the length of sample be equal to the number of categories?

Hello, what format should the sample parameter follow? From the code of the generation function it looks like a list of per-class sample counts, but the example code in main.py sets it to [5000], which has length 1. How does that fit the cstnet dataset, which has multiple classes?
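
One reading of the [5000] example, purely illustrative and not verified against the repository's code: a length-1 list acts as a single cap applied to every class.

def per_class_caps(samples, num_classes):
    """[5000] -> [5000, 5000, ...]; otherwise expect one cap per class."""
    if len(samples) == 1:
        return samples * num_classes
    assert len(samples) == num_classes
    return samples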

Where does `models/encryptd_vocab.txt` come from?

Hello, is the models/encryptd_vocab.txt used in the example commands generated from the encrypted_traffic_burst.txt downloaded from the Google Drive link in corpora? It seems to differ from the one I generate.

A Question about ISCX-Tor dataset

Dear author of ET-BERT,

First of all, thank you for the great research work.

As in previous issues about the files you used from the datasets (ISCX-VPN-Service, -App), you very clearly provided the exact file lists you chose from the public datasets.
(#28)

I'm sorry to ask, but could you tell me the list of files you used from the ISCX-Tor dataset, please?

Questions about data cleaning

Hello. Was all of the data for pre-training, downstream fine-tuning, and model evaluation processed by clean_pcap in ET-BERT/data_process/open_dataset_deal.py? Also, about the frame.len > 80 filter: could you explain why packets shorter than 80 bytes are filtered out? What is the rationale?
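
For reference, the frame.len > 80 filter itself is straightforward to reproduce, e.g. with scapy (an illustrative sketch with placeholder filenames; clean_pcap's exact logic may differ):

from scapy.all import rdpcap, wrpcap

packets = rdpcap("input.pcap")              # placeholder input file
kept = [p for p in packets if len(p) > 80]  # drop frames of 80 bytes or fewer
wrpcap("filtered.pcap", kept)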

GPU configuration

Our lab is also interested in applying BERT to network-traffic anomaly detection, so I'm curious about the exact GPU configuration of your server, for example how many graphics cards. I'd appreciate it if you could answer.

A problem when generating the fine-tuning data

Hello. When the fine-tuning data is generated, why do some tokens in the dataset contain 4 bytes, i.e. 8 hex digits? Shouldn't tokenization convert bigrams, so that one token is 2 bytes? See the red underlined parts of the attached screenshots.
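
For comparison, a minimal sketch of the bigram encoding the paper describes (one token = 2 bytes = 4 hex digits, sliding one byte at a time); the repository's tokenizer may differ in detail.

def bigram_tokens(hex_string):
    """'abcdef01' -> ['abcd', 'cdef', 'ef01'] (step of one byte = 2 hex digits)."""
    return [hex_string[i:i + 4] for i in range(0, len(hex_string) - 3, 2)]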

Should this be 'append' instead of '='?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [3], in <cell line: 6>()
      1 main.samples = main.count_label_number(main.samples)
      3 train_model = ["pre-train"]
----> 6 main.dataset_extract(train_model)

File ~/projects/ET-BERT-main/data_process/main.py:111, in dataset_extract(model)
    109 pprint(y_test)
    110 pprint(x_payload_test)
--> 111 for test_index, valid_index in split_2.split(x_payload_test, y_test):
    112     x_payload_valid, y_valid = \
    113         x_payload_test[valid_index], y_test[valid_index]
    114     x_payload_test, y_test = \
    115         x_payload_test[test_index], y_test[test_index]

File ~/anaconda3/envs/etbert/lib/python3.10/site-packages/sklearn/model_selection/_split.py:1600, in BaseShuffleSplit.split(self, X, y, groups)
   1570 """Generate indices to split data into training and test set.
   1571 
   1572 Parameters
   (...)
   1597 to an integer.
   1598 """
   1599 X, y, groups = indexable(X, y, groups)
-> 1600 for train, test in self._iter_indices(X, y, groups):
   1601     yield train, test

File ~/anaconda3/envs/etbert/lib/python3.10/site-packages/sklearn/model_selection/_split.py:1923, in StratifiedShuffleSplit._iter_indices(self, X, y, groups)
   1921 n_samples = _num_samples(X)
   1922 y = check_array(y, ensure_2d=False, dtype=None)
-> 1923 n_train, n_test = _validate_shuffle_split(
   1924     n_samples,
   1925     self.test_size,
   1926     self.train_size,
   1927     default_test_size=self._default_test_size,
   1928 )
   1930 if y.ndim == 2:
   1931     # for multi-label y, map each distinct row to a string repr
   1932     # using join because str(row) uses an ellipsis if len(row) > 1000
   1933     y = np.array([" ".join(row.astype("str")) for row in y])

File ~/anaconda3/envs/etbert/lib/python3.10/site-packages/sklearn/model_selection/_split.py:2098, in _validate_shuffle_split(n_samples, test_size, train_size, default_test_size)
   2095 n_train, n_test = int(n_train), int(n_test)
   2097 if n_train == 0:
-> 2098     raise ValueError(
   2099         "With n_samples={}, test_size={} and train_size={}, the "
   2100         "resulting train set will be empty. Adjust any of the "
   2101         "aforementioned parameters.".format(n_samples, test_size, train_size)
   2102     )
   2104 return n_train, n_test

ValueError: With n_samples=1, test_size=0.5 and train_size=None, the resulting train set will be empty. Adjust any of the aforementioned parameters.

Hello, when I ran data_process/main.py as the last step of generating the fine-tuning data, I got this error. I then printed the x_payload_test and y_test passed in here:

array(['fd82 8296 9602 0200 0000 0000 0000 0080 8002 0220 2000 00c0 c008 0800 0000 0002 0204 0405 05b4 b401 0103 0303 0308 0801 0101 0104 040201f9 f9df df95 95fd fd82 8296 9603 0380 8012 1272 7210 10df df6f 6f00 0000 0002 0204 0405 05b4 b401 0101 0104 0402 0201 0103 0303 0307fd82 8296 9603 0301 01f9 f9df df96 9650 5010 1001 0100 00bf bffc fc00 0000fd82 8296 9603 0301 01f9 f9df df96 9650 5018 1801 0100 00c2 c202 0200 0000 0050 504f 4f53 5354 5420 202f 2f66 6669 6973 7368 682f 2f67 6773 736c 6c2e 2e6a 6a73 7370 7020 2048 4854 5454 5450 502f 2f31 312e 2e31 310d 0d0a 0a55 5573 7365 6572 722d 2d41 4167 6765 656e 6e74 743a 3a20 204d 4d6f 6f7a 7a69 696c 6c6c 6c61 612f 2f35 352e 2e30 3020 2028 2857 5769 696e 6e64 646f 6f77 7773 7320 204e 4e54 5420 2031 3130 302e 2e30 303b 3b20 2057 5769 696e 6e36 3634 343b 3b20 2078 7836 3634 343b 3b20 2072 7276 763a 3a38 3834 342e 2e30 3029 2920 2047 4765 6563 636b 6b6f 6f2f 2f32 3230 3031 3130 3030 3031 3130 3031 3120 2046 4669 6972 7265 6566 666f01f9 f9df df96 96fd fd82 8298 9809 0950 5010 1000 00ed ed8f 8f5f 5f00 0000 0000 0000 0000 0000 0000 0000'],
      dtype='<U1085')

array([0])

There is only one element in each.
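
The traceback itself is scikit-learn refusing to split a single sample. A minimal guard (a sketch, not the repository's fix) drops classes that are too small before calling StratifiedShuffleSplit:

import numpy as np

def drop_rare_classes(X, y, min_per_class=2):
    """Keep only classes with enough samples to land on both sides of a split."""
    labels, counts = np.unique(y, return_counts=True)
    keep = np.isin(y, labels[counts >= min_per_class])
    return X[keep], y[keep]

# e.g. x_payload_test, y_test = drop_rare_classes(x_payload_test, y_test)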

About the configuration of two fine-tuning strategies in your code.

While reading your paper, we noted that packet-level fine-tuning outperforms flow-level fine-tuning in most tasks.

Could you tell us how to use packet-level fine-tuning in this code for comparison? You compare the two fine-tuning strategies in every task, but we did not find a configuration for selecting between them in "run_classification.py".

Or do the two strategies simply correspond to different preprocessing of the raw traffic?

Thank you.

Error when running the finetuning code

Hi authors,
I keep encountering the following error when trying to reproduce your code. Have you run into this?

../aten/src/ATen/native/cuda/Loss.cu:271: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [5,0,0] Assertion t >= 0 && t < n_classes failed.
../aten/src/ATen/native/cuda/Loss.cu:271: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [8,0,0] Assertion t >= 0 && t < n_classes failed.
0%| | 0/10 [00:01<?, ?it/s]
Traceback (most recent call last):
File "", line 4, in
File "", line 182, in train_model
File "/home/satyandra/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "", line 61, in forward
File "/home/satyandra/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/satyandra/.local/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 211, in forward
return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/satyandra/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2689, in nll_loss
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: CUDA error: device-side assert triggered
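
This device-side assert usually means a target label lies outside [0, n_classes). A quick hedged check, where train_labels and labels_num are placeholders for your label ids and the class count given to the model:

import torch

def check_labels(train_labels, labels_num):
    labels = torch.tensor(train_labels)
    lo, hi = labels.min().item(), labels.max().item()
    # NLLLoss asserts on GPU when any target id is negative or >= labels_num.
    assert 0 <= lo and hi < labels_num, f"labels span [{lo}, {hi}], need < {labels_num}"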

No module named 'build_vector_dataset'

File ./ET-BERT-main/data_process/main.py:20, in <module>
18 import shutil
19 import dataset_generation
---> 20 import build_vector_dataset
21
22 import data_preprocess

ModuleNotFoundError: No module named 'build_vector_dataset'

Hello, I find that this module is in neither the project nor the dependencies, and it does not seem to be referenced anywhere. Is the import redundant?

[Fine-tuning Problem] No module named 'uer'

Hello, when reproducing your fine-tuning work I keep getting the following error and don't know how to solve it. Looking forward to your answer.

Traceback (most recent call last):
  File "fine-tuning/run_classifier.py", line 8, in <module>
    from uer.layers import *
ModuleNotFoundError: No module named 'uer'
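
A common cause (an assumption about this setup, not a documented fix): the uer package sits at the repository root, so either launch the script from the root, e.g. python3 fine-tuning/run_classifier.py ..., or put the root on the import path before the uer imports:

import os
import sys

# Make the repository root importable when running from fine-tuning/.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from uer.layers import *  # resolvable once the root is on sys.path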

A request for help

Hello. First of all, thank you very much for open-sourcing the ET-BERT model; it has been a huge help to my studies. I have one request: could you send me the flow-level VPN-Service, VPN-App, USTC-TFC and CSTNET-TLS data? It would mean a great deal to me.
Thank you again; you have been a huge help to my studies.
Best wishes for your studies and your life!

About the USTC-TFC dataset

Hello. For the USTC-TFC2016 dataset, we ran flow-level preprocessing with the code you provided, and many classes end up with zero samples, because the flows are all smaller than 5 KB and get deleted, or have fewer than 3 packets and get filtered out. Did you not run into this?

Some questions about data preprocessing for pre-training

Hello, I want to pre-train the model from scratch. In the first step of "Reproduce ET-BERT", in "data_preprocess/main.py", I have not found where the pcap files are split or where BURSTs are generated; the code directly calls the preprocess function on the pcap files and then writes the corpus txt files.

Reading further, I found that the "pretrain_dataset_generation" function in "datasets/dataset_generation.py" appears to be the pre-training data pipeline, covering pcap processing and splitting, BURST feature generation, etc. However, in the "get_burst_feature" function called during BURST generation, I did not see where the source/destination IPs and the TCP source/destination ports are removed, which seems to differ from what Section 4.1.2 of the paper describes.

On the other hand, line 64 of "data_preprocess/main.py" removes the Ethernet and IP headers, but it does not seem to remove the TCP source and destination port numbers. So what I want to know is: how do I construct pre-training data from arbitrary pcap files?

Thank you.

Question about Pre-Trained Model provided in Github.

First of all, thanks for your great research work!

I just want to know whether the pre-trained model is ET-BERT(packet), or whether it also covers ET-BERT(flow).

I want to do a flow-level classification task (where the input is a flow), and it seems that the pre-trained model you provide through GitHub is packet-level. (Please tell me if I am wrong.)

Is it fine to use the pre-trained model and fine-tune it on a flow-level dataset preprocessed by "/data_process/main.py" with "dataset_level = flow"?

Or is there a pre-trained "ET-BERT(flow)" model?

Thanks a lot.

Hello, I ran into some problems with your data-processing code

I used /data_process/main.py to process the VPN-nonVPN dataset on my computer, to prepare it for fine-tuning.

Following the document under data_process/, I changed the following variables:
_category = 12
pcap_path, dataset_save_path, samples, features, dataset_level = "F:\shi\dataset\VPN-nonVPN", "F:\shi\dataset\result\", [5000], ["payload"], "packet"
splitcap = True (line 54)
splitcap_finish = 0
and manually created the splitcap folder under pcap_path.
Then I ran it and got the error shown in the screenshot. I don't know where the problem is; do you have any suggestions?
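
One likely culprit, assuming the unseen error is path-related: the Windows paths contain unescaped backslashes, and a string ending in a backslash right before the closing quote ("F:\shi\dataset\result\") is itself a SyntaxError. Raw strings plus os.path.join avoid both problems:

import os

pcap_path = r"F:\shi\dataset\VPN-nonVPN"
# A raw string cannot end in a backslash, so append the separator explicitly.
dataset_save_path = os.path.join(r"F:\shi\dataset", "result") + os.sep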

About the packet string truncation in pre-training.

(This issue opens by quoting, verbatim, the earlier question "Some questions about data preprocessing for pre-training" above; the author's reply follows.)

Thanks for your interest in our work.
The data_preprocess directory is in fact what builds the vocabulary, and I have updated the directory name.
dataset_generation.py is the key code for data preprocessing; the removal of fields such as IP and port happens on lines 151, 307 and 313. The "data_preprocess/main.py" you mention also performs the corresponding processing, and the removed content includes the IP and port information. The truncation position in our experiments is 76, which already covers the MAC, IP, and TCP port information.

For processing arbitrary pcap files, I will later add a README that introduces this separately, as the process is not easy to follow from the code.

Originally posted by @linwhitehat in #7 (comment)

I have read your updated code, and I still have some doubts:

1. In "vocab_process/main.py", line 64, the code is 'words.decode()[68:]'. Should it be 'words.decode()[76:]'? The first 34 bytes of a packet do not yet include the TCP source and destination port numbers.

2. In the pre-training data processing stage, at line 111 of "data_process/dataset_generation.py", in get_burst_feature(), the code is 'Packet_string = data.decode()[: 2 * payload_len]'. Should it be 'data.decode()[76: 76 + 2 * payload_len]'? The first 38 bytes need to be truncated.

Thank you.
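
The offsets under discussion, spelled out (2 hex characters per byte; the sizes assume untagged Ethernet and a minimal 20-byte IPv4 header with no options):

ETH_HDR = 14   # Ethernet header, bytes
IPV4_HDR = 20  # minimal IPv4 header, bytes
TCP_PORTS = 4  # TCP source + destination ports, bytes

print(2 * (ETH_HDR + IPV4_HDR))              # 68: strips MAC and IP only
print(2 * (ETH_HDR + IPV4_HDR + TCP_PORTS))  # 76: also strips the TCP ports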
