This repository contains the code for the ICLR 2024 paper "Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature". It borrows and extends some code from DetectGPT.
Paper | LocalDemo | OpenReview
| Method | 5-Model Generations ↑ | ChatGPT/GPT-4 Generations ↑ | Speedup ↑ |
|---|---|---|---|
| DetectGPT | 0.9554 | 0.7225 | 1x |
| Fast-DetectGPT | 0.9887 (relative ↑ 74.7%) | 0.9338 (relative ↑ 76.1%) | 340x |
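For intuition, when the sampling model and the scoring model are the same, the conditional probability curvature can be computed analytically as a normalized discrepancy between a passage's log-likelihood and the model's expected log-likelihood at each position. The sketch below is a simplified NumPy illustration of that idea, not the repository's implementation; the function name and toy inputs are ours.

```python
import numpy as np

def conditional_curvature(logits, labels):
    """Analytical conditional probability curvature (simplified sketch).

    logits: [seq_len, vocab] next-token logits from the scoring model.
    labels: [seq_len] observed next-token ids of the passage.
    """
    # Log-softmax with max-subtraction for numerical stability.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    probs = np.exp(log_probs)
    # Log-likelihood of the observed tokens.
    log_likelihood = log_probs[np.arange(len(labels)), labels]
    # Per-position expectation and variance of log p under the model itself.
    mean = (probs * log_probs).sum(axis=-1)
    var = (probs * log_probs ** 2).sum(axis=-1) - mean ** 2
    # Normalized discrepancy: higher values suggest machine-generated text.
    return (log_likelihood.sum() - mean.sum()) / np.sqrt(var.sum())
```

In this sketch, a passage composed of the model's most-likely tokens scores positive, while a passage of unlikely tokens scores negative, which matches the paper's observation that machine-generated text sits at a positive curvature.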
- Python 3.8
- PyTorch 1.10.0
- Set up the environment:
  ```
  pip install -r requirements.txt
  ```
(Note: our experiments are run on a single Tesla A100 GPU with 80GB of memory.)
Please run the following command:
```
python scripts/local_infer.py --data <path to a CSV file with 'text', 'label', and 'split' columns>
```
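An input file in the expected shape can be assembled with the standard library. The file name and example rows below are hypothetical, and the column semantics (label 1 = machine-generated, `split` naming the evaluation partition) are our assumption rather than something the script documents:

```python
import csv

# Hypothetical example rows; labels and split values are our assumption.
rows = [
    {"text": "I walked to the shop and bought some bread.", "label": 0, "split": "test"},
    {"text": "Certainly! Here is a short story about a fox.", "label": 1, "split": "test"},
]

with open("demo.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "label", "split"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `demo.csv` could then be passed via `--data demo.csv`.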
The following folders are used in our experiments:
- ./exp_main -> experiments on 5-model generations (main.sh).
- ./exp_gpt3to4 -> experiments on GPT-3, ChatGPT, and GPT-4 generations (gpt3to4.sh).

(Note: for convenient reproduction, we share the generations from GPT-3, ChatGPT, and GPT-4 in exp_gpt3to4/data.)
If you find this work useful, please cite it with the following BibTeX entry:
```
@inproceedings{bao2023fast,
  title={Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature},
  author={Bao, Guangsheng and Zhao, Yanbin and Teng, Zhiyang and Yang, Linyi and Zhang, Yue},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2023}
}
```