
FACTOR's Introduction

Detecting Deepfakes Without Seeing Any

Official PyTorch Implementation of "Detecting Deepfakes Without Seeing Any".


False Facts in Deepfake Attacks


(a) Face forgery: the claimed identity is seamlessly blended into the original image. The observed image is accompanied by a false fact, i.e., “an image of Barack Obama”.

(b) Audio-Visual (AV): fake audio is generated to align with the original video or fake video is generated to align with the original audio. Fake media are accompanied by a false fact, that the video and audio describe the same event.

(c) Text-to-Image (TTI): the textual prompt is used by a generative model, e.g., Stable Diffusion, to generate a corresponding image. The fake image is accompanied by a false fact, that the caption and the image describe the same content.

FACTOR


FACTOR leverages the discrepancy between false facts and their imperfect synthesis within deepfakes. By quantifying the similarity using the truth score, computed via cosine similarity, FACTOR effectively distinguishes between real and fake media, enabling robust detection of zero-day deepfake attacks.
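The truth score described above can be sketched as a cosine similarity between an embedding of the claimed fact and an embedding of the observed media. The function below is a minimal illustration, not the repository's actual implementation; the embedding dimensionality and the random inputs are placeholders:

```python
import torch
import torch.nn.functional as F

def truth_score(claim_emb: torch.Tensor, media_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the embedding of the claimed fact
    (e.g., an identity reference or a caption) and the observed media.
    Higher scores indicate the media is more consistent with the claim;
    fakes, whose synthesis of the false fact is imperfect, score lower."""
    return F.cosine_similarity(claim_emb, media_emb, dim=-1)

# Toy example with random 512-dimensional embeddings (sizes are illustrative).
claim = torch.randn(1, 512)
media = torch.randn(1, 512)
score = truth_score(claim, media)  # tensor of shape (1,), values in [-1, 1]
```

In practice a threshold on this score (or a ranking-based metric such as AUC over many samples) separates real from fake media.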

Installation

Create a virtual environment, activate it and install the requirements file:

virtualenv -p /usr/bin/python3 venv
source venv/bin/activate
pip install -r requirements.txt

1. Face-Forgery

Please refer to face-forgery/ and the instructions there for running the face-forgery model.

2. Audio-Visual

Please refer to audio-visual/ and the instructions there for running the audio-visual model.

Citation

If you find this useful, please cite our paper:

@article{reiss2023detecting,
  title={Detecting Deepfakes Without Seeing Any},
  author={Reiss, Tal and Cavia, Bar and Hoshen, Yedid},
  journal={arXiv preprint arXiv:2311.01458},
  year={2023}
}

factor's People

Contributors

talreiss


factor's Issues

Python versioning

I am getting this error while running face_sdk/detect_and_align.py.

  File "/home//Downloads/FACTOR/FaceX-Zoo/face_sdk/detect_and_align.py", line 2, in <module>
    from core.model_handler.face_alignment.FaceAlignModelHandler import FaceAlignModelHandler
  File "/home//Downloads/FACTOR/FaceX-Zoo/face_sdk/core/model_handler/face_alignment/FaceAlignModelHandler.py", line 7, in <module>
    logging.config.fileConfig("config/logging.conf")
  File "/usr/lib/python3.10/logging/config.py", line 80, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib/python3.10/logging/config.py", line 135, in _install_handlers
    section = cp["handler_%s" % hand]
  File "/usr/lib/python3.10/configparser.py", line 965, in __getitem__
    raise KeyError(key)
KeyError: 'handler_fileHandlers'

Can you tell me which Python version you used?

requirements.txt installation error

I was trying to install the requirements by running 'pip install -r requirements.txt', but I'm getting an error while installing faiss-gpu:
" WARNING: Generating metadata for package faiss-gpu produced metadata for project name faiss-cpu. Fix your #egg=faiss-gpu fragments.
Discarding https://files.pythonhosted.org/packages/51/85/7a7490dbecaea9272953b88e236a45fe8c47571a069bc28b352f0b224ea3/faiss-gpu-1.7.1.tar.gz (from https://pypi.org/simple/faiss-gpu/): Requested faiss-cpu from https://files.pythonhosted.org/packages/51/85/7a7490dbecaea9272953b88e236a45fe8c47571a069bc28b352f0b224ea3/faiss-gpu-1.7.1.tar.gz (from -r requirements.txt (line 14)) has inconsistent name: expected 'faiss-gpu', but metadata has 'faiss-cpu'
ERROR: Could not find a version that satisfies the requirement faiss-gpu==1.7.1 (from versions: 1.5.3, 1.6.0, 1.6.1, 1.6.3, 1.6.4, 1.6.4.post2, 1.6.5, 1.7.0, 1.7.1, 1.7.1.post1, 1.7.1.post2)
ERROR: No matching distribution found for faiss-gpu==1.7.1"

This is the error; can you look into it?

Testing on Custom Video

How can we test this model on custom data (video)? Is it necessary to download the whole FakeAV dataset to run the audio-visual model on a custom deepfake/real video? I would be really thankful if you could guide me through it.

Thanks
Shivin

audio-visual data categories

I observed that there are several files in the "data" folder containing paths, and I was wondering what their purpose is. They seem to be a subset of the original set.

Face_forgery eval performances CelebDF

Hello,

I tested the face_forgery method with the CelebDF dataset. I used /Celeb_real for identity references and /Celeb_synthesis for the claimed identities. As in the paper, I used 32 frames for each video. Unfortunately, I cannot reproduce the results reported in the paper and get around 50% AUC (i.e., no better than chance).

Can you give me some info on what I misunderstood?

Key 'input_modality' not in 'AVHubertPretrainingConfig'

Exception occurred: ConfigKeyError
Key 'input_modality' not in 'AVHubertPretrainingConfig'
full_key: input_modality
reference_type=Optional[AVHubertPretrainingConfig]
object_type=AVHubertPretrainingConfig
File "/work/liuzehua/task/DF/FACTOR/av_hubert/fairseq/fairseq/dataclass/utils.py", line 507, in merge_with_parent
merged_cfg = OmegaConf.merge(dc, cfg)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/fairseq/fairseq/tasks/init.py", line 40, in setup_task
cfg = merge_with_parent(dc(), cfg, remove_missing=remove_missing)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/avhubert/hubert_asr.py", line 465, in build_model
task_pretrain = tasks.setup_task(w2v_args.task)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/fairseq/fairseq/models/init.py", line 106, in build_model
return model.build_model(cfg, task)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/fairseq/fairseq/tasks/fairseq_task.py", line 355, in build_model
model = models.build_model(cfg, self, from_checkpoint)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/fairseq/fairseq/checkpoint_utils.py", line 502, in load_model_ensemble_and_task
model = task.build_model(cfg.model, from_checkpoint=True)
File "/work/liuzehua/task/DF/FACTOR/av_hubert/avhubert/inference.py", line 131, in
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
omegaconf.errors.ConfigKeyError: Key 'input_modality' not in 'AVHubertPretrainingConfig'
full_key: input_modality
reference_type=Optional[AVHubertPretrainingConfig]
object_type=AVHubertPretrainingConfig

The failure occurs in utils.py at merged_cfg = OmegaConf.merge(dc, cfg).
How can I fix this issue?

How to build the label.npy file?

I want to use face-forgery to test two videos, one real and one fake.
I used FaceX-Zoo to get the .npy files, but I don't know how to build label.npy.
Should I use an array like [1, 2, ...], convert it to a NumPy array, and save it as .npy?

My dataset looks like:

/content/output_data/
├── fakeobama
│   ├── frame0.jpg
│   ├── frame100.jpg
│   ├── frame101.jpg
│   ├── frame102.jpg
      └── frame9.jpg
└── realobama
    ├── frame0.jpg
    ├── frame100.jpg
    ├── frame101.jpg
    └── frame9.jpg
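One possible way to produce a label.npy for a layout like the one above is to assign one integer label per folder and save the array with NumPy. Note that the label convention here (0 = real, 1 = fake) is an assumption for illustration; check the repository's own instructions for the encoding it actually expects:

```python
import numpy as np

# Hypothetical labels for the two folders above; the repo's convention may differ.
# 1 = fake (fakeobama), 0 = real (realobama) -- an assumed encoding.
labels = np.array([1, 0], dtype=np.int64)
np.save("label.npy", labels)

# Loading it back returns the same array.
loaded = np.load("label.npy")
```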
