
Peekaboo: Interactive Video Generation via Masked-Diffusion

PyTorch implementation of Peekaboo, a training-free and zero-latency interactive video generation pipeline

Peekaboo: Interactive Video Generation via Masked-Diffusion
Yash Jain1*, Anshul Nasery2*, Vibhav Vineet3, Harkirat Behl3
1Microsoft, 2University of Washington, 3Microsoft Research
*denotes equal contribution

[Teaser: conditioned generations and their bbox conditions for "A Horse galloping through a meadow", "A Panda playing Peekaboo", and "An Eagle flying in the sky".]

Quickstart 🚀

Follow the instructions below to download and run Peekaboo on your own prompts and bounding-box (bbox) inputs. These instructions require a GPU with ~40GB VRAM for zeroscope and ~13GB VRAM for modelscope. If you don't have a GPU, you may need to change the default configuration from cuda to cpu, or follow the instructions given here.
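If you'd rather pick the device automatically than edit the configuration by hand, a minimal sketch (the `pick_device` helper is an assumption for illustration, not part of the repo; it treats the presence of `nvidia-smi` as a proxy for a usable GPU):

```python
import shutil

def pick_device():
    # Assumption: nvidia-smi on PATH implies a CUDA-capable GPU.
    # Peekaboo's own configuration may expose this switch differently.
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

print(pick_device())  # "cuda" or "cpu", depending on the machine
```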

Set up a conda environment:

conda env create -f env.yaml
conda activate peekaboo

Generate peekaboo videos through CLI:

python src/generate.py --model zeroscope --prompt "A panda eating bamboo in a lush bamboo forest" --fg_object "panda"

# Optionally, you can specify parameters to tune your result:
# python src/generate.py --model zeroscope --frozen_steps 2 --seed 1234 --num_inference_steps 50 --output_path src/demo --prompt "A panda eating bamboo in a lush bamboo forest" --fg_object "panda"

Or launch your own interactive editing Gradio app:

python src/app_modelscope.py 


(For advice on how to get the best results by tuning parameters, see the Tips section).

Changing bbox

We recommend using our interactive Gradio demo to play with different input bboxes. Alternatively, editing the bbox_mask variable in generate.py achieves similar results.
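If you edit bbox_mask directly, the mask is conceptually a per-frame binary grid that is 1 inside the foreground box. A minimal sketch (the `make_bbox_mask` helper and its `(frames, height, width)` layout are assumptions for illustration; check generate.py for the actual shape and dtype Peekaboo expects):

```python
import numpy as np

def make_bbox_mask(num_frames, height, width, bbox):
    """Hypothetical helper: 1.0 inside the (x0, y0, x1, y1) box, 0.0 elsewhere."""
    x0, y0, x1, y1 = bbox
    mask = np.zeros((num_frames, height, width), dtype=np.float32)
    mask[:, y0:y1, x0:x1] = 1.0  # same static box for every frame
    return mask

# A moving box can be built by assigning a different bbox per frame index.
mask = make_bbox_mask(num_frames=24, height=64, width=64, bbox=(8, 16, 40, 56))
print(mask.shape)  # (24, 64, 64)
```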

Datasets and Evaluation

Please follow the README to download the datasets and learn more about Peekaboo's evaluation.

Tips

If you're not getting the quality of result you want, there may be a few reasons:

  1. Is the video not gaining control, or degrading? The number of frozen_steps may be off. This value dictates how long Peekaboo's attention-mask modulation acts during the generation process: too few steps will not affect the output video, while too many will degrade video quality. The default frozen_steps is 2, but it is not necessarily optimal for every (prompt, bbox) pair. Try:
    • increasing frozen_steps, or
    • decreasing frozen_steps.
  2. Poor video quality: the base model may not support the prompt well — in other words, it generates a poor-quality video for the given prompt even without Peekaboo. Try changing the model or improving the prompt.
  3. Try generating results with different random seeds by changing the seed parameter and running generation multiple times.
  4. Increasing the number of inference steps sometimes improves results.
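Tips 1 and 3 combine naturally into a small parameter sweep. A sketch that only builds the CLI commands (flags as shown in the Quickstart; actually running each command is left to you, since every generation is expensive):

```python
import shlex

def sweep_commands(prompt, fg_object, seeds, frozen_steps_values, model="zeroscope"):
    """Build one generate.py command per (seed, frozen_steps) pair."""
    commands = []
    for seed in seeds:
        for fs in frozen_steps_values:
            args = [
                "python", "src/generate.py",
                "--model", model,
                "--frozen_steps", str(fs),
                "--seed", str(seed),
                "--prompt", prompt,
                "--fg_object", fg_object,
            ]
            commands.append(shlex.join(args))
    return commands

for cmd in sweep_commands("A panda eating bamboo", "panda", [0, 1234], [2, 3]):
    print(cmd)  # run each with subprocess.run(shlex.split(cmd)) when ready
```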

BibTeX

@article{jain2023peekaboo,
  title={PEEKABOO: Interactive Video Generation via Masked-Diffusion},
  author={Jain, Yash and Nasery, Anshul and Vineet, Vibhav and Behl, Harkirat},
  journal={arXiv preprint arXiv:2312.07509},
  year={2023}
}

Comments

If you implement Peekaboo in newer text-to-video models, feel free to raise a PR. 😄

This README is inspired by InstructPix2Pix.


