Paper-to-code implementation of the pix2pix GAN in native PyTorch
Original paper: arXiv
Please check out the Medium article for a quick overview.
Train Pix2Pix GAN models for:
- Colorizing and enhancing old, grainy black & white footage
- Inpainting images to fix scratches, missing pixels, etc.
- Generating faces from doodles
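The core model behind all three tasks is the paper's conditional GAN. As a minimal sketch of its 70×70 PatchGAN discriminator (layer widths and strides follow the paper; the class name `PatchDiscriminator` and the helper `block` are my own, and this is an illustration, not this repo's exact code):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """70x70 PatchGAN: scores overlapping patches as real/fake.
    Input image and (real or generated) target are concatenated
    along the channel axis, hence 6 input channels for RGB pairs."""
    def __init__(self, in_channels: int = 6):
        super().__init__()

        def block(c_in, c_out, stride, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride, 1, bias=not norm)]
            if norm:
                layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *block(in_channels, 64, 2, norm=False),  # 256 -> 128
            *block(64, 128, 2),                      # 128 -> 64
            *block(128, 256, 2),                     # 64  -> 32
            *block(256, 512, 1),                     # 32  -> 31
            nn.Conv2d(512, 1, 4, 1, 1),              # 31  -> 30 patch logits
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))
```

For a 256×256 input pair this yields a 30×30 grid of logits, each scoring one receptive-field patch rather than the whole image.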
I downloaded two videos (the final scenes of Pursuit of Happiness, and a funniest-talk-show-moments compilation), split them into frames, and generated a grainy b/w version of each frame using some basic PIL functions. That gave me the image pairs for my training data:
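That degradation step can be sketched with PIL and NumPy. The function below (grayscale conversion plus zero-mean Gaussian "film grain"; the noise level and function name are assumptions, not the repo's exact values) is one minimal version:

```python
import numpy as np
from PIL import Image

def make_grainy_bw(frame: Image.Image, noise_std: float = 25.0) -> Image.Image:
    """Turn a color frame into a grainy black & white version:
    convert to grayscale, then add zero-mean Gaussian noise as grain."""
    gray = np.asarray(frame.convert("L"), dtype=np.float32)
    grain = np.random.normal(0.0, noise_std, size=gray.shape)
    noisy = np.clip(gray + grain, 0, 255).astype(np.uint8)
    return Image.fromarray(noisy, mode="L")
```

Running it over every extracted frame produces the (degraded input, clean target) pairs used for training.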
Check out YouTube for the video with audio (recommended): Youtube Link
For training, I used the VOC2012 dataset and generated a “distorted version” of each image by randomly blacking out patches of pixels and drawing black lines and blobs. I tested on a held-out portion of the dataset.
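One way to produce such a distorted version is with PIL's `ImageDraw` (the counts, sizes, and function name below are illustrative assumptions, not the repo's exact parameters):

```python
import random
from PIL import Image, ImageDraw

def distort(img: Image.Image, n_lines: int = 3, n_blobs: int = 2,
            n_patches: int = 1) -> Image.Image:
    """Damage an image with black lines, elliptical blobs, and
    blacked-out rectangular patches, as inpainting training input."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    for _ in range(n_lines):
        draw.line([(random.randrange(w), random.randrange(h)),
                   (random.randrange(w), random.randrange(h))],
                  fill=0, width=random.randint(1, 4))
    for _ in range(n_blobs):
        x, y = random.randrange(w), random.randrange(h)
        r = random.randint(3, max(4, w // 20))
        draw.ellipse([x - r, y - r, x + r, y + r], fill=0)
    for _ in range(n_patches):
        x, y = random.randrange(w), random.randrange(h)
        pw, ph = random.randint(4, max(5, w // 8)), random.randint(4, max(5, h // 8))
        draw.rectangle([x, y, x + pw, y + ph], fill=0)
    return out
```

The undamaged original serves as the target, so the pairs need no manual labeling.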
For training, I generated a “doodle” for each face in the 10k Faces dataset using a combination of OpenCV's facial-landmark detection and Holistically-Nested Edge Detection (HED).
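The real pipeline uses OpenCV landmarks plus a pretrained HED model; as a rough, self-contained illustration of the edge-map idea only, here is a sketch using PIL's `FIND_EDGES` filter as a lightweight stand-in for HED (function name and threshold are assumptions):

```python
from PIL import Image, ImageFilter, ImageOps

def face_to_doodle(face: Image.Image, threshold: int = 40) -> Image.Image:
    """Approximate a 'doodle' input: extract edges, binarize, and
    invert so we get dark strokes on a white background.
    NOTE: FIND_EDGES is a crude stand-in for the HED network."""
    edges = face.convert("L").filter(ImageFilter.FIND_EDGES)
    binary = edges.point(lambda p: 255 if p > threshold else 0)
    return ImageOps.invert(binary)
```

The resulting (doodle, photo) pairs are what the pix2pix generator learns to invert.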
For testing, I set up a webcam to read hand-drawn doodles on Post-it notes.
Check out the YouTube video: Youtube Link