GazeCorrection: Self-Guided Eye Manipulation in the Wild using Self-Supervised Generative Adversarial Networks
Code for the paper "GazeCorrection: Self-Guided Eye Manipulation in the Wild using Self-Supervised Generative Adversarial Networks".
Project page | Paper
Gaze correction aims to redirect a person's gaze toward the camera by manipulating the eye region, and it can be viewed as a specific image resynthesis problem. Gaze correction has a wide range of real-life applications, such as taking a photo while looking at the camera. In this paper, we propose a novel inpainting-based method that learns to fill in the missing eye regions of a face image with new content representing the corrected gaze. Moreover, our model does not require training data labeled with head pose or eye angle information, so the training data is easy to collect. To preserve the identity of the eye region in the original input, we propose a self-guided pretrained model that learns angle-invariant features. Experiments show that our model achieves very compelling gaze-correction results on an in-the-wild dataset collected from the web, which will be introduced in detail.
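As a minimal sketch of the inpainting setup described above (the helper name, box format, and coordinates are hypothetical and not taken from the released code), the eye regions of an input face can be masked out before being handed to the generator, which then synthesizes gaze-corrected content for the masked pixels:

```python
import numpy as np

def mask_eye_regions(image, eye_boxes, fill_value=0):
    """Zero out the eye regions so an inpainting network can
    synthesize new, gaze-corrected content there.

    image:     H x W x 3 uint8 array
    eye_boxes: list of (x1, y1, x2, y2) boxes (hypothetical format;
               the actual dataset supplies its own eye annotations)
    """
    masked = image.copy()
    mask = np.zeros(image.shape[:2], dtype=np.float32)
    for x1, y1, x2, y2 in eye_boxes:
        masked[y1:y2, x1:x2] = fill_value  # remove original eye pixels
        mask[y1:y2, x1:x2] = 1.0           # 1 marks pixels the generator must fill
    return masked, mask

# Toy example: a uniform 64x64 "face" with two 10x8 eye boxes.
face = np.full((64, 64, 3), 128, dtype=np.uint8)
masked, mask = mask_eye_regions(face, [(12, 20, 22, 28), (42, 20, 52, 28)])
```

The masked image and the binary mask would then be concatenated as generator input, a common convention in inpainting models.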
- Clone this repo:
git clone https://github.com/zhangqianhui/GazeCorrection.git
- Download the NewGaze dataset
Download the tar of the NewGaze dataset from the Google Drive link.
cd your_path
tar -xvf NewGazeData.tar
- Pretraining Model
We provide the self-guided pretraining model in the directory ./sg_pre_model_g
- Train the model with your own parameters
(1) Edit the config.py file to select the proper hyper-parameters.
(2) Change "base_path" to the path of your NewGaze dataset.
Then run:
python main.py
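For illustration, the fields to check in config.py might look like the following. The field names and values here are hypothetical; consult the actual config.py for the exact names and defaults:

```python
# Hypothetical excerpt of config.py -- field names are illustrative only.
base_path = "/home/you/datasets/NewGaze"  # root of the unpacked NewGaze dataset
batch_size = 16                           # adjust to fit your GPU memory
learning_rate = 1e-4                      # typical Adam learning rate for GANs
```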
- Comparison Results
- Experimental Results