Overview of our proposed method. It mainly contains two parts: (a) the offline generation of multi-scale component dictionaries from large amounts of high-quality images with diverse poses and expressions. K-means is adopted to generate K clusters for each component (i.e., left/right eyes, nose, and mouth) at different feature scales. (b) The restoration process, in which dictionary feature transfer (DFT) blocks provide the reference details in a progressive manner. Here, the DFT-i block takes the Scale-i component dictionaries as reference at the same feature level.
(a) Offline generation of multi-scale component dictionaries.
(b) Architecture of our DFDNet for dictionary feature transfer.
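The offline dictionary step above clusters component features with K-means. As a rough illustration of that clustering (not the repo's actual code, which operates on deep VGG features), here is a minimal Lloyd's-algorithm sketch over plain feature vectors; the function name `kmeans` and all parameters are illustrative:

```python
import random

def kmeans(features, k, iters=10, seed=0):
    """Cluster feature vectors into k groups (plain Lloyd's algorithm sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    for _ in range(iters):
        # assign each feature vector to its nearest center
        clusters = [[] for _ in range(k)]
        for f in features:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            clusters[j].append(f)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers
```

The resulting cluster centers play the role of the dictionary atoms that the DFT blocks later match against.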
Download the files from one of the following URLs and put them into ./.
- BaiduNetDisk (s9ht)
- GoogleDrive
- Crop face from the whole image.
```
cd ./CropFace
python crop_face_dlib.py
```
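The crop step cuts the face region out of the whole image. The script itself is not reproduced here; as a hedged sketch of the typical crop-with-margin logic (the function name, `margin` parameter, and box convention are assumptions, with the box given as pixel coordinates `(left, top, right, bottom)` from a face detector such as dlib's):

```python
def expand_box(left, top, right, bottom, img_w, img_h, margin=0.5):
    """Expand a detected face box by `margin` of its size on each side,
    clamped to the image bounds, so the crop keeps hair/chin context."""
    w, h = right - left, bottom - top
    return (
        max(0, int(left - margin * w)),
        max(0, int(top - margin * h)),
        min(img_w, int(right + margin * w)),
        min(img_h, int(bottom + margin * h)),
    )
```

For example, a 100×100 detection at (100, 100) in a 640×480 image expands to (50, 50, 250, 250) with the default margin.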
- Compute the facial landmarks.
```
cd ./FaceLandmarkDetection
python get_face_landmark.py
```
(You can change the image path and save path at lines 17–18. This code is mainly borrowed from this work.)
- Run the face restoration.
```
python test_FaceDict.py
```
(You can run this code directly on the given test images and landmarks without steps 1 and 2. The image paths can be revised at lines 100–103.)
| Input | Results |
| --- | --- |
@InProceedings{Li_2020_ECCV,
author = {Li, Xiaoming and Chen, Chaofeng and Zhou, Shangchen and Lin, Xianhui and Zuo, Wangmeng and Zhang, Lei},
title = {Blind Face Restoration via Deep Multi-scale Component Dictionaries},
booktitle = {ECCV},
year = {2020}
}