omni-us / research-ganwriting

Source code for ECCV20 "GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images"

License: MIT License

Python 99.08% Shell 0.92%
generative-adversarial-network handwriting-synthesis

research-ganwriting's Introduction


GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images

A novel method that is able to produce credible handwritten word images by conditioning the generative process with both calligraphic style features and textual content.

Architecture

GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
Lei Kang, Pau Riba, Yaxing Wang, Marçal Rusiñol, Alicia Fornés, and Mauricio Villegas
Accepted to ECCV2020.

Software environment:

  • Ubuntu 16.04 x64
  • Python 3.7
  • PyTorch 1.4

Setup

To install the required dependencies run the following command in the root directory of the project: pip install -r requirements.txt

Dataset preparation

The main experiments are run on IAM, since it is a multi-writer dataset. Once you have obtained a model pretrained on IAM, you can also evaluate it on other datasets, such as GW, RIMES, Esposalles and CVL.

How to train it?

First download the IAM word-level dataset, then execute prepare_dataset.sh [folder of iamdb dataset] to prepare the dataset for training.
Afterwards, set the path to your folder in load_data.py (search for img_base).
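The edit to load_data.py might look like the following sketch; the path is a placeholder and the surrounding code in the repository may differ:

```python
# In load_data.py, point img_base at the folder produced by prepare_dataset.sh
# (placeholder path -- substitute your own):
img_base = '/path/to/prepared/iam/words/'
```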

Then run the training with:

./run_train_scratch.sh

Note: During training, two folders will be created: imgs/ contains the intermediate results of one batch (see the function write_image in modules_tro.py for details), and save_weights/ contains saved weights ending in .model.

If you have already trained a model, you can use that model for further training by running:

./run_train_pretrain.sh [id]

In this case, [id] should be the id of the model in the save_weights directory, e.g. 1000 if you have a model named contran-1000.model.

How to test it?

We provide two test scripts starting with tt.:

  • tt.test_single_writer.4_scenarios.py: Please refer to Figure 4 of our paper for details. At the beginning of this file, uncomment the relevant blocks in turn to run the 4 scenario experiments one by one.

  • tt.word_ladder.py: Please refer to Figure 7 of our paper to check the details. It's fun:-P

Citation

If you use the code for your research, please cite our paper:

To be updated...

research-ganwriting's People

Contributors

hendraet, leitro


research-ganwriting's Issues

Errors in Architecture Overview


When looking at the image of the architecture overview, I noticed two things that were reflected differently in the code.

  1. The noise that is added to is not present in the code. Am I missing something here?
  2. The cubes that represent the shape of are misleading because they imply that when merging and the channel dimension changes. However, the linear layer here halves the number of channels of the combined feature maps. This way the number of channels is .
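The merge described in point 2 can be sketched as a linear map from 2C concatenated channels back to C. All sizes and weights below are hypothetical, chosen only to illustrate the shape bookkeeping, not the repository's actual layer:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # hypothetical channel count for each feature map

content_feat = rng.normal(size=(C,))  # content feature at one spatial position
style_feat = rng.normal(size=(C,))    # style feature at the same position

# Concatenation doubles the channel dimension to 2C ...
merged = np.concatenate([content_feat, style_feat])
assert merged.shape == (2 * C,)

# ... and a linear layer with a (C, 2C) weight matrix halves it back to C,
# which is the behaviour the issue says the figure does not reflect.
W = rng.normal(size=(C, 2 * C)) * 0.1
out = W @ merged
assert out.shape == (C,)
```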

If I am not mistaken or have missed something, would it be possible to fix those issues?
Besides those minor flaws, the graphic is really beautiful and provides a great overview of the network's architecture.

Usecase question

Hi. I don't have a deep understanding of all this, so please bear with me.

I am working on a hobby project involving the ornate handwriting of a medieval manuscript. The manuscript is in Latin. There are no letter "j"s (i is used), no "k"s (didn't exist), no "v"s (u is used), no "w"s (didn't exist), and very few "y"s.

Would research-GANwriting be capable of doing either of the following tasks?

Produce the letters that don't exist based on the letters that do.

Produce multiple, unique instances of letters that are few in number. (The letters that exist in abundance slightly vary from one to the next, because this is handwriting. So an "e", for example, looks slightly different every time. I'm asking if MC-GAN could create more "y"s, for example, with each one slightly varying from the others, yet plausibly the product of the original scribe.)

Thank you!

Model not learning

Thanks very much for sharing your work with us.

I wish to ask for your help. I have trained the model from scratch for more than 4000 epochs, but it is not learning. I did not change anything in your code except the directory for IAM. The pre-trained model that you shared was trained for 2000 epochs and it is a thousand times better than the model I trained for 4000 epochs.

Please, let me know if I have to make any changes in your code for learning to take place.

Thanks

Pretrained model

Hi @leitro @hendraet, thanks for sharing your amazing work.

Could you share your pre-trained model so that we can get rid of retraining one and play it right away?

I will be grateful if you could provide your pre-trained model. Thanks in advance.

Fig. 7 code

I was wondering if you could share the actual code for Fig. 7 since the reported code tt.word_ladder.py is related to Fig. 8 of the manuscript. Thanks in advance

Question about the training loss in code

Thanks for releasing the code of your awesome paper. I have a question about the loss calculation in network_pro.py. You generate a new text using the new_ed1 function in load_data.py and use the generated image of the new text string to calculate the discriminator loss, writer classifier loss and the recognition loss together with the generated image of the target text string. What is the purpose of this processing?

Generate line of words

Hi, I was impressed by your great work. I see there is a pretrained model that can generate words of up to 7 characters, but I am wondering: can I use this repo to generate a whole line of words? If I set the max character count to a very large number and retrain a new model, can that new model generate a line?

Problem about extracting style pictures.

Hi, sir, this is a very cool and awesome article, and thank you for sharing the code.

I noticed this in your code:

# NUM_CHANNEL = 15
NUM_CHANNEL = 50

but the paper mentions "K = 15 word images from the same writer w_i".

Did I miss something, or what does "NUM_CHANNEL" mean?
Thank you very much.
:D

OOV FID calculate questions

I have read your article GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images and your code, and I am confused about how to calculate the FID of the OOV words.
I noticed that in your article you mentioned that you prepared 400 unique words to test the OOV performance, but there are only 80 words in your file corpora_english/oov.common_words. I have a few questions:

  1. If it is actually 400 words, could you please give me the word list?
  2. Is the OOV FID calculated by averaging the FID scores over each handwriting style, or by evaluating directly on randomly sampled generated images?
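The two protocols in question 2 can be made concrete with the standard Fréchet distance between Gaussian activation statistics. This is a generic sketch, not the repository's evaluation script: the feature arrays are random stand-ins for Inception activations, and the helper names (`stats`, `fid_from_stats`) are my own.

```python
import numpy as np

def stats(feats):
    """Mean and covariance of a (num_samples, dim) feature matrix."""
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    """Frechet distance: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    # For PSD covariances, the eigenvalues of S1 @ S2 are real and >= 0,
    # and Tr((S1 S2)^(1/2)) equals the sum of their square roots.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_sqrt)

# Toy features standing in for real/generated activations, grouped by style.
rng = np.random.default_rng(0)
real_by_style = [rng.normal(loc=s, size=(200, 8)) for s in range(3)]
fake_by_style = [rng.normal(loc=s + 0.1, size=(200, 8)) for s in range(3)]

# Protocol A: one FID per handwriting style, then average the scores.
per_style = [fid_from_stats(*stats(r), *stats(f))
             for r, f in zip(real_by_style, fake_by_style)]
fid_a = np.mean(per_style)

# Protocol B: pool samples across all styles and compute a single FID.
fid_b = fid_from_stats(*stats(np.concatenate(real_by_style)),
                       *stats(np.concatenate(fake_by_style)))
```

The two protocols generally give different numbers, which is exactly why the question matters for reproducing the reported OOV scores.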
