Comments (6)
In my opinion, setting the SAVE_MODEL_STEPS_PERIOD parameter to 1000 is fair enough, but it depends on the total number of steps (number of images / batch size * number of epochs). So if you prefer to increase the number of epochs, you should also increase the SAVE_MODEL_STEPS_PERIOD parameter.
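The arithmetic above can be sketched quickly; the dataset size and batch size below are made-up illustrative values, not numbers from the repo:

```python
# Rough arithmetic for picking SAVE_MODEL_STEPS_PERIOD.
# num_images and batch_size here are assumptions for illustration only.
num_images = 10000
batch_size = 4
num_epochs = 5

steps_per_epoch = num_images // batch_size          # 2500 steps per epoch
total_steps = steps_per_epoch * num_epochs          # 12500 total training steps

# If you want roughly 10 snapshots over the whole run:
save_period = total_steps // 10                     # 1250
print(total_steps, save_period)
```

So doubling the number of epochs doubles the total steps, and keeping the same number of saved previews means scaling SAVE_MODEL_STEPS_PERIOD up by the same factor.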
Of course, you can use images that are not 256x256. Currently, if images are bigger than 256 then the input pipeline resizes them to 256x256; you can change that in the config file if you want.
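The resize step can be sketched like this; a minimal nearest-neighbor version in plain Python, purely illustrative, since the repo's actual input pipeline may use a different interpolation method:

```python
# Minimal sketch: downscale an image to 256x256 with nearest-neighbor
# sampling. Illustrative only; not the repo's actual resize code.
def resize_nearest(image, out_h=256, out_w=256):
    in_h, in_w = len(image), len(image[0])
    return [
        [image[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
        for i in range(out_h)
    ]

# A fake 512x512 "image" where each pixel stores its (row, col) coordinates.
big = [[(r, c) for c in range(512)] for r in range(512)]
small = resize_nearest(big)
print(len(small), len(small[0]))  # 256 256
```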
And what do you mean by:
"to obtain a slightly better result than the one shown in the README image after 5 epochs"?
The picture in the README presents only the pipeline output (middle picture) for a model trained in warm-up mode. In this mode the pipeline is trained only with the reconstruction loss, so it has poor ability to fill masked pixels with more advanced content. If you want to obtain more robust results, you have to continue training in standard mode, where more advanced loss functions are used.
Regards!
from inpainting-gmcnn-keras.
hi,
have you tried running the pipeline from the master branch?
BR
Hi @tlatkowski,
I ran the pipeline from master, but the result is the same. I had to update the tensorflow version to 1.15 in requirements/requirements-cpu.txt.
After running runner.py, the outputs folder has this content:
step_000.png
Is that right? I expected this folder to have more than one step (step_000).
hi @garispe,
it looks correct. I mean, the pipeline saves this kind of picture every 1000 steps (as defined in config/main_config.ini, parameter SAVE_MODEL_STEPS_PERIOD); you can change it if you want, or wait until more pictures are generated. One other remark: you'd better start training with generator warm-up. The entire model is quite complex and contains several loss terms, and without warming up the generator it will be hard to obtain proper results.
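For reference, the relevant entry in config/main_config.ini would look roughly like this; the section name and comment below are assumptions for illustration, only the parameter name and value come from the thread:

```ini
[TRAINING]
; Save a model checkpoint and an output preview every N training steps
SAVE_MODEL_STEPS_PERIOD = 1000
```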
Hi @tlatkowski,
Indeed, after changing that variable to 1, I can see an image for each step.
What number do you recommend using? And to obtain a slightly better result than the one shown in the README image after 5 epochs, how many images would be necessary to train the model?
I also take the opportunity to ask: is it possible to use images that are not 256x256?
Thanks in advance,
Regards!
Hi @tlatkowski,
Thanks for your answer! First of all, I will close this issue since I was wrong.
I will try what you mentioned and will let you know if I find another problem.
Thanks again!
Excellent work!
Regards!