sold's Issues
Some questions about PWC-Net
- As stated in the README, the tfoptflow implementation of PWC-Net was used. However, tfoptflow has some alarming yet unresolved issues (e.g., regarding scaling and the flow equation). What modification(s) were employed in this work?
- Also, the Requirements Setup says: "Please overwrite tfoptflow/model_pwcnet.py and tfoptflow/model_base.py using the ones in this repository." The "ones in this repository", which I believe are meant to replace model_pwcnet.py and model_base.py from the tfoptflow code base, seem to be missing.
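For context on the scaling concern mentioned above: a minimal sketch (my own illustration, not the repo's or tfoptflow's code) of why spatially upsampling a flow field also requires rescaling the flow values themselves, which is the kind of detail the unresolved tfoptflow issues flag:

```python
import numpy as np

def upsample_flow(flow, scale):
    """Nearest-neighbor upsample an (H, W, 2) flow field.

    Flow vectors are measured in pixels, so when the spatial grid grows
    by `scale`, the flow VALUES must be multiplied by `scale` too.
    Forgetting that multiplication is a classic flow-scaling bug.
    """
    up = flow.repeat(scale, axis=0).repeat(scale, axis=1)  # grow the grid
    return up * float(scale)                               # rescale the vectors

# A 2x2 field of unit flows upsampled 4x becomes an 8x8 field of 4-pixel flows.
up = upsample_flow(np.ones((2, 2, 2)), 4)
```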
overwrite files missing!
In your README, you say: "Please overwrite tfoptflow/model_pwcnet.py and tfoptflow/model_base.py using the ones in this repository." However, there are no files in this repository corresponding to these two.
Thanks!
Code for Meta Learning with Reptile missing
Hi Alex,
Thanks for the inspiring work and the implementation.
Did you perhaps miss checking in the code for Section 3.6, Meta Learning?
Thanks
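While that code is missing, the generic Reptile meta-update (Nichol et al.) can be sketched as below. Note this is only the published Reptile rule on a toy quadratic task; the task, hyperparameters, and function names are my own illustration, not the paper's Section 3.6 setup:

```python
import numpy as np

def reptile_update(theta, grad_fn, inner_steps=5, inner_lr=0.1, meta_lr=0.5):
    """One Reptile meta-step: adapt a copy of the weights with SGD on a task,
    then move the initialization toward the adapted weights."""
    phi = theta.copy()
    for _ in range(inner_steps):
        phi = phi - inner_lr * grad_fn(phi)   # task-specific inner SGD
    return theta + meta_lr * (phi - theta)    # interpolate toward adapted weights

# Toy task: minimize ||w - target||^2, whose gradient is 2 * (w - target).
target = np.array([1.0, -2.0])
grad_fn = lambda w: 2.0 * (w - target)

theta = np.zeros(2)
for _ in range(20):
    theta = reptile_update(theta, grad_fn)
# theta drifts toward `target` as meta-steps accumulate
```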
Where are these checkpoints?
I tried the repo but encountered some problems; it seems to be missing checkpoints.
I found that in the Colab demo, the model.ckpt-239999 checkpoints are loaded successfully:
INFO:tensorflow:Restoring parameters from train_dir_initFlow_Fence/model.ckpt-239999
INFO:tensorflow:Restoring parameters from train_dir_imgReconstruction_Fence/model.ckpt-239999
What are these checkpoints? I downloaded ckpt.zip and unzipped it, but only got some checkpoints with a 'ckpt_' prefix.
Thanks.
Comparing to Alayrac et al.
Great work. I'm just wondering how you used the paper 'The Visual Centrifuge: Model-Free Layered Video Representations' in your comparison. Did you retrain it on your data, or did you obtain the authors' weights?
Supplementary Material missing in version 2 of the paper
Hi, I noticed that a version 2 of the arXiv paper is now available. However, this version has no Supplementary Material (which was present in version 1). Does this mean that the dataset generation and network architecture are different from that of v1? If so, will you be uploading a corresponding Supplementary Material for v2? Thanks.
How to use multiple image sequences for online training?
Say I have two five-frame image sequences, seq1_I{0,1,2,3,4}.png and seq2_I{0,1,2,3,4}.png. How can I run online training on these image sequences?
Looking at train_fence_online.py, I'm guessing it has something to do with the --batch_size and --training_scene flags:
Lines 22 to 23 in 519ee0d
Lines 26 to 28 in 519ee0d
However, I cannot tell how to specify the image sequences. Do I pass --batch_size 2 --training_scene seq when calling python train_fence_online.py?
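In case it helps to make the question concrete, here is a hypothetical sketch of launching one online-training run per sequence. The flag names come from train_fence_online.py as quoted above, but the per-run usage, the batch size, and the scene-name convention are all assumptions on my part:

```python
import subprocess  # only needed if you actually launch the runs

# Assumed to match the seqN_I{0..4}.png prefixes from the question.
scenes = ["seq1", "seq2"]

# One invocation per sequence (assumption: the script trains one scene per run,
# and --batch_size counts frames per step rather than sequences).
commands = [
    ["python", "train_fence_online.py",
     "--batch_size", "1",
     "--training_scene", scene]
    for scene in scenes
]

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch
```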