Comments (6)
The native implementation uses only standard PyTorch layers/operations, while the other one uses the Function interface in places. The latter can have a memory advantage for larger input sizes.
from pacnet.
There shouldn't be any noticeable difference in speed. The non-native implementation has an advantage in peak memory usage, but I have no statistics on the size of the gap. The native version was there mostly for debugging purposes; the non-native version should be the preferred option now.
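A toy sketch (not pacnet's actual code; `native_gate` and `Gate` are made-up names) of why the Function interface can lower peak memory: composing standard ops makes autograd keep intermediates alive for backward, while a custom `torch.autograd.Function` can save only the raw inputs and recompute the intermediate during backward.

```python
import torch

# Native composition: autograd keeps the intermediate s = sigmoid(x)
# alive for backward (sigmoid's backward uses its output, and mul's
# backward uses s as one of its inputs).
def native_gate(x, w):
    return torch.sigmoid(x) * w

# Function interface: save only the raw inputs and recompute the
# sigmoid in backward, trading a little compute for lower peak memory.
class Gate(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)
        return torch.sigmoid(x) * w   # intermediate freed after forward

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        s = torch.sigmoid(x)          # recomputed here, never stored
        return grad_out * w * s * (1 - s), grad_out * s

x = torch.randn(3, 5, dtype=torch.double, requires_grad=True)
w = torch.randn(3, 5, dtype=torch.double, requires_grad=True)
print(torch.allclose(Gate.apply(x, w), native_gate(x, w)))  # True
print(torch.autograd.gradcheck(Gate.apply, (x, w)))         # True
```

`gradcheck` confirms the hand-written backward matches numerical gradients; the same recompute-in-backward idea is what checkpointing generalizes.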
Hey Hang, after a second thought, the original version is actually correct to me. kernel will be used when you compute grad_input, and input will be used when you compute grad_kernel. See the highlights below. In your updated version I suppose you will see errors in https://github.com/NVlabs/pacnet/blob/master/pac.py#L352 because input is None when ctx.needs_input_grad[0] == False and ctx.needs_input_grad[1] == True. Am I right?
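A hypothetical mini-example (not pacnet's PacConv backward; `KernelMul` is an invented name) of the cross-dependency described above: the backward for input needs kernel, and the backward for kernel needs input, so both tensors must be saved regardless of which gradient is requested.

```python
import torch

class KernelMul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, kernel):
        # Save both unconditionally: each one is needed for the
        # *other* argument's gradient, not its own.
        ctx.save_for_backward(input, kernel)
        return input * kernel

    @staticmethod
    def backward(ctx, grad_output):
        input, kernel = ctx.saved_tensors
        grad_input = grad_kernel = None
        if ctx.needs_input_grad[0]:
            grad_input = grad_output * kernel   # uses kernel
        if ctx.needs_input_grad[1]:
            grad_kernel = grad_output * input   # uses input
        return grad_input, grad_kernel

# kernel does not require grad, yet its values are still needed
# to compute the gradient for input.
inp = torch.randn(2, 3, requires_grad=True)
ker = torch.randn(2, 3)                         # requires_grad=False
KernelMul.apply(inp, ker).sum().backward()
print(torch.allclose(inp.grad, ker))            # True
```

Saving input only when ctx.needs_input_grad[0] is True would make it None exactly in the case where only grad_kernel is requested, which is the failure mode discussed in the comment.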
Hey Hang, thanks for the comment! Additionally, I am wondering if you have any statistics on how big the performance gap is between the native and functional implementations, especially for large input sizes, or any differences in backprop speed, because I did some basic benchmarking and did not find a significant difference there. Thanks!
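A basic benchmark along these lines might look as follows; `layer_a` and `layer_b` are placeholders standing in for the native and Function-based modules, which the source thread does not show.

```python
import time
import torch

def benchmark(fn, *args, warmup=3, iters=10):
    """Mean seconds per forward+backward pass."""
    for _ in range(warmup):
        fn(*args).sum().backward()
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args).sum().backward()
    return (time.perf_counter() - start) / iters

# Placeholder layers; swap in the two real implementations to
# reproduce the comparison from the comment.
layer_a = torch.nn.Conv2d(8, 8, 3, padding=1)
layer_b = torch.nn.Conv2d(8, 8, 3, padding=1)
x = torch.randn(4, 8, 32, 32, requires_grad=True)

t_a = benchmark(layer_a, x)
t_b = benchmark(layer_b, x)
print(f"a: {t_a * 1e3:.2f} ms  b: {t_b * 1e3:.2f} ms")
```

For the peak-memory side of the comparison on GPU, `torch.cuda.reset_peak_memory_stats()` before a pass and `torch.cuda.max_memory_allocated()` after it can be recorded the same way.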
Ah, I see. Thanks! The native version would also be useful for prototyping because it is easier to build upon.
@Jerrypiglet That's right. I wasn't careful enough on this. The change is now reverted.
Related Issues (20)
- PacConv3d
- Batch size
- Reason for No ReLU
- torch 1.4.0 cannot find type2backend
- how to generate predictions using fcn8spac?
- some basic questions
- Debugging Error
- Questions on using the CRF layer
- JBU training error
- Question about Transpose
- Pytorch 1.6 autocast
- about the crf model
- PacConv3d and AMP
- Reason for scaling mask input
- PAC_CRF step setting question
- autocast support
- Error with pacconv: trying to differentiate twice a function that was marked with @once_differentiable
- A question about updating the weight of the kernel
- About how to train a fcn8spac from scratch, thanks