Comments (3)
Hello and thank you for your interest.
It really depends on what you mean by debugging.
- When it's a compilation issue, you may find building with `setup.py` better, as the logs are a bit more concise.
- If you want to check your implementation, you probably have to come up with a pure Python/PyTorch implementation and compare the outputs of the two. That's what we typically do: send random tensors of different shapes through the CUDA version and the "torch version" with the same weights, compute the outputs, check that they are `allclose`, and then do the same for the backward pass.
- If you're sure your forward pass is correct, `gradcheck` is probably a better way to check whether your backward-pass kernel is correct.
- It's also not a bad idea to have assertions in place while debugging, even in the kernel, but I'd recommend leaving as few assertions in the device code (the kernel) as possible and doing most assertions before the kernel call.
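To make the parity check described above concrete, here is a minimal sketch. The names are placeholders: `kernel_softmax` is just `torch.softmax` standing in for a compiled CUDA op, and `reference_softmax` is the hand-written "torch version"; in a real test you would substitute your extension's autograd function for `kernel_softmax`.

```python
import torch
from torch.autograd import gradcheck

def reference_softmax(x, dim=-1):
    # Pure-PyTorch reference implementation (the "torch version").
    e = torch.exp(x - x.max(dim=dim, keepdim=True).values)
    return e / e.sum(dim=dim, keepdim=True)

def kernel_softmax(x, dim=-1):
    # Stand-in for the compiled CUDA op under test; replace with
    # your extension's autograd.Function in a real check.
    return torch.softmax(x, dim=dim)

# Forward/backward parity check over several random shapes.
for shape in [(2, 8), (4, 16, 16), (1, 3, 5, 7)]:
    x = torch.randn(*shape, dtype=torch.float64, requires_grad=True)
    y_ref = reference_softmax(x)
    y_ker = kernel_softmax(x)
    assert torch.allclose(y_ref, y_ker, atol=1e-8)

    # Push the same upstream gradient through both graphs.
    g = torch.randn_like(y_ref)
    (grad_ref,) = torch.autograd.grad(y_ref, x, g, retain_graph=True)
    (grad_ker,) = torch.autograd.grad(y_ker, x, g)
    assert torch.allclose(grad_ref, grad_ker, atol=1e-8)

# Once the forward pass is trusted, gradcheck verifies the backward
# pass numerically (double-precision inputs are required).
x = torch.randn(3, 5, dtype=torch.float64, requires_grad=True)
assert gradcheck(kernel_softmax, (x,))
```

Running everything in `float64` keeps the numerical comparison tight; `gradcheck` in particular is unreliable in single precision.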
If you're getting into optimization, you may find PyTorch's profiler useful for measuring latency, and you'd probably need NVIDIA Nsight to profile in more detail.
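As a sketch of the latency-measurement side, PyTorch's built-in profiler can break down where time goes per operator. The matmul below is a placeholder for the op being optimized, and the sketch is CPU-only so it runs anywhere; when profiling actual kernels you would add `ProfilerActivity.CUDA` to the activities list.

```python
import torch
from torch.profiler import profile, ProfilerActivity

x = torch.randn(256, 256)

# CPU-only so the sketch runs without a GPU; add
# ProfilerActivity.CUDA to `activities` to time GPU kernels.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(10):
        y = x @ x  # placeholder for the kernel call under study

# Aggregate per-op timings, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

For kernel-level detail (occupancy, memory throughput, warp stalls) this is where you would hand off to Nsight Compute or Nsight Systems.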
I hope you find these useful, but if you need more details, please let us know.
from neighborhood-attention-transformer.
Thank you very much for your suggestions! I will give it a try.
Closing this due to inactivity. If you still have questions feel free to open it back up.