Comments (2)
Hi @smonsays, thank you for the interesting question!
In general I think it would have been better to use `jnp`; my perspective on this is that we should pass the compiler as much information and structure as we reasonably can (e.g. rather than passing it a constant, we should pass it the expression that creates that constant).
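To make the distinction concrete, here is a minimal sketch of the two variants (the sequence length, mask value and function names are illustrative, not taken from the example):

```python
import numpy as np
import jax.numpy as jnp

seq_len = 128  # illustrative value

def mask_with_np(logits):
  # np.tril runs at trace time, so the mask is baked into the
  # compiled program as a literal constant.
  mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
  return jnp.where(mask, logits, -1e30)

def mask_with_jnp(logits):
  # jnp.tril is traced, so XLA sees the expression that creates
  # the mask and is free to optimize it (e.g. fuse it with the
  # masking itself).
  mask = jnp.tril(jnp.ones((seq_len, seq_len), dtype=bool))
  return jnp.where(mask, logits, -1e30)
```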
That said, I ran a couple of benchmarks, and it seems that for this toy example the `np` version is actually more efficient at the small sequence length chosen in the example. So `np` is the optimal choice for the example with XLA on GPU as implemented today.
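A micro-benchmark along these lines might look as follows (the original benchmark isn't shown, so this setup is an assumption; it reuses the `mask_with_np`/`mask_with_jnp` sketches above):

```python
import timeit
import jax

logits = jnp.zeros((seq_len, seq_len))

np_fn = jax.jit(mask_with_np)
jnp_fn = jax.jit(mask_with_jnp)

# Warm up so compilation time isn't included in the measurement.
np_fn(logits).block_until_ready()
jnp_fn(logits).block_until_ready()

print("np :", timeit.timeit(lambda: np_fn(logits).block_until_ready(), number=1000))
print("jnp:", timeit.timeit(lambda: jnp_fn(logits).block_until_ready(), number=1000))
```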
Looking at the optimized programs generated by XLA, it seems that when you use `jnp`, XLA decides to fuse the `tril` operation into a kernel which also applies the mask (as opposed to evaluating it as a constant at compile time and making that constant part of the program). For larger sequence lengths I guess this extra compute is offset by the fact that you avoid loading the constant from HBM.
It's quite possible that for a full transformer model (my benchmark only looks at generating and applying the causal mask) the difference in execution time will be in the noise of the overall step.
Wow, thanks @tomhennigan for the comprehensive answer. It is very insightful to visualise the compiled XLA the way you did. I guess I'll trust the XLA compiler to be the smart one in the future.