Comments (3)
In general, Haste and PyTorch weights aren't compatible because PyTorch doesn't support all of the features that Haste does (e.g. Zoneout, DropConnect, LayerNormLSTM). In the specific case where those features aren't used, the weights are sort of compatible – they're convertible from one format to the other.
Until we have full CPU support for Haste, we'll add methods to export weights so that PyTorch's native classes can use them if you're not using any of the Haste-specific features. I'll take this as a "please add CPU support to Haste" feature request.
Added a couple of methods to haste.LSTM and haste.GRU so you can import/export weights to and from PyTorch's native classes.
Example of converting Haste LSTM parameters to native PyTorch LSTM parameters:
haste_lstm = haste.LSTM(...)
native_lstm = nn.LSTM(...)
native_lstm.weight_ih_l0, \
native_lstm.weight_hh_l0, \
native_lstm.bias_ih_l0, \
native_lstm.bias_hh_l0 = haste_lstm.to_native_weights()
native_lstm.flatten_parameters()
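The assignment-then-flatten pattern above can be exercised without Haste installed: copying the same four per-layer tensors between two native nn.LSTM modules (standing in for the converted Haste weights) should make their outputs identical. A minimal sketch; the sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
src = nn.LSTM(input_size=8, hidden_size=16)  # stands in for converted Haste weights
dst = nn.LSTM(input_size=8, hidden_size=16)

# Copy the four per-layer parameter tensors, mirroring the assignment above.
with torch.no_grad():
    dst.weight_ih_l0.copy_(src.weight_ih_l0)
    dst.weight_hh_l0.copy_(src.weight_hh_l0)
    dst.bias_ih_l0.copy_(src.bias_ih_l0)
    dst.bias_hh_l0.copy_(src.bias_hh_l0)
dst.flatten_parameters()  # re-packs weights into contiguous storage for cuDNN

x = torch.randn(5, 3, 8)  # (seq_len, batch, input_size)
y_src, _ = src(x)
y_dst, _ = dst(x)
assert torch.allclose(y_src, y_dst)
```

Calling flatten_parameters() after assigning weights is the important step: it re-packs the parameters into the contiguous layout the cuDNN kernels expect.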
Example of converting native PyTorch parameters to Haste LSTM parameters:
native_lstm = nn.LSTM(...)
haste_lstm = haste.LSTM(...)
haste_lstm.from_native_weights(
    native_lstm.weight_ih_l0,
    native_lstm.weight_hh_l0,
    native_lstm.bias_ih_l0,
    native_lstm.bias_hh_l0)
Similar code applies to the GRU layer.
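For the GRU case, the parameter names follow the same weight_ih_l0 / weight_hh_l0 / bias_ih_l0 / bias_hh_l0 pattern (with 3 gates instead of 4). A hedged sketch using two native nn.GRU modules, again with arbitrary sizes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
src = nn.GRU(input_size=8, hidden_size=16)  # stands in for converted Haste weights
dst = nn.GRU(input_size=8, hidden_size=16)

# GRU uses the same per-layer parameter names as LSTM; copy all four.
with torch.no_grad():
    for name in ("weight_ih_l0", "weight_hh_l0", "bias_ih_l0", "bias_hh_l0"):
        getattr(dst, name).copy_(getattr(src, name))
dst.flatten_parameters()

x = torch.randn(5, 3, 8)  # (seq_len, batch, input_size)
y_src, _ = src(x)
y_dst, _ = dst(x)
assert torch.allclose(y_src, y_dst)
```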
Closing this out since we now have support for converting to/from PyTorch native weights. Haste layers also now run on CPU.