Comments (6)
@raghavgurbaxani The 3D block expects a 3D tensor with shape (batch_size, time_steps, input_dim). You can always set input_dim=1 by using a Reshape layer, or a Lambda with K.expand_dims(..., axis=-1). That way you turn your 2D input into a 3D one whose last dimension is 1, as in the sketch below.
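For instance, a minimal sketch of both options, with an illustrative 2D input of shape (batch_size, 100):

from tensorflow.keras.layers import Input, Reshape, Lambda
import tensorflow.keras.backend as K

inputs = Input(shape=(100,))                              # 2D: (batch_size, 100)

# Option 1: Reshape to (time_steps=100, input_dim=1).
x1 = Reshape((100, 1))(inputs)                            # 3D: (batch_size, 100, 1)

# Option 2: Lambda wrapping K.expand_dims; same result.
x2 = Lambda(lambda z: K.expand_dims(z, axis=-1))(inputs)  # 3D: (batch_size, 100, 1)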
Thank you so much for your help! I tried tf.expand_dims(..., axis=-1) and I am able to compile my code successfully; however, it doesn't train well. Here is my model summary:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
features (InputLayer)           [(None, 16, 1816)]   0
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 2048)         31662080    features[0][0]
__________________________________________________________________________________________________
tf_op_layer_ExpandDims (TensorF [(None, 2048, 1)]    0           lstm[0][0]
__________________________________________________________________________________________________
attention_score_vec (Dense)     (None, 2048, 1)      1           tf_op_layer_ExpandDims[0][0]
__________________________________________________________________________________________________
last_hidden_state (Lambda)      (None, 1)            0           tf_op_layer_ExpandDims[0][0]
__________________________________________________________________________________________________
attention_score (Dot)           (None, 2048)         0           attention_score_vec[0][0]
                                                                 last_hidden_state[0][0]
__________________________________________________________________________________________________
attention_weight (Activation)   (None, 2048)         0           attention_score[0][0]
__________________________________________________________________________________________________
context_vector (Dot)            (None, 1)            0           tf_op_layer_ExpandDims[0][0]
                                                                 attention_weight[0][0]
__________________________________________________________________________________________________
attention_output (Concatenate)  (None, 2)            0           context_vector[0][0]
                                                                 last_hidden_state[0][0]
__________________________________________________________________________________________________
attention_vector (Dense)        (None, 128)          256         attention_output[0][0]
__________________________________________________________________________________________________
dense (Dense)                   (None, 1024)         132096      attention_vector[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU)         (None, 1024)         0           dense[0][0]
__________________________________________________________________________________________________
dropout (Dropout)               (None, 1024)         0           leaky_re_lu[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 120)          123000      dropout[0][0]
__________________________________________________________________________________________________
feature_weights (InputLayer)    [(None, 120)]        0
__________________________________________________________________________________________________
multiply (Multiply)             (None, 120)          0           dense_1[0][0]
                                                                 feature_weights[0][0]
==================================================================================================
Total params: 31,917,433
Trainable params: 31,917,433
Non-trainable params: 0
__________________________________________________________________________________________________
While training, I get the following error:
File "/mnt/ext/raghav/conda/envs/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 113, in save_model model, filepath, overwrite, include_optimizer) File "/mnt/ext/raghav/conda/envs/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 101, in save_model_to_hdf5 default=serialization.get_json_type).encode('utf8') File "/mnt/ext/raghav/conda/envs/lib/python3.7/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/mnt/ext/raghav/conda/envs/lib/python3.7/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/mnt/ext/raghav/conda/envs/lib/python3.7/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/mnt/ext/raghav/conda/envs/lib/python3.7/site-packages/tensorflow/python/util/serialization.py", line 69, in get_json_type raise TypeError('Not JSON Serializable:', obj) TypeError: ('Not JSON Serializable:', b'\n\nExpandDims\x12\nExpandDims\x1a\x14lstm/strided_slice_7\x1a\x0eExpandDims/dim*\x07\n\x01T\x12\x020\x01*\n\n\x04Tdim\x12\x020\x03')
Any idea why this occurs?
Yes, you have to use Lambda(lambda z: K.expand_dims(z, axis=-1)). K.expand_dims(z, axis=-1) by itself is not a layer, and that's why Keras is complaining when it tries to serialize the model. Use it inside a Sequential model, or treat this Lambda like any other layer.
With the imports:
from tensorflow.keras.layers import Lambda
import tensorflow.keras.backend as K
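For example, a minimal sketch of the fix in a functional model (the layer sizes are illustrative, smaller than your actual architecture):

from tensorflow.keras.layers import Input, LSTM, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K

features = Input(shape=(16, 1816), name='features')
h = LSTM(64)(features)                              # (batch_size, 64)

# The fix: wrap expand_dims in a Lambda so Keras serializes a proper
# layer instead of a raw TF op when the model is saved.
h = Lambda(lambda z: K.expand_dims(z, axis=-1))(h)  # (batch_size, 64, 1)

outputs = Dense(1)(Flatten()(h))
model = Model(inputs=features, outputs=outputs)
model.save('model.h5')                              # no 'Not JSON Serializable' error

The same Lambda also works as an ordinary layer inside a Sequential model.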
I'll close this issue for now. Let me know if you need more help.
@philipperemy thank you so much :) It worked
@raghavgurbaxani GREAT!