tebesu / CollaborativeMemoryNetwork
Collaborative Memory Network for Recommendation Systems, SIGIR 2018
CollaborativeMemoryNetwork/util/attention.py
Line 177 in 2c55efd
Is hop_mapping reinitialized for every memory hop? Why is it not initialized in the __init__
function and then trained progressively?
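For readers skimming the thread, here is a minimal NumPy sketch of the pattern the question is about — a separate mapping matrix created for each hop, rather than one matrix initialized once and reused across hops. All names and shapes below are illustrative, not taken from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)
embed_size, hops = 4, 2

# One mapping matrix per hop: each hop gets its own (trainable) W_h,
# as opposed to a single shared mapping created in __init__.
hop_mappings = [rng.normal(size=(embed_size, embed_size)) for _ in range(hops)]

query = rng.normal(size=embed_size)
for W in hop_mappings:
    # Each hop transforms the query with its own weights (ReLU nonlinearity).
    query = np.maximum(0.0, W @ query)

print(query.shape)  # (4,)
```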
Following the instructions in the README, I set the tf version to 1.4.0 and dm-sonnet to 1.36, then ran the code and got this error:
Traceback (most recent call last):
File "train.py", line 13, in <module>
from util.cmn import CollaborativeMemoryNetwork
File "D:\Study\reproduction\CollaborativeMemoryNetwork-tf\util\cmn.py", line 1, in <module>
import sonnet as snt
File "E:\Anaconda\envs\tf1.4\lib\site-packages\sonnet\__init__.py", line 63, in <module>
_ensure_dependency_available_at_version('tensorflow', '1.8.0')
File "E:\Anaconda\envs\tf1.4\lib\site-packages\sonnet\__init__.py", line 61, in _ensure_dependency_available_at_version
(package_name, pkg.__version__, min_version))
SystemError: tensorflow version 1.4.0 is installed, but Sonnet requires at least version 1.8.0.
Since Sonnet 1.8.0 does not exist, and snt.AbstractModule is not compatible if I use Sonnet 2.0.0, could you give me some advice?
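A possible workaround, not verified by the authors: the next report in this thread gets past import with TensorFlow 1.8.0 and Sonnet 1.23. That combination satisfies Sonnet's minimum-version check while still providing snt.AbstractModule (which was removed in Sonnet 2.x):

```shell
# Untested suggestion: TF 1.8.0 passes Sonnet's minimum-version check,
# and dm-sonnet 1.x still ships snt.AbstractModule.
pip install tensorflow==1.8.0 dm-sonnet==1.23
```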
Hi, I'm trying to re-implement this project in PyTorch. I've written the model following the TensorFlow implementation given here; however, my loss does not converge.
I believe I might be getting the trainable parameters wrong. Could you please tell me which parameters are trainable in your model?
If it's not too much trouble could you also have a look at my code?
Link
Dear authors,
I was trying to run your algorithm. While pretrain.py runs to completion, train.py, with your own configuration, crashes very soon with an InvalidArgumentError.
I am using tensorflow 1.8.0, sonnet 1.23 and python 3.6.
The console output is the following:
[INFO:tensorflow]:116:
{
"batch_size": 128,
"decay_rate": 0.9,
"embed_size": 50,
"filename": "data/citeulike-a.npz",
"grad_clip": 5.0,
"hops": 2,
"item_count": "16980",
"l2": 0.1,
"learning_rate": 0.001,
"logdir": "result/004/",
"max_neighbors": 311,
"neg_count": 4,
"optimizer": "rmsprop",
"optimizer_params": "{'momentum': 0.9, 'decay': 0.9}",
"pretrain": "pretrain/citeulike-a_e50.npz",
"save_directory": "result/004/",
"tol": 1e-05,
"user_count": "5551"
}
[WARNING:tensorflow]:126: From /home/ubuntu/Scaricati/CollaborativeMemoryNetwork-master/util/layers.py:250: get_or_create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.get_or_create_global_step
[INFO:tensorflow]:116: Creating Hop Mapping 2 with <function relu at 0x7f6033936400>
[INFO:tensorflow]:116: Creating Hop Mapping 2 with <function relu at 0x7f6033936400>
[WARNING:tensorflow]:126: From train.py:83: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.MonitoredTrainingSession
2018-09-18 10:31:07.033201: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[INFO:tensorflow]:116: Running local_init_op.
[INFO:tensorflow]:116: Done running local_init_op.
[INFO:tensorflow]:116: Starting standard services.
[INFO:tensorflow]:116: Saving checkpoint to path result/004/model.ckpt
[INFO:tensorflow]:116: Starting queue runners.
[INFO:tensorflow]:116: Loading Pretrained Embeddings.... from pretrain/citeulike-a_e50.npz
0%| | 0/6232 [00:00<?, ?it/s]
[0] Loss: 10.9797: 0%| | 1/6232 [00:00<54:19, 1.91it/s]
[0] Loss: 10.9458: 0%| | 3/6232 [00:00<28:10, 3.68it/s]
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[126,0] = 12178 is not in [0, 5551)
[[Node: MemoryOutput_2/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@Optimizer/ApplyGradients/update_MemoryOutput/embeddings/ApplyRMSProp"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](MemoryOutput/embeddings/read, _arg_NeighborhoodNeg_0_5, Optimizer/gradients/MemoryOutput_1/embedding_lookup_grad/concat/axis)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 116, in <module>
batch_loss, _ = sess.run([model.loss, model.train], feed)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[126,0] = 12178 is not in [0, 5551)
[[Node: MemoryOutput_2/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@Optimizer/ApplyGradients/update_MemoryOutput/embeddings/ApplyRMSProp"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](MemoryOutput/embeddings/read, _arg_NeighborhoodNeg_0_5, Optimizer/gradients/MemoryOutput_1/embedding_lookup_grad/concat/axis)]]
Caused by op 'MemoryOutput_2/embedding_lookup', defined at:
File "train.py", line 80, in <module>
model = CollaborativeMemoryNetwork(config)
File "/home/ubuntu/Scaricati/CollaborativeMemoryNetwork-master/util/cmn.py", line 42, in __init__
self._construct()
File "/home/ubuntu/Scaricati/CollaborativeMemoryNetwork-master/util/cmn.py", line 69, in _construct
self.user_output(self.input_neighborhoods_negative),
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/sonnet/python/modules/base.py", line 389, in __call__
outputs, subgraph_name_scope = self._template(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/template.py", line 455, in __call__
result = self._call_func(args, kwargs)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/template.py", line 400, in _call_func
result = self._func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/sonnet/python/modules/base.py", line 246, in _build_wrapper
output = self._build(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/sonnet/python/modules/embed.py", line 166, in _build
self._embeddings, ids, name="embedding_lookup")
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 308, in embedding_lookup
transform_fn=None)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 131, in _embedding_lookup_and_transform
result = _clip(array_ops.gather(params[0], ids, name=name),
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2736, in gather
return gen_array_ops.gather_v2(params, indices, axis, name=name)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3065, in gather_v2
"GatherV2", params=params, indices=indices, axis=axis, name=name)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/home/ubuntu/anaconda3/envs/DL_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): indices[126,0] = 12178 is not in [0, 5551)
[[Node: MemoryOutput_2/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@Optimizer/ApplyGradients/update_MemoryOutput/embeddings/ApplyRMSProp"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](MemoryOutput/embeddings/read, _arg_NeighborhoodNeg_0_5, Optimizer/gradients/MemoryOutput_1/embedding_lookup_grad/concat/axis)]]
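The error itself is generic: an embedding lookup is being fed an id at or beyond the table size (here, id 12178 against a table of size 5551, the user count — the negative-neighborhood input appears to carry ids outside the user range). A minimal NumPy reproduction of the failing bounds check (the node names above are TF-specific; this is only a generic sketch):

```python
import numpy as np

user_count, embed_size = 5551, 50
user_memory = np.zeros((user_count, embed_size))  # stand-in embedding table

neighbor_ids = np.array([12178])  # the offending id from the error message

# tf.gather raises InvalidArgumentError for an out-of-range id on CPU;
# NumPy fancy indexing raises IndexError for the same reason.
try:
    _ = user_memory[neighbor_ids]
except IndexError:
    print("index out of range for table of size", user_count)
```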
Hello authors,
I am a bit confused: have you used this method for explicit rating prediction?
Hi, could you please explain why masking has been carried out over the scores?
CollaborativeMemoryNetwork/util/attention.py
Line 114 in 2c55efd
Also, what exactly is the significance of the minimum float value here? Specifically, why not use just 0s and 1s to mask?
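For context on the 0/1 part of the question: when a mask is applied additively to scores before a softmax, padded positions need a very large negative value (such as the minimum float), which becomes approximately zero after exponentiation. A multiplicative 0/1 mask would leave exp(0) = 1 at padded positions, so padding would still receive attention weight. A small NumPy illustration (not the repository's code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5])  # last slot is padding
mask = np.array([1.0, 1.0, 0.0])

# Multiplicative 0/1 mask: exp(0) = 1, so the padded slot still gets weight.
mult = softmax(scores * mask)

# Additive mask with a huge negative value: padded slot -> exp(-huge) -> 0.
add = softmax(scores + (1.0 - mask) * np.finfo(np.float32).min)

print(mult[-1] > 0)              # True: padding leaks attention weight
print(np.isclose(add[-1], 0.0))  # True: padding correctly zeroed out
```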
Hello, @tebesu
I have read your paper "Collaborative Memory Network for Recommendation Systems" (SIGIR 2018) and I am pretty interested in your work. But I am not sure about an implementation detail of the paper's Equation 5, where the vector q^{h}_{u,i} is transformed by a matrix W^{h} before being added to the other vectors (o^{h}_{u,i} and b^{h}). My problem is that the size of q^{h}_{u,i} differs for different i. How, then, is it possible to determine the size of W^{h}?
As the code is not available yet, I am opening this issue here. Thank you very much in advance for looking into this. Please let me know if I misunderstood the paper.
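One possible reading, not confirmed by the authors: the variable-length attention score vector is collapsed by the softmax-weighted sum over neighbor memories into a fixed, embedding-sized output, so the hop mapping W^{h} only ever multiplies fixed-size vectors regardless of how many neighbors item i has. A NumPy sketch under that assumption (shapes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # embedding size

def hop(z_prev, neighbor_memories, W, b):
    # Attention scores: one scalar per neighbor, so this vector's length
    # varies with the neighborhood size m.
    q = neighbor_memories @ z_prev            # shape (m,)
    p = np.exp(q - q.max()); p /= p.sum()     # softmax over neighbors
    o = p @ neighbor_memories                 # weighted sum -> fixed shape (d,)
    # W only multiplies fixed-size vectors, never the variable-length scores.
    return np.maximum(0.0, W @ z_prev + o + b)

W, b = rng.normal(size=(d, d)), rng.normal(size=d)
z = rng.normal(size=d)
for m in (3, 7):                              # two different neighborhood sizes
    z_out = hop(z, rng.normal(size=(m, d)), W, b)
    print(z_out.shape)  # (8,) both times
```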
Sorry for asking questions here, but Gmail says the email address given in the paper doesn't exist.
I have some questions in section 3.2 Neighborhood Attention
CollaborativeMemoryNetwork/util/cmn.py
Line 157 in 2c55efd
CollaborativeMemoryNetwork/util/cmn.py
Line 158 in 2c55efd
Why do we need to share these weights? I'm unable to see how these variables are being used.
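As general background on what weight sharing buys in this kind of model: declaring the parameters once and reusing the same tensors in two branches (e.g. positive and negative neighborhoods) means both branches score against one set of weights and both contribute gradients to it. A framework-agnostic NumPy sketch of the idea (names hypothetical, not the repository's variables):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
W_shared = rng.normal(size=(d, d))  # one parameter, reused by both branches

def transform(neighborhood):
    # Both calls below go through the same W_shared; in training, gradients
    # from both branches would update this single matrix together.
    return neighborhood @ W_shared

pos = transform(rng.normal(size=(5, d)))  # positive neighborhood, 5 neighbors
neg = transform(rng.normal(size=(3, d)))  # negative neighborhood, 3 neighbors
print(pos.shape, neg.shape)  # (5, 4) (3, 4)
```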