Hi,
I am playing around with the vae_cf implementation and I tried it on my own dataset instead of ML-20M. It has the same structure (userid, itemid, rating) and I made sure the data types are the same.
I left the pre-processing part as it is. I am using Google Colab to run the code. When I use the ML-20M dataset everything works just fine. However, when I try to train the model on my dataset, I get NaN values in the ndcg_dist list.
Here I copy/paste the error I get:
InvalidArgumentErrorTraceback (most recent call last)
in ()
82 ndcgs_vad.append(ndcg_)
83 print("printing ndcg_var: ", ndcg_var)
---> 84 merged_valid_val = sess.run(merged_valid, feed_dict={ndcg_var: ndcg_, ndcg_dist_var: ndcg_dist})
85 print("printing feed_dict:", feed_dict)
86 summary_writer.add_summary(merged_valid_val, epoch)
3 frames
/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/client/session.pyc in _do_call(self, fn, *args)
1382 '\nsession_config.graph_options.rewrite_options.'
1383 'disable_meta_optimizer = True')
-> 1384 raise type(e)(node_def, op, message)
1385
1386 def _extend_graph(self):
InvalidArgumentError: Nan in summary histogram for: ndcg_at_k_hist_validation
[[node ndcg_at_k_hist_validation (defined at /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
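For reference, here is a small sketch of what I suspect is going on, assuming the metric follows the usual binary NDCG@k pattern (the function name, shapes, and data below are my own illustration, not the repo's code): if a validation user ends up with zero held-out items, the ideal DCG is 0 and the per-user NDCG becomes 0/0 = NaN, which then trips the histogram summary.

```python
import numpy as np

def ndcg_binary_at_k(scores, heldout, k=10):
    """Binary NDCG@k per user (illustrative sketch, not the repo's exact code).

    scores:  (n_users, n_items) predicted scores
    heldout: (n_users, n_items) binary matrix of held-out positives
    """
    # indices of the top-k scored items per user
    topk = np.argpartition(-scores, k, axis=1)[:, :k]
    # re-sort the top-k by score so ranks are correct
    order = np.argsort(-np.take_along_axis(scores, topk, axis=1), axis=1)
    topk = np.take_along_axis(topk, order, axis=1)

    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (np.take_along_axis(heldout, topk, axis=1) * discounts).sum(axis=1)

    # ideal DCG: all held-out items (up to k) ranked first
    n_pos = np.minimum(heldout.sum(axis=1).astype(int), k)
    idcg = np.array([discounts[:n].sum() for n in n_pos])
    return dcg / idcg  # 0/0 -> NaN for users with no held-out items

rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 200))
heldout = np.zeros((4, 200))
heldout[0, [1, 5]] = 1  # only user 0 has held-out items

ndcg = ndcg_binary_at_k(scores, heldout, k=10)
print(np.isnan(ndcg))  # NaN exactly for the users with an empty held-out set
```

So my current guess is that my train/validation split leaves some users with empty held-out sets, and filtering those users out during pre-processing (or before feeding ndcg_dist to the summary) would avoid the error. Does that sound plausible?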
Does anyone have any clue what the cause of this issue is?
I would really appreciate any kind of advice and suggestions you may give me.
Thank you very much.
Have a great day.