This repository contains the code for the real data experiments presented in our paper “An embarrassingly simple approach to zero-shot learning”, presented at ICML 2015.
You used the dot product between the ground-truth semantic description and the predicted semantic description as a metric to measure the closeness between the two vectors.
Why did you prefer the dot product over Euclidean distance? What is the intuition behind this choice? In other words, why does the Euclidean measure not work here?
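To make the question concrete, here is a minimal sketch (my own illustration, not the repository's code) of the two scoring rules being compared: ranking unseen classes by the dot product of their attribute vectors with the predicted semantic description, versus picking the nearest class under Euclidean distance. The class count, dimensionality, and random data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 unseen classes described in a 5-dim attribute space.
S_unseen = rng.random((3, 5))   # one ground-truth semantic description per row
s_pred = rng.random(5)          # predicted semantic description for one test sample

# Dot-product scoring: choose the class whose description best aligns
# with the prediction.
dot_scores = S_unseen @ s_pred
pred_dot = int(np.argmax(dot_scores))

# Euclidean alternative: choose the class whose description is nearest.
eucl = np.linalg.norm(S_unseen - s_pred, axis=1)
pred_eucl = int(np.argmin(eucl))
```

Note that the squared Euclidean distance expands as ||s_c - s||^2 = ||s_c||^2 - 2 s_c . s + ||s||^2, so the two rules agree only when all class descriptions have equal norm; otherwise the Euclidean rule additionally penalizes classes with large-norm descriptions.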
Why do you compare the predicted semantic description with the ground-truth semantic descriptions of only the new (unseen) classes, and not of all classes?
What is the input matrix stored in all.kernel? Why is this matrix symmetric and square? What features did you use?
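For context on the symmetry question: if all.kernel stores a kernel (Gram) matrix over the training instances, as its name suggests, then it is square and symmetric by construction, since entry (i, j) is the inner product (or kernel value) between instances i and j. A small sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((4, 6))  # 4 instances with 6 raw features (hypothetical)

# Linear kernel (Gram) matrix: K[i, j] = <x_i, x_j>.
# One row and one column per instance, so K is 4x4 (square), and
# K[i, j] == K[j, i] because <x_i, x_j> == <x_j, x_i> (symmetric).
K = X @ X.T
```

The same holds for any valid kernel (e.g. RBF): the matrix is n x n for n instances and symmetric, regardless of the raw feature dimensionality.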