In `notes/MultiLayerPerceptron.ipynb`, there is a dimensional conflict in this function:
```python
def mlp_fun(x, Weight, Bias, Func):
    f = Variable(x, requires_grad=False)
    NumOfLayers = len(Weight)
    for i in range(NumOfLayers):
        f = Func[i](torch.matmul(Weight[i], f) + Bias[i])
    return f
```
I printed every step for a network of size (1, 2, 1); the results are below:
![image](https://user-images.githubusercontent.com/8215944/48833671-f23ba680-ed8c-11e8-88f4-79630c7ad0c5.png)
While the result of `torch.matmul(Weight[0], x)` is a 1x2 matrix, `Bias[0]` is a 2x1 vector, so their sum broadcasts to a 2x2 matrix rather than raising an error. This leads to a dimensional conflict in the subsequent values of `f`.
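The mismatch can be reproduced without PyTorch: NumPy follows the same broadcasting rules, so adding a 1x2 array to a 2x1 array silently expands both operands to 2x2. The shapes below are taken from the printout above; a minimal sketch, assuming the intended hidden activation is a 2x1 column vector:

```python
import numpy as np

# Shapes observed in the printout: matmul output is 1x2, bias is 2x1.
Wx = np.ones((1, 2))   # stands in for torch.matmul(Weight[0], f)
b = np.ones((2, 1))    # stands in for Bias[0]

out = Wx + b           # broadcasting silently expands both operands
print(out.shape)       # (2, 2) instead of the intended (2, 1)
```

Under this assumption, the conflict would go away if `Weight[0]` were shaped 2x1 so that the matmul output matched the 2x1 bias, or if weights and biases were kept consistently 1-D per layer.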