A Python implementation of neural networks.
It contains two classes:
1. NeuralNetwork: takes an input and performs forward + backward propagation based on the defined loss function.
2. Optimizers: contains Batch-Gradient-Descent and Adam optimization methods to update network weights.
Credits: This repository is largely based on Ayush's work: repo
Many thanks!
pip install neural_networks
Inputs:
- input layer dimensions
- activation functions for the hidden and output layers. Valid activation functions include:
- "relu"
- "sigmoid"
- "tanh"
- "softmax"
- loss functions:
- "MSE"
- "CrossEntropyLoss"
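For reference, the listed activations and losses correspond to the standard formulas below. This is a self-contained NumPy sketch for illustration, not the package's internal code:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def softmax(z):
    # subtract the column-wise max for numerical stability
    # (columns are samples, matching the (features, samples) layout used below)
    e = np.exp(z - np.max(z, axis=0, keepdims=True))
    return e / np.sum(e, axis=0, keepdims=True)

def mse(y_hat, y):
    return np.mean((y_hat - y) ** 2)

def binary_cross_entropy(y_hat, y, eps=1e-12):
    # clip to avoid log(0)
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```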
This module also contains an Optimizer class, which takes the training data and optimizes the parameter weights based on the chosen optimizer and loss function.
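A single batch-gradient-descent update has the shape sketched below. The dictionary layout, names, and the L2 term (`lamb`) are illustrative assumptions matching the optimizer's `alpha` and `lamb` arguments, not the package's actual internals:

```python
import numpy as np

def gd_step(weights, grads, alpha=0.07, lamb=0.0, m=1):
    """One batch-gradient-descent step over a dict of parameter arrays.

    alpha: learning rate, lamb: L2 regularization strength (assumed meaning
    of the `lamb` argument), m: number of training samples in the batch.
    """
    return {k: w - alpha * (grads[k] + (lamb / m) * w)
            for k, w in weights.items()}
```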
# test run for binary classification problem:
import numpy as np
from sklearn import datasets
# assuming the package exposes both classes at the top level
from neural_networks import NeuralNetwork, Optimizer
np.random.seed(3)
print('Running a binary classification test')
data = datasets.make_classification(n_samples=30000,n_features=10,n_classes=2)
X = data[0].T
Y = (data[1].reshape(30000,1)).T
print("Input shape: ", X.shape)
print("Output shape: ", Y.shape)
#Generate sample binary classification data
net = NeuralNetwork(layer_dimensions=[10,20,1],
activations=['relu','sigmoid'])
net.cost_function = 'CrossEntropyLoss'
print(net)
#Optimize using standard gradient descent
optim = Optimizer.gradientDescentOptimizer
optim(input=X,
mappings=Y,
net=net,
alpha=0.07,
epoch=200,
lamb=0.05,
print_at=100)
output = net.forward(X)
#Convert the probabilities to output values
output = 1*(output>=0.5)
accuracy = np.sum(output==Y)/30000
print('for gradient descent \n accuracy = ' ,np.round(accuracy*100,5))
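The ADAM alternative mentioned above follows the standard Adam update rule (Kingma & Ba). A hedged single-parameter sketch, not necessarily the package's exact implementation:

```python
import numpy as np

def adam_step(w, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter array w with gradient g.

    m, v are the running first/second moment estimates; t is the
    1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * g            # first-moment estimate
    v = beta2 * v + (1 - beta2) * (g ** 2)     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```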