Delta learning rule

The delta rule is a formula for updating the weights of a neural network during training. It is a gradient descent learning rule, and it can be viewed as a special case of the backpropagation algorithm for a single-layer network. Recall that training a neural network involves iterating over the following steps:

A set of input–output sample pairs is selected at random and run through the neural network, which produces predictions for these samples.

The loss between the predictions and the true values is computed.

The weights are adjusted in a direction that makes the loss smaller.

The delta rule is one algorithm that can be applied repeatedly during training to modify the network weights and reduce the loss.

It is based on gradient descent and is applied iteratively until the weights converge. It states that the change in the weight of a node is proportional to the product of the error and the input, where the error is the difference between the desired output and the actual output.
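In symbols, for a single unit with a linear activation, the rule is commonly written as below, where η is the learning rate, t the desired output, y the actual output, and x_i the i-th input (this notation is the standard convention, not taken from the repository):

```latex
\Delta w_i = \eta \, (t - y) \, x_i , \qquad w_i \leftarrow w_i + \Delta w_i
```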

Within an artificial neural network, the delta rule is a specific kind of backpropagation that refines the network by learning associations between inputs and outputs across its layers of artificial neurons.

Generally, backpropagation recalculates the weights feeding into artificial neurons using a gradient technique. Delta learning does this by using the difference between a target activation and the obtained activation; with a linear activation function, the update is exactly the gradient of the squared error. Put another way, the delta rule uses an error function to perform gradient descent learning.
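As a concrete sketch, the snippet below trains a single linear neuron with the delta rule in Python. It is illustrative only: the function name `delta_rule_train` and all parameter values are made up for this example, not taken from the repository.

```python
import numpy as np

def delta_rule_train(X, t, lr=0.1, epochs=200, seed=0):
    """Train a single linear neuron with the delta rule (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])  # small random initial weights
    b = 0.0                                     # bias: a weight on a constant input of 1
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = w @ x + b        # actual output (linear activation)
            error = target - y   # desired output minus actual output
            w += lr * error * x  # delta rule: weight change = lr * error * input
            b += lr * error
    return w, b

# Toy data generated from y = 2*x1 - 3*x2 + 1; the learned weights
# should approach [2, -3] with bias near 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = 2 * X[:, 0] - 3 * X[:, 1] + 1
w, b = delta_rule_train(X, t, lr=0.2)
print(w, b)
```

Because the target here is itself a linear function of the inputs, the per-sample error shrinks toward zero and the weights settle at the exact solution.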

Application

The generalized delta rule is important for creating useful networks capable of learning complex relations between inputs and outputs. Compared with earlier learning rules such as the Hebbian learning rule or the correlation learning rule, the delta rule has the advantage of being mathematically derived and directly applicable to supervised learning. In addition, unlike the perceptron learning rule, which relies on the Heaviside step function as the activation function (whose derivative does not exist at zero and is equal to zero everywhere else), the delta rule applies to differentiable activation functions such as tanh and the sigmoid function.
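For a differentiable activation function f applied to the net input, the generalized delta rule multiplies the error by the derivative of the activation. With the sigmoid, that derivative takes a particularly simple form (standard notation, not from the original text):

```latex
\Delta w_i = \eta \, (t - y) \, f'(\mathrm{net}) \, x_i ,
\qquad
\mathrm{net} = \sum_i w_i x_i ,
\qquad
f(\mathrm{net}) = \frac{1}{1 + e^{-\mathrm{net}}} ,
\quad
f'(\mathrm{net}) = f(\mathrm{net}) \bigl( 1 - f(\mathrm{net}) \bigr)
```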
