MATHEMATICAL APPROACH OF THE BACKPROPAGATION METHOD FOR CONSTRUCTING ARTIFICIAL NEURAL NETWORKS
DOI: https://doi.org/10.54309/IJICT.2024.19.3.003
Keywords: backpropagation method; loss function; ANN (artificial neural network); gradient descent; activation function; weights; biases; parameters.
Abstract
Backpropagation is arguably the core of neural network training. The method trains a network efficiently by applying the chain rule, which allows the differentiation of composite functions. In other words, after each forward pass through the network, the backpropagation method performs a backward pass that adjusts the model parameters, namely the weights and biases.
This article highlights the importance of the backpropagation method for neural networks from the standpoint of its underlying mathematical formulas.
The role of the backpropagation learning algorithm in computing the gradient (for gradient descent), and the need for an activation function when minimizing the loss function, are described mathematically and derived with formulas. This is demonstrated by computing the matrix-vector products for the parameters of each layer, its weights and biases, and by applying the chain rule to differentiate the resulting composite functions.
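The calculations described in the abstract can be illustrated in code. The following is a minimal sketch, not the article's own implementation: a small 2-3-1 network with a sigmoid activation and squared-error loss (both assumptions chosen for brevity), trained by a forward pass of matrix-vector products followed by a chain-rule backward pass and a gradient descent update of the weights and biases.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Activation function; its derivative is sigmoid(z) * (1 - sigmoid(z)).
    return 1.0 / (1.0 + np.exp(-z))

# Parameters of each layer: weight matrices W1, W2 and bias vectors b1, b2.
W1 = rng.normal(size=(3, 2)); b1 = np.zeros((3, 1))
W2 = rng.normal(size=(1, 3)); b2 = np.zeros((1, 1))

x = np.array([[0.5], [-0.2]])   # input column vector
y = np.array([[1.0]])           # target output

lr = 0.5                        # gradient descent step size
losses = []
for _ in range(200):
    # Forward pass: matrix-vector products plus biases, then activation.
    z1 = W1 @ x + b1;  a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2; a2 = sigmoid(z2)
    loss = 0.5 * float((a2 - y) ** 2)   # squared-error loss
    losses.append(loss)

    # Backward pass: chain rule applied layer by layer.
    delta2 = (a2 - y) * a2 * (1 - a2)         # dL/dz2
    dW2 = delta2 @ a1.T; db2 = delta2
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)  # dL/dz1
    dW1 = delta1 @ x.T;  db1 = delta1

    # Gradient descent: adjust weights and biases against the gradient.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0], losses[-1])
```

Running the loop shows the loss shrinking over the iterations, which is the minimization of the loss function that the backward pass makes possible.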
License
Copyright (c) 2024 INTERNATIONAL JOURNAL OF INFORMATION AND COMMUNICATION TECHNOLOGIES
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
https://creativecommons.org/licenses/by-nc-nd/3.0/deed.en