Abstract: Supervised learning algorithms for artificial neural networks (ANNs) demand substantial time and computational power. As these algorithms gain popularity across a variety of domains, it becomes critical that they run fast. Following a brief survey of the different dimensions of parallelism in ANNs, this paper analyses and compares the performance of different parallelization techniques, showing the advantages and disadvantages of each strategy.
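To make one of the surveyed dimensions of parallelism concrete, the sketch below illustrates data parallelism for a single SGD step: the mini-batch is split into shards, each worker computes a gradient for the same model on its shard, and the gradients are averaged before the update. This is a minimal illustrative example, not code from the paper; the worker count, learning rate, synthetic data, and the linear least-squares model are all assumptions made for the sketch.

```python
# Minimal sketch of data-parallel SGD (illustrative assumptions only):
# split the mini-batch into shards, have each "worker" compute the gradient
# of the same model on its shard, then apply the averaged gradient.
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(256, 8)), rng.normal(size=256)  # synthetic regression data (assumed)
w = np.zeros(8)                                          # shared model parameters
n_workers, lr = 4, 0.1                                   # assumed worker count and step size

def shard_gradient(X_s, y_s, w):
    """Gradient of mean squared error on one worker's data shard."""
    err = X_s @ w - y_s
    return 2.0 * X_s.T @ err / len(y_s)

# Each worker handles one shard; in a real system these run concurrently
# on separate devices or processes and the results are all-reduced.
shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
grads = [shard_gradient(X_s, y_s, w) for X_s, y_s in shards]

w -= lr * np.mean(grads, axis=0)  # synchronous update with the averaged gradient
print(w)
```

Because every worker applies the same averaged gradient, this synchronous scheme yields the same update as single-device training on the full mini-batch, which is why it is a common baseline when comparing parallelization strategies.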