Show simple item record

dc.contributor.author: Teso Fernández de Betoño, Adrián
dc.contributor.author: Zulueta Guerrero, Ekaitz
dc.contributor.author: Cabezas Olivenza, Mireya
dc.contributor.author: Teso Fernández de Betoño, Daniel
dc.contributor.author: Fernández Gámiz, Unai
dc.date.accessioned: 2022-09-16T15:40:22Z
dc.date.available: 2022-09-16T15:40:22Z
dc.date.issued: 2022-09-05
dc.identifier.citation: Mathematics 10(17) : (2022) // Article ID 3206
dc.identifier.issn: 2227-7390
dc.identifier.uri: http://hdl.handle.net/10810/57755
dc.description.abstract: When training a feedforward neural network with stochastic gradient descent, there is a possibility that a batch of patterns is not learned correctly, which causes the network to fail in its predictions in the areas adjacent to those patterns. This problem has usually been resolved by directly adding more complexity to the network, normally by increasing the number of learning layers, which makes it heavier to run on the workstation. In this paper, the properties of the patterns and their effect on the network are analysed, and two main reasons why the patterns are not learned correctly are distinguished: the vanishing of the Jacobian gradient in the processing layers of the network and the opposing gradient direction of those patterns. A simplified experiment has been carried out on a simple neural network, and the errors appearing during and after training have been monitored. The data obtained support the initial hypothesis about the causes. Finally, some corrections to the network are proposed with the aim of solving those training issues and offering a sufficiently accurate prediction, while increasing the complexity of the network as little as possible.
dc.description.sponsorship: The authors were supported by the Government of the Basque Country through the research grant ELKARTEK KK-2021/00014 BASQNET (Estudio de nuevas técnicas de inteligencia artificial basadas en Deep Learning dirigidas a la optimización de procesos industriales).
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: machine learning
dc.subject: neural network training
dc.subject: training algorithms
dc.title: A Study of Learning Issues in Feedforward Neural Networks
dc.type: info:eu-repo/semantics/article
dc.date.updated: 2022-09-08T13:24:38Z
dc.rights.holder: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.relation.publisherversion: https://www.mdpi.com/2227-7390/10/17/3206
dc.identifier.doi: 10.3390/math10173206
dc.departamentoes: Ingeniería eléctrica
dc.departamentoes: Ingeniería de sistemas y automática
dc.departamentoes: Ingeniería Energética
dc.departamentoeu: Ingeniaritza elektrikoa
dc.departamentoeu: Sistemen ingeniaritza eta automatika
dc.departamentoeu: Energia Ingenieritza

