Show simple item record

dc.contributor.author	Garciarena Hualde, Unai
dc.contributor.author	Mendiburu Alberro, Alexander
dc.contributor.author	Santana Hermida, Roberto
dc.date.accessioned	2025-01-20T15:16:35Z
dc.date.available	2025-01-20T15:16:35Z
dc.date.issued	2018-07-02
dc.identifier.citation	GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference: 849-856 (2018)
dc.identifier.isbn	978-1-4503-5618-3
dc.identifier.uri	http://hdl.handle.net/10810/71604
dc.description.abstract	In the past, evolutionary algorithms (EAs) that use probabilistic modeling of the best solutions incorporated latent or hidden variables into the models as a more accurate way to represent the search distributions. Recently, a number of neural-network models that compute approximations of posterior (latent-variable) distributions have been introduced. In this paper, we investigate the use of the variational autoencoder (VAE), a class of neural-network-based generative model, for modeling and sampling search distributions as part of an estimation of distribution algorithm (EDA). We show that the VAE can capture dependencies between decision variables and objectives, a feature shown to improve the sampling capacity of model-based EAs. Furthermore, we extend the original VAE model by adding a new, fitness-approximating network component. We show that it is possible to adapt the architecture of these models, and we present evidence of how to extend VAEs to better fulfill the requirements of probabilistic modeling in EAs. While our results are not yet competitive with state-of-the-art probabilistic optimizers, they represent a promising direction for the application of generative models within EDAs.
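The abstract describes fitting a generative model to the best solutions and sampling it to produce the next population. The sketch below shows that generic EDA loop with a simple Gaussian model standing in for the model-fitting step; in the paper a VAE is trained on the selected solutions and sampled instead. All names here (`gaussian_eda`, `sphere`, the parameter values) are illustrative, not from the paper.

```python
import numpy as np


def sphere(x):
    """Standard minimization benchmark: f(x) = sum(x_i^2)."""
    return float(np.sum(x ** 2))


def gaussian_eda(dim=5, pop=100, elite=25, gens=50, seed=0):
    """Minimal EDA loop: sample population, select elite, refit model."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    best = np.inf
    for _ in range(gens):
        # Sample a new population from the current search distribution.
        X = rng.normal(mu, sigma, size=(pop, dim))
        fit = np.array([sphere(x) for x in X])
        # Truncation selection: keep the best `elite` solutions.
        elite_X = X[np.argsort(fit)[:elite]]
        # Model-fitting step: here a factorized Gaussian; the paper's
        # approach would (re)train a VAE on elite_X and sample from it.
        mu = elite_X.mean(axis=0)
        sigma = elite_X.std(axis=0) + 1e-8
        best = min(best, float(fit.min()))
    return best


if __name__ == "__main__":
    print(gaussian_eda())
```

The only part a VAE-based EDA changes is the model-fitting and sampling pair of lines; the selection and evaluation loop is shared across model-based EAs.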
dc.description.sponsorship	This work has received support from the predoctoral grant held by Unai Garciarena (ref. PIF16/238), funded by the University of the Basque Country, and from the IT-609-13 (Basque Government) and TIN2016-78365-R (Spanish Ministry of Economy, Industry and Competitiveness, http://www.mineco.gob.es/portal/site/mineco) programs. Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used to accelerate the training of the models used in this work.
dc.language.iso	eng
dc.publisher	ACM
dc.rights	info:eu-repo/semantics/openAccess
dc.subject	variational autoencoder
dc.subject	estimation of distribution algorithm
dc.subject	neural network
dc.subject	generative modeling
dc.title	Expanding variational autoencoders for learning and exploiting latent representations in search distributions
dc.type	info:eu-repo/semantics/conferenceObject
dc.rights.holder	© 2018 Association for Computing Machinery
dc.relation.publisherversion	https://doi.org/10.1145/3205455.3205645
dc.identifier.doi	10.1145/3205455.3205645
dc.departamentoes	Ciencia de la computación e inteligencia artificial
dc.departamentoeu	Konputazio zientziak eta adimen artifiziala

