dc.contributor.author | Garciarena Hualde, Unai | |
dc.contributor.author | Mendiburu Alberro, Alexander | |
dc.contributor.author | Santana Hermida, Roberto | |
dc.date.accessioned | 2025-01-20T15:16:35Z | |
dc.date.available | 2025-01-20T15:16:35Z | |
dc.date.issued | 2018-07-02 | |
dc.identifier.citation | GECCO '18: Proceedings of the Genetic and Evolutionary Computation Conference : 849-856 (2018) | es_ES |
dc.identifier.isbn | 978-1-4503-5618-3 | |
dc.identifier.uri | http://hdl.handle.net/10810/71604 | |
dc.description.abstract | In the past, evolutionary algorithms (EAs) that use probabilistic modeling of the best solutions incorporated latent or hidden variables into the models as a more accurate way to represent the search distributions. Recently, a number of neural-network models that compute approximations of posterior (latent variable) distributions have been introduced. In this paper, we investigate the use of the variational autoencoder (VAE), a class of neural-network-based generative model, for modeling and sampling search distributions as part of an estimation of distribution algorithm. We show that the VAE can capture dependencies between decision variables and objectives. This feature is shown to improve the sampling capacity of model-based EAs. Furthermore, we extend the original VAE model by adding a new, fitness-approximating network component. We show that it is possible to adapt the architecture of these models, and we present evidence of how to extend VAEs to better fulfill the requirements of probabilistic modeling in EAs. While our results are not yet competitive with state-of-the-art probabilistic optimizers, they represent a promising direction for the application of generative models within EDAs. | es_ES |
dc.description.sponsorship | This work has received support from the predoctoral grant held by Unai Garciarena (ref. PIF16/238) from the University of the Basque Country, and from the IT-609-13 (Basque Government) and TIN2016-78365-R (Spanish Ministry of Economy, Industry and Competitiveness, http://www.mineco.gob.es/portal/site/mineco) programs. Finally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used to accelerate the process of training the models used in this work. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | ACM | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.subject | variational autoencoder | es_ES |
dc.subject | estimation of distribution algorithm | es_ES |
dc.subject | neural network | es_ES |
dc.subject | generative modeling | es_ES |
dc.title | Expanding variational autoencoders for learning and exploiting latent representations in search distributions | es_ES |
dc.type | info:eu-repo/semantics/conferenceObject | es_ES |
dc.rights.holder | © 2018 Association for Computing Machinery | es_ES |
dc.relation.publisherversion | https://doi.org/10.1145/3205455.3205645 | es_ES |
dc.identifier.doi | 10.1145/3205455.3205645 | |
dc.departamentoes | Ciencia de la computación e inteligencia artificial | es_ES |
dc.departamentoeu | Konputazio zientziak eta adimen artifiziala | es_ES |