Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf
Date: 2021
Authors:
Elordi Hidalgo, Unai
Unzueta Irurtia, Luis
Goenetxea Imaz, Jon
Sánchez Carballido, Sergio
Arganda Carreras, Ignacio
Otaegui Madurga, Oihana
IEEE Software 38(1) : 81-87 (2021)
Abstract
We provide a novel methodology for decomposing the current MLPerf benchmark into the serverless function execution model. We have tested our approach on Amazon Lambda to benchmark the inference processing capabilities of the OpenCV and OpenVINO inference engines.
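The core measurement idea described in the abstract, wrapping a single inference call in a serverless function and timing it, can be sketched as a minimal AWS Lambda-style handler. This is a hypothetical illustration, not the authors' code: `run_inference` is a placeholder for an actual OpenCV or OpenVINO inference call, and the handler signature follows the AWS Lambda Python runtime convention.

```python
import json
import time


def run_inference(payload):
    # Hypothetical stand-in for an OpenCV/OpenVINO DNN inference call;
    # the real benchmark would load a model and run a forward pass here.
    time.sleep(0.001)
    return {"label": "example"}


def lambda_handler(event, context):
    # AWS Lambda entry point: time one inference and report its latency,
    # mirroring a single-query measurement in the MLPerf style.
    start = time.perf_counter()
    result = run_inference(event)
    latency_ms = (time.perf_counter() - start) * 1000.0
    return {
        "statusCode": 200,
        "body": json.dumps({"result": result, "latency_ms": latency_ms}),
    }
```

Invoked locally as `lambda_handler({}, None)`, the handler returns a JSON body containing the inference result and the measured per-invocation latency in milliseconds.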