Pretzel: an optimized Machine Learning framework for low-latency and high-throughput workloads
Alberto Scolari
DEIB PhD Student, Politecnico di Milano
DEIB - NECST Meeting Room (Building 20, basement floor)
May 15th, 2018
12.00 pm
Research Line:
System architectures
In contrast to black-box approaches, Pretzel adopts a white-box description of ML models, which allows the framework to perform optimizations across deployed models and running tasks, saving memory and increasing overall system performance. When models share common state or code (as is often the case in practice), Pretzel keeps a single copy of the shared state objects and operators, reducing the overhead of compiling and optimizing the running models.
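The sharing idea described above can be sketched with a small interning cache. This is a minimal illustration, not Pretzel's actual API: the `OperatorRegistry` and `Tokenizer` names are hypothetical, and the sketch only shows how two deployed pipelines can end up referencing one operator instance (and its state) instead of two copies.

```python
# Illustrative sketch (NOT Pretzel's real API): deduplicating shared
# operators and their state across deployed model pipelines.
class OperatorRegistry:
    def __init__(self):
        self._cache = {}  # key -> shared operator instance

    def get_or_create(self, key, factory):
        # Return the cached instance if an identical operator was already
        # registered; otherwise build it once and cache it.
        if key not in self._cache:
            self._cache[key] = factory()
        return self._cache[key]


class Tokenizer:
    """Hypothetical operator carrying potentially large shared state."""
    def __init__(self, vocab):
        self.vocab = vocab


registry = OperatorRegistry()
vocab = {"hello": 0, "world": 1}

# Two models deploy "the same" tokenizer: both get one shared object.
op_a = registry.get_or_create(("tokenizer", "en-v1"), lambda: Tokenizer(vocab))
op_b = registry.get_or_create(("tokenizer", "en-v1"), lambda: Tokenizer(vocab))
assert op_a is op_b  # one copy in memory, referenced by both pipelines
```

Keeping operators unique in this way saves memory in proportion to how much state overlaps between models, which the abstract notes is common in practice.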
This talk will discuss the motivations behind Pretzel, its current design and possible future developments.
The NECSTLab is a DEIB laboratory, with different research lines on advanced topics in computing systems: from architectural characteristics, to hardware-software co-design methodologies, to the security and dependability of complex system architectures.
Every week, the "NECST Friday Talk" invites researchers, professionals, and entrepreneurs to share their work experience and the projects they are carrying out in the computing-systems area.