NECST Friday Talk
Natural language generators that are too dangerous to release?
Mark Carman
DEIB Associate professor, Politecnico di Milano
DEIB - NECSTLab Meeting Room (Building 20, basement floor)
February 22nd, 2019
12:00 pm
Contacts:
Marco Santambrogio
Research line:
System architectures
Abstract
There has been considerable media attention in recent days about new deep-learning-based language generation techniques that are "so good" that their developers (at OpenAI) declined to release the models, for fear that spammers would start using them to generate fake news. See, for example: https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction.
In this talk, Prof. Carman will discuss the deep learning techniques used by the OpenAI team to build their text generation model.
The NECSTLab is a DEIB laboratory, with different research lines on advanced topics in computing systems: from architectural characteristics, to hardware-software codesign methodologies, to security and dependability issues of complex system architectures.
Every week, the "NECST Friday Talk" invites researchers, professionals, or entrepreneurs to share their work experiences and the projects they are implementing in the area of computing systems.