Introduction to Neural Networks for Modeling and Representing Natural Language
Emre Calisir
DEIB - PhD Student
DEIB - 1A Room (building 20, first floor)
September 19th, 2019
11.00 am
Contacts:
Marco Brambilla
Research Line:
Data, web, and society
Abstract
In this seminar, I will describe the basics of machine learning and artificial neural networks as applied to natural language processing tasks. I will also cover representation learning from text, including algorithms such as word2vec and fastText, and describe the differences between the various algorithms for learning representations. Efficient supervised text classification with the fastText algorithm will also be discussed. In the last part of the seminar, statistical language models based on neural networks will be introduced, along with advanced topics such as vanishing and exploding gradients and learning longer-term dependencies in recurrent networks. Finally, I will discuss the limitations of current learning algorithms, including the challenges of generalization over sequential data and of learning from language in general.
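As a minimal numeric sketch of the vanishing and exploding gradient problem mentioned above (the weight values below are hypothetical, chosen only for illustration): when a gradient is backpropagated through many timesteps of a recurrent network, it is repeatedly multiplied by the recurrent weight, so a magnitude below 1 shrinks it exponentially while a magnitude above 1 blows it up.

```python
def backprop_factor(recurrent_weight: float, timesteps: int) -> float:
    """Scalar caricature of backpropagation through time:
    the gradient is scaled by the recurrent weight once per timestep."""
    factor = 1.0
    for _ in range(timesteps):
        factor *= recurrent_weight
    return factor

# Hypothetical scalar weights, 20 timesteps:
vanishing = backprop_factor(0.5, 20)  # 0.5**20 ~ 9.5e-7: the gradient vanishes
exploding = backprop_factor(1.5, 20)  # 1.5**20 ~ 3.3e+3: the gradient explodes
```

This scalar picture is what motivates architectures with gating (such as LSTMs), which let recurrent networks retain longer-term memory despite this effect.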