Melanie Tosik

15 September 2018

Neural networks and quantifier conservativity

In the spring, I was able to take Sam Bowman’s course on “Natural Language Understanding and Computational Semantics”. Here is an excerpt from the course description:

The course will focus on text, but will touch on the full range of applicable techniques for language understanding, including formal logics, statistical methods, distributional methods, and deep learning, and will bring in ideas from formal linguistics where they can be readily used in practice. We’ll discuss tasks like sentiment analysis, word similarity, and question answering, as well as higher level issues like how to effectively represent language meaning.

The syllabus is open-source and still available online.


As part of the course, we had to produce a substantial new research paper on applied language understanding. Inspired by previous work in psycholinguistics, our group investigated the learnability bias towards conservative quantifiers that children exhibit but that artificial neural networks do not show during training. Our final paper, titled “Neural Networks and Quantifier Conservativity: Does Data Distribution Affect Learnability?”, received overwhelmingly positive feedback and is now available on arXiv.
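A quick gloss, since the post assumes the term: a quantifier Q is conservative if Q(A, B) always has the same truth value as Q(A, A ∩ B). “Every dog barks” means the same as “Every dog is a dog that barks”, so “every” is conservative; “only” is the textbook counterexample. Here is a minimal, purely illustrative Python sketch of that set-theoretic definition (the function names are hypothetical and this is not code from our project):

```python
from itertools import combinations

def every(a, b):
    """'Every A is B': true iff A is a subset of B."""
    return a <= b

def only(a, b):
    """'Only A are B': true iff B is a subset of A."""
    return b <= a

def powerset(universe):
    """All subsets of a finite universe, as sets."""
    return [set(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def is_conservative(q, universe):
    """Brute-force check that Q(A, B) == Q(A, A & B) for all A, B."""
    subsets = powerset(universe)
    return all(q(a, b) == q(a, a & b) for a in subsets for b in subsets)

universe = {"x", "y", "z"}
print(is_conservative(every, universe))  # True: "every" is conservative
print(is_conservative(only, universe))   # False: "only" is not
```

The brute-force check is exponential in the size of the universe, but on a toy domain it makes the definition concrete.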


View NLU project on GitHub ☺︎


’Til next time,
Melanie