Position: Postdoctoral Researcher

Current Institution: University of California, Berkeley

Abstract:
Computational Models of Natural Language Learning and Processing

People learn languages effortlessly, through observation and without any explicit supervision; however, language learning is a complex computational process that we do not fully understand. In this talk, I will explain how computational modeling can shed light on the mechanisms underlying semantic acquisition — learning word meanings and their relations — which is a significant aspect of language learning. I introduce an unsupervised framework for semantic acquisition that mimics children: it starts with no linguistic knowledge and processes the input using general cognitive (learning) mechanisms such as memory and attention. I show that by integrating other cognitive mechanisms with word learning, our computational model can better account for child behavior. Specifically, I demonstrate that three important phenomena observed in child vocabulary development — individual differences, the role of forgetting in learning, and the learning of semantic relations among words — can only be explained when these cognitive mechanisms are integrated with word learning.

Bio:
Aida Nematzadeh is a postdoctoral researcher at the University of California, Berkeley. She received a PhD and an MSc in Computer Science from the University of Toronto in 2015 and 2010, respectively. Aida's research provides a better understanding of the computational mechanisms underlying the human ability to learn and organize information, with a focus on language learning. She has been awarded an NSERC Postdoctoral Fellowship from the Natural Sciences and Engineering Research Council of Canada.