Position: Ph.D. student
Current Institution: Stanford University
Towards the Machine Comprehension of Text
Enabling a computer to understand a document so that it can answer comprehension questions is a central yet unsolved goal of NLP. A key factor impeding its solution by machine learning systems is the limited availability of human-annotated data. Recently, researchers proposed exploiting the fact that the abundant news articles of CNN and Daily Mail are accompanied by bullet-point summaries to heuristically create large-scale supervised training data for the reading comprehension task. My research aims to demonstrate two points: 1) simple recurrent neural networks with an attention mechanism are highly effective at solving such large but synthetic tasks; 2) models trained on such datasets can be valuable for making real progress on machine comprehension.
Danqi Chen is currently a Ph.D. candidate in the Computer Science Department at Stanford University, advised by Prof. Christopher Manning. Her main research interests lie in deep learning for natural language processing and understanding, and she is particularly interested in the intersection of text understanding and knowledge reasoning. She has worked on machine comprehension, knowledge base completion/population, and dependency parsing, and her work has been published in leading NLP/ML conferences. Prior to Stanford, she received her B.S. from Tsinghua University in 2012. She has been awarded the Microsoft Research Women's Fellowship and has also received notable awards in programming contests (IOI'08 gold medalist and ACM/ICPC WF'10 silver medalist).