/ælɪks wɔɹstæt/ 🔊
As a computational linguist, I use deep learning to study human language acquisition. My work has three strands:
1. Building benchmarks for evaluating linguistic behaviors of models.
2. Training model learners in more developmentally plausible environments.
3. Determining the impact of the learning environment on grammatical generalization through causal manipulations of environmental variables.
I’m also interested in theoretical and experimental pragmatics, and have worked on understanding gradient phenomena in presupposition and relevance.
What Artificial Neural Networks Can Tell Us About Human Language Acquisition.
Alex Warstadt, Samuel R. Bowman. To appear in Algebraic Structures in Natural Language. Shalom Lappin and Philippe Bernardy, Eds. 2022.
Testing gradient measures of relevance in discourse.
Alex Warstadt, Omar Agha. To appear in Proceedings of Sinn und Bedeutung 26. 2022.
Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually).
Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, Samuel R. Bowman. In Proceedings of EMNLP. 2020.
BLiMP: The Benchmark of Linguistic Minimal Pairs for English.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman. Transactions of the Association for Computational Linguistics (TACL). 2020.
alexwarstadt <at> gmail <dot> com
warstadt <at> nyu <dot> edu