Pedro Rodriguez presents "Word Embeddings: What works, what doesn’t, and how to tell the difference for applied research"
Wednesday, October 16, 2019
CGIS Knafel Building (K354), 12–1:30 pm
Abstract: We consider the properties and performance of word embedding techniques in the context of political science research. In particular, we explore key parameter choices—including context window length, embedding vector dimensions, and the use of pre-trained vs. locally fitted variants—in terms of their effects on the efficiency and quality of inferences possible with these models. Reassuringly, with caveats, we show that results are robust to such choices for political corpora of various sizes and in various languages. Beyond reporting extensive technical findings, we provide a novel crowd-sourced “Turing test”-style method for examining the relative performance of any two models that produce substantive, text-based outputs. Encouragingly, we show that popular, easily available pre-trained embeddings perform at a level close to—or surpassing—both human coders and more complicated locally fitted models. For completeness, we provide best-practice advice for cases where local fitting is required.
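For readers unfamiliar with the two hyperparameters the abstract highlights, the following is a minimal illustrative sketch (not the speakers' method or data): it builds word vectors from a co-occurrence matrix via truncated SVD over a tiny hypothetical corpus, with the context window length and embedding dimension exposed as the tunable parameters under discussion.

```python
# Illustrative sketch only: co-occurrence counts + truncated SVD embeddings.
# WINDOW and DIM stand in for the "context window length" and "embedding
# vector dimensions" choices examined in the talk. Corpus is invented.
import numpy as np

corpus = [
    "the senate passed the bill".split(),
    "the house debated the bill".split(),
    "the committee amended the bill".split(),
]

WINDOW = 2  # context window length: how many neighbors count as "context"
DIM = 3     # embedding vector dimension: size of each word's vector

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count how often each word pair co-occurs within +/- WINDOW tokens.
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)
        for j in range(lo, hi):
            if i != j:
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD of the co-occurrence matrix yields DIM-dimensional vectors.
U, S, _ = np.linalg.svd(C)
embeddings = U[:, :DIM] * S[:DIM]

def similarity(a, b):
    """Cosine similarity between the embeddings of two words."""
    va, vb = embeddings[idx[a]], embeddings[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Varying WINDOW and DIM changes the resulting similarities; the talk's empirical question is how sensitive downstream political science inferences are to such choices.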