
Diachronic linguistics with distributional semantic models in R: G. Desagulier, LADAL Webinars 2021



This talk was recorded on Sep. 30, 2021 as part of the LADAL Opening Webinar Series 2021.

LADAL Website: https://slcladal.github.io/index.html
LADAL Opening Webinar Series: https://slcladal.github.io/opening.html
LADAL on Twitter: @slcladal
Contact: slcladal@uq.edu.au

Abstract

Computational linguistics offers promising tools for tracking language change in diachronic corpora. These tools exploit distributional semantic models (DSMs), both old and new. DSMs tend to perform well at the level of lexical semantics but are harder to fine-tune when it comes to capturing grammatical meaning.

I present ways in which the above can be improved. I start from well-trodden methodological paths implemented in diachronic construction grammar: changes in the collocational patterns of a linguistic unit reflect changes in meaning/function, and distributional word representations can be supplemented with frequency-based methods. I then show that when meaning is apprehended with predictive models (e.g. word2vec), one can trace semantic shifts with greater explanatory power than with count models. Although this idea may sound outdated from the perspective of NLP, it goes a long way from the viewpoint of theory-informed corpus linguistics.
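As a rough illustration of the predictive-model route, the sketch below trains two word2vec models on an early and a late sub-corpus and compares a target word's nearest neighbours across periods. This is not code from the talk: the CRAN word2vec package, the file names, and the target word are assumptions chosen for illustration only.

```r
# Minimal sketch (not the speaker's code): diachronic semantic shift via
# period-specific word2vec models. Assumes the CRAN 'word2vec' package and
# two hypothetical plain-text files with tokenised text from an early and
# a late sub-corpus.
library(word2vec)

early_txt <- readLines("corpus_early.txt")   # hypothetical early sub-corpus
late_txt  <- readLines("corpus_late.txt")    # hypothetical late sub-corpus

# Train one predictive (skip-gram) model per period.
m_early <- word2vec(x = early_txt, type = "skip-gram", dim = 100, window = 5, iter = 10)
m_late  <- word2vec(x = late_txt,  type = "skip-gram", dim = 100, window = 5, iter = 10)

# Nearest neighbours of the same target word in each period; changes in the
# neighbourhood are read as changes in meaning/function.
predict(m_early, newdata = "gay", type = "nearest", top_n = 10)
predict(m_late,  newdata = "gay", type = "nearest", top_n = 10)
```

In practice one would also align or otherwise compare the two vector spaces before interpreting differences, but the neighbour comparison already conveys the basic logic of tracing a shift across periods.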

I illustrate the above with several case studies, one of which involves complex locative prepositions in the Corpus of Historical American English. I conclude my talk by defending the idea that NLP, with its focus on computational efficiency, and corpus linguistics, with its focus on tools that maximize data inspection, have much to gain from moving closer together.