Working with Data from Real-World Corpora: A Case Study on Identifying Issues and Using Scalable Solutions

06 November 2020, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Language corpora are increasingly built from data collected on social and educational online platforms, and the size of these new corpora allows researchers to analyze language use in ways that were not previously possible. However, these platforms generally do not collect data with linguistic research in mind, so their data is often "messy" or "dirty" in various ways. Researchers must therefore develop new approaches for organizing and cleaning this type of data. Given the size of these datasets, such approaches should generally be scalable, relying primarily on quantitative and NLP-based techniques. Here, I present a case study based on working with a large-scale English learner database, the EFCAMDAT. I provide insights into the kinds of challenges that researchers may encounter when working with such corpora, and the kinds of solutions they may use.

Keywords

data curation
corpus cleaning
EFCAMDAT
