Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?

Prajit Dhar, Arianna Bisazza

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results.
    Original language: English
    Title of host publication: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
    Place of publication: Brussels, Belgium
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 374-377
    Number of pages: 4
    DOIs
    Publication status: Published - Nov 2018
