Abstract
Recent work has shown that neural models can
be successfully trained on multiple languages
simultaneously. We investigate whether such
models learn to share and exploit common
syntactic knowledge among the languages on
which they are trained. This extended abstract
presents our preliminary results.
| Original language | English |
| --- | --- |
| Title of host publication | 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP |
| Place of publication | Brussels, Belgium |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 374-377 |
| Number of pages | 4 |
| DOIs | |
| Publication status | Published - Nov-2018 |