Evaluating and comparing language workbenches: Existing results and benchmarks for the future

Sebastian Erdweg*, Tijs van der Storm, Markus Voelter, Laurence Tratt, Remi Bosman, William R. Cook, Albert Gerritsen, Angelo Hulshout, Steven Kelly, Alex Loh, Gabriel Konat, Pedro J. Molina, Martin Palatnik, Risto Pohjonen, Eugen Schindler, Klemens Schindler, Riccardo Solmi, Vlad Vergu, Eelco Visser, Kevin van der Vlist, Guido Wachsmuth, Jimi van der Woning

*Corresponding author for this work

    Research output: Contribution to journal › Article › Academic › peer-review

    148 Citations (Scopus)

    Abstract

    Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to give the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and the results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.

    Original language: English
    Pages (from-to): 24-47
    Number of pages: 24
    Journal: Computer Languages, Systems and Structures
    Volume: 44
    Publication status: Published - Dec-2015
    Event: 7th International Conference on Software Language Engineering (SLE) - Västerås, Sweden
    Duration: 15-Sept-2014 – 16-Sept-2014

    Keywords

    • Language workbenches
    • Domain-specific languages
    • Questionnaire language
    • Survey
    • Benchmarks
