Evaluating the quality of a set of modelling languages used in combination: A method and a tool

Faber D. Giraldo, Sergio España, William J. Giraldo, Oscar Pastor

    Research output: Contribution to journal › Article › Academic › peer-review

    Abstract

    Modelling languages have proved to be an effective tool for specifying and analysing various perspectives of enterprises and information systems. In addition to modelling language designs, work on model quality and modelling language quality evaluation has contributed to the maturity of the model-driven engineering (MDE) field. Although consolidated knowledge on quality evaluation remains relevant in this scenario, in previous work we identified misalignments between the topics that academia addresses and the needs of industry when applying MDE, and thereby some remaining challenges. In this paper, we focus on the need for a method to evaluate the quality of a set of modelling languages used in combination within an MDE environment. This paper presents MMQEF (Multiple Modelling language Quality Evaluation Framework), describing its foundations, presenting its method components, and discussing its trade-offs.
    Original language: English
    Pages (from-to): 48-70
    Journal: Information Systems
    Volume: 77
    DOIs
    Publication status: Published - Sept 2018

    Keywords

    • Quality
    • Model-driven engineering
    • Information systems
    • The MMQEF method
    • Reference taxonomy
    • Model analytics
