TY - JOUR
T1 - On Deriving Conceptual Models from User Requirements: An Empirical Study
AU - Dalpiaz, Fabiano
AU - Gieske, Patrizia
AU - Sturm, Arnon
N1 - Funding Information:
The two experiments obtained research ethics approval from the relevant committees at Ben-Gurion University of the Negev and at Utrecht University, respectively. At Ben-Gurion University the research was approved by the Software and Information Systems Department Ethics Committee and the project approval identifier is SISE-2019-05. At Utrecht University, the research was approved by the Science-Geosciences Ethics Review Board and the research project identifier is Bèta S-20339. We would like to thank all the students who voluntarily decided to share their produced empirical data for research purposes. Also, we are grateful to Sergio España and Sjaak Brinkkemper for their terminological assistance.
Publisher Copyright:
© 2020 The Authors
PY - 2021/3
Y1 - 2021/3
N2 - Context: There are numerous textual notations and techniques that can be used in requirements engineering. Currently, practitioners make a choice without having scientific evidence regarding their suitability for given tasks. This uninformed choice may affect task performance. Objective: In this research, we investigate the adequacy of two well-known notations: use cases and user stories, as a starting point for the manual derivation of a structural conceptual model that represents the domain of the system. We also examine other factors that may affect the performance of this task. Methods: This work relies on two experiments. The first is a controlled classroom experiment. The second one is a quasi-experiment, conducted over multiple weeks, that aims at evaluating the quality of the derived conceptual model in light of the notation used, the adopted derivation process, and the complexity of the system to be. We measure quality in terms of validity and completeness of the conceptual model. Results: The results of the controlled experiment indicate that, for deriving conceptual models, user stories fit better than use cases. Yet, the second experiment indicates that the quality of the derived conceptual models is affected mainly by the derivation process and by the complexity of the case rather than the notation used. Contribution: We present evidence that the task of deriving a conceptual model is affected significantly by additional factors other than requirements notations. Furthermore, we propose implications and hypotheses that pave the way for further studies that compare alternative notations for the same task as well as for other tasks. Practitioners may use our findings to analyze the factors that affect the quality of the conceptual model when choosing a requirements notation and an elicitation technique that best fit their needs.
KW - Conceptual modeling
KW - Derivation process
KW - Requirements engineering
KW - Use cases
KW - User stories
UR - http://www.scopus.com/inward/record.url?scp=85097213244&partnerID=8YFLogxK
DO - 10.1016/j.infsof.2020.106484
M3 - Article
SN - 0950-5849
VL - 131
SP - 1
EP - 13
JO - Information and Software Technology
JF - Information and Software Technology
M1 - 106484
ER -