Replication package for the case study on patterns of requirements quality impact

License: MIT License


Relevant Factors of Requirements Quality: Replication Package


This repository contains the replication package for the case study on identifying relevant factors of requirements quality. In this study, we used the requirements quality theory [1] as a frame to identify which quality and context factors [2] are relevant to the specific company case. This repository contains the (anonymized) data sets, scripts to analyze them, and figures to visualize them.

Contributors and Article Information

Name: Julian Frattini
Affiliation: Blekinge Institute of Technology, Sweden
Email: [email protected]

Cite this work as follows: Frattini, J. (2024). Identifying relevant Factors of Requirements Quality: an industrial Case Study. In Requirements Engineering: Foundation for Software Quality: 30th International Working Conference, REFSQ 2024, Winterthur, Switzerland, April 8-11, 2024, Proceedings 30. Springer International Publishing.

@inproceedings{frattini2024identifying,
  title={Identifying relevant Factors of Requirements Quality: an industrial Case Study},
  author={Frattini, Julian},
  booktitle={Requirements Engineering: Foundation for Software Quality: 30th International Working Conference, REFSQ 2024, Winterthur, Switzerland, April 8--11, 2024, Proceedings 30},
  year={2024},
  organization={Springer}
}

Description of the Artifacts

This repository contains the following artifacts:

  • data/ : folder for all data-related information
    • columns.json : list of relevant columns and available values per column
    • data-description.md : further explanation of the data (e.g., sheets, values)
    • interview-data.xlsx : Excel sheet containing both the original and overlap codes assigned to the extracted interview statements
    • issue-data.xlsx : Excel sheet containing the codes assigned to issues
  • doc/ : folder containing all additional documentation and supplementary material
  • figures/ : folder for all figures
    • graphml/ : figures in editable .graphml format
      • activity-model.graphml : tree of activities and attributes
      • interaction-example.graphml : mapping a statement describing an interaction effect to the requirements quality theory
      • method-visualization.graphml : visual overview of the data collection and analysis method
      • requirements-quality-theory.graphml : simplified version of the requirements quality theory [1]
    • pdf/ : the same figures but in viewable .pdf format
  • src/ : folder containing all source code
    • analytics/ : folder for all source code related to analysis
      • evaluation.ipynb : notebook to filter and aggregate the data to themes
      • interrater-agreement.ipynb : notebook to calculate the agreement between two independent raters coding the interview data
      • results.md : summary of the interview code results
    • exploration.ipynb : notebook to further explore the data set
    • requirements.txt : list of python libraries required to execute the source code

This repository does not contain verbatim interview or issue data for privacy reasons.

System Requirements

The following requirements must be met to use the artifacts contained in this repository:

  • To use the data, you need spreadsheet software such as Microsoft Excel to open the .xlsx files.
  • To execute the code, Python 3.10 must be available; an editor such as Visual Studio Code is recommended.
  • To edit the figures, you need an editor capable of opening Graph Markup Language (.graphml) files, for example the yEd Graph Editor.
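As an alternative to spreadsheet software, the .xlsx files can also be inspected programmatically. The following sketch assumes pandas (with an Excel engine such as openpyxl) is covered by requirements.txt and is run from the repository root; the guard keeps it safe to run elsewhere:

```python
from pathlib import Path

import pandas as pd  # assumed to be listed in requirements.txt

data_file = Path("data/interview-data.xlsx")
if data_file.exists():
    # Load every sheet into a dict of DataFrames to inspect the coded statements
    sheets = pd.read_excel(data_file, sheet_name=None)
    for name, df in sheets.items():
        print(name, df.shape)
```

The same approach works for data/issue-data.xlsx; see data/data-description.md for what the sheets and columns mean.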

Installation Instructions

To execute the code contained in this repository, install all requirements listed in requirements.txt by executing pip install -r requirements.txt. To avoid conflicts with the local Python environment, do so in a separate virtual environment:

  1. Create a virtual environment with python -m venv .venv
  2. Activate the virtual environment with .venv\Scripts\activate (Windows) or source .venv/bin/activate (Linux/macOS)
  3. Install the requirements as described above.
  4. Install the ipykernel, either by selecting the virtual environment .venv as the runtime environment for the Jupyter notebooks in VS Code (which prompts VS Code to install the ipykernel automatically) or by installing it manually with pip install ipykernel.

Once the virtual environment is active and both the requirements and the ipykernel are installed, the Jupyter notebooks can be executed from within it.
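Assuming a POSIX shell, the steps above can be consolidated as follows (on Windows, activate with .venv\Scripts\activate instead):

```shell
# Create an isolated virtual environment and activate it
python3 -m venv .venv
. .venv/bin/activate

# Install the pinned dependencies when run from the repository root;
# additionally run `pip install ipykernel` to execute the notebooks
# outside of VS Code
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi
```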

Steps to Reproduce

Execute the code blocks of the following Jupyter notebooks from top to bottom to reproduce the results from the manuscript:

  • interrater-agreement.ipynb: calculation of the inter-rater agreement between the author and the independent rater
  • evaluation.ipynb: filtering of the raw data and aggregation of the data into larger, more meaningful units

Additionally, you can use the notebook exploration.ipynb to explore the data set in more depth.
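This README does not specify which agreement statistic the inter-rater notebook reports; a common choice for two raters assigning nominal codes is Cohen's kappa, sketched below in pure Python with hypothetical code labels (the notebook may well use a library implementation or a different statistic):

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)


# Hypothetical codes assigned by two raters to four interview statements
print(cohens_kappa(["rel", "rel", "irr", "rel"],
                   ["rel", "irr", "irr", "rel"]))  # → 0.5
```

Kappa ranges from 1 (perfect agreement) down through 0 (chance-level agreement) to negative values (systematic disagreement).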

References

[1] Frattini, J., Montgomery, L., Fischbach, J., Mendez, D., Fucci, D., & Unterkalmsteiner, M. (2023). Requirements quality research: a harmonized theory, evaluation, and roadmap. Requirements Engineering, 1-14. https://doi.org/10.1007/s00766-023-00405-y

[2] Femmer, H., Mund, J., & Fernández, D. M. (2015, May). It's the activities, stupid! A new perspective on RE quality. In 2015 IEEE/ACM 2nd International Workshop on Requirements Engineering and Testing (pp. 13-19). IEEE.

Licensing

Copyright © 2023 Julian Frattini. This work is licensed under the MIT License.


rqi-relf's Issues

Add "how to cite"

If the manuscript presenting the case study results gets accepted, add a section on how to cite the artifact.

Format results

Currently, the study results are only contained in the output of the analysis scripts. To ease readability, extract them into markdown files, visualizing the impact.

Add interview coding guideline

Add the coding guideline that has been used to evaluate the interview and the issue report data.

  • Remove real-world examples from the coding guideline
  • Update descriptions
  • Match the mention types of the paper with the ones in the guideline

Update readme

Add information to the README.md file to ease navigation around the repository.

Add interview code data

Add the Excel file containing the interview data. The file needs to be clear of any verbatim statements and may only contain the codes.

  • Remove verbatim statements
  • Update code descriptions

Add figures

Add the figures - both the PDF and the graphs - to the repository.

Add issue report code data

Add the Excel file containing the evaluation of the issue reports. The file needs to be free of any confidential data and may only contain the code labels.

Add interview guideline

Add the interview guideline that describes how the interview was conducted to facilitate replication.

Update article information

Once the proceedings are published,

  • update the article information to include the page number (after "Proceedings 30" there should be a page number, e.g., "pp. 19-36"), and
  • update the BibTeX information including the DOI of the version of record.
