This is the replication package for the pre-print LLMs for Science: Usage for Code Generation and Data Analysis, which studies the use of LLM tools for code generation in the scientific process.
Following open science principles, and to support replicability, we publish this replication package together with recordings of all our interactions with the tools and our evaluation criteria.
The logs of our interactions with the tools are structured by use case. For each use case, there are two logs, one from each of the two authors who conducted the experiments. The code generation folder also contains the code we used to run the performance benchmark.
We provide the assessment rubric used to evaluate the results mentioned above. For convenience, it is provided in several formats.
We have also compiled examples of misleading results from the data analysis and visualization use cases.