education/autograding
GitHub Education Auto-grading and Feedback for GitHub Classroom
License: MIT License
Hello,
I have created an assignment to write a Go program and created tests for it. The first test is an input/output test on STDIN and STDOUT, which checks whether the program returns the expected value. The second test is a run command that executes the terminal command echo "3 2 1" | ./opgame, which pipes the input to the program.
However, when I run the script I get the following error.
As shown in the image, it is asking for the build cache, GOCACHE.
Is it possible to write auto-grading for the Go programming language?
Currently, each autograding test "flows" into a single total: if any one test fails, the set is marked "red" while if all tests succeed the set is marked "green" (and students receive a long line of encouraging emoji). This really makes a lot of sense. On the other hand, it would be nice to be able to mark one or perhaps two tests as "Extra Credit" meaning that those tests go "outside" of the main set. For example, if a student's submission passed all of the "main" tests, the autograder would still give them a "green" (with emoji) and the "Extra Credit" tests would be independent of the "main" set.
Following on from https://education.github.community/t/autograder-score-on-readme/66289
Currently grade results are buried in the action output and are difficult for students to find. The proposal is to create a status badge of the grade which can be added to the README.
Status badges can be made using the points output from the autograding action, by customising the classroom.yml file.
Would it be possible though for the autograding action to create a grading status badge itself? (Therefore saving educators having to create their own classroom.yml file)
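As a sketch of the current workaround (the Points output name and the badge step are assumptions based on community examples, not a documented API), a customised classroom.yml might look like:

```yaml
name: GitHub Classroom Workflow
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # run the autograder and give the step an id so its output can be reused
      - uses: education/autograding@v1
        id: autograder
      # feed the points ("x/y") to whatever generates the badge,
      # e.g. a gist-backed dynamic badge or a committed SVG
      - run: echo "points=${{ steps.autograder.outputs.Points }}"
```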
I added input/output tests and executed the command go run main.go.
During the test, autograding shows:
build cache is required, but could not be located: GOCACHE is not defined and neither $XDG_CACHE_HOME nor $HOME are defined
I tried to set the $GOCACHE and $HOME environment variables, but it didn't work.
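One workaround worth trying (a sketch, not official guidance: the cache path, points, and expected output are placeholders) is to define the variables inside the test's own run command in .github/classroom/autograding.json, since the process the grader spawns does not necessarily inherit the workflow's environment:

```json
{
  "name": "opgame I/O test",
  "setup": "mkdir -p /tmp/gocache",
  "run": "GOCACHE=/tmp/gocache HOME=/tmp go run main.go",
  "input": "3 2 1",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 10
}
```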
It seems to me that the test code must be part of the starter code so that it can be graded. But couldn't students just change the test code?
This is not a feature request, merely a comment to document what seems like undocumented API.
I found I was able to set the "points" manually (instead of actually using education/autograding) via the following script:
- uses: actions/github-script@v6
  with:
    script: |
      // derived from https://github.com/education/autograding/blob/1d058ce58864938499105ab5cd1c941651ce7e27/src/output.ts
      // Fetch the workflow run
      const workflowRunResponse = await github.rest.actions.getWorkflowRun({
        owner: context.repo.owner,
        repo: context.repo.repo,
        run_id: context.runId,
      })
      const checkSuiteUrl = workflowRunResponse.data.check_suite_url
      const checkSuiteId = parseInt(checkSuiteUrl.match(/[0-9]+$/)[0], 10)
      const checkRunsResponse = await github.rest.checks.listForSuite({
        owner: context.repo.owner,
        repo: context.repo.repo,
        check_name: 'Autograding',
        check_suite_id: checkSuiteId,
      })
      const checkRun = checkRunsResponse.data.total_count === 1 && checkRunsResponse.data.check_runs[0]
      if (!checkRun) return;
      // Update the check run; we'll assign the title, summary and text even though we expect
      // the title and summary to be overwritten by GitHub Actions (they are required in this call).
      // We'll also store the total in an annotation to future-proof.
      const text = "Points 11/10";
      await github.rest.checks.update({
        owner: context.repo.owner,
        repo: context.repo.repo,
        check_run_id: checkRun.id,
        output: {
          title: 'Autograding',
          summary: text,
          text: text,
          annotations: [
            {
              // Using the `.github` path is what GitHub Actions does
              path: '.github',
              start_line: 1,
              end_line: 1,
              annotation_level: 'notice',
              message: text,
              title: 'Autograding complete',
            },
          ],
        },
      })
Note this makes it possible to accumulate points in other ways, such as using them to indicate the (non-negative integer) result of an optimization problem.
Node 12 is being deprecated. This action needs to be updated to Node 16.
Issue Description:
Background:
GitHub Classroom's auto-grading feature is a valuable tool for instructors to assess student assignments automatically. However, one current limitation is the inability to award partial credit based on individual test cases within the auto-grading suite. This functionality would significantly enhance grading flexibility and provide more accurate feedback to students.
Feature Request:
I propose implementing a feature that allows instructors to perform partial grading based on the specific test cases that a student's code passes. The auto-grading workflow currently provides an all-or-nothing assessment, which may not reflect a student's true proficiency in different aspects of the assignment.
Requested Functionality:
Selective Test Execution: Introduce the ability to selectively execute specific test cases within the auto-grading suite based on their weight or importance. For instance, if an assignment has multiple requirements, the instructor should be able to assign weights to these requirements and execute tests accordingly.
Gradual Accumulation of Scores: When a test case is passed, the corresponding weight should contribute to the student's overall score. This way, a student's grade accurately reflects their proficiency in different aspects of the assignment.
Customizable Feedback: Instructors should be able to provide customized feedback for each test case. Feedback could include explanations for passed and failed test cases, guiding students on areas they need to improve.
Integration with GitHub Classroom Workflow: Ensure that this feature integrates seamlessly with GitHub Classroom's existing workflow. Setting up, configuring, and managing directly within the Classroom interface should be easy.
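For comparison, the action's autograding.json already carries a per-test points field, so the weighting described above can be sketched like this (test names, commands, and weights are illustrative); the gap this request addresses is that the Classroom UI still presents the result as all-or-nothing:

```json
{
  "tests": [
    {
      "name": "Requirement 1 (core, weight 60)",
      "run": "pytest tests/test_core.py",
      "comparison": "included",
      "timeout": 10,
      "points": 60
    },
    {
      "name": "Requirement 2 (edge cases, weight 40)",
      "run": "pytest tests/test_edge.py",
      "comparison": "included",
      "timeout": 10,
      "points": 40
    }
  ]
}
```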
Benefits:
Community Interest:
This feature enhancement would be valuable to many educators using GitHub Classroom for grading. Many instructors would benefit from the ability to perform granular, partial grading based on passed test cases.
Implementation Considerations:
I understand that implementing this feature involves a lot of technical considerations. I'm open to discussing potential implementation approaches and collaborating on the development process.
Conclusion:
I am excited about the possibilities this feature could bring to GitHub Classroom's auto-grading capabilities. The ability to perform partial grading based on passed test cases would significantly improve the grading experience for both instructors and students.
Thank you for considering this feature request. I look forward to the community's thoughts and discussions on this topic.
Hi,
I work with Java and Gradle. I followed the example repository.
I tried it myself, but the Action runs all 3 tests as a single test, 3 times.
So one broken test causes 0 points total.
Can you give an example that works? Maybe improve on the example repository.
My assignment:
https://classroom.github.com/classrooms/83712254/assignments/olympics
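One way to get three independent results (a sketch; the test class names and points are assumptions about this assignment) is to give each autograding entry its own Gradle test filter, so each entry runs only one test class:

```json
{
  "tests": [
    { "name": "Test 1", "run": "gradle test --tests 'OlympicsTest1'", "timeout": 10, "points": 1 },
    { "name": "Test 2", "run": "gradle test --tests 'OlympicsTest2'", "timeout": 10, "points": 1 },
    { "name": "Test 3", "run": "gradle test --tests 'OlympicsTest3'", "timeout": 10, "points": 1 }
  ]
}
```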
shell: "bash"
(this would then also work for pyastgrep etc.)
Add a README for the repo.
Right now, it doesn't seem to be possible to pass additional environment variables to one (or all) of the autograding tests. This is a bit of a limitation when we want to use the same test suites for students and for autograding, with environment parameters modulating the precise test outcome. It looks like this should be doable in the spawn function of runner.ts, but when I try to make the changes myself I run into errors. Sorry I can't provide working code.
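As a sketch of what this could look like (the per-test "env" field, the function names, and the surrounding shape are assumptions, not the action's real API), the runner could merge a test-specific environment map into the spawned process's environment:

```typescript
import {spawnSync} from 'child_process'

// Hypothetical shape: a per-test "env" field in autograding.json, merged into
// the environment of the spawned test process.
interface TestCase {
  run: string
  env?: Record<string, string>
}

function runTest(test: TestCase): string {
  const result = spawnSync(test.run, {
    shell: true,
    // inherit the runner's environment, then apply per-test overrides
    env: {...process.env, ...(test.env ?? {})},
    encoding: 'utf8',
  })
  return result.stdout.trim()
}
```

A test entry could then carry something like "env": { "GRADING_MODE": "strict" } to switch the suite into grading mode.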
GitHub Classroom assignments that have auto-grading enabled produce multiple errors that prevent students from starting their submissions.
The errors occur in all sorts of configurations, i.e. no starter code, starter code from template, starter code with source importer, I/O test, run command, etc.
There's no error message, just a never-ending progress bar.
The errors go away when assignments are created without auto-grading.
Note: I initially thought this issue was related to degraded performance in the import queue (education/classroom issues 1772 & 1778).
We want to run certain tests only when previous ones have completed; for that we need conditions and results.
Autograding allows for automatically awarding points when individual tests pass. If you create the tests and don't touch them again, it works. However: If you edit any tests, all the test points become blank. If you try resetting the points for a test, the changes are not saved.
If you have a test user and clone the repository for the assignment, you can see that the points are missing by opening .github/classroom/autograding.json: all points are set to null.
This behavior has been observed by multiple users:
Minimal example to recreate:
It would be very helpful if we could edit the autograding workflow YML and test JSON files directly. Relying on a slightly inflexible GUI in this situation is a bit odd; for one thing, it means we can't keep our grading schemas in version control (!)
Hello. I'm teaching HTML to my class and need some help setting up autograding for my assignments. I'm (very) new to GitHub and GitHub Classroom, so please be patient with me.
What I need is a simple check for erroneous tags, and I experimented with the Input/Output test as shown in the official GitHub Classroom YouTube video. I just followed it and hoped for the best (I used npm install and npm test). I have no idea what to type in the setup/run command fields, and I guess that's the reason I'm stuck. There are also some errors, as shown in the pictures.
https://drive.google.com/file/d/122ADDeQaEyHt5zTfKqaBhpR1B6mMzL0D/view?usp=share_link
https://drive.google.com/file/d/1YAUuEltic0tpC65AMnwrciDeDbm9H59D/view?usp=share_link
I have around 80 students in my laboratory, so autograding will definitely help. Thanks in advance.
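As a sketch of one possible setup (the html-validate package and the index.html path are assumptions about this assignment, not from the video), a test that flags erroneous tags could run an HTML linter instead of an input/output comparison:

```json
{
  "name": "HTML validity check",
  "setup": "npm install --no-save html-validate",
  "run": "npx html-validate index.html",
  "input": "",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 10
}
```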
Right now, I can see the points the students scored on the dashboard. But, more useful would be able to export a CSV of the grades so that I can import them into my Blackboard gradebook.
Append > /dev/null 2>&1 automatically for tests marked accordingly.
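For context, that redirection silences both stdout and stderr, leaving only the exit status for grading; a minimal illustration:

```shell
# Run a command but discard all of its output; only the exit status survives.
run_silently() {
  "$@" > /dev/null 2>&1
}

run_silently echo "noisy output"   # prints nothing
echo "exit status: $?"             # prints: exit status: 0
```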
Currently I receive the following warning when running the education/autograding@v1 action in my github classroom:
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
From the link provided, these workflows may stop working on 1st June 2023 instead of just giving warnings. From the description of Patching your actions and workflows, they indicate that the @actions/core package should be updated to v1.10.0. I am just guessing, as I was looking around to try and fix it myself, but this may be as simple as updating the 'autograding/package.json' file to list the dependency as something like:
"dependencies": {
"@actions/core": "^1.10.0",
Currently, the output from autograding is hard to find for any learner new to GitHub. This makes it hard for learners to get feedback on which parts of their assignment they haven't completed yet. A better option would be for autograding to post a grading summary on the Feedback PR.
Feature request either autograding doing this directly, or provide a feedback step output that can be passed to other steps in a workflow.
I have a proof-of-concept setup here: https://github.com/markpatterson27/PoC-Autograding-Feedback/
Add fields that would allow for a feedback message to be given to learners based on whether the test passed or failed.
So test fields would be something like
{
"name": "Test1",
"setup": "",
"run": "run Test",
"input": "",
"output": "pass",
"comparison": "included",
"timeout": 10,
"points": 10,
"passmessage": "Test 1 passed. Well done.",
"failmessage": "Test 1 failed. Try checking your variable names."
}
When an Input/Output test fails, I would like students (and me) to be able to compare the expected output to the actual output. However, the current error message is unreadable due to the many %0A (linefeed) characters where there should be newlines. I checked the autograding.json and found there were indeed \r\n sequences between each line. I tried removing the \r part and leaving only \n, but that had no effect. What I would like to see is actual line breaks for multi-line output.
Sample Output for a test named "5":
📝 5
Enter a number: SHS Spartans
1
SHS
3
SHS
5
❌ 5
::error::The output for test 5 did not match%0AExpected:%0ASHS Spartans%0A1%0A2%0ASHS%0ASpartans%0A5%0AActual:%0AEnter a number: SHS Spartans%0A1%0ASHS%0A3%0ASHS%0A5
The repository I've been using to test this is at
https://github.com/StratfordHS-CS2/lab-22-shs-spartans-daveavis
This is not a student repo, I (the teacher) accepted the assignment and was using this repo for testing if I could somehow clean up the output.
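For reference, %0A is the percent-encoding of a newline that GitHub Actions uses in ::error:: workflow commands; when inspecting logs by hand, the message can be made readable by URL-decoding it:

```python
from urllib.parse import unquote

# A fragment of the ::error:: message as it appears in the log
msg = "The output for test 5 did not match%0AExpected:%0ASHS Spartans%0A1"

# unquote() turns each %0A back into a real line break
print(unquote(msg))
```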
I just learned about the automatic grading features of GitHub Classroom and created my first test. As I understand it, an assignment can have multiple tests, and each test has an associated grade with a corresponding pytest file to check the validity of a student's submission. This works as advertised so far.
I created a first test on an assignment, worth 90 points, to create a two-argument function named 'addition' that performs addition. I created three assertions: 1) the function named 'addition' exists, 2) the function has two arguments, and 3) the function performs as advertised. The three assertions live in three separate methods so that each one is checked even if the first fails. I want to assign 30 points to each assertion. Of course, I could create two additional tests in my assignment to accomplish this, but that seems very much like overkill. Instead, I'd like to assign points (or fractions of the total) to my different assertions and, if the fractions do not add up to one, perform a rescaling.
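To make the structure concrete, here is a sketch of the three assertions as separate pytest functions (the inline addition function stands in for the student's submission, which an assignment would normally import):

```python
import inspect

# Stand-in for the student's submission; an assignment would import it instead.
def addition(a, b):
    return a + b

def test_function_exists():
    # worth 30 points: the name is defined and callable
    assert callable(addition)

def test_two_arguments():
    # worth 30 points: the function takes exactly two parameters
    assert len(inspect.signature(addition).parameters) == 2

def test_performs_addition():
    # worth 30 points: the function actually adds its arguments
    assert addition(2, 3) == 5
```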
Does GitHub classroom allow for this enhanced functionality? If not, I'd like to investigate the possibility of my own implementation by enhancing the existing software, depending on the difficulty. Could anybody provide any insight into this?
Thanks for any help!
It seems like the autograder is using a Java version older than 11. How do I specify a newer Java version?
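A common workaround (a sketch; the JDK version and distribution are assumptions about the assignment) is to select a newer JDK with the setup-java action before the autograding step in the workflow:

```yaml
steps:
  - uses: actions/checkout@v2
  - uses: actions/setup-java@v3
    with:
      distribution: 'temurin'
      java-version: '17'
  - uses: education/autograding@v1
```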
Surprisingly, this repository doesn't have a README; this is a huge barrier to usability for instructors who haven't been following community chatter about this feature, and stands in stark contrast to GitHub's general philosophy about beautiful, robust, and useful documentation. Please create a README (or GitHub Pages site?) which, at a minimum, provides documentation of the following:
Hello,
I was running some pytest files to test some Python code, and it fails when reading the GitHub Secrets from environment variables.
autograding workflow
name: GitHub Classroom Workflow
on: [push]
permissions:
  checks: write
  actions: read
  contents: read
env:
  COG_SERVICE_ENDPOINT: ${{ secrets.COG_SERVICE_ENDPOINT }}
  COG_SERVICE_KEY: ${{ secrets.COG_SERVICE_KEY }}
jobs:
  build:
    name: Autograding
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: education/autograding@v1
autograding.json
{
  "tests": [
    {
      "name": "Test 1.1",
      "setup": "sh .devcontainer/post-create.sh",
      "run": "pytest 1-rest-client.py",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": null
    }
  ]
}
post-create.sh installs all necessary Python libraries.
This is the error I get; it does not read the env variable.
Running another custom workflow (the one below), which executes my pytest file, works. What am I missing?
name: Python execution
on: [push]
permissions:
  checks: write
  actions: read
  contents: read
env:
  COG_SERVICE_ENDPOINT: ${{ secrets.COG_SERVICE_ENDPOINT }}
  COG_SERVICE_KEY: ${{ secrets.COG_SERVICE_KEY }}
jobs:
  build:
    name: Python test execution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: sh .devcontainer/post-create.sh
      - run: pytest 1-rest-client.py
I'm getting this warning message lately:
Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: education/autograding@v1. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
for additional output
For a while now I have been getting this error when launching autograding:
File not found: '/home/runner/work/_actions/education/autograding/v1/./dist/index.js'
It also happens with old repositories that worked previously.
For example, for file names / directories (applying across the intro + all tests):
${DATEI}