autograding's People

Contributors

ashishkeshan, dependabot[bot], jeffrafter, jessrudder, markpatterson27, octosteve, zrdaley


autograding's Issues

Unable to run command test for a go program

Hello,

I have created an assignment to write a Go program and created tests for it. The first test is an input/output test for STDIN and STDOUT, which checks whether the program returns the expected value. The second is a run-command test that executes echo "3 2 1" | ./opgame, a terminal command that pipes the input to the program under test.
However, when I run the script I get the following error:
[screenshot of the error]

As shown in the screenshot, it is asking for the Go build cache, GOCACHE.
Is it possible to write auto-grading for the Go programming language?

Be able to mark some tests as "Extra Credit" and these are tallied outside of the "main" tests

Currently, each autograding test "flows" into a single total: if any one test fails, the set is marked "red", while if all tests succeed the set is marked "green" (and students receive a long line of encouraging emoji). This makes a lot of sense. On the other hand, it would be nice to be able to mark one or perhaps two tests as "Extra Credit", meaning that those tests sit "outside" of the main set. For example, if a student's submission passed all of the "main" tests but not the "Extra Credit" ones, the autograder would still give them a "green" (with emoji); the "Extra Credit" tests would be tallied independently of the "main" set.

Add grade status badge

Following on from https://education.github.community/t/autograder-score-on-readme/66289

Currently grade results are buried in the action output and are difficult for students to find. The proposal is to create a status badge of the grade which can be added to the README.

Status badges can be made using the points output from the autograding action, by customising the classroom.yml file.

Would it be possible, though, for the autograding action to create a grading status badge itself? (Thereby saving educators from having to create their own classroom.yml file.)
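
Purely as an illustration (not from the original thread), a customised classroom.yml along these lines could turn the points output into a Shields.io badge URL; the step id, the output name Points, and the badge handling are assumptions about how one might wire this up:

name: GitHub Classroom Workflow

on: [push]

permissions:
  checks: write
  actions: read
  contents: read

jobs:
  build:
    name: Autograding
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: education/autograding@v1
        id: autograder
      # Turn the reported points (e.g. "8/10") into a static Shields.io badge URL.
      # A real workflow would commit this into the README or a badge endpoint file.
      - run: |
          POINTS="${{ steps.autograder.outputs.Points }}"
          echo "Badge URL: https://img.shields.io/badge/grade-${POINTS//\//%2F}-blue"

Having the action emit such a badge itself, as asked above, would remove the need for this per-assignment boilerplate.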

Can't execute Golang command

I added Input/Output tests that execute the command go run main.go. During the test, autograding shows:

build cache is required, but could not be located: GOCACHE is not defined and neither $XDG_CACHE_HOME nor $HOME are defined

I tried to set the $GOCACHE and $HOME environment variables, but it didn't work.
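
One workaround worth trying (my own suggestion, not something confirmed in this thread) is to define the cache location inline in the test's run command, which should work as long as the runner executes commands through a shell:

GOCACHE=/tmp/gocache go run main.go

If the toolchain also complains about $HOME, the same inline trick can point it at a writable directory such as /tmp.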

Setting points via other actions

This is not a feature request, merely a comment to document what seems to be an undocumented API.

I found I was able to set the "points" manually (instead of actually using education/autograding) via the following script:

      - uses: actions/github-script@v6
        with:
          script: |
            // derived from https://github.com/education/autograding/blob/1d058ce58864938499105ab5cd1c941651ce7e27/src/output.ts
            // Fetch the workflow run
            const workflowRunResponse = await github.rest.actions.getWorkflowRun({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: context.runId,
            })
            const checkSuiteUrl = workflowRunResponse.data.check_suite_url
            const checkSuiteId = parseInt(checkSuiteUrl.match(/[0-9]+$/)[0], 10)
            const checkRunsResponse = await github.rest.checks.listForSuite({
              owner: context.repo.owner,
              repo: context.repo.repo,
              check_name: 'Autograding',
              check_suite_id: checkSuiteId,
            })
            const checkRun = checkRunsResponse.data.total_count === 1 && checkRunsResponse.data.check_runs[0]
            if (!checkRun) return;
            // Update the checkrun, we'll assign the title, summary and text even though we expect
            // the title and summary to be overwritten by GitHub Actions (they are required in this call)
            // We'll also store the total in an annotation to future-proof
            const text = "Points 11/10";
            await github.rest.checks.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              check_run_id: checkRun.id,
              output: {
                title: 'Autograding',
                summary: text,
                text: text,
                annotations: [
                  {
                    // Using the `.github` path is what GitHub Actions does
                    path: '.github',
                    start_line: 1,
                    end_line: 1,
                    annotation_level: 'notice',
                    message: text,
                    title: 'Autograding complete',
                  },
                ],
              },
            })

Note this makes it possible to accumulate points in other ways, such as using them to indicate the (non-negative integer) result of an optimization problem.

Request for Support for Partial Grading based on Passed Test Cases

Issue Description:

Background:

GitHub Classroom's auto-grading feature is a valuable tool for instructors to assess student assignments automatically. However, one current limitation is the inability to perform partial grading based on individual test cases within the auto-grading suite. This functionality would significantly enhance grading flexibility and provide more accurate feedback to students.

Feature Request:

I propose implementing a feature that allows instructors to perform partial grading based on the specific test cases that a student's code passes. The auto-grading workflow currently provides an all-or-nothing assessment, which may not reflect a student's true proficiency in different aspects of the assignment.

Requested Functionality:

  1. Selective Test Execution: Introduce the ability to selectively execute specific test cases within the auto-grading suite based on their weight or importance. For instance, if an assignment has multiple requirements, the instructor should be able to assign weights to these requirements and execute tests accordingly (see the sketch after this list).

  2. Gradual Accumulation of Scores: When a test case is passed, the corresponding weight should contribute to the student's overall score. This way, a student's grade accurately reflects their proficiency in different aspects of the assignment.

  3. Customizable Feedback: Instructors should be able to provide customized feedback for each test case. Feedback could include explanations for passed and failed test cases, guiding students on areas where they need to improve.

  4. Integration with GitHub Classroom Workflow: Ensure that this feature integrates seamlessly with GitHub Classroom's existing workflow. It should be easy to set up, configure, and manage directly within the Classroom interface.
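
For reference (not part of the request itself), the existing autograding.json format already attaches a points value to each test, which is the natural hook for the weighting described above; a minimal illustrative sketch splitting an assignment into two weighted requirements, with hypothetical test files:

{
  "tests": [
    {
      "name": "Requirement 1 (40%)",
      "setup": "",
      "run": "pytest tests/test_requirement1.py",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 40
    },
    {
      "name": "Requirement 2 (60%)",
      "setup": "",
      "run": "pytest tests/test_requirement2.py",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 60
    }
  ]
}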

Benefits:

  • Enhanced Grading Accuracy: Partial grading allows for a more nuanced assessment of student submissions, accurately reflecting their skills and understanding.
  • Focused Feedback: Students can receive specific feedback on the aspects of the assignment they need to improve, fostering a more targeted learning process.
  • Flexibility: Instructors can tailor grading criteria to the assignment's requirements, making grading fair and relevant.

Community Interest:

This feature enhancement would be valuable to many educators using GitHub Classroom for grading. Many instructors would benefit from the ability to perform granular, partial grading based on passed test cases.

Implementation Considerations:

I understand that implementing this feature involves a lot of technical considerations. I'm open to discussing potential implementation approaches and collaborating on the development process.

Conclusion:

I am excited about the possibilities this feature could bring to GitHub Classroom's auto-grading capabilities. The ability to perform partial grading based on passed test cases would significantly improve the grading experience for both instructors and students.

Thank you for considering this feature request. I look forward to the community's thoughts and discussions on this topic.

give child processes access to all of process.env

Right now, it doesn't seem possible to pass additional environment variables to one (or all) of the autograding tests. This is a limitation when we want to use the same test suites for students and for autograding, with environment parameters modulating the precise test outcome. It looks like this should be doable in the spawn function of runner.ts, but when I try to make the changes myself I run into errors. Sorry I can't provide working code.
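
For reference, the change being asked for would most likely live in the options passed to spawn in runner.ts; a self-contained sketch of the idea (the helper name and the exact option shape are assumptions, not the actual runner.ts code):

import {spawn, ChildProcess} from 'child_process'

// Spawn a test command with the full parent environment forwarded, plus any
// per-test extras, instead of a hand-picked subset of variables.
function spawnWithFullEnv(command: string, cwd: string, extra: NodeJS.ProcessEnv = {}): ChildProcess {
  return spawn(command, {
    cwd,
    shell: true,
    env: {
      ...process.env, // everything defined in the workflow (secrets, PATH, HOME, ...)
      ...extra,       // optional per-test overrides
    },
  })
}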

Enabling autograding in GH Classroom halts repository creation

GitHub Classroom assignments that have auto-grading enabled produce multiple errors that prevent students from starting their submissions.

The errors occur in all sorts of configurations, e.g. no starter code, starter code from a template, starter code with the source importer, I/O test, run command, etc.

There's no error message, only a never-ending progress bar.

The error is solved when assignments are created without auto grading.

How to reproduce the error?

  1. Create a new assignment with i/o test or run command grading method.
  2. Use the assignment link to start a new submission.
  3. Watch the process get stuck in either the "Creating repository" or "Importing starter code" step.

Note: I initially thought this issue was related to degraded performance in the import queue (education/classroom issues: 1772 & 1778).

Pre-conditions for tests

We want to run certain tests only when previous tests have completed; for that we need conditions and results.
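
Purely as an illustration of the idea (this field does not exist today), such a dependency could be expressed per test in autograding.json, with a test running only if the named earlier tests have passed:

{
  "name": "Integration test",
  "setup": "",
  "run": "pytest tests/test_integration.py",
  "input": "",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 20,
  "requires": ["Unit tests", "Linting"]
}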

Autograding with points stops working if any tests are edited

Autograding allows for automatically awarding points when individual tests pass. If you create the tests and don't touch them again, it works. However, if you edit any test, all of the test points become blank. If you try resetting the points for a test, the changes are not saved.

If you have a test user and clone the repository for the assignment, you can see that the points are missing by opening .github/classroom/autograding.json. All points are set to null.
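
For concreteness, an affected entry ends up looking roughly like this after editing (the pytest command mirrors the minimal example below; the rest is illustrative):

{
  "tests": [
    {
      "name": "pytest",
      "setup": "",
      "run": "pytest test.py",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": null
    }
  ]
}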

This behavior has been observed by multiple users.

Minimal example to recreate:

  • Create a new assignment
  • Create a test, e.g. based on pytest (run pytest test.py) and set a number of points for the test (e.g. 5)
  • Save the test and the assignment
  • Re-open the assignment and edit the test - the point field is now blank

Give instructors access to source code of autograding actions

It would be very helpful if we could edit the autograding workflow yml and test json files directly. Relying on a slightly inflexible GUI in this situation is a bit odd; for one thing, it means we can't keep our grading schemas in version control (!)

Need help with autograding HTML files.

Hello. I'm teaching HTML to my class and need some help setting up autograding for my assignments. I'm (very) new to GitHub and GitHub Classroom, so please be patient with me.

What I need is a simple check for erroneous tags, and I experimented with the Input/Output test as shown in the official GitHub Classroom YouTube video. I just followed it and hoped for the best (I used npm install and npm test). I have no idea what to type in the setup/run command fields, and I guess that's why I'm stuck. There are also some errors, as indicated in the screenshots below.

https://drive.google.com/file/d/122ADDeQaEyHt5zTfKqaBhpR1B6mMzL0D/view?usp=share_link
https://drive.google.com/file/d/1YAUuEltic0tpC65AMnwrciDeDbm9H59D/view?usp=share_link

I have around 80 students in my laboratory, so autograding will definitely help. Thanks in advance.

Export autograding grades

Right now, I can see the points the students scored on the dashboard. But it would be more useful to be able to export a CSV of the grades so that I can import them into my Blackboard gradebook.

set-output command is deprecated warning when running education/autograding action

Currently I receive the following warning when running the education/autograding@v1 action in my GitHub Classroom:

Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

From the link provided, these workflows may stop working on 1st June 2023 instead of just giving warnings. The "Patching your actions and workflows" section indicates that the @actions/core package should be updated to v1.10.0. I am just guessing, as I was looking around to try and fix it myself, but this may be as simple as updating the 'autograding/package.json' file to list the dependency as something like:

  "dependencies": {
    "@actions/core": "^1.10.0",

Auto-grading Support for Scala

Hi team,

In Grading and feedback, there is currently no option for running Scala tests. Kindly include a feature for running tests in Scala.

[screenshot of the grading options]

Thanks !
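
Until a dedicated preset exists, a run-command test can usually drive Scala through sbt. This is a suggested workaround rather than something stated in the issue, and it assumes the starter repository is an sbt project and that sbt is available on the runner:

{
  "name": "Scala tests",
  "setup": "",
  "run": "sbt test",
  "input": "",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 10
}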

Add an option to send autograding output to the Feedback PR as a comment

Currently, the output from autograding is hard to find for any learner new to GitHub. This makes it hard for learners to get feedback on which parts of their assignment they haven't completed yet. A better option would be for autograding to post a grading summary on the Feedback PR.

Feature request: either have autograding do this directly, or provide a feedback step output that can be passed to other steps in a workflow.

I have a proof-of-concept setup here: https://github.com/markpatterson27/PoC-Autograding-Feedback/
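
As one possible shape for the step-output variant, the grade could be forwarded to a comment with actions/github-script; the Points output name and the assumption that the Feedback PR is pull request number 1 are both guesses, and the workflow would also need issues: write (or pull-requests: write) permission for the comment call to succeed:

    steps:
      - uses: actions/checkout@v2
      - uses: education/autograding@v1
        id: autograder
      - uses: actions/github-script@v6
        with:
          script: |
            // Post the grade as a comment on the Feedback PR (assumed to be number 1)
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: 1,
              body: 'Autograding result: ${{ steps.autograder.outputs.Points }}'
            })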

Add an option for pass/fail message

Add fields that would allow for a feedback message to be given to learners based on whether the test passed or failed.

So the test fields would look something like:

{
    "name": "Test1",
    "setup": "",
    "run": "run Test",
    "input": "",
    "output": "pass",
    "comparison": "included",
    "timeout": 10,
    "points": 10,
    "passmessage": "Test 1 passed. Well done.",
    "failmessage": "Test 1 failed. Try checking your variable names."
}

Expected vs. Actual output for failed tests unreadable due to %0A instead of line breaks

When an Input/Output test fails, I would like students (and me) to be able to compare the expected output to the actual output. However, the current error message is unreadable due to the many %0A (linefeed) characters where there should be newlines. I checked the autograding.json and found there were indeed \r\n sequences between each line. I tried removing the \r part and leaving only \n, but that had no effect. What I would like to see is actual line breaks for multi-line output.

Sample Output for a test named "5":

📝 5


  Enter a number: SHS Spartans
  1
  SHS
  3
  SHS
  5
  
❌ 5
::error::The output for test 5 did not match%0AExpected:%0ASHS Spartans%0A1%0A2%0ASHS%0ASpartans%0A5%0AActual:%0AEnter a number: SHS Spartans%0A1%0ASHS%0A3%0ASHS%0A5
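
Decoding the %0A sequences, the same message would read:

The output for test 5 did not match
Expected:
SHS Spartans
1
2
SHS
Spartans
5
Actual:
Enter a number: SHS Spartans
1
SHS
3
SHS
5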

The repository I've been using to test this is at
https://github.com/StratfordHS-CS2/lab-22-shs-spartans-daveavis
This is not a student repo; I (the teacher) accepted the assignment and was using this repo to test whether I could somehow clean up the output.

Achieving finer granularity

I just learned about the automatic grading features of GitHub Classroom and created my first test. As I understand it, an assignment can have multiple tests, and each test has an associated grade with a corresponding pytest file to check the validity of a student's submission. This works as advertised so far.

I created a first test on an assignment, worth 90 points, asking students to create a two-argument function named 'addition' that performs addition. I created three assertions: 1) the function named 'addition' exists, 2) the function has two arguments, and 3) the function performs as advertised. These three assertions are in three separate methods so that each one is checked on its own and the whole test does not fail outright if the first assertion fails. I want to assign 30 points to each assertion. Of course, I could create two additional tests in my assignment to accomplish this, but that seems very much like overkill. Instead, I'd like to assign points (or fractions of the total) to my different assertions, and if the fractions do not add up to one, perform a rescaling.
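
A sketch of how those three assertions could live in separate pytest functions, so that each one could in principle carry its own weight (the module name submission and the file name are assumptions):

# test_addition.py -- three independent checks, one per assertion
import inspect

import submission  # the student's module; the name is an assumption


def test_addition_exists():
    # 1) a function named 'addition' exists
    assert hasattr(submission, "addition")


def test_addition_takes_two_arguments():
    # 2) it accepts exactly two arguments
    assert len(inspect.signature(submission.addition).parameters) == 2


def test_addition_adds():
    # 3) it actually performs addition
    assert submission.addition(2, 3) == 5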

Does GitHub Classroom allow for this enhanced functionality? If not, I'd like to investigate the possibility of my own implementation by enhancing the existing software, depending on the difficulty. Could anybody provide any insight into this?

Thanks for any help!

Please provide basic usage instructions for this action

Surprisingly, this repository doesn't have a README; this is a huge barrier to usability for instructors who haven't been following community chatter about this feature, and stands in stark contrast to GitHub's general philosophy about beautiful, robust, and useful documentation. Please create a README (or GitHub Pages site?) which, at a minimum, provides documentation of the following:

  • How this action works in tandem with the autograding config section of the Classroom assignment template UI
  • Configuring test cases in the Classroom environment (if this is how the tests should be configured?)
  • Setting up this action to run in student repositories

Autograding action not able to read GitHub Secrets (Actions)

Hello,

I was running some pytest tests against some Python files, and they fail when reading the GitHub Secrets from environment variables.

autograding workflow


name: GitHub Classroom Workflow

on: [push]

permissions:
  checks: write
  actions: read
  contents: read
  
env:
  COG_SERVICE_ENDPOINT: ${{ secrets.COG_SERVICE_ENDPOINT }}
  COG_SERVICE_KEY: ${{ secrets.COG_SERVICE_KEY }}

jobs:
  build:
    name: Autograding
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: education/autograding@v1

autograding.json

{
  "tests": [
    {
      "name": "Test 1.1",
      "setup": "sh .devcontainer/post-create.sh",
      "run": "pytest 1-rest-client.py",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": null
    }
  ]
}

post-create.sh installs all necessary Python libraries.

This is the error I get; it does not read the env variable:
[screenshot of the error]

Running another custom workflow (the one below), which executes my pytest file, works. What am I missing?

name: Python execution
on: [push]

permissions:
  checks: write
  actions: read
  contents: read

env:
  COG_SERVICE_ENDPOINT: ${{ secrets.COG_SERVICE_ENDPOINT }}
  COG_SERVICE_KEY: ${{ secrets.COG_SERVICE_KEY }}

jobs:
  build:
    name: Python test execution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: sh .devcontainer/post-create.sh
      - run: pytest 1-rest-client.py
