exercism / chapel

Exercism exercises in Chapel.

Home Page: https://exercism.org/tracks/chapel

License: MIT License

Shell 3.42% PowerShell 2.03% Chapel 94.55%
exercism-track community-contributions-paused wip-track

chapel's Introduction

Exercism Chapel Track

Exercism exercises in Chapel.

Testing

To test the exercises, run ./bin/test. This command will iterate over all exercises and check to see if their exemplar/example implementation passes all the tests.

Track linting

configlet is an Exercism-wide tool for working with tracks. You can download it by running:

$ ./bin/fetch-configlet

Run its lint command to verify that every exercise has the necessary files and that the config files are correct:

$ ./bin/configlet lint

The lint command is under development.
Please re-run this command regularly to see if your track passes the latest linting rules.

Basic linting finished successfully:
- config.json exists and is valid JSON
- config.json has these valid fields:
    language, slug, active, blurb, version, status, online_editor, key_features, tags
- Every concept has the required .md files
- Every concept has a valid links.json file
- Every concept has a valid .meta/config.json file
- Every concept exercise has the required .md files
- Every concept exercise has a valid .meta/config.json file
- Every practice exercise has the required .md files
- Every practice exercise has a valid .meta/config.json file
- Required track docs are present
- Required shared exercise docs are present

chapel's People

Contributors: erikschierboom, exercism-bot, kytrinyx, lucaferranti

chapel's Issues

circular-buffer: What should a typical test look like?

Here's a sample entry in the canonical data for the circular-buffer exercise:

    {
      "uuid": "547d192c-bbf0-4369-b8fa-fc37e71f2393",
      "description": "a read frees up capacity for another write",
      "property": "run",
      "input": {
        "capacity": 1,
        "operations": [
          {
            "operation": "write",
            "item": 1,
            "should_succeed": true
          },
          {
            "operation": "read",
            "should_succeed": true,
            "expected": 1
          },
          {
            "operation": "write",
            "item": 2,
            "should_succeed": true
          },
          {
            "operation": "read",
            "should_succeed": true,
            "expected": 2
          }
        ]
      },
      "expected": {}
    }

In case it's useful, here are a couple of implementations in other tracks:

The corresponding test for this from the C# track looks like this:

    public void A_read_frees_up_capacity_for_another_write()
    {
        var buffer = new CircularBuffer<int>(capacity: 1);
        buffer.Write(1);
        Assert.Equal(1, buffer.Read());
        buffer.Write(2);
        Assert.Equal(2, buffer.Read());
    }

In Java it's like this:

    public void readFreesUpSpaceForWrite() throws BufferIOException {
        CircularBuffer<Integer> buffer = new CircularBuffer<>(1);

        buffer.write(1);
        assertThat(buffer.read()).isEqualTo(1);
        buffer.write(2);
        assertThat(buffer.read()).isEqualTo(2);
    }

The exercise doesn't mandate any particular implementation.
What would the test look like in Chapel if the solution is idiomatic?

If I have an idea what it needs to look like I can generate the exercise stub.
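To make the question concrete, here is one possible shape following the UnitTest pattern already used on this track. Everything here is hypothetical: the CircularBuffer type, its initializer, and the write/read methods are assumptions, not a settled API.

```chapel
use UnitTest;

// Hypothetical sketch only -- the CircularBuffer type and its
// write/read methods are assumptions, not a settled design.
proc testReadFreesUpCapacityForAnotherWrite(test: borrowed Test) throws {
  var buffer = new CircularBuffer(int, capacity=1);
  buffer.write(1);
  test.assertEqual(buffer.read(), 1);
  buffer.write(2);
  test.assertEqual(buffer.read(), 2);
}

UnitTest.main();
```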

fix broken tests

Some tests have gotten out of sync and fail with the new version of Chapel:

  • allergies
  • dominoes
  • high-score
  • resistors-colors
  • yacht
  • deprecation warnings

document workflow to contribute exercises

It would be good to have a CONTRIBUTING.md documenting the workflow. Writing it down would also help me clarify the steps for myself.

As a first step, let's focus on practice exercises. My understanding so far:

  1. Choose an exercise and update config.json, adding at least the fields name, difficulty, stub. (I am a little surprised configlet cannot do this for me.)
  2. Run bin/configlet uuid and add the generated UUID to the uuid field in config.json.
  3. Run bin/configlet sync --update --docs --metadata --tests --exercise $(EXERCISE-STUB) to initialize the folder.
  4. Initialize the folder. This could be done with mason init. Then add test/tests.chpl (would be nice to automate this with the next step).
  5. Looking at the problem specifications, add tests to the tests folder.
  6. Write a reference solution in .meta/reference.chpl.
  7. From the main directory, run bin/test to check that the reference solution passes the tests.

Setup Continuous Integration

Initial draft (assumes I'm currently in the exercises folder; can easily be adjusted):

mkdir -p tmp-exercises

for exercise in practice/*/ concept/*/; do
    if [ -d "$exercise" ]; then
        echo "Testing $exercise"
        mkdir -p "tmp-exercises/$exercise"
        cp -a "$exercise/." "tmp-exercises/$exercise"
        (
            cd "tmp-exercises/$exercise" || exit 1
            # overwrite the stub with the example solution before testing
            cat .meta/example.chpl > src/*.chpl
            mason test
        )
    fi
done

rm -rf tmp-exercises

TODO

  • open a draft PR and check that it works (CI fails when it should fail and passes when it should pass)
  • (optional) count in the script how many tests pass/fail and print a summary at the end

How to test exercises that expect an error or exception?

The nucleotide exercise has one test that asserts that an error occurs. What would a test like that look like in Chapel?

Here's what the default test would look like:

proc testSomething(test : borrowed Test) throws {
  test.assertEqual(nucleotideCounts("GGGGGGG"), {"A"=>0, "C"=>0, "G"=>7, "T"=>0});
}

A, C, G, and T are considered valid nucleotides, and the test for the error case passes in a strand of AGXXACT.

The problem specification says {"error"=>"Invalid nucleotide in strand"}. Some tracks specify the exact error text as part of the test, others test only for the right type of error, leaving the text up to the individual.
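Since (as far as I know) Chapel's UnitTest module has no assertRaises-style helper, one option is a manual try/catch. This is only a sketch: the nucleotideCounts signature and the choice to test just "some error was thrown" (rather than a specific error type or message) are assumptions.

```chapel
use UnitTest;

// Sketch of an error-case test: fail if no error is thrown,
// pass if any error is caught. Checking for a specific error
// type or message would need a catch clause with a type filter.
proc testStrandWithInvalidNucleotides(test: borrowed Test) throws {
  try {
    nucleotideCounts("AGXXACT");
    test.assertTrue(false); // should be unreachable
  } catch {
    // expected: an error was thrown for the invalid strand
  }
}

UnitTest.main();
```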

Protected branch settings on this repository

I updated the settings as follows (everything below it did not get a checkbox).

I couldn't find a setting to enforce squash-and-merge; do you know if that exists and if so what it is called?

(screenshot: protected branch settings, 2022-10-10)

Should we implement skipping?

In #12 we started discussing whether or not we should implement some way of incrementally enabling tests as people solve an exercise. This helps guide people to thinking about just the next useful step rather than overwhelming them with all the failures at once.

@lucaferranti said:

I think this could be implemented using Chapel skipIf and dependsOn. I can try drafting a quick prototype.

And also asked a great question about how this works in the online editor. (@ErikSchierboom -- do you know what we currently do for other tracks?)
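As a starting point for that prototype, here is a rough sketch of how dependsOn could gate tests so learners see one failure at a time. The test bodies are placeholders, and whether this interacts sensibly with the online editor is exactly the open question above.

```chapel
use UnitTest;

// Placeholder tests: testTwo declares a dependency on testOne,
// so it is skipped until testOne passes.
proc testOne(test: borrowed Test) throws {
  test.assertTrue(true);
}

proc testTwo(test: borrowed Test) throws {
  test.dependsOn(testOne); // runs only after testOne has passed
  test.assertTrue(true);
}

UnitTest.main();
```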

revisit difficulty levels

So far I assumed there were only 3 difficulty levels (1=easy, 2=medium, 3=hard), but now I see in the Exercism docs that the levels actually go from 1 to 10 (1-3 easy, 4-7 medium, 8-10 hard).

Is the difficulty level for each exercise standardized somewhere, or can each track decide freely? If the latter, is there much variation between languages, and where do other tracks take inspiration from when deciding difficulty levels?

Launch tracking

Overall documentation for building an Exercism track lives at https://exercism.org/docs/building/tracks/new

This issue helps keep track of the tasks you're working on towards launching this track.

The next steps are:

Once you've finished a task, you can check it off in this list.

Questions

Please ask if you have any questions or if anything is confusing!

fix syntax highlight

I was looking at the track homepage (currently only visible to maintainers) and I noticed a couple of issues:

  • the logo is not displayed (which I guess is understandable, since I don't remember uploading it anywhere)

  • the code snippet does not have syntax highlighting (Chapel should be supported by highlight.js)

Any pointers on how to debug or fix this are very welcome.

Implement Hello World exercise

I've added some notes for you to follow once you're ready to implement the first exercise (https://exercism.org/docs/building/tracks/new/add-first-exercise)

If these notes work and are helpful, let me know and I'll update the official documentation with them. If not please holler so we can figure out what's missing!

You can get the basics for the exercise in place by doing the following:

bin/fetch-configlet

Then you'll need to add some bits and pieces to the config.json in the "exercises.practice" array:


      {
        "slug": "hello-world",
        "name": "Hello World",
        "uuid": "",
        "practices": [],
        "prerequisites": [],
        "difficulty": 1,
        "topics": []
      }

And then add the output from the following command to the uuid field for hello-world:

bin/configlet uuid

Then run the sync command:

bin/configlet sync --update --docs --metadata --tests

Next, add your GitHub username to "authors" in exercises/practice/hello-world/.meta/config.json (no "@" or anything).

Next you'll need to create the following in the exercises/practice/hello-world directory:

  • a test suite with a single test ("the tests")
  • a full solution with the text "Goodbye, Mars!" instead of the text "Hello, World!", causing the test to fail ("the stub")
  • a full, correct solution ("the example solution")

To do so you'll have to decide what the file paths are going to look like for the exercise. Ideally, you'll have filenames and paths that are idiomatic for the language. That said, if there's not a fairly strict requirement/expectation, prefer a shallower file structure (maybe even no subdirectories).

The example solution will go in the .meta/ directory of the exercise, and you can name it anything. example is a common choice for the basename.

Once you have created the files you'll need to update the exercises/practice/hello-world/.meta/config.json file to have those paths.

Series: what would a sample test look like?

Here is a sample entry in the canonical data for the series problem.

{
  "uuid": "19bbea47-c987-4e11-a7d1-e103442adf86",
  "description": "slices of two overlap",
  "property": "slices",
  "input": {
    "series": "9142",
    "sliceLength": 2
  },
  "expected": [
    "91",
    "14",
    "42"
  ]
}

In Ruby the test looks like this:

  def test_slices_of_two_overlap
    series = Series.new("9142")
    assert_equal ["91", "14", "42"], series.slices(2)
  end

What would the test look like in Chapel?

If I have an idea what it needs to look like I can generate the exercise stub.
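For concreteness, one possible shape following the track's UnitTest pattern. The slices function name, its (series, sliceLength) signature, and returning an array of strings are all assumptions.

```chapel
use UnitTest;

// Hypothetical sketch -- slices() is an assumed name and signature.
proc testSlicesOfTwoOverlap(test: borrowed Test) throws {
  test.assertEqual(slices("9142", 2), ["91", "14", "42"]);
}

UnitTest.main();
```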

update meeting model solution after 2.1 release

Indexing week days from 0 is deprecated; it still works for now with a deprecation warning, but after 2.1 it will be disabled and indexing will start from 1. This is a note to myself to update the exercise solution after the new release.

Auto-formatting tool?

Is there an autoformatting tool for Chapel?

I'm wondering if there's something I could use to make sure that test suites that are generated follow standard guidelines or practices.

Meetup: what would a sample test look like?

Here is a sample entry in the canonical data for the meetup problem.

{
  "uuid": "b08a051a-2c80-445b-9b0e-524171a166d1",
  "description": "when third Wednesday is some day in the middle of the third week",
  "property": "meetup",
  "input": {
    "year": 2013,
    "month": 7,
    "week": "third",
    "dayofweek": "Wednesday"
  },
  "expected": "2013-07-17"
}

The corresponding test for this data in Ruby is:

  def test_when_third_wednesday_is_some_day_in_the_middle_of_the_third_week
    meetup = Meetup.new(7, 2013).day(:wednesday, :third)
    assert_equal Date.parse("2013-07-17"), meetup
  end

In C# it's like this:

    public void Third_wednesday_of_july_2013()
    {
        var sut = new Meetup(7, 2013);
        var expected = new DateTime(2013, 7, 17);
        Assert.Equal(expected, sut.Day(DayOfWeek.Wednesday, Schedule.Third));
    }

The exercise doesn't mandate any particular implementation.
What would the test look like in Chapel if the solution is idiomatic?

If I have an idea what it needs to look like I can generate the exercise stub.
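One possible shape, again following the UnitTest pattern used elsewhere on the track. The meetup function name, its parameter order, and returning the date as a "YYYY-MM-DD" string are assumptions; returning a date value from Chapel's Time module would be another option.

```chapel
use UnitTest;

// Hypothetical sketch -- meetup() is an assumed name and signature;
// the expected value mirrors the canonical data's string form.
proc testThirdWednesdayInMiddleOfThirdWeek(test: borrowed Test) throws {
  test.assertEqual(meetup(2013, 7, "third", "Wednesday"), "2013-07-17");
}

UnitTest.main();
```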

New exercise: Leap

I'd like to generate this exercise, which is a very simple exercise about boolean logic.

Doing a simple search/replace into the HelloWorld test, I get the following:

proc test_something(test : borrowed Test) throws {
  test.assertEqual(is_leap_year(2015), false);
}

Do you use assertEqual to test true and false, or is there a different test method?

What would the idiomatic name be for the function under test? (here: is_leap_year)

Are there any other changes you'd want to make?
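For reference, one possible shape, assuming camelCase naming (isLeapYear) is idiomatic for Chapel; the UnitTest module does provide assertTrue/assertFalse for boolean results, so no assertEqual against a literal is needed.

```chapel
use UnitTest;

// Sketch assuming a camelCase name for the function under test;
// assertFalse avoids comparing against a boolean literal.
proc testYearNotDivisibleBy4CommonYear(test: borrowed Test) throws {
  test.assertFalse(isLeapYear(2015));
}

UnitTest.main();
```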

exercises instruction

Currently the exercise stubs only have the module definition

module MyExercise {
  // write your solution here
}

and students would have to reverse-engineer the tests to figure out exactly what to implement. This is fine for more advanced users/exercises, but for beginners it can add pointless extra cognitive load.

A few options:

  • add the function definition to the template, e.g.
proc myFunction(a: int, b:string) {
  // write your code here
}

This would be fine for the very first exercises, as it also shows students how to define a function in Chapel.

  • Have an instructions.append.md file in each exercise saying which functions should be implemented (or write this directly in instructions.md, but it's maybe better not to change that file if it's automatically generated).

I would maybe lean towards the second option. Comments or suggestions on how this is handled in other tracks?
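For the second option, here is a minimal sketch of what an exercise's instructions.append.md could contain. The wording and the function shown are made up for illustration only.

```markdown
# Implementation notes

Implement the following function inside the `MyExercise` module:

    proc myFunction(a: int, b: string): string

The test suite in `test/tests.chpl` calls it directly.
```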
