
karolsluszniak / ex_check


One task to efficiently run all code analysis & testing tools in an Elixir project. Born out of 💜 to Elixir and pragmatism.

Home Page: https://hex.pm/packages/ex_check

License: MIT License

Elixir 100.00%
elixir pragmatic code-analysis command-line-tool clean-code continuous-integration

ex_check's People

Contributors

asummers, bamorim, dvic, gerbal, karolsluszniak, kianmeng, srcrip, ypconstante



ex_check's Issues

Feature request – retry everything

Currently the retry option will only run what failed, which is useful for random failures.

But I have another use case. I follow the following coding process:

  • write a test
  • implement a solution quickly until the test is green
  • refactor a bit
  • loop

I never check formatting, credo issues, or dialyzer problems on every loop; that would be too slow. I like to do that every N loops, depending on what I am implementing.

So generally I run mix check before committing, once everything is green and I have a complete thing working. So in general I'll have several checks that fail; most of the time the formatter, credo and some compiler warnings.

So I would like mix check to record in its state that 3 checks are failing. Then, when I run it again, it will only retry what failed (the current behaviour) until all the previously failing checks are green. At that point I would like mix check to re-run everything (except the checks that just went green in the same CLI call) to be sure that I did not make another check fail while fixing one of the 3. If everything is green, the state is reset and the next run starts everything anew.

An example of successive calls to mix check:

  1. runs everything; credo and formatter are red.
  2. runs only credo and formatter, still red.
  3. runs only credo and formatter; now formatter is green and credo is still red.
  4. runs only credo; credo is green, so it also runs everything except credo (because that happens in the same command call). dialyzer is now red, oops.
  5. runs only dialyzer; dialyzer is green, so it runs everything else. Everything is green.
  6. runs everything anew…

What do you think?

To not mess with people's CI configs, this could be another option, incremental_retry, that turns on this behaviour. If retry is also true, incremental_retry would take precedence.
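A sketch of how the proposed option could look in `.check.exs` (note that `incremental_retry` is hypothetical and does not exist in ex_check today):

```elixir
[
  # Existing behaviour: re-run only the checks that failed last time.
  retry: true,

  # Hypothetical option from this proposal: once the previously failing
  # checks go green, re-run the remaining checks before resetting state.
  incremental_retry: true,

  tools: [
    # tool definitions as usual
  ]
]
```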

Running part of an umbrella app in parallel

Hello again! I have been getting a great deal of value out of this tool, thank you!

I have another hopefully interesting puzzle. I am working in an umbrella app where global state is an issue, and have been running the tests sequentially rather than in parallel as I work to fix them over time. I was wondering this morning whether it might be possible to run the badly behaving apps sequentially and the rest in parallel.

apps = Mix.Project.apps_paths() |> Map.keys
non_parallel_apps = [:bad_app1, :bad_app2, :bad_app3]
[
  tools: [
    {:formatter, ["mix", "format"]},
    {:coveralls, umbrella: [parallel: true, apps: apps -- non_parallel_apps], env: %{"MIX_ENV" => "test"}, command: "mix cmd mix coveralls.html"},
    {:coveralls_non_parallel, umbrella: [parallel: false, apps: non_parallel_apps], env: %{"MIX_ENV" => "test"}, command: "mix cmd mix coveralls.html", deps: [:coveralls]},
    {:ex_unit, enabled: false},
    {:dialyzer, enabled: false},
    {:sobelow, enabled: false},
    {:npm_test, enabled: false}
  ]
]

The trouble I am encountering is that deps seems to be applied to the apps within the umbrella task rather than to the umbrella task itself:

   coveralls_non_parallel in bad_app1 skipped due to unsatisfied dependency coveralls in bad_app1
   coveralls_non_parallel in bad_app2 skipped due to unsatisfied dependency coveralls in bad_app2
   ...etc

Not sure if I am missing something, do you have a direction that you could point me in?

Thank you again!

Executive Summary

My use case is that I would like to lift the end summary of which checks ran/passed/failed and post it as a comment back to the PR, to alleviate the need for developers to log into CI to see what failed. Because I run my CI checks centrally for many projects, I worry that one or more projects will be in a state where a skipped check (e.g. from not having credo installed) will not be apparent to users.

Would you be interested in something like an e.g. --summary-to-file opt that allowed me to redirect this to a file/env var, or some other mechanism by which I can export this data somewhere?
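Until something like the proposed option exists, one workaround is to capture the output in CI and filter the summary lines. A minimal sketch (reading from `check.log` is an assumption about how the output was captured, e.g. via `mix check | tee check.log`):

```elixir
# Post-process captured `mix check` output, keeping only the
# failed/skipped lines for a PR comment.
"check.log"
|> File.read!()
|> String.split("\n")
|> Enum.filter(&(&1 =~ ~r/error|skipped/))
|> Enum.join("\n")
|> IO.puts()
```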

Deps Check Unused Check

Elixir 1.10 added a new useful check. While this can obviously be accomplished with the addition of

    {:unused_deps, command: "mix deps.unlock --check-unused"},

in .check.exs, what is the plan for doing checks that are built into the language based on the language version? Would this be something you'd want to see in ex_check?
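Since `.check.exs` is evaluated as plain Elixir, one way to gate such a check on the language version is to build the tool list conditionally. A sketch (not an official ex_check feature):

```elixir
# Enable the unused-deps check only on Elixir 1.10+, where
# `mix deps.unlock --check-unused` exists.
unused_deps_tools =
  if Version.match?(System.version(), ">= 1.10.0") do
    [{:unused_deps, command: "mix deps.unlock --check-unused"}]
  else
    []
  end

[
  tools: unused_deps_tools ++ [
    # other tools as usual
  ]
]
```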

halt_exit_status is no longer a valid CLI argument.

The above warning is displayed when running mix check. The --halt-exit-status option that is passed here has been deprecated in this pull request. Assuming the user is running at least dialyxir 1.0.0-rc7, released September 2019, the warning will be shown. It's easy to fix this by overriding the tool definition in the project .check.exs file, but it would be nice to have the correct up-to-date behaviour by default.
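Overriding the curated tool in `.check.exs` to drop the deprecated flag might look like this (a sketch; the rest of the default dialyzer command may differ between ex_check versions):

```elixir
[
  tools: [
    # Override the curated dialyzer tool so that the deprecated
    # --halt-exit-status flag is no longer passed to dialyxir.
    {:dialyzer, command: "mix dialyzer"}
  ]
]
```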

Order of test runs in umbrella app

Hi,

Thank you for publishing this library! It's a super convenient library to have in our stack, and we were quite happy when we were able to remove extra logic from our Makefiles/build pipelines for things like dialyxir and credo ❤️

We've been having some issues when trying to upgrade to the latest version (0.11) and the new parallel testing of umbrella apps. Unfortunately, some of our service tests seem to depend on each other (we commonly have a :core app for shared database-related functionality and other api and ui apps depending on core); ideally, our tests would not have those implicit dependencies, but for now this is what we have to live with 😉

I already tried to disable the parallel testing by setting parallel: false in the umbrella config, but the order of apps still seems to be different to what it was in 0.10.

  • Previously, tests for apps seem to have been run in dependency order, i.e. in our example: :core, :api, :ui.
  • Now they always seem to be run in alphabetical order, i.e. :api, :core, :ui.

Is there any way for us to configure the order of apps in tests, or define :core as one that always needs to run first?
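One possible workaround sketch, assuming the `umbrella: [apps: ...]` and `deps:` tool options behave as in the coveralls example from the umbrella issue above (the tool names `ex_unit_core` and `ex_unit_rest` are made up for illustration, and the per-app dependency resolution reported in that issue may apply here too):

```elixir
[
  tools: [
    # Disable the curated ex_unit tool...
    {:ex_unit, enabled: false},
    # ...run :core tests first...
    {:ex_unit_core, umbrella: [apps: [:core]],
     env: %{"MIX_ENV" => "test"}, command: "mix cmd mix test"},
    # ...then the remaining apps, gated on the core run via deps:.
    {:ex_unit_rest, umbrella: [apps: [:api, :ui]],
     env: %{"MIX_ENV" => "test"}, command: "mix cmd mix test",
     deps: [:ex_unit_core]}
  ]
]
```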

Feature request: add JS tests

It would be amazing if mix check would also run the tests of the JavaScript part of a Phoenix app. Would you love to have a PR for this?
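A custom tool along the lines of the arbitrary-command example quoted later on this page could already cover this, assuming the JS suite lives under `assets/`:

```elixir
[
  tools: [
    # Run the JavaScript tests of the Phoenix app's assets directory.
    {:npm_test, command: "npm test", cd: "assets"}
  ]
]
```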

Test and formatter not running with Elixir v1.16.0-rc.0

I'm testing Elixir v1.16, and ex_check is not running test and formatter steps, saying it's missing files

.check.exs

[
  fix: true,
  retry: false,
  tools: [
    {:doctor, false},
    {:ex_doc, false},
    {:mix_audit, false},
  ]
]

mix check output

=> finished in 0:38

 ✓ compiler success in 0:16
 ✓ credo success in 0:22
 ✓ dialyzer success in 0:18
 ✓ sobelow success in 0:11
 ✓ unused_deps fix success in 0:00
   ex_unit skipped due to missing file ../test
   formatter skipped due to missing file ../.formatter.exs

Task 'checks' could not be found

Hello!

Big fan of your library, and have been enjoying using it!

0.6.0 worked great, but my CI broke with 0.7.0
[screenshot: Screenshot from 2019-07-29 21-00-10]

I have no problem running mix check locally.

I'm just going through your commits right now, but was wondering if you already had the answer :).

Missing ex_unit and formatter

I'm experiencing an issue with Elixir 1.16.1 (Erlang/OTP 26); in my (non-umbrella) project it gives me this report:

 ✓ compiler success in 0:03
 ✓ credo success in 0:03
 ✓ dialyzer success in 0:08
 ✓ doctor success in 0:02
 ✓ ex_doc success in 0:04
 ✓ mix_audit success in 0:03
 ✓ unused_deps success in 0:01
   ex_unit skipped due to missing file ../test
   formatter skipped due to missing file ../.formatter.exs
   sobelow skipped due to missing package sobelow

Why is it trying to get these files from an upper directory (../test and ../.formatter.exs)?

By the way, mix format --check-formatted and mix test work perfectly if I run them manually.

Respect mix.exs test_path configuration

First, I want to say I really like this project a lot and thank you for your hard work!

I have a test directory structure in an app within an umbrella project that looks like this:

test/
  unit/
    test_helper.exs
    some_test.exs
  acceptance/
    test_helper.exs
    another_test.exs

I've done this by customizing test_paths as specified in the mix documentation.

When I run mix check, I get the following output:

=> finished in 0:10

 ✓ compiler success in 0:03
 ✓ credo success in 0:05
 ✓ dialyzer success in 0:06
 ✓ ex_doc success in 0:07
 ✓ ex_unit in my_app success in 0:04
 ✓ formatter success in 0:01
 ✓ sobelow in my_app_web success in 0:02
   ex_unit in my_app_web skipped due to missing file apps/my_app_web/test/test_helper.exs

When I run mix test alone, all tests run successfully within my umbrella project. As a workaround, I've gotten mix check to run tests if I add a blank test_helper.exs to the test directory in my app. While this is a decent solution for the interim, I'd prefer to not have that file there if it's not needed. Looking at the source code in the project, it looks like test/test_helper.exs is hardcoded here:

{:ex_unit, "mix test", detect: [{:file, "test/test_helper.exs"}]},

Would it be possible to respect the test_paths configuration in mix.exs or not require test_helper.exs at that specific path?
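In the meantime, a per-project override pointing the detection at one of the custom paths might work; a sketch using this issue's example layout:

```elixir
[
  tools: [
    # Detect tests via the custom unit test helper instead of the
    # hardcoded test/test_helper.exs path.
    {:ex_unit, command: "mix test",
     detect: [{:file, "test/unit/test_helper.exs"}]}
  ]
]
```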

Compiler error 13

Thanks for this great tool, really useful.

I started getting these errors when running mix check too quickly after switching from neovim to the console (which in my setup auto saves all opened buffers):

$ mix check

=> running compiler

=> reprinting errors from compiler

=> finished in 0:00

 ✕ compiler error code 13 in 0:00

I upgraded to elixir 1.14.0 and added the auto save roughly at the same time so both could be the cause.

Specify Config Path

Would you be open to the addition of an opt where you can specify the config path, e.g.

mix check --config ~/my/path/to/config

taking precedence over any dir-local or home dir configs?

Function ExCheck.GraphExecutor.run/2 is undefined

Hi, I was using analyze to run the tests, but then I switched to ex_check; it was working nicely (I have adopted it in 2 or 3 microservices) but suddenly it started to be pretty unreliable and I don't understand why.

I was using 0.10.0, so today I also updated to 0.11.0, but it keeps on failing with the following error.

** (UndefinedFunctionError) function ExCheck.GraphExecutor.run/2 is undefined (module ExCheck.GraphExecutor is not available)
    ExCheck.GraphExecutor.run([{:formatter, [], {:pending, {:formatter, "mix format --check-formatted", []}}}, {:credo, [], {:pending, {:credo, "mix credo --strict --format oneline", []}}}, {:sobelow, [], {:pending, {:sobelow, "mix sobelow --config", []}}}, {:ex_doc, [], {:pending, {:ex_doc, "mix docs", []}}}, {:ex_unit, [], {:pending, {:ex_unit, "mix test", []}}}], [parallel: true, start_fn: #Function<14.31308330/1 in ExCheck.Check.run_tools/2>, collect_fn: #Function<15.31308330/1 in ExCheck.Check.run_tools/2>])
    lib/ex_check/check.ex:52: ExCheck.Check.run_others/2
    lib/ex_check/check.ex:18: ExCheck.Check.compile_and_run_tools/2
    (mix) lib/mix/task.ex:331: Mix.Task.run_task/3
    (mix) lib/mix/cli.ex:79: Mix.CLI.run_task/2

My package deps are (one is redacted for privacy; it is a source package we developed):

      {:phoenix, "~> 1.4.9"},
      {:phoenix_pubsub, "~> 1.1"},
      {:phoenix_ecto, "~> 4.0"},
      {:ecto_sql, "~> 3.1"},
      {:postgrex, ">= 0.0.0"},
      {:gettext, "~> 0.11"},
      {:jason, "~> 1.0"},
      {:plug_cowboy, "~> 2.0"},

      # Dev
      {:ex_check, "~> 0.11", only: [:dev], runtime: false},
      {:credo, "~> 0.8", only: [:dev], override: true},
      {:ex_doc, "~> 0.21", only: [:dev], override: true},
      {:sobelow, "~> 0.8", only: [:dev]},
      {:excoveralls, "~> 0.11", only: [:dev, :test]}

How to re-run failed tests

Hello!

I've been really enjoying this tool and using it everywhere I can, and have one question:

I am working on a codebase with intermittently failing tests, causing me to have to frequently re-run the whole suite. I'm leading a push to fix the intermittent cases, but it is not going to be quick.

In the meantime I am hoping to speed up my local development by being able to retry mix test with just failed cases in the case of failure. Something like:

    {:ex_unit, umbrella: [parallel: false], env: %{"MIX_ENV" => "test"}, command: "mix test || mix test --failed"},

(I am running this on an umbrella app, and a lot of the issues are the result of a large global cache causing tests to share state)

But I have not been able to find a way to pass anything beyond a single command into :command.

I understand that what I am asking for is an anti-pattern and you may not wish to add the option of doing this; if that is the case, would you be able to point me to where I could make the change in a fork of my own?

Thank you 🙂
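Given that `:command` also accepts a list form (as in the docs' `command: ["my_script", "argument with spaces"]` example), one possible workaround is to delegate the `||` logic to a shell; a sketch with the same anti-pattern caveat as above:

```elixir
[
  tools: [
    # The shell evaluates the ||, so the --failed retry only
    # runs when the first `mix test` exits non-zero.
    {:ex_unit, umbrella: [parallel: false], env: %{"MIX_ENV" => "test"},
     command: ["sh", "-c", "mix test || mix test --failed"]}
  ]
]
```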

Coverage

Would it be possible to have a check also for test coverage? It would also be great to have configuration controlling whether it appears green or red based on the percentage.
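One way to get a red/green coverage check today is to lean on excoveralls, which can fail the run when coverage drops below a minimum configured in its `coveralls.json` (the threshold mechanism belongs to excoveralls, not ex_check); a sketch:

```elixir
[
  tools: [
    # `mix coveralls` exits non-zero when coverage is below the minimum
    # configured in coveralls.json, so ex_check shows it red like any
    # other failing tool.
    {:coveralls, command: "mix coveralls", env: %{"MIX_ENV" => "test"}}
  ]
]
```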

Check for unused dependencies

Maybe this is a nice addition to the list of things ex_check is checking:

mix deps.unlock --check-unused

This checks if the "mix.lock" file has unused dependencies.

Add prechecks

Hi Karol!

I've been really enjoying using this library and consider it a standard part of my tooling.

I am wondering if I can contribute to your library? My idea is to add additional "prechecks" that are run synchronously after run_compiler/1 and before run_others/2:

defp compile_and_run_tools(tools, opts) do

My use case is that I would like to run:

  • mix compile --warnings-as-errors (halt if failure)
  • mix credo (halt if failure)
  • everything else in parallel

My reasoning is that a credo check takes only a second or two, so if it fails I would rather address it immediately rather than wait a couple of minutes for the whole test suite to run.

In order to not break your current API, I was thinking about another list of :prechecks that uses the same format as :tools in .check.exs.

[
  ## all available options with default values (see `mix check` docs for description)
  # skipped: true,
  # exit_status: true,
  # parallel: true,

  # Tools to run synchronously before running the tools below
  prechecks: [
    # Have same options as the tools below
    # A non-zero exit code will halt all further execution
  ],

  ## list of tools (see `mix check` docs for defaults)
  tools: [
    ## curated tools may be disabled (e.g. the check for compilation warnings)
    # {:compiler, false},

    ## ...or adjusted (e.g. use one-line formatter for more compact credo output)
    # {:credo, command: "mix credo --format oneline"},

    ## custom new tools may be added (mix tasks or arbitrary commands)
    # {:my_mix_check, command: "mix release", env: %{"MIX_ENV" => "prod"}},
    # {:my_arbitrary_check, command: "npm test", cd: "assets"},

    # {:my_arbitrary_script, command: ["my_script", "argument with spaces"], cd: "scripts"}
  ]
]

I am not sure about the naming, it can probably be improved. Is this a desirable addition to your library?
