microsoft / adbench

Benchmarking various AD tools.

License: MIT License

Julia 1.05% C++ 71.18% C 1.54% MATLAB 18.86% F# 0.16% Python 1.72% HTML 2.40% PowerShell 0.28% TeX 0.75% Jupyter Notebook 0.76% CMake 0.29% M 0.06% Batchfile 0.01% C# 0.89% Shell 0.04% Dockerfile 0.01% Objective-C 0.01%

adbench's Introduction

ADBench - autodiff benchmarks

This project aims to provide a running-time comparison of different tools for automatic differentiation, as described in https://arxiv.org/abs/1807.10129 (source in Documentation/ms.tex). It outputs a set of relevant graphs (see Graph Archive).

At the start of the 2020s, the graph for GMM (Gaussian Mixture Model, a nice "messy" workload with interesting derivatives) looked like this:

(Figure: GMM running-time comparison, Jan 2020)

For information about the layout of the project, see Development.

For information about the current status of the project, see Status.

Methodology

For explanations of how we perform the benchmarking, see Benchmarking Methodology and Jacobian Correctness Verification.

Build and Run

The easiest way to build and run the benchmarks is to use Docker. If that doesn't work for you, please refer to our build and run guide.

Plot Results

Use the ADBench/plot_graphs.py script to plot graphs of the resulting timings:

python ADBench/plot_graphs.py --save

This will save the graphs as .png files to tmp/graphs/static/.

Refer to PlotCreating for other possible command line arguments and the complete documentation.

Graph Archive

From time to time we run the benchmarks and publish the resulting plots here: https://adbenchwebviewer.azurewebsites.net/

The cloud infrastructure that generates these plots is described here.

Contributing

Contributions to fix bugs, test on new systems, or add new tools are welcome. See Contributing for details on how to add new tools, and Issues for known bugs and TODOs. This project has adopted the Microsoft Open Source Code of Conduct.

Known Issues

See Issues for a list of some of the known problems and TODOs.

There is also the GitHub issues page.

adbench's People

Contributors

amirsh, athas, awf, cgravill, iliaeg, laurent-hascoet, microsoft-github-policy-service[bot], mikhailnikolaev, msdmkats, novikov-alexander, toelli-msft, tomjaguarpaw, tomsmeding, zsmith3


adbench's Issues

Use of std::fill is dubious

Hi @zsmith3

I've just written 4d127b8 to partially correct 84184f5. I can't help thinking that the use of std::fill in the latter commit was accidental; it seems to have been intended as a purely documentation commit.

In any case, I don't think that any of the std::fill calls needed sizeof(double), and one of them was missing an initial J.

Do you remember the original purpose of this commit, or whether it was accidental? It was causing Manual to error out with heap corruption on Azure DevOps and on an Azure VS VM I set up (but, strangely, not on my local machine).

Thanks

We should have a GPU run as well as a CPU run

Typically we run the benchmarks on CPU only. We should also have a GPU run that compares how the tools perform when given a GPU. PyTorch, for example, is expected to be much faster.

Unify old/new naming

We used to call the things we benchmarked "tools"; now they are "modules".
We should unify -- any suggestions welcome.

PyTorch Hand differentiation is wrong

PyTorch calculates the Jacobian of the Hand objective incorrectly: in its result, the derivatives with respect to the last several theta variables are always zero, but in fact they are not. The Hand objective itself is calculated correctly, by the way.
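For reference, this is the kind of finite-difference cross-check one can use to confirm that those derivative columns really are non-zero (a generic numpy sketch; the objective f and its input x are stand-ins, not the actual Hand code):

import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """Central-difference Jacobian of f at x, one column per input."""
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        J[:, i] = (np.asarray(f(xp), dtype=float) -
                   np.asarray(f(xm), dtype=float)) / (2 * eps)
    return J

# If the AD Jacobian has all-zero columns for the last thetas while the
# corresponding fd_jacobian columns are non-zero, the AD result is wrong.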

GCC chokes on the UTF-16-encoded source files

I'm trying to build the benchmark suite on Linux, but GCC manages to produce an impressively verbose error message:

$ make
Scanning dependencies of target Tools-Finite-LSTM
[  3%] Building CXX object tools/Finite/CMakeFiles/Tools-Finite-LSTM.dir/main.cpp.o
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:1: error: stray ‘\377’ in program
 ��# i n c l u d e   < i o s t r e a m > 
 ^
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:2: error: stray ‘\376’ in program
 ��# i n c l u d e   < i o s t r e a m > 
  ^
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:3: error: stray ‘#’ in program
 ��# i n c l u d e   < i o s t r e a m > 
   ^
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:4: warning: null character(s) ignored
 ��# i n c l u d e   < i o s t r e a m > 
    ^
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:6: warning: null character(s) ignored
 ��# i n c l u d e   < i o s t r e a m > 
      ^
/home/athas/repos/ADBench/tools/Finite/main.cpp:1:8: warning: null character(s) ignored
 ��# i n c l u d e   < i o s t r e a m > 
        ^
...

(On and on for thousands of lines)

I think the best solution is to recode all the UTF-16 files as UTF-8 (like most other files in the repository), which should work everywhere. Alternatively, GCC supports the -finput-charset option, but that feels like a hack.
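A minimal recoding sketch along those lines (Python; the directory and the BOM-based detection are my assumptions, in practice one would recode exactly the files known to be UTF-16):

from pathlib import Path

# Recode UTF-16 sources (detected by their byte-order mark) to UTF-8 in place.
for path in Path("tools").rglob("*"):
    if not path.is_file():
        continue
    raw = path.read_bytes()
    if raw.startswith((b"\xff\xfe", b"\xfe\xff")):  # UTF-16 LE / BE BOM
        path.write_bytes(raw.decode("utf-16").encode("utf-8"))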

Matters arising from plot_graphs.py

Hi @mikhailnikolaev, thanks very much for all your work on the new graph drawing code. The new graphs look a lot better than the old ones, contain more information, and the code is more coherent too. There are a few matters arising that we should consider:

  1. Your original PR contained a lot of whitespace changes. Personally, I'm happy with the formatting of the file as it currently is. On the other hand, I'm open to discussion and not particularly opposed to whitespace changes. If we're going to do that, we should decide on the format (and perhaps choose an autoformatter) and do the whole file in one whitespace-only commit. Whitespace changes that occur at the same time as refactorings or functional changes make diffs hard to read.
    a. I am totally fine with wrapping long lines though
  2. A lot of the processing occurs at the module top level. This is pretty bad form. I think we have two options:
    1. If we're not going to make any more changes to that file then I think it's fine to leave it as it is. It works. Let's not try to fix what works.
    2. If we make more changes to this file then first we should pull all executable code into functions and have a main() function that is the entry point of the script. Then new functional changes can be made.

What do you think?

Make Zygote more Julian

  1. Make cam in the project a proper struct:

struct Camera{T}
   rot :: SVector{3,T}
   centre :: SVector{2,T}
   radial :: SVector{2,T}
   focal :: T
   X0 :: SVector{2,T}
end

Tidy up subtool logic

Some tools, e.g. Eigen and Manual, have multiple "subtools", e.g. *-Split or *-Vector configurations.
There is a lot of grotty logic, e.g. in run-all:

  if ($objective.contains("Eigen")) { $out_name = "$($this.name.ToLower())_eigen" }
  elseif ($objective.contains("Light")) { $out_name = "$($this.name.ToLower())_light" }
  elseif ($objective.endswith("SPLIT")) { $out_name = "$($this.name)_split" }
  else { $out_name = $this.name }

which should be cleaned up. It's probably best for each tool's cmake script to emit to cmake-vars-$buildtype (see comments in run-all.ps1) the list of EXEs it has created, and then to uniformly name benchmarks by the name of the EXE.

Unit tests fail

The Jacobian correctness tests for the C++ Hand (complicated) and LSTM problems fail for all modules.
The reason is that the required accuracy was increased, but the accuracy of the expected results was not.

Run-all.ps1 cannot run 2.5M point GMM

All our 2.5M-point GMM benchmarks expect to work in "replicate point" mode, where all points have the same coordinates, stated in the input file only once. All of our benchmark runners support this mode, but require a command-line argument to activate it.

Currently, run-all.ps1 cannot pass that argument.

Remove "replicate point" option

The "replicate point" option is strange, and might lead to unrealistic timings, e.g. with branch prediction or cache behaviour. We should remove it.

Post graphs to GitHub pages

It would be great to have the graphs for the latest build visible somewhere more permanent. Could we use GitHub pages to semi-automate the process?

Strange Eigen compilation issue

After setting up cmake and running make I get strange compiler errors in gmm_eigen.h:

In file included from /home/athas/repos/ADBench/tools/Manual/gmm_d.cpp:176:
/home/athas/repos/ADBench/tools/Manual/../cpp-common/gmm_eigen.h: In function ‘void gmm_objective_no_priors(int, int, int, const Eigen::Map<const Eigen::Array<T, -1, 1> >&, const std::vector<Eigen::Map<const Eigen::Matrix<Scalar, -1, 1> > >&, ArrayX<T>&, const std::vector<Eigen::Matrix<LhsScalar, -1, -1, 0> >&, const double*, Wishart, T*)’:
/home/athas/repos/ADBench/tools/Manual/../cpp-common/gmm_eigen.h:165:68: error: expected primary-expression before ‘)’ token
         Qxcentered.noalias() = Qs[ik].triangularView<Eigen::Lower>()*xcentered;
                                                                    ^
/home/athas/repos/ADBench/tools/Manual/../cpp-common/gmm_eigen.h:174:61: error: expected primary-expression before ‘)’ token
         lse(ik) = -0.5*(Qs[ik].triangularView<Eigen::Lower>() * (curr_x - mus[ik])).squaredNorm();
                                                             ^
tools/Manual/CMakeFiles/Tools-Manual-GMM-Eigen.dir/build.make:86: recipe for target 'tools/Manual/CMakeFiles/Tools-Manual-GMM-Eigen.dir/gmm_d.cpp.o' failed
make[2]: *** [tools/Manual/CMakeFiles/Tools-Manual-GMM-Eigen.dir/gmm_d.cpp.o] Error 1
CMakeFiles/Makefile2:442: recipe for target 'tools/Manual/CMakeFiles/Tools-Manual-GMM-Eigen.dir/all' failed
make[1]: *** [tools/Manual/CMakeFiles/Tools-Manual-GMM-Eigen.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

From all the Eigen documentation I can find, this should work. It fails with both GCC 8.3 and Clang 6.0, with similar error messages.

Playing around, either of the following changes makes it build:

  • Removing the template instantiation, i.e.: Qs[ik].triangularView()*xcentered
  • Explicitly passing an argument of any type (!!!): Qs[ik].triangularView<Eigen::Lower>("hello")*xcentered

I think my C++ skills are too rusty to figure out how these appease the compiler spirits. I also haven't gotten to running the code yet, so it may not actually work.

No __init__.py in python_common

There is no __init__.py file in tools/python_common. Without one it is not possible to import the python_common package, so it seems that one should be added.

Nonetheless, I am still confused: people have been running the PyTorch and Autograd tools, right? So how did that ever work in the absence of __init__.py? Does everyone who has ever run them have an __init__.py there that is not checked in to the repo?

If people could help me understand the situation, I will do a PR to add this file.
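For context, a minimal sketch of the import in question (the sys.path line is my assumption about how the runners locate the package):

import sys
sys.path.insert(0, "tools")

# With tools/python_common/__init__.py present, this is a regular package
# import. Without it, Python 3.3+ can still succeed via implicit namespace
# packages (PEP 420), which may be how people have been running the PyTorch
# and Autograd tools all along.
import python_common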

Rename autodiff -> ADBench

The repo has been renamed from https://github.com/awf/autodiff to https://github.com/awf/ADBench.
Check for occurrences and fix.

CI: Long-running PyTorch benchmarks cause "lost communication with server"

If we extend the range of d and k values that the PyTorch benchmarks run on, even rather conservatively to d = 2,10,20, k = 5,10,25, then the Azure DevOps CI process fails with "The agent: Hosted Agent lost communication with the server". I have no idea why this is. I will run the PyTorch benchmarks with only a limited range of d and k values for now, until there is time to investigate.

Run all process for all branches

We should modify the Azure Batch script to produce full graphs not only for master branch, but also for other branches. Moreover, Web Viewer should be modified to display results for different branches separately, or with additional labels.

Remove graphs from repo?

@zsmith3 I'm finding it awkward that the graphs are checked into the repo. Every time I generate new graphs my working copy shows an enormous number of changes.

Am I right in presuming that these graphs were checked in so that visitors to GitHub have a convenient way of seeing them? If so, would you mind if I remove the graphs from the repo but add them as distributed build artifacts somewhere else on the GitHub page?

TensorFlow missing in Windows PR tests

When running the "ADBench PR Build Win Test" it complains that TensorFlow is missing, for example

======================================================================
ERROR: test_simple_objective_runs_multiple_times (__main__.PythonModuleCommonHandTests)
Checks if objective can be calculated multiple times
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:/a/1/s/test/python/modules/common/hand_tests.py", line 174, in setUp
    self.test = module_load(module_path)
  File "D:\a\1\s\src\python\runner\ModuleLoader.py", line 21, in module_load
    spec.loader.exec_module(test)
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "D:\a\1\s\src\python\modules\TensorflowGraph\TensorflowGraphHand.py", line 8, in <module>
    import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
...
The following tests FAILED:
	172 - PythonBaModuleCommonTests (Failed)
	173 - PythonGmmModuleCommonTests (Failed)
	174 - PythonLstmModuleCommonTests (Failed)
	175 - PythonHandModuleCommonTests (Failed)

Should it be pip installed, or obtained some other way?

https://msrcambridge.visualstudio.com/Knossos/_build/results?buildId=48452&view=logs&j=2d2b3007-3c5c-5840-9bb0-2b1ea49925f3&t=f33089cd-d3fe-578c-2c5e-4ee560e10a0d&l=2139

Path to binaries broken

@awf, I'm having trouble understanding the intent of this change. It adds "\$script:buildtype" to the path to each tool except DiffSharp. But the tools don't have that in their path, at least not when I run on Azure DevOps.

The path to the generated Manual binary is tools\Manual\Tools-Manual-GMM.exe but, with this change, the script looks for it in tools\Manual\Release\Tools-Manual-GMM.exe.

Was it perhaps supposed to be -neq, or something else?

06ee8d1#diff-098416d604d473d6991aa60aad51b3bcL165

ADOLC is rerun

Does anyone have any idea why Tools-ADOLC-Hand-Light-complicated.exe and Tools-ADOLC-Hand-Eigen-complicated.exe keep being rerun?

I see the following even though the ADOLC results have already been generated and ADOLC should be skipped. It's interesting that number 5 in each case is skipped.

    complicated_big
      1
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Light-complicated.exe
          Hit time limit after 689 loops
          Hit time limit after 18 loops
      2
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Light-complicated.exe
          Hit time limit after 682 loops
          Hit time limit after 17 loops
      3
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Light-complicated.exe
          Hit time limit after 689 loops
          Hit time limit after 17 loops
      4
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Light-complicated.exe
          Hit time limit after 671 loops
          Hit time limit after 17 loops
      5
          Skipped test (already completed)
...
    complicated_big
      1
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Eigen-complicated.exe
          Hit time limit after 691 loops
          Hit time limit after 17 loops
      2
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Eigen-complicated.exe
          Hit time limit after 700 loops
          Hit time limit after 18 loops
      3
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Eigen-complicated.exe
          Hit time limit after 687 loops
          Hit time limit after 17 loops
      4
Running C:/Users/toelli/CMakeBuilds/b077ec1f-8a7a-9735-b8cb-76f7cb2fcc8e/build/x64-Release\tools\ADOLC\Tools-ADOLC-Hand-Eigen-complicated.exe
          Hit time limit after 665 loops
          Hit time limit after 18 loops
      5
          Skipped test (already completed)

tool_descriptors should be configured with descriptive flags

At the moment tool_descriptors looks as follows. We should configure it with descriptive flags rather than binary strings!

$tool_descriptors = @(
        #[Tool]::new("Adept", "bin", "1110", 1, 0, "101010")
         [Tool]::new("ADOLC", "bin", "1110", 1, 0, "101011")
        #[Tool]::new("Ceres", "bin", "1100", 0, 1, "101011")
         [Tool]::new("Coconut", "bin", "1000", 0, 0, "100000"),
         [Tool]::new("Finite", "bin", "1111", 0, 0, "101011")
         [Tool]::new("Knossos", "bin", "1000", 0, 0, "100000"),
         [Tool]::new("KnossosRev", "bin", "1000", 0, 0, "100000"),
         [Tool]::new("Manual", "bin", "1110", 0, 0, "110101")
         [Tool]::new("DiffSharp", "bin", "0100", 1, 0, "101010")
         [Tool]::new("Autograd", "py", "1100", 1, 0, "101010")
         [Tool]::new("PyTorch", "py", "1011", 0, 0, "101010")
         [Tool]::new("Julia", "julia", "1100", 0, 0, "101010")
        #[Tool]::new("Theano", "pybat", "1110", 0, 0, "101010")
        #[Tool]::new("MuPad", "matlab", 0, 0, 0)
        #[Tool]::new("ADiMat", "matlab", 0, 0, 0)
)

https://github.com/awf/autodiff/blob/master/ADBench/run-all.ps1#L326
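To illustrate the direction (a Python sketch only, not a drop-in for the PowerShell script, and every field name here is hypothetical since the meaning of the individual bits is not self-documenting):

from dataclasses import dataclass

@dataclass(frozen=True)
class ToolDescriptor:
    name: str
    kind: str                            # "bin", "py", "julia", ...
    objectives: frozenset = frozenset()  # e.g. {"GMM", "BA", "Hand"} instead of "1110"
    # Hypothetical named flags standing in for the opaque "101011"-style strings:
    run_gmm_split: bool = False
    uses_eigen: bool = False

tool_descriptors = [
    ToolDescriptor("ADOLC", "bin", frozenset({"GMM", "BA", "Hand"}), run_gmm_split=True),
    ToolDescriptor("Finite", "bin", frozenset({"GMM", "BA", "Hand", "LSTM"})),
]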

CI: ADOLC fails with "no space left on device"

When running under Azure DevOps CI with the full set of d and k values (2,10,20,32,64 and 5,10,25,50,100,200 respectively), I see lots of ADOL-C output lines saying "No space left on device!". For example:

2018-12-05T16:01:34.3309855Z           stderr> ADOL-C error: Fatal error-doing a read or write!
2018-12-05T16:01:34.3310273Z           stderr>               >>> No space left on device! <<<

https://msrcambridge.visualstudio.com/Knossos/_build/results?buildId=12842

I don't know why this is. I will investigate another time.

Tapenade produces wrong output

As the title says: when I run run-all.ps1 with the Tapenade module enabled, the Jacobians saved in the ..._J_Tapenade.txt files for all GMM, BA, and LSTM problems are entirely wrong. Jacobians for Hand are fine, by the way. Interestingly, the unit test for the LSTM Jacobian computation passes, but the Jacobian that ends up being saved for the very same problem is completely wrong.

Julia warns that `lgamma` is deprecated

Warning: `lgamma(x::Real)` is deprecated, use `(logabsgamma(x))[1]` instead.
caller = log_wishart_prior(::Wishart, ::Array{Float64,2}, ::Array{Array{Float64,2},1}, ::Array{Float64,2}) at common.jl:46

Consistent naming

We should name tools with their tool name and the language/platform, e.g. Julia-Zygote, Python-Autograd, C++-Finite, C++-Manual, C++-ManualEigen.

Debug build

@zsmith3 When building I saw a warning that something was being built as a debug build and that performance would be terrible. Do we need to change it to a release build?

CMake Warning in internal/ceres/CMakeLists.txt:
  The object file directory
    C:/Users/toelli/source/repos/autodiff/etc/HunterGate-Root/_Base/951e8da/6ce82c9/3073817/Build/ceres-solver/Build/ceres-solver-Release-prefix/src/ceres-solver-Release-build/internal/ceres/CMakeFiles/ceres.dir/
  has 208 characters.  The maximum full path to an object file is 250
  characters (see CMAKE_OBJECT_PATH_MAX).  Object file
    generated/partitioned_matrix_view_2_3_d.cc.obj
  cannot be safely placed under this directory.  The build may not work
  correctly.
-- Found Eigen version : C:/Users/toelli/source/repos/autodiff/etc/HunterGate-Root/_Base/951e8da/6ce82c9/3073817/Install/include/eigen3
   ===============================================================
   Disabling the use of Eigen as a sparse linear algebra library.
   This does not affect the covariance estimation algorithm 
   which can still use the EIGEN_SPARSE_QR algorithm.
   ===============================================================
-- Building without LAPACK.
-- Building without SuiteSparse.
-- Failed to find CXSparse - Could not find CXSparse include directory, set CXSPARSE_INCLUDE_DIR to directory containing cs.h
-- Did not find CXSparse, Building without CXSparse.
   ===============================================================
   Compiling without any sparse library: SuiteSparse, CXSparse 
   & Eigen (Sparse) are all disabled or unavailable.  No sparse 
   linear solvers (SPARSE_NORMAL_CHOLESKY & SPARSE_SCHUR)
   will be available when Ceres is used.
   ===============================================================
-- Google Flags disabled; no tests or tools will be built!
-- Found Google Log (glog). Assuming glog was NOT built with gflags support as gflags was not found.  If glog was built with gflags, please set the gflags search locations such that it can be found by Ceres.  Otherwise, Ceres may fail to link due to missing gflags symbols.
-- Building with OpenMP.
-- Found unordered_map/set in std namespace.
-- Found shared_ptr in std namespace using <memory> header.
-- Building Ceres as a static library.
=================================================================================
-- Build type: Debug. Performance will be terrible!
-- Add -DCMAKE_BUILD_TYPE=Release to the CMake command line to get an optimized build.
=================================================================================

Add DiffSharp 1.0

Currently we're using the older DV/DM in DiffSharp.
We should add a new tool folder DiffSharp-Tensor, using the dev branch 1.0 API.

Export data file for the graphs

It would be nice if the plotting process exported a data file (perhaps JSON) containing all the data points created by the benchmarking run. Ideally, the plotting code could then be used to replot the graphs from the data file.
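A minimal sketch of what that could look like (all names and the output path are hypothetical; the real data points live in plot_graphs.py's internal structures):

import json

def export_datapoints(datapoints, path="tmp/graphs/data.json"):
    """Dump the collected (tool, problem, size, seconds) records so the
    graphs can later be rebuilt without rerunning the benchmarks."""
    with open(path, "w") as f:
        json.dump(datapoints, f, indent=2)

def load_datapoints(path="tmp/graphs/data.json"):
    """Reload the records; the plotting code would take it from here."""
    with open(path) as f:
        return json.load(f)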

Which tools currently work?

I am setting up a CI build for the repository. At the moment I have enabled Manual, ManualEigen and Finite. I wonder if you could tell me which tools are currently in a working state and should be tested.

BA and Hand plot wrong X axis

I've realized what the problem is with the graphs for the BA and Hand objectives. First of all, the X axis has to carry information about the input variable count. Currently, in the script ADBench/plot_graphs.py, the variable count for a run is extracted from the timing file name (the relevant functions are defined in ADBench/utils.py). File names for the GMM and LSTM objectives contain the needed parameter-count information (e.g. a typical LSTM timing file is named lstm_l2_c1024_times.txt, so the script can tell that there were 2 * 1024 variables). But BA and Hand file names have the following form:

<tool_name><number in order>.txt

and for such files the script falls back to the generic extraction logic, which just concatenates all the digits from the file name. Note that this file name pattern is simply inherited from the input files stored in the data directory.
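Roughly, the two extraction paths behave like this (a sketch only; the real functions are in ADBench/utils.py and may differ in detail):

import re

def lstm_var_count(filename):
    # "lstm_l2_c1024_times.txt" encodes the parameter count: 2 * 1024.
    m = re.match(r"lstm_l(\d+)_c(\d+)_times", filename)
    return int(m.group(1)) * int(m.group(2))

def generic_var_count(filename):
    # BA/Hand files are named "<tool_name><number in order>.txt", so this
    # fallback just concatenates the digits, which is not a variable count.
    return int("".join(c for c in filename if c.isdigit()))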

So I think that the script ADBench/run-all.ps1 should generate files for BA and Hand using the same pattern as for LSTM and GMM. That means the script has to generate new file names for the results, which in general can differ from the input data file names.

Another option is just to leave things as they are. After all, we want to compare tools by their efficiency; I think we don't necessarily need to know how many variables were used.

What do you think about this?

Pipeline interfaces should be well specified

We have a pipeline that looks a bit like this

  • CMake
    • accepts a complicated family of options
    • builds some dependencies
    • creates a build.ninja file to tell ninja how to build the tools
  • ninja
    • builds binaries of tools
    • puts them somewhere
  • run-all.ps1
    • runs the tools to calculate benchmarks
    • the timing information is written to specially-named files
  • plot_graphs.py
    • looks for the specially-named timing files
    • generates graphs
    • puts the graphs somewhere

It would be great to have a specification of each step of the pipeline. What are the arguments it accepts? Where does it have to be run?[1] What are its outputs? Where are files placed and what is their specification?

[1] Ideally any tool should be able to be run in any directory. It shouldn't decide where to read or write files based on its current working directory.

Is there test data for graph generation?

Is there any test data for graph generation? At the moment I can't test the graph generation with violation info because I would have to generate the correctness files by hand. If there is no test data then perhaps we can put some in a suitable directory.

Unable to find type [JacobianComparisonLib.JacobianComparison]

I'm trying to run a build on my local machine. Manual completes fine. Then after the first run of Finite I see

Unable to find type [JacobianComparisonLib.JacobianComparison].
At C:\Users\toelli\source\repos\autodiff\ADBench\run-all.ps1:146 char:12
+     return [JacobianComparisonLib.JacobianComparison]::new($tolerance ...
+            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (JacobianComparisonLib.JacobianComparison:TypeName) [], RuntimeExcepti
   on
    + FullyQualifiedErrorId : TypeNotFound

(I understand that Finite is the first tool to be checked for correctness because Manual is not checked for correctness.) Now, in run-all.ps1 I see the line

Add-Type -Path "$bindir/src/dotnet/utils/JacobianComparisonLib/JacobianComparisonLib.dll"

ADBench/cmake-vars.ps1 contains

$bindir = "C:/Users/toelli/source/repos/autodiff"
$buildtype = "Release"
$gmm_d_vals = @(2, 10, 20, 32, 64)
$gmm_k_vals = @(5, 10, 25, 50, 100, 200)

The JacobianComparisonLib.dll file does indeed seem to be where it should be

PS C:\Users\toelli\source\repos\autodiff> dir .\src\dotnet\utils\JacobianComparisonLib\JacobianComparisonLib.dll


    Directory: C:\Users\toelli\source\repos\autodiff\src\dotnet\utils\JacobianComparisonLib


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       08/10/2019     14:19           3072 JacobianComparisonLib.dll

so at this point I don't see what's wrong. Why can't run-all.ps1 access the JacobianComparisonLib.dll, and how should I fix this?

Out of memory checking

The runners should check for out-of-memory situations (e.g. catch OOM exceptions or perform a priori tests where possible) and return a special exit code if such a situation occurs or is about to occur. The global runner, in turn, should check this exit code and not run tests of bigger sizes when tests of smaller sizes could not be completed due to OOM.
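A rough sketch of the contract described above (the exit-code value and all names are hypothetical):

import subprocess
import sys

OOM_EXIT_CODE = 42  # hypothetical code meaning "out of memory"

def benchmark_main(size):
    """Per-benchmark runner: translate OOM into the special exit code."""
    try:
        workload = bytearray(size)  # stand-in for the real benchmark
    except MemoryError:
        sys.exit(OOM_EXIT_CODE)

def run_all_sizes(binary, sizes):
    """Global runner: stop escalating once a smaller size already hit OOM."""
    for size in sorted(sizes):
        result = subprocess.run([binary, str(size)])
        if result.returncode == OOM_EXIT_CODE:
            break  # larger instances would only fail the same way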
