
xunit.performance

This repo has been archived, as this project is no longer maintained. We recommend that you use BenchmarkDotNet for your benchmarking needs.

Authoring benchmarks

  1. Create a new class library project
  2. Add a reference to the latest xunit.performance.api.dll
  3. Add a reference to the latest Microsoft.Diagnostics.Tracing.TraceEvent (It deploys native libraries needed to merge the *.etl files)
  4. Tag your test methods with [Benchmark] instead of [Fact]
  5. Make sure that each [Benchmark]-annotated test contains a loop of this form:
[Benchmark]
void TestMethod()
{
    // Any per-test-case setup can go here.
    foreach (var iteration in Benchmark.Iterations)
    {
        // Any per-iteration setup can go here.
        using (iteration.StartMeasurement())
        {
            // Code to be measured goes here.
        }
        // ...per-iteration cleanup
    }
    // ...per-test-case cleanup
}

The simplest possible benchmark is therefore:

[Benchmark]
void EmptyBenchmark()
{
    foreach (var iteration in Benchmark.Iterations)
        using (iteration.StartMeasurement())
            ; //do nothing
}

This can also be written as:

[Benchmark]
void EmptyBenchmark()
{
    Benchmark.Iterate(() => { /*do nothing*/ });
}

In addition, you can add inner iterations to the code to be measured.

  1. Add the for loop using Benchmark.InnerIterationCount as the number of loop iterations
  2. Specify the value of InnerIterationCount using the [Benchmark] attribute
[Benchmark(InnerIterationCount=500)]
void TestMethod()
{
    // The first iteration is the "warmup" iteration, where all performance
    // metrics are discarded. Subsequent iterations are measured.
    foreach (var iteration in Benchmark.Iterations)
        using (iteration.StartMeasurement())
            // Inner iterations are recommended for fast running benchmarks
            // that complete very quickly (microseconds). This ensures that
            // the benchmark code runs long enough to dominate the harness's
            // overhead.
            for (int i = 0; i < Benchmark.InnerIterationCount; i++)
            {
                // Test code goes here.
            }
}

If you need to execute different permutations of the same benchmark, then you can use this approach:

public static IEnumerable<object[]> InputData()
{
    var args = new string[] { "foo", "bar", "baz" };
    foreach (var arg in args)
        // Currently, the only limitation of this approach is that the
        // types passed to the [Benchmark]-annotated test must be serializable.
        yield return new object[] { new string[] { arg } };
}

// NoInlining prevents aggressive optimizations that
// could render the benchmark meaningless
[MethodImpl(MethodImplOptions.NoInlining)]
private static string FormattedString(string a, string b, string c, string d)
{
    return string.Format("{0}{1}{2}{3}", a, b, c, d);
}

// This benchmark will be executed 3 different times,
// with { "foo" }, { "bar" }, and { "baz" } as args.
[MeasureGCCounts]
[Benchmark(InnerIterationCount = 10)]
[MemberData(nameof(InputData))]
public static void TestMultipleStringInputs(string[] args)
{
    foreach (BenchmarkIteration iter in Benchmark.Iterations)
    {
        using (iter.StartMeasurement())
        {
            for (int i = 0; i < Benchmark.InnerIterationCount; i++)
            {
                FormattedString(args[0], args[0], args[0], args[0]);
            }
        }
    }
}
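
If the arguments you want to pass are not serializable, one workaround is to pass a serializable key through [MemberData] and resolve the actual object inside the benchmark method. The sketch below illustrates the pattern; the payload dictionary and sizes are illustrative only, not part of the API.

using System;
using System.Collections.Generic;
using Microsoft.Xunit.Performance;
using Xunit;

public class PayloadBenchmarks
{
    // Illustrative only: map serializable keys to the richer (possibly
    // non-serializable) objects the benchmark actually needs.
    private static readonly Dictionary<string, byte[]> s_payloads = new Dictionary<string, byte[]>
    {
        ["small"] = new byte[16],
        ["large"] = new byte[16 * 1024],
    };

    public static IEnumerable<object[]> PayloadKeys()
    {
        // Only the (serializable) string key travels through xunit's serialization.
        foreach (var key in s_payloads.Keys)
            yield return new object[] { key };
    }

    [Benchmark(InnerIterationCount = 100)]
    [MemberData(nameof(PayloadKeys))]
    public static void CopyPayload(string payloadKey)
    {
        // Resolve the real test data inside the benchmark.
        byte[] payload = s_payloads[payloadKey];
        var destination = new byte[payload.Length];

        foreach (BenchmarkIteration iter in Benchmark.Iterations)
        {
            using (iter.StartMeasurement())
            {
                for (int i = 0; i < Benchmark.InnerIterationCount; i++)
                {
                    Array.Copy(payload, destination, payload.Length);
                }
            }
        }
    }
}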

Creating a simple harness to execute the API

Option #1: Creating a self-contained harness + benchmark

using Microsoft.Xunit.Performance;
using Microsoft.Xunit.Performance.Api;
using System.Reflection;

public class Program
{
    public static void Main(string[] args)
    {
        using (XunitPerformanceHarness p = new XunitPerformanceHarness(args))
        {
            string entryAssemblyPath = Assembly.GetEntryAssembly().Location;
            p.RunBenchmarks(entryAssemblyPath);
        }
    }

    [Benchmark(InnerIterationCount=10000)]
    public void TestBenchmark()
    {
        foreach(BenchmarkIteration iter in Benchmark.Iterations)
        {
            using(iter.StartMeasurement())
            {
                for(int i=0; i<Benchmark.InnerIterationCount; i++)
                {
                    string.Format("{0}{1}{2}{3}", "a", "b", "c", "d");
                }
            }
        }
    }
}

Option #2: Creating a harness that iterates through a list of .NET assemblies containing the benchmarks.

using System.IO;
using System.Reflection;
using Microsoft.Xunit.Performance.Api;

namespace SampleApiTest
{
    public class Program
    {
        public static void Main(string[] args)
        {
            using (var harness = new XunitPerformanceHarness(args))
            {
                foreach(var testName in GetTestNames())
                {
                    // Here, the example assumes that the .NET assemblies containing
                    // the benchmarks are dropped side-by-side with the harness
                    // (the currently executing assembly).
                    var currentDirectory = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
                    var assemblyPath = Path.Combine(currentDirectory, $"{testName}.dll");

                    // Execute the benchmarks, if any, in this assembly.
                    harness.RunBenchmarks(assemblyPath);
                }
            }
        }

        private static string[] GetTestNames()
        {
            return new [] {
                "Benchmarks",
                "System.Binary.Base64.Tests",
                "System.Text.Primitives.Performance.Tests",
                "System.Slices.Tests"
            };
        }
    }
}

Command line options to control the collection of metrics

--perf:collect [metric1[+metric2[+...]]]

    default
        Metrics are set by the test author (this is the default behavior if no option is specified; it also enables ETW to capture some of the Microsoft-Windows-DotNETRuntime tasks).

    stopwatch
        Captures elapsed time using a Stopwatch (it does not require ETW).

    BranchMispredictions|CacheMisses|InstructionRetired
        These are performance monitor counters and require ETW.

    gcapi
        Currently enables "Allocation Size on Benchmark Execution Thread", which is collected through ETW.

Examples
  --perf:collect default
    Collect metrics specified in the test source code by using xUnit Performance API attributes

  --perf:collect BranchMispredictions+CacheMisses+InstructionRetired
    Collects BranchMispredictions, CacheMisses, and InstructionRetired PMC metrics

  --perf:collect stopwatch
    Collects the benchmark elapsed time (If this is the only specified metric on the command line, then no ETW will be captured)

  --perf:collect default+BranchMispredictions+CacheMisses+InstructionRetired+gcapi
    '+' implies union of all specified options

Supported metrics

Currently, the API collects the following data*:

Allocated Bytes in Current Thread (GC API call)
    Calls GC.GetAllocatedBytesForCurrentThread around the benchmark. Enabled if available on the target .NET runtime.

Branch Mispredictions (Performance Monitor Counter)
    Enabled if the collection option BranchMispredictions is specified and the counter is available on the machine (requires running as Administrator).

Cache Misses (Performance Monitor Counter)
    Enabled if the collection option CacheMisses is specified and the counter is available on the machine (requires running as Administrator).

Duration (benchmark execution time in milliseconds)
    Always enabled.

GC Allocations ** (GC trace event)
    Use the [MeasureGCAllocations] attribute in the source code.

GC Count ** (GC trace event)
    Use the [MeasureGCCounts] attribute in the source code.

Instructions Retired ** (Performance Monitor Counter)
    Enabled if the collection option InstructionRetired is specified or the [MeasureInstructionsRetired] attribute is used in the source code, and the counter is available on the machine (requires running as Administrator).

* The default metrics are subject to change; we are currently working on enabling more metrics and adding support for more control over which metrics are captured.
** These attributes can be overridden using the --perf:collect option.
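
The attribute-driven metrics above are opted into per benchmark. The following is a minimal sketch, assuming the attributes live in the Microsoft.Xunit.Performance namespace used elsewhere in this document; the workload itself is illustrative only.

using System.Collections.Generic;
using Microsoft.Xunit.Performance;

public class MetricAttributeExample
{
    [MeasureGCAllocations]
    [MeasureGCCounts]
    [MeasureInstructionsRetired]
    [Benchmark(InnerIterationCount = 1000)]
    public static void GrowList()
    {
        foreach (BenchmarkIteration iter in Benchmark.Iterations)
        {
            using (iter.StartMeasurement())
            {
                // Allocation-heavy work so the GC metrics have something to report.
                for (int i = 0; i < Benchmark.InnerIterationCount; i++)
                {
                    var list = new List<int>();
                    for (int j = 0; j < 100; j++)
                        list.Add(j);
                }
            }
        }
    }
}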

Collected data

Currently, the API generates the following output files with the collected data:

    csv - File containing statistics of the collected metrics
    etl - Trace file (Windows only)
    md  - Markdown file with statistics rendered as a table (GitHub friendly)
    xml - Serialized raw data of all of the tests with their respective metrics

Authoring Scenario-based Benchmarks

A scenario-based benchmark is one that runs in a separate process. To author this kind of test you need to provide an executable, as well as some information that xunit-performance needs in order to run it. You are responsible for all the measurements: not only deciding what to measure, but also how to obtain the actual numbers.

  1. Create a new Console Application project
  2. Add a reference to the "xUnit" NuGet package
  3. Add a reference to the latest xunit.performance.api.dll
  4. Define PreIteration and PostIteration delegates
  5. Define PostRun Delegate
  6. In the main function of your project, specify a ProcessStartInfo for your executable and provide it to xunit-performance.

You have the option of doing all the setup for your executable (downloading a repository, building, doing a restore, etc.) or you can indicate the location of your pre-compiled executable.

PreIteration and PostIteration are delegates that will be called once per run of your app, before and after it, respectively. PostRun is a delegate that will be called after all the iterations are complete; it should return a ScenarioBenchmark object filled with your test and metric names, as well as the numbers you obtained.

Example

In this example, HelloWorld is a simple program that performs some work, measures how much time it spent, and writes that number to a txt file. The test author has decided that it has only one test, called "Doing Stuff", and this test has only one metric to measure, "Execution Time".

The authoring might look something like this:

private const double TimeoutInMilliseconds = 20000;
private const int NumberOfIterations = 10;
private static int s_iteration = 0;
private static double[] s_startupTimes = new double[NumberOfIterations];
private static double[] s_requestTimes = new double[NumberOfIterations];
private static ScenarioConfiguration s_scenarioConfiguration = new ScenarioConfiguration(TimeoutInMilliseconds, NumberOfIterations);

public static void Main(string[] args)
{
  // Optional setup steps, e.g.:
  //  - Clone the repository
  //  - Build the benchmark

  using (var h = new XunitPerformanceHarness(args))
  {
    var startInfo = new ProcessStartInfo() {
      FileName = "helloWorld.exe"
    };

    h.RunScenario(
      startInfo,
      PreIteration,
      PostIteration,
      PostRun,
      s_scenarioConfiguration);
  }
}

private static void PreIteration()
{
  // Optional pre benchmark iteration steps.
}

private static void PostIteration()
{
  // Optional post benchmark iteration steps. For example:
  //  - Read measurements from txt file
  //  - Save measurements to buffer (e.g. s_startupTimes and s_requestTimes)
  ++s_iteration;
}

// After all iterations, we create the ScenarioBenchmark object and add
// its tests, each with one metric. Then we add one IterationModel for each
// iteration that ran.
private static ScenarioBenchmark PostRun()
{
  var scenarioBenchmark = new ScenarioBenchmark("MusicStore") {
    Namespace = "JitBench"
  };

  var startup = new ScenarioTestModel("Startup");
  scenarioBenchmark.Tests.Add(startup);

  var request = new ScenarioTestModel("Request Time");
  scenarioBenchmark.Tests.Add(request);

  // Add the measured metrics to the startup test
  startup.Performance.Metrics.Add(new MetricModel {
    Name = "ExecutionTime",
    DisplayName = "Execution Time",
    Unit = "ms"
  });

  // Add the measured metrics to the request test
  request.Performance.Metrics.Add(new MetricModel {
    Name = "ExecutionTime",
    DisplayName = "Execution Time",
    Unit = "ms"
  });

  for (int i = 0; i < s_scenarioConfiguration.Iterations; ++i)
  {
      var startupIteration = new IterationModel {
        Iteration = new Dictionary<string, double>()
      };
      startupIteration.Iteration.Add("ExecutionTime", s_startupTimes[i]);
      startup.Performance.IterationModels.Add(startupIteration);

      var requestIteration = new IterationModel {
        Iteration = new Dictionary<string, double>()
      };
      requestIteration.Iteration.Add("ExecutionTime", s_requestTimes[i]);
      request.Performance.IterationModels.Add(requestIteration);
  }

  return scenarioBenchmark;
}

When you create an instance of XunitPerformanceHarness, it comes with a configuration object of type ScenarioConfiguration, which has default values that you can adjust to match your test requirements.

public class ScenarioConfiguration
{
  public int Iterations { get; }
  public TimeSpan TimeoutPerIteration { get; }
}

Controlling the order of executed benchmarks

To control the order of the benchmarks executed within a type, you use an existing xUnit feature: implement the ITestCaseOrderer interface and configure it with the [TestCaseOrderer] attribute.

Example:

public class DefaultTestCaseOrderer : ITestCaseOrderer
{
    public IEnumerable<TTestCase> OrderTestCases<TTestCase>(IEnumerable<TTestCase> testCases) where TTestCase : ITestCase
        => testCases.OrderBy(test => test.DisplayName); // OrderBy provides stable sort ([msdn](https://msdn.microsoft.com/en-us/library/bb534966.aspx))
}

[assembly: TestCaseOrderer("namespace.OrdererTypeName", "assemblyName")]

Note: Make sure that you provide the full type name (with namespace) and the correct assembly name. An incorrect configuration fails silently.


xunit-performance's Issues

Need package compatible with latest xunit CLI support

The recommended xunit release for the CLI is not compatible with the latest preview release of xunit-performance.

This causes basic xunit testing to fail.

Running 'dotnet test' on a project with xunit-performance dependencies fails as follows:

Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 
'xunit.runner.utility.dotnet, Version=2.2.0.3300, Culture=neutral, PublicKeyToken=8d05b1bb7a6fdb6c'. 
The located assembly's manifest definition does not match the assembly reference. 
(Exception from HRESULT: 0x80131040)
   at Xunit.Runner.DotNet.Program..ctor()
   at Xunit.Runner.DotNet.Program.Main(String[] args)

This happens because the 1.0.0-prerelease-00508-1 version of xunit.runner.utility.dotnet is already loaded (via xunit-performance's dependency on DotNet.Build.Tools.TestSuite).

Perf runner ETW tracing insufficient system resources

I've been getting the following error frequently:

 Copyright (C) 2015 Microsoft Corporation.

  Discovering tests for D:\git\corefx\bin\tests\Windows_NT.AnyCPU.Release\System.IO.FileSystem.Tests\dnxcore50\System.IO.FileSystem.T
  ests.dll.
  Discovered a total of 6 tests.
  Creating output directory: .
  Starting ETW tracing. Logging to .\latest-perf-build
EXEC : warning : Insufficient system resources exist to complete the requested service. (Exception from HRESULT: 0x800705AA) [D:\git\
corefx\src\System.IO.FileSystem\tests\System.IO.FileSystem.Tests.csproj]
  The previous error was converted to a warning because the task was called with ContinueOnError=true.
     at Microsoft.ProcessDomain.ProcDomain.<ExecuteAsync>d__26`1.MoveNext()
  --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Microsoft.Xunit.Performance.ETWLogging.<StartAsync>d__4.MoveNext()
  --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Microsoft.Xunit.Performance.Program.StartTracing(IEnumerable`1 tests, String pathBase)
     at Microsoft.Xunit.Performance.ProgramCore.RunTests(IEnumerable`1 tests, String runnerHost, String runnerCommand, String runnerA
  rgs, String runId, String outDir)
     at Microsoft.Xunit.Performance.ProgramCore.Run(String[] args)

This happens while running the tests on Windows. I believe the issue is caused by my workflow:

  • Start running the tests
  • Tests hang indefinitely for some reason (I'm not writing infinite loops in my tests so there must be something else wrong here too, but I haven't yet discerned what).
  • Pass a CTRL-C to the runner to stop it

ISSUE: The test directory can not be deleted due to the ETW.user file.

After force-stopping a few times, the runner fails with the above "Insufficient system resources" error, and a complete system restart is required before I can run my tests again.

I assume this error has something to do with ETW files not being properly cleaned up when I CTRL-C the program, but Google/Stack Overflow gave no working way to clean them up manually after the fact.

Discuss: how to ensure that the measured region of an iteration is "large enough" to be measured

It's currently very easy and natural to write tests that cannot be measured accurately by our system. For example:

[Benchmark]  
[MemberData("TestHashtables")]  
public void GetItem(Hashtable table)  
{  
    object result;  
    foreach (var iteration in Benchmark.Iterations)  
    {  
        table.Add("key", "value");  
        using (iteration.StartMeasurement())  
            result = table["key"];  
        table.Remove("key");  
    }  
} 

This looks good, but the resulting measurements may be meaningless, because the "measured region" does so little. For example, our time measurements are limited by the system "performance counter." On my machine, that timer ticks 3,507,510 times per second. So anything that can execute in the ballpark of 1,000,000 times per second is basically unmeasurable, and something like this Dictionary access definitely could fall into that category.

I wonder what we can do to help users avoid this pitfall. We could flag any test that runs "too fast" with some sort of warning or error. But then what will the user do about it?

One idea is to add support for "inner iterations." The idea would be to make it easy to run the code in the measured region multiple times, with the number of "inner iterations" for a particular test determined at runtime, through actual measurements. Something like this:

        [Benchmark]
        [MemberData("TestHashtables")]
        public void GetItem(Hashtable table)
        {
            object result;
            foreach (var iteration in Benchmark.Iterations)
            {
                table.Add("key", "value");
                using (iteration.StartMeasurement())
                    foreach (var innerIteration in Benchmark.InnerIterations)
                        result = table["key"];
                table.Remove("key");
            }
        }

For the first iteration, we'd run the inner loop until enough timer ticks elapsed to get "good" timer resolution, then we'd run it the same number of times for each subsequent execution of the inner loop.

That seems like it would work well for this particular test case, but another test case might actually mutate the dictionary in the inner loop; in that case, doing that multiple times would make it a very different test! So I'm not sure this is the right solution.

Also, I'm not sure how this would work if we're concerned about metrics that may not be available at run-time (most metrics other than time are only available by parsing ETW data later).

We could simply tell users that the test is "too fast" and let them figure it out on a case-by-case basis.

Any other ideas?

Discuss: custom metrics

The simple proof-of-concept we have now only collects data about test execution time (wall-clock time) and GC counts. This clearly will not be sufficient for many test scenarios. We will certainly add a few more default metrics, but we'll probably also need a way to collect and report custom metrics. And maybe a way to customize reporting of even the default metrics.

As a vague proposal, I offer this: Provide an attribute (or set of attributes) that we can put on tests (or maybe the test assembly) to indicate that additional ETW providers should be enabled when those tests are run. Perhaps those attributes can also specify custom reporting plug-ins, that will be used to summarize the data in the analysis phase. Design of this is left as an exercise for the reader.

We should also offer a way to enable additional events per-run, for diagnostic purposes. For example, we may not collect CSWITCH stacks by default, or use them for reporting, but a dev may want to enable them for a particular run to help diagnose some problem. Design of this will depend on where we land WRT the mechanism for collecting ETW traces in the first place.

Add a Warning for Non-Serializable MemberData - Tests Don't Run

When I use the MemberData attribute with a non-primitive return type, such as a custom type, the perf tests don't get run and there is no perf result.

Here is the code I use:

using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization;
using Microsoft.Xunit.Performance;
using Xunit;

namespace Tests
{
    public interface IPerformanceTestData
    {
        void RunTest();
    }

    public class TestData1 : IPerformanceTestData
    {
        public void RunTest()
        {
            long sum = 0;
            for (int i = 0; i < 1000; ++i)
                for (int j = 0; j < 1000; ++j)
                    sum += i + j;
        }

        public override string ToString()
        {
            return "TestData1";
        }
    }

    public class PerformanceTest
    {
        public static IEnumerable<object[]> MemberData()
        {
            yield return new object[] { new TestData1() };
        }

        [Benchmark]
        [MemberData(nameof(MemberData))]
        public void UnitTest(IPerformanceTestData testData)
        {
            foreach (var iteration in Benchmark.Iterations)
            {
                using (iteration.StartMeasurement())
                {
                    testData.RunTest();
                }
            }
        }
    }
}

Expectation: perf number is produced in -analysis.html
Actual: the -analysis.html file is empty

Am I missing anything? How do I use a custom type with the MemberData attribute?

Nuget package dependency on BuildTools is specified incorrectly

The NuGet package relies on Microsoft.DotNet.BuildTools.TestSuite version >= 1.0.0-prerelease-508-01. The BuildTools NuGet package versions are actually formatted like this: 1.0.0-prerelease-00508-01. This prevents me from installing the xunit-performance package, because the version of BuildTools it's looking for is incorrectly specified.


Add a -outfile option to xunit.performance.run

On xunit.performance.run.exe we have a "-runid" switch. The value of that switch is used to create output filenames. The runid is also embedded in the .xml file.

When running as part of a distributed test job, it will be useful to pass the job's correlation ID as the run id. However, there will be many tasks associated with that job and we don't want the results to have the same file name.

It would therefore be useful to have a separate switch to control the output filename of the .xml and .etl files. If not specified, the output filename(s) should default to the runid, as they do today.

(In the meantime, we can work around this limitation by renaming the output files on the test machine before uploading the results)

Separate [Benchmark] setup and iteration phases

Many interesting multi-machine or multi-process performance tests need to do some basic test-specific setup once, prior to the part that is iterated and timed. Provide a way to decouple these. One approach might be to base [Benchmark] on [Theory] rather than [Fact]. This would allow the "argument instantiation" part to do setup for the test. The benchmark framework would do that once per test (untimed) and then iterate over the body of the [Benchmark] test (timed).

Each unique set of [Theory] argument values also constitutes a unique test and could be timed separately.

The tests in https://github.com/dotnet/wcf which would be marked [Benchmark] could easily factor their setup part into something like a [Theory] data provider.
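
A sketch of the idea using today's [MemberData] support, where the untimed setup runs in the data provider and only the body inside StartMeasurement is timed; the file-based workload here is illustrative only.

using System.Collections.Generic;
using System.IO;
using Microsoft.Xunit.Performance;
using Xunit;

public class SetupViaDataProvider
{
    public static IEnumerable<object[]> PreparedFiles()
    {
        // Untimed, test-specific setup: runs during argument instantiation.
        var path = Path.Combine(Path.GetTempPath(), "xunit-perf-sample.txt");
        File.WriteAllText(path, new string('x', 64 * 1024));
        yield return new object[] { path };
    }

    [Benchmark]
    [MemberData(nameof(PreparedFiles))]
    public static void ReadPreparedFile(string path)
    {
        foreach (var iteration in Benchmark.Iterations)
        {
            using (iteration.StartMeasurement())
            {
                // Only the read is timed.
                File.ReadAllText(path);
            }
        }
    }
}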

Discuss: iteration "philosophy"

Several other issues have mentioned how iterations should be handled, so I thought this warranted a dedicated issue.

The basic question is, for any given test, how many times should it be run? My thinking on this is as follows:

It should be trivial to author a simple perf test that yields useful results.

I should be able to write something like:

[Benchmark]
void MyTest()
{
    SomeMethod();
}

...and get a useful measure of that method's performance.

A useful result typically requires multiple iterations.

Performance, particularly in .NET, is inherently non-deterministic. The only way to get a useful idea of the performance of a given piece of code is to run it multiple times, and then use statistical techniques to characterize the data gathered.

It's not generally possible to predict the number of iterations required, ahead of time.

The statistical methods used determine how many "sample" measurements are needed, and this will vary depending on several factors:

a) The particular mix of operations in the test itself. Some methods are simple, single-threaded, CPU-bound, and therefore are not inherently non-deterministic, but most will end up allocating (involving the GC, which is non-deterministic), interacting with other threads, interacting with I/O devices, etc.

b) The state of the machine on which the test is running. It's nearly impossible to isolate the execution of a single process from everything else happening on/to the machine. For example, a sudden surge in network traffic may consume CPU resources for interrupt processing, making the test code seem to take longer for the duration of the surge. Or, there may be more physical memory usage in other processes on one machine vs. another, or at one time vs. another, causing the GC to collect more aggressively, temporarily.

The effects in (a) are largely predictable, though it may require a fair amount of experimentation to find an iteration count that consistently leads to useful results. However, the effects in (b) are completely unpredictable, especially in a system that needs to work reliably outside of the controlled environment of a dedicated performance lab.

Performance testing should be fast

On the CLR team, we've traditionally compensated for the unpredictability in perf tests by hard-wiring them for a large number of iterations. Much of the time this ends up being a waste of (computer) time. If we have to run every test enough times that we could get a useful result if we were running in a noisy environment, we limit the number of tests we can run in a clean environment, in a given amount of time. Also, we've ended up running many relatively stable tests as if they were unstable, simply because it's a pain (and time-consuming) to go analyze every test case and determine the optimal number of iterations; so we run even the stable, predictable tests more than needed, further wasting machine time on every run.

The wasted machine time becomes a much larger issue when we start asking OSS contributors to run these tests on their own machines prior to submitting changes.

The above, to me, argues for a dynamic system that compensates for unpredictability at run-time, by default. We should be running the test until we have some confidence that we have a useful result, for that particular run. I think the above also argues that it's harmful to offer any kind of mechanism for hard-wiring particular iteration counts, as they will inevitably lead to tests that either don't produce useful results, or waste time producing overly-accurate results. I can see configuration of extreme upper limits, as "back-stops" against the run-time algorithms going astray, but I worry that those would be quickly, and quite accidentally, abused. A backstop might be better implemented as an overall execution time limit; if the test run exceeds that limit, it simply fails, rather than producing useless, but seemingly valid, results.

Of course, that's just one point of view. I'm eager to hear others.

A Benchmark with MemberData can hang in discovery

Add a new C# file to the SimplePerfTests project with the following content:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xunit.Performance;
using Xunit;

namespace SimplePerfTests
{
    public class Document
    {
        private string _text;

        public Document(string text)
        {
            _text = text;
        }

        public void Format()
        {
            _text = _text.ToUpper();
        }
    }

    public class FormattingTests
    {
        static IEnumerable<object[]> MakeArgs(params object[] args)
        {
            return args.Select(arg => new object[] { arg });
        }

        public static IEnumerable<object[]> FormatCurlyBracesMemberData => MakeArgs(
            new Document("Hello, world!")
        );

        [Theory]
        [MemberData(nameof(FormatCurlyBracesMemberData))]
        public void FormatCurlyBracesTest(Document document)
        {
            document.Format();
        }
    }
}

Build SimplePerfTests and verify that you can run the FormatCurlyBracesTest.
Now change [Theory] to [Benchmark] and rebuild. Visual Studio's test discovery mechanism spins forever.

I'm not sure how to debug this, but presumably the problem lies in the code in BenchmarkTestCaseRunner.

Build warnings for missing XML doc comments

We get the following warnings from a Release build due to missing XML Doc Comments:

  AllocatesAttribute.cs(11,16): warning CS1591: Missing XML comment for publicly visible type or member 'AllocatesAttribute.AllocatesAttribute(bool)' [D:\xunit-performance\src\xunit.p
erformance.core\xunit.performance.core.csproj]
  AllocatesAttribute.cs(13,21): warning CS1591: Missing XML comment for publicly visible type or member 'AllocatesAttribute.Allocates' [D:\xunit-performance\src\xunit.performance.core
\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(7,25): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource' [D:\xunit-performance\src\xunit.performance.core\xunit.
performance.core.csproj]
  BenchmarkEventSource.cs(9,22): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Tasks' [D:\xunit-performance\src\xunit.performance.core\
xunit.performance.core.csproj]
  BenchmarkEventSource.cs(11,36): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Tasks.BenchmarkStart' [D:\xunit-performance\src\xunit.p
erformance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(12,36): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Tasks.BenchmarkStop' [D:\xunit-performance\src\xunit.pe
rformance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(13,36): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Tasks.BenchmarkIterationStart' [D:\xunit-performance\sr
c\xunit.performance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(14,36): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Tasks.BenchmarkIterationStop' [D:\xunit-performance\src
\xunit.performance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(17,44): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.Log' [D:\xunit-performance\src\xunit.performance.core\x
unit.performance.core.csproj]
  BenchmarkEventSource.cs(20,28): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.BenchmarkStart(string, string)' [D:\xunit-performance\s
rc\xunit.performance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(41,28): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.BenchmarkStop(string, string, string)' [D:\xunit-perfor
mance\src\xunit.performance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(65,28): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.BenchmarkIterationStart(string, string, int)' [D:\xunit
-performance\src\xunit.performance.core\xunit.performance.core.csproj]
  BenchmarkEventSource.cs(88,28): warning CS1591: Missing XML comment for publicly visible type or member 'BenchmarkEventSource.BenchmarkIterationStop(string, string, int, bool)' [D:\
xunit-performance\src\xunit.performance.core\xunit.performance.core.csproj]

Support data-driven performance tests

The [Theory] model would be very useful for perf tests. For example, the CLR's string throughput tests currently have each test case run a single String method over a set of string literal inputs, all in one test case. It would be great if these could be refactored into a single method call, parameterized over a bunch of strings, with each input treated as a separate test case in the output.

It shouldn't be hard to flesh out the existing benchmark support to support data discoverers, etc.

Add parameters to BenchmarkAttribute

The defaults are great, but I'd like to be able to configure a few things:

  1. Manual iteration count (instead of the heuristic one)
  2. Whether there is an additional "warm-up" iteration (true/false)
  3. If using a heuristic iteration count, what are the 'escape' values (e.g. maximum elapsed time which is currently set at 10ms)

I'm sure we will think of more.

Make .etl output optional

The .etl file is large, and we do not want it most of the time. For example, when the perf run is nightly, we just want the xml output to analyze the trend.

CPU counters

Need to add support for "instructions retired," and possibly other CPU counters. The ETW support for this is unfortunately currently limited to Win 10, and only works with Hyper-V disabled on the machine. But where we can get it, it would be nice.

Data precision of xunit.performance.analysis.exe

I got results like:
<iterations>
  <iteration index="0" Duration="1766.2689153251486" />
  <iteration index="1" Duration="1625.3662320883595" />
  ...
</iterations>

While the analysis result is:
<Duration min="1.56E+03" mean="1.62E+03" max="1.77E+03" marginOfError="0.0247" stddev="59.5" />

It seems precision is lost in the analysis result.

Insufficient system resources when running tests concurrently

For CoreFX, when running build.cmd we run the tests concurrently. With the current perf runner (build 22), we run Corerun regardless of any test discovery results. This has the effect that ETW tracing is started for a number of csproj that don't actually have any perf tests associated with them. This is generally not an issue, but occasionally I'll get an error like this:

 xUnit.net console test runner (64-bit .NET Core)
  Copyright (C) 2014 Outercurve Foundation.

EXEC : warning : Insufficient system resources exist to complete the requested service. (Exception from HRESULT: 0x800705AA) [D:\git\
corefx\src\System.IO.FileSystem.Primitives\tests\System.IO.FileSystem.Primitives.Tests.csproj]
     at Microsoft.ProcessDomain.ProcDomain.<ExecuteAsync>d__26`1.MoveNext()
  --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Microsoft.Xunit.Performance.ETWLogging.<StartAsync>d__4.MoveNext()
  --- End of stack trace from previous location where exception was thrown ---
     at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
     at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
     at Microsoft.Xunit.Performance.EtwPerformanceMetricLogger.StartLogging(ProcessStartInfo runnerStartInfo)
     at Microsoft.Xunit.Performance.ProgramCore.RunTests(XunitPerformanceProject project)
     at Microsoft.Xunit.Performance.ProgramCore.Run(String[] args)

I don't come upon this issue when building and running the tests on a project-by-project basis.

CoreCLR runner

For non-Windows platforms, we need a version of xunit.performance.run that runs on CoreCLR.

Some issues:

  1. Do we use DNX to run this, CoreRun, or both?
  2. What do we replace ETW with?

For now, the answer to the ETW question is likely that we simply log iteration durations to a file, and ignore other metrics, until we get some cross-platform ETW-equivalent in .net.

Running xunit-perf on Windows 7

Hi,
When I try to run the following command on Windows 7 to collect more detail:
xunit.performance.run MyTests.dll -runner xunit.console.exe -runid MyRun1234
it produces this error:
Error: Profile source only availabe on Win8 and beyond.
   at Microsoft.ProcessDomain.ProcDomain.<ExecuteAsync>d__26`1.MoveNext() in D:\work station\projects\perf_test_dataexchange\xunit-performance-master\src\ProcDomain\ProcDomain.cs:line 207.

Discuss: Out-of-proc ETW tracing

It was mentioned in #3 that "internal" ETW tracing might not be the best approach. I've been thinking the same thing myself.

A few downsides:

  1. It requires that the xunit test runner runs elevated which may be undesirable (consider what a malicious test case could do).
  2. Right now, the set of providers is hard-coded. Even if configuration options were provided via the Benchmark attribute, not all projects will want the same set.
  3. Users may need to configure other things about the session like the buffer sizes.

Publish to NuGet

I admit, I have no idea what state the project is in right now (I will clone and use a local build for now), but I can't see this on NuGet; any chance of a beta/pre-release build making its way there?

Fix tracking of sample interval for Instructions Retired

My recent change to add the Instructions Retired metric has a bug: we lose track of the actual sample interval after scanning the first test case's ETW results. We should be tracking this in the "evaluation context" so the value previously reported in the trace can flow to subsequent test cases.

Deadlock when failing to deserialize a CrossDomainInvokeRequest

This is an interesting issue that I encountered when loading a custom performance metric that depended on a version of TraceEvent that was different from the one that the Logger process depended upon. In my case, when deserializing a CrossDomainInvokeRequest, an exception is thrown by the runtime for the assembly version mismatch, which is propagated here:

while ((parentMessage = await ReadNextRequestAsync()) != null)
{
    Task throwaway = HandleInvokeRequest(parentMessage);
}

Since the faulted task is awaited, it continues to unwind to here:

var throwaway = ListenToParentDomainAsync();

where the task is never awaited, so the exception stops propagating. At this point, the child procdomain is waiting to be unloaded, but it will never be unloaded because ListenToParentDomainAsync unwound past the call to this.Unload() and is no longer listening to the parent domain. Meanwhile, the parent domain is waiting for a response from the child domain, which will never come (since it's not listening).

Simple repro:

  1. Insert an exception here so that this method begins to unwind.
  2. Run a test suite as normal. The test runner gets stuck waiting for the trace to begin:
xunit.performance Console Runner (64-bit .NET 4.0.30319.42000)
Copyright (C) 2015 Microsoft Corporation.

Creating output directory: .
Discovering tests for C:\Users\segilles\Documents\Visual Studio 2015\Projects\Microsoft.Performance.GC\Microsoft.Performance.GC.Tests\bin\Debug\Microsoft.Performance.GC.Tests.dll.
Discovered a total of 1 tests.
Starting ETW tracing. Logging to .\Crash.etl

It will never progress past this point because it's waiting on the child procdomain which is partially faulted.

I looked into fixing this but there are a couple factors that make this tricky.

  1. Allowing the child procdomain to die via async void is not good because it leaks an ETW session in the Logger process, which is unacceptable.
  2. Allowing the child procdomain to immediately unload on a fault causes the parent procdomain to wait forever on a response, still deadlocking.

Perhaps it would be possible to introduce a sort of protocol where the child procdomain could inform its parent (or vice versa) that it has faulted and will no longer respond to messages? Unfortunately, the scenario that would cause someone to run into this issue is probably quite common - the version mismatch in TraceEvent.

When running in console mode, it should run properly

Especially when you are running under the debugger to ensure your test is correct, there shouldn't be any failure.

For example I had to resort to:

public class BenchBase
{
    protected void ExecuteBenchmark(Action action)
    {
        if (Debugger.IsAttached)
        {
            action();
        }
        else
        {
            foreach (var iteration in Benchmark.Iterations)
            {
                using (iteration.StartMeasurement())
                {
                    action();
                }
            }
        }
    }
}

It would be great to have a ConsoleIteration run instead. If that is not possible, at least a NullIteration, so we don't get a NullReferenceException.

Minimum number of iterations?

Do you think something like this may have value? I've noticed from my data that many tests only run one or two iterations, and the data gets skewed oddly. An example:

      <test name="System.Diagnostics.Tests.Perf_Process.Kill" type="System.Diagnostics.Tests.Perf_Process" method="Kill" time="11.4533988" result="Pass">
        <performance runid="System.Diagnostics.Process.Tests.dll-WindowsCore" etl="D:\git\corefx\bin\tests\Windows_NT.AnyCPU.Release\System.Diagnostics.Process.Tests\dnxcore50\System.Diagnostics.Process.Tests.dll-WindowsCore.etl">
          <metrics>
            <Duration displayName="Duration" unit="msec" />
          </metrics>
          <iterations>
            <iteration index="0" Duration="248.70042802554235" />
            <iteration index="1" Duration="93.757350318643148" />
          </iterations>
        </performance>
      </test>

It seems like we would want to run this test a few more times to get a better idea of the expected duration.

Consider processing ETW data in parallel

For large amounts of ETW data, we spend a long time parsing the results. We could perhaps speed this up by running the tests in smaller "chunks" with separate ETL files for each chunk, and then parsing the files in parallel.
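
A rough sketch of the shape this could take, assuming the per-chunk parsing is factored into a self-contained delegate; the parseChunk parameter below is a placeholder, not an existing API.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class EtlChunkProcessor
{
    public static TResult[] ParseChunksInParallel<TResult>(
        IEnumerable<string> etlFilePaths,
        Func<string, IEnumerable<TResult>> parseChunk)
    {
        var results = new ConcurrentBag<TResult>();

        // Each ETL chunk is parsed independently, so the files can be processed
        // on separate threads and the per-chunk results merged afterwards.
        Parallel.ForEach(etlFilePaths, path =>
        {
            foreach (var result in parseChunk(path))
                results.Add(result);
        });

        return results.ToArray();
    }
}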

Need better error reporting if outdir does not exist.

When running on our build server, no ETL file is created and an exception is thrown in ETWLogging.StartAsync(). If I copy the dll with the tests (built on the build server) and xunit.performance.run.exe with its dependencies to a directory on my local machine, everything works as expected. My machine is running Windows 8.1 and the build server Windows Server 2012 R2.

What should I do to continue investigating the issue? Where could the difference be?

Here's the stack trace:
The system cannot find the path specified. (Exception from HRESULT: 0x80070003) [C:\Builds\3\Common components\AwtSG.CommonComponent.PerformanceTests\src\Common components\build\performancetests.proj]
   at Microsoft.ProcessDomain.ProcDomain.<ExecuteAsync>d__26`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Xunit.Performance.ETWLogging.<StartAsync>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Xunit.Performance.EtwPerformanceMetricLogger.StartLogging(ProcessStartInfo runnerStartInfo)
   at Microsoft.Xunit.Performance.ProgramCore.RunTests(XunitPerformanceProject project)
   at Microsoft.Xunit.Performance.ProgramCore.Run(String[] args)

.NET 4.5 support

How come there isn't any .NET 4.5 support? Is it just that you don't need it, or are there features missing in the framework?

I tried creating .NET 4.5 projects (see this branch), but the generated XML files are missing the <iterations> tags. If you don't know of any issues, I might dig further into the problem.

Inconsistent test results when running one test vs several tests

I've been running into this problem with a few different classes now. The issue is that when I run just a single test method I get expected results, but when I run the entire test class those test results get skewed. An example:

Run all tests for Process:

<assemblies>
  <assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (1 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-30" run-time="00:42:14" total="10" passed="10" failed="0" skipped="0" time="150.045" errors="0">
    <errors />
    <collection total="10" passed="10" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="149.711">
      <test name="System.Diagnostics.Tests.Perf_Process.GetHasExited" type="System.Diagnostics.Tests.Perf_Process" method="GetHasExited" time="2.4069179" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.Start" type="System.Diagnostics.Tests.Perf_Process" method="Start" time="2.6243841" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetStandardOutput" type="System.Diagnostics.Tests.Perf_Process" method="GetStandardOutput" time="1.8604058" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.Kill" type="System.Diagnostics.Tests.Perf_Process" method="Kill" time="2.8133242" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="4.141252" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="8.1157165" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="11.5350007" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetStartInfo" type="System.Diagnostics.Tests.Perf_Process" method="GetStartInfo" time="1.6823507" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetId" type="System.Diagnostics.Tests.Perf_Process" method="GetId" time="3.888143" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetExitCode" type="System.Diagnostics.Tests.Perf_Process" method="GetExitCode" time="110.6433883" result="Pass" />
    </collection>
  </assembly>
</assemblies>

Run just the tests for Process.GetProcessesByName:

<assemblies>
  <assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (1 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-30" run-time="00:47:26" total="3" passed="3" failed="0" skipped="0" time="4.906" errors="0">
    <errors />
    <collection total="3" passed="3" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="4.559">
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.2312171" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.5436275" result="Pass" />
      <test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.7846085" result="Pass" />
    </collection>
  </assembly>
</assemblies>

The tests being run are those at dotnet/corefx#3523.

Statistics computation assume that the distribution is normal

Statistics are computed under the assumption that the distribution is normal. Such an assumption is questionable, especially for the small populations that are typical here.

Consider using a non-parametric estimation technique for metrics on populations, and some variant of hypothesis-based testing for determining if some metric is significantly different between populations.
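
As one possible direction (not the current implementation), a percentile bootstrap around the median avoids any normality assumption; a minimal sketch over a single benchmark's iteration durations:

using System;
using System.Linq;

public static class NonParametricStats
{
    // Percentile-bootstrap confidence interval for the median of a sample,
    // e.g. the per-iteration durations of one benchmark.
    public static (double Lower, double Upper) BootstrapMedianInterval(
        double[] durations, int resamples = 10000, double confidence = 0.95)
    {
        var rng = new Random(1234); // fixed seed so reports are reproducible
        var medians = new double[resamples];

        for (int r = 0; r < resamples; r++)
        {
            // Resample with replacement and record the resample's median.
            var sample = Enumerable.Range(0, durations.Length)
                                   .Select(_ => durations[rng.Next(durations.Length)])
                                   .OrderBy(d => d)
                                   .ToArray();
            medians[r] = sample[sample.Length / 2];
        }

        Array.Sort(medians);
        double alpha = 1.0 - confidence;
        int lowerIndex = (int)(alpha / 2 * resamples);
        int upperIndex = (int)((1 - alpha / 2) * resamples) - 1;
        return (medians[lowerIndex], medians[upperIndex]);
    }
}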

Need Automated Full Cross-Platform Build

CLI components are executables, but can only be built for an individual platform. We want these CLI components to be available for all platforms, so we need to figure out how to build and package them appropriately.

Running external test processes

Some of our tests really need their own process. For example, a "hello world" test that is measuring the end-to-end time to execute the canonical "hello world" console app. That's a very simple example, but this comes up a lot in our testing.

One idea would be to just have a simple API for invoking an external process. Test methods would just be normal [Benchmark] methods that happen to call this API. However, I wonder if it would make sense for the harness to understand that it's executing an external process? For example, this could influence the iteration heuristics, which really should not be looking at local GC counters when the interesting code is running in a different process.

Maybe we should have a different test type for this ([ExternalBenchmark]). Or, a property on BenchmarkAttribute giving the name of an executable. In either case, the method could be declared as "extern", with no method body.
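
As an illustration of the first idea (a normal [Benchmark] method that happens to launch a process), assuming a placeholder helloWorld.exe sitting next to the test assembly:

using System.Diagnostics;
using Microsoft.Xunit.Performance;

public class ExternalProcessBenchmarks
{
    [Benchmark]
    public void HelloWorldEndToEnd()
    {
        foreach (var iteration in Benchmark.Iterations)
        {
            using (iteration.StartMeasurement())
            {
                // Measures the end-to-end time from process start to exit.
                using (var process = Process.Start(new ProcessStartInfo
                {
                    FileName = "helloWorld.exe",
                    UseShellExecute = false
                }))
                {
                    process.WaitForExit();
                }
            }
        }
    }
}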

Warm-up iterations

As I continue writing perf tests for CoreFX I’ve realized there isn’t yet an intuitive way to handle warmup. Are there any plans in motion to add this functionality?

The way that I’ve currently been thinking about warmup lends a structure like so:

    [Benchmark]
    public void CreateDirectory()
    {
        // Warmup
        for (int i = 0; i < 100; i++)
        {
            string path = GetTestFilePath();
            Directory.CreateDirectory(path);
            Directory.Delete(path);
        }

        foreach (var iteration in Benchmark.Iterations)
        {
            // Setup
            string testFile = GetTestFilePath();

            // Actual perf testing
            using (iteration.StartMeasurement())
                Directory.CreateDirectory(testFile);

            // Teardown
            Directory.Delete(testFile);
        }
    }

What I’d like, however, would be to hardly worry about warmup at all. What do you think of adding a “WarmupIterations” variable that disables logging the information for the first X iterations? This could be in a config file or a Property of “Benchmark” like so:

    [Benchmark]
    public void CreateDirectory()
    {
        Benchmark.WarmupIterations = 100;
        foreach (var iteration in Benchmark.Iterations)
        {
            // Setup
            string testFile = GetTestFilePath();

            // Actual perf testing
            using (iteration.StartMeasurement())
                Directory.CreateDirectory(testFile);

            // Teardown
            Directory.Delete(testFile);
        }
    }

This would also require that we know/can set how many iterations are going to be run, but that functionality isn't present since iteration count seems to be determined at runtime.

Thoughts?

@ericeil @mellinoe

Ignore warmup iterations

I tried xunit-performance, and by default it runs 1000 iterations for every test, and the mean duration for a test seems to be calculated from all 1000 durations. However, the first few iterations are doing warm-up and would be better treated as outliers.

For example:
<iterations>
<iteration index="0" Duration="0.90437474681425556" GCCount="0" />
<iteration index="1" Duration="0.0021130251095655694" GCCount="0" />
<iteration index="2" Duration="0.00090558218971636961" GCCount="0" />
...

Thanks
