kostya / benchmarks


Some benchmarks of different languages

License: MIT License


Introduction

Overview

The benchmarks follow the criteria:

  • They are written as the average software developer would write them, i.e.

    • The algorithms are implemented as cited in public sources;
    • The libraries are used as described in the tutorials, documentation and examples;
    • The data structures used are idiomatic.
  • The algorithms are similar across languages (matching the reference implementations); variants are acceptable as long as a reference implementation exists.

  • All final binaries are release builds (optimized for performance where possible), as debug-build performance may vary too much depending on the compiler.

My other benchmarks: jit-benchmarks, crystal-benchmarks-game

Measurements

The measured values are:

  • time spent for the benchmark execution (loading required data and code self-testing are not measured);
  • memory consumption of the benchmark process, reported as base + increase, where base is the RSS before the benchmark and increase is the peak increase of the RSS during the benchmark;
  • energy consumption of the CPU package during the benchmark: PP0 (cores) + PP1 (uncore, e.g. the GPU) + DRAM. Currently, only Intel CPUs are supported, via the powercap interface.

All values are reported as median ± median absolute deviation (MAD).
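For illustration, median ± MAD can be computed as in the minimal Python sketch below (an illustration only, not the project's actual analyze.rb implementation):

```python
import statistics

def median_mad(samples):
    # Median, and the median absolute deviation around that median.
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    return med, mad

# The outlier (100) barely moves the reported value: 3 ± 1.
print(median_mad([1, 2, 3, 4, 100]))  # → (3, 1)
```

Unlike mean ± standard deviation, this pair is robust to occasional outlier runs, which is why it suits noisy wall-clock measurements.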

UPDATE: 2023-12-07

Test Cases

Brainfuck

Testing Brainfuck implementations using two code samples (bench.b and mandel.b). The benchmark supports two modes:

  • Verbose (default). Prints the output immediately.
  • Quiet (if the QUIET environment variable is set). Accumulates the output into a Fletcher-16 checksum and prints it after the benchmark.
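The Fletcher-16 checksum used in quiet mode can be sketched as follows (a Python illustration of the algorithm; the actual per-language implementations live in the test sources):

```python
def fletcher16(data: bytes) -> int:
    # Two running sums modulo 255; the second sum makes the checksum
    # sensitive to byte order, not just to the byte values themselves.
    sum1 = sum2 = 0
    for byte in data:
        sum1 = (sum1 + byte) % 255
        sum2 = (sum2 + sum1) % 255
    return (sum2 << 8) | sum1

print(hex(fletcher16(b"abcde")))  # → 0xc8f0
```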

Brainfuck

bench.b

Language Time, s Memory, MiB Energy, J
Scala (Staged) 0.368±0.002 216.63±01.13 + 20.92±01.02 23.48±00.35
Racket (Staged) 0.886±0.000 100.59±00.24 + 0.00±00.00 34.30±00.04
Rust 1.010±0.000 0.93±00.02 + 0.00±00.00 42.79±00.06
V/gcc 1.052±0.000 1.81±00.01 + 0.00±00.00 43.23±00.07
C++/g++ 1.108±0.003 1.82±00.01 + 0.00±00.00 45.61±00.06
C/gcc 1.114±0.004 0.87±00.02 + 0.00±00.00 47.04±00.72
D/gdc 1.119±0.000 6.30±00.03 + 0.00±00.00 48.52±00.20
C++/clang++ 1.126±0.001 1.62±00.01 + 0.00±00.00 47.12±00.56
C/clang 1.140±0.000 0.88±00.01 + 0.00±00.00 48.18±00.11
Nim/gcc 1.172±0.001 0.88±00.02 + 0.00±00.00 49.58±00.03
Java 1.186±0.001 39.87±00.15 + 1.12±00.07 49.02±00.14
Vala/gcc 1.208±0.002 4.52±00.05 + 0.00±00.00 49.92±00.20
D/ldc2 1.212±0.000 1.39±00.02 + 0.00±00.00 49.90±00.08
Vala/clang 1.232±0.001 4.54±00.04 + 0.00±00.00 51.95±00.55
Kotlin/JVM 1.241±0.006 43.17±00.10 + 0.73±00.13 51.37±00.39
Zig 1.250±0.000 0.90±00.02 + 0.00±00.00 52.00±00.50
Go 1.266±0.001 2.92±00.02 + 0.00±00.00 51.67±00.06
C#/.NET Core 1.367±0.002 32.48±00.13 + 0.35±00.00 57.82±00.38
Go/gccgo 1.478±0.002 23.52±00.06 + 0.00±00.00 62.23±00.14
Nim/clang 1.564±0.000 1.14±00.03 + 0.00±00.00 64.25±00.05
Crystal 1.587±0.001 2.94±00.03 + 0.00±00.00 67.04±00.08
F#/.NET Core 1.596±0.005 37.18±00.05 + 0.38±00.00 67.54±00.26
OCaml 1.666±0.003 3.22±00.03 + 2.40±00.06 77.38±01.12
Chez Scheme 1.728±0.009 24.76±00.03 + 4.32±00.04 73.06±00.25
Racket 1.743±0.028 93.00±00.16 + 21.70±00.24 71.50±01.19
Julia 2.005±0.002 249.55±00.06 + 0.39±00.02 80.47±00.28
C#/Mono 2.053±0.011 25.56±00.05 + 0.00±00.00 86.98±00.39
V/clang 2.072±0.043 1.87±00.03 + 0.00±00.00 91.45±02.59
MLton 2.104±0.016 1.65±00.03 + 0.25±00.00 86.58±00.78
Scala 2.719±0.005 72.47±00.19 + 186.38±00.23 117.38±00.49
Node.js 3.109±0.040 39.22±00.03 + 3.05±00.00 131.56±01.70
D/dmd 3.325±0.002 3.35±00.06 + 0.00±00.00 124.44±00.12
Haskell (MArray) 3.351±0.003 3.33±00.04 + 5.01±00.05 137.97±00.12
Haskell (FP) 3.729±0.002 3.29±00.02 + 5.06±00.01 157.92±00.16
Ruby/truffleruby (JVM) 4.835±0.315 392.60±11.44 + 613.72±75.70 231.20±15.70
Ruby/truffleruby 5.359±0.111 231.64±04.96 + 614.89±40.60 261.55±05.19
Swift 5.687±0.035 16.49±00.01 + 0.00±00.00 211.51±01.74
Lua/luajit 5.885±0.013 2.46±00.04 + 0.00±00.00 241.48±01.05
Python/pypy 9.528±0.067 59.68±00.22 + 29.54±00.16 421.32±02.48
Idris 15.038±0.009 20.65±00.04 + 8.81±00.04 655.65±03.19
Elixir 20.364±0.033 69.97±00.69 + 0.00±00.00 803.28±03.13
PHP 34.238±0.051 17.89±00.24 + 0.00±00.00 1444.37±03.95
Lua 37.552±0.101 2.23±00.02 + 0.00±00.00 1541.38±02.60
Ruby (--jit) 48.990±0.097 16.17±00.03 + 1.79±00.03 2076.12±03.94
Python 59.034±0.179 10.14±00.02 + 0.00±00.00 2610.69±16.45
Ruby 67.526±0.444 14.92±00.05 + 0.00±00.00 2925.73±37.41
Ruby/jruby 82.926±1.428 196.23±03.00 + 196.83±09.04 3684.55±67.38
Tcl (FP) 190.823±0.879 3.91±00.11 + 0.00±00.00 8438.34±32.09
Perl 243.859±0.390 7.08±00.08 + 0.00±00.00 10803.89±32.06
Tcl (OOP) 375.323±1.694 3.92±00.06 + 0.00±00.00 16589.53±109.18

mandel.b

Mandel in Brainfuck

Language Time, s Memory, MiB Energy, J
Scala (Staged) 7.775±0.137 216.68±03.72 + 104.91±04.86 477.84±10.73
C++/g++ 9.742±0.022 1.86±00.03 + 2.28±00.04 392.83±01.08
C#/.NET Core 12.028±0.044 32.54±00.13 + 1.40±00.00 479.38±01.33
C/gcc 12.436±0.010 0.87±00.00 + 0.90±00.06 499.51±01.50
Java 12.789±0.104 39.88±00.07 + 2.20±00.06 506.58±07.58
Kotlin/JVM 12.881±0.169 43.14±00.10 + 1.86±00.42 539.50±07.57
F#/.NET Core 13.105±0.036 37.20±00.05 + 2.06±00.00 523.37±01.78
C/clang 13.128±0.023 0.88±00.01 + 0.90±00.00 565.41±03.18
Zig 13.367±0.016 0.89±00.02 + 1.42±00.06 556.41±00.94
V/gcc 13.757±0.038 1.82±00.02 + 1.16±00.03 552.45±02.88
C++/clang++ 13.831±0.010 1.59±00.01 + 1.94±00.04 561.99±03.09
Racket (Staged) 13.962±0.085 100.54±00.37 + 74.55±01.45 554.37±02.00
D/ldc2 14.037±0.042 3.02±00.03 + 0.79±00.02 563.77±01.66
Rust 14.202±0.012 0.91±00.01 + 1.11±00.04 565.92±00.87
Go 14.223±0.027 2.90±00.01 + 0.00±00.00 564.53±01.56
D/gdc 14.392±0.025 6.29±00.02 + 1.43±00.04 606.89±01.67
Vala/gcc 14.402±0.034 4.45±00.04 + 1.21±00.05 570.53±01.91
Vala/clang 14.821±0.013 4.45±00.05 + 1.23±00.03 604.34±04.34
Crystal 15.370±0.015 2.90±00.04 + 0.75±00.04 641.58±02.85
Nim/gcc 15.526±0.017 2.07±00.03 + 1.29±00.00 655.88±00.74
Scala 15.947±0.026 72.63±00.25 + 136.99±00.15 718.79±03.07
Swift 18.190±0.029 16.31±00.04 + 0.00±00.00 742.06±03.24
Go/gccgo 18.785±0.181 23.45±00.07 + 0.00±00.00 782.84±07.00
Nim/clang 19.782±0.317 2.34±00.05 + 1.29±00.00 811.47±18.83
V/clang 20.726±0.114 1.84±00.02 + 1.18±00.01 908.93±07.01
OCaml 25.253±0.009 4.03±00.04 + 3.51±00.03 1202.82±02.37
Node.js 27.677±0.770 39.36±00.02 + 6.38±00.45 1158.13±33.62
Chez Scheme 27.820±0.019 25.51±00.04 + 3.69±00.02 1208.12±05.81
Julia 29.420±0.209 249.62±00.04 + 0.40±00.02 1125.11±15.07
C#/Mono 31.007±0.025 25.64±00.05 + 0.83±00.00 1292.31±07.06
MLton 33.812±0.028 1.68±00.02 + 4.11±00.00 1543.08±11.31
Lua/luajit 34.664±0.047 2.55±00.05 + 0.44±00.00 1388.00±04.02
Racket 35.182±0.883 92.97±00.11 + 22.08±00.90 1563.87±32.83
Haskell (MArray) 35.583±0.080 4.41±00.03 + 4.74±00.00 1430.91±02.75
D/dmd 37.966±0.007 3.24±00.04 + 0.87±00.01 1375.78±00.62
Python/pypy 40.842±0.111 59.56±00.06 + 30.34±00.12 1798.94±06.36
Ruby/truffleruby 47.770±1.153 231.23±02.27 + 596.94±70.17 2293.29±24.16
Ruby/truffleruby (JVM) 49.197±0.669 404.03±07.49 + 486.99±42.94 2250.75±38.16
Idris 66.213±0.198 21.99±00.01 + 9.54±00.01 2841.60±11.91
Haskell (FP) 78.752±0.164 3.81±00.50 + 75.23±00.47 3253.62±11.87

Base64

Testing Base64 encoding/decoding of a large blob into newly allocated buffers.
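The shape of this test can be sketched in Python (the blob content and size here are placeholders, not the benchmark's actual input):

```python
import base64

# Placeholder blob; the real benchmark uses a much larger input.
blob = b"this is a test blob " * 1000

encoded = base64.b64encode(blob)     # newly allocated output buffer
decoded = base64.b64decode(encoded)  # another newly allocated buffer
assert decoded == blob
print(len(blob), len(encoded))  # Base64 output is ~4/3 of the input size
```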

Base64

Language Time, s Memory, MiB Energy, J
C/clang (aklomp) 0.096±0.000 2.05±00.01 + 0.00±00.00 4.57±00.03
C/gcc (aklomp) 0.098±0.000 2.12±00.06 + 0.00±00.00 4.65±00.03
PHP 0.105±0.000 18.61±00.11 + 0.00±00.00 4.91±00.01
Go (base64x) 0.265±0.003 6.24±00.05 + 0.00±00.00 12.61±00.12
Rust 0.849±0.000 2.32±00.04 + 0.00±00.00 34.90±00.07
C/clang 0.997±0.000 1.97±00.07 + 0.00±00.00 36.59±00.03
D/ldc2 1.070±0.003 3.60±00.03 + 3.40±00.00 44.54±00.20
C/gcc 1.091±0.005 1.99±00.05 + 0.00±00.00 39.84±00.12
Crystal 1.099±0.000 3.56±00.02 + 1.31±00.03 44.77±00.26
Nim/clang 1.102±0.001 1.97±00.02 + 5.82±00.04 44.69±00.16
Nim/gcc 1.327±0.003 1.66±00.03 + 5.27±00.06 54.63±00.31
Java 1.517±0.003 40.73±00.03 + 223.62±25.54 59.90±00.23
V/clang 1.520±0.001 2.34±00.01 + 2376.76±01.68 57.27±00.17
V/gcc 1.555±0.001 2.32±00.01 + 2384.53±02.19 56.53±00.08
Scala 1.584±0.001 68.49±00.42 + 310.69±04.21 64.45±00.28
Vala/clang 1.644±0.001 5.63±00.05 + 0.01±00.00 62.66±00.31
Vala/gcc 1.644±0.001 5.69±00.03 + 0.00±00.00 62.74±00.27
Kotlin/JVM 1.645±0.001 44.08±00.26 + 246.42±02.94 66.00±00.48
Ruby (--jit) 1.653±0.001 16.86±00.04 + 39.29±00.26 64.04±00.12
Ruby 1.654±0.002 15.61±00.01 + 42.85±00.44 64.36±00.09
C++/g++ (libcrypto) 1.707±0.002 5.66±00.04 + 0.64±00.06 68.31±00.81
Go 1.708±0.003 3.69±00.05 + 0.00±00.00 71.71±00.42
C++/clang++ (libcrypto) 1.711±0.002 4.99±00.05 + 0.67±00.00 67.93±00.47
Node.js 1.718±0.008 39.64±00.08 + 36.76±00.25 70.98±00.40
Perl (MIME::Base64) 1.899±0.007 14.79±00.04 + 0.13±00.04 75.33±00.56
F#/.NET Core 2.056±0.022 38.13±00.12 + 12.78±00.98 81.15±00.66
C#/.NET Core 2.168±0.011 33.64±00.09 + 12.27±01.86 85.60±00.41
D/gdc 2.376±0.002 7.32±00.02 + 3.36±00.00 102.54±00.71
Go/gccgo 2.946±0.004 24.36±00.07 + 0.00±00.00 139.41±00.45
Julia 2.976±0.003 265.73±00.10 + 43.68±00.12 119.09±01.00
Python 3.029±0.028 10.27±00.02 + 0.09±00.00 117.09±00.92
Zig 3.199±0.006 1.51±00.02 + 0.00±00.00 123.98±00.21
Python/pypy 3.252±0.006 59.46±00.07 + 31.64±00.06 141.73±00.94
D/dmd 3.307±0.003 3.06±00.03 + 3.89±00.02 138.40±00.38
Tcl 3.563±0.002 5.12±00.01 + 0.00±00.00 143.20±01.71
Ruby/truffleruby (JVM) 3.598±0.167 389.55±04.55 + 282.15±14.81 181.14±06.48
Racket 3.899±0.017 91.14±00.07 + 19.11±00.54 154.12±01.14
C#/Mono 4.639±0.004 26.31±00.12 + 18.68±00.04 187.27±01.21
Ruby/jruby 6.161±0.019 194.23±07.33 + 147.01±29.67 268.70±02.36
Ruby/truffleruby 7.748±0.008 226.42±02.02 + 532.61±40.35 372.98±00.98
Perl (MIME::Base64::Perl) 10.104±0.066 16.18±00.06 + 0.39±00.10 445.74±02.69

Json

Testing parsing and simple calculation of values from a big JSON file.
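A sketch of what such a test looks like in Python (the input below is a tiny hypothetical stand-in, and the field names are illustrative assumptions; the real benchmark uses a much bigger generated file):

```python
import json

# Tiny stand-in for the big JSON file: an object with a "coordinates" array.
text = '{"coordinates": [{"x": 1.5, "y": 2.5, "z": 3.0}, {"x": 0.5, "y": 1.5, "z": 1.0}]}'

# Parse, then do a simple calculation: average each coordinate component.
coords = json.loads(text)["coordinates"]
n = len(coords)
x = sum(c["x"] for c in coords) / n
y = sum(c["y"] for c in coords) / n
z = sum(c["z"] for c in coords) / n
print(x, y, z)  # → 1.0 2.0 2.0
```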

Json

Language Time, s Memory, MiB Energy, J
C++/clang++ (simdjson On-Demand) 0.060±0.000 112.36±00.09 + 60.11±00.03 2.50±00.01
C++/g++ (simdjson On-Demand) 0.061±0.000 113.47±00.03 + 59.81±00.00 2.55±00.01
C++/clang++ (DAW JSON Link NoCheck) 0.082±0.000 112.40±00.05 + 0.00±00.00 3.36±00.02
C++/clang++ (DAW JSON Link) 0.083±0.000 112.32±00.06 + 0.00±00.00 3.49±00.02
C++/g++ (DAW JSON Link NoCheck) 0.083±0.000 113.07±00.03 + 0.00±00.00 3.33±00.01
C++/g++ (DAW JSON Link) 0.087±0.000 113.14±00.03 + 0.00±00.00 3.64±00.02
Rust (Serde Typed) 0.098±0.001 111.63±00.02 + 11.25±00.07 4.16±00.02
C++/clang++ (simdjson DOM) 0.100±0.001 112.38±00.04 + 177.15±00.03 4.55±00.06
Rust (Serde Custom) 0.102±0.000 111.65±00.03 + 0.00±00.00 4.28±00.03
C++/g++ (simdjson DOM) 0.106±0.001 113.49±00.02 + 173.38±00.57 4.84±00.05
D/ldc2 (Mir Asdf DOM) 0.133±0.000 112.76±00.03 + 61.22±00.00 5.51±00.06
C++/clang++ (gason) 0.139±0.000 112.40±00.01 + 96.97±00.06 5.63±00.02
C++/g++ (gason) 0.140±0.000 113.09±00.06 + 96.97±00.06 5.55±00.01
C++/g++ (RapidJSON) 0.152±0.000 113.12±00.04 + 128.94±00.06 6.47±00.07
Scala (jsoniter-scala) 0.156±0.002 291.63±00.25 + 19.39±00.22 8.76±00.12
Go (rjson custom) 0.198±0.000 114.75±00.04 + 0.00±00.00 7.64±00.01
C++/clang++ (RapidJSON) 0.202±0.000 112.41±00.04 + 129.00±00.00 8.61±00.07
C++/g++ (RapidJSON Precise) 0.216±0.001 113.09±00.02 + 126.54±01.43 9.20±00.03
D/ldc2 (Mir Amazon's Ion DOM) 0.217±0.000 112.86±00.02 + 80.70±00.00 9.14±00.05
Go (Sonic) 0.219±0.003 122.98±00.08 + 0.00±00.00 9.50±00.14
Zig 0.222±0.000 110.92±00.01 + 39.28±00.29 9.65±00.04
Go (rjson) 0.233±0.000 114.82±00.01 + 0.00±00.00 9.01±00.02
Go (goccy/go-json) 0.269±0.000 115.49±00.05 + 0.00±00.00 10.55±00.04
C++/clang++ (RapidJSON Precise) 0.284±0.001 112.39±00.04 + 129.00±00.00 12.32±00.14
C++/g++ (RapidJSON SAX) 0.332±0.000 112.96±00.04 + 0.00±00.00 14.76±00.05
C/gcc (yajl) 0.359±0.001 110.89±00.05 + 0.00±00.00 15.48±00.08
C/clang (yajl) 0.360±0.000 110.84±00.04 + 0.00±00.00 15.53±00.03
C++/g++ (Boost.JSON) 0.360±0.001 113.23±00.01 + 308.16±00.02 15.36±00.10
C++/clang++ (Boost.JSON) 0.369±0.001 112.51±00.04 + 308.15±00.03 15.73±00.08
C++/g++ (RapidJSON SAX Precise) 0.385±0.000 112.97±00.04 + 0.00±00.00 17.20±00.18
Nim/clang (jsony) 0.392±0.000 111.41±00.05 + 146.15±00.10 16.44±00.06
C++/clang++ (RapidJSON SAX) 0.404±0.000 194.70±00.01 + 0.00±00.00 17.14±00.03
Nim/gcc (jsony) 0.408±0.001 111.10±00.03 + 154.80±00.25 17.45±00.15
C++/clang++ (RapidJSON SAX Precise) 0.492±0.001 194.62±00.07 + 0.00±00.00 21.71±00.10
Go (jsoniter) 0.502±0.001 115.53±00.04 + 0.00±00.00 20.34±00.12
Rust (Serde Untyped) 0.532±0.001 111.58±00.01 + 840.04±00.01 22.22±00.07
C#/.NET Core (System.Text.Json) 0.543±0.003 489.95±00.13 + 140.83±00.09 24.34±00.10
Julia (JSON3) 0.581±0.001 468.48±00.06 + 221.19±00.97 24.74±00.22
Node.js 0.587±0.008 150.63±00.03 + 195.31±00.62 27.66±00.37
Java (DSL-JSON) 0.606±0.015 262.58±00.11 + 198.42±47.26 31.13±00.90
Python/pypy 0.609±0.002 279.78±00.07 + 125.78±00.10 26.14±00.20
V/gcc 0.612±0.002 111.38±00.03 + 496.18±00.03 25.55±00.05
V/clang 0.614±0.001 111.46±00.03 + 496.21±00.00 25.87±00.30
Nim/gcc (Packedjson) 0.627±0.002 111.85±00.02 + 294.16±00.00 26.56±00.17
Crystal (Pull) 0.630±0.001 113.23±00.02 + 18.39±00.03 27.56±00.13
Nim/clang (Packedjson) 0.645±0.002 112.19±00.04 + 294.16±00.00 27.66±00.14
Crystal (Schema) 0.657±0.001 113.25±00.01 + 48.84±00.02 28.76±00.16
Perl (Cpanel::JSON::XS) 0.760±0.005 125.54±00.05 + 402.80±00.03 31.86±00.10
PHP 0.806±0.002 127.71±00.09 + 517.86±00.06 34.34±00.11
Go 0.865±0.002 114.90±00.10 + 0.00±00.00 35.63±00.07
Crystal 0.930±0.004 113.27±00.03 + 392.50±00.00 40.26±00.42
Nim/gcc 1.011±0.002 111.87±00.03 + 1001.34±00.00 42.14±00.24
Nim/clang 1.050±0.003 112.14±00.01 + 999.02±00.00 43.68±00.16
C#/.NET Core 1.054±0.005 495.82±00.18 + 273.33±00.02 50.59±00.39
C++/clang++ (json-c) 1.124±0.002 112.61±00.04 + 1216.08±00.00 46.57±00.40
C++/g++ (json-c) 1.125±0.004 113.22±00.05 + 1216.08±00.05 47.30±00.59
C++/clang++ (Nlohmann) 1.169±0.003 112.58±00.02 + 360.17±00.03 50.24±00.37
Clojure 1.189±0.018 453.04±02.63 + 627.61±03.05 64.02±00.87
CPython (UltraJSON) 1.275±0.007 122.68±00.01 + 495.97±02.20 47.32±00.25
Python 1.310±0.003 120.04±00.01 + 326.36±00.04 51.84±00.08
Go/gccgo 1.312±0.001 138.89±00.08 + 0.00±00.00 53.70±00.06
C++/g++ (Nlohmann) 1.318±0.004 113.27±00.03 + 448.05±00.00 55.69±00.47
Ruby 1.431±0.007 125.11±00.02 + 261.54±00.04 59.35±00.32
F#/.NET Core (System.Text.Json) 1.490±0.003 498.50±00.19 + 228.02±01.81 68.39±00.53
Ruby (--jit) 1.511±0.005 126.20±00.03 + 263.91±00.04 63.62±00.52
D/ldc2 1.709±0.005 112.68±00.04 + 680.34±00.05 71.02±00.52
Ruby (YAJL) 1.710±0.007 125.05±00.04 + 276.06±00.03 71.72±00.46
C#/Mono 1.794±0.018 252.95±00.09 + 31.51±00.01 78.14±00.87
Haskell 1.954±0.010 115.54±00.19 + 724.11±00.16 83.60±00.51
Rust (jq) 2.514±0.006 113.34±00.06 + 904.16±01.28 104.73±00.81
C++/g++ (Boost.PropertyTree) 2.610±0.004 113.10±00.02 + 1440.12±00.00 111.42±00.31
C++/clang++ (Boost.PropertyTree) 2.675±0.004 194.86±00.08 + 1232.84±00.00 112.42±00.25
Ruby/jruby 2.852±0.034 459.95±04.84 + 924.41±111.79 147.11±01.36
D/dmd 3.062±0.004 113.09±00.02 + 708.78±00.06 130.38±00.62
Vala/clang 3.219±0.007 114.97±00.06 + 980.04±00.01 137.44±01.21
Vala/gcc 3.237±0.020 114.98±00.05 + 980.04±00.03 138.29±00.82
D/gdc 3.507±0.007 116.49±00.02 + 681.00±00.13 148.49±01.16
Racket 3.822±0.020 315.73±01.52 + 228.04±01.93 158.45±00.60
Perl (JSON::Tiny) 9.261±0.027 125.80±00.05 + 528.96±00.07 403.49±05.37
Ruby/truffleruby 10.559±0.056 453.64±07.39 + 1996.42±188.53 606.35±02.82
Ruby/truffleruby (JVM) 10.889±0.111 507.95±09.50 + 2524.50±190.33 684.04±07.51

Matmul

Testing allocating and multiplying matrices.
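The non-BLAS variants boil down to a naive triple-loop multiply; a minimal Python sketch (illustrative only, not the benchmark source):

```python
def matmul(a, b):
    # Naive O(n^3) multiply; b is transposed first so the inner loop
    # walks both operands row-wise (friendlier memory access).
    n, m, p = len(a), len(b), len(b[0])
    bt = [[b[k][j] for k in range(m)] for j in range(p)]
    return [[sum(row[k] * col[k] for k in range(m)) for col in bt]
            for row in a]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

The BLAS-backed entries in the table (lubeck, Arraymancer, NumPy, Eigen, etc.) replace this loop with an optimized library call, which is why they are roughly two orders of magnitude faster.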

Matmul

Language Time, s Memory, MiB Energy, J
D/ldc2 (lubeck) 0.042±0.000 6.05±00.02 + 57.76±00.04 4.39±00.03
V/gcc (VSL + CBLAS) 0.046±0.000 6.65±00.01 + 58.28±00.00 4.67±00.02
V/clang (VSL + CBLAS) 0.046±0.000 6.66±00.00 + 58.28±00.00 4.65±00.03
Nim/gcc (Arraymancer) 0.057±0.001 5.46±00.01 + 57.54±00.08 5.12±00.12
Python (NumPy) 0.063±0.000 31.89±00.03 + 58.47±00.05 6.01±00.03
Nim/clang (Arraymancer) 0.069±0.002 6.08±00.12 + 57.64±00.11 6.21±00.46
Java (ND4J) 0.076±0.001 111.19±00.44 + 92.15±00.01 6.10±00.09
Rust (ndarray) 0.084±0.001 2.28±00.02 + 68.47±00.00 5.88±00.03
Julia (threads: 2) 0.084±0.000 285.04±00.19 + 57.15±00.10 5.30±00.02
Julia (threads: 1) 0.134±0.000 285.02±00.10 + 56.87±00.07 6.67±00.04
C++/g++ (Eigen) 0.141±0.000 4.33±00.03 + 85.25±00.00 7.02±00.08
C++/clang++ (Eigen) 0.143±0.000 4.72±00.03 + 85.37±00.00 6.98±00.04
V/clang (VSL) 0.265±0.002 7.33±00.05 + 51.57±00.00 18.57±00.09
V/gcc (VSL) 0.508±0.005 7.05±00.07 + 51.83±00.00 36.92±00.31
Julia (no BLAS) 1.019±0.001 267.03±00.09 + 51.55±00.02 45.33±00.41
D/ldc2 1.716±0.002 3.25±00.03 + 70.41±00.03 63.27±00.12
D/gdc 1.869±0.001 7.30±00.05 + 70.16±00.01 73.09±00.06
D/dmd 1.879±0.001 3.17±00.01 + 70.45±00.06 71.11±00.08
C/gcc 3.024±0.000 1.47±00.04 + 68.69±00.02 111.60±00.32
V/gcc 3.029±0.001 2.50±00.07 + 68.58±00.00 112.30±00.15
Vala/clang 3.055±0.000 5.45±00.03 + 68.32±00.00 104.58±00.38
V/clang 3.058±0.000 2.82±00.02 + 68.58±00.00 104.68±00.27
C/clang 3.059±0.000 1.48±00.02 + 68.69±00.02 104.43±00.07
Rust 3.060±0.000 2.08±00.01 + 68.57±00.00 104.85±00.38
Zig 3.063±0.001 1.77±00.04 + 68.58±00.00 108.28±00.08
Nim/gcc 3.086±0.001 2.50±00.01 + 58.65±00.90 114.10±00.18
Swift 3.087±0.000 7.92±00.01 + 68.75±00.00 110.56±00.70
Nim/clang 3.115±0.001 2.81±00.03 + 59.55±01.80 107.17±00.80
Vala/gcc 3.120±0.000 4.07±00.12 + 69.68±00.07 114.14±00.13
Java 3.122±0.054 40.63±00.08 + 68.78±00.42 121.93±01.25
Go 3.144±0.000 3.16±00.10 + 0.00±00.00 113.86±00.30
Crystal 3.148±0.000 3.58±00.04 + 60.04±00.05 115.34±00.21
Go/gccgo 3.149±0.001 24.08±00.11 + 0.00±00.00 110.56±00.11
Kotlin/JVM 3.198±0.004 41.84±00.09 + 69.12±00.11 129.91±00.57
Node.js 3.215±0.003 44.18±00.03 + 73.52±00.25 129.44±00.13
Python/pypy 3.254±0.002 60.25±00.06 + 68.93±00.04 135.10±00.11
Scala 3.291±0.004 68.87±00.16 + 160.97±00.31 119.99±00.09
C#/.NET Core 4.892±0.001 34.50±00.06 + 68.91±00.00 196.60±00.37
C#/Mono 7.391±0.000 26.07±00.06 + 69.47±00.01 303.45±02.00
Ruby/truffleruby 17.994±0.531 393.10±15.17 + 540.22±45.98 642.97±17.80
Ruby/truffleruby (JVM) 24.992±0.621 426.21±15.07 + 375.57±32.72 878.03±25.98
Ruby (--jit) 130.222±0.647 17.96±00.11 + 69.69±00.08 5759.30±56.19
Python 137.008±1.286 10.50±00.01 + 68.84±00.00 5966.24±69.75
Ruby 147.268±0.419 15.82±00.06 + 69.13±00.02 6486.49±54.62
Tcl 203.860±0.610 7.29±00.04 + 400.44±00.00 9399.36±48.95
Perl 211.497±2.918 9.52±00.03 + 599.66±00.06 8506.55±88.52
Ruby/jruby 372.794±11.975 286.44±15.01 + 731.29±47.64 15595.51±522.26

Primes

Testing:

  • generating primes using the optimized sieve of Atkin;
  • prefix search over their decimal representations using a trie data structure.

Notes:

  • All languages but V and Python use unordered hashmaps (V and Python don't provide those out of the box, and their hashmaps keep keys in insertion order);
  • The results are always sorted (the sort may be stable or unstable).
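The trie-based prefix search can be sketched as follows (a Python illustration using nested dicts; the end-of-word marker "$" is an arbitrary choice, not taken from the test sources):

```python
def build_trie(words):
    # Nested-dict trie; "$" marks the end of a stored word.
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def find_prefixed(root, prefix):
    # Walk down to the prefix node, then collect every word below it.
    node = root
    for ch in prefix:
        if ch not in node:
            return []
        node = node[ch]
    found = []
    def walk(n, acc):
        if "$" in n:
            found.append(acc)
        for ch, child in n.items():
            if ch != "$":
                walk(child, acc + ch)
    walk(node, prefix)
    return sorted(found)

primes = ["2", "3", "5", "7", "11", "13", "17", "19", "23", "29"]
print(find_prefixed(build_trie(primes), "1"))  # → ['11', '13', '17', '19']
```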

Primes

Language Time, s Memory, MiB Energy, J
Zig 0.055±0.000 0.89±00.02 + 52.96±00.13 2.30±00.03
C++/clang++ 0.061±0.000 3.08±00.03 + 55.45±00.13 2.33±00.02
C++/g++ 0.061±0.000 3.62±00.03 + 71.33±00.13 2.41±00.02
Go 0.075±0.001 2.98±00.06 + 0.00±00.00 3.12±00.05
V/clang 0.094±0.000 1.85±00.01 + 213.78±00.77 3.88±00.04
V/gcc 0.100±0.000 1.85±00.04 + 200.32±00.52 4.10±00.05
Rust 0.116±0.000 2.04±00.06 + 72.94±00.00 4.51±00.04
Crystal 0.137±0.000 3.67±00.05 + 88.43±00.00 5.60±00.05
Java 0.142±0.004 39.67±00.23 + 152.19±02.26 8.17±00.25
Scala 0.205±0.006 73.00±00.19 + 211.33±01.15 11.51±00.26
Node.js 0.259±0.001 38.48±00.01 + 151.74±00.18 12.62±00.09
Nim/clang 0.274±0.000 2.03±00.02 + 605.34±01.29 10.54±00.08
Nim/gcc 0.281±0.001 1.71±00.06 + 615.91±00.00 10.62±00.09
Lua/luajit 0.296±0.001 2.60±00.05 + 156.12±01.21 11.79±00.16
Python/pypy 0.616±0.003 59.06±00.05 + 249.15±00.07 24.84±00.17
Julia 0.647±0.001 267.38±00.10 + 343.34±00.06 24.95±00.23
Racket 0.733±0.002 102.52±00.21 + 242.97±01.65 29.02±00.33
Ruby/truffleruby 0.859±0.019 227.35±02.72 + 720.35±89.33 58.20±01.77
Lua 1.240±0.003 2.62±00.06 + 282.88±00.68 49.93±00.49
Ruby/truffleruby (JVM) 1.342±0.060 393.08±08.09 + 511.72±50.34 87.16±03.88
Ruby (--jit) 1.562±0.009 15.93±00.03 + 170.80±00.75 72.25±00.40
Ruby 1.611±0.012 14.91±00.05 + 142.89±00.01 67.00±00.55
Python 2.115±0.012 10.14±00.01 + 180.79±00.90 90.66±01.17
Ruby/jruby 2.262±0.046 190.88±03.18 + 527.51±36.65 119.07±04.10

Tests Execution

Environment

CPU: Intel(R) Xeon(R) E-2324G

Base Docker image: Debian GNU/Linux bookworm/sid

Language Version
.NET Core 8.0.100
C#/.NET Core 4.8.0-3.23524.11 (f43cd10b)
C#/Mono 6.12.0.200
Chez Scheme 9.5.8
Clojure "1.11.1"
Crystal 1.10.1
D/dmd v2.106.0
D/gdc 13.2.0
D/ldc2 1.35.0
Elixir 1.14.0
F#/.NET Core 12.8.0.0 for F# 8.0
Go go1.21.4
Go/gccgo 13.2.0
Haskell 9.4.8
Idris 2 0.6.0
Java 21.0.1
Julia v"1.9.4"
Kotlin 1.9.21
Lua 5.4.4
Lua/luajit 2.1.1700206165
MLton 20210117
Nim 2.0.0
Node.js v21.3.0
OCaml 5.1.0
PHP 8.2.10
Perl v5.36.0
Python 3.11.6
Python/pypy 7.3.13-final0 for Python 3.10.13
Racket "8.11.1"
Ruby 3.2.2p53
Ruby/jruby 9.4.5.0
Ruby/truffleruby 23.1.1
Rust 1.74.0
Scala 3.3.1
Swift 5.9.1
Tcl 8.6
V 0.4.3 b5ba122
Vala 0.56.13
Zig 0.11.0
clang/clang++ 16.0.6 (15)
gcc/g++ 13.2.0

Using Docker

Build the image:

$ docker build docker/ -t benchmarks

Run the image:

$ docker run -it --rm -v $(pwd):/src benchmarks <cmd>

where <cmd> is:

  • versions (print installed language versions);
  • shell (start the shell);
  • brainfuck bench (build and run Brainfuck bench.b benchmarks);
  • brainfuck mandel (build and run Brainfuck mandel.b benchmarks);
  • base64 (build and run Base64 benchmarks);
  • json (build and run Json benchmarks);
  • matmul (build and run Matmul benchmarks);
  • primes (build and run Primes benchmarks).

Please note that the actual measurements provided in the project are taken semi-manually (via the shell), as a full update takes days and can hit occasional issues in Docker.

There is a ./run.sh script that simplifies Docker usage:

  • ./run.sh build (build the image);
  • ./run.sh make versions (run the image with the versions command);
  • sudo ./run.sh shell (run the image with the shell command; sudo is required to read energy levels).

Manual Execution

Makefiles contain recipes for building and executing tests with the proper dependencies. Please use make run (and make run2 where applicable). The measurements are taken using the analyze.rb script:

$ cd <test suite>
$ ../analyze.rb make run
$ ../analyze.rb make run[<single test>]

Please note that the measurements can take hours. Ten iterations are used by default, but this can be changed via the ATTEMPTS environment variable:

$ ATTEMPTS=1 ../analyze.rb make run

Prerequisites

Please use Dockerfile as a reference regarding which packages and tools are required.

For all (optional):

  • Powercap for reading energy counters in Linux (Debian package powercap-utils).

For Python:

  • NumPy for matmul tests (Debian package python3-numpy).
  • UltraJSON for JSON tests (Debian package python3-ujson).

For C++:

  • Boost for JSON tests (Debian package libboost-dev).
  • JSON-C for JSON tests (Debian package libjson-c-dev).

For Rust:

  • libjq for jq test (Debian packages libjq-dev, libonig-dev and environment variable JQ_LIB_DIR=/usr/lib/x86_64-linux-gnu/).

For Java, Scala:

  • Coursier for downloading Maven artifacts.

For Haskell:

  • network for TCP connectivity between the tests and the test runner.
  • raw-strings-qq for raw string literals used in tests.

For Perl:

  • cpanminus for installing modules from CPAN (Debian package cpanminus).

For Vala:

  • JSON-GLib for JSON tests (Debian package libjson-glib-dev).

Contribution

Please follow the criteria specified in the overview. Besides that, please ensure that the communication protocol between a test and the test runner is satisfied:

  • The test runner listens on localhost:9001;
  • All messages are sent using TCP sockets closed immediately after the message has been sent;
  • A test sends two messages (they establish the measurement boundary):
    1. The begin message, in the format name of the test, then a Tab character, then the process ID (the process ID is used to measure memory consumption). Please note that the test name cannot contain a Tab character, as it is the delimiter;
    2. The end message, with any content (usually "stop" for consistency).
  • The test runner may be unavailable (if the test is launched standalone), and the test should handle that gracefully.
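The protocol above can be sketched from the test side in Python (the test name is a hypothetical example):

```python
import os
import socket

def notify(message: str) -> None:
    # One message per TCP connection, closed right after sending;
    # a missing runner is tolerated so the test still runs standalone.
    try:
        with socket.create_connection(("localhost", 9001), timeout=1) as s:
            s.sendall(message.encode())
    except OSError:
        pass

notify(f"example_test\t{os.getpid()}")  # begin: name, Tab, process ID
# ... the measured workload runs here ...
notify("stop")                          # end: any content works
```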

Makefile guide

Binary executables

If the test is compiled into a single binary, then two sections of the Makefile require changes:

  • append a new target (the final binary location) into executables variable;
  • append the proper target rule.

Compiled artifacts

If the test is compiled, but can't be executed directly as a binary, then three sections of the Makefile require changes:

  • append a new target (the final artifact location) into artifacts variable;
  • append the proper target rule to compile the test;
  • append run[<target_artifact>] rule to run the test.

Scripting language

If the test doesn't require compilation, then two sections of the Makefile require changes:

  • append run[<script_file>] into all_runners variable;
  • append run[<script_file>] rule to run the test.

README update

TOC is regenerated using git-markdown-toc:

./run.sh toc

Contributors

9il, akarin123, beached, cmcaine, dbohdan, dtolnay, gohryt, goldenreign, jackstouffer, k-bx, kostya, lqdc, martinnowak, miloyip, nuald, orthoxerox, philnguyen, pmarcelll, proyb6, radszy, rap2hpoutre, ricvelozo, sfesenko, snadrus, tchaloupka, w-diesel, willabides, yardanico, zapov, zhaozhixu


Issues

Update PyPy to latest.

Latest PyPy release is 5.6.0.
Speed will improve, but not by much (still, it is the newest release).

Update Kotlin

The benchmark uses a very out-of-date version of Kotlin (1.0.3). Please update Kotlin to the latest stable version (1.3.11).

Runtime & Compiler Updates

Crystal 0.20.0 [b0cc6f7] (2016-11-22)
[latest is 0.21.0]

LDC - the LLVM D compiler (0.15.2-beta1)
[latest is 1.2.0-beta1]

DMD64 D Compiler v2.068.0
[latest is v2.073.2]

gdc (crosstool-NG crosstool-ng-1.20.0-232-gc746732 - 20150830-2.066.1-dadb5a3784) 5.2.0
[latest is 2.068.2]

Brainfuck v2 implementations are broken

The benchmarked implementations of brainfuck are (mostly) not correct. I tested the C and Python versions, but I suspect they all share an algorithmic bug.

Failing testcase: http://esoteric.sange.fi/brainfuck/bf-source/prog/BOTTLES.BF

Correct reference interpreter (generator): https://github.com/pablojorge/brainfuck/blob/master/haskell/bf2c.hs

Expected behavior: print bottles from 99 to zero, quickly.

Actual behavior: BF interpreters freeze after 91 bottles remain.

Matmul - Julia - Single-precision floats

You use single-precision (32-bit) floats for the Julia version of Matmul. That's kind of cheating compared to the other implementations that use double-precision (64-bit) floats.

Consider use JMH to run JVM benchmarks

Hi.

Since the JVM (up to JDK 8) has several issues regarding its warm-up process, I would like to suggest using JMH for the JVM-related benchmarks (Kotlin, Scala and Java itself). It will generate results closer to the production environment, where the JVM has already applied most of its JIT optimizations.

BTW, I'm glad with your benchmark initiative. Good job!

Cheers!

What about PHP?

PHP is slow, we all know that, but it would be interesting to know how slow (it should be done with the PHP 7 CLI, I think).

EDIT: I could submit a PR if you want.

Swift

Hello

Can you add swift benchmark please?

Json benchmark

Hi!

It would be interesting to see a benchmark comparing json parsing. You can try with this big json: https://github.com/zeMirco/sf-city-lots-json

We tried hard to optimize JSON parsing in Crystal, and we believe it might be one of the fastest out there. And, as usual, it's implemented in Crystal itself.

Here's some code you can try:

require "json"

text = File.read("citylots.json")
json = JSON.parse(text).as_h
puts json.size

Thanks!

Node.js UPDATE

Please kindly update the Node.js version, or at least add a new entry that goes like 'JavaScript Node Latest'.

Almost all the other languages and implementations are using bleeding-edge versions, except JavaScript Node.js and JavaScript V8.

Node.js latest stable = 5.0.0

Include Vert.x JavaScript

Please include Vert.x ( http://vertx.io/ ) in your benchmarks, as the polyglot platform seems very promising and tends to benefit from the Java HotSpot runtime optimizations... perhaps some benchmark warm-up will be needed. Thanks

Nim & Clang Update

You're using the very latest version of Rust and Go compilers, but your Nim is 8.5 months behind...

The current version of Nim is 0.16.0 stable (or 0.16.1 devel) and Clang 3.9.1 (or ideally 4.0 SVN).

Also please make sure you're compiling Nim code with -d:release.

Thank you very much for a great benchmark! 🥇 😃

latest version of jruby with java 10 and graal

Hello,
Can we run this benchmark against latest jruby 9.2, with java 10 and enable this options for java:
export JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Xcompile.invokedynamic -Xfixnum.cache=false -Xmn512m -Xms2048m -Xmx2048m"

Swift in Base64

It should work on Linux, and with GCD included it should speed up significantly, becoming as fast as Rust.

import Foundation

let strsize = 10_000_000
let tries = 100
let longString = String(repeating: "a", count: strsize)
let data = longString.data(using: .utf8)
var base64en:Data? = nil
var total: Int = 0

// Encode
for _ in 0..<tries {
    autoreleasepool {
        base64en = data!.base64EncodedData()
        total = total &+ base64en!.endIndex
    }
}
print(total)

// Decode
total = 0
for _ in 0..<tries {
    autoreleasepool {
        total = total &+ Data(base64Encoded: base64en!)!.endIndex
    }
}
print(total)
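For reference, the same encode/decode loop shape can be sketched in Python (sizes scaled down from the issue's 10_000_000 / 100 so it runs instantly; variable names are hypothetical):

```python
import base64

strsize, tries = 1000, 10
data = b"a" * strsize

# Encode: accumulate the encoded length over all tries.
enc_total = 0
for _ in range(tries):
    encoded = base64.b64encode(data)
    enc_total += len(encoded)
print(enc_total)  # → 13360 (10 * ceil(1000 / 3) * 4)

# Decode: accumulate the decoded length over all tries.
dec_total = 0
for _ in range(tries):
    dec_total += len(base64.b64decode(encoded))
print(dec_total)  # → 10000
```

Summing lengths instead of keeping the data around mirrors the Swift snippet's trick of preventing the optimizer from eliding the work.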

Nim 0.11.2

Nim has recently updated to 0.11.2, any news on updating?

Add PyPy3.5 to testing

There was a first PyPy3.5 beta release with Python 3.5 support, maybe include it as well?

Mandelbrot implementation

I think I missed where the Mandelbrot is implemented.

Could you link it into the readme file?

Also, very cool that you reference your source for the origin of many of these benchmarks.
Thanks

Julia code runs in global scope

In Julia running code outside of predefined functions carries a heavy performance penalty. For instance, simply rewriting the matrix multiplication benchmark as follows yields a performance improvement by a factor of 3 on my machine:

function matgen(n)
    tmp = 1.0 / n / n
    [ float32(tmp * (i - j) * (i + j - 2)) for i=1:n, j=1:n ]
end

function main()
    n = 100
    if length(ARGS) >= 1
        n = int(ARGS[1])
    end
    t = time()
    n = int(n / 2 * 2)
    a = matgen(n)
    b = matgen(n)
    c = a * b
    v = int(n / 2) + 1
    println(c[v, v])
    println(time() - t)
end

main()
main()

The same goes for the other benchmarks. Technically, comprehensions are fairly slow too (compared to unrolled @simd/@inbounds annotated for loops), but the matrix generation doesn't particularly matter in this benchmark. Also note that the main() function is invoked twice here to show the kind of performance improvement the JIT produces (roughly 300 times on my machine). In general, it is good practice in Julia to first run performance sensitive functions on a tiny dataset to invoke the JIT, then run the actual computation.

P.S. Also note that this particular benchmark implementation essentially measures the performance of whatever OpenBLAS version you compiled Julia to use and virtually any language should be able to obtain similar results.
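For illustration, the matgen fill pattern and the naive triple-loop multiply that the reference implementations use can be sketched in pure Python on a tiny n (hypothetical sketch; the real benchmark uses n = 100, and Julia's `a * b` goes through BLAS):

```python
def matgen(n):
    # Same fill pattern as the Julia matgen: tmp * (i - j) * (i + j - 2),
    # with 1-based indices i, j.
    tmp = 1.0 / n / n
    return [[tmp * (i - j) * (i + j - 2) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def matmul(a, b):
    # Naive triple loop, as the non-BLAS reference implementations do it.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

c = matmul(matgen(2), matgen(2))
print(c[1][1])  # center element (v = n/2 + 1 in 1-based terms) → -0.0625
```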

The C++ implementation for bench.b could be made 20% faster on x64 and twice as fast on x86

Numbers are with printing disabled.

#include <cstdio>
#include <string>
#include <vector>

using std::string;
using std::vector;

namespace modified
{
	enum op_type {
		INC,
		MOVE,
		LOOP,
		PRINT
	};

	struct Op;
	using Ops = vector<Op>;

	using data_t = ptrdiff_t;

	struct Op
	{
		op_type op;
		data_t val;
		Ops loop;
		Op(Ops v) : op(LOOP), loop(v) {}
		Op(op_type _op, data_t v = 0) : op(_op), val(v) {}
	};

	class Tape
	{
		using vect = vector<data_t>;
		vect	tape;
		vect::iterator	pos;
	public:
		Tape()
		{
			tape.reserve(8);
			tape.push_back(0);
			pos = tape.begin();
		}

		inline data_t get() const
		{
			return *pos;
		}
		inline void inc(data_t x)
		{
			*pos += x;
		}
		inline void move(data_t x)
		{
			auto d = std::distance(tape.begin(), pos);
			d += x;
			if (d >= (data_t)tape.size())
				tape.resize(d + 1);
			pos = tape.begin();
			std::advance(pos, d);
		}
	};

	class Program
	{
		Ops ops;
	public:
		Program(const string& code)
		{
			auto iterator = code.cbegin();
			ops = parse(&iterator, code.cend());
		}

		void run() const
		{
			Tape tape;
			_run(ops, tape);
		}
	private:
		static Ops parse(string::const_iterator *iterator, string::const_iterator end)
		{
			Ops res;
			while (*iterator != end)
			{
				char c = **iterator;
				*iterator += 1;
				switch (c) {
				case '+':
					res.emplace_back(INC, 1);
					break;
				case '-':
					res.emplace_back(INC, -1);
					break;
				case '>':
					res.emplace_back(MOVE, 1);
					break;
				case '<':
					res.emplace_back(MOVE, -1);
					break;
				case '.':
					res.emplace_back(PRINT);
					break;
				case '[':
					res.emplace_back(parse(iterator, end));
					break;
				case ']':
					return res;
				}
			}
			return res;
		}

		static void _run(const Ops &program, Tape &tape)
		{
			for (auto &op : program)
			{
				switch (op.op) 
				{
				case INC:
					tape.inc(op.val);
					break;
				case MOVE:
					tape.move(op.val);
					break;
				case LOOP:
					while (tape.get() > 0)
						_run(op.loop, tape);
					break;
				case PRINT:
					if (do_print())	// print-enable flag from the author's harness (not shown here)
					{
						printf("%c", (int)tape.get());
						fflush(stdout);
					}
					break;
				}
			}
		}
	};
}

bf2_bench.zip
x86
x64

Suggestion: add compile/build duration

A common concern about newer (Rust, Scala, Swift) or older (Haskell) compiled languages is build/compile speed; it would be nice to see the build duration as well.

Repeat benchmarks to eliminate noise

I can run the same benchmark a few times and get wildly different results. Consider having xtime.rb loop 10-100 times and take the minimum to filter out some of this noise.
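A minimal sketch of the idea, in Python rather than the repo's xtime.rb (`best_of` is a hypothetical helper):

```python
import subprocess, sys, time

def best_of(cmd, runs=10):
    """Run `cmd` repeatedly and keep the fastest wall-clock time,
    which filters out scheduler and cache noise."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return min(times)

# Example: time a no-op Python process a few times.
print(best_of([sys.executable, "-c", "pass"], runs=3))
```

Taking the minimum (rather than the mean) is the usual choice here, since noise can only ever make a run slower, never faster.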

Suggestions

Please include C Clang for the Base64 benchmark. My results on my machine:
GCC:
encode: 1333333600, 1.08
decode: 1000000000, 2.07

Clang:
encode: 1333333600, 1.23
decode: 1000000000, 1.44

Also, please modify the D implementation of the Matmul benchmark: dotProduct is optimized, while every other language uses the naive implementation (that's why D is so fast in this benchmark).

UPDATES Required

NodeJS is now 5.7.0
Go is now 1.6

Both should have significant performance improvements

BF benchmark: Kotlin uses arrays while Java and C# use lists.

Hi. If you change C# to use int[] instead of List<int> for the tape and the program, it becomes much faster. Please align the implementations to use the same abstractions. If you want, I can submit a PR for C#, but I think it's better to change the Kotlin version.

Julia native result reported for Matmul is not from xtime

...or at least, I strongly believe it is.

I think you used its own self-reported time rather than the output of xtime.rb by mistake in this case. matmul-native.jl prints the time that it thinks it takes. This isn't fair to the other benchmarks because only the julia-native code gets to ignore the overhead of the testing framework.

On my machine I get similar results for Rust and C, but Julia-native's time is way off. Instead of something close to 0.15s I get 0.75s, ~5x slower than reported. On the other hand, the other languages are slightly faster, which makes sense since I'm using a new i7 instead of an i5.

Crystal flags for benchmarking.

Hello,

I don't know if it isn't used already, but, here goes, per Crystal's own documentation:
"Make sure to always use --release for production-ready executables and when performing benchmarks."

Just my 2 cents.

Mono is faster with --llvm flag

Hi,

On my setup (OS X 10.10, Mono JIT compiler version 3.12.0), running matmul.exe with the --llvm flag enabled takes 11.71s, whereas the original run took 21.60s.

Resulting run command looks like:

../xtime.rb mono -O=all --gc=sgen --llvm matmul.exe 1500

Is there such option on your Ubuntu setup? If yes, would you check how it affects the performance?

Julia Timing

One thing that I discovered in Julia is that the current benchmark is not very accurate. It would be better to call @time main() in order to get the time and memory consumption sans the JIT for more accurate results. I have found this makes some difference in the results. For example, with brainfuck the results show Julia to be only 0.45 seconds slower than Crystal.

BF2: Kotlin does not flush stdout after each character

In the Kotlin program, printing is handled by Kotlin's print(char) function, which calls directly into Java's print function and only flushes on newline. Kotlin should flush the output stream after each character is printed to implement the behavior specified in the README, that "stdout should be flushed after each symbol."

An alternative solution would be to not flush stdout in the other languages, instead leaving it up to the standard library's natural flow.
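In Python terms, the README's per-symbol rule amounts to an explicit flush after every character (a sketch; `emit` is a hypothetical helper, shown against an in-memory stream):

```python
import io, sys

def emit(ch, stream=sys.stdout):
    # Write one character and flush immediately, per the README's rule
    # that "stdout should be flushed after each symbol."
    stream.write(ch)
    stream.flush()

buf = io.StringIO()
for ch in "Hi":
    emit(ch, buf)
print(buf.getvalue())  # → Hi
```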

C# benchmark with coreclr

I know we have the benchmarks with Mono but since coreclr was just released it would be great to get the benchmarks updated with that.

Clojure JSON benchmark

Here's a solution for Clojure using the Cheshire parser:

(require '[cheshire.core :refer [parse-stream]])

(let [data (parse-stream (clojure.java.io/reader "./1.json") true)
      len  (count data)]
  (loop [sx 0.0 sy 0.0 sz 0.0 [coord & coords] data]
    (if-let [{:keys [x y z]} coord]
      (recur (+ sx x) (+ sy y) (+ sz z) coords)
      (println (/ sx len) (/ sy len) (/ sz len)))))
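For comparison, the same averaging logic in Python over a tiny inline document (a hypothetical stand-in for the benchmark's generated 1.json):

```python
import json

# Two coordinates are enough to exercise the parse-then-average shape.
text = json.dumps({"coordinates": [
    {"x": 1.0, "y": 2.0, "z": 3.0},
    {"x": 3.0, "y": 4.0, "z": 5.0},
]})

coords = json.loads(text)["coordinates"]
n = len(coords)
print(sum(c["x"] for c in coords) / n,
      sum(c["y"] for c in coords) / n,
      sum(c["z"] for c in coords) / n)  # → 2.0 3.0 4.0
```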

A few runtime updates

Go -> 1.7
NodeJS -> 6.4
Python3 -> 3.5.2

Removal:
JXCore - unmaintained, defunct, and abandoned

Brainfuck V2 implementations are broken

Many were broken by 387b17d.

The loop condition should test for zero vs. non-zero, not greater-than-zero.

This "Hello World!" program contains a relevant test case.

>++++++++[-<+++++++++>]<.>[][<-]>+>-[+]++>++>+++[>[->+++<<+++>]<<]>-----.
>->+++..+++.>-.<<+[>[+>+]>>]<--------------.>>.+++.------.--------.>+.>+.

NB: Some languages use an unsigned byte cell value, which would make the two condition types equivalent.
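The difference is easy to reproduce with a minimal interpreter sketch (Python, signed cells, no I/O ops; not the repo's implementation):

```python
def run_bf(code, cond):
    """Minimal brainfuck interpreter over + - < > [ ] with signed cells.
    `cond` decides whether a loop keeps running for a given cell value."""
    stack, match = [], {}
    for i, ch in enumerate(code):          # pre-match brackets
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    tape, pos, pc = [0], 0, 0
    while pc < len(code):
        ch = code[pc]
        if ch == '+':
            tape[pos] += 1
        elif ch == '-':
            tape[pos] -= 1
        elif ch == '>':
            pos += 1
            if pos == len(tape):
                tape.append(0)
        elif ch == '<':
            pos -= 1
        elif ch == '[' and not cond(tape[pos]):
            pc = match[pc]                 # skip the loop body
        elif ch == ']' and cond(tape[pos]):
            pc = match[pc]                 # jump back to '['
        pc += 1
    return tape

# "-[+]" drives the cell to -1, then should loop it back up to 0.
tape_ok = run_bf("-[+]", lambda v: v != 0)   # correct: non-zero test
tape_bad = run_bf("-[+]", lambda v: v > 0)   # broken: greater-than test
print(tape_ok[0], tape_bad[0])  # → 0 -1
```

With the greater-than condition, the loop body on a negative cell is never entered at all, which is exactly how the affected implementations diverge on the test program above.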
