
lh3 / biofast

Benchmarking programming languages/implementations for common tasks in Bioinformatics

Languages: Makefile 1.51%, Python 7.57%, Julia 9.80%, Nim 9.37%, C 31.67%, JavaScript 3.08%, Lua 4.51%, Crystal 6.80%, Go 3.78%, Rust 1.88%, R 1.41%, Scala 9.43%, D 4.28%, F# 4.91%

Topics: bioinformatics

biofast's People

Contributors

bicycle1885, bovee, cjw85, eug48, jiazhen-rong, keats, lh3, mbhall88, nh13, pcostanza, poisonalien, sstadick

biofast's Issues

Suggestions for fastq benchmark

Sorry if I have jumped the gun here and these suggestions are forthcoming. These would be the two libraries I use most (and I suspect quite a few others do too) for the respective languages. I'm interested to see where they fall in the existing benchmark.

Not sure if this is also of interest, but hyperfine is a nice tool for doing this kind of benchmarking. Not only will it give relative times over multiple runs, it also allows for warm-up runs (useful for I/O-intensive processes). I've used it for a very similar task here.
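
For illustration, a minimal sketch of such an invocation (the binary name is hypothetical; `--warmup 3` performs three untimed runs before measuring, which lets the OS page cache settle):

$ hyperfine --warmup 3 './fqcnt biofast-data-v1/M_abscessus_HiSeq.fq.gz'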

EDIT:
Just read the accompanying blog post again and noticed you mentioned you implemented the benchmarks in languages you are familiar with. I'm happy to provide Rust examples if needed.

Shorter run time because of the Crystal v1.2.1 release

Crystal released v1.2.1 recently, so I tested fqcnt_cr1_klib.cr and bedcov_cr1_klib.cr (without modifying either file) on my computer; the results are shown below:

   For fqcnt: the run time on the plain-text file drops from 1.5s to 0.9s, but the gzipped file still takes about 9s.
   For bedcov: g2r takes 6.9s instead of 8.8s, and r2g takes 10.8s instead of 14.8s.

The shorter run times might be due to hardware differences rather than the Crystal version, so I reran the prebuilt binaries from biofast-bin-20200520-a7af6d8.tar.bz2 on my machine:

## fqcnt
$ hyperfine --warmup 3 ' ../biofast-bin-20200520-a7af6d8/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq'
Benchmark 1:  ../biofast-bin-20200520-a7af6d8/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq
  Time (mean ± σ):     891.0 ms ±  49.2 ms    [User: 649.7 ms, System: 213.5 ms]
  Range (min … max):   864.6 ms … 1028.0 ms    10 runs

$ hyperfine --warmup 3 ' ../biofast-bin-20200520-a7af6d8/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq.gz'
Benchmark 1:  ../biofast-bin-20200520-a7af6d8/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq.gz
  Time (mean ± σ):      8.938 s ±  0.042 s    [User: 8.719 s, System: 0.096 s]
  Range (min … max):    8.894 s …  9.041 s    10 runs

## bedcov
$ hyperfine --warmup 3 ' ../biofast-bin-20200520-a7af6d8/bedcov/bedcov_cr1_klib   ex-rna.bed ex-anno.bed  # g2r'
Benchmark 1:  ../biofast-bin-20200520-a7af6d8/bedcov/bedcov_cr1_klib   ex-rna.bed ex-anno.bed  # g2r
  Time (mean ± σ):      7.927 s ±  0.034 s    [User: 7.272 s, System: 0.525 s]
  Range (min … max):    7.878 s …  7.976 s    10 runs

$ hyperfine --warmup 3 ' ../biofast-bin-20200520-a7af6d8/bedcov/bedcov_cr1_klib  ex-anno.bed ex-rna.bed  # r2g'
Benchmark 1:  ../biofast-bin-20200520-a7af6d8/bedcov/bedcov_cr1_klib  ex-anno.bed ex-rna.bed  # r2g
  Time (mean ± σ):     17.731 s ±  0.069 s    [User: 14.906 s, System: 2.579 s]
  Range (min … max):   17.632 s … 17.810 s    10 runs

So with the latest Crystal v1.2.1 (taking the hardware difference into consideration):

      For fqcnt, it takes slightly more time.
      For bedcov, it takes slightly less time (especially for r2g).


Details below:

System and Crystal version

$ lscpu|grep -E  'Model name|CPU family'
CPU family:          6
Model name:          Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz

$ cat /etc/os-release |grep PRETTY_NAME
PRETTY_NAME="Ubuntu 18.04.6 LTS"

$ crystal -v
Crystal 1.2.1 [4e6c0f26e] (2021-10-21)
LLVM: 10.0.0

$ git clone https://github.com/lh3/biofast.git

fqcnt with Crystal 1.2.1

$ crystal build fqcnt_cr1_klib.cr --release

$ ll biofast-data-v1/*fq
-rw-rw-r-- 1 ubuntu ubuntu 1396487030 Oct 23 10:50 biofast-data-v1/M_abscessus_HiSeq.fq

$ hyperfine --warmup 3 '~/biofast/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq'
Benchmark 1: ~/biofast/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq
  Time (mean ± σ):     968.0 ms ±   8.0 ms    [User: 743.2 ms, System: 206.8 ms]
  Range (min … max):   960.7 ms … 981.4 ms    10 runs

# update LLVM from v10 to v12, then recompile fqcnt_cr1_klib.cr
$ crystal_llvm12 build fqcnt_cr1_klib.cr -o fqcnt_cr1_klib_llvm12 --release

$ hyperfine --warmup 3 '~/biofast/fqcnt/fqcnt_cr1_klib_llvm12 biofast-data-v1/M_abscessus_HiSeq.fq'
Benchmark 1: ~/biofast/fqcnt/fqcnt_cr1_klib_llvm12 biofast-data-v1/M_abscessus_HiSeq.fq
  Time (mean ± σ):     931.0 ms ±   6.0 ms    [User: 716.9 ms, System: 197.2 ms]
  Range (min … max):   923.5 ms … 940.3 ms    10 runs


$ gzip biofast-data-v1/M_abscessus_HiSeq.fq
$ ll -sh biofast-data-v1/*gz
465M -rw-r--r-- 1 ubuntu ubuntu 465M May  4  2020 biofast-data-v1/M_abscessus_HiSeq.fq.gz

$ hyperfine --warmup 3 '~/biofast/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq.gz'
Benchmark 1: ~/biofast/fqcnt/fqcnt_cr1_klib biofast-data-v1/M_abscessus_HiSeq.fq.gz
  Time (mean ± σ):      9.100 s ±  0.068 s    [User: 8.853 s, System: 0.107 s]
  Range (min … max):    9.030 s …  9.259 s    10 runs

$ hyperfine --warmup 3 '~/biofast/fqcnt/fqcnt_cr1_klib_llvm12 biofast-data-v1/M_abscessus_HiSeq.fq.gz'
Benchmark 1: ~/biofast/fqcnt/fqcnt_cr1_klib_llvm12 biofast-data-v1/M_abscessus_HiSeq.fq.gz
  Time (mean ± σ):      9.082 s ±  0.023 s    [User: 8.848 s, System: 0.099 s]
  Range (min … max):    9.046 s …  9.119 s    10 runs

bedcov with Crystal 1.2.1

$ hyperfine --warmup 3 './bedcov_cr1_klib ex-rna.bed ex-anno.bed   # g2r'
Benchmark 1: ./bedcov_cr1_klib ex-rna.bed ex-anno.bed   # g2r
  Time (mean ± σ):      6.921 s ±  0.023 s    [User: 6.587 s, System: 0.222 s]
  Range (min … max):    6.887 s …  6.954 s    10 runs

$ hyperfine --warmup 3 './bedcov_cr1_klib_llvm12 ex-rna.bed ex-anno.bed   # g2r'
Benchmark 1: ./bedcov_cr1_klib_llvm12 ex-rna.bed ex-anno.bed   # g2r
  Time (mean ± σ):      6.827 s ±  0.047 s    [User: 6.501 s, System: 0.216 s]
  Range (min … max):    6.756 s …  6.943 s    10 runs


$ hyperfine --warmup 3 './bedcov_cr1_klib ex-anno.bed ex-rna.bed  # r2g'
Benchmark 1: ./bedcov_cr1_klib ex-anno.bed ex-rna.bed  # r2g
  Time (mean ± σ):     10.846 s ±  0.067 s    [User: 10.524 s, System: 0.139 s]
  Range (min … max):   10.739 s … 10.956 s    10 runs

$ hyperfine --warmup 3 './bedcov_cr1_klib_llvm12 ex-anno.bed ex-rna.bed  # r2g'
Benchmark 1: ./bedcov_cr1_klib_llvm12 ex-anno.bed ex-rna.bed  # r2g
  Time (mean ± σ):     10.637 s ±  0.166 s    [User: 10.339 s, System: 0.138 s]
  Range (min … max):   10.498 s … 11.079 s    10 runs

Swift benchmark

I'd be curious to see a benchmark with Swift, but I don't use it myself. Putting this here in case anyone wants to take it up.

Use `-d:danger --gc:arc` for Nim

At least for Nim development, the use of the -d:danger flag can dramatically improve speed if you take care when writing your code. I imagine other languages here likely have their own optimal configurations as well. I might be missing it, but I can't see what flags were used during compilation. Is that documented anywhere?
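
For reference, a hedged sketch of such a compile command (the source filename is hypothetical; `-d:danger` disables all runtime checks and `--gc:arc` selects the ARC memory manager):

$ nim c -d:danger --gc:arc -o:fqcnt_nim1_klib fqcnt_nim1_klib.nim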

One-liners to count fq

Combinations of Linux system commands:

  • SED (quite fast); prints the record count, total sequence length and total quality length, tab-separated (piping through `tr -d '\n'` keeps `wc -m` from also counting the newline at the end of each line)
echo -e $(gzip -cd M_abscessus_HiSeq.fq.gz | sed -n '1~4p' | wc -l)"\t"$(gzip -cd M_abscessus_HiSeq.fq.gz | sed -n '2~4p' | tr -d '\n' | wc -m)"\t"$(gzip -cd M_abscessus_HiSeq.fq.gz | sed -n '4~4p' | tr -d '\n' | wc -m)
  • AWK (slowest)
gzip -cd M_abscessus_HiSeq.fq.gz | awk 'BEGIN{OFS="\t";a=0;b=0;c=0}{d=NR%4;if(d==1){a+=1;next}else if(d==2){b+=length($0);next}else if(d==0){c+=length($0);next}}END{print a,b,c}'
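
To compare these against the compiled implementations, hyperfine (mentioned in another issue here) can time the pipelines too; a minimal sketch timing just the record-count stage, assuming the benchmark data layout:

$ hyperfine --warmup 3 'gzip -cd biofast-data-v1/M_abscessus_HiSeq.fq.gz | sed -n "1~4p" | wc -l'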

Benchmarks do not run long enough; too biased by non-reading code performance

The benchmarks run for only a few seconds to under a minute. If a language takes a while to start up and/or shut down its REPL or JVM, or has slow code outside the reading/writing of the FASTQ (e.g. Java reflection for arg parsing), then the benchmark speeds are more indicative of that than of the core part we care about (reading FASTQs). In your own words:

When you read through a 100Gb gzip’d fastq file, performance matters. 30min vs 1hr is a huge difference. High-performance tools often put fastq reading in a separate thread because it is too slow. Zlib is the main bottleneck here, but parsing time should be minimized as well.

A few things can be done to isolate the reading/writing code:

  1. Run a benchmark with a single FASTQ read/record. The downside is that any start-up/shutdown time in the reading code is included in the total time.
  2. Run a benchmark on a 100Gb gzip'd fastq file, as you mention above, or on something suitably large such that a 5-20s overhead can either be subtracted using the value from (1) or simply included because it is negligible over a 30m run time (see the sketch after this list).
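
A rough sketch of combining (1) and (2) with hyperfine (the counter binary name and file names are hypothetical): time a one-record file to estimate the fixed start-up/shutdown overhead, then subtract that from the full-file measurement:

$ gzip -cd biofast-data-v1/M_abscessus_HiSeq.fq.gz | head -4 | gzip > one_record.fq.gz
$ hyperfine --warmup 3 './fqcnt one_record.fq.gz'    # approximates the fixed overhead
$ hyperfine --warmup 3 './fqcnt biofast-data-v1/M_abscessus_HiSeq.fq.gz'    # overhead + real work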

Related to: #5
