
octoml / mlcommons-inference


Fork of MLCommons inference repository to test TVM integration

License: Other


mlcommons-inference's Introduction

MLPerf™ Inference Benchmark Suite

MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios.

Please see the MLPerf Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite. If you use any part of this benchmark (e.g., reference implementations or submissions), please cite the following:

@misc{reddi2019mlperf,
    title={MLPerf Inference Benchmark},
    author={Vijay Janapa Reddi and Christine Cheng and David Kanter and Peter Mattson and Guenther Schmuelling and Carole-Jean Wu and Brian Anderson and Maximilien Breughe and Mark Charlebois and William Chou and Ramesh Chukka and Cody Coleman and Sam Davis and Pan Deng and Greg Diamos and Jared Duke and Dave Fick and J. Scott Gardner and Itay Hubara and Sachin Idgunji and Thomas B. Jablin and Jeff Jiao and Tom St. John and Pankaj Kanwar and David Lee and Jeffery Liao and Anton Lokhmotov and Francisco Massa and Peng Meng and Paulius Micikevicius and Colin Osborne and Gennady Pekhimenko and Arun Tejusve Raghunath Rajan and Dilip Sequeira and Ashish Sirasao and Fei Sun and Hanlin Tang and Michael Thomson and Frank Wei and Ephrem Wu and Lingjie Xu and Koichi Yamada and Bing Yu and George Yuan and Aaron Zhong and Peizhao Zhang and Yuchen Zhou},
    year={2019},
    eprint={1911.02549},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MLPerf Inference v2.0 (submission 02/25/2022)

Use the r2.0 branch (git checkout r2.0) if you want to submit or reproduce v2.0 results.

See the individual README files in the reference apps for details.

model | reference app | framework | dataset
resnet50-v1.5 | vision/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300
ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200
bert | language/bert | tensorflow, pytorch, onnx | squad-1.1
dlrm | recommendation/dlrm | pytorch, tensorflow(?), onnx(?) | Criteo Terabyte
3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19
rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus

MLPerf Inference v1.1 (submission 08/13/2021)

Use the r1.1 branch (git checkout r1.1) if you want to submit or reproduce v1.1 results.

See the individual README files in the reference apps for details.

model | reference app | framework | dataset
resnet50-v1.5 | vision/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300
ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200
bert | language/bert | tensorflow, pytorch, onnx | squad-1.1
dlrm | recommendation/dlrm | pytorch, tensorflow(?), onnx(?) | Criteo Terabyte
3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019
rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus

MLPerf Inference v1.0 (submission 03/19/2021)

Use the r1.0 branch (git checkout r1.0) if you want to submit or reproduce v1.0 results.

See the individual README files in the reference apps for details.

model | reference app | framework | dataset
resnet50-v1.5 | vision/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300
ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200
bert | language/bert | tensorflow, pytorch, onnx | squad-1.1
dlrm | recommendation/dlrm | pytorch, tensorflow(?), onnx(?) | Criteo Terabyte
3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019
rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus

MLPerf Inference v0.7 (submission 09/18/2020)

Use the r0.7 branch (git checkout r0.7) if you want to submit or reproduce v0.7 results.

See the individual README files in the reference apps for details.

model | reference app | framework | dataset
resnet50-v1.5 | vision/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300
ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200
bert | language/bert | tensorflow, pytorch, onnx | squad-1.1
dlrm | recommendation/dlrm | pytorch, tensorflow(?), onnx(?) | Criteo Terabyte
3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019
rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus

MLPerf Inference v0.5

Use the r0.5 branch (git checkout r0.5) if you want to reproduce v0.5 results.

See the individual README files in the reference apps for details.

model | reference app | framework | dataset
resnet50-v1.5 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
mobilenet-v1 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012
ssd-mobilenet 300x300 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300
ssd-resnet34 1200x1200 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200
gnmt | v0.5/translation/gnmt/ | tensorflow, pytorch | See Readme

mlcommons-inference's Issues

TVM and ONNX model output differs

When testing the TVM-based MLPerf benchmarks for object detection with ResNet34-1200 (opset-11), I noticed that the outputs of the TVM and ONNX models differ (they come back in a different order).

The opset-8 model works fine with both TVM and ONNX.
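
For reference, a minimal standalone sketch of this kind of comparison is shown below, using onnxruntime as the reference and TVM's Relay ONNX frontend. This is not the repo's harness; the model path and input shape are assumptions for illustration.

    # Sketch: compare ONNX Runtime and TVM outputs for the same ONNX model.
    # MODEL_PATH and SHAPE are assumptions; adjust to the model under test.
    import numpy as np
    import onnx
    import onnxruntime as ort
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    MODEL_PATH = "resnet34-ssd1200.onnx"  # hypothetical local path
    SHAPE = (1, 3, 1200, 1200)

    x = np.random.uniform(size=SHAPE).astype("float32")

    # Reference run with ONNX Runtime
    sess = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    ort_out = sess.run(None, {input_name: x})

    # Import, compile, and run the same model with TVM
    mod, params = relay.frontend.from_onnx(onnx.load(MODEL_PATH),
                                           shape={input_name: SHAPE})
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    m = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
    m.set_input(input_name, x)
    m.run()
    tvm_out = [m.get_output(i).numpy() for i in range(m.get_num_outputs())]

    # Detection outputs may legitimately come back in a different box order,
    # so an order-insensitive comparison may be needed before asserting
    # closeness; this only reports the raw elementwise difference.
    for i, (a, b) in enumerate(zip(ort_out, tvm_out)):
        a, b = np.asarray(a), np.asarray(b)
        diff = np.abs(a - b).max() if a.shape == b.shape else "shape mismatch"
        print(i, a.shape, b.shape, diff)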

Adding TVM to the MLPerf inference benchmark (vision)

Current list of tasks:

  • threads > 1 do not work
  • batches > 1 do not work
  • check object detection task on any model to test TVM integration
  • detect TVM version via CK package (right now returns 0.0.0)
  • add an "executor" option (vm, graph) to the MLPerf loadgen for TVM (see the first sketch after this list)
  • add the AMD APU GitHub repository as a package and check that experiments can be run via CK
  • reproduce some MLPerf AMD experiments via CK on x86 from https://github.com/octoml/amd-apu/blob/master/scripts/evaluate_mlperf.sh
  • PROBLEM: the TVM-based loadgen doesn't seem to scale with --threads, while the ONNX-based loadgen does. We need to make sure the results are similar to the ones from the AMD experiments by Thierry. Thierry suggested checking "set(USE_OPENMP none)" when building TVM, but I think it's a different problem: locking around the input/run/output steps prevents parallel execution (see the second sketch after this list)
  • add support for models from Octomizer wheels
  • add all standard MLPerf models for image classification/object detection
  • add proper TVM initialization (via MLPerf conf files or externally?); it must be reproducible (at the moment it is done via MLPERF_TVM* environment variables)
  • record experiments in a standard CK experiment way
  • record experiments in an official MLPerf submission format in mlperf.result
  • perform design-space exploration (DSE) to reproduce Thierry's experiments
  • add support for BERT
  • check how to pass optimizations to TVM-based loadgen
  • add autotuning capabilities and make sure that results are reproducible (MLPerf submission)
  • add MLPerf user config for proper MLPerf scenarios
  • add support for Windows
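
A hedged sketch of the "executor" switch idea from the list above follows: the same Relay module run through either the graph executor or the Relay VM via relay.create_executor. The tiny add-one model is an assumption for self-containment, not a benchmark model.

    # Sketch: select between TVM's graph executor and Relay VM at run time.
    import numpy as np
    import tvm
    from tvm import relay

    # Tiny stand-in model for illustration: y = x + 1
    xv = relay.var("x", shape=(1, 8), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([xv], xv + relay.const(1.0)))

    # "graph" suits static models; "vm" also handles dynamic shapes and
    # control flow, which is why a switch is worth exposing in the loadgen.
    executor_kind = "graph"  # or "vm"
    run = relay.create_executor(executor_kind, mod=mod,
                                device=tvm.cpu(0), target="llvm").evaluate()

    out = run(np.ones((1, 8), dtype="float32"))
    print(executor_kind, out.numpy()[0, :4])  # -> [2. 2. 2. 2.]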
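
On the --threads scaling PROBLEM above, one hypothesis consistent with the locking suspicion is sketched below: share only the compiled library across threads and give each worker its own GraphModule, so set_input/run/get_output never serialize on shared executor state. This is an assumption about the cause, not a fix taken from the repo.

    # Sketch: per-thread graph executor instances over one compiled library.
    import threading
    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Same tiny stand-in model as in the previous sketch: y = x + 1
    xv = relay.var("x", shape=(1, 8), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([xv], xv + relay.const(1.0)))
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")

    def worker(tid):
        # Each thread gets its own executor; only `lib` is shared.
        m = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
        m.set_input("x", np.full((1, 8), tid, dtype="float32"))
        m.run()
        print(tid, m.get_output(0).numpy()[0, 0])  # -> tid + 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()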
