analyze-spec-benchmarks
=======================

A set of Python scripts to fetch benchmark results from spec.org, analyze them, and output some text & PNG files.
Original version by Jeff Preshing in February 2012.
Released to the public domain.

For more information:
http://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance


Requirements
------------

* Python 2.6 or 2.7 is required.
* lxml is required if you want to fetch all the data from SPEC's website. Otherwise, you can download aggregated data from: http://preshing.com/files/specdata20120207.zip
  You could probably rewrite the lxml part using one of Python's built-in modules; I didn't bother.
* pycairo is optional; you only need it if you want to generate the PNG files.
* PIL is optional; it gives those PNG files high-quality anti-aliasing.
* If you are going to republish any results, you need to abide by SPEC's fair use policy. http://www.spec.org/fairuse.html


Collecting SPEC's data
----------------------

These scripts work with the individual benchmarks in each SPEC result, not the geometric average that SPEC lists in its CSV downloads. Currently, the only way to access these individual benchmark results is to scrape each result page from SPEC's website as text and/or HTML.

If you want to skip the steps in this section, you can simply download the aggregated result files from http://preshing.com/files/specdata20120207.zip and extract them to this folder.
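
If you'd rather script that download, here's a minimal Python 2 sketch using only the standard library (the URL is the one above):

    # Download the aggregated SPEC data and extract it into this folder.
    import io
    import urllib2
    import zipfile

    url = "http://preshing.com/files/specdata20120207.zip"
    data = urllib2.urlopen(url).read()
    zipfile.ZipFile(io.BytesIO(data)).extractall(".")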

If you want to scrape & aggregate the results yourself, proceed as follows:

1. Run fetch-pages.py. As of this writing, this script downloads 30715 individual pages from SPEC's website and stores them in a folder named "scraped". That's about 383 MB of data, and may be more in the future. The script launches a pool of 20 subprocesses to speed up the download, so it completes in a matter of minutes. Some requests may time out and break the script; if that happens, simply run the script again: previously downloaded pages will not be downloaded again. Note that if SPEC changes their website in the future, the script will need to be updated. (A sketch of the download approach appears after these steps.)

2. Run analyze-pages.py. This will scan all the pages downloaded by the previous script and output two CSV files: summaries.txt and benchmarks.txt. These files are used as inputs by the remaining scripts. (A sketch of this step appears below as well.)
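
The download strategy in step 1 boils down to something like the following minimal sketch (Python 2, like the rest of the repository). It is not the actual script; the URL list, timeout, and filename scheme are illustrative:

    # Fetch result pages in parallel, skipping files that already exist,
    # so re-running after a timeout only downloads what is missing.
    import os
    import urllib2
    from multiprocessing import Pool

    def fetch(url):
        path = os.path.join("scraped", url.rsplit("/", 1)[-1])
        if os.path.exists(path):  # downloaded on a previous run
            return
        page = urllib2.urlopen(url, timeout=30).read()
        with open(path, "wb") as f:
            f.write(page)

    if __name__ == "__main__":
        if not os.path.isdir("scraped"):
            os.mkdir("scraped")
        urls = []  # fill with result-page URLs gathered from SPEC's index pages
        Pool(20).map(fetch, urls)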
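
Step 2's aggregation is conceptually just a parse-and-append loop like the sketch below. The XPath selector and output columns are placeholders, not the real script's schema:

    # Parse each scraped page with lxml and append one CSV row per
    # individual benchmark result.
    import csv
    import glob
    from lxml import html

    out = csv.writer(open("benchmarks.txt", "wb"))
    for path in glob.glob("scraped/*"):
        tree = html.parse(path)
        for row in tree.xpath("//table//tr"):  # selector is illustrative
            cells = [cell.text_content().strip() for cell in row]
            if cells:
                out.writerow(cells)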


Determining which benchmarks took advantage of autoparallel, and disqualifying them
-----------------------------------------------------------------------------------

As described in the blog post, certain benchmarks were disqualified from the results due to automatic parallelization. To see the list, search for DISQUALIFIED_BENCHMARKS in make-graphs.py.

This list was obtained by running check-autoparallel.py. For every benchmark that appears in runs with auto-parallelization enabled, this script finds the highest multiple of that benchmark's score relative to the geometric average of all benchmarks in the same result. The top six SPECint and top six SPECfp benchmarks by this measure were disqualified.
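
The metric itself is simple. A sketch, assuming a dict that maps benchmark names to scores within a single result:

    # For one SPEC result, compare each benchmark's score to the
    # geometric average of all benchmarks in that result. An unusually
    # high multiple suggests the benchmark was auto-parallelized.
    import math

    def geometric_mean(values):
        return math.exp(sum(math.log(v) for v in values) / len(values))

    def highest_multiple(scores):  # scores: {benchmark name: score}
        mean = geometric_mean(scores.values())
        return max((score / mean, name) for name, score in scores.items())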

Obviously, I've assumed that the compiler was not able to automatically parallelize any of the benchmarks ranked below those, and the current output of check-autoparallel.py makes that assumption look reasonable. Even if the assumption is wrong, I doubt it would alter the conclusions in the blog post. (But of course, that's another assumption...)


Generating the graphs
---------------------

Run make-graphs.py. It outputs the following:

* identified_cpus.txt
	The right column lists every processor name encountered in the input, along with the source filename. The left column contains the recognized brand name, model name and MHz. I used this file to develop & debug the identifyCPU() function found in the script. If new processors are introduced, this function may need to be updated; a sketch of its approach appears after this list. It's a good idea to diff this file, generated from the latest SPEC data, against a copy generated from older data.

* int_report.txt
	The first two lines show the automatically computed conversion ratios between CINT95, CINT2000 and CINT2006. The rest of the file groups all the results by family, then sorts them by hardware release date and normalized SPECint2006 result value. Each line shows the benchmark suite and line number, so you should be able to pick out certain points on the PNG graph, find them in this text file, locate the corresponding line in the CSV, and use that to find the detailed HTML/PDF result page on SPEC's website.

* fp_report.txt
	Same thing as int_report.txt, but for floating-point benchmarks.

* int_graph.png
* fp_graph.png
	Graphs similar to the ones you'll find at: http://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance
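
For the curious, identifyCPU() essentially normalizes free-form processor strings into (brand, model, MHz). A rough illustration of the idea; the patterns below are made up, not the ones in make-graphs.py:

    # Match raw processor strings against known patterns. Unmatched
    # names stand out in identified_cpus.txt and indicate that a new
    # pattern is needed.
    import re

    PATTERNS = [
        ("Intel", r"Intel.*?(Xeon|Pentium(?: III| 4)?|Core \S+).*?([\d.]+)\s*GHz"),
        ("AMD", r"AMD.*?(Opteron \S+|Athlon \S*).*?([\d.]+)\s*GHz"),
    ]

    def identify_cpu(raw_name):
        for brand, pattern in PATTERNS:
            m = re.search(pattern, raw_name)
            if m:
                model, ghz = m.groups()
                return brand, model, int(float(ghz) * 1000)  # MHz
        return None  # unrecognized; add a new pattern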


-----
SPECint(R) and SPECfp(R) are registered trademarks of the Standard Performance Evaluation Corporation (SPEC).
