Comments (4)
Note that the current benchmark measures C++, Python, and Scheme performance. The C++ side is straightforward; the Python and Scheme side, not so much. For Scheme, there are three distinct bottlenecks:
1. How fast can you move from C++ to guile, do nothing (a no-op), and return to C++? Last I measured, this was about 15K/sec to 20K/sec for guile, and about 20K/sec to 25K/sec for cython/python.
2. Once inside guile, how fast can you do something, e.g. create atoms in a loop, with that loop written in scheme (or python)? Last I measured, this was reasonably fast; no complaints.
3. How does 2) behave when the scheme code is interpreted, memoized, or compiled? All three have different performance profiles. When the guile interpreter runs, nothing is computed in advance; everything is interpreted "on the fly". With memoization turned on, guile caches certain intermediate results for faster re-use. With compilation turned on, the scheme code is compiled into bytecode, and that bytecode is then executed.
Historical experience is that compiling often loses: the time it takes to compile (ConceptNode "foo") into bytecode far exceeds the savings of a few cycles over the interpreted version (because, hey, both the compiled and the interpreted paths immediately call into C++ code, which is where 98% of the CPU time goes).
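The first bottleneck above, raw crossing-the-boundary throughput, can be sketched with a simple calls-per-second loop. This is a hedged illustration only: no_op here is a pure-Python stand-in, and a real measurement would replace it with a call that actually crosses into C++ through the guile or cython binding.

```python
import time

def no_op():
    # Stand-in for a binding call that crosses into C++ and returns.
    # In a real run, this would be e.g. a trivial atomspace call.
    pass

def calls_per_second(fn, min_duration=0.2):
    """Call fn in a loop for at least min_duration seconds;
    return the observed call rate in calls/sec."""
    count = 0
    start = time.perf_counter()
    while True:
        fn()
        count += 1
        elapsed = time.perf_counter() - start
        if elapsed >= min_duration:
            return count / elapsed

rate = calls_per_second(no_op)
print(f"{rate:,.0f} no-op calls/sec")
```

Comparing this pure-Python rate against the rate through the binding isolates the cost of the language crossing itself, which is the number the 15K/sec and 25K/sec figures above are measuring.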
Thus, reporting C++ performance is not bad, but the proper use and measurement of the C++/guile and C++/Python interfaces is .. tricky.
from benchmark.
Note also: it is not entirely obvious that ripping out the existing benchmark code and replacing it with something else results in a win. I mean, starting and stopping a timer and printing the result is just .. not that hard.
The biggest problem is that the existing benchmark code is just ... messy. There's a bunch of crap done to set up the atomspace and populate it with atoms. What, exactly, is a "reasonable" or "realistic" set of atoms to stick in there? How does performance vary as a function of atomspace size?
Other parts of the messiness have to do with the difficulty of measuring the C++/guile interfaces. It's not at all clear to me that just using a different microbenchmarking tool will solve any of these problems....
That said, I don't really care if or how the benchmarks are redesigned, as long as they work and are accurate (and we get a chance to do before+after measurements, to verify that any new results are in agreement with the old results).
from benchmark.
Note also: the current benchmark fails to control for dynamic range. For example, we can call getArity() more than a million times a second, but we can perform a pattern search only about 100 times a second. The current benchmark wants to time how long it takes to do both, N times. Clearly, no single value of N can be used to measure both. This is one of the messy issues.
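One common way around the dynamic-range problem is to let the harness pick N itself, running each operation until a fixed wall-clock budget is spent and reporting ops/sec. A minimal sketch follows; timed_reps, fast, and slow are all hypothetical stand-ins invented for illustration (the fast lambda plays the role of a getArity() call, the slow one a pattern search), not part of the existing benchmark code.

```python
import time

def timed_reps(fn, budget=0.1):
    """Run fn repeatedly, doubling the batch size each pass, until the
    accumulated runtime exceeds the budget.  Returns (reps, ops/sec).
    This lets one harness time both a microsecond-scale call and a
    millisecond-scale search without choosing a single N up front."""
    reps, total, batch = 0, 0.0, 1
    while total < budget:
        start = time.perf_counter()
        for _ in range(batch):
            fn()
        total += time.perf_counter() - start
        reps += batch
        batch *= 2
    return reps, reps / total

fast = lambda: sum(range(10))        # stand-in for a cheap call like getArity()
slow = lambda: sum(range(100_000))   # stand-in for an expensive pattern search
for name, fn in [("fast", fast), ("slow", slow)]:
    n, rate = timed_reps(fn)
    print(f"{name}: {n} reps, {rate:,.0f} ops/sec")
```

The fast operation ends up with a far larger rep count than the slow one, while both get comparable total measurement time, which is exactly the behavior a fixed N cannot provide.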
from benchmark.
I tried to check the Python benchmarking, but it seems to be broken; raised an additional issue on it: #9
from benchmark.