
Comments (2)

MontyCarter commented on June 18, 2024

By the way, I tried starting two benchexec calls, each with 8 concurrent benchmarks and a 15 GB memory limit, but it seems to put one execution from each benchexec call into each cgroup (that is, 8 cgroups, each with 2 jobs, one from each call to benchexec).

I'm guessing this is because the cgroups are not named uniquely across benchexec calls (for example, both calls use 8 cgroups named something like "benchgroup1", "benchgroup2", ..., "benchgroup8", so runs from both calls end up in the same cgroup)?

If this is the case, then using unique cgroup names between benchexec calls would provide the functionality I'm looking for, as opposed to creating an explicit switch to allow oversubscription.


PhilippWendler commented on June 18, 2024

BenchExec does create unique cgroup names and would never put different runs into the same cgroup. If you can reproduce other behavior, please file a separate bug.
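
For reference, one quick way to check whether two runs really share a cgroup is to compare the cgroup paths listed in /proc/<pid>/cgroup for the two benchmarked processes. A minimal sketch (not part of BenchExec; the PIDs are placeholders):

```python
# Print the cgroup path(s) of some running processes. Two runs in the
# same cgroup would show the same path. PIDs below are placeholders.
from pathlib import Path

def cgroups_of(pid: int) -> list[str]:
    # Each line of /proc/<pid>/cgroup has the form
    # "hierarchy-ID:controllers:/cgroup/path"; we keep only the path.
    lines = Path(f"/proc/{pid}/cgroup").read_text().splitlines()
    return [line.split(":", 2)[2] for line in lines]

for pid in (12345, 12346):   # placeholder PIDs of two benchmarked runs
    print(pid, cgroups_of(pid))
```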

For the original proposal, I think it is technically possible to implement with cgroups. I am not convinced that it makes sense, though, if we really abort completely on OOM: that would mean that basically the whole benchmark run was wasted.
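
For illustration, an aggregate limit for overbooking could in principle look like the following sketch: all runs are attached to one shared parent cgroup whose memory.max caps their combined usage, so the kernel's OOM killer steps in once the group as a whole exceeds the limit. This assumes cgroup v2 mounted at /sys/fs/cgroup with the memory controller enabled for sub-cgroups and sufficient privileges; the cgroup name and commands are placeholders, and this is not how BenchExec manages its cgroups.

```python
# Sketch only: one shared parent cgroup whose memory.max caps the
# *combined* memory usage of all runs attached to it.
import subprocess
from pathlib import Path

PARENT = Path("/sys/fs/cgroup/bench-overbook")   # hypothetical cgroup name

def setup_parent(total_limit_bytes: int) -> None:
    PARENT.mkdir(exist_ok=True)
    # Aggregate limit for the whole group of runs.
    (PARENT / "memory.max").write_text(str(total_limit_bytes))

def start_run(cmd: list[str]) -> subprocess.Popen:
    proc = subprocess.Popen(cmd)
    # Move the freshly started process into the shared cgroup.
    (PARENT / "cgroup.procs").write_text(str(proc.pid))
    return proc

if __name__ == "__main__":
    setup_parent(15 * 1024**3)                   # one 15 GB budget for the group
    runs = [start_run(["sleep", "10"]) for _ in range(8)]   # placeholder commands
    for run in runs:
        run.wait()
```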

Another strategy we have implemented in a different system is to start each run with half of its memory limit first, and if it hits the limit, restart this single run with its full limit. This could also be done in BenchExec and has the advantage that it produces exactly the same results as starting all runs with the full limit, but it would significantly complicate the resource assignment and task scheduling in BenchExec.
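
A rough sketch of that strategy might look as follows. This is not BenchExec's implementation; for simplicity the limit is enforced with RLIMIT_AS instead of cgroups, and a non-zero exit code is used as a crude stand-in for detecting that the run hit its memory limit.

```python
import resource
import subprocess

def run_with_limit(cmd: list[str], limit_bytes: int) -> subprocess.CompletedProcess:
    # Enforce the limit via RLIMIT_AS in the child (illustration only;
    # BenchExec uses cgroups for memory limits).
    def set_limit():
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    return subprocess.run(cmd, preexec_fn=set_limit, capture_output=True)

def execute_run(cmd: list[str], full_limit: int) -> subprocess.CompletedProcess:
    # First attempt with only half the memory, so more runs fit into RAM.
    result = run_with_limit(cmd, full_limit // 2)
    if result.returncode != 0:   # crude stand-in for "hit the memory limit"
        # Restart this single run with its full limit; only runs that really
        # need the memory pay the extra cost, and the final results are the
        # same as if every run had been started with the full limit.
        result = run_with_limit(cmd, full_limit)
    return result

# Example: execute_run(["./my-tool", "task.c"], 15 * 1024**3)
```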

I won't have time before this year's SV-COMP to implement any of these possibilities.

Note that on your machine, the number of cores also only allows 8 parallel runs if you want to use the SV-COMP resource settings (4 cores plus hyper-threading). So if you want more parallel runs, you need to reduce the number of cores per run anyway, and you could also consider simply reducing the memory limit (and maybe manually re-execute the runs that ran out of memory, or just ignore them).
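
As a back-of-the-envelope illustration of that trade-off: with a fixed amount of RAM, more parallel runs mean a smaller memory limit per run. The machine numbers below are placeholders, and the benchexec option names (--numOfThreads, --memorylimit, --limitCores) are taken from memory and should be checked against benchexec -h.

```python
# Placeholder numbers; the real machine's RAM and CPU count differ.
total_memory_gb = 120    # assumed physical RAM
mem_per_run_gb = 10      # reduced per-run limit instead of SV-COMP's 15 GB

parallel_runs = total_memory_gb // mem_per_run_gb
print(parallel_runs, "parallel runs fit with", mem_per_run_gb, "GB each")

# Corresponding benchexec invocation (option names and value syntax assumed):
cmd = [
    "benchexec", "my-benchmark.xml",
    "--numOfThreads", str(parallel_runs),
    "--memorylimit", f"{mem_per_run_gb}GB",
    "--limitCores", "4",   # fewer cores per run than the SV-COMP setting
]
print(" ".join(cmd))
```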

Running two BenchExec instances in parallel would mean that there is absolutely no way to prevent severely skewed results if you overbook memory: the kernel could start swapping any of the runs, or (if you do not have enough swap) even start killing runs before they reach their memory limit.

