
Use libOSRM · vroom · 7 comments · CLOSED

vroom-project commented on August 22, 2024
Use libOSRM

from vroom.

Comments (7)

jcoupey commented on August 22, 2024

This feature should be OK in develop now.

To get an idea of the expected gain, I monitored computing times while solving a set of problems of sizes 100, 200, 500, 1000 and 2000. The solving step (and the solutions) are of course identical, so the only major difference between using osrm-routed and using libosrm with osrm-datastore lies in the matrix and detailed-route computation times.

Computing times (CT) are in ms.

| Instance size | CT with osrm-routed | CT with libosrm | Diff |
|--------------:|--------------------:|----------------:|-----:|
| 100 | 107 | 154 | +43.9% |
| 200 | 269 | 288 | +7.1% |
| 500 | 1099 | 893 | -18.7% |
| 1000 | 4508 | 3485 | -22.7% |
| 2000 | 31980 | 27072 | -15.3% |

This is only a single execution on a few files, but the overall shape seems consistent across other examples. My guess is that on "small" instances the OSRM response is small enough that the gain is offset by the overhead of creating the C++ objects used by libosrm (though the absolute difference is very small there anyway).
So the gain starts to show for "middle-sized" instances with a few thousand points.

daniel-j-h commented on August 22, 2024

Sorry for dropping off IRC the other day. I'm interested in these numbers, especially in how osrm-routed can be faster than using libosrm directly, since the former is the latter plus a small HTTP server. Did you profile the setup, or do you have a general feeling for how vroom uses libosrm compared to how it uses osrm-routed?

jcoupey commented on August 22, 2024

I'm also interested in finding a good way to set up a comparison between the two, and happy to perform more tests.

What makes it difficult is that the osrm-routed numbers seem to be highly dependent on some cache state somewhere. For example, I just solved the 2000-point instance on a fresh start of osrm-routed (and of my machine), and the problem loading (the table request plus going through the matrix) took around 56 seconds. The same drops to around 8 seconds on a second run...

Regarding the difference in the client code, both follow the same pattern (see the table request wrappers for libosrm and for osrm-routed). The latter has the HTTP overhead through the send_then_receive function, plus the additional work of parsing the response with rapidjson.

daniel-j-h commented on August 22, 2024

I just had a look at the code: as long as you call get_matrix only once this should be fine, otherwise I would cache the OSRM object outside of the function. But both look identical from quickly skimming the code.

If you want to dig deeper here I think the next step is to profile both setups and see what comes up.

jcoupey commented on August 22, 2024

Yes, get_matrix is only called once, but there is still an overhead from not having the OSRM object at hand, as it is re-created in exactly the same way in the next function (get_route_infos, used to retrieve the detailed geometry).
As I recall, my first intention was to use an OSRM object as a class member. But then using this object inside const functions (get_matrix and get_route_infos) would require Table and Route to be declared const within libosrm. I don't know whether this would make sense from the OSRM point of view...

the next step is to profile both setups

Not sure what you have in mind exactly here, but happy to investigate. ;-)

daniel-j-h commented on August 22, 2024

You're technically correct: the OSRM services should probably be const, but at the moment they modify some thread-safe internal data structures (e.g. search heaps), so they are not (yet) marked const. You should be able to work around this by marking the OSRM member mutable, as in (pseudo code)

struct S {
    mutable OSRM osrm;
};

const S s;
s.osrm.Route();

The OSRM services are safe to call concurrently, so the const/mutable combination here correctly states that the object is

  • logically const, and
  • thread-safe.

By profiling I mean running your benchmarks under perf (or similar) and having a look at the reports.

jcoupey commented on August 22, 2024

Closing, as using libosrm v5.4.0 should now be OK in develop. The benchmarking discussion has moved to #49.
