hsldevcom / opentripplanner
This project forked from opentripplanner/opentripplanner
An open source multi-modal trip planner
Home Page: http://www.opentripplanner.org
License: Other
Sidewalk naming
Expected: acts as usual, with only a few timeouts.
Actual: lots of timeouts (only on some itineraries...) with the latest OTP snapshots for approximately a month.
8e1b5cf commit (currently latest)
https://ratp.spiralo.net/router-paris.zip
Docker container with a 7 GB memory limit for Java.
The files are in the router-paris.zip file, along with the GTFS ones.
Try a request from Luxembourg, Paris, France to Rue Léontine Sohier, Longjumeau, France or a lot of others.
This started yesterday (Feb 3). The following simple trip(id: ) sample query fails most of the time when sent to the /hsl endpoint; currently only about 1/3 of trip(id: ) requests return the correct response. On the /finland endpoint the same request works every time, as it should.
I don't know if this is limited to a specific set of Trips or applies to all of them.
IMHO, there should be some automated monitoring for this type of stuff, and it should not stay in api.digitransit.fi for this long...
[edit] Seems like this type of Reittiopas link also sometimes fails to load because of the same reason https://beta.reittiopas.fi/linjat/HSL:4561/pysakit/HSL:4561:1:01/HSL:4561_20170123_La_2_1259
{
trip(id: "HSL:2147_20170130_Pe_2_2254") {
gtfsId
}
}
failed response:
{
"data": {
"trip": null
}
}
correct response:
{
"data": {
"trip": {
"gtfsId": "HSL:2147_20170130_Pe_2_2254"
}
}
}
Docker run should not get stuck
On an M1 Mac, docker run for OpenTripPlanner gets stuck and prints only JAR=otp-shaded.jar
I followed the guide at the link below on how to use OpenTripPlanner with Docker:
https://digitransit.fi/en/developers/architecture/x-apis/1-routing-api/
docker run -e OTP_DATA_CONTAINER_URL=http://otp-data-container:8080 -p 8080:8080 hsldevcom/opentripplanner
Run the above command in Apple silicon M1 environment
This will be improved in #11
We realized that after adding a new GTFS file to our OTP instance, with trips with frequencies (besides scheduled times), the calls to the GraphQL "stoptimes" were returning 500.
To be sure of this, we removed the frequencies.txt file and the issue disappeared (of course we didn't have results for the period where frequencies were applied).
We tried this on both "prod" and "latest" docker images.
Another thing we noticed while using the file with frequencies: with further debugging, we found that the call to get stoptimes would only work with scheduledTimetable:
@@ -1800,9 +1804,30 @@
.name("stoptimes")
.description("List of times when this trip arrives to or departs from a stop")
.type(new GraphQLList(stoptimeType))
.dataFetcher(environment -> TripTimeShort.fromTripTimes(
index.patternForTrip.get((Trip) environment.getSource()).scheduledTimetable, // HERE!
environment.getSource()))
In this situation, OTP throws the following:
otp_1 | 18:40:40.356 WARN (ExecutionStrategy.java:70) Exception while fetching data
otp_1 | java.lang.ArrayIndexOutOfBoundsException: null
otp_1 | 18:40:40.357 WARN (FieldErrorInstrumentation.java:174) Exception while fetching field
otp_1 | java.lang.ArrayIndexOutOfBoundsException: null
otp_1 | 18:40:40.409 WARN (ExecutionStrategy.java:70) Exception while fetching data
otp_1 | java.lang.ArrayIndexOutOfBoundsException: null
otp_1 | 18:40:40.411 WARN (FieldErrorInstrumentation.java:174) Exception while fetching field
otp_1 | java.lang.ArrayIndexOutOfBoundsException: null
otp_1 | 18:40:40.429 WARN (FieldErrorInstrumentation.java:133) Errors executing query
That's because the Trip didn't have a "scheduledTimetable".
So we've implemented a fix. It's been working for some time and we believe it's OK:
```java
.name("stoptimes")
.description("List of times when this trip arrives to or departs from a stop")
.type(new GraphQLList(stoptimeType))
.dataFetcher(environment -> {
    Timetable timetable = index.patternForTrip.get((Trip) environment.getSource()).scheduledTimetable;
    // If the Trip is frequency-based, there are no scheduled tripTimes
    // (they must come from <FrequencyEntry>.tripTimes)
    if (timetable.tripTimes.isEmpty()) {
        // This should probably be encapsulated into a function named TripTimeShort.fromFrequencyTripTimes,
        // since it does the same as the existing function TripTimeShort.fromTripTimes, but for Frequency.
        // Or, it could also be moved into TripTimeShort.fromTripTimes.
        List<TripTimeShort> out = Lists.newArrayList();
        for (FrequencyEntry freq : timetable.frequencyEntries) {
            TripTimes times = freq.tripTimes;
            for (int i = 0; i < times.getNumStops(); ++i) {
                out.add(new TripTimeShort(times, i, timetable.pattern.getStop(i), null));
            }
        }
        return out;
    } else {
        return TripTimeShort.fromTripTimes(timetable, environment.getSource());
    }
})
.build())
```
We're glad to share it via a patch or pull request.
Querying stoptimes for a trip returns clearly incorrect departure times. The scheduledArrival and scheduledDeparture values for different stops may have the same values (to the second). This matches neither the scheduled timetables nor the estimated realtime stop times. It does not happen for every trip, but still noticeably often.
This issue has been in the Digitransit API for months now, and I find it peculiar that it has not surfaced and been fixed in any testing done so far...
For example,
{
trip(id: "HSL:1075_20170501_La_1_1554") {
stoptimesForDate(serviceDay: "20170506") {
stop {
gtfsId
}
scheduledArrival
realtimeArrival
scheduledDeparture
realtimeDeparture
serviceDay
realtime
}
}
}
Response
{
"data": {
"trip": {
"stoptimesForDate": [
...
{
"stop": {
"gtfsId": "HSL:1240121"
},
"scheduledArrival": 58080,
"realtimeArrival": -1,
"scheduledDeparture": 58080,
"realtimeDeparture": -1,
"serviceDay": 1494018000,
"realtime": true
},
{
"stop": {
"gtfsId": "HSL:1240122"
},
"scheduledArrival": 58080,
"realtimeArrival": 58107,
"scheduledDeparture": 58080,
"realtimeDeparture": 58107,
"serviceDay": 1494018000,
"realtime": true
},
...
{
"stop": {
"gtfsId": "HSL:1383107"
},
"scheduledArrival": 58440,
"realtimeArrival": 58467,
"scheduledDeparture": 58440,
"realtimeDeparture": 58467,
"serviceDay": 1494018000,
"realtime": true
},
{
"stop": {
"gtfsId": "HSL:1383109"
},
"scheduledArrival": 58440,
"realtimeArrival": 58467,
"scheduledDeparture": 58440,
"realtimeDeparture": 58467,
"serviceDay": 1494018000,
"realtime": true
},
...
{
"stop": {
"gtfsId": "HSL:1382113"
},
"scheduledArrival": 58620,
"realtimeArrival": 58647,
"scheduledDeparture": 58620,
"realtimeDeparture": 58647,
"serviceDay": 1494018000,
"realtime": true
},
{
"stop": {
"gtfsId": "HSL:1382143"
},
"scheduledArrival": 58620,
"realtimeArrival": 58647,
"scheduledDeparture": 58620,
"realtimeDeparture": 58647,
"serviceDay": 1494018000,
"realtime": true
},
...
{
"stop": {
"gtfsId": "HSL:1384149"
},
"scheduledArrival": 58860,
"realtimeArrival": 58887,
"scheduledDeparture": 58860,
"realtimeDeparture": 58887,
"serviceDay": 1494018000,
"realtime": true
},
{
"stop": {
"gtfsId": "HSL:1384151"
},
"scheduledArrival": 58860,
"realtimeArrival": 58887,
"scheduledDeparture": 58860,
"realtimeDeparture": 58887,
"serviceDay": 1494018000,
"realtime": true
},
...
]
}
}
}
Leg currently returns a legGeometry: LegGeometry which looks like this:
"legGeometry": {
"length": 17,
"points": "ueenJey~uCyDxCOmAIs@G_@Ec@G_@YfA}@l@a@Xm@`@IDOH^xCPl@CP@L"
}
Pattern and Trip instead only expose a geometry: [[Float]] list, which takes up much more space to express the same thing. Even though compression probably mostly solves the bandwidth issue, it would still be nice to systematically use polyline encoding for all geometries. Coordinate arrays can stay as an alternative.
Please add patternGeometry to Pattern and tripGeometry to Trip.
{
"lat": 60.168929,
"lon": 24.613933
},
{
"lat": 60.168883,
"lon": 24.614113
},
{
"lat": 60.168766,
"lon": 24.614327
},
{
"lat": 60.168152,
"lon": 24.615018
},
{
"lat": 60.168025,
"lon": 24.615231
},
{
"lat": 60.167764,
"lon": 24.615461
},
{
"lat": 60.167609,
...
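For reference, the legGeometry points above use the standard Google encoded-polyline algorithm (delta-encoded coordinates at 1e-5 precision, zigzag-signed, emitted in 5-bit chunks offset by 63). A minimal encoder sketch, written here only to illustrate how compact the format is compared to the coordinate array, and not OTP's actual implementation:

```java
public class PolylineSketch {
    // Encodes (lat, lon) pairs with the Google encoded-polyline algorithm.
    public static String encode(double[][] coords) {
        StringBuilder sb = new StringBuilder();
        long prevLat = 0, prevLon = 0;
        for (double[] c : coords) {
            long lat = Math.round(c[0] * 1e5);
            long lon = Math.round(c[1] * 1e5);
            encodeValue(lat - prevLat, sb);
            encodeValue(lon - prevLon, sb);
            prevLat = lat;
            prevLon = lon;
        }
        return sb.toString();
    }

    private static void encodeValue(long v, StringBuilder sb) {
        v = v < 0 ? ~(v << 1) : v << 1;   // zigzag: sign moves into the low bit
        while (v >= 0x20) {
            // continuation bit set on all chunks but the last
            sb.append((char) ((0x20 | (v & 0x1f)) + 63));
            v >>= 5;
        }
        sb.append((char) (v + 63));
    }
}
```

With the canonical example from the algorithm's documentation, encode(new double[][]{{38.5, -120.2}, {40.7, -120.95}, {43.252, -126.453}}) yields "_p~iF~ps|U_ulLnnqC_mqNvxq`@".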
Walk transit steps should include OSM ids
While other route patterns seem to have the usual high accuracy for their geometry, trams seem to only have stop-to-stop resolution, so latitudes and longitudes are only given for each stop in the geometry array.
I believe this is a regression compared to how it was just some days (perhaps weeks?) ago. Maybe there is some error in the source data?
[edit] This only seems to affect some tram routes, as a query with trip(id: "HSL:1007B_20170508_La_1_1441") returns the usual resolution geometry.
{
trip(id: "HSL:1003 _20170508_La_2_1423") {
gtfsId
pattern {
geometry {
lat
lon
}
}
}
}
Complete response below:
{
"data": {
"trip": {
"gtfsId": "HSL:1003 _20170508_La_2_1423",
"pattern": {
"geometry": [
{
"lat": 60.19256090000013,
"lon": 24.930717699999875
},
{
"lat": 60.1922220999999,
"lon": 24.936283799999977
},
{
"lat": 60.191135199999906,
"lon": 24.94093719999998
},
{
"lat": 60.190622699999906,
"lon": 24.94557789999997
},
{
"lat": 60.186781500000386,
"lon": 24.94845870000003
},
{
"lat": 60.18531529999993,
"lon": 24.951408299999994
},
{
"lat": 60.18350619999994,
"lon": 24.95271009999999
},
{
"lat": 60.181311699999725,
"lon": 24.949989899999995
},
{
"lat": 60.17906760000015,
"lon": 24.95006559999993
},
{
"lat": 60.17336340000017,
"lon": 24.949187599999963
},
{
"lat": 60.1716923000001,
"lon": 24.947397300000084
},
{
"lat": 60.17043500000034,
"lon": 24.940672800000034
},
{
"lat": 60.16779039999983,
"lon": 24.941512799999895
},
{
"lat": 60.16625370000038,
"lon": 24.94238930000003
},
{
"lat": 60.16467160000027,
"lon": 24.937878300000005
},
{
"lat": 60.1627774000002,
"lon": 24.93904600000002
},
{
"lat": 60.1609003000002,
"lon": 24.94166850000002
},
{
"lat": 60.1580242000002,
"lon": 24.94180290000002
},
{
"lat": 60.158176000000076,
"lon": 24.94551740000002
},
{
"lat": 60.15847440000025,
"lon": 24.94982020000002
},
{
"lat": 60.15952860000026,
"lon": 24.954968900000022
},
{
"lat": 60.16154810000026,
"lon": 24.95648490000003
}
]
}
}
}
}
The following request to the HFP cache takes several seconds to respond. The response delay was much smaller when I tried this some months ago, so something must currently be slowing these requests down.
https://api.digitransit.fi/realtime/vehicle-positions/v1/hfp/journey/tram/+/+/#
Since the merge of PR #79, I'm not able to build graphs anymore, due to an "expected role in multipolygon" error.
Please fix!
Plan search for the following coordinates fails with PathNotFoundException
from: {lat: 60.168961, lon: 24.924713}, to: {lat: 60.158830, lon: 24.933685}
The coordinates are valid but are in the middle of a house compound.
Trip has directionId with type String whereas Pattern has directionId with type Int, although they seem to refer to the exact same data. It should probably be Int, if possible.
Also, directionId values in Trips and Patterns seem to be 0 or 1, while MQTT feed directionId values are 1 or 2 in the topic, as follows:
/hfp/journey/type/id/line/direction/headsign/start_time/next_stop/(geohash_level)/geohash/#
These should be made consistent; currently they lead to unnecessary confusion and bugs in code.
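Until the feeds are aligned, consumers have to translate between the two conventions themselves. A trivial sketch of that translation (the helper names are ours, not part of any API):

```java
public class DirectionIds {
    // GTFS/GraphQL directionId is 0 or 1; the MQTT topic direction is 1 or 2.
    public static int gtfsToMqtt(int gtfsDirectionId) {
        if (gtfsDirectionId != 0 && gtfsDirectionId != 1) {
            throw new IllegalArgumentException("GTFS directionId must be 0 or 1");
        }
        return gtfsDirectionId + 1;
    }

    public static int mqttToGtfs(int mqttDirection) {
        if (mqttDirection != 1 && mqttDirection != 2) {
            throw new IllegalArgumentException("MQTT direction must be 1 or 2");
        }
        return mqttDirection - 1;
    }
}
```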
When executing a GraphQL query that runs into the (default) timeout of 30 seconds, an error response is returned that sadly doesn't reference the timeout (and as such the reason why the query didn't succeed).
This caused a lot of confusion about why our OTP instance didn't want to respond with stops for a GraphQL query, and it only cleared up after we dug into the OTP code.
Expected: the returned JSON contains an errorType or message that warns the user that the query was aborted because of the timeout.
Actual: nowhere in the returned JSON is there a reference to a timeout; instead, a DataFetchingException with a null message appears:
{"data":{},"errors":[{"stack":[...]],"errorType":"DataFetchingException","locations":null,"message":"Exception while fetching data: null"}]}
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Credentials: false
Content-Type: application/json
Date: Fri, 03 Apr 2020 20:46:44 GMT
Connection: close
Content-Length: 3072
{"data":{},"errors":[{"stack":["java.util.concurrent.FutureTask.report(FutureTask.java:121)","java.util.concurrent.FutureTask.get(FutureTask.java:192)","org.opentripplanner.index.ResourceConstrainedExecutorServiceExecutionStrategy.execute(ResourceConstrainedExecutorServiceExecutionStrategy.java:81)","graphql.execution.Execution.executeOperation(Execution.java:85)","graphql.execution.Execution.execute(Execution.java:44)","graphql.GraphQL.execute(GraphQL.java:201)","org.opentripplanner.routing.graph.GraphIndex.getGraphQLExecutionResult(GraphIndex.java:1055)","org.opentripplanner.routing.graph.GraphIndex.getGraphQLResponse(GraphIndex.java:1035)","org.opentripplanner.index.IndexAPI.getGraphQL(IndexAPI.java:657)","sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)","sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)","java.lang.reflect.Method.invoke(Method.java:498)","org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)","org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)","org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)","org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)","org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)","org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)","org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)","org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)","org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)","org.glas
sfish.jersey.internal.Errors$1.call(Errors.java:271)","org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)","org.glassfish.jersey.internal.Errors.process(Errors.java:315)","org.glassfish.jersey.internal.Errors.process(Errors.java:297)","org.glassfish.jersey.internal.Errors.process(Errors.java:267)","org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)","org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)","org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)","org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:384)","org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:224)","org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:591)","org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:571)","java.lang.Thread.run(Thread.java:745)"],"errorType":"DataFetchingException","locations":null,"message":"Exception while fetching data: null"}]}
docker image: hsldevcom/opentripplanner:4325c2e143dcdc8c81ba36fb0f7f2e549667ed38
ENV JAVA_OPTS="-Xms8G -Xmx8G"
http://<otp-host>:8080/otp/routers/<router>/index/graphql
with a long running query returning lots of objects, like the query hsl-map-server/tilelive-otp-stops issues for getting all the stops: https://github.com/HSLdevcom/tilelive-otp-stops/blob/3f0cac4ace1b6b506424ea0372c46ab190652914/index.js#L42-L62
In #94 (DT-607, DT-56) the TimedExecutorServiceExecutionStrategy (later ResourceConstrainedExecutorServiceExecutionStrategy) was introduced, to add a timeout to long-running GraphQL queries to prevent a DoS.
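One way to surface the timeout would be to catch the TimeoutException around the future and emit an error that actually names it. A self-contained sketch of the idea (not the actual OTP code; the class and method names are made up for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutErrorSketch {
    // Runs a task with a deadline and turns a TimeoutException into an error
    // message that names the timeout, instead of the generic
    // "Exception while fetching data: null".
    public static String runWithTimeout(Callable<String> task, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(task).get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "{\"errors\":[{\"errorType\":\"TimeoutException\","
                 + "\"message\":\"Query aborted: execution exceeded the "
                 + timeoutMs + " ms timeout\"}]}";
        } catch (Exception e) {
            return "{\"errors\":[{\"errorType\":\"DataFetchingException\","
                 + "\"message\":\"" + e.getMessage() + "\"}]}";
        } finally {
            pool.shutdownNow();
        }
    }
}
```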
With the new linker, OTP lost the ability to route to stops where no OSM data is available. This was re-implemented in opentripplanner@deb2544; could you please merge it?
Thanks!
Hello,
I have started working on my own Digitransit instance. As of now, I have working search routing in the UI based on OpenTripPlanner API with both static GTFS and protocol buffer GTFS-RT trip updates.
However, I am unable to find a suitable way to plug a GTFS-RT vehicle positions feed in. I have the feed already set up, working and tested with other tools. I see that on http://reittiopas.hsl.fi/ there are vehicle positions being used, but the updaters I can find do not support GPS data ingestion: real-time-alerts, bike-rental, bike-park, websocket-gtfs-rt-updater, stop-time-updater etc.
This issue is discussed in the original opentripplanner repo's issues: opentripplanner#2329, and a pull request with an updater for a pbf GTFS-RT positions feed was submitted, but closed and not merged: opentripplanner@cee935a
From what I can see, no suitable updater is present in either the original OTP or the HSLdevcom version:
https://github.com/HSLdevcom/OpenTripPlanner/blob/master/src/main/java/org/opentripplanner/updater/GraphUpdaterConfigurator.java
https://github.com/opentripplanner/OpenTripPlanner/blob/master/src/main/java/org/opentripplanner/updater/GraphUpdaterConfigurator.java
Could you please point me in the right direction as to how to achieve the integration of vehicle positions that is present in Reittiopas (be it GTFS-RT, MQTT or other)? Maybe this is done through another component and not OTP, but from a few days of studying the code and the attempted network requests, it seems that the GPS data displayed on the map upon selecting a route/trip is somehow coming from OTP.
This might be related to OSM
Here is the code in org.opentripplanner.routing.impl.DefaultFareServiceImpl.java :
private Fare _getCost(GraphPath path, Set<String> allowedFareIds) {
    List<Ride> rides = createRides(path);
    // If there are no rides, there's no fare.
    if (rides.size() == 0) {
        return null;
    }
    Fare fare = new Fare();
    boolean hasFare = false;
    for (Map.Entry<FareType, Collection<FareRuleSet>> kv : fareRulesPerType.entrySet()) {
        FareType fareType = kv.getKey();
        Collection<FareRuleSet> fareRules;
        if (allowedFareIds != null) {
            fareRules = kv.getValue().stream()
                    .filter(f -> allowedFareIds.contains(f.getFareAttribute().getId().toString()))
                    .collect(Collectors.toList());
        } else {
            fareRules = kv.getValue();
        }
        // Get the currency from the first fareAttribute, assuming that all tickets use the same
        // currency.
        Currency currency = null;
        if (fareRules.size() > 0) {
            currency =
                Currency.getInstance(fareRules.iterator().next().getFareAttribute().getCurrencyType());
        }
        hasFare = populateFare(fare, currency, fareType, rides, fareRules);
        // log.info("Fares for {} available: {}", fareType, hasFare);
    }
    return hasFare ? fare : null;
}
Inside the for loop, if we have 4 fareTypes to calculate, let's assume the results are as below:
1_fareType : hasFare = true
2_fareType: hasFare = true
3_fareType: hasFare = true
4_fareType: hasFare = false
What will be the result returned from this method? null, right?
I wonder whether this is written this way by design or whether it's a logic error.
If it's a logic error, I advise updating the code to:
hasFare = populateFare(fare, currency, fareType, rides, fareRules) || hasFare;
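The difference is easy to see in isolation: with plain assignment only the last iteration's result survives, while the OR-assignment sticks at true once any fareType succeeds. A minimal sketch of the two loop variants, using made-up boolean results standing in for the four fareTypes above:

```java
public class HasFareSketch {
    // Simulates the per-fareType results from the example above:
    // three types succeed, the last one fails.
    static final boolean[] RESULTS = { true, true, true, false };

    public static boolean lastWins() {
        boolean hasFare = false;
        for (boolean r : RESULTS) {
            hasFare = r;                // current code: overwritten each pass
        }
        return hasFare;                 // false, so _getCost would return null
    }

    public static boolean accumulated() {
        boolean hasFare = false;
        for (boolean r : RESULTS) {
            hasFare = r || hasFare;     // proposed fix: sticks at true
        }
        return hasFare;                 // true, so the fare would be returned
    }
}
```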
Expected: routing uses only ways at levels between building:min_level and building:levels to route from a building.
Actual: routing uses level 1 if a path near (or under) the building is present.
N/A
HSL Reittiopas Web interface as of 26 Nov 2017.
Query a route from a building with building:min_level above 1 and complex paths near it. In this case, Kirjokansi 2 A, Espoo to Heikintori in Reittiopas is a good example. Routing seems to assume a footpath at level 1 (under the building, in a shopping mall) is accessible directly from the building. The footpath on level 4 or the pedestrian area on level 6, both levels included in the building (>= building:min_level), should be used instead.
This seems to be sensitive to the walking distance preference. These particular queries show unintuitive routes of different kinds (both routed through level 1) at the time of writing:
If there's an alternative solution to this routing issue, I'd welcome hearing of it as an OSM contributor. The number of such buildings is going to increase in the HSL area as a result of the new 3D combined metro station - shopping mall - residential building complexes being built.
This should be loaded from translations.txt. This is currently pending the integration of the new GTFS loader in OTP, as the OBA loader does not support translations and is not easily extendable.
When I try to run the docker image I get the following:
Error reading bike rental feed from http://digitransit-proxy:8080/out/helsinki-fi.smoove.pro/api-public/stations
java.net.UnknownHostException: digitransit-proxy
docker run --rm --name otp-hsl -p 9080:8080 -e ROUTER_NAME=hsl -e JAVA_OPTS=-Xmx5g -e ROUTER_DATA_CONTAINER_URL=https://api.digitransit.fi/routing-data/v2/hsl hsldevcom/opentripplanner
Since PR #94 merging, Lucene indexing is failing:
19:42:27.156 INFO (LuceneIndex.java:148) Starting background Lucene indexing. 19:42:38.224 INFO (ExecutionStrategy.java:42) Exception while fetching data java.lang.RuntimeException: Lucene indexing failed. at org.opentripplanner.common.LuceneIndex.index(LuceneIndex.java:100) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.common.LuceneIndex.access$200(LuceneIndex.java:47) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.common.LuceneIndex$BackgroundIndexer.run(LuceneIndex.java:149) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.common.LuceneIndex.<init>(LuceneIndex.java:68) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.routing.graph.GraphIndex.getLuceneIndex(GraphIndex.java:449) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.index.IndexGraphQLSchema.lambda$new$242(IndexGraphQLSchema.java:1943) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at graphql.execution.ExecutionStrategy.resolveField(ExecutionStrategy.java:40) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.index.TimedExecutorServiceExecutionStrategy.lambda$execute$332(TimedExecutorServiceExecutionStrategy.java:57) [otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_66-internal] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_66-internal] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_66-internal] at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal] Caused by: java.nio.channels.ClosedByInterruptException: null at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) ~[na:1.8.0_66-internal] at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:970) ~[na:1.8.0_66-internal] at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:283) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at 
org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:228) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:195) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.store.Directory.copy(Directory.java:185) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.store.TrackingDirectoryWrapper.copy(TrackingDirectoryWrapper.java:50) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4672) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:535) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:502) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:508) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:380) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:472) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1534) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1204) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1185) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.common.LuceneIndex.addStop(LuceneIndex.java:114) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] at org.opentripplanner.common.LuceneIndex.index(LuceneIndex.java:85) ~[otp-1.1.0-SNAPSHOT-shaded.jar:1.1] ... 11 common frames omitted
Please fix :)
Absolute stoptimes that can be converted to a unix time, and on to valid Date objects, require a serviceDay value. This value is returned for stoptimes in nearby-departures and stoptimesForDate(serviceDay: ) queries.
{
"scheduledArrival": 74400,
"realtimeArrival": 74400,
"scheduledDeparture": 74400,
"realtimeDeparture": 74400,
"serviceDay": 1485122400
}
But it is not included in the Legs returned by plan queries. As serviceDay changes at different times depending on the route, not at midnight, there seems to be no reliable way to get the correct serviceDay for stoptimes of trips of planned Legs, especially at night.
Add serviceDay as a value to Legs that are transitLegs.
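For context, the reason serviceDay matters is that the absolute time of a stoptime is obtained by adding the stoptime's seconds-since-service-day to it. A minimal sketch of that conversion, using the example values above (the class and method names are ours, for illustration only):

```java
import java.time.Instant;

public class ServiceDaySketch {
    // Stoptime fields like scheduledArrival are seconds from the start of
    // the service day; serviceDay itself is a unix timestamp. Their sum is
    // the absolute unix time of the stoptime.
    public static long toUnixTime(long serviceDay, long secondsSinceServiceDay) {
        return serviceDay + secondsSinceServiceDay;
    }

    public static void main(String[] args) {
        // Values from the example response above.
        long t = toUnixTime(1485122400L, 74400L);
        System.out.println(t + " = " + Instant.ofEpochSecond(t));
    }
}
```

Without serviceDay on a transit Leg there is no reliable left-hand operand for this sum, which is exactly the gap this request is about.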
pathway_mode would be read as defined in the spec.
OTP requires a pathway_type field, that is not in the spec (seems to be pathway_mode under another name)
891117f (master)
IFDM dataset from Paris.
Use a GTFS file with a pathway.txt file.
Thanks for your help!
In many cases it is useful to get the first/last stop and departure/arrival time for a trip, perhaps for a large number of trips or patterns using a single query.
While the current GraphQL API allows querying all stops and stoptimes to get to this information, it feels cumbersome at best. Having to query every stop and stoptime for long routes with many stops currently makes it slow or difficult to get to this data.
If possible, add departureStop, arrivalStop, departureStoptime, arrivalStoptime to all elements that support these.
Dear all,
Would it be possible to pull PR opentripplanner#2271 into your OTP fork, to make graph building faster? It really improves building time for Paris (a 2-fold decrease). BTW, there may also be some other interesting things to merge from upstream master.
Thanks :)
Specify how our minimum routing should be improved. See #23.
Questions:
Come up with a list of factors that affect the calibration weights, e.g. price, waiting location, CO2, EU directives, traffic coordination (liikenteenohjaus), trustworthiness, where I have to walk, weather, ...
We should go through the current implementation docs and see what values are used there.
As you can see below, addresses (names) are not used anymore in Origin and Destination. A fix has been committed upstream: opentripplanner@08470b8; please fix :)
The image should be deployed successfully.
When attempting to deploy image in cloud run I get the following error:
Error retrieving graph source bundle https://api.digitransit.fi/routing-data/v2/finland/router-finland.zip from otp-data-server... retrying in 5 s...
I think it could be related to the API key that is now needed to access the routing data.
I used the latest image from docker hub: https://hub.docker.com/r/hsldevcom/opentripplanner
On the cloud shell of Google Cloud platform (linux) I used this:
docker pull hsldevcom/opentripplanner:latest
docker tag hsldevcom/opentripplanner:latest gcr.io/helsinki-transport/hsldevcom/opentripplanner:latest
docker push gcr.io/helsinki-transport/hsldevcom/opentripplanner:latest
These commands pull, tag, and push the container image to the container registry of GCP.
On GCP, open the cloud shell and run the commands above.
Then try to load container image in cloud run.
I hope I have provided enough information. Let me know if you need any more details.
We want to have the following in Finnish, Swedish, Sami and English (if available):