Comments (20)
I have just released versions 4.0.2 and 3.7.4, both including the fix for this bug.
from shlink.
> Here's the log related to `GeoLite2 database download failed`
Yeah, that's basically showing that Shlink successfully downloaded new versions of the database on every visit until it reached the API limit; all the `GeoLite2 database download failed` entries after that are due to the limit, with Shlink still attempting a download every time.
Unfortunately, it does not explain why Shlink still thought a new copy needed to be downloaded when it already had a fresh one.
The only solution I can think of is to change how Shlink decides when a new copy is needed. Potential options:
- Track when the last successful download happened, and attempt new downloads only after a certain period of time. This is tricky, though, as it needs to be tracked per instance, not in a central place like the database. There are also many edge cases that would need to be handled.
- Query the API to see if there's a new version available, and download it only in that case. I don't even know if this is supported, or whether those "queries" would also count toward the download limit, rendering this useless. It might also add extra latency.
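The first option above could be sketched roughly like this (a Python illustration only; Shlink itself is PHP, and the state file name and cooldown value are made-up assumptions):

```python
import json
import os

# Hypothetical sketch of the "track last attempt" option: remember the
# timestamp of the last download attempt in a per-instance state file,
# and only retry once a cooldown has elapsed.
COOLDOWN_SECONDS = 24 * 60 * 60  # assumption: at most one attempt per day

def may_attempt_download(state_file: str, now: float) -> bool:
    """Return True and record the attempt if the cooldown has passed."""
    if os.path.exists(state_file):
        with open(state_file) as f:
            last_attempt = json.load(f)["last_attempt"]
        if now - last_attempt < COOLDOWN_SECONDS:
            return False  # too soon, skip this attempt
    with open(state_file, "w") as f:
        json.dump({"last_attempt": now}, f)
    return True
```

Note this is exactly the per-instance state the option warns about: the file lives next to each instance rather than in a central place like the database.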
For context, the way it works now is that Shlink reads the database metadata, looking for a value that tells when it was built. If a certain amount of days has passed (35, if I remember correctly), or the database does not exist at all, it tries to download it.
It is very straightforward, has very low impact, and keeps the GeoLite file as the single source of truth, which is convenient, but it is clearly not covering some particular scenario that I'm missing.
from shlink.
Can you try opening a shell inside the container and running `shlink visit:download-db -vvv`?
from shlink.
Actually, the loop stopped today; the last record is from this afternoon:

| Filename | Date | IP Address | City | Region | Country | ISP | Org |
|---|---|---|---|---|---|---|---|
| GeoLite2-City_20240220.tar.gz | 2024-02-22 15:07:27 | 20.74.17.xxx | Paris | Paris | France | Microsoft Azure | Microsoft Azure |
from shlink.
Here's the log related to `GeoLite2 database download failed`
from shlink.
> Can you try opening a shell inside the container and running `shlink visit:download-db -vvv`?

```
docker exec fierte.pm shlink visit:download-db -vvv
[INFO] GeoLite2 db file is up to date.
```
from shlink.
Sometimes there are issues with some of their files, where the metadata Shlink uses to determine whether an update is needed is wrong.
That's probably what happened this time, which likely caused your instance to think an update was needed on every new request and to re-download the same file again and again.
I tried to update the file in my own instance, but it skipped that version, and the most recent one seems to be fine.
I'm going to try to verify if this is the case.
from shlink.
Thank you, because they moved us from 1000 downloads a day to 30 on the free plan, so we must be careful.
from shlink.
> Thank you, because they moved us from 1000 downloads a day to 30 on the free plan, so we must be careful.

Ouch! Do you have some link where this is explained? I would like to reference it from the docs.
If I manage to confirm this was the problem, I'll try to find some way to mitigate it.
from shlink.
> > Thank you, because they moved us from 1000 downloads a day to 30 on the free plan, so we must be careful.
>
> Ouch! Do you have some link where this is explained? I would like to reference it from the docs.

Here: https://comms.maxmind.com/daily-download-limit-decreasing-2
from shlink.
Just checked the file from the 9th of February, and the metadata is correct. Shlink should not have tried to download it over and over.
The logic basically compares the GeoLite file's build time and checks if it's more than 35 days old, in which case it tries to download a new copy.
This is done with concurrency in mind, so a lock is set until download ends, to avoid multiple downloads in parallel.
Other potential reasons for this to happen are that there was not enough disk space to decompress the file after downloading it, or perhaps an issue with the system date that made Shlink think it was in the future.
I'll keep this open for now to see if I can think of some way to make the process more resilient.
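The locking idea can be illustrated with a non-blocking file lock (a Python sketch using Unix `flock`; Shlink's actual lock is implemented differently, in PHP):

```python
import fcntl
import os

# Illustrative sketch of "a lock is set until download ends": take an
# exclusive, non-blocking lock so concurrent requests don't start
# parallel downloads of the GeoLite file.
def try_download_with_lock(lock_path: str, download) -> bool:
    """Run download() only if no other process holds the lock."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    try:
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # another download is already in progress
        download()
        return True
    finally:
        os.close(fd)  # closing the descriptor releases the lock
```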
from shlink.
Got the same bug last week, and I also received a download limit reached notification from MaxMind.

> Other potential reasons for this to happen are that there was not enough disk space to decompress the file after downloading it

In my case the server has enough disk space to handle the file.
I've restarted the Shlink service to see if it will work.
from shlink.
Could any of you check if your instances have some log entry starting with `GeoLite2 database download failed`?
from shlink.
There was a new report of this issue, and in it, it was mentioned this was happening with orphan visits specifically.
I checked the log provided here again, and I noticed there are many attempts at downloading the database as a result of an orphan visit.
I also see some attempts which do not seem to be linked to a particular request happening instants before them, though. @sparanoid, could it be that you have some scheduled task to periodically download the GeoLite file, or that the logs were manipulated to remove sensitive information?
from shlink.
I haven't looked too closely at the code, but it appears that you are downloading the file to a temporary file and then copying it to the final location. This could potentially result in a corrupted file if multiple requests are going at once. To prevent this, you could either write the file atomically or take out appropriate locks (or preferably both).
In order to write the file atomically, you should download it to the same directory as the final file to ensure the file is on the same file system, decompress it, and then rename the file to the final file name. You would either want to take a lock to ensure that no other request is writing to the same temporary files at the same time or you would want to use random names for the temporary files.
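The atomic-write recipe described above can be sketched like so (Python for illustration; `os.replace` performs an atomic rename when source and destination are on the same file system):

```python
import os
import tempfile

# Sketch of atomic replacement: write to a randomly named temp file in
# the destination directory (same file system), then rename into place.
def write_atomically(dest_path: str, data: bytes) -> None:
    dest_dir = os.path.dirname(os.path.abspath(dest_path))
    # mkstemp gives a random name, avoiding races between concurrent requests
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, dest_path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

Readers of `dest_path` never see a partially written file: they observe either the old copy or the fully written new one.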
Some other thoughts:
- Comparing the metadata time and the system time could result in excess downloads if the system time is off.
- What happens if the file system is read-only, or `open_basedir` is enabled and the database path is outside of it?
Edit: I was looking at the code in shlink-ip-geolocation when I commented above, and missed this code in this repo: shlink/module/CLI/src/GeoLite/GeolocationDbUpdater.php (lines 41 to 49 in e244b2d).
I didn't look into how that locking works, but presumably it prevents multiple downloads at once.
from shlink.
I'm having the same problem with my instance, which just started happening in the last few days.
from shlink.
> I was looking at the code in shlink-ip-geolocation when I commented above, and missed this code in this repo: shlink/module/CLI/src/GeoLite/GeolocationDbUpdater.php (lines 41 to 49 in e244b2d). I didn't look into how that locking works, but presumably it prevents multiple downloads at once.

Yes, that's correct. That lock prevents multiple downloads in parallel.
from shlink.
I have a suspicion of what the problem could be. There might be some stateful service somewhere down the dependency tree that's keeping a reference to the old database file metadata, making every check resolve that the file is too old and resulting in a new download.
from shlink.
@oschwald, answering your comments:

> In order to write the file atomically, you should download it to the same directory as the final file to ensure the file is on the same file system, decompress it, and then rename the file to the final file name. You would either want to take a lock to ensure that no other request is writing to the same temporary files at the same time

This is exactly how it's done.

> Comparing the metadata time and the system time could result in excess downloads if the system time is off.

I thought about this, but the time would have to be several days off, so I think it's a negligible risk.
If someone really has such a messed-up system time, I think it's reasonable to expect the admins to fix that, rather than expecting Shlink to work around the problem.
Ultimately, any solution that does not make a lot of MaxMind API requests would be time-based one way or another, so there's not much that can be done here.

> What happens if the file system is read-only or `open_basedir` is enabled and the database path is outside of it?

Then nothing can be done and GeoLite files won't be downloaded. It's an unfortunate limitation due to how GeoLite db files work.
In any case, this already happened not long ago. The solution involved making sure Shlink only tries to write in its own data directory and, incidentally, in the tmp dir due to some external dirs.
from shlink.
> I have a suspicion of what could be the problem. There might be some stateful service somewhere down the dependency tree, that's keeping a reference to the old database file metadata, making every check resolve that the file is too old, resulting in a new download.

I can confirm this is the problem. There's an unintentionally stateful service that reads the GeoLite file metadata when created and holds it in memory, making every check think the database is too old.
This affects all versions of Shlink, so I will try to backport the fix to v3.x if it's not too complex.
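The kind of unintentionally stateful service described here can be illustrated like this (a hypothetical Python sketch, not Shlink's actual PHP service):

```python
from datetime import datetime, timedelta

# Buggy pattern: the build time is read once, at construction, and never
# refreshed, so every later check compares against a stale timestamp.
class CachedMetadataReader:
    def __init__(self, read_build_time):
        self._build_time = read_build_time()  # cached and never refreshed

    def is_too_old(self, now: datetime, max_age_days: int = 35) -> bool:
        return now - self._build_time > timedelta(days=max_age_days)
```

Even after a fresh download replaces the file on disk, every check still sees the cached timestamp, so the service keeps concluding that a new download is needed.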
from shlink.