
flexo's People

Contributors

aude, cmsigler, gyscos, n8henrie, nroi, ppickfor

flexo's Issues

Cleaning package cache based on access time

Flexo already provides the option to automatically clean the package cache by retaining no more than x versions of each package. However, it can still happen that a package is downloaded once by a client and then uninstalled by that client, so the package remains in the cache indefinitely even though no client needs it anymore.

paccache provides the -u, --uninstalled flag for this use case, but we cannot provide something similar for Flexo: since Flexo runs on a server, it doesn't know which packages are installed on the various clients that use it.

As a workaround, we should allow the user to define something like the following in the configuration file:

retain_for_duration = "9 months"

Flexo will then remove all packages older than the given duration, even if that means that fewer than num_versions_retain versions are retained in the cache.

We are already using the humantime library to parse human-readable durations, so implementing this should be easy.

If this option is not defined in the configuration file, the default should be to not delete any packages based on the access time.
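
A minimal sketch of how such an age-based purge could look, assuming the configured duration is parsed with humantime and the file's access time is consulted. The helper below is hypothetical, not part of Flexo, and only walks a single directory level:

use std::fs;
use std::path::Path;
use std::time::{Duration, SystemTime};

// Hypothetical helper: remove cached packages whose last access time is
// older than the configured retain_for_duration.
fn purge_by_access_time(cache_dir: &Path, retain_for_duration: &str) -> std::io::Result<()> {
    // humantime parses strings like "9 months" into a std::time::Duration.
    let retain_for: Duration = humantime::parse_duration(retain_for_duration)
        .expect("invalid retain_for_duration");
    let cutoff = SystemTime::now() - retain_for;
    for entry in fs::read_dir(cache_dir)? {
        let entry = entry?;
        let metadata = entry.metadata()?;
        if !metadata.is_file() {
            continue;
        }
        // Fall back to the modification time if the access time is unavailable.
        let last_used = metadata.accessed().or_else(|_| metadata.modified())?;
        if last_used < cutoff {
            fs::remove_file(entry.path())?;
        }
    }
    Ok(())
}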

Provide multiarch Docker image

I am running flexo on a Raspberry Pi in a Docker container and love it so far. My only problem is the lack of an ARM image for Docker, so I currently build it myself locally. Luckily, no changes to the Dockerfile or flexo itself were necessary! But it would be nice if an official multi-arch image were provided so watchtower can automatically update flexo on my machine.

Crash on flexo startup

Everything has been working brilliantly for the past few weeks, but upon attempting an update today I've realised the service has not been running and is currently crashing on startup. I don't recall making any changes to flexo.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ParseIntError { kind: Empty }', src/mirror_flexo.rs:213:25
stack backtrace:
   0:     0x55f9aaf03455 - <unknown>

Please let me know if you need any more info and thank you for your help.

Useless warnings appear when cleaning the package cache

Since pacman introduced the parallel downloads feature, warning messages like the following can appear in the log:

Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.063Z DEBUG flexo] Purging package cache
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.063Z DEBUG flexo] Purging package cache
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.063Z DEBUG flexo] Purging package cache
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.063Z DEBUG flexo] Purging package cache
Jun 18 19:50:28 archWS flexo[154603]: ==> finished: 1 packages removed (disk space saved: 9.1 MiB)
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.242Z DEBUG flexo] Package cache purged successfully
Jun 18 19:50:28 archWS flexo[154603]: stat: cannot statx '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: rm: cannot remove '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.244Z WARN  flexo] Unable to purge package cache: /usr/bin/flexo_purge_cache has exited with failure (exit code Some(123))
Jun 18 19:50:28 archWS flexo[154603]: stat: cannot statx '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: rm: cannot remove '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.244Z WARN  flexo] Unable to purge package cache: /usr/bin/flexo_purge_cache has exited with failure (exit code Some(123))
Jun 18 19:50:28 archWS flexo[154603]: stat: cannot statx '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: rm: cannot remove '/var/cache/flexo/pkg/community/os/x86_64/0ad-a23.1-9-x86_64.pkg.tar.zst': No such file or directory
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.245Z WARN  flexo] Unable to purge package cache: /usr/bin/flexo_purge_cache has exited with failure (exit code Some(123))
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.249Z DEBUG flexo] File "/var/cache/flexo/pkg/community/os/x86_64/.0ad-a23.1-9-x86_64.pkg.tar.zst.cfs" is no longer required and therefore removed.
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.250Z WARN  flexo] Unable to remove file "/var/cache/flexo/pkg/community/os/x86_64/.0ad-a23.1-9-x86_64.pkg.tar.zst.cfs": Os { code: 2, kind: NotFound, message: ">
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.251Z WARN  flexo] Unable to remove file "/var/cache/flexo/pkg/community/os/x86_64/.0ad-a23.1-9-x86_64.pkg.tar.zst.cfs": Os { code: 2, kind: NotFound, message: ">
Jun 18 19:50:28 archWS flexo[154603]: [2021-06-18T17:50:28.252Z WARN  flexo] Unable to remove file "/var/cache/flexo/pkg/community/os/x86_64/.0ad-a23.1-9-x86_64.pkg.tar.zst.cfs": Os { code: 2, kind: NotFound, message: ">

It's safe to ignore these warnings, but they are annoying.

The problem is probably caused by having multiple cleaning processes running in parallel.

Include connect_timeout option for latency tests

DNS lookups can take a bit of time, but DNS requests are cached, so we don't want to exclude mirrors just because the DNS lookup took a bit longer than usual. However, we currently only use a single timeout in our latency tests, which is the total timeout of the entire request.

We should also set a separate connect_timeout so that we can exclude high-latency mirrors without penalizing mirrors whose DNS lookup just took a bit longer than usual.
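
As a rough sketch of how that could look with the curl crate (the timeout values below are made up for illustration and are not Flexo's defaults):

use std::time::Duration;
use curl::easy::Easy;

// Sketch of a latency probe with two separate timeouts.
fn probe(url: &str) -> Result<Duration, curl::Error> {
    let mut handle = Easy::new();
    handle.url(url)?;
    handle.nobody(true)?;                                 // header-only request, we only care about timing
    handle.connect_timeout(Duration::from_millis(750))?;  // TCP/TLS connect only
    handle.timeout(Duration::from_secs(5))?;              // entire request, including DNS
    handle.perform()?;
    handle.connect_time()                                 // time until the connection was established
}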

Feature request -- Specify country/country list for mirror auto-selection

Hi,

I feel this falls into a general category of "Improve 'auto' mirror selection", but that seems too general for a bug report IMHO, so I would like to propose a feature request instead. As always, I hope this helps :)

My feature request: Please consider a variable to restrict selected mirrors to a single country or a list of countries.

My mirror auto-selection seems not to have worked too well :( Perhaps I can help because I'm a US user(?). I find I have mirrors from Sweden and Australia in my auto-selection list, as well as the United States and Canada (Canada should be fine, of course). (FYI, I have 5 US, 2 Canada, and 1 each from Australia and Sweden. Also, of the 5 US mirrors flexo chose, only one was in my old mirrorlist, which I had selected via rankmirrors from exclusively US mirrors :( And I have learned how to use rankmirrors to manually tune my mirrorlist very efficiently, again, FWIW....)

If there were a config option in flexo.toml to restrict the countries from which mirrors are chosen, this could be improved, I propose. Just my 2c, which I hope you find useful and something you wish to implement :)

Thank you!

Clemmitt

Edit: Turns out I have only one feature request. I just discovered that the auto-selected mirrorlist can be regenerated simply by restarting flexo :)

Flexo cannot save and serve files on linux 5.10.3 and OpenZFS 2.0.0

Hi,

maybe this is more of an FYI...

I am using flexo with linux 5.10.3 (the zfs kernel) and OpenZFS 2.0.0; coming from 5.9.14, everything was working.

From my research, I found it is due to a change in the kernel and how zfs does (or does not) implement sendfile (or some sort of directIO)...
(I had the sendfile issue with nginx yesterday and had to remove it to get it up and running.)

Here is a link to the issue, openzfs/zfs#1156, and another PR working on directIO: openzfs/zfs#10018

So my guess is that flexo does some kind of "directIO" and this is failing now (with my setup).

However, I can confirm that it is still running with 5.10.3 and ext4 ...
So the problem is likely to be outside of flexo, I assume... but now you know about it ;-)

Thanks

archzfs.db URL is not being created correctly

Thank you for adding custom repos in #41! I thought it was working properly, but I think I was wrong -- I kept seeing 404s reported when I updated my databases, and ls /var/cache/flexo/pkg/archzfs/os/x86_64/ shows all zero-length files.

$ yay
[sudo] password for me: 
:: Synchronizing package databases...
 core is up to date
 extra is up to date
 community                                                                                                           5.5 MiB   638 KiB/s 00:09 [########################################################################################] 100%
error: failed retrieving file 'archzfs.db' from myserver:7878 : The requested URL returned error: 404

I think my config is identical to the example in the readme and the example in /etc/flexo/flexo.toml:

On client:

[archzfs]
Server = http://myserver:7878/custom_repo/archzfs/$repo/os/$arch

On server:

[[custom_repo]]
    name = "archzfs"
    url = "https://archzfs.com"

When I added an extra line for some debug output I could see it is requesting
https://archzfs.com/archzfs/os/x86_64/archzfs.db, but the actual URL is https://archzfs.com/archzfs/x86_64/archzfs.db (no os).

Looking more closely, I just now noticed that it looks like the URI is properly shown (without os) in the readme, but incorrect in the comments in flexo.toml. PR incoming.

Use libc from crates.io, not the github repository

We're currently using our own forked version of libc in order to be able to use sendfile64. Now that our PR has been merged, we should wait for the libc team to push a new release to crates.io and then switch back to the official crate.

Support HTTP range requests

HTTP range requests are used by pacman if the file is already partially available on the local file system, so that pacman does not download the entire file.

Notice that there is one special case which complicates matters:
Assume that the client sends a request to download a file starting at byte b. Assume that flexo is not able to fulfill this request from the local cache because we have only s bytes of this file available locally, where s < b. So in that case, we need to fetch the file from a remote mirror. What range are we going to request from the remote mirror?

  1. Requesting the range s- (i.e., everything starting from byte s) would allow us to concatenate the payload to the existing file. This would not work as intended if b is much larger than s: Imagine the client already has 1 GiB of the file available locally, but flexo does not have this file available locally yet. In this case, flexo would first have to download 1 GiB from the remote mirror before it could start sending data to the client. With a slow internet connection, the client could time out before it receives the first byte.
  2. Requesting the range b- would allow us to start sending data to the client immediately, so we would not run into timeouts or anything similar. But the remaining question is: how should we store the payload we receive from the server? Flexo is currently based on the assumption that all locally stored files start at byte 0, and not some arbitrary byte b. Storing incomplete files that lack data at their beginning (not their end) would make things much more difficult.

Since we assume that the case described above is a rare special case, we choose the following workaround:
Instead of sending a request to the remote mirror, we just respond with a redirect so that the client fetches the file directly from the remote mirror.
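
A small sketch of that decision logic (type and function names are hypothetical, not Flexo's actual ones):

// Hypothetical sketch of the decision described above.
enum RangeResponse {
    // We already have at least `resume_from` bytes cached locally, so the
    // request can be served from the (possibly still growing) cached file.
    ServeFromCache { start_at: u64 },
    // The client asks for an offset beyond what is cached; instead of first
    // downloading the missing prefix, redirect the client to the mirror.
    RedirectToMirror { location: String },
}

fn handle_range_request(resume_from: u64, cached_bytes: u64, mirror: &str, path: &str) -> RangeResponse {
    if resume_from <= cached_bytes {
        RangeResponse::ServeFromCache { start_at: resume_from }
    } else {
        RangeResponse::RedirectToMirror {
            location: format!("{}/{}", mirror.trim_end_matches('/'), path),
        }
    }
}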

Support to Serve Packages for ARM

Hi @nroi,

first of all thanks for further improving/evolving cpcache as flexo. 👍

I would like to ask whether it would be possible to cache and serve packages for architectures other than x86_64, especially arm*?

I gave it a first quick try and it did not work. I also tried to add an ARM mirror, but this did not help either.

Thanks.

Latency tests don't work as intended (mirror auto-selection)

The latency tests are intended to select low-latency mirrors, so we would expect users from the USA to get mirrors from America, users from Australia to get mirrors from Australia, etc. Apparently, this is not the case: See #28

My mirror auto-selection seems not to have worked too well :( Perhaps I can help because I'm a US user(?). I find I have mirrors from Sweden and Australia in my auto-selection list, as well as United States and Canada

Fallback strategy if fetching data from remote JSON endpoint has failed

When Flexo starts and the latency results are not available yet or obsolete, it fetches the JSON document from https://archlinux.org/mirrors/status/json/ and deserializes the JSON content. There are various reasons why this can fail:

  • The ArchLinux server can be offline for maintenance.
  • The URL could have changed (in fact, this has already happened in the past).
  • The JSON format could have changed, for example, a field that used to be included in the past might have been removed.

To make Flexo more robust, we should include a fallback strategy to cope with this. Current plan (a sketch follows the list):

  • Include a new setting such as mirrors_auto.mirrors_status_json_endpoint_fallback.
  • By default, this setting is set to something like file:///var/cache/flexo/state/mirrorlist_fallback.json (the nice thing is that we use CURL to fetch the JSON, and CURL understands file:// just as well as it does https://).
  • Flexo updates this file (/var/cache/flexo/state/mirrorlist_fallback.json) as soon as it has successfully fetched, and deserialized, the mirrors from the remote endpoint. Make sure that the existing file is overwritten only after the deserialization has succeeded!
  • When Flexo starts, and it has failed to fetch and deserialize the content from https://archlinux.org/mirrors/status/json/, then it will fetch the JSON from the fallback file.
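
A sketch of the fallback logic under the assumptions above; the MirrorStatus type and the function names are hypothetical stand-ins:

use std::fs;
use std::path::Path;

// Hypothetical stand-in for the deserialized mirror status document.
#[derive(serde::Deserialize)]
struct MirrorStatus { /* fields omitted for this sketch */ }

// Fetch a URL with libcurl; libcurl understands file:// URLs just as well as
// https:// URLs, so the same code path works for the fallback file.
fn fetch_json(url: &str) -> Result<String, Box<dyn std::error::Error>> {
    let mut handle = curl::easy::Easy::new();
    handle.url(url)?;
    let mut buf = Vec::new();
    {
        let mut transfer = handle.transfer();
        transfer.write_function(|data| {
            buf.extend_from_slice(data);
            Ok(data.len())
        })?;
        transfer.perform()?;
    }
    Ok(String::from_utf8(buf)?)
}

fn load_mirror_status(primary_url: &str, fallback_file: &Path) -> Result<MirrorStatus, Box<dyn std::error::Error>> {
    if let Ok(body) = fetch_json(primary_url) {
        if let Ok(status) = serde_json::from_str::<MirrorStatus>(&body) {
            // Overwrite the fallback file only after deserialization succeeded.
            fs::write(fallback_file, &body)?;
            return Ok(status);
        }
    }
    // The remote fetch or the deserialization failed: use the local fallback.
    let body = fs::read_to_string(fallback_file)?;
    Ok(serde_json::from_str(&body)?)
}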

Removal of outdated packages without paccache

Thank you for this wonderful tool!

I'm using https://github.com/docker-flexo/docker-flexo which is based on Debian, and I'm hosting it on an Ubuntu server.

Furthermore I'm using a named docker-compose volume to store the cache like so

  flexo:
    restart: ${RESTART-unless-stopped}
    image: nroi/flexo
    container_name: flexo
    volumes:
      - flexo-cache:/var/cache/flexo

All this put together makes it a challenge to run paccache -r. Currently the best solution is to spin up a temporary docker container using the archlinux image, mount the named volume, and run the paccache -r command from the README.

While this works, it would be nice if flexo would prune old packages on its own, without the use of paccache -r.

Is this in scope?

Flaky test cases in docker tests

The problem seems to be that, for some reason, during our integration tests all services can suddenly stall completely: they are completely frozen and do not respond to HTTP requests or execute any code at all. It only takes a few seconds or a few hundred milliseconds until they "unfreeze", but this still causes failing test cases.

The problem has been analyzed on the timeouts-wip branch; this is what is known so far:

  • The problem is related to Docker, and not the Rust program. To verify this, we have added an NGINX service to our docker-compose.yml and written a bash script test_stalling.sh: We can start our docker services with ./docker-compose and then run the test_stalling.sh script. It may take more than one attempt to reproduce the issue, but the test_stalling script should eventually output something like >>> Threshold exceeded: 1.625723038

  • The problem is not restricted to a single docker service: it seems that all docker services are stalling completely. We can run the following two statements in two parallel bash sessions:

    while true; do curl -f -s http://127.0.0.1:8088 > /dev/null && date +%s%N; sleep .2; done | ./analyze_nanos.py
    while true; do curl -f -s http://127.0.0.1:8099 > /dev/null && date +%s%N; sleep .2; done | ./analyze_nanos.py

    and we will see that the error occurs at both bash sessions at the same time.

  • It's not just the docker services that have been started with this particular compose file that are stalling. For example, if we start a third NGINX server outside of the docker-compose file, like so:

    docker container run -it --rm -p 8077:80 nginx:1.19.1-alpine

    then we can verify that this NGINX instance stalls at the exact same moment the other NGINX instances stall.

Allow human-readable strings for low_speed_limit setting

It would be more convenient for users if it were possible to define a human-readable value for curl's low_speed_limit setting, for example, 20MiB/s or "20MBit". With the current implementation, users need to calculate this value themselves.

Implementation draft (a parser sketch follows the list):

  • A new setting low_speed_limit_formatted of type Option<String> is introduced, in addition to the existing low_speed_limit setting.
  • If both the low_speed_limit and the low_speed_limit_formatted settings are defined, the low_speed_limit setting is ignored in favor of the low_speed_limit_formatted setting.
  • A parser converts the value from String to u32 as follows:
    • If the value includes digits only, the value is just converted from String to u32.
    • If the value ends with something like KiB/s or MiB/s, then the numeric prefix is multiplied by 2^10 or 2^20, respectively.
    • If the value ends with something like KBit/s or MBit/s, then the numeric prefix is divided by 8, then multiplied by 10^3 or 10^6, respectively.
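
A rough sketch of such a parser; the set of recognized suffixes is only an example of the idea described above, not a definitive list:

// Hypothetical parser for human-readable speed limits, e.g. "512", "20MiB/s"
// or "100KBit/s". Returns the limit in bytes per second.
fn parse_low_speed_limit(value: &str) -> Option<u32> {
    let value = value.trim();
    if value.chars().all(|c| c.is_ascii_digit()) {
        // Digits only: the value is already given in bytes per second.
        return value.parse().ok();
    }
    let suffixes: &[(&str, f64)] = &[
        ("KiB/s", 1024.0),
        ("MiB/s", 1024.0 * 1024.0),
        ("KBit/s", 1_000.0 / 8.0),
        ("MBit/s", 1_000_000.0 / 8.0),
    ];
    for &(suffix, factor) in suffixes {
        if let Some(prefix) = value.strip_suffix(suffix) {
            let number: f64 = prefix.trim().parse().ok()?;
            return Some((number * factor) as u32);
        }
    }
    None
}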

Proposed project icon(?)

I'm not sure if this would qualify as "fair use" or not... however....

Please close if inappropriate; HTH :)

Clemmitt
[attached image: flexo_icon]

Make local pacman use flexo's cache

It would be really great to somehow make local pacman use flexo's cache as a cache directory for local system updates.

So for example:

  • Suppose I have a certain package in my flexo cache and I wish to pacman -S that package.
    My suggestion is not to have pacman download this package from the localhost flexo service (thus creating a wasteful copy of the file),
    but instead to install it directly from flexo's cache!

I've thought of a few approaches:

  1. Encourage pacman to officially support storing and using its system cache in flexo's format (aka the mirror-able tree structure).
  2. Include a configurable feature in flexo that will create a symlink for every package;
    • In /var/cache/flexo/pkg/ create a new directory (let's call it ./pacman/ for demonstration);
    • Symlink all latest versions of all packages into /var/cache/flexo/pkg/pacman/*
    • Point your local /etc/pacman.conf to the new cache directory

Edit:
I would gladly contribute but unfortunately I don't know Rust (yet!?).

Compilation fails when off_t is defined as 32 bit integer

We are currently calling libc::sendfile with an offset of type i64. This works fine on machines where off_t is defined as a 64 bit signed integer, but it fails to compile on other machines (for example, ARM devices like the cubieboard or the Raspberry Pi: #14 (comment)).

Notice the following excerpt from man 2 sendfile:

The original Linux sendfile() system call was not designed to handle large file offsets. Consequently, Linux 2.4 added sendfile64(), with a wider type for the offset argument. The glibc sendfile() wrapper function transparently deals with the kernel differences.

By using sendfile64(), compilation should succeed on those devices that have off_t defined as a 32 bit signed integer.
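
A minimal sketch of the idea, assuming the libc crate exposes sendfile64 for the target in question (it does for glibc-based Linux targets):

use std::os::unix::io::RawFd;

// Sketch: call sendfile64 so the offset is always 64 bits wide, independent
// of how off_t happens to be defined on the target (e.g. 32 bits on some
// ARM devices).
fn send_from_offset(out_fd: RawFd, in_fd: RawFd, offset: i64, count: usize) -> std::io::Result<usize> {
    let mut offset: libc::off64_t = offset;
    let sent = unsafe { libc::sendfile64(out_fd, in_fd, &mut offset, count) };
    if sent < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(sent as usize)
    }
}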

Improve mirror selection process

The mirror selection process and the default settings should be improved such that:

  • If the user has a high-latency internet connection, the user will not end up without any mirrors (i.e., Flexo will not abort with the message Unable to find remote mirrors that match the selected criteria.).
  • Mirrors from different locations are considered so that users will run their latency tests against mirrors from the same continent (See #22 for more details: The user is located in Australia, but the latency tests run only against European mirrors).

Compatibility with existing cache directory

Many users will switch to flexo from cpcache, so they will already have the /var/cache/cpcache directory on their file system. To make the switch as easy as possible, we should provide a small guide describing how to transfer or symlink the files from one directory to the other.

Timeout when files are deleted after Flexo has started

Steps to reproduce:

  1. Start Flexo
  2. Remove a package file from /var/cache/flexo/pkg
  3. Try to download this file

Observed behavior:
The downloading client receives no response; it stalls completely for multiple seconds until it aborts the download.

Expected behavior:
The file should be downloaded from the remote mirror.

Random freezes

If anyone else has experienced a similar issue, please leave a 👍 or comment.

For what it's worth, I've been experiencing freezes in the service lately […]

I've not been able to get any response with HTTP, and the log shows nothing happens. Even restarting flexo doesn't work immediately. Then, after maybe 1s-2m, it just starts responding again like nothing happened.

Haven't looked into it, so could definitely be something local, though thought I'd mention anyway just in case.

Originally posted by @aude in #27 (comment)

Compatibility with cpcache.toml configuration file

Many users will switch to flexo from cpcache, so they will already have a cpcache.toml file in their file system. To make sure that switching from cpcache to flexo is painless, we should

  • Ensure that the new TOML file remains compatible with the previous one, or
  • Write a small guide on what adaptations need to be made to use your previous config file in flexo.

Improve error messages when all mirrors fail the latency test

If all mirrors fail the latency tests, the user is shown the message Unable to find remote mirrors that match the selected criteria. (see #22), which isn't very helpful. Improve the error messages to tell the user to increase the latency threshold or the number of mirrors to be tested.

Switch to another mirror if download is too slow

It can happen that even after the initial latency tests ran, the mirror that was chosen is slower than desired. In those cases, flexo should switch to another mirror without the client noticing any interruption in the download progress.

curl already has the CURLOPT_LOW_SPEED_TIME and CURLOPT_LOW_SPEED_LIMIT settings, and flexo already uses them. But currently, flexo just crashes when curl aborts the download because the mirror is slower than allowed by those two settings.
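
For reference, a sketch of how these two settings map to the curl crate; the limit and time below are illustrative placeholders, not Flexo's actual values:

use std::time::Duration;
use curl::easy::Easy;

// If the transfer speed stays below low_speed_limit (bytes/s) for
// low_speed_time, curl aborts the transfer; that is the point where flexo
// should retry with another mirror instead of crashing.
fn configure_speed_guard(handle: &mut Easy) -> Result<(), curl::Error> {
    handle.low_speed_limit(100 * 1024)?;              // 100 KiB/s
    handle.low_speed_time(Duration::from_secs(10))?;  // sustained for 10 seconds
    Ok(())
}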

Slow downloads via Wifi

Downloads are awfully slow if the downloading client is connected via wifi. This happens both in debug mode and in release mode.

Start fails due to invalid data in latency_test_results.json

Short summary: If Flexo fails to start with a message like the following:

Unable to find remote mirrors that match the selected criteria. Please adapt your flexo.toml configuration file.

Remove the file /var/cache/flexo/state/latency_test_results.json and try to restart Flexo.

Long version:

If Flexo is configured to use mirror_selection_method = "predefined", and then reset to mirror_selection_method = "auto", it can happen that the /var/cache/flexo/state/latency_test_results.json contains garbage data like the following:

{
  "version": 3,
  "timestamp": "2021-05-27T16:19:33.197019202Z",
  "download_providers": [
    {
      "uri": "https://mirror.rackspace.com/archlinux/",
      "name": "https://mirror.rackspace.com/archlinux/",
      "mirror_results": {
        "total_time": {
          "secs": 0,
          "nanos": 0
        },
        "namelookup_duration": {
          "secs": 0,
          "nanos": 0
        },
        "connect_duration": {
          "secs": 0,
          "nanos": 0
        },
        "pretransfer_time": {
          "secs": 0,
          "nanos": 0
        },
        "starttransfer_time": {
          "secs": 0,
          "nanos": 0
        }
      },
      "country_code": "Unknown"
    }
  ]
}

Flexo then attempts to find a mirror with the country code "Unknown", which obviously does not exist, and therefore fails to start. To avoid this, Flexo should not store garbage data in /var/cache/flexo/state/latency_test_results.json.
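
One possible safeguard, sketched with hypothetical field names: drop results that carry no usable measurement before persisting, and skip writing the file entirely if nothing remains:

use std::time::Duration;

// Hypothetical shape of a single latency test result.
struct LatencyResult {
    country_code: String,
    total_time: Duration,
}

// Keep only results that carry a real measurement; return None if nothing
// usable remains, so the caller leaves the previously persisted file alone.
fn sanitize(results: Vec<LatencyResult>) -> Option<Vec<LatencyResult>> {
    let usable: Vec<LatencyResult> = results
        .into_iter()
        .filter(|r| r.country_code != "Unknown" && r.total_time > Duration::ZERO)
        .collect();
    if usable.is_empty() {
        None
    } else {
        Some(usable)
    }
}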

Improve robustness for slow, high-latency internet connections

See #52 as an example where a user had issues due to timeouts. It seems that the issue can come up when, for some reason, the connectivity to a given mirror is less than ideal, even though the mirror is located near the user.

Look into this issue more closely and see if this is something where Flexo can be made more robust.

Implement bandwidth-tests in addition to latency-tests

So far, we have only used latency tests, mainly because they don't take as much time as bandwidth tests and because they don't cause as much stress on the remote mirrors. It would still be beneficial to have the option to run bandwidth tests when the user desires to do so, but not in an automated manner, like every time Flexo starts.

Perhaps we could just provide an endpoint so that the user can run an HTTP POST request with curl in order to instruct Flexo to run bandwidth-tests.

Don't abort slow downloads too often

Slow downloads are not always a result of slow mirrors: It could also be that the ISP has issues, or that the bandwidth is currently utilized by another application.

For those cases, we should make sure that Flexo will not go through the entire list of available mirrors, but instead just switch maybe 3 or 5 times and then give up.
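
A sketch of that bounded-retry idea; the limit of 3 switches is an arbitrary placeholder and the function signature is hypothetical:

// Retry a download on a different mirror at most a fixed number of times;
// after that, give up instead of walking the entire mirror list.
const MAX_MIRROR_SWITCHES: usize = 3; // placeholder value

fn download_with_bounded_switching<F>(mirrors: &[String], mut try_download: F) -> Result<(), String>
where
    F: FnMut(&str) -> Result<(), String>,
{
    let mut last_error = None;
    for mirror in mirrors.iter().take(MAX_MIRROR_SWITCHES + 1) {
        match try_download(mirror) {
            Ok(()) => return Ok(()),
            Err(e) => last_error = Some(e),
        }
    }
    Err(last_error.unwrap_or_else(|| "no mirrors configured".to_string()))
}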

Exclude mirrors returning cloudflare pages from latency tests

Some mirrors put Cloudflare in front of the actual server, so the response we obtain in our latency test is not necessarily a response from the actual server, but from Cloudflare. This is a problem if Cloudflare does not return the actual content, but instead asks the user to enter a captcha. This means that:

  1. This mirror is completely unsuited as a mirror because it requires the user to enter a captcha before starting the download, and
  2. The latency result is completely useless, because we intend to measure the latency to the actual server, not to Cloudflare.

Persistent connections (client-to-server)

Flexo currently doesn't support persistent connections for the connection that is established between the client (e.g. pacman) and flexo. Example:

curl -vv $(pacman -Sp file) $(pacman -Sp wget) > /tmp/outt --output /tmp/ffile

Will output:

Connection died, retrying a fresh connect

Persistent connections should provide tangible benefits especially if many small files are downloaded.
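
A very rough sketch of what a persistent-connection loop could look like on the server side; this is heavily simplified, and a real implementation would parse the remaining headers and send a correct Content-Length with each response:

use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

// Keep reading requests from the same connection instead of closing it after
// the first response (HTTP keep-alive).
fn handle_client(stream: TcpStream) -> std::io::Result<()> {
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut writer = stream;
    loop {
        let mut request_line = String::new();
        if reader.read_line(&mut request_line)? == 0 {
            return Ok(()); // client closed the connection
        }
        // ... parse the remaining headers and serve the requested file here ...
        // A correct Content-Length is required so the client knows where one
        // response ends and the next one begins.
        writer.write_all(b"HTTP/1.1 200 OK\r\nServer: flexo\r\nContent-Length: 0\r\nConnection: keep-alive\r\n\r\n")?;
    }
}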

Create test case for files with size > 2GiB

from man 2 sendfile:

sendfile() will transfer at most 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes actually transferred. (This is true on both 32-bit and 64-bit systems.)

We should introduce a test case with a file significantly larger than that to make sure that flexo handles this case.
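
A sketch of the transfer loop such a test would exercise: since one call transfers at most 0x7ffff000 bytes, the call has to be repeated until the whole file has been sent (again assuming sendfile64 from the libc crate on a glibc target):

use std::os::unix::io::RawFd;

const SENDFILE_MAX_CHUNK: u64 = 0x7fff_f000; // per-call limit from man 2 sendfile

fn send_whole_file(out_fd: RawFd, in_fd: RawFd, total_size: u64) -> std::io::Result<()> {
    let mut offset: libc::off64_t = 0;
    while (offset as u64) < total_size {
        let remaining = (total_size - offset as u64).min(SENDFILE_MAX_CHUNK) as usize;
        // sendfile64 advances `offset` by the number of bytes transferred.
        let sent = unsafe { libc::sendfile64(out_fd, in_fd, &mut offset, remaining) };
        if sent < 0 {
            return Err(std::io::Error::last_os_error());
        }
        if sent == 0 {
            break; // EOF reached earlier than expected
        }
    }
    Ok(())
}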

Don't abort slow downloads if no mirrors are left

We're currently aborting downloads if the download speed falls below the configured threshold. Before we do this, we need to ensure that there is at least one mirror left: Otherwise, we could eventually run out of mirrors. Downloading a file slowly is probably better than crashing or returning a 500 response.

Download database files instead of redirecting

Currently, we choose a very simple approach to ensure that database files are not cached: Instead of having Flexo download those files from the remote mirror, Flexo simply returns a 301 redirect response, so that the client downloads those files from the remote mirror. This approach has two disadvantages:

  1. The client cannot use a single, persistent TCP connection for all downloads: It first connects to Flexo, and then connects to the remote mirror to fetch the database file.
  2. Flexo cannot provide any sort of robustness if the remote mirror turns out to be too slow or unresponsive: For example, when the remote mirror just stalls without serving any data, the client that runs pacman will receive an error message similar to the following:
    error: failed retrieving file 'extra.db' from 127.0.0.1:7878 : Operation timed out after 10001 milliseconds with 0 out of 0 bytes received
    
    If Flexo were to fetch the files from the remote mirror, it could detect that the mirror is unresponsive and immediately switch to an alternative mirror.

Therefore, we need a solution where the database files are downloaded from the remote mirror, but not cached: With every new GET request, the database files must be fetched from the remote mirror.
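
A sketch of the resulting request handling; the .db/.files suffix check is only an illustration of how database requests could be told apart from package requests:

// Hypothetical suffix check to tell database requests apart from packages.
fn is_database_file(path: &str) -> bool {
    path.ends_with(".db") || path.ends_with(".db.sig") || path.ends_with(".files")
}

fn handle_request(path: &str) {
    if is_database_file(path) {
        // Fetch from the remote mirror on every request and stream the payload
        // directly to the client, without writing it to the package cache.
    } else {
        // Serve from the cache, or download and cache the file on a miss.
    }
}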

Verbose mode?

I've been running Flexo with Docker for a while, and had to restart it today. It's not starting anymore:

[2020-10-27T04:47:21Z INFO  flexo] Will switch mirror if download speed falls below 125.00 KiB/s
[2020-10-27T04:47:25Z ERROR flexo] Unable to find remote mirrors that match the selected criteria.

I've got those two lines, and the container stops.

I tried specifying a few things through the configuration file, which didn't make any difference. Is there any way to enable a more verbose output?

Does not work on NFS shares

The program panicked because extended attributes are not supported in NFS.

flexo         | thread 'main' panicked at 'Unable to get extended file attributes: Os { code: 95, kind: Other, message: "Operation not supported" }', src/mirror_flexo.rs:278:74
flexo         | stack backtrace:
flexo         |    0:     0x5595f50a75a0 - std::backtrace_rs::backtrace::libunwind::trace::h72c2fb8038f1bbee
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/../../backtrace/src/backtrace/libunwind.rs:96
flexo         |    1:     0x5595f50a75a0 - std::backtrace_rs::backtrace::trace_unsynchronized::h1e3b084883f1e78c
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/../../backtrace/src/backtrace/mod.rs:66
flexo         |    2:     0x5595f50a75a0 - std::sys_common::backtrace::_print_fmt::h3bf6a7ebf7f0394a
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/sys_common/backtrace.rs:79
flexo         |    3:     0x5595f50a75a0 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h2e8cb764b7fe02e7
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/sys_common/backtrace.rs:58
flexo         |    4:     0x5595f50c88fc - core::fmt::write::h7a1184eaee6a8644
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/fmt/mod.rs:1080
flexo         |    5:     0x5595f50a4ac2 - std::io::Write::write_fmt::haeeb374d93a67eac
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/io/mod.rs:1516
flexo         |    6:     0x5595f50a9bcd - std::sys_common::backtrace::_print::h1d14a7f6ad632dc8
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/sys_common/backtrace.rs:61
flexo         |    7:     0x5595f50a9bcd - std::sys_common::backtrace::print::h301abac8bb2e3e81
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/sys_common/backtrace.rs:48
flexo         |    8:     0x5595f50a9bcd - std::panicking::default_hook::{{closure}}::hde0cb80358a6920a
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:208
flexo         |    9:     0x5595f50a9878 - std::panicking::default_hook::h9b1a691049a0ec8f
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:227
flexo         |   10:     0x5595f4fcd0fe - flexo::main::{{closure}}::hf1ec2d7f2094680e
flexo         |   11:     0x5595f50aa2d6 - std::panicking::rust_panic_with_hook::h2bdec87b60580584
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:581
flexo         |   12:     0x5595f50a9e59 - std::panicking::begin_panic_handler::{{closure}}::h101ca09d9df5db47
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:484
flexo         |   13:     0x5595f50a7a0c - std::sys_common::backtrace::__rust_end_short_backtrace::h3bb85654c20113ca
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/sys_common/backtrace.rs:153
flexo         |   14:     0x5595f50a9e19 - rust_begin_unwind
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:483
flexo         |   15:     0x5595f50c71d1 - core::panicking::panic_fmt::h48c31e1e3d550146
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/panicking.rs:85
flexo         |   16:     0x5595f50c6ff3 - core::option::expect_none_failed::h6154dc750ae47ade
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/option.rs:1234
flexo         |   17:     0x5595f4fbdc11 - <flexo::mirror_flexo::DownloadJob as flexo::Job>::initialize_cache::h7f5198327861392b
flexo         |   18:     0x5595f4fd50c5 - flexo::JobContext<J>::new::hf13dfcde9c39ef77
flexo         |   19:     0x5595f4fcba4e - flexo::main::h3d7a33cdeb0ccf9f
flexo         |   20:     0x5595f4fb0b33 - std::sys_common::backtrace::__rust_begin_short_backtrace::h682322d6f8de7a0b
flexo         |   21:     0x5595f4fb0c19 - std::rt::lang_start::{{closure}}::h30e8d153ece79ab7
flexo         |   22:     0x5595f50aa7c7 - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::he179d32a5d10d957
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/ops/function.rs:259
flexo         |   23:     0x5595f50aa7c7 - std::panicking::try::do_call::hcb3d5e7be089b2b4
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:381
flexo         |   24:     0x5595f50aa7c7 - std::panicking::try::h7ac93b0cd56fb701
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:345
flexo         |   25:     0x5595f50aa7c7 - std::panic::catch_unwind::h7b40e396c93a4fcd
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panic.rs:382
flexo         |   26:     0x5595f50aa7c7 - std::rt::lang_start_internal::h142b9cc66267fea1
flexo         |                                at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/rt.rs:51
flexo         |   27:     0x5595f4fd0c22 - main
flexo         |   28:     0x7ff3d957509b - __libc_start_main
flexo         |   29:     0x5595f4f7210a - _start
flexo         |   30:                0x0 - <unknown>

Server unresponsive after client aborted some downloads

After aborting a pacman -Syu, subsequent pacman -Syu runs were unsuccessful:

error: failed retrieving file 'core.db' from localhost:7878 : Operation too slow. Less than 1 bytes/sec transferred the last 10 seconds

Server log:

Apr 13 13:52:52 archWS flexo[70162]: Retrieved 125 files with a total size of 654.47 MiB from local file system.
Apr 13 13:53:37 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:37 archWS flexo[70162]: Received header
Apr 13 13:53:37 archWS flexo[70162]: GetRequest { resume_from: None, path: "core/os/x86_64/core.db" }
Apr 13 13:53:37 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:37 archWS flexo[70162]: Serve file via redirect.
Apr 13 13:53:37 archWS flexo[70162]: header: "HTTP/1.1 301 Moved Permanently\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:37 GMT\r\nContent-Length: 0\r\nLocation: https://mirror.f4st.host/archlinux/core/os/x86_64/core.db\r\n\r\n"
Apr 13 13:53:37 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:37 archWS flexo[70162]: Received header
Apr 13 13:53:37 archWS flexo[70162]: GetRequest { resume_from: None, path: "extra/os/x86_64/extra.db" }
Apr 13 13:53:37 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:37 archWS flexo[70162]: Serve file via redirect.
Apr 13 13:53:37 archWS flexo[70162]: header: "HTTP/1.1 301 Moved Permanently\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:37 GMT\r\nContent-Length: 0\r\nLocation: https://mirror.f4st.host/archlinux/extra/os/x86_64/extra.db\r\n\r\n"
Apr 13 13:53:37 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:37 archWS flexo[70162]: Received header
Apr 13 13:53:37 archWS flexo[70162]: GetRequest { resume_from: None, path: "community/os/x86_64/community.db" }
Apr 13 13:53:37 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:37 archWS flexo[70162]: Serve file via redirect.
Apr 13 13:53:37 archWS flexo[70162]: header: "HTTP/1.1 301 Moved Permanently\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:37 GMT\r\nContent-Length: 0\r\nLocation: https://mirror.f4st.host/archlinux/community/os/x86_64/community.db\r\n\r\n"
Apr 13 13:53:37 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:37 archWS flexo[70162]: Received header
Apr 13 13:53:37 archWS flexo[70162]: GetRequest { resume_from: None, path: "multilib/os/x86_64/multilib.db" }
Apr 13 13:53:37 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:37 archWS flexo[70162]: Serve file via redirect.
Apr 13 13:53:37 archWS flexo[70162]: header: "HTTP/1.1 301 Moved Permanently\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:37 GMT\r\nContent-Length: 0\r\nLocation: https://mirror.f4st.host/archlinux/multilib/os/x86_64/multilib.db\r\n\r\n"
Apr 13 13:53:38 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:38 archWS flexo[70162]: Received header
Apr 13 13:53:38 archWS flexo[70162]: GetRequest { resume_from: Some(17514478), path: "extra/os/x86_64/thunderbird-68.7.0-1-x86_64.pkg.tar.zst" }
Apr 13 13:53:38 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:38 archWS flexo[70162]: Serve file via redirect.
Apr 13 13:53:38 archWS flexo[70162]: header: "HTTP/1.1 301 Moved Permanently\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:38 GMT\r\nContent-Length: 0\r\nLocation: https://mirror.f4st.host/archlinux/extra/os/x86_64/thunderbird-68.7.0-1-x86_64.pkg.tar.zst\r\n\r\n"
Apr 13 13:53:39 archWS flexo[70162]: Established connection with client.
Apr 13 13:53:39 archWS flexo[70162]: Received header
Apr 13 13:53:39 archWS flexo[70162]: GetRequest { resume_from: None, path: "community/os/x86_64/haskell-hackage-security-0.6.0.1-1-x86_64.pkg.tar.zst" }
Apr 13 13:53:39 archWS flexo[70162]: Attempt to schedule new job
Apr 13 13:53:39 archWS flexo[70162]: Job was scheduled, will serve from growing file
Apr 13 13:53:39 archWS flexo[70162]: selected provider: DownloadProvider { uri: https://mirror.f4st.host/archlinux/, mirror_results: MirrorResults { namelookup_duration: 977.999µs, connect_duration: 22.414ms }, country: "Germany" }
Apr 13 13:53:39 archWS flexo[70162]: need to create new channel: DownloadProvider { uri: https://mirror.f4st.host/archlinux/, mirror_results: MirrorResults { namelookup_duration: 977.999µs, connect_duration: 22.414ms }, country: "Germany" }
Apr 13 13:53:39 archWS flexo[70162]: Attempt to create file: "/var/cache/flexo/pkg/community/os/x86_64/haskell-hackage-security-0.6.0.1-1-x86_64.pkg.tar.zst"
Apr 13 13:53:39 archWS flexo[70162]: Fetch package from remote mirror: https://mirror.f4st.host/archlinux/community/os/x86_64/haskell-hackage-security-0.6.0.1-1-x86_64.pkg.tar.zst
Apr 13 13:53:39 archWS flexo[70162]: No speed limit was set.
Apr 13 13:53:39 archWS flexo[70162]: Received complete header from server
Apr 13 13:53:39 archWS flexo[70162]: code is 200
Apr 13 13:53:39 archWS flexo[70162]: Sending content length: 537365
Apr 13 13:53:39 archWS flexo[70162]: Received content length via channel: 537365
Apr 13 13:53:39 archWS flexo[70162]: header: "HTTP/1.1 200 OK\r\nServer: flexo\r\nDate: Mon, 13 Apr 2020 11:53:39 GMT\r\nContent-Length: 537365\r\n\r\n"
Apr 13 13:53:39 archWS flexo[70162]: Received header from provider, status OK

Improve docker image

The current docker image is based on the official ArchLinux image, which is not particularly lightweight. The Dockerfile should therefore be changed so that it results in a leaner image.

Port 7878 closed(???) [Fixed]

Hello nroi,

First, thank you for flexo, which I'm attempting to use for the first time. I appreciate the name and hope you'll adopt a suitable cartoon icon as the project logo, assuming no copyright issues....

I am a long time Archer, but I feel like a total n00b. I installed flexo-git from AUR. The only flexo.toml edit I made was to set low_speed_limit. 'port = 7878' is unchanged in flexo.toml. I systemctl started and enabled flexo, and it's running --

$ systemctl status flexo | cat
● flexo.service - central pacman cache
Loaded: loaded (/usr/lib/systemd/system/flexo.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-12-05 11:22:52 EST; 4s ago
Main PID: 66408 (flexo)
Tasks: 1 (limit: 35952)
Memory: 2.4M
CGroup: /system.slice/flexo.service
└─66408 /usr/bin/flexo

Dec 05 11:22:52 vagente595 systemd[1]: Started central pacman cache.
Dec 05 11:22:52 vagente595 flexo[66408]: [2020-12-05T16:22:52.049Z INFO flexo] Will switch mirror if download speed falls below 1.20 MiB/s

But port 7878 is closed :( --

$ nmap -p 7878 localhost
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-05 11:24 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00010s latency).
Other addresses for localhost (not scanned): ::1

PORT STATE SERVICE
7878/tcp closed owms

Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

iptables, ip6tables and nftables services aren't enabled or running. I don't have any iptables or nftables packet filtering in place --

$ sudo iptables -vS
-P INPUT ACCEPT -c 16745 3135815
-P FORWARD ACCEPT -c 0 0
-P OUTPUT ACCEPT -c 18248 1334084
$ sudo ip6tables -vS
-P INPUT ACCEPT -c 30266 49863949
-P FORWARD ACCEPT -c 0 0
-P OUTPUT ACCEPT -c 27922 2447847
$ sudo nft list ruleset
$
$ sudo nft list tables
$

To add insult to injury, I installed pacoloco-git and systemctl started that after stopping flexo, and that port is open --

$ nmap -p 9129 localhost
Starting Nmap 7.91 ( https://nmap.org ) at 2020-12-05 11:36 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00013s latency).
Other addresses for localhost (not scanned): ::1

PORT STATE SERVICE
9129/tcp open unknown

Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

... so I don't appear to have any networking misconfiguration. This is such a beginner problem! Why am I an idiot? Thank you for any help you can provide :)

Clemmitt Sigler

Support for custom repo (e.g. archzfs)?

Hello,

Thanks for the project! I have multiple Arch installs on my LAN and a single, very slow, internet connection, so I'm looking into options for package caching. A project in rust always catches my eye.

Is there support for custom repos like archzfs?

In contrast to core / extra / community, adding the flexo url to my archzfs section of pacman.conf seems not to work, resulting in:

error: failed retrieving file 'archzfs.db' from localhost:7878 : The requested URL returned error: 404
[archzfs]
Server = http://localhost:7878/$repo/os/$arch
Server = http://archzfs.com/$repo/$arch
...

I tried adding http://archzfs.com/ to mirrors_predefined in flexo.toml with no change in behavior.

I see a few other issues mentioning zfs but nothing on the archzfs repo or adding custom repos.

Is using flexo to cache packages from custom repos possible?

Panicking threads cause PoisonErrors

We want flexo to be robust in the sense that if a single thread serving a single request fails, subsequent requests can still be handled successfully. This is currently not the case: If one thread runs into a panic!, subsequent requests will also fail:

flexo-server_1  | Attempt to schedule new job
flexo-server_1  | thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: "PoisonError { inner: .. }"', src/main.rs:89:30
flexo-server_1  | stack backtrace:
flexo-server_1  |    0:           0x4a0864 - backtrace::backtrace::libunwind::trace::h73f66b70bb6ab0fb
flexo-server_1  |                                at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
flexo-server_1  |    1:           0x4a0864 - backtrace::backtrace::trace_unsynchronized::h982f9756f52d4730
flexo-server_1  |                                at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
flexo-server_1  |    2:           0x4a0864 - std::sys_common::backtrace::_print_fmt::hcb8a98b6ac6fb3df
flexo-server_1  |                                at src/libstd/sys_common/backtrace.rs:77
flexo-server_1  |    3:           0x4a0864 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h8ca8145d5b4dec88
flexo-server_1  |                                at src/libstd/sys_common/backtrace.rs:59
flexo-server_1  |    4:           0x4d344c - core::fmt::write::h246c4508de237efa
flexo-server_1  |                                at src/libcore/fmt/mod.rs:1052
flexo-server_1  |    5:           0x49e427 - std::io::Write::write_fmt::h9ad42688c275e700
flexo-server_1  |                                at src/libstd/io/mod.rs:1426
flexo-server_1  |    6:           0x4a2a05 - std::sys_common::backtrace::_print::h22323acb5b25085c
flexo-server_1  |                                at src/libstd/sys_common/backtrace.rs:62
flexo-server_1  |    7:           0x4a2a05 - std::sys_common::backtrace::print::h03732590463ddb1a
flexo-server_1  |                                at src/libstd/sys_common/backtrace.rs:49
flexo-server_1  |    8:           0x4a2a05 - std::panicking::default_hook::{{closure}}::hb0df26df8d964988
flexo-server_1  |                                at src/libstd/panicking.rs:204
flexo-server_1  |    9:           0x4a26f1 - std::panicking::default_hook::ha8a56736eae7cb38
flexo-server_1  |                                at src/libstd/panicking.rs:224
flexo-server_1  |   10:           0x4a306a - std::panicking::rust_panic_with_hook::h70dad2471f7ea801
flexo-server_1  |                                at src/libstd/panicking.rs:472
flexo-server_1  |   11:           0x4a2c50 - rust_begin_unwind
flexo-server_1  |                                at src/libstd/panicking.rs:380
flexo-server_1  |   12:           0x4d1f61 - core::panicking::panic_fmt::h8c279dee2ee7cd29
flexo-server_1  |                                at src/libcore/panicking.rs:85
flexo-server_1  |   13:           0x4d1d83 - core::option::expect_none_failed::h24e1ba43092a11ab
flexo-server_1  |                                at src/libcore/option.rs:1199
flexo-server_1  |   14:           0x439953 - flexo::serve_file::he8ed5a81be4b2f79
flexo-server_1  |   15:           0x46b176 - std::sys_common::backtrace::__rust_begin_short_backtrace::h0d28dc34c7b23eff
flexo-server_1  |   16:           0x4459bb - std::panicking::try::do_call::h32abdd41e9ebd430
flexo-server_1  |   17:           0x4a6567 - __rust_maybe_catch_panic
flexo-server_1  |                                at src/libpanic_unwind/lib.rs:86
flexo-server_1  |   18:           0x445d27 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h1f5e788360a930e4
flexo-server_1  |   19:           0x4983df - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::h5c209cc2f51c734a
flexo-server_1  |                                at /rustc/b8cedc00407a4c56a3bda1ed605c6fc166655447/src/liballoc/boxed.rs:1015
flexo-server_1  |   20:           0x4a5cf0 - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::ha982e2070915d4a3
flexo-server_1  |                                at /rustc/b8cedc00407a4c56a3bda1ed605c6fc166655447/src/liballoc/boxed.rs:1015
flexo-server_1  |   21:           0x4a5cf0 - std::sys_common::thread::start_thread::hc8a38454df69b8b9
flexo-server_1  |                                at src/libstd/sys_common/thread.rs:13
flexo-server_1  |   22:           0x4a5cf0 - std::sys::unix::thread::Thread::new::thread_start::h0344e3a791d8fe52
flexo-server_1  |                                at src/libstd/sys/unix/thread.rs:80

where main:89 refers to:

let result = job_context.lock().unwrap().schedule(order.clone(), get_request.resume_from);
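
One way to keep a single panicking thread from poisoning the lock for everyone else (a sketch, not necessarily how flexo should resolve it) is to recover the guard from the PoisonError instead of unwrapping:

use std::sync::{Mutex, MutexGuard};

// A poisoned mutex only means that another thread panicked while holding the
// lock; the data itself may still be usable, so recover the guard instead of
// panicking in every subsequent request.
fn lock_ignoring_poison<T>(mutex: &Mutex<T>) -> MutexGuard<'_, T> {
    mutex.lock().unwrap_or_else(|poisoned| poisoned.into_inner())
}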

Build and test flexo on ARM devices

Rust supports ARM, so we should be able to support ARM devices like the Raspberry Pi. We probably need to modify the PKGBUILD on AUR so that it can be built on ArchLinux ARM.

Notice that the scope of this particular issue is only to be able to run flexo on ARM devices: The client who downloads packages still needs to have an x86_64 architecture.

[bug] flexo serves 0 length files

Hi, due to some unknown reason a package download can fail and flexo identifies the file as "0 length". The issue is that the file stays in the cache and flexo continues to serve it.

Expected result: 0 length files are automatically deleted and re-downloaded if necessary. Here is a log of such an occurrence:

Apr 13 11:59:27 arch-builder flexo[55449]: [2021-04-13T09:59:27.307Z INFO  flexo] Request served [CACHE HIT]: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst"
Apr 13 11:59:27 arch-builder flexo[55449]: [2021-04-13T09:59:27.307Z INFO  flexo::mirror_flexo] The file "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" used to lack the content-length attribute, this attribute has now been set to 0.
Apr 13 11:59:26 arch-builder flexo[55449]: [2021-04-13T09:59:26.384Z INFO  flexo] Error: JobTerminated { channel: DownloadChannel { handle: Easy { handle: 0x7f0d14002890, handler: DownloadState { job_state: JobState { order: DownloadOrder { filepath: StrPath { path_buf: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", inner: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" } }, job_resources: Some(DownloadJobResources { file_state: FileState { buf_writer: BufWriter { writer: File { fd: 12, path: "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", read: false, write: true }, buffer: 0/8192 }, size_written: 0 }, header_state: HeaderState { received_header: [], header_success: None }, last_chance: true }), tx: Sender { .. } }, properties: MirrorConfig { cache_directory: "/var/cache/flexo/pkg", mirrorlist_fallback_file: "/var/cache/flexo/state/mirrorlist", mirrorlist_latency_test_results_file: Some("/var/cache/flexo/state/latency_test_results.json"), refresh_latency_tests_after: None, port: 7878, listen_ip_address: None, mirror_selection_method: Auto, mirrors_predefined: [], custom_repo: Some([CustomRepo { name: "archzfs", url: "http://mirror.sum7.eu/archlinux/" }, CustomRepo { name: "aurto", url: "http://10.10.50.31:8080/aurto/" }]), low_speed_limit: None, low_speed_time_secs: Some(3), max_speed_limit: None, num_versions_retain: Some(3), mirrors_auto: Some(MirrorsAutoConfig { mirrors_status_json_endpoint: "https://archlinux.org/mirrors/status/json/", mirrors_blacklist: [], https_required: false, ipv4: true, ipv6: false, max_score: 1.0, num_mirrors: 3, mirrors_random_or_sort: Random, timeout: 300, allowed_countries: Some([]) }) } } } }, error: CurlError(Error { description: "Timeout was reached", code: 28, extra: Some("Resolving timed out after 3000 milliseconds") }) }, try again
Apr 13 11:59:26 arch-builder flexo[55449]: [2021-04-13T09:59:26.384Z WARN  flexo::mirror_flexo] Unable to download from "http://arch.jensgutermuth.de/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst": Timeout reached. Try another remote mirror.
Apr 13 11:59:19 arch-builder flexo[55449]: [2021-04-13T09:59:19.803Z ERROR flexo] Unable to serve request "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst": FileAttrError(TimeoutError)
Apr 13 11:59:19 arch-builder flexo[55449]: [2021-04-13T09:59:19.803Z INFO  flexo] Number of attempts exceeded: File "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" not found.
Apr 13 11:59:08 arch-builder flexo[55449]: [2021-04-13T09:59:08.632Z ERROR flexo] Unable to serve request "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst": FileAttrError(TimeoutError)
Apr 13 11:59:08 arch-builder flexo[55449]: [2021-04-13T09:59:08.632Z INFO  flexo] Number of attempts exceeded: File "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" not found.
Apr 13 11:58:54 arch-builder flexo[55449]: [2021-04-13T09:58:54.777Z INFO  flexo] Request served [NO PAYLOAD]: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst"
Apr 13 11:58:52 arch-builder flexo[55449]: [2021-04-13T09:58:52.182Z INFO  flexo] Error: JobTerminated { channel: DownloadChannel { handle: Easy { handle: 0x7f0d14002890, handler: DownloadState { job_state: JobState { order: DownloadOrder { filepath: StrPath { path_buf: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", inner: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" } }, job_resources: Some(DownloadJobResources { file_state: FileState { buf_writer: BufWriter { writer: File { fd: 12, path: "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", read: false, write: true }, buffer: 0/8192 }, size_written: 0 }, header_state: HeaderState { received_header: [], header_success: None }, last_chance: false }), tx: Sender { .. } }, properties: MirrorConfig { cache_directory: "/var/cache/flexo/pkg", mirrorlist_fallback_file: "/var/cache/flexo/state/mirrorlist", mirrorlist_latency_test_results_file: Some("/var/cache/flexo/state/latency_test_results.json"), refresh_latency_tests_after: None, port: 7878, listen_ip_address: None, mirror_selection_method: Auto, mirrors_predefined: [], custom_repo: Some([CustomRepo { name: "archzfs", url: "http://mirror.sum7.eu/archlinux/" }, CustomRepo { name: "aurto", url: "http://10.10.50.31:8080/aurto/" }]), low_speed_limit: None, low_speed_time_secs: Some(3), max_speed_limit: None, num_versions_retain: Some(3), mirrors_auto: Some(MirrorsAutoConfig { mirrors_status_json_endpoint: "https://archlinux.org/mirrors/status/json/", mirrors_blacklist: [], https_required: false, ipv4: true, ipv6: false, max_score: 1.0, num_mirrors: 3, mirrors_random_or_sort: Random, timeout: 300, allowed_countries: Some([]) }) } } } }, error: CurlError(Error { description: "Timeout was reached", code: 28, extra: Some("Failed to connect to mirror.f4st.host port 443: Connection timed out") }) }, try again
Apr 13 11:58:52 arch-builder flexo[55449]: [2021-04-13T09:58:52.182Z WARN  flexo::mirror_flexo] Unable to download from "https://mirror.f4st.host/archlinux/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst": Timeout reached. Try another remote mirror.
Apr 13 11:58:50 arch-builder flexo[55449]: [2021-04-13T09:58:50.479Z INFO  flexo] Error: JobTerminated { channel: DownloadChannel { handle: Easy { handle: 0x7f0d140048a0, handler: DownloadState { job_state: JobState { order: DownloadOrder { filepath: StrPath { path_buf: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", inner: "extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst" } }, job_resources: Some(DownloadJobResources { file_state: FileState { buf_writer: BufWriter { writer: File { fd: 12, path: "/var/cache/flexo/pkg/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst", read: false, write: true }, buffer: 0/8192 }, size_written: 0 }, header_state: HeaderState { received_header: [], header_success: None }, last_chance: false }), tx: Sender { .. } }, properties: MirrorConfig { cache_directory: "/var/cache/flexo/pkg", mirrorlist_fallback_file: "/var/cache/flexo/state/mirrorlist", mirrorlist_latency_test_results_file: Some("/var/cache/flexo/state/latency_test_results.json"), refresh_latency_tests_after: None, port: 7878, listen_ip_address: None, mirror_selection_method: Auto, mirrors_predefined: [], custom_repo: Some([CustomRepo { name: "archzfs", url: "http://mirror.sum7.eu/archlinux/" }, CustomRepo { name: "aurto", url: "http://10.10.50.31:8080/aurto/" }]), low_speed_limit: None, low_speed_time_secs: Some(3), max_speed_limit: None, num_versions_retain: Some(3), mirrors_auto: Some(MirrorsAutoConfig { mirrors_status_json_endpoint: "https://archlinux.org/mirrors/status/json/", mirrors_blacklist: [], https_required: false, ipv4: true, ipv6: false, max_score: 1.0, num_mirrors: 3, mirrors_random_or_sort: Random, timeout: 300, allowed_countries: Some([]) }) } } } }, error: CurlError(Error { description: "Timeout was reached", code: 28, extra: Some("Failed to connect to mirror.f4st.host port 80: Connection timed out") }) }, try again
Apr 13 11:58:50 arch-builder flexo[55449]: [2021-04-13T09:58:50.479Z WARN  flexo::mirror_flexo] Unable to download from "http://mirror.f4st.host/archlinux/extra/os/x86_64/qt5-tools-5.15.2+kde+r17-1-x86_64.pkg.tar.zst": Timeout reached. Try another remote mirror.

Any idea how to increase the download timeout as a workaround?

Move setting mirrors_blacklist to mirrors_auto

The mirrors_blacklist setting only makes sense if the mirror selection method is set to auto: The only alternative to auto is predefined, and no user would want to enter a mirror into the predefined list, only to then exclude the exact same mirror by also entering it into the blacklist.

This refactoring should also make it easier to clean up the filter_predicate function, which currently uses .unwrap().
