zhaofengli / attic
Multi-tenant Nix Binary Cache
Home Page: https://docs.attic.rs
License: Other
I haven't seen any way to upload build logs, either via nix store copy-log or via attic push.
Is that something you plan to add?
Currently it seems like all chunks end up in one big directory:
impl LocalBackend {
    pub async fn new(config: LocalStorageConfig) -> ServerResult<Self> {
        fs::create_dir_all(&config.path)
            .await
            .map_err(ServerError::storage_error)?;

        Ok(Self { config })
    }

    fn get_path(&self, p: &str) -> PathBuf {
        self.config.path.join(p)
    }
}
This seems bad from a performance standpoint. Many file operations in the kernel require locking the parent directory, which makes them slow under high concurrency. Things like listing a directory also become quite slow this way. What are your thoughts on taking the first few bytes of the chunk's file name to compute a subdirectory instead?
Before it was forked, borg was called attic, so I reckon many people are going to be confused if they see the name used for a different tool.
It would be nice to set a target max size for a cache instead of just doing GC based on age.
I did a benchmark of how well Attic deduplicates chromium derivations, and how it compares to deduplicating backup tools such as bup and bupstash: NixOS/nixpkgs#89380 (comment)
One thing I noticed is that in ~/.local/share/attic/storage, all CDC chunks are in a single folder.
This will not work well (be slow) on many Linux file systems, and not work at all on others.
For example:
ext4: "no space left on device" with high likelihood even after 50k files in a single dir (see https://blog.merovius.de/posts/2013-10-20-ext4-mysterious-no-space-left-on/), unless the large_dir support flag is enabled.
Listing: readdir() is the only system call that can be used for it, and it requires reading a dir top to bottom.
My benchmark of a couple Chromium derivations already created 100k files.
So it might already break ext4.
This is why most deduplicating backup tools like bup, kopia, and soon bupstash, use prefix directories, e.g.
928/
928fe29d-f7c6-4bdf-98ae-6185c3efd604.chunk
I recommend that Attic does the same.
How long your prefix dir should be depends on how many files you expect to store (which in turn depends on how large your chunks are and how large the deduplicated content is), see more in the next posts.
I am importing the package and extracting the attic-client attribute, but all Attic components are built.
This is the code associated with the import and use of attic-client: fluidattacks/makes@main...drestrepom:makes:main#diff-ecdfd02b8e24dc35996c02b7530266eb6fe5e03239e760490f2839d556c55f06R9
What would be the best way to import only the client?
I'd like to try running it on our Kubernetes cluster. It would be nice to have an official Docker image pushed to Docker Hub.
I was able to create a couple of tokens before, but now if I try make-token it just terminates.
atticd-atticadm make-token --sub foobar --validity '1y' --push test
Running as unit: run-u31.service
Press ^] three times within 1s to disconnect TTY.
Finished with result: exit-code
Main processes terminated with: code=exited/status=200
Service runtime: 12ms
CPU time consumed: 4ms
IP traffic received: 0B
IP traffic sent: 0B
IO bytes read: 0B
IO bytes written: 0B
When I run systemctl show run-u31.service, I see this (notably, LoadError=org.freedesktop.systemd1.NoSuchUnit "Unit run-u31.service not found."):
ExitType=main
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
TimeoutAbortUSec=1min 30s
TimeoutStartFailureMode=terminate
TimeoutStopFailureMode=terminate
RuntimeMaxUSec=infinity
RuntimeRandomizedExtraUSec=0
WatchdogUSec=infinity
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ReloadResult=success
CleanResult=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestampMonotonic=0
ExecMainExitTimestampMonotonic=0
ExecMainPID=0
ExecMainCode=0
ExecMainStatus=0
ControlGroupId=0
MemoryCurrent=[not set]
MemoryAvailable=infinity
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=[no data]
IPIngressPackets=[no data]
IPEgressBytes=[no data]
IPEgressPackets=[no data]
IOReadBytes=18446744073709551615
IOReadOperations=18446744073709551615
IOWriteBytes=18446744073709551615
IOWriteOperations=18446744073709551615
Delegate=no
CPUAccounting=yes
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
IOAccounting=yes
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=yes
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
DefaultMemoryLow=0
DefaultMemoryMin=0
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=9522
IPAccounting=yes
ManagedOOMSwap=auto
ManagedOOMMemoryPressure=auto
ManagedOOMMemoryPressureLimit=0
ManagedOOMPreference=none
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1073741816
LimitNOFILESoft=1073741816
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=31741
LimitNPROCSoft=31741
LimitMEMLOCK=1042718720
LimitMEMLOCKSoft=1042718720
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=31741
LimitSIGPENDINGSoft=31741
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
CoredumpFilter=0x33
Nice=0
IOSchedulingClass=2
IOSchedulingPriority=4
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinityFromNUMA=no
NUMAPolicy=n/a
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=inherit
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore
DynamicUser=no
RemoveIPC=no
PrivateTmp=no
PrivateDevices=no
ProtectClock=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectKernelLogs=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
PrivateIPC=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=2147483646
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
TimeoutCleanUSec=infinity
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
ProtectProc=default
ProcSubset=all
ProtectHostname=no
KillMode=control-group
KillSignal=15
RestartKillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=run-u31.service
Names=run-u31.service
Description=run-u31.service
LoadState=not-found
ActiveState=inactive
FreezerState=running
SubState=dead
StateChangeTimestampMonotonic=0
InactiveExitTimestampMonotonic=0
ActiveEnterTimestampMonotonic=0
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=no
CanStop=yes
CanReload=no
CanIsolate=no
CanFreeze=yes
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnSuccessJobMode=fail
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=no
AssertResult=no
ConditionTimestampMonotonic=0
AssertTimestampMonotonic=0
LoadError=org.freedesktop.systemd1.NoSuchUnit "Unit run-u31.service not found."
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
CollectMode=inactive
I don't know what could be wrong with it, but I tried cleaning up /var/lib/atticd and ~/.config/attic, which allowed me to create one token; it fails again when I try to create another. Old tokens still seem to work, presumably because I used the same HS256 secret.
What can I do to fix the token-making process? Also, please let me know if any more info would be helpful.
I'm having a hard time finding docs on how to set up Attic with MinIO. The tutorial tells me to use PostgreSQL + S3 for a less temporary setup, but I can't figure out how to do it.
Problem Statement
A Nix-backed SDLC currently can't avoid the (let's call it) "media break" of, at some point, separately "publishing" binary artifacts to some form of what is commonly known as a registry: container registries, package registries, etc.
Some people (like me) already take offense at "media breaks" themselves: they are a bad thing from first principles.
But a more substantial argument is that once we break the medium, we lose a lot of features by which we could otherwise make our registry more useful.
So this problem statement is about a missed opportunity, not about an itch (or maybe missed opportunities itch you, too? Especially when Nix could rise and shine).
Some of the ways your average registry could be augmented: nixpkgs could become the most comprehensive package registry in the world.
Potential Solution
Design attic with a plugin system in mind that makes it possible (and maybe even easy?!) to amend its state and frontend with language-specific registry functionality and routes.
Listings and nicely rendered package pages could make sense to become a practical alternative to existing registries for end users.
And what about NAR?
I'll quote @Ericson2314:
Don't worry at all, it's an implementation detail.
But in the short term some streaming transcoder or similar thing + some extra metadata in state might just get us going.
Would be great to have documentation for the HTTP API so that services can communicate without needing to shell out to the CLI.
I've seen #7 (comment), not sure if there's anything else.
One use case would be giving nixbuild.net the ability to push to attic as well as cachix.
Hi,
I am trying to use the docker.io/NixOS/nix:latest Docker image and install Attic in there to serve as a binary cache and run in K8s.
However, I didn't find any instructions on how to install with just the Nix package manager.
I tried nix profile install https://github.com/zhaofengli/attic but the installation fails:
error: builder for '/nix/store/9yxll8rg9hm10pvm4a82vpnsdwnh26vw-aws-sigv4-0.54.1.drv' failed with exit code 139
error: 1 dependencies of derivation '/nix/store/rf6x1ywjf314c7w15ccypr55yv0gllsh-cargo-package-aws-sigv4-0.54.1.drv' failed to build
or some other runs I see the following
bash-5.1# nix profile install github:zhaofengli/attic
trace: warning: crane cannot find version attribute in /nix/store/k9wifwriy8nd8a1j7jcczdz9f0f2kw8n-source/Cargo.toml, consider setting `version = "...";` explicitly
error: builder for '/nix/store/9jvbzhzd8192v5xpljyq8j3lh505g596-cargo-package-winapi-i686-pc-windows-gnu-0.4.0.drv' failed with exit code 2;
last 2 log lines:
> /nix/store/61mzw223sdk26lxawnx64b72xgjrhaj8-gnutar-1.34/bin/tar: Child died with signal 11
> /nix/store/61mzw223sdk26lxawnx64b72xgjrhaj8-gnutar-1.34/bin/tar: Error is not recoverable: exiting now
For full logs, run 'nix log /nix/store/9jvbzhzd8192v5xpljyq8j3lh505g596-cargo-package-winapi-i686-pc-windows-gnu-0.4.0.drv'.
Not sure why it needs the Windows API... hence I am thinking my command is incorrect.
or
error: builder for '/nix/store/v69r3icrvh26vyzwhiydlkgh1px2sbdr-cargo-package-http-range-header-0.3.0.drv' failed due to signal 11 (Segmentation fault)
error: 1 dependencies of derivation '/nix/store/k9zyg176bwys2awyynwq8iv18h2qm4qg-vendor-registry.drv' failed to build
building '/nix/store/871rakk4bkq2pwhrma3sk83ajn2qwpnx-source.drv'...
error: 1 dependencies of derivation '/nix/store/gx6b2fnilxidx62m6wrkdl3g4j2v8rk5-vendor-cargo-deps.drv' failed to build
error: 1 dependencies of derivation '/nix/store/1gxagm26pi29sn5xf1rd4cw1nivfd7m6-attic-0.1.0.drv' failed to build
Appreciate any help.
Thanks
mohnish.
I set up my binary caches as part of my NixOS config, so that the substituters are available on all of my machines that share the same config.
attic use does print out the information I would need; however, it will then also attempt to modify the Nix config, which is unwanted.
What would be helpful is a flag (such as --print or something) that skips modifying the Nix config and just prints the substituter/trusted public key pair.
(A separate flag to explicitly generate the netrc changes might also be useful.)
Hi! Thank you for working on this. I was giving Attic a try but on some uploads (notably, this one is several hundred MBs) it's failing to push.
I'm getting this error:
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 2023-02-09T17:05:16.952504Z ERROR attic_server::error: Database error: Failed to acquire connection from pool
I'm using the default db, which I assume is a local sqlite... so it seems like a weird error to see. It's pretty reproducible.
My config (mostly the defaults from the docs, really):
{
services.atticd = {
enable = true;
credentialsFile = "/etc/nixos/attic-creds.env";
settings = {
listen = "[::]:8080";
# Data chunking
#
# Warning: If you change any of the values here, it will be
# difficult to reuse existing chunks for newly-uploaded NARs
# since the cutpoints will be different. As a result, the
# deduplication ratio will suffer for a while after the change.
chunking = {
# The minimum NAR size to trigger chunking
#
# If 0, chunking is disabled entirely for newly-uploaded NARs.
# If 1, all NARs are chunked.
nar-size-threshold = 64 * 1024; # 64 KiB
# The preferred minimum size of a chunk, in bytes
min-size = 16 * 1024; # 16 KiB
# The preferred average size of a chunk, in bytes
avg-size = 64 * 1024; # 64 KiB
# The preferred maximum size of a chunk, in bytes
max-size = 256 * 1024; # 256 KiB
};
};
  };
}
Client error:
✅ zlaczlhsx8s4d54ksgpszxwnpqqqf7hn-corretto17-17.0.5.8.1 (1.51 MiB/s, 99.7% deduplicated)
❌ v9mvqn3sa39pjdcszi19c8hpp7wlizca-source: InternalServerError: The server encountered an internal error or misconfiguration.
Error: InternalServerError: The server encountered an internal error or misconfiguration.
Error: Process completed with exit code 123.
Full logs of the run:
Feb 09 16:47:59 localhost atticd[787]: Attic Server 0.1.0 (release)
Feb 09 16:47:59 localhost atticd[787]: Running migrations...
Feb 09 16:47:59 localhost atticd[787]: Starting API server...
Feb 09 16:47:59 localhost atticd[787]: Listening on [::]:8080...
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 2023-02-09T17:05:16.952504Z ERROR attic_server::error: Database error: Failed to acquire connection from pool
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 0: tokio::task::runtime.spawn
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: with kind=task task.name= task.id=45603 loc.file="server/src/api/v1/upload_path.rs" loc.line=406 loc.col=13
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tokio-1.23.0/src/util/trace.rs:16
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 1: attic_server::api::v1::upload_path::upload_path
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: at server/src/api/v1/upload_path.rs:115
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 2: tower_http::trace::make_span::request
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: with method=PUT uri=/_api/v1/upload-path version=HTTP/1.1
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tower-http-0.3.5/src/trace/make_span.rs:116
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 3: tokio::task::runtime.spawn
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: with kind=task task.name= task.id=8720 loc.file="/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/hyper-0.14.23/src/common/exec.rs" loc.line=49 loc.col=21
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tokio-1.23.0/src/util/trace.rs:16
Feb 09 17:05:16 ip-10-193-26-190 atticd[787]: 2023-02-09T17:05:16.953005Z ERROR tower_http::trace::on_failure: response failed classification=Status code: 500 Internal Server Error latency=257570 ms
Hi! It'd be awesome to have a GH Action that one could easily add to use the client.
It would take care of installing the client into the environment + login + use, but also push new paths in its "after" section, much like Cachix does.
I'm trying to upload to two separate servers.
I use attic login server1 and then attic push cache_on_server_1, which works just fine.
When I then do attic login server2 and attic push cache_on_server_2, I get NoSuchCache: The requested cache does not exist.
So, for some reason, I'm not logged in to the second server.
If I use attic login --set-default server2, the push works fine.
Is this supposed to work like this? Or is there a way to set the upstream server somewhere else?
Currently the tokens you get from atticadm are impossible to revoke. It would be nice for security purposes to add the ability to revoke tokens.
Some solutions that I can think of:
I am getting the error
The program must call nix::initNix() before calling any libstore library functions.
(core dumped) attic push remote $WORKSPACE/$val
(where $WORKSPACE/$val is a symlink to a store path, which has worked just fine).
I am on revision c77b5fb.
If there is anything I can do to help you debug it, let me know.
The output of attic cache info:
/nix/store/m7fmj73cz1zhyahwhw6s8qs8l1gqllji-attic-0.1.0/bin/attic cache info remote
Public: false
Public Key: remote:xxx
Binary Cache Endpoint: https://my-server.com/remote
API Endpoint: https://my-server.com
Store Directory: /nix/store
Priority: 41
Upstream Cache Keys: ["cache.nixos.org-1"]
Retention Period: Global Default
Scenario:
Goal:
Non Goals:
While the cache interface isn't public (so this is a hack), the interface is also not too complicated.
Works well with https://github.com/DevPalace/phoenix-ci (i.e. std-action) design tenets of a pre-evaluation phase and potentially shareable build results between parallel runs.
Reference: NixOS/nixpkgs#89380
As proposed there by @wamserma and also recently by @wmertens on Discourse.
Edit: further references for this idea in the next comment below.
Before casync or any generic dedup strategy, we can likely exploit the (presumed and as yet unmetered) property that a lot of rebuilds may not actually change many bytes, only the store path references to other dependencies, thus creating an almost identical new store entry.
Possible Solution
If zeroing out (/nix/store/0000000...) these store references creates identical artifacts, we have a potent, cheap initial deduplication exclusive to store-based systems.
The true references would need to be stored via some metadata schema and cheaply substituted on the fly when serving a given concrete store entry.
Quantification Necessary
The purpose of this issue is to anchor this strategy in the context of attic, since the roadmap states potential future use of deduplication strategies, specifically mentioning casync.
Unfortunately, I can't provide any estimates or quantification of the benefits other than the qualitative argument that this can be an interesting and cheap approach to explore before more advanced dedup strategies may be considered.
Still, I think it is worth mentioning in this context.
A Cachix cache could be nice until Attic is in nixpkgs.
Some of the CI tests fail not because of errors in the code but because they time out. See #21.
Skimming the find_and_lock_chunk code, it looks like the purpose of this field is essentially to mark a nar or chunk row that may soon be referenced by a new object (looking at the upload code) so that it doesn't get garbage collected in the meantime.
However, if the process is halted between incrementing holders_count and decrementing it when the guard is dropped, then holders_count remains permanently greater than zero, and the row is never eligible for garbage collection.
With 70ae61b I got this when using pkgs.callPackage attic { inherit rustPlatform; }:
EDIT: using package.nix.
error: anonymous function at /nix/store/5l147bk6zqi6kgbs3b9gxma1g4yx2gbk-nixos-22.11.1530.09b46f2c1d8/nixos/pkgs/build-support/rust/import-cargo-lock.nix:3:1 called with unexpected argument 'allowBuiltinFetchGit'
at /nix/store/5l147bk6zqi6kgbs3b9gxma1g4yx2gbk-nixos-22.11.1530.09b46f2c1d8/nixos/pkgs/build-support/rust/build-rust-package/default.nix:64:36:
63| if cargoVendorDir != null then null
64| else if cargoLock != null then importCargoLock cargoLock
| ^
65| else fetchCargoTarball ({
(use '--show-trace' to show detailed location information)
Originally posted by @bbigras in #12 (comment)
Hey @zhaofengli, Thanks for adopting SeaORM!
It's our pleasure to see more inspirational projects were built on top of SeaORM :)
Let us know if you have any feature recommendations or feedback. Your contribution is what drives us forward!
Some learning resources for you: Documentation, Tutorial, Cookbook, Q&A, Blog
Join our Discord server to chat with others in the SeaQL community!
Feel free to submit a PR to showcase your project, SeaQL/sea-orm#403.
Please note: This isn't really a request for a feature to be implemented, it is just to get ideas and a discussion going. I don't know if people consider this part of their threat model, but it's something I mulled over, and I'm interested in hearing the ideas of other people who have written binary cache software. Attic looks great in any case!
I quite like Attic at a glance. I have developed several Nix binary caches in the past, including Eris, and an unreleased "serverless" one running on WASM/JS function services, which I also planned to have many of the same features as Attic. But I'd like to mention something, since I mulled it over a bit.
My serverless solution also has server-side signing, since it is relatively easy to compute the signature for a .narinfo, and it makes many bugs like NixOS/nix#6960 irrelevant. It's nice. But I think it's important to note that server-side signing acts as a kind of oracle; anything uploaded to the binary cache is implicitly signed as if it were authored by you. This means that if anyone uploads anything invalid or garbage (or backdoored, e.g. via a compromised CI system), it can and will be presented as "authentic." There is also no secure provenance or identity attached to the original upload; it is not possible to prove after the fact where it came from.
For instance, if someone steals an authentication key from a (valid) person doing uploads, they can then upload anything they want with abandon, and it can never be tied back to them. For example, given the way I think the upload works from the description in #7, they can populate "correct" hashes with trojaned binaries: e.g. deadbeef-firefox-100 is a valid hash the user computes, but they upload a trojaned binary under this hash. Now, if someone else tries to upload deadbeef-firefox-100, the deadbeef.narinfo file gets located, and therefore the nar itself is silently discarded. The cache then remains infected for all time until it is purged.
This is sort of a different take on the original problems signatures solved; current non-content-addressed store derivations take their hash from their inputs, not their output; keys are used to authenticate that the output binary is authentically coming from a trusted source, because a malicious source could compute the same input hash, but give you a trojaned binary under it. What you want is a kind of non-repudiation so that when you get an upload, it cannot be denied where it came from. CA derivations partially address this because they're self authenticating, so when you look up a hash, it can be verified that it is legitimate immediately. But you still don't know who gave it to you.
One way around this is to sign uploads in a lock-step way. First, an agent requests to do an upload and establishes some identity that can be validated, e.g. "I am a GitHub CI runner on commit hash 0xDEADBEEF running at 12pm UTC, with the given $GITHUB_TOKEN", and you check that the $GITHUB_TOKEN is legitimate on the server via OAuth. You then issue a new short-term ed25519 signing key in return, which can be used to sign uploads for a short time frame, say, 15 minutes. The agent then uploads all its derivations under this signing key, within this time frame, and this is validated by the server. The key is then marked as "permissible, but not usable for any further signatures" after the 15 minutes. Then, when a narinfo is requested, it is identifiable through the signature where it came from and at what time. This signature can then be replaced by a new signature "on the fly", and this replacement signature is what is shown to the user, just like it works today. This design is similar to the way SLSA Level 3 guidelines operate, as they require cryptographically sure proof of provenance. This approach isolates the user-facing key from the key used to authenticate the builder itself, rather than relying purely on simple bearer token schemes (which I suspect is what is used now, though I admit I haven't read the code thoroughly yet...).
Another very good thing to note is that this allows real revocation; assuming it is ever discovered that a build is compromised for some reason, it's now possible to track this down to individual keys assigned during the build step, and revoke those keys behind the scenes e.g. when a narinfo is requested that is associated with a revoked key, simply 404 instead.
In the world of CA derivations, this provenance is still useful for those reasons, though end-user signatures are no longer needed since the hashes allow self-authentication.
Anyway, I'd be interested to know your thoughts on something like this. It is complex to work out the details, but I think it would be a significant step up over the current state of the art in cache security and would help operate closer to modern standards like SLSA. With a globally deduplicated cache it's also important, since any user can easily "poison the well" for all other users, so having some auditability for cases like this is nice, which is something I realized while thinking about multi-tenancy and deduplication myself.
I think Attic could benefit from having a client module. I just started and still haven't used it much, but something like:
{
services.attic = {
enable = true;
defaultStore = "main";
stores = [
{
name = "main";
endpoint = "https://attic.server.com";
tokenFile = /etc/file;
}
];
};
}
I tried to update to current master on nixpkgs today for my systems, but encountered the following error:
unpacking sources
unpacking source archive /nix/store/qdggd1m9zk6kq29fc3kvf6prb8xabdfr-source
source root is source
patching sources
Executing configureCargoCommonVars
decompressing cargo artifacts from /nix/store/1mck5pp7l9sgivbjqirj6m1c3444gk0r-attic-deps-0.0.1/target.tar.zst to target
configuring
will append /build/source/.cargo-home/config.toml with contents of /nix/store/d2513s4lymp7x92cc76wk4119xpk0qyi-vendor-cargo-deps/config.toml
default configurePhase, nothing to do
building
++ command cargo --version
cargo 1.69.0
++ command cargo build --profile release --message-format json-render-diagnostics -p attic-client -p attic-server
Compiling bindgen v0.65.1
Compiling attic v0.1.0 (/build/source/attic)
The following warnings were emitted during compilation:
warning: In file included from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate/attic/src/nix_store/bindings/nix.hpp:19,
warning: from src/nix_store/bindings/nix.cpp:11:
warning: /nix/store/sw5561i42ijhzx8jrbyg33cpnx3rcv2k-nix-2.15.1-dev/include/nix/uds-remote-store.hh:23:20: fatal error: uds-remote-store.md: No such file or directory
warning: 23 | #include "uds-remote-store.md"
warning: | ^~~~~~~~~~~~~~~~~~~~~
warning: compilation terminated.
warning: In file included from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate/attic/src/nix_store/bindings/nix.hpp:19,
warning: from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/sources/attic/src/nix_store/bindings/mod.rs.cc:1:
warning: /nix/store/sw5561i42ijhzx8jrbyg33cpnx3rcv2k-nix-2.15.1-dev/include/nix/uds-remote-store.hh:23:20: fatal error: uds-remote-store.md: No such file or directory
warning: 23 | #include "uds-remote-store.md"
warning: | ^~~~~~~~~~~~~~~~~~~~~
warning: compilation terminated.
error: failed to run custom build command for `attic v0.1.0 (/build/source/attic)`
Caused by:
process didn't exit successfully: `/build/source/target/release/build/attic-d4ae52706e01b1b3/build-script-build` (exit status: 1)
--- stdout
cargo:CXXBRIDGE_PREFIX=attic
cargo:CXXBRIDGE_DIR0=/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/include
cargo:CXXBRIDGE_DIR1=/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate
TARGET = Some("x86_64-unknown-linux-gnu")
OPT_LEVEL = Some("3")
HOST = Some("x86_64-unknown-linux-gnu")
cargo:rerun-if-env-changed=CXX_x86_64-unknown-linux-gnu
CXX_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CXX_x86_64_unknown_linux_gnu
CXX_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CXX
HOST_CXX = None
cargo:rerun-if-env-changed=CXX
CXX = Some("g++")
cargo:rerun-if-env-changed=CXXFLAGS_x86_64-unknown-linux-gnu
CXXFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CXXFLAGS_x86_64_unknown_linux_gnu
CXXFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CXXFLAGS
HOST_CXXFLAGS = None
cargo:rerun-if-env-changed=CXXFLAGS
CXXFLAGS = None
cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
CRATE_CC_NO_DEFAULTS = None
DEBUG = Some("false")
CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
running: "g++" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/include" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate" "-Wall" "-Wextra" "-std=c++17" "-O2" "-include" "nix/config.h" "-o" "/build/source/target/release/build/attic-510b23bac4c80a60/out/f7f357d16ca57eb4-mod.rs.o" "-c" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/sources/attic/src/nix_store/bindings/mod.rs.cc"
running: "g++" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/include" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate" "-Wall" "-Wextra" "-std=c++17" "-O2" "-include" "nix/config.h" "-o" "/build/source/target/release/build/attic-510b23bac4c80a60/out/src/nix_store/bindings/nix.o" "-c" "src/nix_store/bindings/nix.cpp"
cargo:warning=In file included from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate/attic/src/nix_store/bindings/nix.hpp:19,
cargo:warning= from src/nix_store/bindings/nix.cpp:11:
cargo:warning=/nix/store/sw5561i42ijhzx8jrbyg33cpnx3rcv2k-nix-2.15.1-dev/include/nix/uds-remote-store.hh:23:20: fatal error: uds-remote-store.md: No such file or directory
cargo:warning= 23 | #include "uds-remote-store.md"
cargo:warning= | ^~~~~~~~~~~~~~~~~~~~~
cargo:warning=compilation terminated.
cargo:warning=In file included from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate/attic/src/nix_store/bindings/nix.hpp:19,
cargo:warning= from /build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/sources/attic/src/nix_store/bindings/mod.rs.cc:1:
cargo:warning=/nix/store/sw5561i42ijhzx8jrbyg33cpnx3rcv2k-nix-2.15.1-dev/include/nix/uds-remote-store.hh:23:20: fatal error: uds-remote-store.md: No such file or directory
cargo:warning= 23 | #include "uds-remote-store.md"
cargo:warning= | ^~~~~~~~~~~~~~~~~~~~~
cargo:warning=compilation terminated.
exit status: 1
exit status: 1
--- stderr
CXX include path:
/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/include
/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate
error occurred: Command "g++" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/include" "-I" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/crate" "-Wall" "-Wextra" "-std=c++17" "-O2" "-include" "nix/config.h" "-o" "/build/source/target/release/build/attic-510b23bac4c80a60/out/f7f357d16ca57eb4-mod.rs.o" "-c" "/build/source/target/release/build/attic-510b23bac4c80a60/out/cxxbridge/sources/attic/src/nix_store/bindings/mod.rs.cc" with args "g++" did not execute successfully (status code exit status: 1).
Maybe this has something to do with the recent nix update?
If I can do something more to help debug this issue, please let me know.
Hi,
I have been using attic for some months and it's quite a nice piece of software so far, but I observed the following situation.
Observed behavior:
When running attic watch-store CACHENAME manually, or when using the linked service, it continuously consumes one CPU core.
Expected behavior:
When no nix store action is happening, attic should not burn one of the CPU cores.
Background:
My config for that can be viewed here.
The config.toml went to the user's home dir via magic impure commands.
It looks like the following:
default-server = "nixos"
[servers.nixos]
endpoint = "https://<server domain>"
token = "<some token>"
I hope this ticket is enough of a report; if you need additional information, please let me know. 🙂
Problem Statement
Users of a private cache may cover their entire SDLC on a nix basis. This means that on every commit & successful build a new store entry will be populated.
Some of these transactions (otherwise indistinguishable), however, are called 'releases', and they are meant for long-term distribution.
A garbage collection based solely on LRU has no way to spare such assets and therefore can't offer safety guarantees for release semantics.
Why Nix for distribution at all?
Intuitively, one might interject that this is what your average registry (package, container, you-name-it) is meant for.
While yes, Nix could potentially offer better distribution guarantees, and by laying the foundations to "disintermediate" a typical registry with this feature, we open up an avenue of potential ecosystem innovations outside the scope of this issue.
It's about safe garbage collection
For the scope of this issue, it's solely about safe GC. Any tagged closure would be durably exempted from GC. That means durably superior UX promises on releases to end users.
Potential Implementation
A naive first-iteration implementation could "simply" add a release tag (potentially implemented as a GC root) as metadata to a particular store path.
Depending on how useful this turns out to be already, future distribution-like use cases may be interested in additional metadata, but this is something for another issue / user story.
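A minimal sketch of the proposed tag-aware GC filter (the `StorePathMeta` type and the `"release"` tag name are hypothetical illustrations, not part of Attic's actual schema):

```rust
// Sketch: exempt tagged store paths from LRU-based garbage collection.
// `StorePathMeta` and the "release" tag are hypothetical stand-ins.

#[derive(Debug, Clone)]
struct StorePathMeta {
    store_path: String,
    last_accessed_epoch: u64,
    tags: Vec<String>,
}

/// Paths eligible for deletion: past the retention cutoff AND not
/// carrying a "release" tag (which acts like a durable GC root).
fn gc_candidates(paths: &[StorePathMeta], cutoff_epoch: u64) -> Vec<String> {
    paths
        .iter()
        .filter(|p| p.last_accessed_epoch < cutoff_epoch)
        .filter(|p| !p.tags.iter().any(|t| t.as_str() == "release"))
        .map(|p| p.store_path.clone())
        .collect()
}

fn main() {
    let paths = vec![
        StorePathMeta { store_path: "/nix/store/aaa-old".into(), last_accessed_epoch: 100, tags: vec![] },
        StorePathMeta { store_path: "/nix/store/bbb-release".into(), last_accessed_epoch: 100, tags: vec!["release".into()] },
        StorePathMeta { store_path: "/nix/store/ccc-new".into(), last_accessed_epoch: 900, tags: vec![] },
    ];
    // Only the old, untagged path is eligible for collection.
    println!("{:?}", gc_candidates(&paths, 500));
}
```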
I think it would be quite helpful if the attic client supported asynchronous uploading when used via the nix post-build-hook.
Maybe the client could write the paths of the binaries to a file, and another instance of the client (running via systemd) could watch said file and upload all binaries that appear.
Is this something you’d accept a PR for?
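The file-based handoff described above could be sketched roughly like this (the queue location and both helpers are hypothetical; this is not Attic's implementation):

```rust
// Sketch of the proposed split: the post-build-hook only appends store
// paths to a queue file and returns fast, while a separate daemon
// drains the file and performs the actual uploads.
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, Write};
use std::path::Path;

/// Post-build-hook side: append the path and return immediately.
fn enqueue(queue: &Path, store_path: &str) -> io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(queue)?;
    writeln!(f, "{store_path}")
}

/// Daemon side: read everything queued so far, then truncate the file.
fn drain(queue: &Path) -> io::Result<Vec<String>> {
    let paths = match File::open(queue) {
        Ok(f) => BufReader::new(f).lines().collect::<Result<Vec<_>, _>>()?,
        Err(_) => Vec::new(), // no queue file yet: nothing to upload
    };
    File::create(queue)?; // truncate after reading
    Ok(paths)
}

fn main() -> io::Result<()> {
    let queue = std::env::temp_dir().join("attic-upload-queue");
    enqueue(&queue, "/nix/store/abc-hello")?;
    for p in drain(&queue)? {
        println!("would upload: {p}"); // real daemon: invoke `attic push`
    }
    Ok(())
}
```

A real implementation would need file locking (or inotify-style watching) to close the race between the hook appending and the daemon truncating.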
attic push <CACHE> .#nixosConfigurations.demo.config.system.build.toplevel
like nix copy
I'm using:
{
imports = [
"${(import ./nix/sources.nix).attic}/nixos/atticd.nix"
];
services.atticd = {
enable = true;
credentialsFile = config.sops.secrets.attic.path;
settings = {
database = {
url = "postgresql://[email protected]/attic";
};
storage = {
type = "local";
path = "/mnt/attic/storage";
};
# useFlakeCompatOverlay = false;
};
};
}
[bbigras2@nixos:/etc/nixos]$ sudo nixos-rebuild build
building Nix...
building the system configuration...
error: unsupported argument 'submodules' to 'fetchGit', at /nix/store/9i8226p09g7cvih4lf24rysyx1qkxka8-source/lib/downloadCargoPackageFromGit.nix:18:5
(use '--show-trace' to show detailed location information)
nix-info (I'm still on nixos-21.11 but going to update soon):
"x86_64-linux"
Linux 5.10.126, NixOS, 21.11 (Porcupine)
yes
yes
nix-env (Nix) 2.3.16
"nixos-21.11.337975.eabc3821918"
"home-manager-20.09"
/nix/var/nix/profiles/per-user/root/channels/nixos
attic watch-store seems to be leaking or holding onto memory over time.
I've been running many large builds on a set of self-hosted GitHub Actions Runners for the past two days. The GHA runners run in their own nixos-containers; attic watch-store runs on the host system (NixOS). The GHA runners share a nix daemon with attic watch-store. Attic pushes out nix store paths just fine after they have been built by the runners. The whole setup runs on a system with 8GiB memory, which should be sufficient for build coordination (the actual builds happen on different machines). But as the number of paths that attic had to check/upload grows, so does its memory use.
Maybe attic is holding onto some buffers while uploading, which would explain the RSS size. Maybe attic tracks which paths were previously uploaded in an inefficient manner, but I doubt that tracking a couple of thousand checked/uploaded nix paths would take up ~8GiB.
Anyways, really like this project! Thank you for putting in the work.
Thank you for developing Attic
Working inside the nix develop shell, the nix binary is:
$ which nix
/nix/store/k6ciawnknm1yxxiwm4pih008rijyn9wh-nix-2.15.1/bin/nix
Would it be better to remove nix from the develop shell packages?
Attic config:
default-server = "horus"
[servers.horus]
endpoint = "ENDPOINT"
token = "TOKEN"
The atticd database is almost empty. The only thing there, in the cache table:
1|main|main:KEY|0|/nix/store|41|["cache.nixos.org-1"]|2023-06-15T17:33:07.205948201+00:00||
Atticd and attic are on the same machine
Running RUST_BACKTRACE=full RUST_LOG=debug attic watch-store main does not give more info.
Both cachix and attic implement their own push-related endpoints that are, for good reason, incompatible with Nix's own HTTP binary cache store. This is not so great for interoperability.
What does the attic API look like? Is it compatible with the cachix API?
Would you be interested in working towards a native HttpBinaryCacheStore subclass in Nix?
Some benefits of a store implementation:
nix copy
nix path-info --store
Heya,
I tried to set up attic but I'm getting this error:
Jun 20 01:18:21 myBox atticd[95277]: Attic Server 0.1.0 (release)
Jun 20 01:18:21 myBox atticd[95277]: Running migrations...
Jun 20 01:18:21 myBox atticd[95277]: Starting API server...
Jun 20 01:18:21 myBox atticd[95277]: Listening on [::]:8080...
Jun 20 01:25:38 myBox atticd[95277]: 2023-06-20T01:25:38.352005Z ERROR attic_server::error: Storage error: File exists (os error 17)
Jun 20 01:25:38 myBox atticd[95277]: 0: tokio::task::runtime.spawn
Jun 20 01:25:38 myBox atticd[95277]: with kind=task task.name= task.id=24 loc.file="server/src/api/v1/upload_path.rs" loc.line=408 loc.col=13
Jun 20 01:25:38 myBox atticd[95277]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tokio-1.28.2/src/util/trace.rs:16
Jun 20 01:25:38 myBox atticd[95277]: 1: attic_server::api::v1::upload_path::upload_path
Jun 20 01:25:38 myBox atticd[95277]: at server/src/api/v1/upload_path.rs:117
Jun 20 01:25:38 myBox atticd[95277]: 2: tower_http::trace::make_span::request
Jun 20 01:25:38 myBox atticd[95277]: with method=PUT uri=/_api/v1/upload-path version=HTTP/1.1
Jun 20 01:25:38 myBox atticd[95277]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tower-http-0.4.0/src/trace/make_span.rs:116
Jun 20 01:25:38 myBox atticd[95277]: 3: tokio::task::runtime.spawn
Jun 20 01:25:38 myBox atticd[95277]: with kind=task task.name= task.id=20 loc.file="/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/hyper-0.14.26/src/common/exec.rs" loc.line=49 loc.col=21
Jun 20 01:25:38 myBox atticd[95277]: at /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-vendor-cargo-deps/c19b7c6f923b580ac259164a89f2577984ad5ab09ee9d583b888f934adbbe8d0/tokio-1.28.2/src/util/trace.rs:16
Jun 20 01:25:38 myBox atticd[95277]: 2023-06-20T01:25:38.355235Z ERROR tower_http::trace::on_failure: response failed classification=Status code: 500 Internal Server Error latency=62 ms
I've set up an sshfs mount to a hetzner storage box, so I'm sure it's a misconfiguration, but the error is very hard to interpret for me.
I think knowing which file it wants to create would be helpful?
Here's my config:
allowed-hosts = ["..."]
api-endpoint = "..."
listen = "[::]:8080"
[chunking]
avg-size = 65536
max-size = 262144
min-size = 16384
nar-size-threshold = 65536
[database]
url = "sqlite:///var/lib/atticd/server.db?mode=rwc"
[garbage-collection]
default-retention-period = "6 months"
[storage]
path = "/run/attic"
type = "local"
/run/attic is empty except for .ssh and .hsh_history, and is owned by user/group atticd.
As far as I can tell, users who are on R2 could potentially access their storage through a CloudFlare CDN-enabled URL and benefit from the CDN for free.
Would it be possible to disable chunking, sign on upload instead of download, and then use the blob storage directly, rather than having to go through the attic server?
I noticed this in my own attic instance across restarts and gcs.
Skimming the upload code, it looks like a nar row is inserted and a cleanup action is registered to delete the newly inserted row if an error occurs before the nar is transitioned to valid, but that doesn't help if the process halts between insertion and transitioning to valid.
These rows will never be garbage collected since only valid nars are selected for deletion.
Chunks are handled in the same way for upload and garbage collection.
This problem can easily be replicated by sending atticd a SIGINT or SIGTERM during an upload, so the lowest-hanging fruit would be to install proper signal handlers. Note that the systemd service is configured to receive a SIGTERM when a stop is requested.
Putting aside proper signal handling, I think nars and chunks in the pending upload state should be examined and handled when we know that no upload could be occurring.
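That startup sweep could be sketched as follows (the `NarState` enum and in-memory rows are stand-ins for Attic's actual database schema; the same idea applies to chunks):

```rust
// Sketch of a startup sweep for NAR rows stuck in a pending-upload
// state, run at a moment when we know no upload can be in flight.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NarState {
    PendingUpload, // inserted, but never transitioned to Valid
    Valid,
}

#[derive(Debug)]
struct NarRow {
    id: u64,
    state: NarState,
}

/// Remove rows that never became Valid; returns the reclaimed ids
/// (whose storage objects can then be cleaned up as well).
fn sweep_pending(rows: &mut Vec<NarRow>) -> Vec<u64> {
    let stuck: Vec<u64> = rows
        .iter()
        .filter(|r| r.state == NarState::PendingUpload)
        .map(|r| r.id)
        .collect();
    rows.retain(|r| r.state != NarState::PendingUpload);
    stuck
}

fn main() {
    let mut rows = vec![
        NarRow { id: 1, state: NarState::Valid },
        NarRow { id: 2, state: NarState::PendingUpload }, // interrupted upload
    ];
    println!("swept: {:?}", sweep_pending(&mut rows));
    println!("remaining rows: {}", rows.len());
}
```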
warning: error: unable to download 'https://cache.cyberchaos.dev/musl/nar/zgh590rfmf9gq1n02yvhja3zmvhxfryd.nar': HTTP error 307 (curl error: Couldn't connect to server); retrying in 334 ms
[yuka@m1:~]$ curl https://cache.cyberchaos.dev/musl/nar/l532wss41p018cx18k80zij2z6nxrdv7.nar -v
[...]
< HTTP/2 307
< location: http://127.0.0.1:3900/attic/23d1d0a2-2b0e-46ab-8345-b1985b44161b.chunk?x-id=GetObject&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=GK448edf5de88c655c737e0e78%2F20230428%2Fgarage%2Fs3%2Faws4_request&X-Amz-Date=20230428T012509Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=94f291a7a3c78000f68248abbd1968a2d3d01105143e991162f221cee6607f40
< x-attic-cache-visibility: public
< content-length: 0
< date: Fri, 28 Apr 2023 01:25:09 GMT
<
* Connection #0 to host cache.cyberchaos.dev left intact
I am very excited about attic, and am trying to play with it inside docker. However, when I run attic watch-store or attic push, I get the error:
Error: Unknown C++ exception: error: cannot figure out user name.
This is clearly an error coming from the nix-store bindings, but maybe attic could support a fallback, or a flag to set the username.
installing 'attic-static-x86_64-unknown-linux-musl-0.1.0'
copying path '/nix/store/qi9cixkq0pj60yw1y5l28hid7f53310i-attic-static-x86_64-unknown-linux-musl-0.1.0' from 'https://staging.attic.rs/attic-ci'...
warning: error: unable to download 'https://staging.attic.rs/attic-ci/nar/qi9cixkq0pj60yw1y5l28hid7f53310i.nar': HTTP error 200 (curl error: Stream error in the HTTP/2 framing layer); retrying in 338 ms
warning: error: unable to download 'https://staging.attic.rs/attic-ci/nar/qi9cixkq0pj60yw1y5l28hid7f53310i.nar': HTTP error 200 (curl error: Stream error in the HTTP/2 framing layer); retrying in 614 ms
warning: error: unable to download 'https://staging.attic.rs/attic-ci/nar/qi9cixkq0pj60yw1y5l28hid7f53310i.nar': HTTP error 200 (curl error: Stream error in the HTTP/2 framing layer); retrying in 1200 ms
warning: error: unable to download 'https://staging.attic.rs/attic-ci/nar/qi9cixkq0pj60yw1y5l28hid7f53310i.nar': HTTP error 200 (curl error: Stream error in the HTTP/2 framing layer); retrying in 2684 ms
error: unable to download 'https://staging.attic.rs/attic-ci/nar/qi9cixkq0pj60yw1y5l28hid7f53310i.nar': HTTP error 200 (curl error: Stream error in the HTTP/2 framing layer)
error:
… while calling the 'storePath' builtin
at /tmp/tmp.0TlyJ0LJeD:19:23:
18| outPath = maybeStorePath (builtins.getAttr outputName outputs);
19| drvPath = maybeStorePath (builtins.getAttr outputName outputs);
| ^
20| };
error: path '/nix/store/qi9cixkq0pj60yw1y5l28hid7f53310i-attic-static-x86_64-unknown-linux-musl-0.1.0' does not exist and cannot be created
I guess it's created when we use attic login, and it seems to be rw-r--r--. I guess it should be 600.
Same thing for ~/.config/nix/netrc.
$ attic login dsad asdas sadas
Error: newline in string found at line 5 column 6
This happens when executing any command with attic. Maybe I damaged some file but I don't know what it could be.
It would be convenient to configure attic-client via environment variables (as AWS tooling does) in GitHub Actions.
Similar to: #64
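The override pattern could look roughly like this (the `ATTIC_ENDPOINT` / `ATTIC_TOKEN` variable names are hypothetical suggestions, not existing attic-client options):

```rust
// Sketch: let environment variables override values from config.toml,
// mirroring the AWS_* convention. Variable names are hypothetical.
use std::env;

#[derive(Debug, PartialEq)]
struct ServerConfig {
    endpoint: String,
    token: String,
}

/// Values from the config file act as defaults; ATTIC_ENDPOINT and
/// ATTIC_TOKEN win when set, which is convenient for CI secrets.
fn resolve(file_endpoint: &str, file_token: &str) -> ServerConfig {
    ServerConfig {
        endpoint: env::var("ATTIC_ENDPOINT").unwrap_or_else(|_| file_endpoint.to_string()),
        token: env::var("ATTIC_TOKEN").unwrap_or_else(|_| file_token.to_string()),
    }
}

fn main() {
    // With neither variable exported, the file values are used;
    // `export ATTIC_TOKEN=...` in a CI step would override the token.
    println!("{:?}", resolve("https://attic.example", "file-token"));
}
```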
It looks like chunks are transitioned to the "pending delete" state, then objects are deleted from remote storage, and finally the chunk rows are deleted.
If the process halts between transitioning to "pending delete" and deleting the chunk rows, then those chunks are never selected for deletion again, since only chunks that were in the valid state and transitioned to "pending delete" are returned as orphans.
In this case I think an easy fix would be to fetch all chunks marked for deletion after transitioning new orphans, so old orphans get deleted as well.
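That fix could be sketched as follows (the `ChunkState` enum and in-memory table are stand-ins for Attic's actual schema):

```rust
// Sketch of the proposed fix: after transitioning newly orphaned
// chunks to PendingDelete, return EVERY PendingDelete row, so rows
// left over from a previous crash are picked up as well.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ChunkState {
    Valid,
    PendingDelete,
}

struct Chunk {
    id: u64,
    state: ChunkState,
    refcount: u32, // number of NARs still referencing this chunk
}

fn collect_orphans(chunks: &mut [Chunk]) -> Vec<u64> {
    // Step 1: transition new orphans (valid but unreferenced).
    for c in chunks.iter_mut() {
        if c.state == ChunkState::Valid && c.refcount == 0 {
            c.state = ChunkState::PendingDelete;
        }
    }
    // Step 2: fetch ALL pending-delete rows, not only the ones
    // transitioned this run -- this includes crash leftovers.
    chunks
        .iter()
        .filter(|c| c.state == ChunkState::PendingDelete)
        .map(|c| c.id)
        .collect()
}

fn main() {
    let mut chunks = vec![
        Chunk { id: 1, state: ChunkState::Valid, refcount: 2 },
        Chunk { id: 2, state: ChunkState::Valid, refcount: 0 },
        Chunk { id: 3, state: ChunkState::PendingDelete, refcount: 0 }, // crash leftover
    ];
    println!("orphans: {:?}", collect_orphans(&mut chunks));
}
```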
watch-store is nice when running locally, but it would be nice to have something similar to cachix's watch-exec.
It's essentially the same, but it automatically stops watching when the sub-process finishes.
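A rough model of those watch-exec semantics (the watcher thread here is a placeholder loop, not Attic's real store watcher):

```rust
// Sketch: start a watcher, run the sub-process, and stop watching
// as soon as the sub-process exits. The watcher body is a stub.
use std::process::Command;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn watch_exec(program: &str, args: &[&str]) -> std::io::Result<i32> {
    let stop = Arc::new(AtomicBool::new(false));
    let stop2 = Arc::clone(&stop);
    let watcher = thread::spawn(move || {
        while !stop2.load(Ordering::Relaxed) {
            // real implementation: watch the store / push new paths
            thread::sleep(Duration::from_millis(10));
        }
    });
    let status = Command::new(program).args(args).status()?;
    stop.store(true, Ordering::Relaxed); // sub-process done: stop watching
    watcher.join().unwrap();
    Ok(status.code().unwrap_or(-1))
}

fn main() -> std::io::Result<()> {
    println!("exit code: {}", watch_exec("true", &[])?);
    Ok(())
}
```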
I added around 10 Chromium store paths from NixOS/nixpkgs#89380 (comment) using attic push niklas-attic-test --ignore-upstream-cache-filter --no-closure.
Afterwards I ran atticd --mode garbage-collector-once and got:
Attic Server 0.1.0 (release)
Error: Execution Error: error returned from database: (code: 1) too many SQL variables
Caused by:
error returned from database: (code: 1) too many SQL variables
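The usual workaround for this SQLite error is to batch the bound parameters, sketched here (the `MAX_VARS` value reflects SQLite's default host-parameter limit of 999 in builds before 3.32; the query shape is illustrative):

```rust
// Sketch: split a large `IN (?, ?, ...)` query into batches small
// enough to stay under SQLite's host-parameter limit.
const MAX_VARS: usize = 999; // SQLITE_MAX_VARIABLE_NUMBER default (pre-3.32)

/// Split ids into slices that each fit in one statement.
fn batches(ids: &[u64]) -> Vec<&[u64]> {
    ids.chunks(MAX_VARS).collect()
}

/// Build the "?, ?, ..." placeholder list for one batch.
fn placeholders(n: usize) -> String {
    vec!["?"; n].join(", ")
}

fn main() {
    let ids: Vec<u64> = (0..2500).collect();
    for batch in batches(&ids) {
        let _sql = format!("DELETE FROM chunk WHERE id IN ({})", placeholders(batch.len()));
        // execute `_sql` with the ids in `batch` bound as parameters
    }
    println!("{} batches needed", batches(&ids).len());
}
```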