This repo contains an opinionated build system to run a pixelfed instance. Some of the main benefits include:
- Automatic SSL certificates
- A fully working run-locally setup so you can test new updates/changes before pushing to prod
- Automatic backups
- Mechanism for cloning data from prod -> dev to test changes
- Well-defined folders for secrets, to prevent accidental disclosure of keys
Best of all, the primary setup is a run-once scaffolding. That means you can customize the setup to your heart's content. Just take the generated configs and tweak until you have the setup you're comfortable with. Even though the setup is opinionated, it's easy to change, since there's no runtime component to the scripting (only docker images). For more details, see the How it Works section.
Running pixelfed actually takes a lot of work. Even though there are some scripts and built-in Dockerfiles, there's a lot to configure and manage on your own: SSL certificates, keeping the php, redis, and worker configs compatible, maintaining different settings for debugging vs. prod, and the list goes on. I found the experience to be very much NOT turnkey, and hence this repo exists to make the admin experience as turnkey as possible.
The way this works is you first get some prerequisites in place (S3 buckets, etc.). Then, when you run the scaffold program, it asks you a bunch of questions, saves your responses, and finally generates all of the config files needed to run your instance. You can then run pixelfed locally and make sure everything works well. Finally, you copy all of the files to your remote webhost and run the prod version.
- Install Docker, git, and python3 on your computer
- Create a free ngrok account, and have your ngrok authtoken available
- Ngrok is used only for local development, since we need a proper URL to view pixelfed
- There are paid versions of ngrok, but this setup works with the free version.
- Create an account with an S3-compatible storage provider. I strongly encourage you to not give money to Amazon. If you're not sure, maybe try Backblaze?
- Create two buckets (pixelfed and pixelfed-dev) that are configured to allow PUBLIC access to the files. (The names are just references; you can call them whatever makes sense to you and is available)
- Create a third bucket (pixelfed-backups), but make sure that bucket is private
- Create 3 API keys - one key that can read/write to pixelfed, another key that can read/write to pixelfed-dev, and a third key that can read/write to all three buckets (for your backups)
- Make note of the following pieces of information for your S3 API. If using Backblaze, see their docs for the particulars:
- The endpoint URL
- The region
- The AWS access key (i.e. the API key's user ID)
- The AWS secret access key (i.e. the password for the API key)
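For reference, these values usually end up looking something like the following (made-up, Backblaze-style examples; the variable names here are just labels for illustration, not the exact names the scaffold uses):

```sh
# Example values only: substitute the ones from your provider's dashboard
S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
S3_REGION=us-west-004
S3_ACCESS_KEY_ID=004abc123exampleid        # the API key's user ID
S3_SECRET_ACCESS_KEY=K004xyzexamplesecret  # the password for the API key
```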
- Download the repository
git clone git@github.com:intentionally-left-nil/pixelfed-docker.git
cd pixelfed-docker
- Create a virtual environment for python
python -m venv ./env
- Use the environment
source ./env/bin/activate
- Install the python packages needed for the scripts:
pip install -r requirements.txt
- In the python environment (see the Installation steps), execute
./scaffold.py
This will ask you a bunch of questions. Enter your answers on the command line and hit enter. You'll need the information from the Prerequisites to complete the config.
- Once you answer all of the necessary questions, the code will generate the scaffolding you need.
- Run the one-time setup tasks:
docker compose --profile setup run --rm initialize
- Run the dev server:
docker compose --profile dev up
- Wait a minute for everything to build and start
- Navigate to https://localhost:8000
- Note that the ngrok URL will change every time. But you can always go to this localhost address to find it
- Enter the admin username & password from earlier
- Do whatever you need to in order to test the changes locally
- Figure out how to copy the directory to your remote webserver. For example, you could run
rsync -av pixelfed-docker user@example.com:/home/user/pixelfed-docker
or similar.
- Open an SSH session to your webhost. Type the remaining commands in that session.
cd pixelfed-docker (wherever you copied it to)
docker compose --profile setup run initialize
docker compose --profile prod up
- Figure out how to autostart docker on your webhost. For example:
[Unit]
Description=Pixelfed docker compose
Requires=docker.service
After=docker.service
[Service]
WorkingDirectory=/home/your_name_here/pixelfed-docker
ExecStart=/usr/bin/docker compose --profile prod up
ExecStartPost=/usr/bin/docker system prune -f
ExecStop=/usr/bin/docker compose --profile prod down
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60
StartLimitBurst=3
[Install]
WantedBy=multi-user.target
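Assuming you save the unit above as /etc/systemd/system/pixelfed.service (the file name is up to you), enabling it looks like this:

```sh
sudo cp pixelfed.service /etc/systemd/system/pixelfed.service
sudo systemctl daemon-reload
sudo systemctl enable --now pixelfed.service

# Confirm the stack came up
systemctl status pixelfed.service
```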
The nice thing about this setup is you can test the changes locally to your heart's content before deploying!
- Update the docker-compose file for all of the pixelfed images to point to the new version
- Re-build the dev environment
docker compose --profile dev build
- Run the worker in docker
docker compose --profile dev run --rm -i -t worker /bin/sh
- Switch to the www-data user:
su www-data
- Do any upgrade steps you need to, such as upgrading the database:
php artisan migrate --force
- Exit the worker container
- Test the changes
docker compose --profile dev up
- Copy the files over to your prod webserver.
- Run the same steps on prod, except use `--profile prod`
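For reference, the prod equivalents of the upgrade steps above look roughly like this (run on the webhost, in the directory you copied over):

```sh
docker compose --profile prod build
docker compose --profile prod run --rm -i -t worker /bin/sh
# inside the container:
su www-data
php artisan migrate --force
exit   # leave the www-data shell
exit   # leave the worker container
docker compose --profile prod up
```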
The docker-compose.yml file contains a backup service. This service runs daily and backs up the database, as well as your S3 bucket. You don't need to do anything; it deletes old backups so you don't keep taking up space. The backups also run for the dev environment, so you can test the backup process (and any changes to it) locally before deploying to prod.
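If you want to sanity-check that backups are landing, you can list the private bucket with the aws CLI (substitute your own bucket name and endpoint from the Prerequisites):

```sh
aws s3 ls s3://pixelfed-backups/ --endpoint-url https://<your-s3-endpoint>
```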
Sometimes you might have an issue that only shows up with real data, and you need to investigate. You can use the backups system to give your local environment the same posts, etc. as prod:
docker compose --profile dev run --rm backup_dev /root/restore.sh --source-environment=prod --dest-environment=dev -n=1 --restore-db --restore-s3
- Delete all of your redis data:
docker system prune
docker volume rm <your redis volume name>
- Start the dev server again:
docker compose --profile dev up
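If you're not sure what the redis volume is called for the `docker volume rm` step above, you can list the volumes first:

```sh
docker volume ls | grep redis
```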
It's very similar to the clone step, except you might not need to delete redis (unclear)
- TAKE PROPER BACKUPS
docker compose --profile prod run --rm backup /root/restore.sh --source-environment=prod -n=1 --restore-db --restore-s3
- Restart your instance
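Restarting usually just means cycling the compose stack (or the systemd unit, if you created one during setup and named it pixelfed.service):

```sh
docker compose --profile prod down
docker compose --profile prod up

# or, if you set up the systemd unit:
sudo systemctl restart pixelfed.service
```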
Scaffold.py works in two stages. First, it loads the existing configuration from config/config.toml and secrets/config.toml. If any settings are missing, it prompts the user, then saves them to the appropriate file. (The reason there are two config files is that the latter contains things you don't want others to see: passwords, etc. You should never upload that folder to GitHub.) scaffold.py can also generate some of the config for you. For example, it runs pixelfed to generate the oauth keys and other app secrets.
Then, the scaffold.py script takes the files in the templates folder, replaces the variables with the ones from the config, and saves the new files to the appropriate locations. That's it for the scaffolding!
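For example, if you later want to change one of your non-secret answers, you can edit the saved config and re-run the scaffold; it only prompts for whatever is missing and then regenerates the files (a sketch, assuming you keep the default file locations):

```sh
# Non-secret answers live in config/config.toml; secrets (passwords, keys) in secrets/config.toml
$EDITOR config/config.toml

# Re-running the scaffold reuses the saved answers, prompts for anything missing,
# and regenerates the config files from the templates folder
./scaffold.py
```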
If you try to run pixelfed and visit it from http://localhost, it just won't work. Pixelfed needs a real domain to work properly. Ngrok is great because it gives us a proper domain name while everything is still powered by our local computer. There's only one catch: if you don't pay for ngrok, your domain name changes every time. As a workaround, the repo contains the app_dynamic_domain docker image, which updates the pixelfed env files to use the new domain automatically during startup. Lastly, we need to refresh the pixelfed config to pick up the new environment. There's a pending PR to handle this better, but in the meantime this is why the docker-compose file specifies the run command for the worker.
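Conceptually, what happens at startup is something like the sketch below. The `ngrok` hostname, the env file path, and the exact refresh command are illustrative assumptions here, not necessarily what the app_dynamic_domain image literally runs:

```sh
# Ask ngrok's local API which public URL the free-tier tunnel got this time
PUBLIC_URL=$(curl -s http://ngrok:4040/api/tunnels | jq -r '.tunnels[0].public_url')

# Point pixelfed at that domain by rewriting its env file
sed -i "s|^APP_URL=.*|APP_URL=${PUBLIC_URL}|" .env

# Laravel caches its config, so rebuild the cache to pick up the new domain
php artisan config:cache
```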
It's just a cron job that runs `aws s3 cp <BUCKET> <BACKUP_BUCKET>` and `pg_dump | aws s3 cp` to copy both the files and the database. Then there's a little bit of scripting to check how many backups already exist and delete the old ones. Nothing too fancy.
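A stripped-down sketch of what the backup job amounts to (the bucket names, database host, user, and database name here are placeholders; the real script also handles credentials and rotates old backups):

```sh
# Copy today's media files from the live bucket into the private backup bucket
aws s3 cp s3://pixelfed s3://pixelfed-backups/$(date +%F)/media/ --recursive

# Dump the database and stream it straight into the backup bucket
pg_dump -h db -U pixelfed pixelfed | aws s3 cp - s3://pixelfed-backups/$(date +%F)/db.sql
```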
The primary code when responding to the website is in the `app` container. This contains the main php response. Some things take longer to process, and that's what the `worker` is for. It uses jobs on redis to know when there's work to complete. All of the PHP containers use the same UID for `www-data`, so there's no confusion about which user the file ownership and permissions refer to.