
testdrivenio / django-github-digitalocean


Continuously Deploying Django to DigitalOcean with Docker and GitHub Actions

License: MIT License

Dockerfile 11.58% Shell 7.38% Python 81.04%
django django-digital-ocean django-docker github-actions github-actions-docker


django-github-digitalocean's Issues

No such file or directory

My builds pass, but for some reason when the deploy job runs I get this error: scp: /app: No such file or directory
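
One thing I plan to check: scp needs the destination directory to already exist when copying multiple files, so creating /app on the droplet first may be the fix, either manually with mkdir /app or with an extra workflow step along these lines (only a sketch; the secret name is a guess at what the workflow uses):

- name: Create the deployment directory on the droplet
  run: ssh -o StrictHostKeyChecking=no root@${{ secrets.DIGITAL_OCEAN_IP_ADDRESS }} "mkdir -p /app"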

Use of set-env has been disabled in Github Actions

[Just a note: I'm working through the tutorial and adapting it for my own project, and have run into the following issue. As I work through the issue and make the necessary changes for my own project, I'll update this issue with suggested fix(es).]

The use of set-env in GitHub Actions has been disabled (see GitHub Actions: Deprecating set-env and add-path commands).

Running the workflow as currently written in /.github/workflows/main.yml throws:

Error: The `set-env` command is disabled. Please upgrade to 
using Environment Files  or opt into unsecure command execution 
by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` 
environment variable to `true`. 
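
The supported replacement is to append KEY=value lines to the file that $GITHUB_ENV points at instead of using the set-env workflow command. A sketch of what the affected step could become (the step name is my own; the image names follow the docker.pkg.github.com pattern visible elsewhere on this page):

- name: Set environment variables
  run: |
    # Deprecated form that triggers the error above:
    #   echo ::set-env name=WEB_IMAGE::docker.pkg.github.com/$GITHUB_REPOSITORY/web
    # Supported form: append KEY=value lines to the $GITHUB_ENV file
    echo "WEB_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/web" >> $GITHUB_ENV
    echo "NGINX_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/nginx" >> $GITHUB_ENV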

More explanation of the use of .prod and non .prod Dockerfile and docker-compose files

When this guide gets updated, would you please explain more about the reasons for using prod and non-prod files?

One thing I've not been as sure about is how closely these files should mirror each other as things grow.

For example, I believe two docker-compose files are used because the build step must be able to build the image from scratch if needed, whereas the deploy stage should always rely on an existing image.

Also, some environment variables shouldn't be packaged into the image; they are only needed at deployment time.

For the web service's Dockerfiles, the only difference is the use of different entrypoint scripts, where the deploy stage's entrypoint actually performs the Django migration steps.

It isn't always obvious to me whether and when I need to apply a change to both Dockerfiles or both compose files.
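
To make the question concrete, this is roughly the split as I understand it (service names and options here are my own sketch, not the guide's exact files):

# docker-compose.ci.yml -- build job: builds the image, reusing the pulled image as cache
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
      cache_from:
        - "${WEB_IMAGE}"
    image: "${WEB_IMAGE}"

# docker-compose.prod.yml -- deploy stage: never builds, only runs the pushed image
services:
  web:
    image: "${WEB_IMAGE}"
    env_file: .env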

Consider describing how to manage Docker image pruning on a system deployed from this guide

Suggestion: consider adding a note or some ideas on monitoring and handling the memory and disk space limitations you run up against with this setup over time.

I have been using some variation on this guide for a couple of projects.

For one of them, the production and staging servers are the smallest DO droplets available.

Both systems have been very stable: production has had 460 days of uptime and staging 418 days.

It could be my config, but from what I can tell, deploying frequently using this guide can cause a lot of dangling docker images.

docker system prune can handle that, leaving most of the remaining used space taken up by (mostly unneeded) logs.
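
One option would be to prune on a schedule; a crontab entry on the droplet along these lines (the schedule and log path are just a suggestion):

# Prune every Sunday at 03:00; -f skips the confirmation prompt
0 3 * * 0  /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1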

As for memory issues, I'm not sure much can be done if the orchestration includes all aspects of the web application, nginx, and the database. I was able to reboot the machine, but before going live I'll probably double the RAM and start monitoring with Datadog or similar.

Some errors I've found from disk / memory issues:

failed to register layer: Error processing tar file(exit status 1): write /usr/local/lib/python3.10/site-packages/PIL/__pycache__/ImImagePlugin.cpython-310.pyc: no space left on device

and

[4024141] INTERNAL ERROR: cannot create temporary directory!

The first error shows up during the build and deploy of the deploy stage and does not trigger a formal error in GitHub Actions, so I only spotted it, almost by accident, while looking into why my container was not getting updated after passing tests.

CI Does Not Build with Uppercase Chars in $GITHUB_REPOSITORY variable

For anyone whose GitHub username or repository name contains capital letters, the build job will fail. For example, my username is WayneLambert, so my build process failed with the following in the traceback:

Run docker-compose -f docker-compose.ci.yml build
  docker-compose -f docker-compose.ci.yml build
  shell: /bin/bash -e {0}
  env:
    WEB_IMAGE: docker.pkg.github.com/WayneLambert/django-github-digitalocean/web
    NGINX_IMAGE: docker.pkg.github.com/WayneLambert/django-github-digitalocean/nginx
Building web
invalid reference format: repository name must be lowercase
##[error]Process completed with exit code 1.

(Screenshot of the failed run attached: ci-failure)

By the look of things, the variable is the concatenation of the username and the repository name exactly as they are written, with the original casing preserved.
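
A possible workaround (not currently in the guide; the step name is mine) is to lowercase the slug before writing the image names:

- name: Set environment variables
  run: |
    # $GITHUB_REPOSITORY is "<owner>/<repo>" with its original casing,
    # so lowercase it before building the image references.
    REPO=$(echo "$GITHUB_REPOSITORY" | tr '[:upper:]' '[:lower:]')
    echo "WEB_IMAGE=docker.pkg.github.com/${REPO}/web" >> $GITHUB_ENV
    echo "NGINX_IMAGE=docker.pkg.github.com/${REPO}/nginx" >> $GITHUB_ENV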

Consider explicitly tagging images prior to uploading them to docker registry

It seems like best practice is to explicitly tag an image using :latest. This is currently missing from the workflow for web and nginx: https://github.com/testdrivenio/django-github-digitalocean/blob/master/.github/workflows/main.yml#L45

Tagging like this is done in other parts of the guide, so perhaps this is just an oversight.

I've seen a Stack Overflow post where someone suspects their images on the GitHub container registry are not getting updated because they are not explicitly tagged. I can't say whether that's the case or not, but I came across it while trying to diagnose #20.

Regardless, it would be great if the guide detailed automatically tagging images using the short version of the image's git commit hash.
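
Something along these lines could work as an additional step after the build (an untested sketch; it assumes WEB_IMAGE and NGINX_IMAGE are already set and bash is the step's shell):

- name: Tag and push images with the short commit hash
  run: |
    TAG=${GITHUB_SHA::7}            # first seven characters of the commit SHA
    docker tag "${WEB_IMAGE}" "${WEB_IMAGE}:${TAG}"
    docker tag "${NGINX_IMAGE}" "${NGINX_IMAGE}:${TAG}"
    docker push "${WEB_IMAGE}:${TAG}"
    docker push "${NGINX_IMAGE}:${TAG}"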

If helpful, I could craft a PR for this issue.

Further elaboration of the build and deploy job steps in the tutorial

Thank you for a fantastic tutorial.
I was wondering if you have had the chance to further elaborate on/update the tutorial for the following steps.

Here, we defined a build job that will be triggered on all pushes where we:

1. Set the global WEB_IMAGE and NGINX_IMAGE environment variables
2. Check out the repository so the job has access to it
3. Add environment variables to a .env file
4. Set the WEB_IMAGE and NGINX_IMAGE environment variables so they can be accessed within the Docker Compose file
5. Log in to GitHub Packages
6. Pull the images if they exist
7. Build the images
8. Push the images up to GitHub Packages
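
For context, here is the rough shape I've pieced together for that build job while adapting it; it's only a sketch, and the secret names (SECRET_KEY, PERSONAL_ACCESS_TOKEN) are my own assumptions:

build:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout the repository
      uses: actions/checkout@v2
    - name: Set the image environment variables
      run: |
        echo "WEB_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/web" >> $GITHUB_ENV
        echo "NGINX_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/nginx" >> $GITHUB_ENV
    - name: Add environment variables to .env
      run: |
        echo "DEBUG=0" >> .env
        echo "SECRET_KEY=${{ secrets.SECRET_KEY }}" >> .env
    - name: Log in to GitHub Packages
      run: echo ${{ secrets.PERSONAL_ACCESS_TOKEN }} | docker login docker.pkg.github.com -u $GITHUB_ACTOR --password-stdin
    - name: Pull images (ignore failures on the first run)
      run: |
        docker pull $WEB_IMAGE || true
        docker pull $NGINX_IMAGE || true
    - name: Build images
      run: docker-compose -f docker-compose.ci.yml build
    - name: Push images to GitHub Packages
      run: |
        docker push $WEB_IMAGE
        docker push $NGINX_IMAGE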

And the following

So, in the deploy job, which only runs if the build job completes successfully (via needs: build), we:

1. Check out the repository so the job has access to it
2. Add environment variables to a .env file
3. Add the private SSH key to the ssh-agent and run the agent
4. Copy over the .env and docker-compose.prod.yml files to the remote server
5. SSH to the remote server on DigitalOcean
6. Navigate to the deployment directory and set the environment variables
7. Log in to GitHub Packages
8. Pull the images
9. Spin up the containers
10. End the SSH session
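
And this is roughly where I've got to with the deploy job (again only a sketch; DIGITAL_OCEAN_IP_ADDRESS, PRIVATE_KEY, PERSONAL_ACCESS_TOKEN, and the /app path are assumptions on my part):

deploy:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout the repository
      uses: actions/checkout@v2
    - name: Add environment variables to .env
      run: |
        echo "WEB_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/web" >> .env
        echo "NGINX_IMAGE=docker.pkg.github.com/$GITHUB_REPOSITORY/nginx" >> .env
        echo "NAMESPACE=$GITHUB_ACTOR" >> .env
        echo "PERSONAL_ACCESS_TOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }}" >> .env
    - name: Add the private SSH key to the ssh-agent
      env:
        SSH_AUTH_SOCK: /tmp/ssh_agent.sock
      run: |
        mkdir -p ~/.ssh
        ssh-agent -a $SSH_AUTH_SOCK > /dev/null
        echo "${{ secrets.PRIVATE_KEY }}" | tr -d '\r' | ssh-add -
    - name: Copy files and restart the containers on the droplet
      env:
        SSH_AUTH_SOCK: /tmp/ssh_agent.sock
      run: |
        scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@${{ secrets.DIGITAL_OCEAN_IP_ADDRESS }}:/app
        ssh -o StrictHostKeyChecking=no root@${{ secrets.DIGITAL_OCEAN_IP_ADDRESS }} << 'ENDSSH'
          cd /app
          source .env
          docker login docker.pkg.github.com -u $NAMESPACE -p $PERSONAL_ACCESS_TOKEN
          docker pull $WEB_IMAGE
          docker pull $NGINX_IMAGE
          docker-compose -f docker-compose.prod.yml up -d
        ENDSSH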

These steps are not easy to follow as a beginner, so it would be great if you could include the code/steps/examples for these parts.

Thanks again for your time!
