
SDR Docker Base Image

Purpose

Provide a base image containing the packages common to the mikenye, fredclausen, and kx1t SDR Docker images, to reduce download time and disk space usage for users.

Adding containers

  1. Create your new Dockerfile
  2. Update the GitHub Actions workflows
  3. Update the Tags section
  4. Update the Projects and Tag Tree section

Tags

| Tag | Extends | Included Packages |
|-----|---------|-------------------|
| base | - | s6-overlay (via mikenye/deploy-s6-overlay), mikenye/docker-healthchecks-framework, bc, ca-certificates, curl, gawk, ncat, net-tools, procps, socat |
| acars-decoder | rtlsdr | libacars and all prerequisites for full functionality (zlib1g, libxml2, libsqlite3) |
| python | base | python3, python3-pip, python3-setuptools, python3-wheel |
| readsb-full | rtlsdr | Latest dev branch of Mictronics/readsb-protobuf and all prerequisites for full functionality (bladeRF, bladeRF FPGA images, libiio (for PlutoSDR), libad9361-iio (for PlutoSDR)) |
| readsb-netonly | base | Latest dev branch of Mictronics/readsb-protobuf, intended to operate in network-only mode |
| wreadsb | base | Latest dev branch of wiedehopf's fork of readsb, with rtl-sdr & libusb |
| rtlsdr | base | Latest tagged release of rtl-sdr, and prerequisites (e.g. libusb) |
| soapyrtlsdr | rtlsdr | Latest tagged releases of SoapySDR and SoapyRTLSDR, and prerequisites (python3, python3-pip, python3-setuptools, python3-wheel) |
| dump978-full | soapyrtlsdr | Latest tagged release of flightaware/dump978, and prerequisites (various Boost libraries) |
| qemu | base | qemu-user-static binaries |

Using

Simply add `FROM ghcr.io/sdr-enthusiasts/docker-baseimage:<tag>` at the top of your Dockerfile, replacing `<tag>` with one of the tags above.

The base image provides an `ENTRYPOINT` for starting the container, so unless you have a specific reason to change this, you do not need to provide an `ENTRYPOINT` in your Dockerfile.

Example:

```dockerfile
FROM ghcr.io/sdr-enthusiasts/docker-baseimage:rtlsdr
RUN ...
```

Tag-specific Notes

readsb-full

  • The readsb webapp and configuration files are included in the image (see /usr/share/readsb/html and /etc/lighttpd/conf-available); however, lighttpd has not been installed or configured. You will need to do this if you want this functionality in your image.
  • The collectd configuration files are included in the image (see /etc/collectd/collectd.conf.d and /usr/share/readsb/graphs); however, collectd/rrdtool have not been installed or configured. You will need to do this if you want this functionality in your image.
  • The installed version of readsb's protobuf protocol file is located at /opt/readsb-protobuf, should you need it in your image.
  • bladeRF FPGA firmware images are located at /usr/share/Nuand/bladeRF.
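Wiring up the bundled webapp in a child image might look like the sketch below. The lighttpd config filename glob is an assumption; check the actual name shipped under /etc/lighttpd/conf-available before relying on it.

```dockerfile
# Hypothetical child image enabling the readsb webapp shipped in readsb-full.
FROM ghcr.io/sdr-enthusiasts/docker-baseimage:readsb-full
RUN apt-get update && \
    apt-get install -y --no-install-recommends lighttpd && \
    # Activate the readsb config the base image ships (filename assumed).
    ln -s /etc/lighttpd/conf-available/*readsb*.conf /etc/lighttpd/conf-enabled/ && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```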

Projects and Tag Tree

| Tag | Sub-tags Using | Upstream Projects Using |
|-----|----------------|-------------------------|
| base | ALL | sdr-enthusiasts/acars_router, sdr-enthusiasts/airspy-adsb, sdr-enthusiasts/docker-radarbox, sdr-enthusiasts/docker-adsbhub, sdr-enthusiasts/docker-opensky-network, sdr-enthusiasts/docker-rtlsdrairband |
| acars-decoder | - | sdr-enthusiasts/docker-acarsdec, sdr-enthusiasts/docker-dumpvdl2, sdr-enthusiasts/docker-vdlm2dec |
| python | - | sdr-enthusiasts/docker-acarshub, sdr-enthusiasts/docker-adsbexchange, kx1t/docker-planefence, sdr-enthusiasts/docker-radarvirtuel, sdr-enthusiasts/docker-reversewebproxy, kx1t/docker-raspberry-noaa-v2 |
| rtlsdr | acars-decoder, readsb-full, soapyrtlsdr, wreadsb | sdr-enthusiasts/acars-oxide |
| readsb-full | - | sdr-enthusiasts/docker-readsb-protobuf |
| readsb-netonly | - | - |
| soapyrtlsdr | dump978-full | - |
| dump978-full | - | sdr-enthusiasts/docker-piaware, sdr-enthusiasts/docker-dump978 |
| wreadsb | - | sdr-enthusiasts/docker-tar1090 |
| qemu | - | sdr-enthusiasts/docker-flightradar24, sdr-enthusiasts/docker-planefinder |

Contributors

dependabot[bot], fredclausen, kx1t, mikenye, wiedehopf

docker-baseimage's Issues

Discussion: Removal of `hadolint` step in `linter.yml`

Hi All,

Do we need the `hadolint` job in the `linter.yml` workflow? If a Dockerfile is changed, it should be linted via the `hadolint` job in `on_pr.yml`.

Likewise, we might not need the `markdownlint` and `yamllint` jobs in the `on_pr.yml` workflow, as these already get linted via `linter.yml`.

The jobs currently seem to run twice; here's a screenshot from a PR I'm drafting, where `.md`, `.yml`, and Dockerfiles were added/changed.
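One way to keep each file type linted exactly once would be a `paths` filter, sketched below. The workflow and job names are taken from this discussion; the exact glob is an assumption about the repo layout.

```yaml
# Sketch: scope on_pr.yml's hadolint job to Dockerfile changes only,
# leaving repo-wide markdown/yaml linting to linter.yml.
on:
  pull_request:
    paths:
      - "Dockerfile*"
```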


Not a big deal, but it may be confusing for contributors.

Cheers.

apt-get upgrade in base image?

In the future I imagine we'll be rebuilding the images once a month, when the upstream `-slim` image is updated, but is there any harm in running `apt-get upgrade`? I figure it's worth double-checking that we are fully up to date when we build the image, and if we have reason to kick off an interim build, it'll capture any updates.
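The idea above could be a single layer early in the base Dockerfile, a minimal sketch:

```dockerfile
# Sketch: refresh all installed packages at build time so each rebuild
# picks up any Debian updates published since the upstream -slim image was cut.
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```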

Move away from PATs

I haven't looked into this in depth, but in the situations where we are using a PAT to initiate actions (like triggering upstream builds), is it possible to use the built-in GitHub tokens to accomplish this? Or can we generate such a token?

add all soapy drivers to soapyrtlsdr

It would probably be better to add all of the various Soapy drivers when SoapySDR is installed in soapyrtlsdr, and to change containers compatible with Soapy to use soapyrtlsdr as the base image in their Dockerfiles.

https://github.com/sdr-enthusiasts/docker-dumphfdl/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-rtlsdrairband/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-dumpvdl2/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-acarsdec/blob/main/Dockerfile
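A minimal sketch of the proposal, assuming Debian's `soapysdr-module-all` metapackage covers the drivers these downstream projects need; note the base image builds SoapySDR from source, so mixing in apt-packaged modules would need an ABI compatibility check first.

```dockerfile
# Hypothetical: install every packaged Soapy driver alongside SoapyRTLSDR.
FROM ghcr.io/sdr-enthusiasts/docker-baseimage:soapyrtlsdr
RUN apt-get update && \
    apt-get install -y --no-install-recommends soapysdr-module-all && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```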

Discussion: How often should we build images?

Given this project's stated goal of reducing download time and disk space usage for users, would it be in our interest to move from daily image builds to something less frequent? I know the major advantage of daily builds is that we capture any installed package updates within 24 hours of their release, which is highly advantageous if those updates fix a major bug or security issue.

However, we are doing so at the cost of longer download times, seven days a week, for users of Watchtower or similar systems. With the glacial pace that Debian takes for updates, we very likely see no package updates between builds, so we are creating the same image many days in a row, but Docker will see each one as a "new" image, causing increased downloads.

Security updates, while important, are likely a non-issue for most users, as everything runs behind routers with appropriate firewalls and is fine to be captured in the next build a few days out. If there is a big CVE we should care about, it's probably big news in tech spaces and we can easily trigger a rebuild manually on GitHub.

My proposal:

  • Move to weekly builds. Friday afternoon, 14:00 UTC or thereabouts, seems reasonable. We could have our individual projects set up to build 2 or more hours later, so that we capture the new images in those builds. Friday also seems like the best day because the weekend follows, so we as maintainers can deal with any issues in either the base images or our projects, and users will have a better chance to notice and deal with issues.

  • Alternatively, we continue with daily builds but pin the versions of all installed Debian packages in the Dockerfile (as well as their dependencies), and then check what has been updated, update the Dockerfile(s) as appropriate, and rebuild only what needs to be updated.

Ideally, in either case, we would mirror the same process in all of our projects so that all of us are always using the same set of base images.

The second proposal kind of covers all of the bases: we'll get timely updates basically as soon as they're out, while not rebuilding what doesn't need to be rebuilt. However, this comes at significantly increased complexity in the Dockerfiles, as we will have to go through and pin all versions of the software we are installing (as well as dependencies), and create scripts that GitHub can run via Actions to manage all of this, because right now GitHub has no built-in actions that cover it. I'm willing to put the effort in if this is the desired course of action.
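The weekly-build option could be a one-line change to the build workflow's trigger, sketched here; the workflow name and existing trigger layout are assumptions.

```yaml
# Sketch of the weekly-build proposal: build every Friday at 14:00 UTC,
# keeping manual dispatch available for urgent CVE-driven rebuilds.
on:
  schedule:
    - cron: "0 14 * * 5" # Fridays, 14:00 UTC
  workflow_dispatch:
```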

Dependabot auto-merge and test build CI

Two questions:

  • Should we enable auto-merging of a Dependabot PR if it passes CI?
  • Do we know, during a PR run, whether newly built images are used by the images further down the tree? We aren't pushing the image out as a package. To be explicit: if :base is updated, do all of the images that depend on it also use that newly built :base image, or does each child image pull from the most recent (and now old) tagged package?

The second point is important because if the child builds AREN'T using the test images, we definitely shouldn't auto-merge, as the children wouldn't have been built against the new images and may have problems.
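If the answer to the second question turns out to be yes, auto-merge could follow the pattern GitHub documents for Dependabot, roughly as below; this is a sketch, and it assumes "Allow auto-merge" is enabled in the repository settings so the merge only happens after required checks pass.

```yaml
# Sketch: queue auto-merge for Dependabot PRs; GitHub merges once CI is green.
name: dependabot-auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```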
