sdr-enthusiasts / docker-baseimage

Docker images used to build SDR docker projects
License: MIT License
I haven't looked into this in depth, but in the situations where we are using a PAT to initiate actions (like triggering upstream builds), is it possible to use the built-in GitHub tokens to accomplish this? Or can we generate such a token?
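For reference, a PAT-based downstream trigger typically looks something like the sketch below; the repository name, secret name, and event type are placeholders, not our actual config. As far as I understand, the built-in `GITHUB_TOKEN` is scoped to the repository the workflow runs in, which is why cross-repository dispatch usually needs a PAT, but it's worth confirming.

```yaml
# Hypothetical excerpt of a job's steps; names below are placeholders.
- name: Trigger downstream rebuild
  uses: peter-evans/repository-dispatch@v2
  with:
    token: ${{ secrets.REPO_PAT }}                 # PAT today; GITHUB_TOKEN only covers this repo
    repository: sdr-enthusiasts/some-child-image   # placeholder repository
    event-type: baseimage-updated                  # placeholder event type
```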
Two questions: when `:base` is updated, do all of the images that depend on it also use that newly built `:base` image, or does each child image pull from the most recent (and now old) tagged package? The second point is important because if it ISN'T using the test images we definitely shouldn't auto-merge, because the child builds aren't using the new images and may have problems.
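For context, a child image only names the base by tag in its Dockerfile, so what it actually gets depends on whether that tag is re-resolved at build time. A minimal sketch, assuming the base image is published as `ghcr.io/sdr-enthusiasts/docker-baseimage:base`:

```dockerfile
# Hypothetical child Dockerfile: the FROM line references a tag, not a digest.
# Building with `docker build --pull ...` forces the tag to be re-resolved from
# the registry; without it, a locally cached (possibly stale) :base can be used.
FROM ghcr.io/sdr-enthusiasts/docker-baseimage:base
```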
In the future I imagine we'll be rebuilding the images once a month, when the upstream `-slim` image is updated, but is there any harm in running `apt-get upgrade`? I figure it's worth just double-checking we are fully up to date when we build the image, and if we have reason to kick off an interim build then it'll capture any updates.
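For illustration, a minimal sketch of what that upgrade step could look like in the Dockerfile (the cleanup lines are just to keep the layer small):

```dockerfile
# Refresh the package index, apply any pending upgrades, then clean up.
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```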
It would probably be better to add all the various Soapy drivers when Soapy is installed in `soapyrtlsdr`, and change containers compatible with Soapy to use `soapyrtlsdr` as the base image in their Dockerfiles (see the sketch after the list below):
https://github.com/sdr-enthusiasts/docker-dumphfdl/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-rtlsdrairband/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-dumpvdl2/blob/main/Dockerfile
https://github.com/sdr-enthusiasts/docker-acarsdec/blob/main/Dockerfile
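As a sketch of the change to each of those child projects, assuming the Soapy-enabled base is published as `ghcr.io/sdr-enthusiasts/docker-baseimage:soapyrtlsdr` (image path and tag assumed):

```dockerfile
# Hypothetical first line of a Soapy-compatible child Dockerfile:
# build on the Soapy-enabled base instead of the plain :base tag.
FROM ghcr.io/sdr-enthusiasts/docker-baseimage:soapyrtlsdr
```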
Hi All,
Do we need the `hadolint` job in the `linter.yml` workflow? If a Dockerfile is changed, it should be linted via the `hadolint` job in `on_pr.yml`.

Likewise, we might not need the `markdownlint` and `yamllint` jobs in the `on_pr.yml` workflow, as these get linted via `linter.yml`.
The jobs seem to be running twice currently; here's a screenshot from a PR I'm drafting, where .md, .yml and Dockerfile files were added/changed.
Not a big deal, but it may be confusing for contributors.
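If we do trim the duplication, one option (just a sketch, not something I've tested against our actual workflows) would be to keep Dockerfile linting in a single workflow and scope it with a paths filter so it only runs when a Dockerfile actually changes:

```yaml
# Hypothetical standalone workflow: lint Dockerfiles only on PRs that touch them.
name: hadolint
on:
  pull_request:
    paths:
      - 'Dockerfile*'
jobs:
  hadolint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: Dockerfile
```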
Cheers.
With the stated goal of this project being to reduce download time and disk space utilization for users, would it be in our interest to go from daily image builds to something less frequent? I know the major advantage of daily builds is that we capture any installed package updates within 24 hours of them being released, which is highly advantageous if those updates fix a major bug or security issue.

However, we are doing so at the cost of longer download times for users seven days a week if they use Watchtower or similar systems. With the glacial pace that Debian takes for updates, we are very likely seeing no package updates between builds, so we are creating the same image many days in a row, but Docker will see each one as a "new" image, causing increased downloads.

Security updates, while important, are likely a non-issue for most users, as everything is being run behind routers with appropriate firewalls and is fine to be captured in the next build a few days out. If there is a big CVE we should care about, it's probably big news in tech spaces and we can manually trigger a rebuild on GitHub easily.
My proposal:
Move to weekly builds. Friday afternoon, 14:00 UTC or thereabouts, seems reasonable. We could set up our individual projects to build two or more hours later so that they pick up the new base images. Friday also seems like the best day because it leads into the weekend, so we as maintainers can deal with any issues in either the base images or our projects, and users will have a better chance to notice and deal with issues.
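A minimal sketch of the corresponding trigger in the build workflow (the `workflow_dispatch` line is just to keep a manual trigger for interim rebuilds):

```yaml
# Hypothetical trigger block: build weekly on Fridays at 14:00 UTC,
# with a manual trigger retained for interim/security rebuilds.
on:
  schedule:
    - cron: '0 14 * * 5'   # minute hour day-of-month month day-of-week (5 = Friday)
  workflow_dispatch:
```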
Alternatively, we continue with daily builds but pin the versions of all installed Debian packages in the Dockerfile (as well as their dependencies), then check to see what has been updated, update the Dockerfile(s) as appropriate, and rebuild only what needs to be updated.

Ideally, in either case, we also emulate the same process in all of our projects so that all of us are always using the same set of base images.

The second proposal kind of covers all of the bases: we'd get timely updates basically as soon as they're out, while not rebuilding what doesn't need to be rebuilt. However, this comes at the cost of significantly increased complexity in the Dockerfiles, as we would have to go through and pin the versions of all the software we install as well as its dependencies, and create scripts that GitHub can run via Actions to manage all of this, because right now GitHub has no built-in actions that cover it. I'm willing to put the effort in if this is the desired course of action.
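For a sense of what the pinning would look like in practice, a minimal sketch (the package names and versions here are placeholders, not real pins):

```dockerfile
# Hypothetical pinned install; exact package=version pins are placeholders.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libusb-1.0-0=2:1.0.26-1 \
        rtl-sdr=0.6.0-4 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```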
Add support for libbladeRF https://github.com/Nuand/bladeRF/wiki/Getting-Started%3A-Linux
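A minimal sketch of what adding it could look like, assuming the Debian-packaged bladeRF tools and libraries are sufficient (building from the Nuand sources is the alternative described in that wiki page):

```dockerfile
# Hypothetical install of the Debian-packaged bladeRF CLI tools and library headers.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        bladerf \
        libbladerf-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```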