
actions-operator's People

Contributors

adam-stokes, addyess, ca-scribner, carlcsaposs-canonical, chris-sanders, johnsca, lucabello, mateoflorido, neoaggelos, rgildein, sed-i, stonepreston, weiiwang01, yanksyoon, yhaliaw


actions-operator's Issues

actions operator fails when running with lxd provider

Hey there, thanks for putting together the handy bootstrapping workflow.

I have been setting this up with pytest-operator and lxd as the provider and came up against a permissions issue whereby the runner user fails to execute the charm build because it is not a member of the lxd group:

$ tox -e integration
.tox create: /home/runner/work/artifactory-operator/artifactory-operator/.tox/.tox
.tox installdeps: build, setuptools >= 44.0.0, wheel >= 0.34.2, twine >= 3.2.0, tox >= 3.20.1
.package create: /home/runner/work/artifactory-operator/artifactory-operator/.tox/.package
.package installdeps: setuptools>=44.0.0, wheel>=0.34.0
integration create: /home/runner/work/artifactory-operator/artifactory-operator/.tox/integration
integration installdeps: ops>=1.2.0, juju/python-libjuju@refs/heads/master.zip#egg=juju, pytest, pytest-operator
integration inst: /home/runner/work/artifactory-operator/artifactory-operator/.tox/.tmp/package/1/artifactory-operator-0.0.1.tar.gz
integration installed: artifactory-operator==0.0.1,attrs==21.2.0,backcall==0.2.0,bcrypt==3.2.0,cachetools==4.2.2,certifi==2021.5.30,cffi==1.14.6,charset-normalizer==2.0.4,cryptography==3.4.7,decorator==5.0.9,google-auth==2.0.1,idna==3.2,iniconfig==1.1.1,ipdb==0.13.9,ipython==7.26.0,ipython-genutils==0.2.0,jedi==0.18.0,Jinja2==3.0.1,juju @ juju/python-libjuju@refs/heads/master.zip
integration run-test-pre: PYTHONHASHSEED='3884729705'
integration run-test: commands[0] | pytest -v --tb native --show-capture=no --log-cli-level=INFO -s -m integration /home/runner/work/artifactory-operator/artifactory-operator/tests
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/runner/work/artifactory-operator/artifactory-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/runner/work/artifactory-operator/artifactory-operator, configfile: pyproject.toml
plugins: asyncio-0.15.1, operator-0.8.1
collecting ... collected 65 items / 63 deselected / 2 selected

tests/test_integration.py::test_build_and_deploy /snap/bin/juju
/snap/bin/charmcraft

-------------------------------- live log setup --------------------------------
INFO     pytest_operator.plugin:plugin.py:155 Using tmp_path: /home/runner/work/artifactory-operator/artifactory-operator/.tox/integration/tmp/pytest/test-integration-miah0
INFO     pytest_operator.plugin:plugin.py:217 Adding model github-pr-0cb90:test-integration-miah
-------------------------------- live log call ---------------------------------
INFO     pytest_operator.plugin:plugin.py:333 Building charm artifactory
FAILED
tests/test_integration.py::test_bundle 
-------------------------------- live log call ---------------------------------
INFO     pytest_operator.plugin:plugin.py:333 Building charm artifactory
FAILED
------------------------------ live log teardown -------------------------------
INFO     pytest_operator.plugin:plugin.py:231 Model is empty
INFO     pytest_operator.plugin:plugin.py:289 Destroying model test-integration-miah
...<snipped>.........................................................................
 Traceback (most recent call last):
  File "/home/runner/work/artifactory-operator/artifactory-operator/tests/test_integration.py", line 15, in test_bundle
    bundle_path.read_text(), charm=await ops_test.build_charm(charm_path)
  File "/home/runner/work/artifactory-operator/artifactory-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 351, in build_charm
    raise RuntimeError(
RuntimeError: Failed to build charm /home/runner/work/artifactory-operator/artifactory-operator:
LXD requires additional permissions.
Please ensure that the user is in the 'lxd' group. (full execution logs in '/home/runner/snap/charmcraft/common/charmcraft-log-amtk_vsg')

This is a two-part problem, one part of which may be addressed in #15: the runner user must be added to the lxd group. I have done this with the following workaround:

      - name: add user to lxd group
        run: |
          sudo usermod -a -G lxd $USER

The second issue is that the running shell session in the GitHub runner does not pick up the new group membership, so simply adding the group doesn't resolve the bug; the relevant code/tests need to be executed in a new login context. I am currently using the following workaround:

      - name: run integration Tests
        run: |
          sudo su -l -c "$(which bash) -c 'cd $PWD && $(which tox) -e integration'" $USER
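
Put together, a minimal job fragment applying both workarounds might look like this (step names and the tox environment are illustrative, not prescriptive):

      - name: add user to lxd group
        run: |
          sudo usermod -a -G lxd $USER
      - name: run integration tests
        run: |
          # re-exec in a fresh login shell so the new lxd group membership takes effect
          sudo su -l -c "$(which bash) -c 'cd $PWD && $(which tox) -e integration'" $USER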

Initialize microk8s: Error from server (Forbidden): selfsubjectaccessreviews.authorization.k8s.io is forbidden: User "me" cannot create resource "selfsubjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope

Hi,

We have noticed the following error in our repos, e.g. mysql-k8s:

Run charmed-kubernetes/actions-operator@main
...
Initialize microk8s
...
  /usr/bin/sg microk8s -c microk8s kubectl auth can-i create pods --as=me
  Error from server (Forbidden): selfsubjectaccessreviews.authorization.k8s.io is forbidden: User "me" cannot create resource "selfsubjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope

The same error is reported for this repository's own tests.

  1. Should it be cleaned / fixed?
  2. Why did the GH action continue and not report an error?

P.S. I am not able to reproduce the same issue locally using the same microk8s version:

> /usr/bin/sg microk8s -c "microk8s kubectl auth can-i create pods --as=me"
no

> microk8s version
MicroK8s v1.26.1 revision 4595

Tnx!

Allow for setting snap channels of all auto-installed snaps

actions-operator conveniently installs charmcraft. However, I would like to install charmcraft from latest/candidate, since that's the channel pointing at 1.3.1. I can run sudo snap refresh charmcraft --channel latest/candidate after the actions-operator step, but it would be nice if I could just add a charmcraft_channel: latest/candidate section to the with: block.

This also seems like something that should be done for all snaps installed by actions-operator, since various use cases like this will probably crop up for other snaps as well.
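
For illustration, the proposed usage might look like this (charmcraft_channel is the input name suggested above and is hypothetical, not an existing input; the provider value is only an example):

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s
          charmcraft_channel: latest/candidate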

Not working with strict microk8s

Hey!

I've been doing some experimentation with Juju 3.0 and MicroK8s 1.25/strict in GitHub Actions prior to the 3.0 release.

We're about to hit a fairly big problem across a bunch of teams: the actions-operator is currently hardcoded to add the user to the microk8s group, which doesn't exist with the strict microk8s snap (it uses snap_microk8s instead).

Given that strict microk8s is a hard requirement for Juju 3.0, we probably want to sort this out before it releases some time this week.

Issue demonstrated here: https://github.com/jnsgruk/parca-k8s-operator/actions/runs/3265300610/jobs/5367304679#step:3:114

with the following:

- name: Setup operator environment
  uses: charmed-kubernetes/actions-operator@main
  with:
    provider: microk8s
    channel: 1.25-strict/stable
    juju-channel: 3.0/candidate

optionally skip `post Setup operator environment`

The post Setup operator environment cleanup happens after all meaningful work is complete and immediately before stopping the workflow. Is cleanup necessary? At least in some cases, like when creating an ephemeral microk8s cluster for testing, couldn't we skip this step and leave microk8s running on the runner (for GitHub to trash itself)?

There might be cases where cleanup is important (if we bootstrap on an existing cloud?), but for microk8s I think it is unnecessary. It also takes maybe 2 minutes per run, which is a good portion of many integration tests.
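
If this were made configurable, a hypothetical opt-out input might look like the sketch below (the skip-teardown name is invented for illustration; no such input exists today):

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s
          skip-teardown: "true"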

Error while executing juju bootstrap microk8s: juju "3.3.1" can only work with strictly confined microk8s

Hi. In the "Bootstrap controller" step I'm getting the error:
ERROR "/var/snap/juju/25912/microk8s/credentials/client.config" does not exist: juju "3.3.1" can only work with strictly confined microk8s

Here is the related step from the action:

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s

The full section of the log is:

Bootstrap controller
  /usr/bin/sg microk8s -c juju bootstrap --debug --verbose microk8s github-pr-ede0d-microk8s --model-default test-mode=true --model-default automatically-retry-hooks=false --model-default logging-config="<root>=DEBUG"  --bootstrap-constraints=""
  01:12:56 INFO  juju.cmd supercommand.go:56 running juju [3.3.1 92a5cac53c51249b21ae0905710625725b4defe9 gc go1.21.5]
  01:12:56 DEBUG juju.cmd supercommand.go:57   args: []string{"/snap/juju/25912/bin/juju", "bootstrap", "--debug", "--verbose", "microk8s", "github-pr-ede0d-microk8s", "--model-default", "test-mode=true", "--model-default", "automatically-retry-hooks=false", "--model-default", "logging-config=<root>=DEBUG", "--bootstrap-constraints="}
  01:12:56 DEBUG juju.environs.tools build.go:123 looking for: /snap/juju/25912/bin/juju
  01:12:59 DEBUG juju.environs.tools versionfile.go:54 looking for sha256 8879a099e75efa81dc6a2abf1c1c80aea8d4804b9665cb664c9ea6ea7a0410a4
  ERROR "/var/snap/juju/25912/microk8s/credentials/client.config" does not exist: juju "3.3.1" can only work with strictly confined microk8s
  01:12:59 DEBUG cmd supercommand.go:549 error stack: 
  juju "3.3.1" can only work with strictly confined microk8s
  github.com/juju/juju/caas/kubernetes/provider.getLocalMicroK8sConfig:114: "/var/snap/juju/25912/microk8s/credentials/client.config" does not exist
  github.com/juju/juju/caas/kubernetes/provider.kubernetesEnvironProvider.DetectCloud:58: 
  github.com/juju/juju/cmd/juju/commands.(*bootstrapCommand).detectCloud:1287: 
  github.com/juju/juju/cmd/juju/commands.(*bootstrapCommand).cloud:1226: 
  github.com/juju/juju/cmd/juju/commands.(*bootstrapCommand).Run:708: 
  Error: The process '/usr/bin/sg' failed with exit code 1

You can see it in situ here: https://github.com/catalogicsoftware/cloudcasa-charm/actions/runs/7982420569/job/21795912102?pr=6

I'd suspect something set up wrong in my environment, but I don't really see anything obvious.
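
For what it's worth, since Juju 3.x only works with strictly confined microk8s, one likely mitigation (untested here) is to request a strict microk8s channel explicitly instead of relying on the default, e.g.:

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s
          channel: 1.28-strict/stable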

Enable ingress on microk8s

In order to be able to use nginx-ingress-integrator and other similar charms, ingress needs to be enabled in microk8s. Currently, this action does not enable it.
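
Until the action supports this, a manual workaround is to enable the addon in a follow-up step; a minimal sketch:

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s
      - name: Enable ingress addon
        run: sudo microk8s enable ingress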

Kubernetes model doesn't clean up

Running this on top of a Kubernetes cluster with a charm, I found that when the charm fails on the config hook, the model isn't cleaned up. At first glance, this might be a difference in how 'machines' are returned on a container model.

Here's the output showing where it failed. It's a little verbose but shows the error.

$ tox -e integration
pyenv doesn't seem to be installed, you probably don't want this plugin installed either.
integration installed: appdirs==1.4.4,attrs==20.3.0,backcall==0.2.0,bcrypt==3.2.0,blessings==1.6,certifi==2020.12.5,cffi==1.14.5,chardet==3.0.4,charm-tools==2.8.3,charmcraft==0.7.0,Cheetah3==3.2.6.post1,colander==1.7.0,cryptography==3.4.6,decorator==4.4.2,dict2colander==0.2,distlib==0.3.1,distro==1.5.0,filelock==3.0.12,httplib2==0.19.0,idna==2.10,iniconfig==1.1.1,ipdb==0.13.6,ipython==7.21.0,ipython-genutils==0.2.0,iso8601==0.1.14,jedi==0.18.0,Jinja2==2.11.2,jsonschema==2.5.1,juju @ https://github.com/juju/python-libjuju/archive/master.zip,jujubundlelib==0.5.6,keyring==20.0.1,launchpadlib==1.10.13,lazr.restfulclient==0.14.3,lazr.uri==1.0.5,libcharmstore==0.0.9,macaroonbakery==1.3.1,MarkupSafe==1.1.1,mypy-extensions==0.4.3,oauthlib==3.1.0,otherstuf==1.1.0,packaging==20.9,paramiko==2.7.2,parse==1.19.0,parso==0.8.1,path==15.1.2,path.py==12.5.0,pathspec==0.3.4,pbr==5.5.1,pexpect==4.8.0,pickleshare==0.7.5,pluggy==0.13.1,prompt-toolkit==3.0.16,protobuf==3.15.5,ptyprocess==0.7.0,py==1.10.0,pyasn1==0.4.8,pycparser==2.20,Pygments==2.8.1,pymacaroons==0.13.0,PyNaCl==1.4.0,pyparsing==2.4.7,pyRFC3339==1.1,pytest==6.2.2,pytest-operator==0.5.1,python-dateutil==2.8.1,pytz==2021.1,PyYAML==5.3.1,requests==2.24.0,requests-toolbelt==0.9.1,requirements-parser==0.2.0,ruamel.yaml==0.15.100,SecretStorage==2.3.1,six==1.15.0,stuf==0.9.16,tabulate==0.8.7,testresources==2.0.1,theblues==0.5.2,toml==0.10.2,toposort==1.6,traitlets==5.0.5,translationstring==1.4,typing-extensions==3.7.4.3,typing-inspect==0.6.0,urllib3==1.25.11,vergit==1.0.2,virtualenv==20.4.2,wadllib==1.3.5,wcwidth==0.2.5,websockets==7.0
integration run-test-pre: PYTHONHASHSEED='4220337346'
integration run-test: commands[0] | pytest -v --tb native --show-capture=no --log-cli-level=INFO -s /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/tests/integration
==================================================================== test session starts =====================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/bin/python
cachedir: .tox/integration/.pytest_cache
rootdir: /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator
plugins: operator-0.5.1
collected 2 items

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
----------------------------------------------------------------------- live log setup -----------------------------------------------------------------------
INFO     pytest_operator.plugin:plugin.py:123 Using tmp_path: /tmp/pytest-of-chris/pytest-42/integration-tests-ytqc0
INFO     pytest_operator.plugin:plugin.py:196 Adding model micro:integration-tests-zh8f
----------------------------------------------------------------------- live log call ------------------------------------------------------------------------
INFO     pytest_operator.plugin:plugin.py:313 Building charm gatekeeper-audit
INFO     juju.model:model.py:1546 Deploying local:kubernetes/gatekeeper-audit-0
INFO     juju.model:model.py:2257 Waiting for model:
  kubernetes/0 [allocating] waiting: installing agent
INFO     juju.model:model.py:2257 Waiting for model:
  kubernetes/0 [allocating] waiting: agent initializing
FAILED
tests/integration/test_charm.py::IntegrationTests::test_status_messages XFAIL (aborted)
tests/integration/test_charm.py::IntegrationTests::test_status_messages ERROR

=========================================================================== ERRORS ===========================================================================
_________________________________________________ ERROR at teardown of IntegrationTests.test_status_messages _________________________________________________
Traceback (most recent call last):
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 132, in inject_fixtures
    cls.loop.run_until_complete(cls.cleanup_model())
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 262, in cleanup_model
    await cls.dump_model()
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/pytest_operator/plugin.py", line 223, in dump_model
    unit.machine.id,
AttributeError: 'NoneType' object has no attribute 'id'
========================================================================== FAILURES ==========================================================================
___________________________________________________________ IntegrationTests.test_build_and_deploy ___________________________________________________________
Traceback (most recent call last):
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/tests/integration/test_charm.py", line 21, in test_build_and_deploy
    await self.model.wait_for_idle(wait_for_active=True, timeout=60 * 60)
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py", line 2249, in wait_for_idle
    _raise_for_status(errors, "error")
  File "/home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py", line 2201, in _raise_for_status
    raise error_type("{}{} in {}: {}".format(
juju.errors.JujuUnitError: Unit in error: kubernetes/0
====================================================================== warnings summary ======================================================================
tests/integration/test_charm.py:15
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/tests/integration/test_charm.py:15: PytestUnknownMarkWarning: Unknown pytest.mark.order - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.order("first")

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:783: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self.stopped = asyncio.Event(loop=loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:166: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self.reconnecting = asyncio.Lock(loop=connection.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:167: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self.close_called = asyncio.Event(loop=connection.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /usr/lib/python3.8/asyncio/tasks.py:578: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    done = Queue(loop=loop)

tests/integration/test_charm.py: 45 warnings
  /usr/lib/python3.8/asyncio/queues.py:48: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self._finished = locks.Event(loop=loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:614: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    for task in asyncio.as_completed(tasks, loop=self.loop):

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:218: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self._drain_lock = asyncio.Lock(loop=loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:977: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.sleep(self.ping_interval, loop=self.loop)

tests/integration/test_charm.py: 46 warnings
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:911: DeprecationWarning: 'with (yield from lock)' is deprecated use 'async with lock' instead
    with (yield from self._drain_lock):

tests/integration/test_charm.py: 42 warnings
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/utils.py:66: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    value = await self._queues[id].get()

tests/integration/test_charm.py: 74 warnings
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/utils.py:127: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    done, pending = await asyncio.wait([task] + event_tasks,

tests/integration/test_charm.py: 35 warnings
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:416: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.wait(

tests/integration/test_charm.py: 16 warnings
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/client/connection.py:411: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    await asyncio.sleep(10, loop=self.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py:461: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self._watch_stopping = asyncio.Event(loop=self._connector.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py:462: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self._watch_stopped = asyncio.Event(loop=self._connector.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py:463: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    self._watch_received = asyncio.Event(loop=self._connector.loop)

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:532: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.wait_for(

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:554: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.wait_for(

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:1077: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.wait_for(

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/websockets/protocol.py:988: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    yield from asyncio.wait_for(

tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy
  /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/lib/python3.8/site-packages/juju/model.py:992: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
    q = asyncio.Queue(loop=self._connector.loop)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================== short test summary info ===================================================================
FAILED tests/integration/test_charm.py::IntegrationTests::test_build_and_deploy - juju.errors.JujuUnitError: Unit in error: kubernetes/0
ERROR tests/integration/test_charm.py::IntegrationTests::test_status_messages - AttributeError: 'NoneType' object has no attribute 'id'
=============================================== 1 failed, 1 xfailed, 288 warnings, 1 error in 65.86s (0:01:05) ===============================================
ERROR: InvocationError for command /home/chris/src/charmed-kubernetes/k8s-opa-operator/opa-audit-operator/.tox/integration/bin/pytest -v --tb native --show-capture=no --log-cli-level=INFO -s tests/integration (exited with code 1)
__________________________________________________________________________ summary ___________________________________________________________________________
ERROR:   integration: commands failed

Actions operator fails with microk8s

Here are what I believe to be the relevant logs:

/usr/bin/sg microk8s -c juju bootstrap --debug --verbose microk8s github-pr-329df --bootstrap-constraints cores=2 mem=4G --model-default test-mode=true --model-default image-stream=daily --model-default automatically-retry-hooks=false --model-default logging-config=<root>=DEBUG
14:29:21 INFO  cmd main.go:257 Since Juju 2 is being run for the first time, downloaded the latest public cloud information.
14:29:21 INFO  juju.cmd supercommand.go:56 running juju [2.9.11 0 7fcbdb3115b295c1610287d0db7323dfa72e8f21 gc go1.14.15]
14:29:21 DEBUG juju.cmd supercommand.go:57   args: []string{"/snap/juju/16977/bin/juju", "bootstrap", "--debug", "--verbose", "microk8s", "github-pr-329df", "--bootstrap-constraints", "cores=2"}
14:29:22 DEBUG juju.caas.kubernetes.clientconfig plugins.go:113 polling caas credential rbac secret, in 1 attempt, secret for service account "juju-credential-microk8s" not found
14:29:23 DEBUG juju.caas.kubernetes.clientconfig plugins.go:113 polling caas credential rbac secret, in 2 attempt, secret for service account "juju-credential-microk8s" not found
14:29:24 DEBUG juju.caas.kubernetes.clientconfig plugins.go:113 polling caas credential rbac secret, in 3 attempt, secret for service account "juju-credential-microk8s" not found
14:29:25 DEBUG juju.caas.kubernetes.clientconfig plugins.go:113 polling caas credential rbac secret, in 4 attempt, secret for service account "juju-credential-microk8s" not found
14:29:26 DEBUG juju.caas.kubernetes.clientconfig plugins.go:113 polling caas credential rbac secret, in 5 attempt, secret for service account "juju-credential-microk8s" not found
ERROR resolving microk8s credentials: max duration exceeded: secret for service account "juju-credential-microk8s" not found
14:29:26 DEBUG cmd supercommand.go:537 error stack: 
/build/snapcraft-juju-35d6cf/parts/juju/src/caas/kubernetes/clientconfig/plugins.go:307: secret for service account "juju-credential-microk8s" not found
/build/snapcraft-juju-35d6cf/parts/juju/src/vendor/github.com/juju/retry/retry.go:204: max duration exceeded: secret for service account "juju-credential-microk8s" not found
/build/snapcraft-juju-35d6cf/parts/juju/src/caas/kubernetes/clientconfig/plugins.go:117: 
/build/snapcraft-juju-35d6cf/parts/juju/src/caas/kubernetes/provider/builtin.go:60: resolving microk8s credentials
/build/snapcraft-juju-35d6cf/parts/juju/src/caas/kubernetes/provider/credentials.go:78: 
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/commands/bootstrap.go:1138: 
/build/snapcraft-juju-35d6cf/parts/juju/src/cmd/juju/commands/bootstrap.go:622: 
Error: The process '/usr/bin/sg' failed with exit code 1

Feature request: Add input for microk8s dockerhub mirror configuration

When running many self-hosted runners on a server, installing microk8s can hit the Docker Hub rate limit for an account, since microk8s downloads images during installation.

One way to bypass this is a private Docker registry that acts as a Docker Hub mirror.

MicroK8s allows using a private Docker registry as a Docker Hub mirror:

https://microk8s.io/docs/dockerhub-limits#h-3-use-a-private-image-registry-to-mirror-dockerhub-5

This setting needs to be set up after microk8s is installed.

This feature request proposes an input option for the workflow that takes the URL of a Docker Hub mirror. The workflow would then configure microk8s according to the link above after installation.
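
As a sketch of the proposed interface (the microk8s-dockerhub-mirror input name and the mirror URL are hypothetical):

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: microk8s
          microk8s-dockerhub-mirror: "https://dockerhub-mirror.internal:5000"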

The `test-sa` workaround fails with microk8s 1.24

Not sure if this is the right place to report this, but:

When attempting the microk8s.kubectl create serviceaccount test-sa workaround, the script times out with
No resources found in default namespace.

await exec_as_microk8s("microk8s kubectl create serviceaccount test-sa");
if (! await retry_until_zero_rc("microk8s kubectl get secrets | grep -q test-sa-token-", 12, 10000)) {
core.setFailed("Timed out waiting for test SA token");
return false;
};

  • Setup works with microk8s 1.23 (example)
  • Setup fails with microk8s 1.24 (example)

This is likely because, starting with Kubernetes 1.24, token secrets are no longer created automatically for service accounts, so the test-sa-token- secret the script polls for never appears.

microk8s-channel needed.

There are 'lxd-channel', 'juju-channel', etc. inputs, but none for microk8s. Using 'channel' results in confusion with the lxd channel.

I have this matrix setup:

    strategy:
      fail-fast: false
      matrix:
        # Different clouds
        cloud:
          - "lxd"
          - "microk8s"
        terraform:
          - "1.4.*"
          - "1.5.*"
          - "1.6.*"
        juju:
          - "3.1/stable"

and this test step:

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: ${{ matrix.cloud }}
          juju-channel: ${{ matrix.juju }}
          channel: "1.28-strict/stable"

However, I want the channel to apply only to microk8s, not lxd. Juju 3.1 requires a strictly confined version of microk8s, so I have to specify the channel since it's not the default.
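
A dedicated input, called microk8s-channel here purely as a proposal, would let lxd keep its default channel while microk8s is pinned, e.g.:

      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: ${{ matrix.cloud }}
          juju-channel: ${{ matrix.juju }}
          microk8s-channel: "1.28-strict/stable"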

Setup fails on Ubuntu-22.04

Using this action:

    name: Integration tests
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup operator environment
        uses: charmed-kubernetes/actions-operator@main
        with:
          provider: lxd

It gets stuck on cmd bootstrap.go:485 Running machine configuration script.... I have let it run for 30 minutes so far. On ubuntu-20.04, the same step finishes within a few minutes.

/usr/bin/sudo apt-get update -yqq
  /usr/bin/sudo apt-get install -yqq python3-pip
  /usr/bin/sudo --preserve-env=http_proxy,https_proxy,no_proxy pip3 install tox
  Collecting tox
    Downloading tox-3.25.1-py2.py3-none-any.whl (85 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.0/86.0 KB 2.8 MB/s eta 0:00:00
  Collecting filelock>=3.0.0
    Downloading filelock-3.8.0-py3-none-any.whl (10 kB)
  Collecting py>=1.4.17
    Downloading py-1.11.0-py2.py3-none-any.whl (98 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.7/98.7 KB 8.3 MB/s eta 0:00:00
  Requirement already satisfied: six>=1.14.0 in /usr/lib/python3/dist-packages (from tox) (1.16.0)
  Collecting toml>=0.9.4
    Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
  Collecting virtualenv!=20.0.0,!=20.0.1,!=20.0.2,!=20.0.3,!=20.0.4,!=20.0.5,!=20.0.6,!=20.0.7,>=16.0.0
    Downloading virtualenv-20.16.3-py2.py3-none-any.whl (8.8 MB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.8/8.8 MB 29.3 MB/s eta 0:00:00
  Requirement already satisfied: packaging>=14 in /usr/local/lib/python3.10/dist-packages (from tox) (21.3)
  Collecting pluggy>=0.12.0
    Downloading pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
  Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/lib/python3/dist-packages (from packaging>=14->tox) (2.4.7)
  Collecting platformdirs<3,>=2.4
    Downloading platformdirs-2.5.2-py3-none-any.whl (14 kB)
  Collecting distlib<1,>=0.3.5
    Downloading distlib-0.3.5-py2.py3-none-any.whl (466 kB)
       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 467.0/467.0 KB 44.1 MB/s eta 0:00:00
  Installing collected packages: distlib, toml, py, pluggy, platformdirs, filelock, virtualenv, tox
  Successfully installed distlib-0.3.5 filelock-3.8.0 platformdirs-2.5.2 pluggy-1.0.0 py-1.11.0 toml-0.10.2 tox-3.25.1 virtualenv-20.16.3
  WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Install Juju
  /usr/bin/sudo snap install juju --classic --channel=latest/stable
  juju 2.9.33 from Canonical** installed
Install tools
  /usr/bin/sudo snap install jq
  jq 1.5+dfsg-1 from Michael Vogt (mvo*) installed
  /usr/bin/sudo snap install charm --classic --channel=latest/stable
  charm 2.8.2 from Canonical** installed
  /usr/bin/sudo snap install charmcraft --classic --channel=latest/stable
  charmcraft 2.0.0 from Canonical** installed
  /usr/bin/sudo snap install juju-bundle --classic --channel=latest/stable
  juju-bundle 0.4.0 from Kenneth Koski (kennethkoski) installed
  /usr/bin/sudo snap install juju-crashdump --classic --channel=latest/stable
  juju-crashdump 1.0.2+git120.a838717 from Jason Hobbs installed
Bootstrap controller
  /usr/bin/sg lxd -c juju bootstrap --debug --verbose lxd github-pr-04a11 --model-default test-mode=true --model-default automatically-retry-hooks=false --model-default logging-config="<root>=DEBUG"  --bootstrap-constraints="cores=2 mem=4G"
   - Retrieving image: rootfs: 98% (35.19MB/s)
   - Retrieving image: rootfs: 99% (35.25MB/s)
   - Retrieving image: rootfs: 100% (35.31MB/s)
  08:52:40 DEBUG juju.service discovery.go:67 discovered init system "systemd" from series "focal"
  08:52:40 DEBUG juju.provider.lxd environ_broker.go:223 LXD user data; 3867 bytes
   - Creating container
  08:52:40 INFO  juju.container.lxd container.go:256 starting new container "juju-223c17-0" (image "ubuntu-20.04-server-cloudimg-amd64-lxd.tar.xz")
  08:52:40 DEBUG juju.container.lxd container.go:257 new container has profiles [default juju-controller]
  08:52:52 DEBUG juju.container.lxd container.go:286 created container "juju-223c17-0", waiting for start...
   - Container started
  08:52:52 INFO  juju.provider.lxd environ_broker.go:48 started instance "juju-223c17-0"
  08:52:52 INFO  cmd bootstrap.go:305  - juju-223c17-0 (arch=amd64 mem=4G cores=2)
  08:52:52 INFO  juju.environs.bootstrap bootstrap.go:989 newest version: 2.9.33
  08:52:52 INFO  juju.environs.bootstrap bootstrap.go:1004 picked bootstrap agent binary version: 2.9.33
  08:52:52 INFO  cmd bootstrap.go:625 Installing Juju agent on bootstrap instance
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:417 searching for signed metadata in datasource "gui simplestreams"
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:452 looking for data index using path streams/v1/index2.sjson
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:464 looking for data index using URL https://streams.canonical.com/juju/gui/streams/v1/index2.sjson
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:467 streams/v1/index2.sjson not accessed, actual error: [{github.com/juju/juju/environs/simplestreams.(*urlDataSource).Fetch:192: "https://streams.canonical.com/juju/gui/streams/v1/index2.sjson" not found}]
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:468 streams/v1/index2.sjson not accessed, trying legacy index path: streams/v1/index.sjson
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:487 read metadata index at "https://streams.canonical.com/juju/gui/streams/v1/index.sjson"
  08:52:54 DEBUG juju.environs.simplestreams simplestreams.go:1019 finding products at path "streams/v1/com.canonical.streams-released-dashboard.sjson"
  08:52:54 INFO  cmd bootstrap.go:782 Fetching Juju Dashboard 0.8.1
  08:52:54 DEBUG juju.cloudconfig.instancecfg instancecfg.go:913 Setting numa ctl preference to false
  Waiting for address
  Attempting to connect to 10.53.219.164:22
  08:53:05 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:11 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:16 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:22 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:27 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:32 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:38 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:43 DEBUG juju.provider.common bootstrap.go:647 connection attempt for 10.53.219.164 failed: /var/lib/juju/nonce.txt does not exist
  08:53:49 INFO  cmd bootstrap.go:415 Connected to 10.53.219.164
  08:53:49 INFO  juju.cloudconfig userdatacfg_unix.go:575 Fetching agent: curl -sSfw 'agent binaries from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/2.9.33/juju-2.9.33-linux-amd64.tgz]>
  08:53:49 INFO  cmd bootstrap.go:485 Running machine configuration script...

strict microk8s fails when container-registry-url is set

When using microk8s from a strict channel, if container-registry-url is set, the action fails with the following error:

EACCES: permission denied, open '/var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml'

The error in the log is:

...
Initialize microk8s
  /usr/bin/bash -c sudo usermod -a -G snap_microk8s $USER
  Error: EACCES: permission denied, open '/var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml'
...

It looks like the problem happens when running the following line while configuring microk8s:

fs.writeFileSync("/var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml", content)

This works correctly when the microk8s snap confinement is classic.
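
A possible direction for a fix (an untested sketch, not the action's current behaviour) would be to write the file with elevated privileges instead of calling fs.writeFileSync directly, since that path is root-owned under strict confinement; here $content stands in for the generated hosts.toml body:

sudo mkdir -p /var/snap/microk8s/current/args/certs.d/docker.io
echo "$content" | sudo tee /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml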

You can see examples of the execution failing here:

Post Setup fails to cleanup and fails the entire test

I had a test that failed during cleanup:

Post job cleanup.
/usr/bin/sg microk8s -c juju models --format=json | jq -r '.models[]."short-name"' | grep -v controller | xargs -n 1 juju destroy-model --destroy-storage --force --no-wait -y
Destroying model
Waiting for model to be removed, 5 application(s).....
Waiting for model to be removed..........................
Model destroyed.
ERROR cannot connect to API: model "test-lma-bundle-b8hw" has been removed from the controller, run 'juju models' and switch to one of them.
Destroying model
Waiting for model to be removed...........................
Model destroyed.
Error: The process '/usr/bin/sg' failed with exit code 123

await exec.exec('sg microk8s -c "juju models --format=json | jq -r \'.models[].\\"short-name\\"\' | grep -v controller | xargs -n 1 juju destroy-model --destroy-storage --force --no-wait -y"');

Looks like it picked up a model that was already being destroyed, which was then already gone by the time it tried to (re)remove it. I wonder if we even really need to care about doing that cleanup, since we expect the runner VM / container to go away after that anyway. At the very least, we could add an || true to the destroy-model call. -- @johnsca
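
Following that suggestion, the exec call could append || true so the overall xargs pipeline still succeeds when a model has already vanished; a sketch of the change, not the current code:

await exec.exec('sg microk8s -c "juju models --format=json | jq -r \'.models[].\\"short-name\\"\' | grep -v controller | xargs -n 1 juju destroy-model --destroy-storage --force --no-wait -y || true"');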

MicroK8s provider with channel 1.19 fails to bootstrap

Summary

Using the microk8s provider with 1.19 channel consistently fails while bootstrapping, after enabling storage and dns:

This Launchpad bug seems relevant, https://bugs.launchpad.net/juju/+bug/1937282.

Things are better in newer microk8s versions (1.22 tested only). The issue seems to be that the apiserver takes a long time before accepting new connections after restarting, so the kubectl call immediately fails.

Proposed fix

Retry the kubectl calls with a back-off, to account for the api server taking longer than expected to come up.

A quickfix can be seen in https://github.com/neoaggelos/actions-operator/blob/19a8cb6692d5c25f8c67eb0f8735a89ee5267baf/dist/bootstrap/index.js#L1597-L1605

which did result in the bootstrapping process succeeding.
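
The shape of that quickfix is a bounded retry with a growing delay around the failing kubectl call; a shell-flavoured sketch of the same idea (the kubectl command below is only a stand-in for whatever call the bootstrap step actually makes):

for attempt in 1 2 3 4 5 6; do
  if sg microk8s -c "microk8s kubectl get serviceaccounts" > /dev/null 2>&1; then
    break
  fi
  sleep $((attempt * 10))
done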

microstack as provider

In several cases, it would be useful to have MicroStack available as a provider, even though there is currently no support for Octavia.

Means to itest integrations between machine and k8s charm

I would like to write an itest for integrating a machine charm with COS Lite.
I wonder whether it would be possible to specify multiple providers:

jobs:
#...
        with:
          providers:
          - cloud: microk8s
            microk8s-addons: "storage dns"
          - cloud: lxd

The `latest/stable` channel for juju has been removed from the snap store

The default channel for juju in the snap store has now been changed to 3/stable, and the latest/stable channel has been removed.

It is possible to update the default juju-channel option to 3/stable, but the previous latest/stable channel pointed at Juju 2. This means workflows that relied on the default setting will now be upgraded to Juju 3.

juju-channel:
  description: "snap channel for juju, installed via snap"
  required: false
  default: "latest/stable"

cleanup fails on lxd

hi again

After a successful test run, I am experiencing failures during cleanup when running on lxd:

$ /snap/bin/juju destroy-controller -y github-pr-c60cb --destroy-all-models --destroy-storage
Destroying controller
Waiting for hosted model resources to be reclaimed
Waiting for 2 models
Waiting for 1 model
Waiting for 1 model
All hosted models reclaimed, cleaning up controller machines
Error: The process '/snap/bin/juju' failed with exit code null

I am unclear why this is happening, but I can work around it for the moment by allowing GitHub to continue-on-error on this step.

thanks!
