sigsteve / vagrant-caasp
Vagrant deployment of SUSE CaaS Platform (Kubernetes) v4.5
I encountered the error below and was asked to report an issue. I am using openSUSE Leap 15.1.
When executing sudo ./libvirt_setup/openSUSE_vagrant_setup.sh, I get the following error:
chasecrum@linux-yblm:~/code/vagrant-caasp> sudo ./libvirt_setup/openSUSE_vagrant_setup.sh
rpmdev-vercmp <epoch1> <ver1> <release1> <epoch2> <ver2> <release2>
rpmdev-vercmp <EVR1> <EVR2>
rpmdev-vercmp # with no arguments, prompt
Exit status is 0 if the EVR's are equal, 11 if EVR1 is newer, and 12 if EVR2
is newer. Other exit statuses indicate problems.
As a workaround, this part of the script was commented out and the following command worked:
zypper --no-gpg-checks in -y https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_x86_64.rpm
After this command successfully completed, the vagrant install script (with commented out section) ran successfully to completion.
Opening this per our recent internal discussion. @sigsteve
We need to add some options for people who want to get their system up to a variety of "levels". The "--full" option deploys the nodes and then deploys CaaSP, but we might also want to add options that are friendly to air-gapped or even bandwidth-restricted environments. Things like RMT, container image, or chart mirrors.
Perhaps a "--create-helm-mirror" to include an automated deployment of a helm mirror.
See other open issue for Registry mirror.
Instead of hardcoding virbr1, discover available bridge device to use.
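A minimal discovery sketch, assuming the free name is found by probing a list of existing bridges (in the real script the list would come from `ip -o link show type bridge` or `virsh net-list`; the function name is my invention):

```shell
#!/bin/bash
# Hypothetical sketch: pick the first virbrN name not already present
# in a newline-separated list of existing bridge devices.
find_free_bridge() {
  existing="$1"
  for i in $(seq 0 9); do
    if ! printf '%s\n' "$existing" | grep -qx "virbr$i"; then
      echo "virbr$i"
      return 0
    fi
  done
  return 1
}

find_free_bridge "virbr0
virbr1"
```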
Hi there,
we at B1 Systems would be very interested in having the Vagrant box you use provided as an official download, or as a Kiwi template if an official download is not possible.
Or better yet, if there was a vagrant box for SUSE CaaSP directly, that would be really awesome and would ease setting up training environments or test setups a lot.
Kind Regards,
Johannes
I think the sentence "Once you have a CaaSP cluster provisioned you can start and stop that cluster by using the cluster.sh script" would fit better right above "Usage cluster.sh [options..] [command]".
As it stands it splits up the ./deploy_caasp.sh reference and doesn't call out the ./cluster.sh portion.
Include the token in the admin.conf when the dashboard is deployed. This will let users log in to the dashboard using the config.
It would be great if the vagrant-caasp setup had a simple identity provider built-in, so we can have multiple users with different privileges.
Right now all operations are performed using cluster-admin bindings, which bypasses the whole RBAC idea.
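As a sketch of the direction this could take, a restricted user could get a namespaced, read-only binding instead of cluster-admin. All names below (namespace, user, binding) are hypothetical:

```yaml
# Hypothetical manifest: give user "alice" read-only access in one
# namespace instead of the cluster-admin binding used today.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-view
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
```

A real identity provider (e.g. Dex, which CaaSP ships) would map actual logins to such users, but even static bindings like this would exercise RBAC.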
Just stood up the nodes with deploy_caasp.sh
vagrant ssh caasp4-master-1
sudo su - sles
source /vagrant/deploy/00.prep_environment.sh
sles@caasp4-master-1:/vagrant/deploy> source 00.prep_environment.sh
Agent pid 6164
/vagrant/cluster/caasp4-id: Permission denied
sles@caasp4-master-1:/vagrant/deploy> whoami
sles
sles@caasp4-master-1:/vagrant/deploy> ls -al /vagrant/cluster/
total 16
drwxr-xr-x 2 root root 4096 Sep 10 13:07 .
drwxr-xr-x 8 vagrant vagrant 4096 Sep 10 13:07 ..
-rw------- 1 root root 1843 Sep 10 13:07 caasp4-id
-rw-r--r-- 1 root root 414 Sep 10 13:07 caasp4-id.pub
I'd like to use rook with ceph for storage and therefore need an extra disk on each worker node. Could this be easily added?
Also, these nodes have no subscriptions; an easy way to add them would be great so that we can install packages and updates.
Otherwise, great tool - love it! Thanks
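For the extra-disk request above, a vagrant-libvirt Vagrantfile fragment along these lines should work. This is a sketch, assuming it lands inside the worker definitions' provider block; the size is arbitrary:

```ruby
# Hypothetical Vagrantfile fragment: attach a blank qcow2 volume to a
# worker so rook/ceph has a raw disk to consume.
config.vm.provider :libvirt do |libvirt|
  libvirt.storage :file, size: '20G', type: 'qcow2'
end
```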
Step 04 fails now.
sles@caasp4-master-1:/vagrant/deploy> source 00.prep_environment.sh
Agent pid 12255
Identity added: /vagrant/cluster/caasp4-id ([email protected])
sles@caasp4-master-1:/vagrant/deploy> ./04.add_workers.sh
Adding workers...
++ seq 1 1
+ for NUM in $(seq 1 $NWORKERS)
+ skuba node join --role worker --user sles --sudo --target caasp4-worker-1 caasp4-worker-1
W0911 15:19:37.896159 12263 ssh.go:306]
The authenticity of host '192.168.121.130:22' can't be established.
ECDSA key fingerprint is d5:73:2c:f5:7d:b2:fd:09:95:c7:ca:0f:b8:39:90:af.
I0911 15:19:37.896243 12263 ssh.go:307] accepting SSH key for "caasp4-worker-1:22"
I0911 15:19:37.896253 12263 ssh.go:308] adding fingerprint for "caasp4-worker-1:22" to "known_hosts"
E0911 15:19:37.916194 12263 ssh.go:237] ssh authentication error: please make sure you have added to your ssh-agent a ssh key that is authorized in "caasp4-worker-1".
F0911 15:19:37.916268 12263 join.go:51] error joining node caasp4-worker-1: failed to apply state kubernetes.install-node-pattern: failed to initialize client: authentication error
+ set +x
Waiting for masters to be ready
Waiting for workers to be ready........
The syntax for helm 3 has changed and we will need to update scripts that deploy with it.
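The main changes: helm init and Tiller are gone, the release name is positional instead of --name, and the stable repo must be added explicitly. A before/after sketch (helm is stubbed with a shell function so this runs anywhere; the chart name is hypothetical):

```shell
#!/bin/bash
helm() { echo "helm $*"; }   # stub for illustration only; remove to use the real binary

# Helm v2 style, as the scripts use today (Tiller-based):
helm init
helm install --name nfs-provisioner stable/nfs-client-provisioner   # hypothetical chart

# Helm v3 equivalents: no Tiller, release name is positional, repos added explicitly:
helm repo add stable https://charts.helm.sh/stable
helm install nfs-provisioner stable/nfs-client-provisioner
```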
Enabling the swapaccount=1 boot option is a prerequisite for CAP, at least when installing with the Diego scheduler, so can this be added to the vagrant-caasp box?
Podman is needed for various docker CLI needs, including logging into private registries.
Buildah isn't supported inside the cluster, but it is needed to build images from Dockerfiles in CaaSP v4.
$ cat zypper-version
#!/bin/bash
#zypper_version=($(zypper -V))   # the real script captures "zypper 1.14.27" as an array
zypper_version='1.14.27'
if [[ ${zypper_version[1]} < '1.14.4' ]]   # lexical string comparison, not a version comparison
then
echo "<"
else
echo ">"
fi
$ ./zypper-version
<
which is wrong: 1.14.27 is newer than 1.14.4, but a lexical comparison sorts "1.14.27" before "1.14.4".
This would work, but it requires rpmdev-vercmp, provided by rpmdevtools from the openSUSE:Backports:SLE-15-SP1 repository:
[...]
zypper_version=($(rpm -q --qf 'zypper-%{V}\n' zypper))
# https://unix.stackexchange.com/questions/163702/bash-script-to-verify-that-an-rpm-is-at-least-at-a-given-version
rpmdev-vercmp "$zypper_version" 'zypper-1.14.4'
if [[ $? == 11 ]]   # 11 means the first argument (the installed zypper) is newer
then
zypper --no-gpg-checks in -y https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_x86_64.rpm
[...]
As /usr/bin/rpmdev-vercmp is a Python script, you could simply copy it to /usr/local/bin/.
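Another workaround that avoids the rpmdevtools dependency entirely is to compare with GNU `sort -V`, which understands version ordering. A minimal sketch (the function name is mine; assumes GNU coreutils, which openSUSE ships):

```shell
#!/bin/bash
# Version comparison via `sort -V`: succeeds when $1 >= $2 as versions,
# not as strings, so 1.14.27 correctly ranks above 1.14.4.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

version_ge 1.14.27 1.14.4 && echo "1.14.27 is newer or equal"
```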
Hi,
today openSUSE upgraded Vagrant to 2.2.6.
When I try to install vagrant-libvirt, this is the error:
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Traceback (most recent call last):
	18: from /usr/share/vagrant/gems/bin/vagrant:23:in `<main>'
	17: from /usr/share/vagrant/gems/bin/vagrant:23:in `load'
	16: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/bin/vagrant:166:in `<top (required)>'
	15: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/environment.rb:290:in `cli'
	14: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/cli.rb:66:in `execute'
	13: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/root.rb:66:in `execute'
	12: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:69:in `execute'
	11: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:69:in `each'
	10: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/install.rb:70:in `block in execute'
	 9: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/command/base.rb:14:in `action'
	 8: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/runner.rb:102:in `run'
	 7: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/util/busy.rb:19:in `busy'
	 6: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/runner.rb:102:in `block in run'
	 5: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/builder.rb:116:in `call'
	 4: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/warden.rb:50:in `call'
	 3: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
	 2: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/action/warden.rb:50:in `call'
	 1: from /usr/share/vagrant/gems/gems/vagrant-2.2.6/plugins/commands/plugin/action/install_gem.rb:30:in `call'
/usr/share/vagrant/gems/gems/vagrant-2.2.6/lib/vagrant/plugin/manager.rb:156:in `install_plugin': undefined method `name' for nil:NilClass (NoMethodError)
I'm getting the following error at stage 02:
sles@caasp4-master-1:/vagrant/deploy> ./00.prep_environment.sh
++ ssh-agent -s
We need the ability to shut down and bring up a cluster that has already been deployed.
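A minimal wrapper sketch (vagrant is stubbed with a shell function here so the sketch runs anywhere; a real version would need to order nodes, e.g. masters before workers):

```shell
#!/bin/bash
vagrant() { echo "vagrant $*"; }   # stub for illustration; remove to use the real CLI

cluster() {
  case "$1" in
    stop)  vagrant halt ;;                # powers off all machines in the Vagrantfile
    start) vagrant up --no-provision ;;   # brings them back without re-running provisioning
    *)     echo "usage: cluster.sh {start|stop}" >&2; return 1 ;;
  esac
}

cluster stop
```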
I have followed all the steps trying to install macOS Catalina using VirtualBox on Windows 10 64-bit. I get the following error thrown in the UEFI; the code error is as follows:
UEFI Interactive Shell v21.2
EDK II
UEFI v2.70 (EDK II, 0x000100000)
Mapping table
BLK0: Alias(s):
PciRoot(0x0)/Pci(0xC,0x0)/USB(0x8,0x0)
BLK1: Alias(s):
PciRoot(0x0)/Pci(0xD,0x0)/Sata(0x0,0xFFFF,0x0)
Press ESC in 1 seconds to skip startup.nsh, or any other key to continue.
Shell> _
Any help would be appreciated.
I'm getting a ton of errors during deployment about the skuba, kubectl, and helm commands not being found. I'm wondering if I'm using the correct base box. Can you please provide the md5sum for the base box here so I can validate?
Thanks!
Currently skuba's default log level is used, which does not give much information if you hit a problem. It would be helpful to be able to pass through a log level (-v [0-10] is currently supported) without having to edit each script in the deploy/ directory.
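One way to do this without editing every script is an optional environment variable that expands to the flag only when set. A sketch (the variable name is my invention; skuba is stubbed with a shell function so the sketch runs anywhere):

```shell
#!/bin/bash
skuba() { echo "skuba $*"; }   # stub for illustration; the real binary takes -v [0-10]

# If SKUBA_VERBOSITY is set, ${VAR:+...} expands to "-v <level>"; otherwise to nothing,
# so unmodified runs keep today's default log level.
SKUBA_VERBOSITY=8
skuba ${SKUBA_VERBOSITY:+-v $SKUBA_VERBOSITY} cluster status
```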
Bootstrapping cluster...
+ skuba -v8 node bootstrap --user sles --sudo --target caasp4-master-1 caasp4-master-1
I0910 16:18:43.737952 2615 config.go:38] loading configuration from "kubeadm-init.conf"
I0910 16:18:43.740529 2615 states.go:35] === applying state kubernetes.install-node-pattern ===
W0910 16:18:43.803161 2615 ssh.go:306]
The authenticity of host '127.0.0.1:22' can't be established.
ECDSA key fingerprint is 33:8c:92:15:f5:63:7e:51:d4:58:45:67:22:b3:84:bc.
I0910 16:18:43.803276 2615 ssh.go:307] accepting SSH key for "caasp4-master-1:22"
I0910 16:18:43.803377 2615 ssh.go:308] adding fingerprint for "caasp4-master-1:22" to "known_hosts"
E0910 16:18:43.818124 2615 ssh.go:237] ssh authentication error: please make sure you have added to your ssh-agent a ssh key that is authorized in "caasp4-master-1".
F0910 16:18:43.818191 2615 bootstrap.go:49] error bootstraping node: failed to apply state kubernetes.install-node-pattern: failed to initialize client: authentication error
+ skuba cluster status
E0910 16:18:43.877490 2625 status.go:34] unable to get cluster status: unable to get admin client set: could not load admin kubeconfig file: failed to load admin kubeconfig: open admin.conf: no such file or directory
+ set +x
mkdir: cannot create directory ‘/home/sles/.kube’: File exists
+ kubectl get nodes -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ set +x
sles@caasp4-master-1:/vagrant/deploy> ssh-add -l
2048 SHA256:Azz4dBfdtn7Lan0EWGNDvEEyZdgRxlMzL5HgyPq+8uM sles@caasp4-master-1 (RSA)
After manually adding the missing public key to the worker node, I was able to run 05.setup_helm.sh; I then tried 06.add_k8s_nfs-sc.sh, which failed:
sles@caasp4-master-1:/vagrant/deploy> ./05.setup_helm.sh
Setting up helm...
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Creating /home/sles/.helm
Creating /home/sles/.helm/repository
Creating /home/sles/.helm/repository/cache
Creating /home/sles/.helm/repository/local
Creating /home/sles/.helm/plugins
Creating /home/sles/.helm/starters
Creating /home/sles/.helm/cache/archive
Creating /home/sles/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/sles/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
deployment.extensions/tiller-deploy patched
sles@caasp4-master-1:/vagrant/deploy> ./06.add_k8s_nfs-sc.sh
Adding NFS storage class...
Error: could not find a ready tiller pod
Opening this per our recent internal discussion. @sigsteve
We need to add some options for people who want to get their system up to a variety of "levels". The "--full" option deploys the nodes and then deploys CaaSP, but we might also want to add options that are friendly to air-gapped or even bandwidth-restricted environments.
Perhaps a "--create-registry-mirror" to include an automated deployment of a registry-mirror.
See other open issue for Helm mirror.
Explore alternate vagrant-libvirt network types, such as bridge, hostdev, passthrough, etc.
The documentation lacks instructions that firewalld needs to accept NFS connections (or that firewalld should be disabled).
Cheers
This needs to be added so podman will work as a non-root user (rootless, in the docs):
usermod -v 10000000-20000000 -w 10000000-20000000 sles
We should validate that the deployment requirements can be met on the host before attempting to deploy.