plone / ansible-playbook
An Ansible playbook for automated deployment of full-stack Plone servers.
nginx defaults to a maximum upload size of 1 megabyte, which is a bit low for uploading documents to a Plone site. We might want to set it to something higher.
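For reference, the nginx directive involved is client_max_body_size, whose stock default is 1m; a sketch of a higher limit (the 100m value and placement are illustrative, not the playbook's actual template):

```
# illustrative nginx fragment; 100m is an example value
server {
    client_max_body_size 100m;
}
```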
Would it be better to pin Ansible? That would make sure the playbooks keep working; Ansible sometimes introduces changes to the syntax, and this could lead to breakage.
It's not clear what can be tested under Testing with Vagrant. It would be good to set the user expectation so they don't spend time trying to get something to work that will not, and to improve documentation for things that do work, including links to settings in this repo and Plone docs.
The running of the playbook itself is one test.
It took a while to figure out how to SSH in. I got some settings from vbox_host.cfg. Eventually I figured out that this works:

```shell
ssh -i /path/to/virtualbox/private_key [email protected] -p 2222
```
I tried all the suggested ports over HTTP:
- http://127.0.0.1:1080 (web server/virtual hosting; links and subrequests to assets (JS, CSS, images) won't work. If there is a known fix or workaround, it would be good to document it.)
- http://127.0.0.1:2080 (haproxy stats page?)
- http://127.0.0.1:7081 ("Plone is up and running" page)
- http://127.0.0.1:9080 (webdav?)
- http://127.0.0.1:5949 (munin?)
Is there anything that can be tested, and if so through what method?
I volunteer to do any documentation for this.
I've wanted to run up a private demo, so I found I could add auth_basic to the nginx config with the following configuration:
```yaml
webserver_virtualhosts:
  - hostname: "demo1.mycompany.com"
    default_server: yes
    extra: |
      auth_basic "Private Demo - please enter your credentials";
      auth_basic_user_file /etc/nginx/htpasswd;
```
Unfortunately, this doesn't work unless I also comment out this line in the nginx role's host template. I notice from the history of that file that item.extra used to be added after that line. Can we revert to that ordering?
It would be nice if I could supply a custom buildout directory as a local path on the Ansible control machine. buildout.cfg etc. have to live in the repository root, unless I am missing something.

==> trusty: Running provisioner: ansible...
trusty: Running ansible-playbook...
The Ansible software could not be found! Please verify
that Ansible is correctly installed on your host system.
If you haven't installed Ansible yet, please install Ansible
on your host system. Vagrant can't do this for you in a safe and
automated way.
Please check https://docs.ansible.com for more information.
In case you think this might be a useful feature...
To host a Plone site from a domain folder, e.g. https://demo.plone.io/backend, I added these commits in the plone.io branch (https://github.com/plone/ansible-playbook/tree/plone.io). Then all I had to do was specify location_subfolder: backend.
Currently, deploying to an SELinux-enabled server works, but you end up with a non-working installation: nginx can't connect to Varnish, and haproxy can't connect to the Plone instances, so you only get 50x responses from nginx.
This is while being test-deployed in Vagrant; the MOTD shows the below (some details changed for confidentiality).
```
Custom Services/Ports

aclient: /usr/local/plone-5.0/aclient
    /AClient: demo1.jowettenterprises.com []
    zeo server: 127.0.0.1:8100
    zeo clients: 127.0.0.1:8081

bclient: /usr/local/plone-5.0/aclient
    /Plone: demo2.jowettenterprises.com []
    zeo server: 127.0.0.1:7100
    zeo clients: 127.0.0.1:8081

varnish: 127.0.0.1:6081
varnish admin: 127.0.0.1:6082
postfix: 25 (host-only)
```
The issues with the above: the bclient entry points at the aclient path, and both entries list the same zeo client port despite different zeo servers. The Plone servers themselves seem to be working correctly, so I believe this is just an issue with the MOTD.
Would it be interesting to allow users to compile their own python?
I created such an ansible role just for that: https://galaxy.ansible.com/gforcada/compile-python/
Since version_compare was changed to version in several templates (nginx/tasks/main.yml, plone.plone_server/templates/buildout.cfg.j2), ansible-playbook is no longer compatible with Ansible < 2.5.
So this:

```yaml
fail: msg="We need Ansible >= 2.0. Please update your kit. 'pip install -U Ansible'"
when: ansible_version.major < 2
```

could be updated, IMO.
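For comparison, a gate matching the new minimum could use the version test that replaced version_compare in Ansible 2.5; a sketch (the message wording here is mine, not the playbook's):

```yaml
# illustrative replacement task, not the playbook's actual text
- fail:
    msg: "We need Ansible >= 2.5. Please update your kit: 'pip install -U ansible'"
  when: ansible_version.full is version('2.5', '<')
```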
When running ansible-playbook version 1.3.7 with ansible 2.8.5 and vagrant 2.2.5 on MacOS 10.13.6 you may get the error:
```
[WARNING]: - plone.plone_server was NOT installed successfully: - command git checkout 1.3.7 failed in directory /tmp/tmpsvtPtq (rc=1)
```
For me, changing the version for plone.plone_server in requirements.yml:

```yaml
- src: https://github.com/plone/ansible.plone_server.git
  version: 1.3.6
  name: plone.plone_server
```

did the job, since the official version in https://github.com/plone/ansible.plone_server.git is currently still 1.3.6.
There is a problem with haproxy health check configuration after haproxy update on Centos 7:
The new haproxy package requires an additional "tcp-check connect" directive before "tcp-check send quit\n". Without the "connect" line, health checks no longer work.
File that needs to be updated:
ansible-playbook/roles/haproxy/templates/haproxy.cfg.j2
Lines (the line marked with + is the addition):

```diff
 {% if haproxy_version >= '1.5.0' and front_end.plone_client_tcpcheck|default(plone_client_tcpcheck|default(false)) %}
 option tcp-check
+tcp-check connect
 tcp-check send quit\n
 {% else %}
```
Can anyone update the package? I am not familiar enough with github.
On CentOS 7, the default virtualenv does not include three pip-installable packages (six, appdirs, and packaging) that are installed by default when creating a new virtualenv on other versions of Ubuntu and CentOS.
Running:

```shell
sudo -u plone_buildout /usr/local/plone-5.0/zeoserver/bin/pip freeze
```

shows these packages are missing on the default CentOS 7 installation.
Output example:

```
wsgiref==0.1.2
```
Manual fix: log into the server and run:

```shell
sudo -u plone_buildout /usr/local/plone-5.0/zeoserver/bin/pip install six
sudo -u plone_buildout /usr/local/plone-5.0/zeoserver/bin/pip install appdirs
sudo -u plone_buildout /usr/local/plone-5.0/zeoserver/bin/pip install packaging
```

Check your work:

```shell
sudo -u plone_buildout /usr/local/plone-5.0/zeoserver/bin/pip freeze
```
Output example after the fix (and a successful playbook rerun):

```
appdirs==1.4.3
packaging==16.8
pyparsing==2.2.0
six==1.10.0
wsgiref==0.1.2
zc.buildout==2.9.3
```
Similar / same? issue: pypa/setuptools#942
Currently the Makefile uses virtualenv-2.7. It is not unusual for virtualenv-2.7 to be installed simply as virtualenv, so the current Makefile would force such an end user to create a symlink. Instead we should consider using:

```shell
virtualenv -p python2.7
```
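A minimal sketch of how the Makefile could select the interpreter explicitly (the variable and target names here are illustrative, not the repo's actual ones):

```make
# hypothetical Makefile fragment; PYTHON and VENV are made-up names
PYTHON ?= python2.7
VENV   ?= .venv

$(VENV)/bin/pip:
	virtualenv -p $(PYTHON) $(VENV)
```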
Would be nice to have, in my opinion. Are there any plans / ideas for it yet?
I have an alias that lives in my hosts file, something like this:

```
canary ansible_port=5555 ansible_host=192.0.2.50
```
After 5794ac4 logwatch no longer sends mail. The mail.log is as follows:
```
Sep 21 15:42:00 canary postfix/pickup[13509]: A7B1A4E28E: uid=0 from=<root>
Sep 21 15:42:00 canary postfix/cleanup[14582]: A7B1A4E28E: message-id=<20160921154200.A7B1A4E28E@canary>
Sep 21 15:42:00 canary postfix/qmgr[1358]: A7B1A4E28E: from=<root@canary>, size=3930, nrcpt=1 (queue active)
Sep 21 15:42:00 canary postfix/smtp[14584]: A7B1A4E28E: to=<root@canary>, orig_to=<root>, relay=none, delay=1.2, delays=1.1/0.02/0.09/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=canary type=AAAA: Host not found)
Sep 21 15:42:00 canary postfix/cleanup[14582]: C8E3A4E2B9: message-id=<20160921154200.C8E3A4E2B9@canary>
Sep 21 15:42:00 canary postfix/qmgr[1358]: C8E3A4E2B9: from=<>, size=5719, nrcpt=1 (queue active)
Sep 21 15:42:00 canary postfix/bounce[14597]: A7B1A4E28E: sender non-delivery notification: C8E3A4E2B9
Sep 21 15:42:00 canary postfix/qmgr[1358]: A7B1A4E28E: removed
Sep 21 15:42:00 canary postfix/smtp[14584]: C8E3A4E2B9: to=<root@canary>, relay=none, delay=0.07, delays=0.01/0/0.06/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=canary type=AAAA: Host not found)
Sep 21 15:42:00 canary postfix/qmgr[1358]: C8E3A4E2B9: removed
```
There's a lot I don't understand about postfix, but adding the alias back into that line "fixes it" for me.
When the file local-configure.yml doesn't define a plone_instance_name, an error occurs during template processing.
host os: debian jessie.
guest os: ubuntu xenial (also on debian jessie)
ansible --version:

```
ansible 2.3.1.0
  config file =
  configured module search path = Default w/o overrides
  python version = 2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2]
```
```
TASK [restart_script : Create restart-if-hot script] ***************************
failed: [xenial] (item={'webserver_virtualhosts': [{u'hostname': u'xenial', u'zodb_path': u'/Plone', u'aliases': [u'default']}]}) => {"failed": true, "item": {"webserver_virtualhosts": [{"aliases": ["default"], "hostname": "xenial", "zodb_path": "/Plone"}]}, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'plone_instance_name'"}
```
I noticed this because I'm muddling through how to use Let's Encrypt for secure certificates and auto-renewal in a cron job together with this project, as mentioned in #61.
In the documentation, from the note under Virtual hosting setup:
If you are setting up an https server, you must supply certificate and key files. The files will be copied from your local machine (the one containing the playbook) to the target server.
Then in the next section, Certificates:
To use files that already exist on the controlled server, use:
I think these two items are inconsistent. If you agree, I volunteer to provide the text in a PR.
```
Bringing machine 'trusty' up with 'virtualbox' provider...
==> trusty: Checking if box 'ubuntu/trusty64' is up to date...
==> trusty: There was a problem while downloading the metadata for your box
==> trusty: to check for updates. This is not an error, since it is usually due
==> trusty: to temporary network problems. This is just a warning. The problem
==> trusty: encountered was:
==> trusty:
==> trusty: The requested URL returned error: 404 Not Found
==> trusty:
==> trusty: If you want to check for box updates, verify your network connection
==> trusty: is valid and try again.
==> trusty: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> trusty: flag to force provisioning. Provisioners marked to run always will still run.
```
Ansible reports the following error: "Failed to create bus connection". This seems to be due to dbus not being installed; see ansible/ansible#25543.

The workaround is to install dbus on the target server:

```shell
apt-get install dbus
```

The solution would be to add dbus as a dependency.
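As a sketch of that fix, the dependency could be expressed as a task like this; where in the playbook's roles it belongs is my guess, not established:

```yaml
# hypothetical task; exact role/file placement still to be decided
- name: Ensure dbus is installed
  apt:
    name: dbus
    state: present
  when: ansible_os_family == 'Debian'
```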
It took me some time to find the docs about the custom buildout git configuration. I think it would be good to have a link to the docs in custom_playbook.rst. I could make a merge request, but I didn't figure out how to cross-reference the docs. I think it could be something like this:
If you choose the git repository strategy, your buildout skeleton must, at a minimum, include bootstrap.py and buildout.cfg files. It will also commonly contain a src/ subdirectory and extra configuration files. It will probably not contain bin/, var/ or parts/ directories; those will typically be excluded in your .gitignore file.

To do this, you might have to configure the :ref:`plone_buildout_git_repo <plone_buildout_git_repo>` variable.
If you use a buildout directory checkout, you must still specify in your playbook variables the names and listening port numbers of any client parts you wish included in the load balancer configuration. Also specify the name of your ZEO server part if it is not zeoserver.
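For illustration, a checkout-based configuration might look like this; plone_buildout_git_repo is the variable named above, while the version key is my assumption and should be checked against the role docs:

```yaml
playbook_plones:
  - plone_instance_name: primary
    # variable referenced above, pointing at the buildout skeleton repository
    plone_buildout_git_repo: https://github.com/example/my-buildout.git
    # assumed key for pinning a branch or tag; verify the actual name
    plone_buildout_git_version: master
```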
Similar to issue #94.
When the file local-configure.yml doesn't define a plone_instance_name, an error occurs during template processing.
host os: debian jessie.
guest os: ubuntu xenial (also on debian jessie)
ansible --version:

```
ansible 2.3.1.0
  config file =
  configured module search path = Default w/o overrides
  python version = 2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2]
```
```
TASK [audit : set_fact] ********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'plone_instance_name'
fatal: [xenial]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""}
```
Changes in the workings of the ansible_selinux fact cause selinux errors on installs lacking selinux.
Ubuntu 16.04 requires Python 2 installed for this playbook to run.
On Digital Ocean, I did the following:
```
(ansible_work) Steves-iMac:ansible-playbook stevepiercy$ ansible XXX.stevepiercy.us -a "whoami"
XXX.stevepiercy.us | FAILED! => {
    "changed": false,
    "failed": true,
    "module_stderr": "Shared connection to ###.###.###.### closed.\r\n",
    "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
    "msg": "MODULE FAILURE",
    "rc": 0
}
```
A quick search led me to SO posts a-plenty, but I couldn't decide which looked best for automation. Instead I just installed Python 2 via SSH with sudo apt-get install python-simplejson. Now the command works:
```
(ansible_work) Steves-iMac:ansible-playbook stevepiercy$ ansible XXX.stevepiercy.us -a "whoami"
XXX.stevepiercy.us | SUCCESS | rc=0 >>
spiercy
```
I hang my head in shame for not automating gooder.
I also saw an issue in the tracker that mentioned this, and it seems that it got overlooked in the subsequent PRs.
#16
#17
#43
The postfix configuration includes {{ inventory_hostname }} in the mydestination setting. This means that postfix considers itself to be the intended recipient of mail at the domain being hosted via HTTP, even though the domain's MX record probably points elsewhere. Is there a reason this is included in mydestination?
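If dropping it is acceptable, the rendered main.cf would end up along these lines (a sketch of standard postfix settings, not the playbook's actual template output):

```
# hypothetical rendered main.cf fragment, without the hosted domain
mydestination = localhost.localdomain, localhost
```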
Hello,

Provisioning with Ansible (version 2.2.0 from the Fedora 25 repository) and the latest stable version of your playbook (version 1.2.10) fails when running vagrant up centos7 with this local-configure.yml:
```yaml
install_loadbalancer: yes
install_proxycache: yes
install_muninnode: no
install_mailserver: no
install_webserver: yes
install_logwatch: no
install_fail2ban: no

admin_email: "[email protected]"
timezone: "Europe/Brussels\n"

playbook_plones:
  - plone_instance_name: xxxx4310
    plone_target_path: "/usr/local/plone"
    plone_use_supervisor: yes
    plone_initial_password: xxxxx
    plone_version: '4.3.10'
    plone_create_site: no
    plone_client_count: 4
    plone_zserver_threads: 2
    plone_zodb_cache_size: 50000
    plone_client_max_memory: 1000MB
    plone_backup_at:
      minute: 30
      hour: 1
      weekday: "*"
    plone_pack_at:
      minute: 30
      hour: 3
      weekday: 7
    plone_keep_days: 7
    plone_keep_backups: 3
    plone_autorun_buildout: no
    webserver_virtualhosts:
      - hostname: xxxxxxxxxx.be
        port: 80
        protocol: http
        extra: return 301 https://$server_name$request_uri;
      - hostname: xxxxxxxxxx.be
        port: 443
        default_server: yes
        zodb_path: /Plone
        protocol: https
        certificate_file: private/plone.crt
        key_file: private/plone.key
```
There is a problem with the installation of wv from repoforge.org: the package is no longer online.

```
TASK [plone.plone_server : CentOS 7: Ensure wv installed] **********************
fatal: [centos7]: FAILED! => {"changed": false, "failed": true, "msg": "Failure downloading http://pkgs.repoforge.org/wv/wv-1.2.4-1.el6.rf.x86_64.rpm, 'NoneType' object has no attribute 'read'"}
```

Even the website hosting this repo no longer exists. A better idea could be to use the package from EPEL for el6 (https://dl.fedoraproject.org/pub/epel/6/x86_64/); it's probably more stable than repoforge.org, but I don't know Plone well enough yet to test this.
Best regards,
J.
This may open a can of worms, but I was wondering if there is any interest in supporting other platforms, to wit CentOS. If so, I can make a pull request of a branch I have with a few changes that make the playbook work on CentOS 7. It was pretty easy, for the most part it was just a matter of using yum instead of apt. A couple of roles will not work, and compatible replacements will be needed, such as fail2ban and unattended-upgrades.
DebOps is a fine collection of Ansible roles, designed with extensibility and cooperation in mind. If you based this playbook on DebOps, you'd save yourself a lot of effort; reusing these roles is easy.

DebOps already contains roles for setting up nginx, a firewall (ferm), postfix, and monit.

Note: I'm not involved in DebOps, but I'm convinced this is the way to go. When I started looking for Ansible recipes/roles, I found a lot of them had poor quality or were insufficiently modular. DebOps seems to be the first fine collection of Ansible roles I've found.
```
TASK [docker : install docker-py] **********************************************
ok: [127.0.0.1]

TASK [docker : service on] *****************************************************
ok: [127.0.0.1]

TASK [docker : Create docker_home directory] ***********************************
ok: [127.0.0.1]

TASK [docker-jenkins : Create /docker/docker_jenkins directory] ****************
ok: [127.0.0.1]

TASK [docker-jenkins : Copy files to server] ***********************************
ok: [127.0.0.1]

TASK [docker-jenkins : Copy templates to server] *******************************
ok: [127.0.0.1] => (item=Dockerfile)
ok: [127.0.0.1] => (item=security.groovy)

TASK [docker-jenkins : Build docker-jenkins image] *****************************
ok: [127.0.0.1]

TASK [docker-jenkins : Create docker-jenkins container] ************************
fatal: [127.0.0.1]: FAILED! => {
    "changed": false,
    "module_stderr": "Shared connection to 127.0.0.1 closed.",
    "module_stdout": "Traceback (most recent call last):
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 2081, in <module>
            main()
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 2076, in main
            cm = ContainerManager(client)
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 1703, in __init__
            self.present(state)
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 1723, in present
            new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 825, in create_parameters
            host_config=self._host_config(),
        File "/tmp/ansible_PjsG9O/ansible_module_docker_container.py", line 931, in _host_config
            return self.client.create_host_config(**params)
        File "/usr/lib/python2.7/site-packages/docker/api/container.py", line 157, in create_host_config
            return utils.create_host_config(*args, **kwargs)
    TypeError: create_host_config() got an unexpected keyword argument 'init'",
    "msg": "MODULE FAILURE",
    "rc": 1
}
```
It would be nice to score an SSL Labs A+.
First of all, thank you for this phenomenal project!
I was able to run the playbook to install Plone 5.0.7 on Ubuntu Xenial 16.04 LTS in Vagrant on macOS Sierra. 🎉
I noticed a couple of possible updates, but I wanted to check with you before submitting a PR.
Reading through the README and the docs, it is easy to get confused :)
In the docs we have:
At the moment, while the environment with the fullest support for the target server is Debian/Ubuntu, some initial support is available for CentOS.
Would it be an idea to clarify that a bit? I can't tell from it how far CentOS is supported or not. What should I do? I have a CentOS server and would like to use it, but I just don't know :)
In lines 51–53:

```jinja
{% if instance_config.plone_version | version_compare('5.0.6', '<=') %}
Products.PloneHotfix20160830
{% endif %}
```
I'm not sure how to update this. There is a newer hotfix for 5.0.6, but I don't know whether to stack the hotfixes or replace them. Also, should there be a stanza for 5.0.7? 5.0.7 includes the hotfix, so I don't think so.
Hello,

I want to add some settings in haproxy.cfg dedicated to the way we balance users across the backend servers. A dev told me about the appsession parameter (http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4-appsession), because for him the session data isn't shared between the ZEO clients (running on the same server). I don't know whether that's true; I can't find information about how Plone stores session data (https://docs.plone.org/develop/plone/sessions/index.html). But adding this line to the listen section solves our problem:
appsession __ac len 32 timeout 3h
So our listen section looks like this:

```
listen Plone
  bind *:8080
  appsession __ac len 32 timeout 3h
  server client1 127.0.0.1:8081 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client2 127.0.0.1:8082 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client3 127.0.0.1:8083 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client4 127.0.0.1:8084 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client5 127.0.0.1:8085 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client6 127.0.0.1:8086 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client7 127.0.0.1:8087 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client8 127.0.0.1:8088 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client9 127.0.0.1:8089 check maxconn 1 inter 10000 downinter 2000 rise 1
  server client10 127.0.0.1:8090 check maxconn 1 inter 10000 downinter 2000 rise 1
```
Could you tell me: do you think it's a good idea to add an extra parameter/variable as a new way to customize the haproxy configuration?
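For what it's worth, the playbook's multiserver sample output elsewhere in this tracker mentions a per-instance loadbalancer_listen_extra value; if that variable injects raw lines into the listen section, something like this might already suffice without a new parameter (an assumption I have not verified):

```yaml
# assuming loadbalancer_listen_extra appends raw lines to the listen block
playbook_plones:
  - plone_instance_name: primary
    loadbalancer_listen_extra: |
      appsession __ac len 32 timeout 3h
```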
Best regards,
L00ptr
What's the best way to use this playbook to deploy multiple sites? I know I can have a separate fork checked out for each site, but that feels hard to maintain. Is there a way I can have my site-specific configuration in a .yml file in the site's repository and have it say "run the playbook from plone/ansible-playbook (or a central installation of it on my computer) with this configuration"?
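One pattern worth considering, sketched under the assumption that the playbook's entry point is playbook.yml and that a site's whole configuration can be passed as extra vars:

```shell
# hypothetical invocation: the site repo keeps site-config.yml,
# while the playbook lives in one shared checkout
ansible-playbook -i myserver.example.com, \
  ~/src/ansible-playbook/playbook.yml \
  -e @site-config.yml
```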
I am testing the ansible-playbook STABLE branch, version 1.3.1 (2018-05-07), with a local config based on a local-configure.yml copy of the [1] sample-multiserver.yml example.

[1] https://github.com/plone/ansible-playbook/blob/STABLE/sample-multiserver.yml
Testing fails in Vagrant with the bionic beaver VM in Virtualbox 5.2.8 r121009 with Ansible 2.5.5 on Mac OSX 10.11.6.
I changed the name of the VM in the Vagrantfile by adding, after config.vm.provider "virtualbox" do |v|:

```ruby
v.customize ["modifyvm", :id, "--name", "Ansible-Plone"]
```
I kept both the Plone 5 and Plone 4 examples in the local-configure.yml file, after adding the global values for admin_email and plone_initial_password, and deactivating the muninnode role with install_muninnode: no as described.
Then the version of the primary Plone was changed to '5.1.2'
For completeness: the first provisioning attempt failed after the initial vagrant up with the error "You must set the admin_email variable.", but the variable was set correctly, without a typo, in the local-configure.yml file.
After a second try using vagrant provision, the Ansible provisioning stops with the error:

"msg": "Destination directory /usr/local/plone-5.1/secondary/scripts does not exist"

Full error text:
TASK [restart_script : Create restart script] **********************************
changed: [bionic] => (item={u'loadbalancer_options': u'maxconn 1 inter 60000 downinter 2000 rise 1 fall 2 on-error mark-down error-limit 15', u'plone_zeo_port': 8100, u'loadbalancer_port': 8080, u'webserver_virtualhosts': [{u'hostname': u'bionic', u'zodb_path': u'/Plone', u'aliases': [u'default']}, {u'protocol': u'https', u'hostname': u'bionic', u'zodb_path': u'/Plone', u'certificate_file': u'tests/snakeoil.pem', u'key_file': u'tests/snakeoil.pem', u'aliases': [u'default']}], u'plone_instance_name': u'primary', u'plone_client_base_port': 8081, u'loadbalancer_listen_extra': u'timeout connect 30s # longer timeout for primary'})
failed: [bionic] (item={u'plone_buildout_cfg': u'buildout.cfg', u'plone_hot_monitor': u'cron', u'loadbalancer_healthcheck': False, u'plone_zeo_port': 7100, u'webserver_virtualhosts': [{u'hostname': u'localhost', u'location_extra': u'# test comment here; added via location_extra', u'zodb_path': u'/Plone', u'extra': u'# test comment here; added via extra'}, {u'hostname': u'test.example.com', u'zodb_path': u'/Plone'}], u'loadbalancer_port': 7080, u'plone_version': u'4.3.17', u'plone_client_base_port': 7081, u'plone_instance_name': u'secondary'}) => {"changed": false, "checksum": "1d71e639adf8c685116a9a41c101e972a33edda3", "item": {"loadbalancer_healthcheck": false, "loadbalancer_port": 7080, "plone_buildout_cfg": "buildout.cfg", "plone_client_base_port": 7081, "plone_hot_monitor": "cron", "plone_instance_name": "secondary", "plone_version": "4.3.17", "plone_zeo_port": 7100, "webserver_virtualhosts": [{"extra": "# test comment here; added via extra", "hostname": "localhost", "location_extra": "# test comment here; added via location_extra", "zodb_path": "/Plone"}, {"hostname": "test.example.com", "zodb_path": "/Plone"}]}, "msg": "Destination directory /usr/local/plone-5.1/secondary/scripts does not exist"}
After using vagrant ssh to inspect the VM, I found that the following directories actually exist:

/usr/local/plone-5.1/primary/scripts
/usr/local/plone-4.3/secondary/scripts

The directory named in the error, /usr/local/plone-5.1/secondary/scripts, makes no sense for this example setup and was missing.
There must be a bug or option missing in the role creating the restart stuff.
I need to dig deeper to find it out myself.
I try to override a variable within a host_vars file, inventory/host_vars/plone.yml:

```yaml
plone_target_path: /home/webmaster/buildouts/
```

Still the target stays at /usr/local/plone-*. I need to do it like this instead, in inventory/host_vars/plone.yml:

```yaml
instances:
  - plone_target_path: /home/webmaster/buildouts/
```
and in playbooks/plone.yml:

```yaml
---
- name: Manage Plone
  hosts: ['plone_server']
  gather_facts: True
  become: True
  roles:
    - role: ansible.plone_server
      tags: ['role::plone_server']
      plone_config: "{{ instances[0] }}"
```
Now it works.
There are a few issues when deploying to Ubuntu 15.04:

/etc/fail2ban/jail.d/defaults-debian.conf ships with just a single section for [sshd], containing a single line, enabled = true, and that causes fail2ban to break.

The first issue can be solved with a -m raw command before doing the deployment. For the last one, I don't really have a solution, other than commenting out (or deleting) the whole file and re-deploying again.
I have a branch with my suggested changes, and I'll open a PR. I'm not sure how different OS versions should be dealt with.
We should increase fail2ban's default lockout time.

In the daily fail2ban reports for our servers I am seeing a lot of repeat doorknob jiggling. On my own servers, where I greatly increased the fail2ban lockout time, I see a much smaller number of failed attempts.
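For context, fail2ban's stock bantime is only 600 seconds; the usual knob is an override in jail.local. The values below are illustrative, and whether this playbook exposes a variable for them is an open question:

```
# /etc/fail2ban/jail.local (illustrative values)
[DEFAULT]
bantime  = 86400   ; one day instead of the 600-second default
findtime = 600
maxretry = 5
```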
The logwatch task fails on Debian jessie because update-notifier-common was removed in this Debian version.
jnv/ansible-role-unattended-upgrades#6 (comment)
```
failed: [jessie] (item=[u'logwatch', u'update-notifier-common']) => {
    "failed": true,
    "invocation": {
        "module_args": {
            "allow_unauthenticated": false,
            "autoremove": null,
            "cache_valid_time": 0,
            "deb": null,
            "default_release": null,
            "dpkg_options": "force-confdef,force-confold",
            "force": false,
            "install_recommends": null,
            "name": [
                "logwatch",
                "update-notifier-common"
            ],
            "only_upgrade": false,
            "package": [
                "logwatch",
                "update-notifier-common"
            ],
            "purge": false,
            "state": "present",
            "update_cache": null,
            "upgrade": null
        }
    },
    "item": [
        "logwatch",
        "update-notifier-common"
    ],
    "msg": "No package matching 'update-notifier-common' is available"
}
```
I can't figure out how to connect to client_reserved, which listens on port 8083.
I tried with an ssh tunnel and without.
Could you please add an example to the documentation?
Thanks
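In case it helps the docs, this is the kind of tunnel I would expect to work; the user and host are placeholders, and I'm assuming client_reserved binds only to the server's loopback:

```shell
# forward local port 8083 to 127.0.0.1:8083 on the server
ssh -L 8083:127.0.0.1:8083 user@your-server
# then browse http://127.0.0.1:8083 on the local machine
```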
Full failure report on the prompt for the plone-5.2/primary:
```
...
TASK [plone.plone_server : Superlance installed] *******************************
fatal: [bionic]: FAILED! => {"changed": false, "msg": "Unable to find any of pip3 to use. pip needs to be installed."}
...
```
You can quickly fix that after the initial vagrant up by installing pip3 in the VM and then reprovisioning. While the VM is still up after vagrant up:

```shell
vagrant ssh                    # enter the VM
sudo apt install python3-pip   # install pip3 by hand (only for testing)
```

Now reprovision:

```shell
vagrant provision
```
To permanently include the pip3 command you need to enhance the playbook. I'll come back if I fix it, or someone else can jump in to choose the proper location to put the code in. It must be something similar to https://relativkreativ.at/articles/how-to-install-python-with-ansible.
For now I have no idea where the pip3 requirement comes from in the current playbook version 1.3.7 since
I set up a server using the default mailserver settings, but get this in /var/log/mail.log:
```
Mar 19 16:59:41 postfix/master[1418]: warning: process /usr/lib/postfix/smtpd pid 15705 exit status 1
Mar 19 16:59:41 postfix/master[1418]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling
Mar 19 17:00:41 postfix/smtpd[15777]: fatal: in parameter smtpd_relay_restrictions or smtpd_recipient_restrictions, specify at least one working instance of: reject_unauth_destination, defer_unauth_destination, reject, defer, defer_if_permit or check_relay_domains
```
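The log message itself lists the acceptable fixes; a minimal main.cf fragment satisfying it would be something like this (standard postfix configuration, not taken from the playbook's template):

```
# refuse to relay for unauthenticated, non-local destinations
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
```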
I have modified the plone_zeo_port setting, and this is reflected correctly in parts/zeoserver/etc/zeo.conf. However, parts/client?/etc/zope.conf always shows the following:
```
<zodb_db main>
    # Main database
    cache-size 30000
    # Blob-enabled ZEOStorage database
    <zeoclient>
      read-only false
      read-only-fallback false
      blob-dir /var/local/plone-4.3/cbt/blobstorage
      shared-blob-dir on
      server 8100
      storage 1
      name zeostorage
      var /usr/local/plone-4.3/cbt/parts/client3/var
      cache-size 128MB
    </zeoclient>
    mount-point /
</zodb_db>
```