
Galaxy

An Ansible role for installing and managing Galaxy servers. Despite the similar name, this role bears no relation to Ansible Galaxy.

Getting started with this module? Check out our Tutorial

Requirements

This role has the same dependencies as the git module. In addition, pip and Python virtualenv are required. These can easily be installed via a pre-task in the same play as this role:

- hosts: galaxyservers
  pre_tasks:
    - name: Install Dependencies
      apt:
        name: "{{ item }}"
      become: yes
      when: ansible_os_family == 'Debian'
      with_items:
        - git
        - python-pip
        - python-virtualenv
    - name: Install Dependencies
      yum:
        name: "{{ item }}"
      become: yes
      when: ansible_os_family == 'RedHat'
      with_items:
        - git
        - python-virtualenv
  roles:
    - galaxyproject.galaxy

If your git executable is not on $PATH, you can specify its location with the git_executable variable. Likewise with the virtualenv executable and corresponding galaxy_virtualenv_command variable.
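For example (the paths below are illustrative, not defaults):

```yaml
# Only needed when the executables are not on $PATH.
git_executable: /opt/git/bin/git                        # illustrative path
galaxy_virtualenv_command: /opt/python/bin/virtualenv   # illustrative path
```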

Role Variables

Not all variables are listed or explained in detail. For additional information about less commonly used variables, see the defaults file.

Many variables control where specific files are placed and where Galaxy writes data. In order to simplify configuration, you may select a layout with the galaxy_layout variable. Which layout you choose affects the required variables.

Required variables

If using any layout other than root-dir:

  • galaxy_server_dir: Filesystem path where the Galaxy server code will be installed (cloned).

If using root-dir:

  • galaxy_root: Filesystem path of the root of a Galaxy deployment; the Galaxy server code will be installed into a subdirectory of this directory.
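A minimal root-dir configuration might look like this (the path is illustrative; the Best Practice example later in this document uses the same values):

```yaml
galaxy_layout: root-dir
galaxy_root: /srv/galaxy   # server code, configs, and data all live under here
```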

Optional variables

The galaxy_config_perms option controls the permissions applied to Galaxy configuration files. It was added in version 0.9.18 of the role and defaults to 0640 (owner read-write, group read-only, no access for others). Older versions of the role did not manage the permissions of configuration files, so be aware that your configuration file permissions may change as of 0.9.18 and later.
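For example, to additionally grant group write access (a deliberate loosening, shown only to illustrate overriding the variable):

```yaml
galaxy_config_perms: "0660"   # default is 0640; quoted to avoid YAML octal parsing
```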

Layout control

  • galaxy_layout: available layouts can be found in the vars/ subdirectory and possible values include:
    • root-dir: Everything is laid out in subdirectories underneath a single root directory.
    • opt: An FHS-conforming layout across multiple directories such as /opt, /etc/opt, etc.
    • legacy-improved: Everything underneath the Galaxy server directory, as with run.sh.
    • legacy: The default layout prior to the existence of galaxy_layout and currently the default so as not to break existing usage of this role.
    • custom: Reasonable defaults for custom layouts; requires setting a few variables as described in vars/layout-custom.yml.

Either the root-dir or opt layout is recommended for new Galaxy deployments.

Options below that control individual file or subdirectory placement can still override defaults set by the layout.

New options for Galaxy 22.01 and later

The role can now manage the Galaxy service using gravity. This is the default for Galaxy 22.05 and later. Additionally, support for the galaxy_restart_handler_name variable has been removed. If you need to enable your own custom restart handler, you can use the "listen" option to the handler as explained in the handler documentation. The handler should "listen" to the topic "restart galaxy".
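A custom handler subscribed to that topic might look like the following sketch. The unit name galaxy.target is an assumption about what gravity's systemd process manager creates; check your host and adjust:

```yaml
handlers:
  - name: Restart Galaxy my way
    ansible.builtin.systemd:
      name: galaxy.target   # assumed unit name; verify with: systemctl list-units 'galaxy*'
      state: restarted
    become: yes
    listen: restart galaxy  # subscribes this handler to the role's notification topic
```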

As of release 22.01, Galaxy can serve different static content per host (e.g. per subdomain), and you can set themes per host.

Setting galaxy_manage_subdomain_static: yes enables the creation of per-host static directories and configuration. With galaxy_manage_themes: yes, the role copies your theme files to the Galaxy server, appends the themes_config.yml file specified under galaxy_themes_conf_path to them, and creates the respective configuration.

In order to use this feature, you need to create the following directory structure under files/ (customizable with the galaxy_themes_ansible_file_path variable):

files/galaxy/static
├──<subdomain-name-1>
│   ├── static
│   │   ├── dist (optional)
│   │   │   └── some-image.png 
│   │   ├── images (optional)
│   │   │   └── more-content.jpg
│   │   └── welcome.html (optional, galaxyproject.org will be displayed otherwise.)
│   └── themes 
│       └── <subdomain-name-1>.yml           
├── <subdomain-name-2>                            
│   ├── static
│   │   ├── dist (optional)
│   │   │   ├── another-static-image.svg
│   │   │   └── more-static-content-2.svg
│   │   └── welcome.html (optional)
│   └── themes
│       └── <subdomain-name-2>.yml
... (and many more subdomains)

The <subdomain-name> directories must exactly match your subdomains' names. The static and themes subdirectories are mandatory, as is the correctly named theme file (if you enabled galaxy_manage_themes), while all subdirectories inside static are optional.
Which subdirectories and files are copied is controlled by the static_galaxy_themes_keys variable.

Also make sure that you set galaxy_themes_welcome_url_prefix, so your welcome pages are templated correctly.

It is mandatory to set the variables under galaxy_themes_subdomains as shown in the example in defaults/main.yml. If you enabled the galaxy_manage_host_filters variable, you can also specify the tool sections that should be shown for each individual subdomain.

New options for Galaxy 18.01 and later

  • galaxy_config_style (default: yaml): The type of Galaxy configuration file to write: yaml for the YAML format supported by Gunicorn, or ini-paste for the traditional PasteDeploy-style INI file.
  • galaxy_app_config_section (default: depends on galaxy_config_style): The config file section under which the Galaxy config should be placed (and the key in galaxy_config under which the Galaxy config can be found). If galaxy_config_style is yaml, the default is galaxy. If galaxy_config_style is ini-paste, the default is app:main.
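Putting the two together, a minimal YAML-style configuration could look like this (the brand value is illustrative):

```yaml
galaxy_config_style: yaml
galaxy_config:
  galaxy:              # matches the default galaxy_app_config_section for yaml
    brand: My Galaxy   # illustrative Galaxy option
```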

Feature control

Several variables control which functions this role will perform (all default to yes except where noted):

  • galaxy_create_user (default: no): Create the Galaxy user. Running as a dedicated user is a best practice, but most production Galaxy instances submitting jobs to a cluster will manage users in a directory service (e.g. LDAP). This option is useful for standalone servers. Requires superuser privileges.
  • galaxy_manage_paths (default: no): Create and manage ownership/permissions of configured Galaxy paths. Requires superuser privileges.
  • galaxy_manage_clone: Clone Galaxy from the source repository and maintain it at a specified version (commit), as well as set up a virtualenv from which it can be run.
  • galaxy_manage_download: Download and unpack Galaxy from a remote archive url, as well as set up a virtualenv from which it can be run.
  • galaxy_manage_existing: Take over a Galaxy directory that already exists, as well as set up a virtualenv from which it can be run. galaxy_server_dir must point to the path which already contains the source code of Galaxy.
  • galaxy_manage_static_setup: Manage "static" Galaxy configuration files - ones which are not modifiable by the Galaxy server itself. At a minimum, this is the primary Galaxy configuration file, galaxy.ini.
  • galaxy_manage_mutable_setup: Manage "mutable" Galaxy configuration files - ones which are modifiable by Galaxy (e.g. as you install tools from the Galaxy Tool Shed).
  • galaxy_manage_database: Upgrade the database schema as necessary, when new schema versions become available.
  • galaxy_fetch_dependencies: Fetch Galaxy dependent modules to the Galaxy virtualenv.
  • galaxy_build_client: Build the Galaxy client application (web UI).
  • galaxy_client_make_target (default: client-production-maps): Set the client build type. Options include: client, client-production and client-production-maps. See Galaxy client readme for details.
  • galaxy_manage_systemd (default: no): Install a systemd service unit to start and stop Galaxy with the system (and using the systemctl command).
  • galaxy_manage_errordocs (default: no): Install Galaxy-styled 413 and 502 HTTP error documents for nginx. Requires write privileges for the nginx error document directory.
  • galaxy_manage_cleanup (default: no): Install a cron job to clean up Galaxy framework and job execution temporary files. Requires tmpwatch(8) on RedHat-based systems or tmpreaper(8) on Debian-based systems. See the galaxy_tmpclean_* vars in the defaults file for details.
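As a sketch, a standalone server where the role should do everything on a single host might enable the optional features like so (this mirrors the Best Practice example later in this document):

```yaml
galaxy_create_user: yes     # requires superuser privileges
galaxy_manage_paths: yes    # requires superuser privileges
galaxy_manage_systemd: yes
galaxy_manage_cleanup: yes  # needs tmpwatch(8) or tmpreaper(8) installed
```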

Galaxy code and configuration

Options for configuring Galaxy and controlling which version is installed.

  • galaxy_config: The contents of the Galaxy configuration file (galaxy.ini by default) are controlled by this variable. It is a hash of hashes (or dictionaries) that will be translated into the configuration file. See the Example Playbooks below for usage.
  • galaxy_config_files: List of hashes (with src and dest keys) of files to copy from the control machine. For example, to set job destinations, you can use the galaxy_config_dir variable followed by the file name as the dest, e.g. dest: "{{ galaxy_config_dir }}/job_conf.xml". Make sure to add the appropriate setup within galaxy_config for each file added here (so, if adding job_conf.xml make sure that galaxy_config.galaxy.job_config_file points to that file).
  • galaxy_config_templates: List of hashes (with src and dest keys) of templates to fill from the control machine.
  • galaxy_local_tools: List of local tool files or directories to copy from the control machine, relative to galaxy_local_tools_src_dir (default: files/galaxy/tools in the playbook). List items can either be a tool filename, or a dictionary with keys file, section_name, and, optionally, section_id. If no section_name is specified, tools will be placed in a section named Local Tools.
  • galaxy_local_tools_dir: Directory on the Galaxy server where local tools will be installed.
  • galaxy_dynamic_job_rules: List of dynamic job rules to copy from the control machine, relative to galaxy_dynamic_job_rules_src_dir (default: files/galaxy/dynamic_job_rules in the playbook).
  • galaxy_dynamic_job_rules_dir (default: {{ galaxy_server_dir }}/lib/galaxy/jobs/rules): Directory on the Galaxy server where dynamic job rules will be installed. If changed from the default, ensure the directory is on Galaxy's $PYTHONPATH (e.g. in {{ galaxy_venv_dir }}/lib/python2.7/site-packages) and configure the dynamic rules plugin in job_conf.xml accordingly.
  • galaxy_repo (default: https://github.com/galaxyproject/galaxy.git): Upstream Git repository from which Galaxy should be cloned.
  • galaxy_commit_id (default: master): A commit id, tag, branch, or other valid Git reference that Galaxy should be updated to. Specifying a branch will update to the latest commit on that branch. Using a real commit id is the only way to explicitly lock Galaxy at a specific version.
  • galaxy_force_checkout (default: no): If yes, any modified files in the Galaxy repository will be discarded.
  • galaxy_clone_depth (default: unset): Depth to use when performing git clone. Leave unspecified to clone entire history.
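For example, to install a job configuration file and point Galaxy at it, following the galaxy_config_files description above (the src path on the control machine is illustrative):

```yaml
galaxy_config_files:
  - src: files/galaxy/config/job_conf.xml        # illustrative path on the control machine
    dest: "{{ galaxy_config_dir }}/job_conf.xml"
galaxy_config:
  galaxy:
    job_config_file: "{{ galaxy_config_dir }}/job_conf.xml"
```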

Additional config files

Some optional configuration files commonly used in production Galaxy servers can be configured from variables:

  • galaxy_dependency_resolvers: Populate the dependency_resolvers_conf.yml file. See the sample XML configuration for options.
  • galaxy_container_resolvers: Populate the container_resolvers_conf.yml file. See the sample XML configuration for options.
  • galaxy_job_metrics_plugins: Populate the job_metrics_conf.yml file. See the sample XML configuration for options.

As of Galaxy 21.05 the sample configuration files for these features are in XML, but YAML is supported like so:

galaxy_dependency_resolvers:
  - type: <XML tag name>
    <XML attribute name>: <XML attribute value>

For example:

galaxy_dependency_resolvers:
  - type: galaxy_packages
  - type: conda
    prefix: /srv/galaxy/conda
    auto_init: true
    auto_install: false

Path configuration

Options for controlling where certain Galaxy components are placed on the filesystem.

  • galaxy_venv_dir (default: <galaxy_server_dir>/.venv): The role will create a virtualenv from which Galaxy will run; this controls where the virtualenv will be placed.
  • galaxy_virtualenv_command (default: virtualenv): The command used to create Galaxy's virtualenv. Set to pyvenv to use Python 3 on Galaxy >= 20.01.
  • galaxy_virtualenv_python (default: python of first virtualenv or python command on $PATH): The python binary to use when creating the virtualenv. For Galaxy < 20.01, use python2.7 (if it is not the default); for Galaxy >= 20.01, use python3.5 or higher.
  • galaxy_config_dir (default: <galaxy_server_dir>): Directory that will be used for "static" configuration files.
  • galaxy_mutable_config_dir (default: <galaxy_server_dir>): Directory that will be used for "mutable" configuration files, must be writable by the user running Galaxy.
  • galaxy_mutable_data_dir (default: <galaxy_server_dir>/database): Directory that will be used for "mutable" data and caches, must be writable by the user running Galaxy.
  • galaxy_config_file (default: <galaxy_config_dir>/galaxy.ini): Galaxy's primary configuration file.

User management and privilege separation

  • galaxy_separate_privileges (default: no): Enable privilege separation mode.
  • galaxy_user (default: user running ansible): The name of the system user under which Galaxy runs.
  • galaxy_privsep_user (default: root): The name of the system user that owns the Galaxy code, config files, and virtualenv (and dependencies therein).
  • galaxy_group: Common group between the Galaxy user and privilege separation user. If set and galaxy_manage_paths is enabled, directories containing potentially sensitive information such as the Galaxy config file will be created group- but not world-readable. Otherwise, directories are created world-readable.

Access method control

The role needs to perform tasks as different users depending on which features you have enabled and how you are connecting to the target host. By default, the role will use become (i.e. sudo) to perform tasks as the appropriate user if deemed necessary. Overriding this behavior is discussed in the defaults file.

systemd

systemd is the standard system init daemon on most modern Linux flavors (and all of the ones supported by this role). If galaxy_manage_systemd is enabled, a galaxy service will be configured in systemd to run Galaxy. This service will be automatically started and configured to start when your system boots. You can control the Galaxy service with the systemctl utility as the root user or with sudo:

# systemctl start galaxy     # start galaxy
# systemctl reload galaxy    # attempt a "graceful" reload
# systemctl restart galaxy   # perform a hard restart
# systemctl stop galaxy      # stop galaxy

You can use systemd user mode if you do not have root privileges on your system by setting galaxy_systemd_root to false. Add --user to the systemctl commands above to interact with systemd in user mode:

$ systemctl --user start galaxy
$ systemctl --user stop galaxy

Error documents

  • galaxy_errordocs_dir: Install Galaxy-styled HTTP 413 and 502 error documents under this directory. The 502 message uses nginx server side includes to allow administrators to create a custom message in ~/maint when Galaxy is down. nginx must be configured separately to serve these error documents.
  • galaxy_errordocs_server_name (default: Galaxy): Used to display the message "<galaxy_errordocs_server_name> cannot be reached" on the 502 page.
  • galaxy_errordocs_prefix (default: /error): Web-side path to the error document root.
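A minimal error-document setup might look like this (the directory path is an illustrative assumption; the separate nginx configuration that serves these documents is not shown):

```yaml
galaxy_manage_errordocs: yes
galaxy_errordocs_dir: /srv/galaxy/errordocs   # illustrative path; nginx must serve this
galaxy_errordocs_prefix: /error
```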

Miscellaneous options

  • galaxy_admin_email_to: If set, email this address when Galaxy has been updated. Assumes mail is properly configured on the managed host.
  • galaxy_admin_email_from: Address to send the aforementioned email from.

Dependencies

None

Example Playbook

Basic

Install Galaxy on your local system with all the default options:

- hosts: localhost
  vars:
    galaxy_server_dir: /srv/galaxy
  connection: local
  roles:
     - galaxyproject.galaxy

If your Ansible version is >= 2.10.4, supply an extra argument, -u $USER, when you run ansible-playbook playbook.yml; otherwise you will get an error.

Once installed, you can start with:

$ cd /srv/galaxy
$ sh run.sh

Best Practice

Install Galaxy as per the current production server best practices:

  • Galaxy code (clone) is "clean": no configs or mutable data live underneath the clone
  • Galaxy code and static configs are privilege separated: not owned/writeable by the user that runs Galaxy
  • Configuration files are not world-readable
  • PostgreSQL is used as the backing database
  • The 18.01+ style YAML configuration is used
  • Two job handlers are started
  • When the Galaxy code or configs are updated by Ansible, Galaxy will be restarted using galaxyctl or systemctl restart galaxy-*

- hosts: galaxyservers
  vars:
    galaxy_config_style: yaml
    galaxy_layout: root-dir
    galaxy_root: /srv/galaxy
    galaxy_commit_id: release_23.0
    galaxy_separate_privileges: yes
    galaxy_force_checkout: true
    galaxy_create_user: yes
    galaxy_manage_paths: yes
    galaxy_manage_systemd: yes
    galaxy_user: galaxy
    galaxy_privsep_user: gxpriv
    galaxy_group: galaxy
    postgresql_objects_users:
      - name: galaxy
        password: null
    postgresql_objects_databases:
      - name: galaxy
        owner: galaxy
    galaxy_config:
      gravity:
        process_manager: systemd
        galaxy_root: "{{ galaxy_root }}/server"
        galaxy_user: "{{ galaxy_user_name }}"
        virtualenv: "{{ galaxy_venv_dir }}"
        gunicorn:
          # listening options
          bind: "unix:{{ galaxy_mutable_config_dir }}/gunicorn.sock"
          # performance options
          workers: 2
          # Other options that will be passed to gunicorn
          # This permits setting of 'secure' headers like REMOTE_USER (and friends)
          # https://docs.gunicorn.org/en/stable/settings.html#forwarded-allow-ips
          extra_args: '--forwarded-allow-ips="*"'
          # This lets Gunicorn start Galaxy completely before forking which is faster.
          # https://docs.gunicorn.org/en/stable/settings.html#preload-app
          preload: true
        celery:
          concurrency: 2
          enable_beat: true
          enable: true
          queues: celery,galaxy.internal,galaxy.external
          pool: threads
          memory_limit: 2
          loglevel: DEBUG
        handlers:
          handler:
            processes: 2
            pools:
              - job-handlers
              - workflow-schedulers
      galaxy:
        database_connection: "postgresql:///galaxy?host=/var/run/postgresql"
  pre_tasks:
    - name: Install Dependencies
      apt:
        name:
          - sudo
          - git
          - make
          - python3-venv
          - python3-setuptools
          - python3-dev
          - python3-psycopg2
          - gcc
          - acl
          - gnutls-bin
          - libmagic-dev
      become: yes
  roles:
    # Install with:
    #   % ansible-galaxy install galaxyproject.postgresql
    - role: galaxyproject.postgresql
      become: yes
    # Install with:
    #   % ansible-galaxy install galaxyproject.postgresql_objects
    - role: galaxyproject.postgresql_objects
      become: yes
      become_user: postgres
    - role: galaxyproject.galaxy

License

Academic Free License ("AFL") v. 3.0

Author Information

This role was written and contributed to by the following people:

Contributors

abretaud, afgane, almahmoud, areso, banshee1221, bgruening, cat-bro, dannon, dometto, drosofff, gmauro, guerler, hexylena, jdavcs, jmchilton, ksuderman, martenson, mira-miracoli, mvdbeek, natefoo, nsoranzo, nuwang, pcm32, pvanheus, remimarenco, rhpvorderman, sanjaysrikakulam, sbelluzzo, slugger70, torfinnnome

Issues

errordocs image files

@natefoo while I was trying to fix a bug in GalaxyKickStart, I incidentally came to the errordocs.yml file.
I guess that the files
- content_bg.png
- error_message_icon.png
- masthead_bg.png
are supposed to be sourced from the templates directory but I cannot find them there

Building/upgrading Galaxy db with ansible role

Hi,

We are trying to streamline the process of deploying a local Galaxy instance through the Ansible roles provided here on GitHub. We have an existing db server running MySQL which I'm trying to connect to, so we're not using the PostgreSQL role. During the "Create Galaxy DB" or the "Upgrade Galaxy DB" tasks I keep getting errors. Any idea what could be the cause? Did I leave out some sort of setting? More info below:

Error when starting from a fresh db:

    failed: [localhost] => {"changed": true, "cmd": ["/mnt/galaxy-dist/.venv/bin/python", "/mnt/galaxy-dist/scripts/create_db.py", "-c", "/mnt/galaxy-dist/galaxy.ini"], "delta": "0:00:23.572947", "end":"2015-11-10 09:43:08.047408", "rc": -9, "start": "2015-11-10 09:42:44.474461", "warnings": []}
stderr: /mnt/galaxy-dist/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/relationships.py:2396: SAWarning: Non-simple column elements in primary join condition for property LibraryDatasetDatasetAssociation.visible_children - consider using remote() annotations to mark the remote side.
/mnt/galaxy-dist/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64-ucs4.egg/sqlalchemy/orm/relationships.py:2396: SAWarning: Non-simple column elements in primary join condition for property HistoryDatasetAssociation.visible_children - consider using remote() annotations to mark the remote side.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to create tables for tracking workflow invocations.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 41 -> 42...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Drop and readd workflow invocation tables, allowing null jobs
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 42 -> 43...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to create tables and columns for sharing visualizations.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 43 -> 44...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to add a notify column to the request table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 44 -> 45...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to add the request_type_permissions table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 45 -> 46...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to create tables for handling post-job actions.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 46 -> 47...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Add a user_id column to the job table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Updating user_id column in job table for  0  rows...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Updated the user_id column for  0  rows in the job table.
INFO:galaxy.model.migrate.check:0  rows have no user_id since the value was NULL in the galaxy_session table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 47 -> 48...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Add a state column to the history_dataset_association and library_dataset_dataset_association table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 48 -> 49...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migration script to add the api_keys table.
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 49 -> 50...
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:========================================
INFO:galaxy.model.migrate.check:This script drops tables that were associated with the old Galaxy Cloud functionality.
INFO:galaxy.model.migrate.check:========================================
INFO:galaxy.model.migrate.check:
INFO:galaxy.model.migrate.check:Migrating 50 -> 51...
FATAL: all hosts have already failed -- aborting

Error when upgrading existing db:

failed: [localhost] => {"changed": true, "cmd": ["/mnt/galaxy-dist/.venv/bin/python", "/mnt/galaxy-dist/scripts/manage_db.py", "-c", "/mnt/galaxy-dist/galaxy.ini", "upgrade"], "delta": "0:00:36.505907", "end": "2015-11-10 09:54:06.970974", "rc": -9, "start": "2015-11-10 09:53:30.465067", "warnings": []}
stdout: 50 -> 51...
Migration script to add imported column for jobs table.
done
51 -> 52...
Migration script to add the sample_dataset table and remove the 'dataset_files' column
from the 'sample' table
done
52 -> 53...
Migration script to create tables for rating histories, datasets, workflows, pages, and visualizations.
done
53 -> 54...
Migration script to add dbkey column for visualization.
done
54 -> 55...
Migration script to add the post_job_action_association table.
done
55 -> 56...
Migration script to create tables for adding explicit workflow outputs.
done
56 -> 57...
Migration script to modify the 'notify' field in the 'request' table from a boolean
to a JSONType
done
57 -> 58...
Migration script to create table for exporting histories to archives.
done
58 -> 59...
Migration script to modify the 'file_path' field type in 'sample_dataset' table
to 'TEXT' so that it can support large file paths exceeding 255 characters
done
59 -> 60...
Migration script to create column and table for importing histories from
file archives.
done
60 -> 61...
Migration script to create tables task management.
done
61 -> 62...
Migration script to create table for associating sessions and users with
OpenIDs.
done
62 -> 63...
Migration script to create a new 'sequencer' table
done
63 -> 64...
Migration script to add the run and sample_run_association tables.
done
64 -> 65...
Migration script to add 'name' attribute to the JSON dict which describes
a form definition field and the form values in the database. In the 'form_values'
table, the 'content' column is now a JSON dict instead of a list.
done
65 -> 66...
Migration script to create table for storing deferred job and managed transfer
information.
done
66 -> 67...
Migration script to populate the 'sequencer' table and it is populated using unique
entries in the 'datatx_info' column in the 'request_type' table. It also deletes the 'datatx_info'
column in the 'request_type' table and adds a foreign key to the 'sequencer' table. The
actual contents of the datatx_info column are stored as form_values.
done
67 -> 68...
This migration script renames the sequencer table to 'external_service' table and
creates a association table, 'request_type_external_service_association' and
populates it. The 'sequencer_id' foreign_key from the 'request_type' table is removed.
The 'sequencer_type_id' column is renamed to 'external_service_type_id' in the renamed
table 'external_service'. Finally, adds a foreign key to the external_service table in the
sample_dataset table and populates it.
done
68 -> 69...
Migration script to rename the sequencer information form type to external service information form
done
69 -> 70...
Migration script to add 'info' column to the transfer_job table.
done
70 -> 71...
Migration script to add 'workflow' and 'history' columns for a sample.
done
71 -> 72...
Migration script to add 'pid' and 'socket' columns to the transfer_job table.
done
72 -> 73...
Migration script to add 'ldda_parent_id' column to the implicitly_converted_dataset_association table.
done
73 -> 74...
Migration script to add 'purged' column to the library_dataset table.
done
74 -> 75...
Migration script to add a 'subindex' column to the run table.
done
75 -> 76...
This migration script fixes the data corruption caused in the form_values
table (content json field) by migrate script 65.
No corrupted rows found.
done
76 -> 77...
Migration script to create table for storing tool tag associations.
done
77 -> 78...
Migration script to add 'total_size' column to the dataset table, 'purged'
column to the HDA table, and 'disk_usage' column to the User and GalaxySession
tables.
done
78 -> 79...
Migration script to add the job_to_input_library_dataset table.
done
79 -> 80...
Migration script to create tables for disk quotas.
done
80 -> 81...
Migration script to add a 'tool_version' column to the hda/ldda tables.
done
81 -> 82...
Migration script to add the tool_shed_repository table.
done
82 -> 83...
Migration script to add 'prepare_input_files_cmd' column to the task table and to rename a column.
done
83 -> 84...
Migration script to add 'ldda_id' column to the implicitly_converted_dataset_association table.
done
84 -> 85...
Migration script to add 'info' column to the task table.
done
85 -> 86...
FATAL: all hosts have already failed -- aborting

Excerpt from playbook.yaml

    ---
    - hosts: os-builder
      sudo: yes
      vars:
        user: galaxy
        uid: 1015
        passwd: galaxy
        dbuser: "{{user}}"
        dbpasswd: "{{passwd}}"
      roles:
      - role: ../ansible-galaxy-os
        #mandatory
        galaxy_user_name: "{{user}}"
        galaxy_user_uid: "{{uid}}"
      - role: ../ansible-nginx
      - role: ../ansible-galaxy
        galaxy_server_dir: /mnt/galaxy-dist
        galaxy_vcs: git              
        galaxy_fetch_eggs: yes         
        galaxy_changeset_id: master      
        galaxy_config:            
          "server:main":
            host: 0.0.0.0
          "app:main":
            database_connection: "mysql://galaxy:[email protected]/galaxy"
            brand: CMGG
            allow_library_path_paste: True
            require_login: True
            allow_user_impersonation: True
            expose_user_name: True
            expose_user_email: True
        galaxy_config_files:
          - src: "config/auth_conf.xml"
            dest: "{{galaxy_server_dir}}/config"

Node version cannot be changed

Because we use the command module to install node with nodeenv, and because we determine whether that task needs to run by checking for the existence of {{ galaxy_venv_dir }}/bin/npm, once installed, the node version will not change even if galaxy_node_version is changed.

maybe we should restrict some of the unnecessary privileges in the systemd unit.

$ systemd-analyze security galaxy | tail
✗ CapabilityBoundingSet=~CAP_SYS_CHROOT                       Service may issue chroot()                                                   0.1
✗ ProtectHostname=                                            Service may change system host/domainname                                    0.1
✗ CapabilityBoundingSet=~CAP_BLOCK_SUSPEND                    Service may establish wake locks                                             0.1
✗ CapabilityBoundingSet=~CAP_LEASE                            Service may create file leases                                               0.1
✗ CapabilityBoundingSet=~CAP_SYS_PACCT                        Service may use acct()                                                       0.1
✗ CapabilityBoundingSet=~CAP_SYS_TTY_CONFIG                   Service may issue vhangup()                                                  0.1
✗ CapabilityBoundingSet=~CAP_WAKE_ALARM                       Service may program timers that wake up the system                           0.1
✗ RestrictAddressFamilies=~AF_UNIX                            Service may allocate local sockets                                           0.1

→ Overall exposure level for galaxy.service: 9.2 UNSAFE 😨
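A hedged sketch of a drop-in that would address several of the flagged items; the path and the exact directive set are assumptions, and each restriction needs testing against a running Galaxy:

```ini
# /etc/systemd/system/galaxy.service.d/hardening.conf (sketch)
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=/srv/galaxy
ProtectHostname=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
# Galaxy needs local sockets (e.g. for uWSGI), so keep AF_UNIX allowed
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
CapabilityBoundingSet=
```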

Privilege separation (privsep) functionality needs an overhaul

Currently there are a lot of cases where you need to set some combination of ansible_user, galaxy_user, galaxy_privsep_user, galaxy_remote_users, galaxy_become_users, etc. even when you are not using privsep mode.

The goal of all this user manipulation inside the role is to be able to run all the individual features of this role as different users without requiring the use of root, and to support different privilege escalation scenarios: sites where sudo is not possible and root privileges are obtained with ssh -l root instead, deploying onto root-squashed NFS, etc. We also need different privileges for different features in the role. Ansible itself doesn't provide any framework for this, so we've built it into the role.

Ansible has of course rapidly developed in the interim and there may be some new features that would allow us to do this in a better way. We could also consider breaking each bit of functionality into separate roles that are then combined into a collection. Other ideas very much welcome.

The alternative is the way the role used to work: If you want privilege separation or want it to create users for you (requiring root privileges) etc., you call it multiple times with the right combination of become, become_user, remote_user, and the control variables flipped on or off depending on what task(s) you are performing. This is much uglier in my opinion.

#116 and #118 are both related.

galaxy_config_dir

Should galaxy_config_dir default from "{{ galaxy_server_dir }}" to "{{ galaxy_server_dir }}/config"?

Default playbook paths for configs and other path defaulting issues

Some of the new features (local tool deployment, dynamic rules) automatically look at a default path in the playbook for source files, but other things, like setting up a job_conf.xml, require specifying the source path. In addition, the job config path doesn't default to {{ galaxy_config_dir }}/job_conf.xml, so you have to explicitly set the var in galaxy_config to use a custom job conf.
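Until better defaulting exists, both halves have to be spelled out; a sketch of the workaround (the job_config_file option name may differ by Galaxy release):

```yaml
galaxy_config_files:
  - src: files/galaxy/config/job_conf.xml
    dest: "{{ galaxy_config_dir }}/job_conf.xml"

galaxy_config:
  galaxy:
    job_config_file: "{{ galaxy_config_dir }}/job_conf.xml"
```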

Option to install additional python packages into virtual environment

There could be a list variable galaxy_additional_venv_packages to specify additional Python packages to install into the Galaxy virtual environment. My use case is that Galaxy Australia wants to install the package 'total-perspective-vortex' into Galaxy's virtual environment. The most logical time to install these is after the venv is created and before Galaxy is restarted.
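A sketch of how the proposed variable might look and be consumed with Ansible's pip module; the variable name is the proposal above and the task placement is an assumption:

```yaml
galaxy_additional_venv_packages:
  - total-perspective-vortex

# Task sketch, to run after venv creation and before Galaxy is restarted:
- name: Install additional packages into the Galaxy virtualenv
  pip:
    name: "{{ galaxy_additional_venv_packages }}"
    virtualenv: "{{ galaxy_venv_dir }}"
  when: galaxy_additional_venv_packages | length > 0
```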

uWSGI config loses order

Currently, we take the contents of galaxy_config.uwsgi (the value of which is a hash) to generate the uwsgi: section of galaxy.yml, and do some munging in the uwsgi_yaml filter plugin to turn it into uWSGI's non-conforming internal YAML syntax.

In uWSGI config syntax the order of options matters, because it has control structures like:

galaxy_config:
  uwsgi:
    if-exists: /srv/galaxy/test/var/zerg-run.fifo
    hook-accepting1-once: writefifo:/srv/galaxy/test/var/zerg-run.fifo 2q
    endif:

However, our template just sorts the keys of the hash and writes them in that order. You can work around this with e.g.:

galaxy_config:
  uwsgi:
    if-exists: |
      {{ galaxy_mutable_data_dir }}/zerg-run.fifo
      hook-accepting1-once: writefifo:{{ galaxy_mutable_data_dir }}/zerg-run.fifo 2q
      endif:

If you need more than one if-exists block, you should be able to put the "body" of the block into list members underneath galaxy_config.uwsgi.if-exists. Even with this workaround, we may still write an invalid config, because certain blocks or individual directives may need to come before others.

But we should probably allow galaxy_config.uwsgi to be a list so that the role user's order can be maintained. List member values could be either a hash with a single key/value:

galaxy_config:
  uwsgi:
    - {socket: "127.0.0.1:4096"}
    - {if-exists: "{{ galaxy_mutable_data_dir }}/zerg-run.fifo"}
    - {hook-accepting1-once: "writefifo:{{ galaxy_mutable_data_dir }}/zerg-run.fifo 2q"}
    - {endif: null}

or just a string in uWSGI-yaml syntax (same as above but members are not wrapped in { ... }).

This raises a related question: if you compile uWSGI against libyaml, does that make it impossible to use these control structures, since then the config needs to be in proper yaml syntax?

remote_user value __galaxy_remote_user undefined

From gitter:

@NielsMol
However, when I tried to run the playbook again with ansible-playbook galaxy.yml, I received an error at the task: galaxyproject.galaxy : Create Galaxy user

TASK [galaxyproject.galaxy : Create Galaxy user] ****************************************************************************************************************************************************************
fatal: [UNRAVEL-Galaxy-Version-21-5.fairheartgalaxy-avans.surf-hosted.nl]: FAILED! => {"msg": "The field 'remote_user' has an invalid value, which includes an undefined variable. The error was: '__galaxy_remote_user' is undefined"}

PLAY RECAP ******************************************************************************************************************************************************************************************************
UNRAVEL-Galaxy-Version-21-5.fairheartgalaxy-avans.surf-hosted.nl : ok=37   changed=0    unreachable=0    failed=1    skipped=30   rescued=0    ignored=0

A workaround:

@HugoJH
Hi @NielsMol, I just faced this one. adding -e ansible_user=galaxy to your ansible-playbook call should make it work.

pip installs failing, starting release_19.05

I am scratching my head on the following issue:

Starting Sunday, Jan 12, pip install of requirements.txt is failing for Galaxy versions > 19.01 with the pip error ERROR: Package 'setuptools' requires a different Python: 2.7.12 not in '>=3.5'. This results in an error in this playbook for the indicated versions.

My guess is that it is related to upgrades in https://wheels.galaxyproject.org/simple/ or https://pypi.python.org/simple, or to the use of the --index-url or --extra-index-url parameters by this role.

Any insights would be appreciated.
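The timing matches setuptools 45.0.0 (January 2020), which dropped Python 2 support. One hedged workaround is to pre-pin setuptools in the venv before the role's pip install runs (the galaxy_venv_dir path is the role's default, an assumption here):

```yaml
- name: Pin setuptools to the last Python 2 compatible release
  pip:
    name: "setuptools<45"
    virtualenv: "{{ galaxy_venv_dir }}"
```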

Install miniconda and make virtualenv from conda

This would be pretty good to have for e.g. CentOS 7, where Python 3 may not be installed (or if it is, it may not be installed on the cluster). Having an option to use conda for Galaxy's venv/python would fix this.
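A hedged pre-task sketch for the Miniconda approach; the installer URL, prefix, and the idea of pointing galaxy_virtualenv_command at a conda-provided virtualenv are all assumptions:

```yaml
pre_tasks:
  - name: Download Miniconda installer
    get_url:
      url: https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
      dest: /tmp/miniconda.sh
      mode: "0755"
  - name: Install Miniconda
    command: /tmp/miniconda.sh -b -p /opt/miniconda3
    args:
      creates: /opt/miniconda3/bin/python3
  - name: Install virtualenv into the conda base environment
    command: /opt/miniconda3/bin/pip install virtualenv
    args:
      creates: /opt/miniconda3/bin/virtualenv
```

Then setting galaxy_virtualenv_command: /opt/miniconda3/bin/virtualenv might let the Galaxy venv be created from the conda Python.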

Use local tool dirs

Hi,
I noticed that when you use directories for your local tools instead of just an XML file (for example, if you also need a Python script), local_tool_conf.xml will not be updated:

{% if tool.endswith('.xml') %}
<tool file="{{ tool }}" />
{% endif %}

In my case I put in my group_var:

galaxy_local_tools:
  - splitbedGraphFromBed
  - ScaleBedGraph

And I have:

$ tree files/galaxy/tools/
files/galaxy/tools/
├── ScaleBedGraph
│   ├── ScaleBedGraph.sh
│   └── ScaleBedGraph.xml
└── splitbedGraphFromBed
    ├── splitbedGraphFromBed.py
    └── splitbedGraphFromBed.xml

2 directories, 4 files

Then I needed to put a templates/galaxy/config/tool_conf.xml.j2 with all the default tools + at the end:

  <section id="LDtools" name="LDtools">
    <tool file="{{ galaxy_local_tools_dir }}/splitbedGraphFromBed/splitbedGraphFromBed/splitbedGraphFromBed.xml" />
    <tool file="{{ galaxy_local_tools_dir }}/ScaleBedGraph/ScaleBedGraph/ScaleBedGraph.xml" />
  </section>

Is there a better way?
Something like:

{% for tool in galaxy_local_tools %}
{% if not tool.endswith('.xml') %}
    <tool file="{{ galaxy_local_tools_dir }}/{{ tool }}/{{ tool }}/{{ tool }}.xml" />
{% endif %}
{% endfor %}

?
Also, I found it a bit strange that the tool directory is put inside a directory of the same name...

Build client after a Galaxy update

We switched our target branch (a fork of dev).

This raised an error during the client build:

TASK [galaxyproject.galaxy : Install yarn] *************************************
ok: [192.168.105.118]

TASK [galaxyproject.galaxy : Build client] *************************************
fatal: [192.168.105.118]: FAILED! => {"changed": false, "cmd": "/usr/bin/gmake client-production-maps", "msg": "", "rc": 2, "stderr": "error [email protected]: The engine \"node\" is incompatible with this module. Expected version \"^8.10.0 || ^10.13.0 || >=11.10.1\". Got \"9.11.1\"\nerror Found incompatible module.\ngmake: *** [node-deps] Erreur 1\n", "stderr_lines": ["error [email protected]: The engine \"node\" is incompatible with this module. Expected version \"^8.10.0 || ^10.13.0 || >=11.10.1\". Got \"9.11.1\"", "error Found incompatible module.", "gmake: *** [node-deps] Erreur 1"], "stdout": "cd client && yarn install --network-timeout 300000 --check-files\nyarn install v1.17.3\n[1/4] Resolving packages...\n[2/4] Fetching packages...\ninfo [email protected]: The platform \"linux\" is incompatible with this module.\ninfo \"[email protected]\" is an optional dependency and failed compatibility check. Excluding it from installation.\ninfo Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.\n", "stdout_lines": ["cd client && yarn install --network-timeout 300000 --check-files", "yarn install v1.17.3", "[1/4] Resolving packages...", "[2/4] Fetching packages...", "info [email protected]: The platform \"linux\" is incompatible with this module.", "info \"[email protected]\" is an optional dependency and failed compatibility check. Excluding it from installation.", "info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command."]

I had to update the Galaxy dependencies using:

(.venv) [root@galaxy-dev server]# ./scripts/common_startup.sh --no-create-venv

Maybe there is another way to update node?

Should the tasks/client.yml block include tasks/dependencies.yml?

Duplicate dictionary keys not allowed

According to this project's README and the Galaxy documentation, the following is part of a valid playbook configuration:

uwsgi:
  mule: lib/galaxy/main.py
  mule: lib/galaxy/main.py

However, this does not work: Ansible expects legal YAML, which does not allow duplicate dictionary keys. As a result, only the last key will be processed.

This would work if these were items in a sequence:

uwsgi:
  some-key:
    - mule: lib/galaxy/main.py
    - mule: lib/galaxy/main.py

However, that changes the format of the output configuration used by uwsgi.

One solution is to transform Ansible's output, flattening the sequence by "pulling it" up one level and removing its key. This can be done with a custom Ansible plugin. I'm submitting a pull request with a possible solution (this could be overkill if the uwsgi format can be easily changed to expect a sequence of mules, or other duplicate keys... )

shed_tool_conf.xml default tool_path

Hi,

Be ready, because the froggies 🇫🇷 are intensively using the galaxyproject Ansible role to eventually launch usegalaxy.fr or something like that :)

The default shed_tool_conf.xml

<?xml version="1.0"?>
<toolbox tool_path="database/shed_tools">
</toolbox>

In the admin UI panel, I get this message:

Error cloning repository: Command '['hg', 'clone', '-r', u'11', u'https://toolshed.g2.bx.psu.edu/repos/lecorguille/msnbase_readmsdata', u'/shared/mfs/data/dev/galaxy/server/database/shed_tools/toolshed.g2.bx.psu.edu/repos/lecorguille/msnbase_readmsdata/86a20118e743/msnbase_readmsdata']' returned non-zero exit status 255 Output was: abort: Permission denied: '/shared/mfs/data/dev/galaxy/server/database/shed_tools'

It's logical, since the galaxy user doesn't have the right to modify anything in {{ galaxy_server_dir }}.

Here is our config:

---

# galaxy
galaxy_server_dir: "{{ galaxy_root }}/server"
galaxy_venv_dir: "{{ galaxy_root }}/.venv"
galaxy_config_dir: "{{ galaxy_root }}/config"
galaxy_mutable_config_dir: "{{ galaxy_root }}/mutable-config"
galaxy_mutable_data_dir: "{{ galaxy_root }}/mutable-data"
galaxy_log_dir: "{{ galaxy_root }}/var/log/"
galaxy_shed_tools_dir: "{{ galaxy_root }}/shed_tools"
galaxy_config_style: yaml
galaxy_layout: root-dir
galaxy_separate_privileges: yes
galaxy_manage_paths: yes
galaxy_user: galaxy
galaxy_privsep_user: root
galaxy_group: galaxy
galaxy_manage_clone: yes

# supervisord
supervisor_socket_user: galaxy
supervisor_socket_chown: galaxy
supervisor_programs:
  - name: galaxy
    state: present
    command: uwsgi --yaml '{{ galaxy_config_dir }}/galaxy.yml' --logto2 '{{ galaxy_root }}/var/log/uwsgi.log'
    configuration: |
      autostart=true
      autorestart=false
      startretries=1
      startsecs=10
      user=galaxy
      umask=022
      directory={{ galaxy_server_dir }}
      environment=HOME={{ galaxy_root }},VIRTUAL_ENV={{ galaxy_venv_dir }},PATH={{ galaxy_venv_dir }}/bin:%(ENV_PATH)s,DRMAA_LIBRARY_PATH={{ drmaa_library_path }}
galaxy_restart_handler_name: Restart Galaxy

Is there any var to set this <toolbox tool_path="database/shed_tools">, or are we missing something to set the root dir of Galaxy?

In the meanwhile, I guess I will play with the replace module.
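Since the replace module was mentioned: a hedged stopgap sketch along those lines, assuming the rendered shed_tool_conf.xml ends up in galaxy_mutable_config_dir (check the actual location on your deployment):

```yaml
- name: Point shed_tool_conf.xml at the writable shed tools dir (stopgap)
  replace:
    path: "{{ galaxy_mutable_config_dir }}/shed_tool_conf.xml"
    regexp: 'tool_path="database/shed_tools"'
    replace: 'tool_path="{{ galaxy_shed_tools_dir }}"'
```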

On unprivileged setup, ansible_user_uid undefined

When running on an unprivileged setup, with the same user SSHing in and installing Galaxy, I get the following error:

TASK [galaxyproject.galaxy : Set Galaxy user facts] ***************************************************************
fatal: [galaxy-gxa-002]: FAILED! => {"msg": "The field 'become' has an invalid value, which includes an undefined variable. The error was: 'ansible_user_uid' is undefined"}

For some reason this doesn't happen when the SSH user and the galaxy_user (still all unprivileged) are different. Any ideas?

Problem seems to be here: https://github.com/galaxyproject/ansible-galaxy/blob/master/defaults/main.yml#L51

Trouble signing up / logging into galaxy via Github

I'm having trouble signing up / logging into galaxy via Github... I had a defunct email as the primary address in my Github profile. I've since fixed that and revoked/reauthorized galaxy, but it feels like there's something cached somewhere.

jlozadad in IRC suggested I create an issue in the ansible-galaxy repo.

If this isn't the correct place, please point me in the right direction.

Thanks in advance,

-s

Problems with ansible 2.3.0.0

Hello,

I have difficulties running Ansible version 2.3.0.0.
Should the playbook in the README file work with that ansible version?

I got permission errors during the postgres installation, but was able to resolve this by modifying the playbook:

instead of
galaxyproject.postgresql

writing

- role: galaxyproject.postgresql
  become: yes

but now the task

TASK [natefoo.postgresql_objects : Create and drop users]

fails with


fatal: [10.60.64.239]: FAILED! => {
    "failed": true, 
    "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of '/tmp/ansible-tmp-1496834871.61-164298282349901/': Operation not permitted\nchown: changing ownership of '/tmp/ansible-tmp-1496834871.61-164298282349901/postgresql_user.py': Operation not permitted\n). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"

}

Any ideas?

Rebuild client only when needed

Should the client build be handled as part of calling scripts/common_startup.sh instead of invoking make directly? common_startup.sh has some smart logic to avoid rebuilding the client if there are no changes since the last build, and reimplementing it in this role doesn't seem to make a lot of sense.

Tool installation via webinterface fails with permission errors

When I try to install tools from the Tool Shed via the web interface, I get permission denied errors for ../server/database, ../server/tools and ../server/tool-data. If I change the permissions of the directories by hand, it will crash a bit later anyway.

I used the recommended settings to install my instance, with galaxy_separate_privileges: true, the privileged user gxpriv, and the normal user galaxy. My instance is started via supervisorctl by the user gxpriv.
Folder <galaxy_install_folder>/server/database has 0755 and gxpriv:galaxy
Folder <galaxy_install_folder>/server/tools has 0755 and gxpriv:galaxy
Folder <galaxy_install_folder>/server/tool-data has 0755 and gxpriv:galaxy

Error cloning repository: Command '['hg', 'clone', '-r', u'11', u'https://toolshed.g2.bx.psu.edu/repos/devteam/blast_datatypes', u'/srv/galaxy/server/database/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/blast_datatypes/01b38f20197e/blast_datatypes']' returned non-zero exit status 255 Output was: abort: Permission denied: '/srv/galaxy/server/database/shed_tools' 

Any idea how to fix this or what the root of the problem could be?

Create tool_data_path with other directories

Can the tool_data_path, defined in galaxy_config.galaxy.tool_data_path, be added to the folder creation task?

I am trying to deploy a blastdb.loc, but the tool_data_path directory hasn't been created by the time the galaxy_config_files list gets processed.

galaxy_config_files:
  - src: files/galaxy/blastdb.loc
    dest: "{{ galaxy_mutable_data_dir }}/tool-data/blastdb.loc"
  - src: files/galaxy/blastdb_p.loc
    dest: "{{ galaxy_mutable_data_dir }}/tool-data/blastdb_p.loc"
...
galaxy_config:
  galaxy:
    tool_data_path: "{{ galaxy_mutable_data_dir }}/tool-data"
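Until the role creates it, a pre-task can make the directory before galaxy_config_files is processed (sketch; ownership values are assumptions):

```yaml
pre_tasks:
  - name: Create tool_data_path before config files are copied
    file:
      path: "{{ galaxy_mutable_data_dir }}/tool-data"
      state: directory
      owner: galaxy
      group: galaxy
      mode: "0755"
    become: true
```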

Nginx proxy permission denied to static content

Hello,
I'm trying to set up a server with Ansible and followed the Galaxy admin training tutorial. Everything is working well so far. Now I tried to use some of the best practice settings that are described in the README of this repo. The problem is that nginx doesn't have access to the static file folder.

All files are placed in /srv/galaxy and are only accessible by the galaxy group. The static content that nginx should serve is placed in /srv/galaxy/server/static. When nginx tries to serve the static files, I get a permission denied error, because nginx is not part of the galaxy group and is not allowed to read anything in /srv/galaxy/*, and I don't want to give nginx full read access to my galaxy folder including configs etc.

My plan so far is to write a role that copies the content of /srv/galaxy/server/static to /var/www/ and changes ownership to www-data so nginx can access and serve it.

Is there any better solution for this? (Maybe a config var that lets Ansible deploy static content to a custom location.)
Is using nginx as a proxy to serve static content a good approach, or should I just let uWSGI do the job?
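One alternative to copying the files out: grant the nginx worker user read access with POSIX ACLs, leaving the restrictive group ownership in place. A sketch, assuming the worker runs as www-data and the acl package is installed:

```yaml
- name: Allow nginx to traverse to the static directory
  acl:
    path: "{{ item }}"
    entity: www-data
    etype: user
    permissions: x
    state: present
  become: true
  loop:
    - /srv/galaxy
    - /srv/galaxy/server

- name: Allow nginx to read static content
  acl:
    path: /srv/galaxy/server/static
    entity: www-data
    etype: user
    permissions: rx
    recursive: true
    state: present
  become: true
```

This avoids giving nginx any access to configs elsewhere under /srv/galaxy, since only the traversal bit is granted on the parent directories.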

Some of my group_vars

...
galaxy_separate_privileges: true
galaxy_manage_paths: true
galaxy_layout: root-dir
galaxy_root: /srv/galaxy
galaxy_file_path: /data
galaxy_user:
  name: galaxy
  shell: /bin/bash
galaxy_group: galaxy
galaxy_privsep_user: gxpriv
galaxy_commit_id: release_19.01
...
galaxy_config:
  galaxy:
    ...
    file_path: /data
    ...
  uwsgi:
    ...
    socket: 127.0.0.1:8080
    static-map:
      - /static/style={{ galaxy_server_dir }}/static/style/blue
      - /static={{ galaxy_server_dir }}/static
    master: true
    ...

nginx_package_name: nginx-full 
nginx_remove_default_vhost: true
nginx_server_names_hash_bucket_size: "128"
nginx_vhosts:
  - listen: 80
    server_name: "{{ hostname }}"
    root: "/var/www/{{ hostname }}"
    index: index.html
    access_log: /var/log/nginx/access.log
    error_log: /var/log/nginx/error.log
    state: present
    filename: "{{ hostname }}.conf"
    extra_parameters: |
        client_max_body_size 10G; 
        uwsgi_read_timeout 2400; 
        location / {
            uwsgi_pass      127.0.0.1:8080;
            uwsgi_param UWSGI_SCHEME $scheme;
            include         uwsgi_params;
        }
        location /static {
                alias {{ galaxy_server_dir }}/static;
                expires 24h;
        }
        ....

Parts of my playbook.yml

  roles:
    - galaxyproject.repos
    - role: galaxyproject.postgresql
      become: true
    - role: natefoo.postgresql_objects
      become: true
      become_user: postgres
    - galaxyproject.galaxy
    - role: geerlingguy.pip
      become: true
    - role: customModifications
      become: true
      become_user: gxpriv
    - role: usegalaxy-eu.supervisor
      become: true
    - role: geerlingguy.nginx
      become: true

Undefined layout variable errors

@vazovn reported an issue on Gitter with the layout set to root-dir and tool_dependency_dir set in galaxy_config.galaxy, that attempting to use galaxy_tool_dependency_dir in another var (miniconda_prefix) was causing the error:

FAILED! => {"msg": "The conditional check ''galaxy_' ~ item in vars and item in ((galaxy_config | default({}))[galaxy_app_config_section] | default({}))' failed. The error was: error while evaluating conditional ('galaxy_' ~ item in vars and item in ((galaxy_config | default({}))[galaxy_app_config_section] | default({}))): 'galaxy_tool_dependency_dir' is undefined\n\nThe error appears to be in '/uio/kant/usit-ft-u1/nikolaiv/galaxy-ImmunoHub-playbooks/roles/galaxyproject.galaxy/tasks/layout.yml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Check that any explicitly set Galaxy config options match the values of explicitly set variables\n ^ here\n"}

Adding galaxy_tool_dependency_dir to the Set any unset variables from layout defaults task fixes this. I am not sure why the vars in the last 2 tasks of tasks/layout.yml aren't defaulted in the "Set any unset..." task, but if there's no reason not to, we should default them there.

`--help` and `--version` throw an error

$ ansible-galaxy --help
Usage: ansible-galaxy [delete|import|info|init|install|list|login|remove|search|setup] [--help] [options] ...

Options:
-h, --help show this help message and exit
-v, --verbose verbose mode (-vvv for more, -vvvv to enable connection
debugging)
--version show program's version number and exit
ERROR! Missing required action
$ echo $?
5

Warnings

bare variable

[DEPRECATION WARNING]: evaluating galaxy_manage_static_setup as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This
feature will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

for the following vars:

  • galaxy_manage_static_setup
  • galaxy_create_privsep_user
  • galaxy_create_user
  • galaxy_fetch_dependencies
  • galaxy_manage_download
  • galaxy_manage_paths
  • galaxy_manage_clone
  • galaxy_manage_database (Include database management tasks)
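The deprecation can presumably be silenced by casting explicitly in each conditional; a sketch (the task and file names are hypothetical):

```yaml
- name: Include static setup tasks
  include_tasks: static_setup.yml
  when: galaxy_manage_static_setup | bool
```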

jinja

 [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ galaxy_create_user if __galaxy_privsep_user_name != 'root' else false }}

more for dagobah but

[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 192.52.34.236 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to using the
discovered platform python for this host. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information. This feature will be removed in version 2.12. Deprecation warnings can be disabled by
 setting deprecation_warnings=False in ansible.cfg.

tar dependencies aren't versioned

When using tar files to download dependencies, the URL for a dependency can change, but if the role name used in the dependency file is not changed, it is considered the same dependency and is not re-downloaded.

This means that to ensure the correct dependency is downloaded, either --force-with-deps has to be used everywhere (since sub-dependencies won't be updated if you just use --force), or you have to change the role name, forcing you to update the tasks using it.

If you do update the role name you'll slowly fill your ~/.ansible/roles with every version of the role you've ever used.

I propose that either:

  • meta/main.yml should be able to hold a version number which is checked against.
  • the url used to download the role is stored beside the role in ~/.ansible/roles and checked against when a new role is installed with the same name.

Correct configuration to use slurm for jobs

Hello,

I am trying to run the admin training for Connecting Galaxy to a compute cluster. I could reproduce the steps up until "Galaxy and Slurm". The first log output is different, and I couldn't figure out why, or what I have to do to make it look as expected.
What does my uWSGI configuration need to look like to be able to run jobs on the Slurm nodes?
I already tried leaving out some of the config parts, but nothing really worked.

Part of my group_vars (the config I use comes from the Galaxy deployment admin training):

galaxy_config:
  ...
  uwsgi:
    # Default values
    socket: 127.0.0.1:8080
    buffer-size: 16384
    processes: 1
    threads: 4
    offload-threads: 2
    static-map:
      - /static/style={{ galaxy_server_dir }}/static/style/blue
      - /static={{ galaxy_server_dir }}/static
    master: true
    virtualenv: "{{ galaxy_venv_dir }}"
    pythonpath: "{{ galaxy_server_dir }}/lib"
    module: galaxy.webapps.galaxy.buildapp:uwsgi_app()
    thunder-lock: true
    die-on-term: true
    hook-master-start:
      - unix_signal:2 gracefully_kill_them_all
      - unix_signal:15 gracefully_kill_them_all
    py-call-osafterfork: true
    enable-threads: true
    # Our additions
    mule:
      - lib/galaxy/main.py
      - lib/galaxy/main.py
    farm: job-handlers:1,2

job_conf.xml

<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm">
        <destination id="slurm" runner="slurm"/>
        <destination id="local" runner="local"/>
    </destinations>
</job_conf>

Role fails with missing python dependencies

Using the ansible-galaxy role to install a server on a Vagrant RHEL 7.2 box fails at various stages with dependency errors. First it was six, but as soon as I install the missing packages by hand in the virtualenv, new errors keep coming up.

I run Ansible version 2.2.0.0 on the host.

Playbook:

---
- hosts: galaxy
  pre_tasks:
    - name: Install Python Virtual Environment
      package:
        name: python-virtualenv
        state: present
    - name: Install Git
      package:
        name: git
        state: present

  roles:
    - galaxy

Group Configuration:

---
galaxy_server_dir: /galaxy
galaxy_vcs: git
galaxy_changeset_id: master

Error:

TASK [galaxy : Get current Galaxy DB version] **********************************
task path: /Users/fred/workspaces/ansible-galaxy-installer/roles/galaxy/tasks/database.yml:4
Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/commands/command.py
<galaxy.vagrant.dev> ESTABLISH SSH CONNECTION FOR USER: vagrant
<galaxy.vagrant.dev> SSH: EXEC ssh -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /Users/fred/workspaces/ansible-galaxy-installer/.vagrant/machines/galaxy/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=none galaxy.vagrant.dev '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1482028066.04-292265552081 `" && echo ansible-tmp-1482028066.04-292265552081="` echo $HOME/.ansible/tmp/ansible-tmp-1482028066.04-292265552081 `" ) && sleep 0'"'"''
<galaxy.vagrant.dev> PUT /var/folders/71/2xvg3s813x5_t15q3frx28tm0000gp/T/tmp_xKUfu TO /home/vagrant/.ansible/tmp/ansible-tmp-1482028066.04-292265552081/command.py
<galaxy.vagrant.dev> SSH: EXEC sftp -b - -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /Users/fred/workspaces/ansible-galaxy-installer/.vagrant/machines/galaxy/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=none '[galaxy.vagrant.dev]'
<galaxy.vagrant.dev> ESTABLISH SSH CONNECTION FOR USER: vagrant
<galaxy.vagrant.dev> SSH: EXEC ssh -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /Users/fred/workspaces/ansible-galaxy-installer/.vagrant/machines/galaxy/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=none galaxy.vagrant.dev '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1482028066.04-292265552081/ /home/vagrant/.ansible/tmp/ansible-tmp-1482028066.04-292265552081/command.py && sleep 0'"'"''
<galaxy.vagrant.dev> ESTABLISH SSH CONNECTION FOR USER: vagrant
<galaxy.vagrant.dev> SSH: EXEC ssh -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /Users/fred/workspaces/ansible-galaxy-installer/.vagrant/machines/galaxy/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=none -tt galaxy.vagrant.dev '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-asxukvejpxeobceuxyjxoechxqarrdon; GALAXY_EGGS_PATH=/galaxy/eggs /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1482028066.04-292265552081/command.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1482028066.04-292265552081/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
fatal: [galaxy.vagrant.dev]: FAILED! => {
    "changed": false, 
    "cmd": [
        "/galaxy/.venv/bin/python", 
        "/galaxy/scripts/manage_db.py", 
        "-c", 
        "/galaxy/galaxy.ini", 
        "db_version"
    ], 
    "delta": "0:00:00.010776", 
    "end": "2016-12-18 02:26:33.038926", 
    "failed": true, 
    "failed_when_result": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "/galaxy/.venv/bin/python /galaxy/scripts/manage_db.py -c /galaxy/galaxy.ini db_version", 
            "_uses_shell": false, 
            "chdir": "/galaxy", 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "warn": true
        }, 
        "module_name": "command"
    }, 
    "rc": 1, 
    "start": "2016-12-18 02:26:33.028150", 
    "stderr": "Traceback (most recent call last):\n  File \"/galaxy/scripts/manage_db.py\", line 7, in <module>\n    from migrate.versioning.shell import main\nImportError: No module named migrate.versioning.shell", 
    "stdout": "", 
    "stdout_lines": [], 
    "warnings": []
}
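The traceback above (`No module named migrate.versioning.shell`) indicates that the package providing the `migrate` module (sqlalchemy-migrate) is missing from Galaxy's virtualenv. A minimal sketch for checking whether a given interpreter can import a module; run it with the virtualenv's Python (e.g. `/galaxy/.venv/bin/python`, the path from the failure above):

```python
# Sketch: check whether a top-level module is importable in the current
# interpreter, without actually importing it.
import importlib.util

def has_module(name):
    """Return True if the top-level module `name` can be imported."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    # False here means the package providing `migrate` is absent from the venv.
    print(has_module("migrate"))
```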

ghost role frozen in infinite 'Running' status

Hi everyone,

I am trying to remove an old role and add a new one in its place.

I have removed the old role's repository from GitHub, but the ghost role on Ansible Galaxy is still there, frozen in an infinite 'Running' status.

The delete switch on the left is disabled, so it does not let me remove the role.

[screenshot: galaxy_issue]

Meanwhile, the new role I want to add has the same name, so when I try adding it, I receive the following error:

[screenshot: galaxy_issue2]

How can I clean this up?

Thank you.

Travis CI web hook creates a new role on Galaxy

Until a few months ago, you could have a repository name on GitHub (e.g. ansible-role-beegfs) that differed from the role name on Ansible Galaxy (stackhpc.beegfs). Now, however, it appears that when the Galaxy webhook is triggered by a passing Travis CI build, a new role with an undesirable name is created (e.g. stackhpc.ansible-role-beegfs). This is really inconvenient, as we have many other roles that follow the same convention, and it worked until these changes were made. Is this behavior here to stay, or is it a known issue that will be dealt with?
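One possible mitigation, assuming the importer honors the `role_name` field of `galaxy_info` (the field Ansible Galaxy provides for decoupling the role name from the repository name), is to pin the name in `meta/main.yml`. A sketch, with illustrative field values:

```yaml
# meta/main.yml (sketch; values are illustrative)
galaxy_info:
  role_name: beegfs            # import as stackhpc.beegfs,
                               # not stackhpc.ansible-role-beegfs
  author: stackhpc
  description: Deploys BeeGFS
  min_ansible_version: "2.4"
```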

Include DB upgrade messages in debug output?

12:45:12 TASK [galaxyproject.galaxy : Upgrade Galaxy DB] ********************************

12:45:47 changed: [sn04.bi.uni-freiburg.de]
12:45:47 

This hides a lot of potentially important information.
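One way to surface that information, sketched below with standard Ansible modules (this is not the role's actual task; `galaxy_venv_dir` and `galaxy_server_dir` are assumed to be set to this role's usual paths):

```yaml
# Sketch: register the upgrade command's output and print it, so the DB
# migration messages appear in the play output instead of being swallowed.
- name: Upgrade Galaxy DB
  command: "{{ galaxy_venv_dir }}/bin/python scripts/manage_db.py upgrade"
  args:
    chdir: "{{ galaxy_server_dir }}"
  register: __db_upgrade_result

- name: Show DB upgrade messages
  debug:
    var: __db_upgrade_result.stdout_lines
```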

Using role with a `--become-user` call fails 0.8.4 --> 0.9.0

I used to call this role from a playbook at version 0.8.4, executing it with a command of the form:

ansible-playbook --become --become-user=gxa_galaxy --inventory=hosts playbook/playbook.yml

and it used to work. I need to do it this way because I SSH in as one user but become another to run things. The error I get now is:

TASK [galaxyproject.galaxy : Set Galaxy user facts] ************************************************************************************************************************************************
fatal: []: FAILED! => {"msg": "The field 'remote_user' has an invalid value, which includes an undefined variable. The error was: '__galaxy_remote_user' is undefined"}
	to retry, use: --limit @/Users/pmoreno/Development/ansible/galaxy-lsf/playbook/playbook.retry

I think the role now tries to set __galaxy_remote_user from ansible_user. But if I set ansible_user to the user I become (the user I want Galaxy to run as), I can no longer SSH into the target machine.
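For reference, a minimal sketch of moving the become settings from the CLI into the play, so the SSH login user (from the inventory) stays separate from the user the tasks run as. The username comes from the report above; whether the role's user-fact detection accepts this arrangement is exactly what this issue is about:

```yaml
# Sketch: play-level become instead of --become/--become-user on the CLI.
# SSH uses the inventory's ansible_user; tasks run as gxa_galaxy.
- hosts: galaxyservers
  become: true
  become_user: gxa_galaxy
  roles:
    - galaxyproject.galaxy
```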

galaxy.yml

Hi @ALL

Shouldn't we implement a galaxy.yml.j2 in this repo at some point? I can of course contribute.

Best

Mutable Configs Broken in Dev (Installable Galaxy)

TASK [galaxyprojectdotorg.galaxy : Instantiate mutable configuration files] ****
failed: [localhost] (item={u'dest': u'/galaxy/migrated_tools_conf.xml', u'src': u'/galaxy/config/migrated_tools_conf.xml.sample'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["cp", "/galaxy/config/migrated_tools_conf.xml.sample", "/galaxy/migrated_tools_conf.xml"], "delta": "0:00:00.305767", "end": "2019-08-09 17:39:02.255266", "item": {"dest": "/galaxy/migrated_tools_conf.xml", "src": "/galaxy/config/migrated_tools_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-08-09 17:39:01.949499", "stderr": "cp: cannot stat '/galaxy/config/migrated_tools_conf.xml.sample': No such file or directory", "stderr_lines": ["cp: cannot stat '/galaxy/config/migrated_tools_conf.xml.sample': No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [localhost] (item={u'dest': u'/galaxy/shed_data_manager_conf.xml', u'src': u'/galaxy/config/shed_data_manager_conf.xml.sample'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["cp", "/galaxy/config/shed_data_manager_conf.xml.sample", "/galaxy/shed_data_manager_conf.xml"], "delta": "0:00:00.333531", "end": "2019-08-09 17:39:02.741726", "item": {"dest": "/galaxy/shed_data_manager_conf.xml", "src": "/galaxy/config/shed_data_manager_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-08-09 17:39:02.408195", "stderr": "cp: cannot stat '/galaxy/config/shed_data_manager_conf.xml.sample': No such file or directory", "stderr_lines": ["cp: cannot stat '/galaxy/config/shed_data_manager_conf.xml.sample': No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [localhost] (item={u'dest': u'/galaxy/shed_tool_data_table_conf.xml', u'src': u'/galaxy/config/shed_tool_data_table_conf.xml.sample'}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["cp", "/galaxy/config/shed_tool_data_table_conf.xml.sample", "/galaxy/shed_tool_data_table_conf.xml"], "delta": "0:00:00.312092", "end": "2019-08-09 17:39:03.237230", "item": {"dest": "/galaxy/shed_tool_data_table_conf.xml", "src": "/galaxy/config/shed_tool_data_table_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-08-09 17:39:02.925138", "stderr": "cp: cannot stat '/galaxy/config/shed_tool_data_table_conf.xml.sample': No such file or directory", "stderr_lines": ["cp: cannot stat '/galaxy/config/shed_tool_data_table_conf.xml.sample': No such file or directory"], "stdout": "", "stdout_lines": []}

Enabling `enable_beta_gdpr` with privilege separation fails

I tried to make a small change to the setup explained in https://training.galaxyproject.org/training-material/topics/admin/tutorials/ansible-galaxy/tutorial.html: I added enable_beta_gdpr: true under galaxy_config: > galaxy: in group_vars/galaxyservers.yml.
The basic configuration in the tutorial setup is:

galaxy_separate_privileges: true
galaxy_manage_paths: true
galaxy_layout: root-dir
galaxy_root: /srv/galaxy
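The change described above corresponds to a group_vars entry like:

```yaml
# group_vars/galaxyservers.yml (the addition described above)
galaxy_config:
  galaxy:
    enable_beta_gdpr: true
```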

After the change, I see the following traceback in the Galaxy logs:

Jun 22 23:23:02 galaxy uwsgi[3402783]: DEBUG:galaxy.config:Configuration directory is /srv/galaxy/server/config
Jun 22 23:23:02 galaxy uwsgi[3402783]: DEBUG:galaxy.config:Data directory is /srv/galaxy/var
Jun 22 23:23:02 galaxy uwsgi[3402783]: DEBUG:galaxy.config:Managed config directory is /srv/galaxy/server/config
Jun 22 23:23:03 galaxy uwsgi[3402783]: DEBUG:galaxy.containers:config file '/srv/galaxy/server/config/containers_conf.yml' does not exist, running with default config
Jun 22 23:23:03 galaxy uwsgi[3402783]: Traceback (most recent call last):
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/config.py", line 565, in configure
Jun 22 23:23:03 galaxy uwsgi[3402783]: handler = self.configure_handler(handlers[name])
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/config.py", line 738, in configure_handler
Jun 22 23:23:03 galaxy uwsgi[3402783]: result = factory(**kwargs)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/handlers.py", line 150, in __init__
Jun 22 23:23:03 galaxy uwsgi[3402783]: BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/handlers.py", line 57, in __init__
Jun 22 23:23:03 galaxy uwsgi[3402783]: logging.FileHandler.__init__(self, filename, mode, encoding, delay)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/__init__.py", line 1032, in __init__
Jun 22 23:23:03 galaxy uwsgi[3402783]: StreamHandler.__init__(self, self._open())
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/__init__.py", line 1061, in _open
Jun 22 23:23:03 galaxy uwsgi[3402783]: return open(self.baseFilename, self.mode, encoding=self.encoding)
Jun 22 23:23:03 galaxy uwsgi[3402783]: PermissionError: [Errno 13] Permission denied: '/srv/galaxy/server/compliance.log'
Jun 22 23:23:03 galaxy uwsgi[3402783]: During handling of the above exception, another exception occurred:
Jun 22 23:23:03 galaxy uwsgi[3402783]: Traceback (most recent call last):
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/srv/galaxy/server/lib/galaxy/webapps/galaxy/buildapp.py", line 50, in app_factory
Jun 22 23:23:03 galaxy uwsgi[3402783]: app = galaxy.app.UniverseApplication(global_conf=global_conf, **kwargs)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/srv/galaxy/server/lib/galaxy/app.py", line 83, in __init__
Jun 22 23:23:03 galaxy uwsgi[3402783]: config.configure_logging(self.config)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/srv/galaxy/server/lib/galaxy/config/__init__.py", line 1077, in configure_logging
Jun 22 23:23:03 galaxy uwsgi[3402783]: logging.config.dictConfig(logging_conf)
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/config.py", line 802, in dictConfig
Jun 22 23:23:03 galaxy uwsgi[3402783]: dictConfigClass(config).configure()
Jun 22 23:23:03 galaxy uwsgi[3402783]: File "/usr/lib64/python3.6/logging/config.py", line 573, in configure
Jun 22 23:23:03 galaxy uwsgi[3402783]: '%r: %s' % (name, e))
Jun 22 23:23:03 galaxy uwsgi[3402783]: ValueError: Unable to configure handler 'compliance_log': [Errno 13] Permission denied: '/srv/galaxy/server/compliance.log'
Jun 22 23:23:03 galaxy systemd[1]: galaxy.service: main process exited, code=exited, status=1/FAILURE

When started without privilege separation (e.g. via run.sh on a fresh git clone), compliance.log is created in the Galaxy root directory.

[0.8.4] Instantiate mutable configuration files fails as files are not found (in 19.09)

When running 0.8.4 version of this role (as on 0.9.0 I don't get that far), in the mutable setup, it fails on:

TASK [galaxyproject.galaxy : Instantiate mutable configuration files] ******************************************************************************************************************************
failed: [hx-noah-01-06] (item={u'dest': u'/some-path/galaxy-gxa-001-data/config/shed_tool_conf.xml', u'src': u'/some-path/galaxy-19.09/config/shed_tool_conf.xml.sample'}) => {"changed": true, "cmd": ["cp", "/some-path/galaxy-19.09/config/shed_tool_conf.xml.sample", "/some-path/galaxy-gxa-001-data/config/shed_tool_conf.xml"], "delta": "0:00:00.003480", "end": "2019-10-28 11:42:48.868639", "item": {"dest": "/some-path/galaxy-gxa-001-data/config/shed_tool_conf.xml", "src": "/some-path/galaxy-19.09/config/shed_tool_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-10-28 11:42:48.865159", "stderr": "cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_tool_conf.xml.sample’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_tool_conf.xml.sample’: No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [hx-noah-01-06] (item={u'dest': u'/some-path/galaxy-gxa-001-data/config/migrated_tools_conf.xml', u'src': u'/some-path/galaxy-19.09/config/migrated_tools_conf.xml.sample'}) => {"changed": true, "cmd": ["cp", "/some-path/galaxy-19.09/config/migrated_tools_conf.xml.sample", "/some-path/galaxy-gxa-001-data/config/migrated_tools_conf.xml"], "delta": "0:00:00.002804", "end": "2019-10-28 11:42:51.011456", "item": {"dest": "/some-path/galaxy-gxa-001-data/config/migrated_tools_conf.xml", "src": "/some-path/galaxy-19.09/config/migrated_tools_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-10-28 11:42:51.008652", "stderr": "cp: cannot stat ‘/some-path/galaxy-19.09/config/migrated_tools_conf.xml.sample’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/some-path/galaxy-19.09/config/migrated_tools_conf.xml.sample’: No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [hx-noah-01-06] (item={u'dest': u'/some-path/galaxy-gxa-001-data/config/shed_data_manager_conf.xml', u'src': u'/some-path/galaxy-19.09/config/shed_data_manager_conf.xml.sample'}) => {"changed": true, "cmd": ["cp", "/some-path/galaxy-19.09/config/shed_data_manager_conf.xml.sample", "/some-path/galaxy-gxa-001-data/config/shed_data_manager_conf.xml"], "delta": "0:00:00.002556", "end": "2019-10-28 11:42:53.245771", "item": {"dest": "/some-path/galaxy-gxa-001-data/config/shed_data_manager_conf.xml", "src": "/some-path/galaxy-19.09/config/shed_data_manager_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-10-28 11:42:53.243215", "stderr": "cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_data_manager_conf.xml.sample’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_data_manager_conf.xml.sample’: No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [hx-noah-01-06] (item={u'dest': u'/some-path/galaxy-gxa-001-data/config/shed_tool_data_table_conf.xml', u'src': u'/some-path/galaxy-19.09/config/shed_tool_data_table_conf.xml.sample'}) => {"changed": true, "cmd": ["cp", "/some-path/galaxy-19.09/config/shed_tool_data_table_conf.xml.sample", "/some-path/galaxy-gxa-001-data/config/shed_tool_data_table_conf.xml"], "delta": "0:00:00.002730", "end": "2019-10-28 11:42:55.385746", "item": {"dest": "/some-path/galaxy-gxa-001-data/config/shed_tool_data_table_conf.xml", "src": "/some-path/galaxy-19.09/config/shed_tool_data_table_conf.xml.sample"}, "msg": "non-zero return code", "rc": 1, "start": "2019-10-28 11:42:55.383016", "stderr": "cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_tool_data_table_conf.xml.sample’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/some-path/galaxy-19.09/config/shed_tool_data_table_conf.xml.sample’: No such file or directory"], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/Users/pmoreno/Development/ansible/galaxy-lsf/playbook/playbook.retry

since none of the sample files listed in galaxy_mutable_configs appear to exist anymore in a 19.09 checkout.

Permissions on galaxy directory during privilege separation + NGINX access

During deployment with privilege separation, the permissions on the Galaxy directory can become too restrictive.

Overriding them with

--extra-vars "__galaxy_dir_perms='0755'"

solves the issue, but it is sub-optimal for new admins.

It is set due to

__galaxy_dir_perms: "{{ '0750' if __galaxy_user_group == __galaxy_privsep_user_group else '0755' }}"

But for EU (and others), we have it set such that:

galaxy_separate_privileges: true
galaxy_group: galaxy
galaxy_system_group: galaxy

Thus, given

__galaxy_user_group: "{{ ((galaxy_group | default({})).name | default(galaxy_group)) if galaxy_group is defined else (__galaxy_group_result.results[0].ansible_facts.getent_group.keys() | first) }}"
__galaxy_privsep_user_group: "{{ ((galaxy_group | default({})).name | default(galaxy_group)) if galaxy_group is defined else (__galaxy_group_result.results[1].ansible_facts.getent_group.keys() | first) }}"

In both cases the galaxy_group variable is set, so the two values are equal. However, this does not account for the fact that nginx still needs access to the directory.
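A possible alternative to widening the directory permissions, sketched with the standard user module. The group name galaxy comes from the variables above; the nginx user name is an assumption that depends on the distribution:

```yaml
# Sketch: grant the web server group access instead of opening the
# directory to everyone. Assumes the web server runs as user "nginx".
- name: Add the nginx user to the galaxy group
  become: true
  user:
    name: nginx
    groups: galaxy
    append: true
```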

galaxy_shed_tools_dir

Should galaxy_shed_tools_dir default to "{{ galaxy_server_dir }}/shed_tools" instead of "{{ galaxy_server_dir }}/../shed_tools"?
