oravirt / ansible-oracle
Oracle related stuff. Installs RAC/RAC One Node/Single Instance
License: MIT License
I added a logrotate template for all ADR files that are not cleaned up by ADR itself.
I also plan another cron job for the ADR housekeeping, for a complete rotation of all logfiles.
I am not sure when the second part will be finished, as I need a reliably working ADR flush. That was not easy in the old days with different Oracle versions on one host, and I never tested it again. Normally I only remove the XML files with find and no longer use the automatic ADR purge.
Which path should we take?
The expected way from Oracle or the easy way with find?
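The "easy way with find" mentioned above could be sketched like this (the ADR base path and the 30-day retention are assumptions):

```shell
# Purge ADR XML metadata files older than 30 days.
# The base path and the retention period are assumptions, adjust as needed.
ADR_BASE="${ADR_BASE:-/u01/app/oracle/diag}"
if [ -d "$ADR_BASE" ]; then
  # -mtime +30: only files not modified within the last 30 days
  find "$ADR_BASE" -type f -name '*.xml' -mtime +30 -print -delete
fi
```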
Basically, I'm trying to follow this guide https://oravirt.wordpress.com/2014/10/01/getting-started-with-ansible-oracle/ with ansible-oracle from the master branch.
Here is the relevant part of my Ansible run:
TASK: [oradb-create | Create database(s)] *************************************
failed: [localhost] => (item=[{'oracle_init_params': 'open_cursors=300,processes=700', 'is_racone': 'false', 'oracle_db_type': 'SI', 'service_name': 'orcl_serv', 'storage_type': 'FS', 'oracle_db_passwd': 'Oracle123', 'home': 'home1', 'oracle_db_mem_percent': 30, 'is_container': 'false', 'redolog_size_in_mb': 100, 'num_pdbs': 1, 'pdb_prefix': 'pdb', 'oracle_db_name': 'orcl', 'state': 'present', 'oracle_edition': 'EE', 'oracle_database_type': 'MULTIPURPOSE', 'oracle_version_db': '12.1.0.2'}, {'cmd': 'grep orcl:/u01/app/oracle/12.1.0.2/home1 /etc/oratab |wc -l', 'end': '2016-02-29 12:21:00.348156', 'stderr': '', 'stdout': '0', 'changed': True, 'rc': 0, 'item': {'oracle_init_params': 'open_cursors=300,processes=700', 'is_racone': 'false', 'oracle_db_type': 'SI', 'service_name': 'orcl_serv', 'storage_type': 'FS', 'oracle_db_passwd': 'Oracle123', 'home': 'home1', 'oracle_db_mem_percent': 30, 'is_container': 'false', 'redolog_size_in_mb': 100, 'num_pdbs': 1, 'pdb_prefix': 'pdb', 'oracle_db_name': 'orcl', 'state': 'present', 'oracle_edition': 'EE', 'oracle_database_type': 'MULTIPURPOSE', 'oracle_version_db': '12.1.0.2'}, 'warnings': [], 'delta': '0:00:00.012415', 'invocation': {'module_name': u'shell', 'module_complex_args': {}, 'module_args': u'grep orcl:/u01/app/oracle/12.1.0.2/home1 /etc/oratab |wc -l'}, 'stdout_lines': ['0'], 'start': '2016-02-29 12:21:00.335741'}]) => {"changed": true, "cmd": "export PATH='$PATH:/u01/app/oracle/12.1.0.2/home1/bin'; export LD_LIBRARY_PATH='$LD_LIBRARY_PATH:/u01/app/oracle/12.1.0.2/home1/lib'; time /u01/app/oracle/12.1.0.2/home1/bin/dbca -responseFile /u01/stage/rsp/dbca_orcl.rsp -silent -redoLogFileSize 100", "delta": "0:00:04.049797", "end": "2016-02-29 12:21:05.187685", "item": [{"home": "home1", "is_container": "false", "is_racone": "false", "num_pdbs": 1, "oracle_database_type": "MULTIPURPOSE", "oracle_db_mem_percent": 30, "oracle_db_name": "orcl", "oracle_db_passwd": "Oracle123", "oracle_db_type": "SI", "oracle_edition": "EE", 
"oracle_init_params": "open_cursors=300,processes=700", "oracle_version_db": "12.1.0.2", "pdb_prefix": "pdb", "redolog_size_in_mb": 100, "service_name": "orcl_serv", "state": "present", "storage_type": "FS"}, {"changed": true, "cmd": "grep orcl:/u01/app/oracle/12.1.0.2/home1 /etc/oratab |wc -l", "delta": "0:00:00.012415", "end": "2016-02-29 12:21:00.348156", "invocation": {"module_args": "grep orcl:/u01/app/oracle/12.1.0.2/home1 /etc/oratab |wc -l", "module_complex_args": {}, "module_name": "shell"}, "item": {"home": "home1", "is_container": "false", "is_racone": "false", "num_pdbs": 1, "oracle_database_type": "MULTIPURPOSE", "oracle_db_mem_percent": 30, "oracle_db_name": "orcl", "oracle_db_passwd": "Oracle123", "oracle_db_type": "SI", "oracle_edition": "EE", "oracle_init_params": "open_cursors=300,processes=700", "oracle_version_db": "12.1.0.2", "pdb_prefix": "pdb", "redolog_size_in_mb": 100, "service_name": "orcl_serv", "state": "present", "storage_type": "FS"}, "rc": 0, "start": "2016-02-29 12:21:00.335741", "stderr": "", "stdout": "0", "stdout_lines": ["0"], "warnings": []}], "rc": 1, "start": "2016-02-29 12:21:01.137888", "warnings": []}
stderr:
real 0m4.047s
user 0m3.011s
sys 0m0.339s
stdout: Cleaning up failed steps
5% complete
Copying database files
7% complete
8% complete
9% complete
DBCA Operation failed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl4.log" for further details.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/vagrant/my-single-instance-install.retry
localhost : ok=65 changed=9 unreachable=0 failed=1
real 1m47.695s
user 0m5.501s
sys 0m4.417s
All other useful files, such as:
— /u01/app/oracle/cfgtoollogs/dbca/orcl/orcl4.log;
— /u01/app/oracle/cfgtoollogs/dbca/orcl/trace.log;
— my-single-instance-install.yml
are attached.
Please forgive any ignorance; having never worked with Ansible before, I'm muddling my way through this.
I'm trying to follow the single instance install (ASM) but seem to be failing at the filesystem stage.
I keep hitting the error below; having tried to fudge around it, I'm not entirely sure what I'm missing:
TASK [orahost : filesystem | Create directories] ********************************************************************************************************************************************************************************************
fatal: [os-dat-ans-tst-01.local]: FAILED! => {"msg": "could not find 'filesystem' key in iterated item '{u'device': u'/dev/xvdc', u'mntp': u'/u01', u'vgname': u'vgora', u'pvname': u'/dev/xvdc1', u'lvname': u'lvora'}'"}
to retry, use: --limit @/root/ansible-oracle/single-instance-db-on-asm.retry
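A minimal sketch of what the iterated item may need, based on the error message above (the variable name host_fs_layout and the ext4 value are assumptions; the exact key set may differ by version):

```yaml
# The error says the task could not find a 'filesystem' key in the item,
# so each entry likely needs one in addition to the device/LVM keys:
host_fs_layout:
  - device: /dev/xvdc
    mntp: /u01
    vgname: vgora
    pvname: /dev/xvdc1
    lvname: lvora
    filesystem: ext4   # assumed value; pick the fs type you actually want
```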
Hello,
Is there a way to configure SSH equivalence for the user (we're using root at the moment) as part of the common tasks?
Hi,
Hope you are doing well. Sorry, I was a bit late in getting a good infrastructure with a bigger VM to try this.
I have used the latest code clone, which includes your fixes for the issues mentioned in #61.
Below is the error:
TASK [orahost : Install packages required by Oracle on OL/RHEL] **************************************************************************************
failed: [1.2.3.202] (item=[u'libselinux-python', u'procps', u'psmisc', u'module-init-tools', u'ethtool', u'bc', u'bind-utils', u'nfs-utils', u'util-linux-ng', u'xorg-x11-utils', u'xorg-x11-xauth', u'binutils', u'compat-libstdc++-33', u'compat-libstdc++-33.i686', u'unixODBC-devel', u'unixODBC-devel.i686', u'gcc', u'gcc-c++', u'glibc', u'glibc.i686', u'glibc-devel', u'glibc-devel.i686', u'libaio', u'libaio-devel', u'libaio.i686', u'libaio-devel.i686', u'libgcc', u'libgcc.i686', u'libstdc++', u'libstdc++-devel', u'libstdc++.i686', u'libstdc++-devel.i686', u'make', u'sysstat', u'openssh-clients', u'compat-libcap1', u'xorg-x11-xauth', u'xorg-x11-xinit', u'libXtst', u'xdpyinfo', u'xterm', u'xsetroot', u'libXp', u'libXt', u'libXtst', u'ntp', u'ksh', u'lvm2', u'xfsprogs', u'btrfs-progs', u'parted', u'nc', u'smartmontools', u'elfutils-libelf-devel.i686', u'elfutils-libelf-devel']) => {"changed": false, "item": ["libselinux-python", "procps", "psmisc", "module-init-tools", "ethtool", "bc", "bind-utils", "nfs-utils", "util-linux-ng", "xorg-x11-utils", "xorg-x11-xauth", "binutils", "compat-libstdc++-33", "compat-libstdc++-33.i686", "unixODBC-devel", "unixODBC-devel.i686", "gcc", "gcc-c++", "glibc", "glibc.i686", "glibc-devel", "glibc-devel.i686", "libaio", "libaio-devel", "libaio.i686", "libaio-devel.i686", "libgcc", "libgcc.i686", "libstdc++", "libstdc++-devel", "libstdc++.i686", "libstdc++-devel.i686", "make", "sysstat", "openssh-clients", "compat-libcap1", "xorg-x11-xauth", "xorg-x11-xinit", "libXtst", "xdpyinfo", "xterm", "xsetroot", "libXp", "libXt", "libXtst", "ntp", "ksh", "lvm2", "xfsprogs", "btrfs-progs", "parted", "nc", "smartmontools", "elfutils-libelf-devel.i686", "elfutils-libelf-devel"], "msg": "Failure talking to yum: failure: repodata/repomd.xml from rhel-7-server-beta-debug-rpms: [Errno 256] No more mirrors to try.\nhttps://cdn.redhat.com/content/beta/rhel/server/7/x86_64/debug/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden"}
Can you please help?
The content added to $ORACLE_HOME/sqlplus/admin/glogin.sql should be configurable, not hardcoded in the tasks: add it to a list and loop through it.
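A sketch of how this could look (the variable name glogin_lines and the sample entries are assumptions):

```yaml
# defaults/main.yml holds the configurable list...
glogin_lines:
  - "set sqlprompt '_user @ _connect_identifier > '"
  - "define _editor=vi"

# ...and the task loops over it instead of hardcoding the content:
- name: Add configured lines to glogin.sql
  lineinfile:
    dest: "{{ oracle_home_db }}/sqlplus/admin/glogin.sql"
    line: "{{ item }}"
  with_items: "{{ glogin_lines }}"
```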
The current way of creating a DB in oradb-manage-db is:
There are a bunch of tasks before and after to check if the db already exists and whatnot.
There is no easy way of enforcing state on the DB (putting the DB in archivelog mode, enabling flashback, etc.).
Using the oracle_db module instead will make that easier.
It is still possible to use a responsefile, but it is also possible to just pass parameters inline.
The changes will mainly affect this file: oradb-manage-db/tasks/manage-db.yml
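A rough sketch of what such a task could look like (the oracle_db parameter names shown here are assumptions, not the module's confirmed interface):

```yaml
# Illustrative only: declaratively create the DB and enforce its state.
- name: Create database and enforce archivelog/flashback state
  oracle_db:
    db_name: orcl
    oracle_home: "{{ oracle_home_db }}"
    sys_password: "{{ default_dbpass }}"   # assumed variable name
    state: present
    archivelog: true
    flashback: true
```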
Despite setting the variable oracle_reco_dir_fs: /u01/fra, the recovery area always ends up in $ORACLE_BASE/fast_recovery_area when using DBCA.
I haven't found a workaround yet, except using custom scripts with CREATE DATABASE. This is a minor issue but needs to be addressed anyway. Maybe I missed something while repeatedly reading the official Oracle docs ;)
If you have multiple dbs and their names clash in a grep, e.g.:
oracle_databases:
  - home: db1
    oracle_db_name: orcl
    # ...
  - home: db1
    oracle_db_name: orcl2
The oradb-create Add dotprofile (1) task, which basically greps for existing running dbs, gets
orcl2
orcl
for the orcl db, when it should only get orcl. This results in the .profile_orcl being badly formed around the ORACLE_SID bit.
The workaround is to avoid clashing names (orcl1 & orcl2 instead of orcl & orcl2), so it is not a huge issue.
The fix is to amend the grep to use the -w flag.
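The difference can be demonstrated with a mocked-up ps listing (the pmon process names follow the usual ora_pmon_<SID> pattern):

```shell
# Two databases whose SIDs clash as substrings: orcl and orcl2
ps_output='oracle  1234  ora_pmon_orcl
oracle  5678  ora_pmon_orcl2'

# Without -w, grepping for orcl's pmon also matches orcl2's:
printf '%s\n' "$ps_output" | grep ora_pmon_orcl      # matches both lines

# With -w, only the exact word matches (orcl2's pmon has a trailing "2"):
printf '%s\n' "$ps_output" | grep -w ora_pmon_orcl   # matches only orcl
```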
I would like to add support for a custom ORACLE_HOME for each database. I mentioned it in a pull request and you wrote that this is really welcome.
This is currently possible, but not with a custom path for ORACLE_HOME.
There are a lot of tasks and templates with '{{ oracle_base }}/{{ item.oracle_version_db }}/{{ item.home }}' instead of '{{ oracle_home_db }}'. I think you added this variable to the defaults at a later time and didn't fix the tasks.
I would like to add the variable oracle_home to oracle_databases and change the following line in oradb-create/defaults/main.yml:
oracle_home_db: "{% if item.oracle_home is defined %}{{ item.oracle_home }}{% else %}{{ oracle_base }}/{{ item.oracle_version_db }}/{{ item.home }}{% endif %}"
This is compatible with all existing configurations and gives us the ability to define a custom ORACLE_HOME for each database. As a side effect, you also get the ability to define a custom ORACLE_HOME for all databases, which is no longer hardcoded once oracle_home_db is defined as a variable.
I'll start the work on this change if this is fine for you.
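As a sketch, the proposal would allow configurations like this (paths and names are illustrative):

```yaml
# An optional oracle_home key per database overrides the default
# {{ oracle_base }}/{{ oracle_version_db }}/{{ home }} path.
oracle_databases:
  - home: db1
    oracle_version_db: 12.1.0.2
    oracle_db_name: orcl
    # no oracle_home key: defaults to /u01/app/oracle/12.1.0.2/db1
  - home: db2
    oracle_version_db: 12.1.0.2
    oracle_db_name: orcl2
    oracle_home: /opt/oracle/product/12.1.0.2   # custom ORACLE_HOME
```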
$ pwd
ansible-oracle
$ grep -R oracle_sysconfig *
orahost/defaults/main.yml: oracle_sysconfig:
orahost/tasks/main.yml: with_items: oracle_sysconfig
IMHO, the name of the variable *_sysconfig is misleading: it is indeed used by the sysctl module, and the values finally go to /etc/sysctl.conf, not to anything in the /etc/sysconfig/ directory.
So my opinion/question is: this variable should be called oracle_sysctl 👍
Do you have plans to officially support this OS at any point? I got your project working on it, but it would need some more modifications to make it more generic for all OSes.
I've tried to diagnose this, but I'm unfortunately struggling:
TASK [oradb-create : Add dotprofile (2)] ****************************************************************************************************************************************************************************************************
failed: [OL7-ANS-TST.local] (item=[{u'tablespaces': [{u'name': u'blerghapp1_data', u'autoextend': True, u'state': u'present', u'next': u'5M', u'content': u'permanent', u'maxsize': u'500M', u'bigfile': True, u'size': u'10M'}], u'is_container': True, u'users': [{u'state': u'present', u'grants': [u'approle1'], u'default_tablespace': u'blerghapp1_data', u'schema': u'blerghapp1'}], u'roles': [{u'state': u'present', u'grants': [u'create session', u'create table', u'select any table', u'select any dictionary'], u'name': u'approle1'}], u'oracle_db_type': u'SI', u'oracle_database_type': u'MULTIPURPOSE', u'init_parameters': [{u'scope': u'both', u'state': u'present', u'name': u'db_recovery_file_dest_size', u'value': u'20G'}], u'oracle_db_name': u'blergh', u'datafile_dest': u'+DATA', u'redolog_size_in_mb': 100, u'state': u'present', u'oracle_edition': u'EE', u'oracle_db_mem_totalmb': 1024, u'oracle_version_db': u'12.2.0.1', u'recoveryfile_dest': u'+FRA', u'services': [{u'state': u'started', u'name': u'blerghapp1_service'}], u'home': u'db1', u'storage_type': u'ASM'}, {'ansible_parsed': True, 'stderr_lines': [], u'cmd': u'ps -ef | grep -w "ora_pmon_blergh" |grep -v grep | sed 's/^.*pmon//g'', u'end': u'2018-06-13 09:15:23.387528', '_ansible_no_log': False, u'stdout': u'blergh', '_ansible_item_result': True, u'changed': False, 'item': {u'tablespaces': [{u'name': u'blerghapp1_data', u'autoextend': True, u'state': u'present', u'bigfile': True, u'content': u'permanent', u'maxsize': u'500M', u'next': u'5M', u'size': u'10M'}], u'is_container': True, u'users': [{u'state': u'present', u'grants': [u'approle1'], u'default_tablespace': u'blerghapp1_data', u'schema': u'blerghapp1'}], u'roles': [{u'state': u'present', u'grants': [u'create session', u'create table', u'select any table', u'select any dictionary'], u'name': u'approle1'}], u'oracle_db_type': u'SI', u'oracle_database_type': u'MULTIPURPOSE', u'init_parameters': [{u'scope': u'both', u'state': u'present', u'name': 
u'db_recovery_file_dest_size', u'value': u'20G'}], u'oracle_db_name': u'blergh', u'datafile_dest': u'+DATA', u'redolog_size_in_mb': 100, u'state': u'present', u'oracle_edition': u'EE', u'oracle_db_mem_totalmb': 1024, u'home': u'db1', u'recoveryfile_dest': u'+FRA', u'services': [{u'state': u'started', u'name': u'blerghapp1_service'}], u'oracle_version_db': u'12.2.0.1', u'storage_type': u'ASM'}, u'delta': u'0:00:00.061656', u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'raw_params': u'ps -ef | grep -w "ora_pmon_blergh" |grep -v grep | sed 's/^.*pmon//g'', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'blergh'], u'start': u'2018-06-13 09:15:23.325872', '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": [{"datafile_dest": "+DATA", "home": "db1", "init_parameters": [{"name": "db_recovery_file_dest_size", "scope": "both", "state": "present", "value": "20G"}], "is_container": true, "oracle_database_type": "MULTIPURPOSE", "oracle_db_mem_totalmb": 1024, "oracle_db_name": "blergh", "oracle_db_type": "SI", "oracle_edition": "EE", "oracle_version_db": "12.2.0.1", "recoveryfile_dest": "+FRA", "redolog_size_in_mb": 100, "roles": [{"grants": ["create session", "create table", "select any table", "select any dictionary"], "name": "approle1", "state": "present"}], "services": [{"name": "blerghapp1_service", "state": "started"}], "state": "present", "storage_type": "ASM", "tablespaces": [{"autoextend": true, "bigfile": true, "content": "permanent", "maxsize": "500M", "name": "blerghapp1_data", "next": "5M", "size": "10M", "state": "present"}], "users": [{"default_tablespace": "blerghapp1_data", "grants": ["approle1"], "schema": "blerghapp1", "state": "present"}]}, {"_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "ansible_parsed": true, "changed": false, "cmd": "ps -ef | grep -w "ora_pmon_blergh" |grep 
-v grep | sed 's/^.*pmon//g'", "delta": "0:00:00.061656", "end": "2018-06-13 09:15:23.387528", "failed": false, "invocation": {"module_args": {"raw_params": "ps -ef | grep -w "ora_pmon_blergh" |grep -v grep | sed 's/^.*pmon//g'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"datafile_dest": "+DATA", "home": "db1", "init_parameters": [{"name": "db_recovery_file_dest_size", "scope": "both", "state": "present", "value": "20G"}], "is_container": true, "oracle_database_type": "MULTIPURPOSE", "oracle_db_mem_totalmb": 1024, "oracle_db_name": "blergh", "oracle_db_type": "SI", "oracle_edition": "EE", "oracle_version_db": "12.2.0.1", "recoveryfile_dest": "+FRA", "redolog_size_in_mb": 100, "roles": [{"grants": ["create session", "create table", "select any table", "select any dictionary"], "name": "approle1", "state": "present"}], "services": [{"name": "blerghapp1_service", "state": "started"}], "state": "present", "storage_type": "ASM", "tablespaces": [{"autoextend": true, "bigfile": true, "content": "permanent", "maxsize": "500M", "name": "blerghapp1_data", "next": "5M", "size": "10M", "state": "present"}], "users": [{"default_tablespace": "blerghapp1_data", "grants": ["approle1"], "schema": "blerghapp1", "state": "present"}]}, "rc": 0, "start": "2018-06-13 09:15:23.325872", "stderr": "", "stderr_lines": [], "stdout": "blergh", "stdout_lines": ["blergh"]}], "msg": "AnsibleUndefinedVariable: 'dbh' is undefined"}
There is a problem with role_separation=false:
Role separation should use different users for the GI_HOME and for the databases' ORACLE_HOMEs, not just different group assignments in the responsefile for GI_HOMEs.
Information from grid-tbr_oraha.rsp with role_separation=false:
oracle.install.asm.OSASM=dba
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=oper
gridSetup.sh is reporting the following warnings:
[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41812] OSDBA and OSASM are the same OS group.
CAUSE: The chosen values for OSDBA group and the chosen value for OSASM group are the same.
ACTION: Select an OS group that is unique for ASM administrators. The OSASM group should not be the same as the OS groups that grant privileges for Oracle ASM access, or for database administration.
Before manually installing the 32-bit libaio package, I get the following error when running netca or dbca:
TASK [oradb-create : Create database(s)] ************************************************************************************
failed: [ansible-node1] ...
...
oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:189)"], "stdout": "UnsatisfiedLinkError exception loading native library: njni11\njava.lang.UnsatisfiedLinkError: /u01/app/oracle/11.2.0.4/db1/lib/libnjni11.so: libaio.so.1: cannot open shared object file: No such file or directory", "stdout_lines": ["UnsatisfiedLinkError exception loading native library: njni11", "java.lang.UnsatisfiedLinkError: /u01/app/oracle/11.2.0.4/db1/lib/libnjni11.so: libaio.so.1: cannot open shared object file: No such file or directory"]}
After installing the missing package it runs OK.
It should be possible to specify which 'opatchauto' patches (and potentially one-offs) should be applied to a specific system.
So I'm thinking of something like this:
gi_patches:
  opatchauto:
    - some_opatchauto_identifier
  oneoff:
    - some_oneoff_identifier
    - some_oneoff_identifier
where 'some_*_identifier' either maps to an existing construct (oracle_sw_image_gi_psu) or to something new.
Right now, an ORACLE_HOME is based on the configuration in the oracle_databases structure.
A new structure (oracle_homes) will be created where the oracle_home and its possible patches will be dealt with.
Another list (e.g. oracle_homes_installed) will contain the homes a specific configuration should have installed.
oracle_databases will then only contain a reference to the oracle_homes structure.
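A hypothetical sketch of the proposed structures (all names and keys are illustrative, not the final design):

```yaml
# Homes and their patches are defined once...
oracle_homes:
  db12102-ee:
    oracle_version_db: 12.1.0.2
    oracle_edition: EE
    patches:
      - some_psu_identifier

# ...a host lists which homes it should have installed...
oracle_homes_installed:
  - db12102-ee

# ...and databases only reference a home by name.
oracle_databases:
  - oracle_db_name: orcl
    home: db12102-ee
```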
Tasks:
Transfer oracle installfiles to server (www)
Transfer oracle installfiles to server (local)
Extract files to stage-area
Extract files to stage-area (from remote location)
should check checkdbswinstall.[item#].stdout != 1 in the when statement
Password management will be broken out, such that it can be put in a separate file that easily can be vaulted.
My installations create databases with memory_target, which means that Oracle is unable to use HugePages. That is fine in small environments with few processes.
The current way of setting the hugepages as a percentage of RAM is not a good idea, because it is hard to define a good limit on systems with a lot of memory. I would prefer a solution with either a fixed number of hugepages or a memory size that is converted into the right number of pages.
Do you have a preference for how we should solve this? I could implement it, but I would like to know how it should be done in ansible-oracle.
The last step is then making database creation actually use the HugePages, which is the second part of this problem.
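One possible approach, as a sketch (the variable name hugepages_memory_mb is made up, and the 2 MB/2048 kB hugepage size is an assumption):

```yaml
# Derive vm.nr_hugepages from a fixed memory size instead of a
# percentage of RAM.
- name: Calculate number of hugepages from the desired memory size
  set_fact:
    nr_hugepages: "{{ (hugepages_memory_mb | int * 1024) // 2048 }}"

- name: Set vm.nr_hugepages
  sysctl:
    name: vm.nr_hugepages
    value: "{{ nr_hugepages }}"
    state: present
```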
Hello,
It runs fine until the oraswdb-install role, then fails with this error:
TASK [oraswdb-install : Check if DB-server is already installed] *************** fatal: [xxx.xx.xx.xx]: FAILED! => {"failed": true, "msg": "'unicode object' has no attribute 'oracle_version_db'"}
Can you share an example orafs variables file please?
It's referenced in single-instance-db-on-fs.yml.
Hi, I'm just trying to run the out-of-the-box script and get the following error on my system:
ansible-playbook single-instance-db-on-fs.yml
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'hostgroup' is undefined
The error appears to have been in '/home/neo/local_projects/ansible-oracle/single-instance-db-on-fs.yml': line 4, column 4, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
I'm running ansible 2.5.3 on Ubuntu 16.04
I've run into the following problem: Oracle GI 11.2 installation on RHEL 7 – Error: ohasd failed to start the Clusterware.
In orahost-storage, the task named "ASM-storage | Create device to label mappings for asm" loops over with_items: "{{ asm_diskgroups }}", which is undefined because it is commented out in its defaults file.
This is more of a discussion/feature request than an issue, but I'd like to look at the possibility of having the option to label raw disk devices as ASM disks, as opposed to requiring partitions. The primary reason is that online resizing (expanding) of ASM diskgroups is so much easier without dealing with partitions.
It looks like part of the work is done with the commit of the new partition_devices var, and I plan to expand upon this when I have time.
I'm a newbie to Ansible and trying hard to understand the way you configured the oracle_home_db variable in the oraswdb-install role, but it's not clear to me. Can you please explain why you did not give this variable a static value, as you did with oracle_base and oracle_inventory_loc?
oracle_home_db: "{% if item.0 is defined %}{% if item.0.oracle_home is defined %}{{ item.0.oracle_home}}{% else %}{{ oracle_base }}/{{ item.0.oracle_version_db }}/{{ item.0.home }}{% endif %}{% else %}{% if item.oracle_home is defined %}{{ item.oracle_home }}{% else %}{{ oracle_base }}/{{ item.oracle_version_db }}/{{ item.home }}{% endif %}{% endif %}"
Thank you
I reorgnized the directory structure for my inventory in ansible-oracle.
tree inventory/
inventory/
└── orafs
    ├── group_vars
    │   └── all
    ├── hosts
    └── host_vars
        ├── oradb03
        │   └── vars.yml
        └── oradb04
            └── vars.yml
When a central hosts-inventory file is used, hosts can be replaced with symbolic links to the files in inventory. The advantage is a clear structure for each 'hosts' in the playbook.
Adding ansible-vault is very easy: just add a vault.yml file in the same directory as vars.yml.
The host_vars and group_vars at the repository root could then be removed. This makes the whole structure much cleaner for projects with their own configurations. Removing the examples is very easy as well.
What do you think about the structure?
Hello,
When running an 11.2.0.4 SW install with minimal specs I have this error at SW response file creation:
TASK [oraswdb-install : Setup response file for install (DB)] ****************************************************
failed: [ansible-node1] (item=[{u'state': u'present', u'oracle_init_params': u'', u'oracle_db_password': u'oracle', u'oracle_db_type': u'SI', u'service_name': u'test11g', u'oracle_db_name': u'test11g', u'listener_name': u'LISTENER', u'redolog_size_in_mb': 100, u'storage_type': u'FS', u'oracle_edition': u'SE', u'oracle_db_mem_totalmb': 1024, u'oracle_database_type': u'MULTIPURPOSE', u'home': u'db1', u'oracle_version_db': u'11.2.0.4'}, {'_ansible_parsed': True, 'stderr_lines': [u'grep: /u01/app/oraInventory/ContentsXML/inventory.xml: No such file or directory']
When I checked the box, the /u01/app/oraInventory directory indeed did not exist.
In the oraswgi-install role, main.yml contains a broken task: it is missing remote_src=yes, so it looks for the zip file on the Ansible control machine instead of the target host.
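A sketch of the corrected task (the src/dest paths and variable names are assumptions):

```yaml
# With remote_src the archive is taken from the target host, not from
# the Ansible control machine.
- name: Extract installation media on the target host
  unarchive:
    src: "{{ oracle_stage }}/linuxx64_12201_grid_home.zip"
    dest: "{{ oracle_home_gi }}"
    remote_src: yes
```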
I prefer installations with an NFS share for installation media and RMAN backups.
What do you think about removing the task "Mount nfs share with installation media" after enabling autofs?
I could create a PR for adding and enabling autofs. The mounts could then be removed during the refactoring.
The current method has issues in RAC environments.
I think the following logic should work:
What do you think about this idea?
This task has a problem with the following configuration:
install_from_nfs: false
oracle_sw_copy: false
oracle_sw_unpack: false
This results in trying to mount the NFS share, which is not needed in my configuration, because I already have the files unpacked in a local folder.
The fix is very simple:
old:
when: not oracle_sw_copy|bool and not oracle_sw_unpack|bool
new:
when: install_from_nfs
Both variables rely on install_from_nfs in defaults/main.yml:
oracle_sw_copy: "{% if install_from_nfs==true %}false{% else %}true{% endif %}"
oracle_sw_unpack: "{% if install_from_nfs==true %}false{% else %}true{% endif %}"
Should I create a pull request for you?
This will track fixes for #59
We really need a solution for creating databases that are license-compliant.
There are a couple of problems with the current implementation of creating databases. The biggest issue is that there is no way to configure the options inside a database when creating multitenant databases without using SQL scripts.
I plan to replace the current functions with a set of SQL statements as a template.
Is it fine to do this and send you a pull request?
I haven't really done a lot to improve the old code up until now, so I'm going through it and adjusting it where needed. This will probably be an ongoing thing for a while.
I will try not to change variable definitions if I don't have to, to minimize the potential impact.
In general:
Up until now, I haven't really used any v2.x specific things but that will change as of now.
oraswdb-install will be changed to use loop_control to iterate over each home. When this change is pushed, Ansible >= 2.1 is needed.
New roles (some will not work with 1.x):
orahost-ssh - will replace the old way of dealing with ssh-keys in RAC (already available)
oradb-manage-db - will consolidate (and replace) oradb-create & oradb-delete
The following roles will also be coming shortly.
oradb-manage-pdb
oradb-manage-users
oradb-manage-roles
oradb-manage-grants
oradb-manage-tablespaces
oradb-manage-services
oradb-manage-parameter
In <Deploying Oracle RAC 11g R2 Database on Red Hat Enterprise Linux 6>, section 4.1.4 "Creating a Database using Database Configuration Assistant (DBCA)", a Redo Logs diskgroup is used to support "Multiplex Redo Logs and Control Files". Will this feature be added to ansible-oracle?
Hello,
I have this error message when attempting a dry run on a fresh RHEL 6.8 install:
TASK [orahost : Disable Transparent Hugepages (in grub.conf)] *****************************************************************
task path: /home/arnaw/ansible-oracle/roles/orahost/tasks/RedHat-6.yml:8
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Error while setting attributes: /usr/bin/chattr: Clearing extent flag not supported on /etc/grub.conf.11711.2017-10-22@11:47:14~
failed: [ansible-node1] (item=0) => {"changed": false, "details": "Error while setting attributes: /usr/bin/chattr: Clearing extent flag not supported on /etc/grub.conf.11711.2017-10-22@11:47:14~\n", "failed": true, "gid": 0, "group": "root", "item": "0", "mode": "0600", "msg": "chattr failed", "owner": "root", "path": "/etc/grub.conf.11711.2017-10-22@11:47:14~", "secontext": "system_u:object_r:etc_t:s0", "size": 839, "state": "file", "uid": 0}
I have no idea if the lineinfile module uses the shell chattr command in its code; maybe you could try the replace module instead, which does basically the same thing, for example by replacing "quiet" on the kernel line with "quiet transparent_hugepage=never"?
For me it worked when I replaced the lineinfile with the following:
replace: dest=/etc/grub.conf regexp='quiet' replace='quiet transparent_hugepage=never'
Not a clean solution of course, just an idea for the right regexp ;)
Regards,
I am working on a solution for Note 2284463.1, which explains a problem with kernel 3.10.0 on OL/RHEL 7. This kernel is the RHCK on OL and the normal kernel on RHEL.
A patch is required after unzipping the installation media; otherwise the configuration will fail with:
CLSRSC-400: A system reboot is required to continue installing.
The patch from 2284463.1 is part of the latest Release Update from January 2018. That's why I am working on a solution to install the RU during oraswgi-install for 12.2 on OL/RHEL 6 and 7.
This will change a lot in roles/oraswgi-install/tasks/12.2.0.1.yml, but it is mandatory for a working installation of GI/Restart 12.2 on RHEL 7.
This role needs to copy the PSU from a source. All other installation roles are able to install from an extracted source.
This is not easy to fix, because the following files are stored in oracle_psu_stage:
ocm.rsp
ocmrsp.expect
I think these files should be placed in oracle_rsp_stage; then installation from an extracted source becomes possible, especially with a read-only mounted source.
Ansible 2.3 began deprecating jinja2 templating in when keys (ansible/ansible#22397).
Change the when keys to conform to the new standard.
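For illustration (run_installer is a made-up example variable), the change looks like this:

```yaml
# Deprecated since Ansible 2.3: jinja2 templating inside `when`
- name: Run a conditional step (old style)
  shell: echo hello
  when: "{{ run_installer }}"

# New style: a bare expression without curly braces
- name: Run a conditional step (new style)
  shell: echo hello
  when: run_installer | bool
```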
The master_node variable will be removed.
It is perfectly possible to solve this in a different way, so I'm going to remove the need to set that variable in host_vars by removing it altogether.
The configure_cluster variable (True/False) will be used as the 'thing' that decides whether or not to run things on only one node. A cluster_master variable (defaulting to the first host in the play) is also added and used to differentiate between the first node and the other nodes, e.g.:
- name: run root.sh on the first node
  shell: ......
  run_once: "{{ configure_cluster }}"  # will only run on the first node in the `hostgroup` in the case of rac, and on all nodes in the group if not rac
- name: run root.sh on other hosts
  shell: ......
  when: configure_cluster and inventory_hostname != cluster_master  # will run on 'other' nodes, and only if we're trying to install rac
The branch masternode contains the changes made.
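A minimal sketch of how such defaults could be wired up (the exact variable wiring in the masternode branch may differ):

```yaml
# defaults/main.yml (sketch): the first host of the play acts as master.
# On Ansible releases of that era the list of play hosts was exposed as
# play_hosts; newer releases provide ansible_play_hosts instead.
cluster_master: "{{ play_hosts[0] }}"
configure_cluster: false
```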
Hello,
Today when connecting to my test db I noticed the following:
[oracle@node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? test11g
/u01/app/oracle/11.2.0.4/db1/bin/orabase: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
The Oracle base has been set to
[oracle@node1 ~]$
So I installed libgcc.i686 and the error disappeared. I also checked the Oracle documentation, which specifies that this package is required before installation:
https://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm#BHCGJCEA
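A minimal sketch of a task that would ensure the package is present (assuming a yum-based OL/RHEL system; the task is illustrative and not part of ansible-oracle):

```yaml
# Ensure the 32-bit libgcc library is installed before the 11.2 install;
# binaries such as orabase link against libgcc_s.so.1
- name: install libgcc.i686
  yum:
    name: libgcc.i686
    state: present
```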
oraswgi-opatch is pretty simple, because there is only 1 ORACLE_HOME and 1 ASM instance on the system. Finally, 'opatch auto' makes the installation very simple. :-)
oraswdb-opatch may be a lot more complex, because we need a solution with different ORACLE_HOMEs for every instance on a host. (I still have a working version of ansible-oracle with custom ORACLE_HOMEs for each oracle_database.)
Removing and reinstalling OPatch should only be done when required. A better way is to get the version with 'opatch version' and compare it against the version in the staging area. There is a version.txt in the archive, which gives the version without executing opatch. There is no version.txt in a freshly installed ORACLE_HOME, but it should be very easy to compare the versions and install only when they differ.
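A sketch of what that comparison could look like (paths, archive name, and variable names such as oracle_stage are hypothetical; the string matching is simplified and real output would need parsing):

```yaml
# Read the version shipped with the staged OPatch
# (assumes version.txt has been made readable in the staging area)
- name: get staged OPatch version
  command: cat {{ oracle_stage }}/OPatch/version.txt
  register: opatch_stage_version
  changed_when: false

# Ask the installed OPatch for its version; a freshly installed
# ORACLE_HOME has no version.txt, so we run the tool itself
- name: get installed OPatch version
  command: "{{ oracle_home }}/OPatch/opatch version"
  register: opatch_installed_version
  changed_when: false

# Replace OPatch only when the staged version is not already installed
- name: install new OPatch
  unarchive:
    src: "{{ oracle_stage }}/p6880880_opatch.zip"   # hypothetical archive name
    dest: "{{ oracle_home }}"
    remote_src: yes
  when: opatch_stage_version.stdout not in opatch_installed_version.stdout
```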
What about the post-install operations after applying a PSU or a one-off patch?
How could we keep the information about a required post-installation step for a database? Is a file per instance+patch in a directory on the database server a good idea? We can't hold this information on the install server, as it depends on the instance, and there are hugely different ways and scripts for post-installations...
Hi Experts,
I am having issues running this playbook. As per the instructions, one can install Oracle DB 12.1 by default, without changing any parameters.
Steps I have followed:
[WARNING]: Could not match supplied host pattern, ignoring: orafs
PLAY [Host configuration] *******************************************************************************************************************************************************************
skipping: no hosts matched
PLAY [Database Server Installation & Database Creation] *************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP **********************************************************************************************************************************************************************************
real 0m0.978s
user 0m0.830s
sys 0m0.144s
Can someone help me? I did not change anything in any file, but I am getting this error.
Please help.
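For context, the warning above means the inventory passed to ansible-playbook contains no group named orafs, so both plays are skipped. A minimal inventory that would match the play looks like this (the hostname is a hypothetical placeholder):

```ini
# Inventory sketch: define the orafs group the playbook targets
[orafs]
dbhost1.example.com
```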
There is a big danger in creating diskgroups with normal/high redundancy when using more than 2 disks across 2 storage systems. Most people want to use 2 failure groups for a normal-redundancy diskgroup, but ASM will create a failure group for each disk when no failure group name is given while creating the diskgroup or adding a disk.
When 1 storage system fails, more than 1 failure group is missing; the diskgroup is immediately dismounted and cannot be mounted again until, for normal redundancy, at least (number of failure groups - 1) failure groups are available.
I added the failgroup parameter to asm_diskgroups for normal/high redundancy. Existing configurations are not affected, but keep in mind that most admins use a wrong diskgroup configuration.
Example:
asm_diskgroups:
  - diskgroup: grid
    properties:
      - {redundancy: normal, ausize: 4}
    attributes:
      - {name: 'compatible.rdbms', value: 11.2.0.4.0}
      - {name: 'compatible.asm', value: 12.1.0.2.0}
    disk:
      - {device: /dev/sdc, asmlabel: grid01, failgroup: fg1}
      - {device: /dev/sdd, asmlabel: grid02, failgroup: fg2}
Any plan to add a Data Guard configuration between 2 instances?
I am working on a lot of enhancements in ansible-oracle and am doing this with Ansible 2.1.
I have never tested my changes against older versions.
Is there a good reason for sticking to an old version?
Thanks for pushing the role to ansible-oracle.
I made some changes to fit it better into ansible-oracle.
All changes are done, and I could create a pull request.
Hello,
When using DBCA there is no way to customize the number of redo log groups and their file destinations; the same is true for controlfiles.
Doing it in a SQL script after database creation is tricky: a redo group must be unused before its file location can be changed, and if controlfiles are stored in ASM you must use RMAN to add one.
So, if anyone has ideas: I was thinking of adding the possibility to execute a custom SQL script, given as a new group/host variable, after database creation.
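A minimal sketch of what such a hook could look like (the variable name oracle_db_custom_sql is hypothetical, not an existing ansible-oracle variable):

```yaml
# Optional post-creation hook: run a user-supplied SQL script.
# oracle_db_custom_sql is a hypothetical group/host variable pointing
# at a script already present on the target host.
- name: run custom SQL script after database creation
  shell: |
    export ORACLE_HOME={{ oracle_home }}
    export ORACLE_SID={{ oracle_db_name }}
    {{ oracle_home }}/bin/sqlplus -S / as sysdba @{{ oracle_db_custom_sql }}
  when: oracle_db_custom_sql is defined
```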
As of 9b357ca, it is possible to have custom storage configurations for each database (i.e. different diskgroups, filesystems, etc).
However, for 11.2 the custom recoveryfile_dest doesn't work properly if filesystem storage is used.
It always uses the dbca template default ($ORACLE_BASE/flash_recovery_area).
This is not a bug in the Ansible code (it works properly with 12.1, and with ASM in 11.2) but rather in dbca, and there is not really anything that can be done from the Ansible side.