author     Tomas Sedovic <tomas@sedovic.cz>  2017-06-16 18:59:45 +0200
committer  Ryan Cook <rcook@redhat.com>  2017-06-16 09:59:45 -0700
commit     a6bf0552961c7b8c63639850fd0501941e89938f (patch)
tree       fbde420c20e6f18fcc8063e268a1525f8848c8d0 /roles
parent     8b3c681c8083dc217340f826827c8c27e32c0ebb (diff)
Add an Openstack provider (#397)
* First cut at the rhc-ose-ansible structure
* New OSE3 docker host builder and OpenStack ansible provisioning support
* Support for supplying flavor name and moved around variables
* Refactored OpenStack provisioning to be a generic role. Created OpenShift specific playbook
* Registry Role for ansible playbooks
* Added immediate=yes to have firewalld port take effect; restructured registry role; changed true to yes in module parameters
* added post_install role
* adding playbook
* Migration of CICD server provisioning to Ansible
* Adding nginx auth layer
* Removing key name from registry
* Refactoring and renaming
* adding openshift-ansible's post install roles
* removing deprecated files
* Shell for role variable info
* removing extra files
* Add OpenStack SSH key parameter check
* Replacing yum commands and normalizing comments
* fixed README
* Renaming template files with .j2 for clarity
* Add OpenStack security group detection and creation, resolves #106
* Change to using split to iterate and SSH rule create only once
* Reorder instance names to sort by env_id
* Change default_env_id of "testenv" to local env OS_USERNAME, resolves #142
* Prepend 'casl' to default_env_id
* Add connection test to OpenStack before proceeding
* First cut at DNS ansible roles
* Updated defaults and tasks for dns-server
* Add subscription-manager support for Hosted or Satellite
* Refactor role to dynamically determine rhsm_method
* Removes rhsm_method
* Renames rhsm_server to rhsm_satellite
* Add additional pre_task checks (hosted + key)
* Change conditionals from rhsm_method check to rhsm_satellite defined
* Change repos disable/enable from key to if repos are defined
* Update README and examples in inventory file
* Fix bad syntax with extra 'and' in when using rhsm_pool
* Refactor use of rhsm_password to prevent display to CLI
* Cosmetic changes to task names and move yum clean all to prereqs
* Remove vars_prompt, add info to README to re-enable and for ansible-vault
* Add openstack pre_tasks and ansible_sudo when calling role
* Add deprovision playbook using nova list with sanity checks
  - Add minimum length check for env_id
  - Add max_instances check
  - Remove dynamic openstack.py inventory
  - Add override to bypass checks
* Refactor debug flag to be dry_run and other small changes
  - Removed debug statements and instead display on pause prompt
  - Moved to playbooks directory
* Add ansible_sudo: true to subscription-manager task
* This matches PR #133 enabling ansible_sudo: true when calling that role
* Also changes max_instances check from >= to just > to allow 2 full default environments to be removed (6 max_instances)
* Updated to fix broken/missing 'defaults'...
* Add unique image logic and rename playbook to terminate.yml
* Add OSE provision prerequisites
  - Install required packages
  - Update packages (moved from main.yml)
  - Install and disable firewalld
  - Install iptables-services and disable iptables
  - Verify and set hostname if needed
* Add SELinux check and fail if not enforcing
* Remove getenforce and firewall tasks and use facts
  - Uses Ansible collected facts to determine SELinux status
  - Adds ansible_sudo: true when calling role
  - Adds tag to role when calling it
* Add docker role
  - Largely taken from cicd docker.yml
  - Changed to using a template for docker-storage-setup
  - Using variables for both DEV and VG defined in defaults
  - Using pvs command to check for use of DEV and VG before proceeding
* Add org parameter to Satellite with user/pass
* Fix typo in task name
* Updated dns-server role based on feedback
* Changes by JayKayy for a full provision of OpenShift on OpenStack
* Role for disconnected git server
* Added additional yum dependency and corrected spelling
* Added example of disconnected git inventory file
* Changes to allow runs from inside a container. Also allows for running upstream openshift-ansible installer
* Reverting previous commit and making template adjustments
* Subscription manager role should accommodate orgs with spaces
* Fixing unescaped newline
* Changing hard-coded host groups to match openshift-ansible expected host groups. Importing byo playbook now instead of nested ansible run. Need to refactor how we generate hostnames to make it fit this.
* Updated to run as root rather than cloud-user, for now...
* Updated inventory template to include openshift_hostname and openshift_public_hostname
* Wrapping in a script to tie the two playbooks together
* Updating ose-provision with DNS workarounds / fixes
* Removed spaces causing issues...
* DNS fix to support OSEv3.2
* Add floating IP support when using Neutron
* Updated to remove repos from playbook + fix typo
* Cleaned up hostname role to make it more generic
* Image name for DNS server becomes configurable.
* Updated inventory and template file to make cluster config optional
* Removing temporary file
* Loosen up the DNS server a bit to allow for ETL OSP installs
* Re-implements original subscription-manager role invocation that was removed in PR #168.
* Enhanced provisioning script with better error checking, directory awareness, and improved help output
* Should be looking for generated inventory file in SCRIPTS_BASE_DIR
* Add Neutron floating IP support for Issue #195
* Add a check and set_fact for whether Neutron is in use, which is used by several tasks
* This PR was originally longer and contained the now split off PR #197
* first attempt at securing the registry
* Minor updates for ansible 2.1 compatibility
* Updated CICD implementation to support ETL OSP env
* Updated OSE inventory file with some clean-up
* Add enhancements for the terminate playbook
* Fixes Issue #206
* Add check for valid item when attempting to delete objects
* Add debug on all variables when using dry_run
* Changed default ansible_ssh_user to cloud-user in line with standard cloud guest image
* Add count for ips and volumes to display since these may not always be the same as instance count
* Enhance displayed warning/note message to include new counts
* It is possible for an instance to not have a floating IP for whatever reason (such as manually deallocating or releasing the IP); in this case SSH will not work to the instance, so it will not be included in the host group to attempt subscription manager unregister, but it will still be deleted
* It is possible that an instance will have a volume created but not attached. In this case, as a precautionary measure, I am excluding these unattached volumes from the deletion in case this was intentionally detached to preserve data. We can further discuss if this should be a parameter to override instead or if we need to change this behavior.
* Excluded instances in ERROR state as they will most likely not delete. We can discuss if this should be parameterized instead.
* Added prompt variable defaulted to true but can be set to false
* Added unregister variable defaulted to true but can be set to false
* Adding NFS support and fixing template labels so we get a router and registry out of the box.
* testing changes
* tested changes
* fixing defaults and removing host from test playbook
* adding cleanup test book and fixed typo
* Allow passing of ansible extra-vars in provisioning script
* Change --environment to --extra-vars and add usage.
* added check for already secured registry and uses actual openshift_common dependency
* fixed readiness probe by adding logic for 3.1 vs 3.2
* Fix malformed file to address Issue #210
* Pulling out file paths into variables to account for containerized installs
* fixed error message logic for already secured registry
* added tasks to disable and re-enable deployment triggers, remove debug task
* Fixes Issue #163 if rhsm_password is not defined
* Adding a post-install playbook with secure-registry and ssh key sync.
* Node storage now uses node specific storage var; search for generated inventory file sorts by timestamp not name
* Initial commit exposing registry service
* move registry_hostname to inventory
* Updated env_id to be a sub-domain + make the logic a bit more flexible
* Enabled default subdomain/'apps'
* Updated inventory template file to include 'openshift_deployment_type'
* Adding LDAP and HTPasswd examples for an auth provider to base inventory file
* Fixing port number in LDAP example
* Refactor OpenStack security group creation
* Adds new openstack-security-groups role
* Addresses Issue #211 and adds all instances to default group
* Defines default security group variable with all groups/rules
* Sets security group variables per type (master, node, nfs, dns)
* Supports specifying no security group for a type (e.g. nfs)
* Uses new Ansible 2.x modules
* Refactor to playbook and split data structure out
* Split single security group variable into one per type
* Moves 'default' security group from role into variable
* Moves default security group variables back to openshift-common role
* Converts openstack-security-group role into playbook
* Playbook called on every openstack-create invocation as before
* Simplifies security group tasks and removes type checking
* Iterate through security groups and build a comma-separated list of groups
* Add detection of non-Neutron env
* Add UDP 8053 to default master security group
* Adjusting docker role, adding support for logging/metrics, and updating client container
* OpenShift Management Role
* Fixing ansible impl to work with OSP9 and ansible 2.2
* Correcting formatting
* Added process / contribution info
* Updated default security group rules (#7)
* Openstack heat (#2)
* Adding a role to invoke openstack heat
* Adding readme
* Pulling parameters out to inventory file
* start of end-to-end playbook
* More enhancements and refactoring to make dynamic inventory the driver for an openshift install
* Switching to variable substituted path to config.yaml playbook
* Changes to allow defining of number of nodes/infranodes.
* Added labels to inventory
* Start of end-to-end functionality
* Enhancements to support openstack heat provisioning
* Updating inventory sample to remove some deprecation warnings
* Working towards making the secure-registry role 'become' aware
* Fixing node labels and removing secure-registry as it's no longer needed
* No longer need insecure registry line, as installer will secure our registry
* Adjusted dynamic inventory to filter by clusterid
* Minor fixes for a dynamic inventory bug
* Adding a refactored sample inventory directory
* Refactoring playbooks for better directory structure, and to narrow down host groups
* Adding volume mounts to heat template
* Moving dns playbooks back to original location
* Fixing incorrect file path
* Cleaning up inventory samples
* One more hostname to clean up
* Changing var name
* changed openshift-provision to openshift-prep
* Adjusting current provision script to avoid breakage by new openstack-heat code
* Updating PR Template with Team mention (#10)
* Install playbook defaults to the assumption that casl-ansible and openshift-ansible are checked out to the same directory
* Removing unnecessary task
* Fixing two significant bugs in the HEAT deployment (#13)
* Updated values in sample inventory (#17)
* Adding documentation and docker containers so others can begin testin… (#16)
* Adding documentation and docker containers so others can begin testing cluster provisioning
* Making updates per comments by @oybed
* Fixing formatting changes for links
* Renaming openstack images to align with CoP naming (#18)
* Defaulting the DNS instance to a small flavor (#20)
* Nagios (#11)
* First cut at the nagios work
* Added NRPE service enabled
* Updated implementation to be a bit more flexible
* Updated logic to include checks for services
* Added support for DNS and NFS checks
* Updated templates and config files
* Updated check_service script to simplify and avoid false negatives
* Added support for OpenShift checks
* Added README for the playbook
* Updated README
* DNS server should NOT run docker (#25)
* Readme (#26)
* Updated documentation and example inventory
* Update README.md: added "hint"
* Update README.md: fix numbering in the markdown
* Update README.md
* Added docker_volume_size to the sample inventory
* Added rhsm_pool to the sample inventory
* Updated README per comments
* Ensure DNS configuration has wildcards set for infra nodes (#24)
* Ensure DNS configuration has wildcards set for infra nodes
* Updated to include all cluster hosts for DNS entries
* Updated DNS server role + example playbook (#27)
* Updated DNS server role + example playbook
* Updated DNS server role + example playbook
* Dns selinux (#28)
* Updated DNS server role + example playbook
* Updated DNS server role + example playbook
* Updated for SELinux boolean
* Openshift mgmt (#30): added prune_projects to the openshift-management role along with Ansible Tower support
* Created initial CHANGELOG.md
* Updating to development release of ansible 2.3.0 to pull down bug fixes in HEAT module (#21)
* Workaround for Ansible 2.3 breakage (#31)
* Added quotes where needed and fixed some other minor bugs (#33)
* Fixing awk check (#34)
* Updating client image to lock it to ansible 2.3 and install some addi… (#32)
* Updating client image to lock it to ansible 2.3 and install some additional dependencies
* First attempt at a docker-compose based solution
* Renaming image
* Stack refactor (#38)
* Refactored openstack-stack role to:
  - Convert static heat template files to ansible templates
  - Include native ansible groups via openstack metadata. This removes the need for a playbook to map host groups
  - Some code cleanup
* Deleting commented out code and irrelevant plays
* Refactored openstack-stack role to:
  - Convert static heat template files to ansible templates
  - Include native ansible groups via openstack metadata. This removes the need for a playbook to map host groups
  - Some code cleanup
* Deleting commented out code and irrelevant plays
* Replacing stack parameters with jinja expressions
* Updating sample inventory to work with latest dynamic inventory changes
* updating inventory with host group mapping. making sync keys optional
* Missing cluster_hosts group
* Updating to add infra_hosts
* Updating inventory per comments from oybed and sabre1041
* First attempt at a simple multi-master support (#39)
* First attempt at a simple multi-master support
* Removing unneeded inventory
* adding default number of masters and lower number of nodes
* Some fixes (#41)
* Fix the sample inventory: the `openstack_nameservers` variable needs to be a list of strings, we need to set the Openshift labels in OSv3.yml, and we show an example of using the username/password/pool for RHEL subscriptions.
* Update the READMEs: this fixes some of the paths, explains that we need to pass `openstack_ssh_public_key` to the end-to-end playbook, and includes the full Docker command since there is no `run.sh` script. Oh and Heat is not an acronym :).
* Fixes to the readme and inventory
* Use docker-compose
* Correcting the sample inventory for an HA cluster (#40)
* Correcting the sample inventory for an HA cluster
* Adding node label mapping
* Updating to more generic IPs
* Updating to OSP ocata repo, as there are some bugs with newton's channel (#44)
* Use the correct variable name in create_users (#43): the user creation was failing because it was looking for the `demo_users` variable while the samples put the data under `create_users`.
* Upgrading jinja2 to work correctly with latest templates (#45)
* Fix rpm deps (#46)
* Upgrading jinja2 to work correctly with latest templates
* Updated to solve rpm deps + other version issues
* Clean-up
* Updating control-host settings and env
* Updating control-host settings and env
* Updating README and names to align across all components
* Setting the TERM var for better shell experience
* Conditionally set the openshift_master_default_subdomain to avoid overriding it unnecessarily (#47)
* Update README.md
* Update CASL to use nsupdate for DNS records (#48)
* Updated to use nsupdate for DNS records
* Updated formatting of dict
* Updating descriptive text
* Support for external DNS config
* Upgrading jinja2 to work correctly with latest templates
* Latest update for nsupdate
* Updated to use nsupdate for DNS records
* Updated formatting of dict
* Updating descriptive text
* Support for external DNS config
* Latest update for nsupdate
* Updated to support external public/private DNS server(s)
* Updated DNS server handling
* Updated DNS server handling
* Updated DNS server handling
* Eliminated the from the sample inventories
* Updated sample inventory to point to 2 separate DNS servers for private/public
* Playbook clean-up
* Adding 'python-dns'
* splitting subscription manager calls to allow for a clean pre-install playbook
* Move the openstack provisioning playbooks: they'll live in playbooks/provisioning/openstack from now on.
* Add a single provisioning playbook
* Symlink roles to provisioning/openstack/roles
* Add a sample inventory for openstack provisioning
* Add license for openstack.py in inventory: it's under the GPLv3+ while the rest of the repo is Apache 2.
* Add readme
* Move pre_tasks to the openstack provisioner: we should probably not pollute the role namespace with a name as common as "common". Moving the pre_task.yml to provisioners/openstack instead.
* Add default values to provision-openstack.yml
* Fix privileges in the pre-install playbook
* Always let the openshift nodes access the DNS: when `node_ingress_cidr` is set to limit the IP range for the DNS server, this can prevent the actual openshift nodes from accessing it as well. This commit makes the access from the `openstack_subnet_prefix` always pass through and uses `node_ingress_cidr` for additional access control.
* Add a flat sec group for openstack provider: add an openstack_flat_secgroup, defaults to False. When set, merges sec rules for master, node, etcd, infra nodes into a single group. Less secure, but might help to mitigate quota limitations. Update docs. Use timeout 30s to mitigate the error: Timeout (12s) waiting for privilege escalation prompt. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Add ansible.cfg for openstack provider. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Drop atomic-openshift-utils, update docs for origin. TODO: use with when: ansible_distribution == 'CentOS'. Also update docs for origin. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Gather facts for provision playbook: provision tasks use facts like ansible_hostname and a few others. Without gathering facts, those expire, and the provision playbook cannot be reapplied in order to update the existing heat stack. Refresh the facts cache by specifying gather_facts: true. Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Update sample inventory with the latest changes
* Fix yamllint errors
* Remove the extraneous DNS directory: it's a CASL-specific helper, not necessary for the provisioning playbooks.
* Fix flake8 errors with the openstack inventory
Diffstat (limited to 'roles')
-rw-r--r--  roles/dns-server-detect/defaults/main.yml  3
-rw-r--r--  roles/dns-server-detect/tasks/main.yml  36
-rw-r--r--  roles/hostnames/tasks/main.yaml  26
-rw-r--r--  roles/hostnames/test/inv  12
l---------  roles/hostnames/test/roles  1
-rw-r--r--  roles/hostnames/test/test.retry  3
-rw-r--r--  roles/hostnames/test/test.yaml  4
-rw-r--r--  roles/hostnames/vars/main.yaml  2
-rw-r--r--  roles/hostnames/vars/records.yaml  28
-rw-r--r--  roles/openshift-prep/tasks/main.yml  4
-rw-r--r--  roles/openshift-prep/tasks/prerequisites.yml  35
-rw-r--r--  roles/openstack-stack/README.md  9
-rw-r--r--  roles/openstack-stack/defaults/main.yml  12
-rw-r--r--  roles/openstack-stack/tasks/main.yml  41
-rw-r--r--  roles/openstack-stack/templates/heat_stack.yaml.j2  753
-rw-r--r--  roles/openstack-stack/templates/heat_stack_server.yaml.j2  170
-rw-r--r--  roles/openstack-stack/templates/user_data.j2  13
l---------  roles/openstack-stack/test/roles  1
-rw-r--r--  roles/openstack-stack/test/stack-create-test.yml  16
-rw-r--r--  roles/subscription-manager/README.md  156
-rw-r--r--  roles/subscription-manager/pre_tasks/pre_tasks.yml  45
-rw-r--r--  roles/subscription-manager/tasks/main.yml  122
22 files changed, 1492 insertions, 0 deletions
diff --git a/roles/dns-server-detect/defaults/main.yml b/roles/dns-server-detect/defaults/main.yml
new file mode 100644
index 000000000..58bd861cd
--- /dev/null
+++ b/roles/dns-server-detect/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+
+external_nsupdate_keys: {}
diff --git a/roles/dns-server-detect/tasks/main.yml b/roles/dns-server-detect/tasks/main.yml
new file mode 100644
index 000000000..183c0a0ca
--- /dev/null
+++ b/roles/dns-server-detect/tasks/main.yml
@@ -0,0 +1,36 @@
+---
+- fail:
+ msg: 'Missing required private DNS server(s)'
+ when:
+ - external_nsupdate_keys['private'] is undefined
+ - hostvars[groups['dns'][0]] is undefined
+
+- fail:
+ msg: 'Missing required public DNS server(s)'
+ when:
+ - external_nsupdate_keys['public'] is undefined
+ - hostvars[groups['dns'][0]] is undefined
+
+- name: "Set the private DNS server to use the external value (if provided)"
+ set_fact:
+ private_dns_server: "{{ external_nsupdate_keys['private']['server'] }}"
+ when:
+ - external_nsupdate_keys['private'] is defined
+
+- name: "Set the private DNS server to use the provisioned value"
+ set_fact:
+ private_dns_server: "{{ hostvars[groups['dns'][0]].openstack.private_v4 }}"
+ when:
+ - private_dns_server is undefined
+
+- name: "Set the public DNS server to use the external value (if provided)"
+ set_fact:
+ public_dns_server: "{{ external_nsupdate_keys['public']['server'] }}"
+ when:
+ - external_nsupdate_keys['public'] is defined
+
+- name: "Set the public DNS server to use the provisioned value"
+ set_fact:
+ public_dns_server: "{{ hostvars[groups['dns'][0]].openstack.public_v4 }}"
+ when:
+ - public_dns_server is undefined
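[Editor's note: the external-DNS path of this role is driven entirely by the `external_nsupdate_keys` dict. A minimal sketch of inventory variables that would exercise it (the server addresses below are hypothetical, not part of this commit):

```
# group_vars/all.yml -- hypothetical example values
external_nsupdate_keys:
  private:
    server: 192.168.0.53   # picked up by this role as private_dns_server
  public:
    server: 10.3.10.53     # picked up by this role as public_dns_server
```

If the dict is left at its default `{}`, the role instead falls back to the `openstack.private_v4`/`public_v4` facts of the first host in the `dns` group.]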
diff --git a/roles/hostnames/tasks/main.yaml b/roles/hostnames/tasks/main.yaml
new file mode 100644
index 000000000..c49852210
--- /dev/null
+++ b/roles/hostnames/tasks/main.yaml
@@ -0,0 +1,26 @@
+---
+- name: Setting Hostname Fact
+ set_fact:
+ new_hostname: "{{ custom_hostname | default(inventory_hostname_short) }}"
+
+- name: Setting FQDN Fact
+ set_fact:
+ new_fqdn: "{{ new_hostname }}.{{ full_dns_domain }}"
+
+- name: Setting hostname and DNS domain
+ hostname: name="{{ new_fqdn }}"
+
+- name: Check for cloud.cfg
+ stat: path=/etc/cloud/cloud.cfg
+ register: cloud_cfg
+
+- name: Prevent cloud-init updates of hostname/fqdn (if applicable)
+ lineinfile:
+ dest: /etc/cloud/cloud.cfg
+ state: present
+ regexp: "{{ item.regexp }}"
+ line: "{{ item.line }}"
+ with_items:
+ - { regexp: '^ - set_hostname', line: '# - set_hostname' }
+ - { regexp: '^ - update_hostname', line: '# - update_hostname' }
+ when: cloud_cfg.stat.exists
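[Editor's note: a minimal play exercising this role might look like the following; the domain and hostname values are made up for illustration:

```
---
- hosts: all
  become: true
  roles:
    - role: hostnames
      full_dns_domain: example.com
      custom_hostname: master1   # optional; defaults to inventory_hostname_short
```
]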
diff --git a/roles/hostnames/test/inv b/roles/hostnames/test/inv
new file mode 100644
index 000000000..ffbe6e03d
--- /dev/null
+++ b/roles/hostnames/test/inv
@@ -0,0 +1,12 @@
+[all:vars]
+dns_domain=example.com
+
+[openshift_masters]
+192.168.124.41 dns_private_ip=1.1.1.41 dns_public_ip=192.168.124.41
+192.168.124.117 dns_private_ip=1.1.1.117 dns_public_ip=192.168.124.117
+
+[openshift_nodes]
+192.168.124.40 dns_private_ip=1.1.1.40 dns_public_ip=192.168.124.40
+
+#[dns]
+#192.168.124.117 dns_private_ip=1.1.1.117
diff --git a/roles/hostnames/test/roles b/roles/hostnames/test/roles
new file mode 120000
index 000000000..e2b799b9d
--- /dev/null
+++ b/roles/hostnames/test/roles
@@ -0,0 +1 @@
+../../../roles/ \ No newline at end of file
diff --git a/roles/hostnames/test/test.retry b/roles/hostnames/test/test.retry
new file mode 100644
index 000000000..63fc08e4c
--- /dev/null
+++ b/roles/hostnames/test/test.retry
@@ -0,0 +1,3 @@
+192.168.124.117
+192.168.124.40
+192.168.124.41
diff --git a/roles/hostnames/test/test.yaml b/roles/hostnames/test/test.yaml
new file mode 100644
index 000000000..0c56aea51
--- /dev/null
+++ b/roles/hostnames/test/test.yaml
@@ -0,0 +1,4 @@
+---
+- hosts: all
+ roles:
+ - role: hostnames
diff --git a/roles/hostnames/vars/main.yaml b/roles/hostnames/vars/main.yaml
new file mode 100644
index 000000000..3eecb8dc4
--- /dev/null
+++ b/roles/hostnames/vars/main.yaml
@@ -0,0 +1,2 @@
+---
+counter: 1
diff --git a/roles/hostnames/vars/records.yaml b/roles/hostnames/vars/records.yaml
new file mode 100644
index 000000000..0cadc8181
--- /dev/null
+++ b/roles/hostnames/vars/records.yaml
@@ -0,0 +1,28 @@
+---
+- name: "Building Records"
+ set_fact:
+ dns_records_add:
+ - view: private
+ zone: example.com
+ entries:
+ - type: A
+ hostname: master1.example.com
+ ip: 172.16.15.94
+ - type: A
+ hostname: node1.example.com
+ ip: 172.16.15.86
+ - type: A
+ hostname: node2.example.com
+ ip: 172.16.15.87
+ - view: public
+ zone: example.com
+ entries:
+ - type: A
+ hostname: master1.example.com
+ ip: 10.3.10.116
+ - type: A
+ hostname: node1.example.com
+ ip: 10.3.11.46
+ - type: A
+ hostname: node2.example.com
+ ip: 10.3.12.6
diff --git a/roles/openshift-prep/tasks/main.yml b/roles/openshift-prep/tasks/main.yml
new file mode 100644
index 000000000..5e484e75f
--- /dev/null
+++ b/roles/openshift-prep/tasks/main.yml
@@ -0,0 +1,4 @@
+---
+# Starting Point for OpenShift Installation and Configuration
+- include: prerequisites.yml
+ tags: [prerequisites]
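[Editor's note: because the include is tagged, the prerequisites can be run selectively from any playbook that applies this role, e.g. (playbook name hypothetical):

```
$ ansible-playbook end-to-end.yml --tags prerequisites
```
]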
diff --git a/roles/openshift-prep/tasks/prerequisites.yml b/roles/openshift-prep/tasks/prerequisites.yml
new file mode 100644
index 000000000..60507636f
--- /dev/null
+++ b/roles/openshift-prep/tasks/prerequisites.yml
@@ -0,0 +1,35 @@
+---
+- name: "Cleaning yum repositories"
+ command: "yum clean all"
+
+- name: "Install required packages"
+ yum:
+ name: "{{ item }}"
+ state: latest
+ with_items:
+ - wget
+ - git
+ - net-tools
+ - bind-utils
+ - bridge-utils
+ - bash-completion
+ - vim-enhanced
+
+- name: "Update all packages (this can take a very long time)"
+ yum:
+ name: "*"
+ state: latest
+
+- name: "Verify hostname"
+ shell: hostnamectl status | awk "/Static hostname/"'{ print $3 }'
+ register: hostname_fqdn
+
+- name: "Set hostname if required"
+ hostname:
+ name: "{{ ansible_fqdn }}"
+ when: hostname_fqdn.stdout != ansible_fqdn
+
+- name: "Verify SELinux is enforcing"
+ fail:
+ msg: "SELinux is required for OpenShift and has been detected as '{{ ansible_selinux.config_mode }}'"
+ when: ansible_selinux.config_mode != "enforcing"
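[Editor's note: the SELinux guard relies on gathered facts rather than running getenforce. The same fact the task checks can be inspected ad hoc, for instance:

```
$ ansible all -i inventory -m setup -a 'filter=ansible_selinux'
```
]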
diff --git a/roles/openstack-stack/README.md b/roles/openstack-stack/README.md
new file mode 100644
index 000000000..509c9de6c
--- /dev/null
+++ b/roles/openstack-stack/README.md
@@ -0,0 +1,9 @@
+# Role openstack-stack
+
+Role for spinning up instances using OpenStack Heat.
+
+## To Test
+
+```
+ansible-playbook casl-ansible/roles/openstack-stack/test/stack-create-test.yml
+```
diff --git a/roles/openstack-stack/defaults/main.yml b/roles/openstack-stack/defaults/main.yml
new file mode 100644
index 000000000..2a4ef3a45
--- /dev/null
+++ b/roles/openstack-stack/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+dns_volume_size: 1
+ssh_ingress_cidr: 0.0.0.0/0
+node_ingress_cidr: 0.0.0.0/0
+master_ingress_cidr: 0.0.0.0/0
+lb_ingress_cidr: 0.0.0.0/0
+num_etcd: 0
+num_masters: 1
+num_nodes: 1
+num_dns: 1
+num_infra: 1
+etcd_volume_size: 2
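[Editor's note: any of these defaults can be overridden from the inventory; a hypothetical sketch for a larger cluster, using the same variable names:

```
[all:vars]
num_masters=3
num_nodes=6
num_infra=2
ssh_ingress_cidr=10.0.0.0/8
```
]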
diff --git a/roles/openstack-stack/tasks/main.yml b/roles/openstack-stack/tasks/main.yml
new file mode 100644
index 000000000..71c7bbe0d
--- /dev/null
+++ b/roles/openstack-stack/tasks/main.yml
@@ -0,0 +1,41 @@
+---
+- name: create HOT stack template prefix
+ register: stack_template_pre
+ tempfile:
+ state: directory
+ prefix: casl-ansible
+
+- name: set template paths
+ set_fact:
+ stack_template_path: "{{ stack_template_pre.path }}/stack.yaml"
+ server_template_path: "{{ stack_template_pre.path }}/server.yaml"
+ user_data_template_path: "{{ stack_template_pre.path }}/user-data"
+
+- name: generate HOT stack template from jinja2 template
+ template:
+ src: heat_stack.yaml.j2
+ dest: "{{ stack_template_path }}"
+
+- name: generate HOT server template from jinja2 template
+ template:
+ src: heat_stack_server.yaml.j2
+ dest: "{{ server_template_path }}"
+
+- name: generate user_data from jinja2 template
+ template:
+ src: user_data.j2
+ dest: "{{ user_data_template_path }}"
+
+- name: create stack
+ ignore_errors: False
+ register: stack_create
+ os_stack:
+ name: "{{ stack_name }}"
+ state: present
+ template: "{{ stack_template_path }}"
+ wait: yes
+
+- name: cleanup temp files
+ file:
+ path: "{{ stack_template_pre.path }}"
+ state: absent
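[Editor's note: the role only creates or updates the stack. A teardown task is not part of this commit, but the same `os_stack` module supports deletion; a sketch:

```
- name: delete stack
  os_stack:
    name: "{{ stack_name }}"
    state: absent
    wait: yes
```
]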
diff --git a/roles/openstack-stack/templates/heat_stack.yaml.j2 b/roles/openstack-stack/templates/heat_stack.yaml.j2
new file mode 100644
index 000000000..c750865a5
--- /dev/null
+++ b/roles/openstack-stack/templates/heat_stack.yaml.j2
@@ -0,0 +1,753 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster
+
+parameters:
+
+outputs:
+
+ etcd_names:
+ description: Name of the etcds
+ value: { get_attr: [ etcd, name ] }
+
+ etcd_ips:
+ description: IPs of the etcds
+ value: { get_attr: [ etcd, private_ip ] }
+
+ etcd_floating_ips:
+ description: Floating IPs of the etcds
+ value: { get_attr: [ etcd, floating_ip ] }
+
+ master_names:
+ description: Name of the masters
+ value: { get_attr: [ masters, name ] }
+
+ master_ips:
+ description: IPs of the masters
+ value: { get_attr: [ masters, private_ip ] }
+
+ master_floating_ips:
+ description: Floating IPs of the masters
+ value: { get_attr: [ masters, floating_ip ] }
+
+ node_names:
+ description: Name of the nodes
+ value: { get_attr: [ compute_nodes, name ] }
+
+ node_ips:
+ description: IPs of the nodes
+ value: { get_attr: [ compute_nodes, private_ip ] }
+
+ node_floating_ips:
+ description: Floating IPs of the nodes
+ value: { get_attr: [ compute_nodes, floating_ip ] }
+
+ infra_names:
+ description: Name of the nodes
+ value: { get_attr: [ infra_nodes, name ] }
+
+ infra_ips:
+ description: IPs of the nodes
+ value: { get_attr: [ infra_nodes, private_ip ] }
+
+ infra_floating_ips:
+ description: Floating IPs of the nodes
+ value: { get_attr: [ infra_nodes, floating_ip ] }
+
+ dns_name:
+ description: Name of the DNS
+ value:
+ get_attr:
+ - dns
+ - name
+
+ dns_floating_ip:
+ description: Floating IP of the DNS
+ value:
+ get_attr:
+ - dns
+ - addresses
+ - str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ - 1
+ - addr
+
+resources:
+
+ net:
+ type: OS::Neutron::Net
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+
+ subnet:
+ type: OS::Neutron::Subnet
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-subnet
+ params:
+ cluster_id: {{ stack_name }}
+ network: { get_resource: net }
+ cidr:
+ str_replace:
+ template: subnet_24_prefix.0/24
+ params:
+ subnet_24_prefix: {{ subnet_prefix }}
+ allocation_pools:
+ - start:
+ str_replace:
+ template: subnet_24_prefix.3
+ params:
+ subnet_24_prefix: {{ subnet_prefix }}
+ end:
+ str_replace:
+ template: subnet_24_prefix.254
+ params:
+ subnet_24_prefix: {{ subnet_prefix }}
+ dns_nameservers:
+ {% for nameserver in dns_nameservers %}
+ - {{ nameserver }}
+ {% endfor %}
+
+ router:
+ type: OS::Neutron::Router
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-router
+ params:
+ cluster_id: {{ stack_name }}
+ external_gateway_info:
+ network: {{ external_network }}
+
+ interface:
+ type: OS::Neutron::RouterInterface
+ properties:
+ router_id: { get_resource: router }
+ subnet_id: { get_resource: subnet }
+
+# keypair:
+# type: OS::Nova::KeyPair
+# properties:
+# name:
+# str_replace:
+# template: openshift-ansible-cluster_id-keypair
+# params:
+# cluster_id: {{ stack_name }}
+# public_key: {{ ssh_public_key }}
+
+{% if openstack_flat_secgrp|bool %}
+ flat-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-flat-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id OpenShift cluster
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 4001
+ port_range_max: 4001
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 8443
+ port_range_max: 8444
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 53
+ port_range_max: 53
+ - direction: ingress
+ protocol: udp
+ port_range_min: 53
+ port_range_max: 53
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 8053
+ port_range_max: 8053
+ - direction: ingress
+ protocol: udp
+ port_range_min: 8053
+ port_range_max: 8053
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 24224
+ port_range_max: 24224
+ - direction: ingress
+ protocol: udp
+ port_range_min: 24224
+ port_range_max: 24224
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 2224
+ port_range_max: 2224
+ - direction: ingress
+ protocol: udp
+ port_range_min: 5404
+ port_range_max: 5405
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 9090
+ port_range_max: 9090
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 2379
+ port_range_max: 2380
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 10250
+ port_range_max: 10250
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: udp
+ port_range_min: 10250
+ port_range_max: 10250
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 10255
+ port_range_max: 10255
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: udp
+ port_range_min: 10255
+ port_range_max: 10255
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: udp
+ port_range_min: 4789
+ port_range_max: 4789
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 30000
+ port_range_max: 32767
+ remote_ip_prefix: {{ node_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 30000
+ port_range_max: 32767
+ remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 80
+ port_range_max: 80
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 443
+ port_range_max: 443
+{% else %}
+ master-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-master-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id OpenShift cluster master
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 4001
+ port_range_max: 4001
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 8443
+ port_range_max: 8444
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 53
+ port_range_max: 53
+ - direction: ingress
+ protocol: udp
+ port_range_min: 53
+ port_range_max: 53
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 8053
+ port_range_max: 8053
+ - direction: ingress
+ protocol: udp
+ port_range_min: 8053
+ port_range_max: 8053
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 24224
+ port_range_max: 24224
+ - direction: ingress
+ protocol: udp
+ port_range_min: 24224
+ port_range_max: 24224
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 2224
+ port_range_max: 2224
+ - direction: ingress
+ protocol: udp
+ port_range_min: 5404
+ port_range_max: 5405
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 9090
+ port_range_max: 9090
+
+ etcd-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-etcd-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id etcd cluster
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 2379
+ port_range_max: 2379
+ remote_mode: remote_group_id
+ remote_group_id: { get_resource: master-secgrp }
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 2380
+ port_range_max: 2380
+ remote_mode: remote_group_id
+
+ node-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-node-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id OpenShift cluster nodes
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 10250
+ port_range_max: 10250
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 10255
+ port_range_max: 10255
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: udp
+ port_range_min: 10255
+ port_range_max: 10255
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: udp
+ port_range_min: 4789
+ port_range_max: 4789
+ remote_mode: remote_group_id
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 30000
+ port_range_max: 32767
+ remote_ip_prefix: {{ node_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 30000
+ port_range_max: 32767
+ remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+
+ infra-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-infra-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id OpenShift infrastructure cluster nodes
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 80
+ port_range_max: 80
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 443
+ port_range_max: 443
+{% endif %}
+
+ dns-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name:
+ str_replace:
+ template: openshift-ansible-cluster_id-dns-secgrp
+ params:
+ cluster_id: {{ stack_name }}
+ description:
+ str_replace:
+ template: Security group for cluster_id cluster DNS
+ params:
+ cluster_id: {{ stack_name }}
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: udp
+ port_range_min: 53
+ port_range_max: 53
+ remote_ip_prefix: {{ node_ingress_cidr }}
+ - direction: ingress
+ protocol: udp
+ port_range_min: 53
+ port_range_max: 53
+ remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 53
+ port_range_max: 53
+ remote_ip_prefix: {{ node_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 53
+ port_range_max: 53
+ remote_ip_prefix: "{{ openstack_subnet_prefix }}.0/24"
+{% if num_masters is greaterthan 1 %}
+ lb-secgrp:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name: openshift-ansible-{{ stack_name }}-lb-secgrp
+ description: Security group for {{ stack_name }} cluster Load Balancer
+ rules:
+ - direction: ingress
+ protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: {{ ssh_ingress_cidr }}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: {{ openshift_master_api_port | default(8443) }}
+ port_range_max: {{ openshift_master_api_port | default(8443) }}
+ remote_ip_prefix: {{ lb_ingress_cidr }}
+ {% if openshift_master_console_port is defined and openshift_master_console_port is not equalto openshift_master_api_port %}
+ - direction: ingress
+ protocol: tcp
+ port_range_min: {{ openshift_master_console_port | default(8443) }}
+ port_range_max: {{ openshift_master_console_port | default(8443) }}
+ remote_ip_prefix: {{ lb_ingress_cidr }}
+ {% endif %}
+{% endif %}
+
+ etcd:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: {{ num_etcd }}
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: etcd
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: etcds
+ cluster_id: {{ stack_name }}
+ type: etcd
+ image: {{ openstack_image }}
+ flavor: {{ etcd_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+ - { get_resource: {% if openstack_flat_secgrp|bool %}flat-secgrp{% else %}etcd-secgrp{% endif %} }
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: {{ etcd_volume_size }}
+ depends_on:
+ - interface
+
+{% if num_masters is greaterthan 1 %}
+ loadbalancer:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: 1
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: lb
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: lb
+ cluster_id: {{ stack_name }}
+ type: lb
+ image: {{ openstack_image }}
+ flavor: {{ lb_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+ - { get_resource: lb-secgrp }
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: 5
+ depends_on:
+ - interface
+{% endif %}
+
+ masters:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: {{ num_masters }}
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: master
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: masters
+ cluster_id: {{ stack_name }}
+ type: master
+ image: {{ openstack_image }}
+ flavor: {{ master_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+{% if openstack_flat_secgrp|bool %}
+ - { get_resource: flat-secgrp }
+{% else %}
+ - { get_resource: master-secgrp }
+ - { get_resource: node-secgrp }
+{% if num_etcd is equalto 0 %}
+ - { get_resource: etcd-secgrp }
+{% endif %}
+{% endif %}
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: {{ master_volume_size }}
+ depends_on:
+ - interface
+
+ compute_nodes:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: {{ num_nodes }}
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: subtype-k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: node
+ subtype: app
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: nodes
+ cluster_id: {{ stack_name }}
+ type: node
+ subtype: app
+ node_labels:
+ region: primary
+ image: {{ openstack_image }}
+ flavor: {{ node_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+ - { get_resource: {% if openstack_flat_secgrp|bool %}flat-secgrp{% else %}node-secgrp{% endif %} }
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: {{ app_volume_size }}
+ depends_on:
+ - interface
+
+ infra_nodes:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: {{ num_infra }}
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: subtype-k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: node
+ subtype: infra
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: infra
+ cluster_id: {{ stack_name }}
+ type: node
+ subtype: infra
+ node_labels:
+ region: infra
+ image: {{ openstack_image }}
+ flavor: {{ infra_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+{% if openstack_flat_secgrp|bool %}
+ - { get_resource: flat-secgrp }
+{% else %}
+ - { get_resource: node-secgrp }
+ - { get_resource: infra-secgrp }
+{% endif %}
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: {{ infra_volume_size }}
+ depends_on:
+ - interface
+
+ dns:
+ type: OS::Heat::ResourceGroup
+ properties:
+ count: {{ num_dns }}
+ resource_def:
+ type: server.yaml
+ properties:
+ name:
+ str_replace:
+ template: k8s_type-%index%.cluster_id
+ params:
+ cluster_id: {{ stack_name }}
+ k8s_type: dns
+ cluster_env: {{ public_dns_domain }}
+ cluster_id: {{ stack_name }}
+ group:
+ str_replace:
+ template: k8s_type.cluster_id
+ params:
+ k8s_type: dns
+ cluster_id: {{ stack_name }}
+ type: dns
+ image: {{ openstack_image }}
+ flavor: {{ dns_flavor }}
+ key_name: {{ ssh_public_key }}
+ net: { get_resource: net }
+ subnet: { get_resource: subnet }
+ secgrp:
+{% if openstack_flat_secgrp|bool %}
+ - { get_resource: flat-secgrp }
+{% else %}
+ - { get_resource: node-secgrp }
+{% endif %}
+ - { get_resource: dns-secgrp }
+ floating_network: {{ external_network }}
+ net_name:
+ str_replace:
+ template: openshift-ansible-cluster_id-net
+ params:
+ cluster_id: {{ stack_name }}
+ volume_size: {{ dns_volume_size }}
+ depends_on:
+ - interface
+
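[Editor's note: since the template is rendered to a temp directory before `os_stack` runs, the rendered file can also be sanity-checked by hand against Heat, for example (path hypothetical; assumes python-openstackclient with the orchestration plugin):

```
$ openstack orchestration template validate -t /tmp/casl-ansibleXXXXXX/stack.yaml
```
]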
diff --git a/roles/openstack-stack/templates/heat_stack_server.yaml.j2 b/roles/openstack-stack/templates/heat_stack_server.yaml.j2
new file mode 100644
index 000000000..5851d3b9b
--- /dev/null
+++ b/roles/openstack-stack/templates/heat_stack_server.yaml.j2
@@ -0,0 +1,170 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster server
+
+parameters:
+
+ name:
+ type: string
+ label: Name
+ description: Name
+
+ group:
+ type: string
+ label: Host Group
+ description: The Primary Ansible Host Group
+ default: host
+
+ cluster_env:
+ type: string
+ label: Cluster environment
+ description: Environment of the cluster
+
+ cluster_id:
+ type: string
+ label: Cluster ID
+ description: Identifier of the cluster
+
+ type:
+ type: string
+ label: Type
+ description: Type master or node
+
+ subtype:
+ type: string
+ label: Sub-type
+ description: Sub-type compute or infra for nodes, default otherwise
+ default: default
+
+ key_name:
+ type: string
+ label: Key name
+ description: Key name of keypair
+
+ image:
+ type: string
+ label: Image
+ description: Name of the image
+
+ flavor:
+ type: string
+ label: Flavor
+ description: Name of the flavor
+
+ net:
+ type: string
+ label: Net ID
+ description: Net resource
+
+ net_name:
+ type: string
+ label: Net name
+ description: Net name
+
+ subnet:
+ type: string
+ label: Subnet ID
+ description: Subnet resource
+
+ secgrp:
+ type: comma_delimited_list
+ label: Security groups
+ description: Security group resources
+
+ floating_network:
+ type: string
+ label: Floating network
+ description: Network to allocate floating IP from
+
+ availability_zone:
+ type: string
+ description: The Availability Zone to launch the instance.
+ default: nova
+
+ volume_size:
+ type: number
+ description: Size of the volume to be created.
+ default: 1
+ constraints:
+ - range: { min: 1, max: 1024 }
+ description: must be between 1 and 1024 GB.
+
+ node_labels:
+ type: json
+ description: OpenShift Node Labels
+ default: {"region": "default" }
+
+outputs:
+
+ name:
+ description: Name of the server
+ value: { get_attr: [ server, name ] }
+
+ private_ip:
+ description: Private IP of the server
+ value:
+ get_attr:
+ - server
+ - addresses
+ - { get_param: net_name }
+ - 0
+ - addr
+
+ floating_ip:
+ description: Floating IP of the server
+ value:
+ get_attr:
+ - server
+ - addresses
+ - { get_param: net_name }
+ - 1
+ - addr
+
+resources:
+
+ server:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: name }
+ key_name: { get_param: key_name }
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ networks:
+ - port: { get_resource: port }
+ user_data:
+ get_file: user-data
+ user_data_format: RAW
+ metadata:
+ group: { get_param: group }
+ environment: { get_param: cluster_env }
+ clusterid: { get_param: cluster_id }
+ host-type: { get_param: type }
+ sub-host-type: { get_param: subtype }
+ node_labels: { get_param: node_labels }
+
+ port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_param: net }
+ fixed_ips:
+ - subnet: { get_param: subnet }
+ security_groups: { get_param: secgrp }
+
+ floating-ip:
+ type: OS::Neutron::FloatingIP
+ properties:
+ floating_network: { get_param: floating_network }
+ port_id: { get_resource: port }
+
+ cinder_volume:
+ type: OS::Cinder::Volume
+ properties:
+ size: { get_param: volume_size }
+ availability_zone: { get_param: availability_zone }
+
+ volume_attachment:
+ type: OS::Cinder::VolumeAttachment
+ properties:
+ volume_id: { get_resource: cinder_volume }
+ instance_uuid: { get_resource: server }
+ mountpoint: /dev/sdb
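[Editor's note: the `private_ip`/`floating_ip` outputs index into `addresses[net_name]` (fixed address first, floating IP second), and the parent template's `*_ips` outputs aggregate them per resource group. Once a stack exists, those outputs can be read back, e.g. with the stack name used by the test playbook further below:

```
$ openstack stack output show test-stack master_ips
```
]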
diff --git a/roles/openstack-stack/templates/user_data.j2 b/roles/openstack-stack/templates/user_data.j2
new file mode 100644
index 000000000..eb65f7cec
--- /dev/null
+++ b/roles/openstack-stack/templates/user_data.j2
@@ -0,0 +1,13 @@
+#cloud-config
+disable_root: true
+
+system_info:
+ default_user:
+ name: openshift
+ sudo: ["ALL=(ALL) NOPASSWD: ALL"]
+
+write_files:
+ - path: /etc/sudoers.d/00-openshift-no-requiretty
+ permissions: '0440'
+ content: |
+ Defaults:openshift !requiretty
diff --git a/roles/openstack-stack/test/roles b/roles/openstack-stack/test/roles
new file mode 120000
index 000000000..e2b799b9d
--- /dev/null
+++ b/roles/openstack-stack/test/roles
@@ -0,0 +1 @@
+../../../roles/ \ No newline at end of file
diff --git a/roles/openstack-stack/test/stack-create-test.yml b/roles/openstack-stack/test/stack-create-test.yml
new file mode 100644
index 000000000..0fbf66f34
--- /dev/null
+++ b/roles/openstack-stack/test/stack-create-test.yml
@@ -0,0 +1,16 @@
+---
+- hosts: localhost
+ roles:
+ - role: openstack-stack
+ stack_name: test-stack
+ dns_domain: "{{ public_dns_domain }}"
+ dns_nameservers: "{{ public_dns_nameservers }}"
+ subnet_prefix: "{{ openstack_subnet_prefix }}"
+ ssh_public_key: "{{ openstack_ssh_public_key }}"
+ openstack_image: "{{ openstack_default_image_name }}"
+ etcd_flavor: "{{ openstack_default_flavor }}"
+ master_flavor: "{{ openstack_default_flavor }}"
+ node_flavor: "{{ openstack_default_flavor }}"
+ infra_flavor: "{{ openstack_default_flavor }}"
+ dns_flavor: "{{ openstack_default_flavor }}"
+ external_network: "{{ openstack_external_network_name }}"
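[Editor's note: the test play resolves all of its parameters from `openstack_*` variables, so they must be supplied via inventory or extra-vars. A hedged invocation with made-up values:

```
$ ansible-playbook roles/openstack-stack/test/stack-create-test.yml \
    -e openstack_subnet_prefix=192.168.23 \
    -e openstack_default_image_name=rhel7 \
    -e openstack_default_flavor=m1.medium \
    -e openstack_external_network_name=public \
    -e openstack_ssh_public_key=my-keypair \
    -e public_dns_domain=example.com \
    -e '{"public_dns_nameservers": ["8.8.8.8"]}'
```
]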
diff --git a/roles/subscription-manager/README.md b/roles/subscription-manager/README.md
new file mode 100644
index 000000000..748de282c
--- /dev/null
+++ b/roles/subscription-manager/README.md
@@ -0,0 +1,156 @@
+# Red Hat Subscription Manager Ansible Role
+
+## Parameters
+
+This role depends on user specified variables. These can be set in the inventory file, in group_vars, or passed to the playbook from the CLI. No values are set by default, which disables this role. The variables are:
+
+### rhsm_satellite
+
+Subscription Manager server hostname. If using a Satellite server, set the FQDN here. If using RHSM Hosted, this value must be left blank, none, or false.
+
+Default: none
+
+### rhsm_username
+
+Subscription Manager username. Required for RHSM Hosted. Can be optionally used for Satellite, but it may be better to use **rhsm_activationkey** for this.
+
+Default: none
+
+### rhsm_password
+
+Subscription Manager password. Required for RHSM Hosted. Can be optionally used for Satellite, but it may be better to use **rhsm_activationkey** for this.
+
+NOTE: If this variable is specified on the command-line or set in a variable file it may leave your password exposed. For this reason you may prefer to use an Activation Key if using Satellite. For RHSM Hosted, your password must be specified. There are two ways to provide the password to the Ansible playbook without exposing it to prying eyes.
+
+1. The first method is to use a **vars_prompt** to collect the password up front one time for the playbook. Ansible will not display the password if the prompt is configured as **private** and the task will not display the password on the CLI. This is a good method as it supports automating the task to every host with only one password entry. To enable **vars_prompt** add the following to the very top of your playbook after the **hosts** declaration and before any **pre_tasks** section:
+
+ ```
+ - hosts: localhost
+ # Add the following lines after a -hosts: declaration and before pre_tasks:
+ # Start of vars_prompt code block
+ vars_prompt:
+ - name: "rhsm_password"
+ prompt: "Subscription Manager password"
+ confirm: yes
+ private: yes
+ # End of vars_prompt code block
+ pre_tasks:
+ ```
+
+2. A second method is to use an encrypted file via **ansible-vault**. This does not require modifying any code as the previous method does, but does require more work to create and encrypt the file. To accomplish this, first create a file containing at least the **rhsm_password** variable (it is also possible to specify additional variables to encrypt them all as well):
+ 1. Create a file to contain the variable such as **secrets.yml**:
+
+ ```
+ ---
+ rhsm_password: "my_secret_password"
+ # other variables can optionally be placed here as well
+ ```
+
+ 2. Encrypt the file with **ansible-vault**:
+
+ ```
+ $ ansible-vault encrypt secrets.yml
+ Vault password:
+ Confirm Vault password:
+ Encryption successful
+ ```
+
+ 3. When executing **ansible-playbook** specify **--ask-vault-pass** to be prompted for the decryption password, and also specify the location of the **secrets.yml** as such:
+
+ ```
+ $ ansible-playbook --ask-vault-pass --extra-vars=@secrets.yml --extra-vars="rhsm_username=myusername" <other playbook options>
+ ```
+
+ NOTE: Optionally the file containing the encrypted variables can be decrypted with **ansible-vault** and the **--ask-vault-pass** option omitted to prevent any password prompting (for automated runs) and the file can be encrypted after the run. This can be used if an external system such as Jenkins would handle the decryption/encryption outside of Ansible.
+
+Default: none
+
+### rhsm_org
+
+Optional Subscription Manager Satellite Organization. Required for Satellite, ignored if using RHSM Hosted.
+
+Default: none
+
+### rhsm_activationkey
+
+Optional Subscription Manager Satellite Activation Key, use this instead of **rhsm_username** and **rhsm_password** if using Satellite to provide repositories and authentication in a key instead.
+
+Default: none
+
+### rhsm_pool
+
+Optional Subscription Manager pool; determine this by running **subscription-manager list --available** on a registered system. Valid for RHSM Hosted or Satellite. Specifying **rhsm_activationkey** will ignore this option.
+
+Default: none
+
+### rhsm_repos
+
+Optional list of repositories to enable. If left blank, it is expected that the **rhsm_activationkey** will specify repos instead. If populated, a **subscription-manager repos --disable=\*** will be run and each of the specified repos explicitly enabled. Valid for RHSM Hosted or Satellite.
+
+NOTE: If specifying this value in an inventory file as opposed to group_vars, be sure to define it as a proper list as such:
+
+rhsm_repos='["rhel-7-server-rpms", "rhel-7-server-ose-3.1-rpms", "rhel-7-server-extras-rpms"]'
+
+Default: none
+
+## Calling This Role
+Calling this role is done in both the **pre_tasks** and **roles** sections of a playbook, and optionally a **vars_prompt**.
+
+### vars_prompt
+Unfortunately **vars_prompt** can only be used at the play level before role tasks are executed, so this is the only place it can go. It also cannot be shown conditionally. For this reason it is not included in this role by default. A better method may be using a file containing the password variable encrypted with **ansible-vault**. See the **rhsm_password** section for more details.
+
+To add a prompt to capture **rhsm_password**:
+
+```
+- hosts: localhost
+  # Add the following lines after a '- hosts:' declaration and before pre_tasks:
+  # Start of vars_prompt code block
+  vars_prompt:
+    - name: "rhsm_password"
+      prompt: "Subscription Manager password"
+      confirm: yes
+      private: yes
+  # End of vars_prompt code block
+  pre_tasks:
+```
+
+### pre_tasks
+
+A number of variable checks are performed before any tasks run to ensure the proper parameters are set. To include these checks, call the pre_tasks YAML file before any roles:
+
+```
+pre_tasks:
+  - include: roles/subscription-manager/pre_tasks/pre_tasks.yml
+```
+
+### roles
+
+The bulk of the work is performed in main.yml for this role. The pre_tasks play sets a variable which can be checked to conditionally include this role, as such:
+
+```
+roles:
+  - { role: subscription-manager, when: hostvars.localhost.rhsm_register, tags: 'subscription-manager' }
+```
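+
+Putting these pieces together, a complete wiring of the prompt, the checks, and the role might look like the following sketch (the **cluster_hosts** group name is a placeholder for your own inventory group):
+
+```
+# Play 1: prompt for the password and run the variable checks on localhost
+- hosts: localhost
+  vars_prompt:
+    - name: "rhsm_password"
+      prompt: "Subscription Manager password"
+      confirm: yes
+      private: yes
+  pre_tasks:
+    - include: roles/subscription-manager/pre_tasks/pre_tasks.yml
+
+# Play 2: conditionally apply the role to the hosts being registered
+- hosts: cluster_hosts
+  roles:
+    - { role: subscription-manager, when: hostvars.localhost.rhsm_register, tags: 'subscription-manager' }
+```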
+
+## Running Playbooks with this Role
+
+- To register to RHSM Hosted or Satellite with a username and plain text password (NOTE: This may retain your password in your CLI history):
+
+  ```
+  $ ansible-playbook --extra-vars="rhsm_username=vvaldez rhsm_password=my_secret_password" <other playbook options>
+  ```
+
+- To register to RHSM Hosted or Satellite with username and an encrypted file containing the password:
+
+  ```
+  $ ansible-playbook --ask-vault-pass --extra-vars=@secrets.yml --extra-vars="rhsm_username=myusername" <other playbook options>
+  ```
+
+- To register to a Satellite server with an activation key:
+
+  ```
+  $ ansible-playbook --extra-vars="rhsm_satellite=satellite.example.com rhsm_org=example_org rhsm_activationkey=rhel-7-ose-3-1" <other playbook options>
+  ```
+
+- To skip all Subscription Manager activities, simply do not set any of these parameters.
diff --git a/roles/subscription-manager/pre_tasks/pre_tasks.yml b/roles/subscription-manager/pre_tasks/pre_tasks.yml
new file mode 100644
index 000000000..464670fc0
--- /dev/null
+++ b/roles/subscription-manager/pre_tasks/pre_tasks.yml
@@ -0,0 +1,45 @@
+---
+- name: "Set password fact"
+  set_fact:
+    rhsm_password: "{{ rhsm_password | default(None) }}"
+  no_log: true
+
+- name: "Initialize Subscription Manager fact"
+  set_fact:
+    rhsm_register: true
+
+- name: "Determine if Subscription Manager should be used"
+  set_fact:
+    rhsm_register: false
+  when:
+    - rhsm_satellite is undefined or rhsm_satellite is none or rhsm_satellite|trim == ''
+    - rhsm_username is undefined or rhsm_username is none or rhsm_username|trim == ''
+    - rhsm_password is undefined or rhsm_password is none or rhsm_password|trim == ''
+    - rhsm_org is undefined or rhsm_org is none or rhsm_org|trim == ''
+    - rhsm_activationkey is undefined or rhsm_activationkey is none or rhsm_activationkey|trim == ''
+    - rhsm_pool is undefined or rhsm_pool is none or rhsm_pool|trim == ''
+
+- name: "Validate Subscription Manager organization is set"
+  fail: msg="Cannot register to a Satellite server without a value for the Organization via 'rhsm_org'"
+  when:
+    - rhsm_org is undefined or rhsm_org is none or rhsm_org|trim == ''
+    - rhsm_satellite is defined
+    - rhsm_satellite is not none
+    - rhsm_satellite|trim != ''
+    - rhsm_register
+
+- name: "Validate Subscription Manager authentication is defined"
+  fail: msg="Cannot register without ('rhsm_username' and 'rhsm_password') or 'rhsm_activationkey' variables set. See the README.md for details on securely prompting for a password"
+  when:
+    - (rhsm_username is undefined or rhsm_username is none or rhsm_username|trim == '') or (rhsm_password is undefined or rhsm_password is none or rhsm_password|trim == '')
+    - rhsm_activationkey is undefined or rhsm_activationkey is none or rhsm_activationkey|trim == ''
+    - rhsm_register
+
+- name: "Validate activation key and Hosted are not requested together"
+  fail: msg="Cannot register to RHSM Hosted with 'rhsm_activationkey'"
+  when:
+    - rhsm_satellite is undefined or rhsm_satellite is none or rhsm_satellite|trim == ''
+    - rhsm_activationkey is defined
+    - rhsm_activationkey is not none
+    - rhsm_activationkey|trim != ''
+    - rhsm_register
diff --git a/roles/subscription-manager/tasks/main.yml b/roles/subscription-manager/tasks/main.yml
new file mode 100644
index 000000000..8c1ae697a
--- /dev/null
+++ b/roles/subscription-manager/tasks/main.yml
@@ -0,0 +1,122 @@
+---
+- name: "Initialize rhsm_password variable if vars_prompt was used"
+  set_fact:
+    rhsm_password: "{{ hostvars.localhost.rhsm_password }}"
+  when:
+    - rhsm_password is not defined or rhsm_password is none or rhsm_password|trim == ''
+
+- name: "Initializing Subscription Manager authentication method"
+  set_fact:
+    rhsm_authentication: false
+
+# 'rhsm_activationkey' will take precedence even if 'rhsm_username' and 'rhsm_password' are also set
+- name: "Setting Subscription Manager Activation Key Fact"
+  set_fact:
+    rhsm_authentication: "key"
+  when:
+    - rhsm_activationkey is defined
+    - rhsm_activationkey is not none
+    - rhsm_activationkey|trim != ''
+    - not rhsm_authentication
+
+# If 'rhsm_username' and 'rhsm_password' are set but not 'rhsm_activationkey', set 'rhsm_authentication' to password
+- name: "Setting Subscription Manager Username and Password Fact"
+  set_fact:
+    rhsm_authentication: "password"
+  when:
+    - rhsm_username is defined
+    - rhsm_username is not none
+    - rhsm_username|trim != ''
+    - rhsm_password is defined
+    - rhsm_password is not none
+    - rhsm_password|trim != ''
+    - not rhsm_authentication
+
+- name: "Initializing registration status"
+  set_fact:
+    registered: false
+
+- name: "Checking subscription status (a failure means it is not registered and will be)"
+  command: "/usr/bin/subscription-manager status"
+  ignore_errors: yes
+  changed_when: no
+  register: check_if_registered
+
+- name: "Set registration fact if system is already registered"
+  set_fact:
+    registered: true
+  when: check_if_registered.rc == 0
+
+- name: "Cleaning any old subscriptions"
+  command: "/usr/bin/subscription-manager clean"
+  when:
+    - not registered
+    - rhsm_authentication is defined
+
+- name: "Install Satellite certificate"
+  command: "rpm -Uvh --force http://{{ rhsm_satellite }}/pub/katello-ca-consumer-latest.noarch.rpm"
+  when:
+    - not registered
+    - rhsm_satellite is defined
+    - rhsm_satellite is not none
+    - rhsm_satellite|trim != ''
+
+- name: "Register to Satellite using activation key"
+  command: "/usr/bin/subscription-manager register --activationkey={{ rhsm_activationkey }} --org='{{ rhsm_org }}'"
+  when:
+    - not registered
+    - rhsm_authentication == 'key'
+    - rhsm_satellite is defined
+    - rhsm_satellite is not none
+    - rhsm_satellite|trim != ''
+
+# This can apply to either Hosted or Satellite
+- name: "Register using username and password"
+  command: "/usr/bin/subscription-manager register --username={{ rhsm_username }} --password={{ rhsm_password }}"
+  no_log: true
+  when:
+    - not registered
+    - rhsm_authentication == "password"
+    - rhsm_org is not defined or rhsm_org is none or rhsm_org|trim == ''
+
+# This can apply to either Hosted or Satellite
+- name: "Register using username, password and organization"
+  command: "/usr/bin/subscription-manager register --username={{ rhsm_username }} --password={{ rhsm_password }} --org={{ rhsm_org }}"
+  no_log: true
+  when:
+    - not registered
+    - rhsm_authentication == "password"
+    - rhsm_org is defined
+    - rhsm_org is not none
+    - rhsm_org|trim != ''
+
+- name: "Auto-attach to Subscription Manager Pool"
+  command: "/usr/bin/subscription-manager attach --auto"
+  when:
+    - not registered
+    - rhsm_pool is undefined or rhsm_pool is none or rhsm_pool|trim == ''
+
+- name: "Attach to a specific pool"
+  command: "/usr/bin/subscription-manager attach --pool={{ rhsm_pool }}"
+  when:
+    - rhsm_pool is defined
+    - rhsm_pool is not none
+    - rhsm_pool|trim != ''
+    - not registered
+
+- name: "Disable all repositories"
+  command: "/usr/bin/subscription-manager repos --disable=*"
+  when:
+    - not registered
+    - rhsm_repos is defined
+    - rhsm_repos is not none
+    - rhsm_repos|trim != ''
+
+- name: "Enable specified repositories"
+  command: "/usr/bin/subscription-manager repos --enable={{ item }}"
+  with_items: "{{ rhsm_repos }}"
+  when:
+    - not registered
+    - rhsm_repos is defined
+    - rhsm_repos is not none
+    - rhsm_repos|trim != ''