| author | Jason DeTiberus <jdetiber@redhat.com> | 2015-04-01 15:09:19 -0400 |
|---|---|---|
| committer | Jason DeTiberus <jdetiber@redhat.com> | 2015-04-14 23:29:16 -0400 |
| commit | 6a4b7a5eb6c4b5e747bab795e2428d7c3992f559 (patch) | |
| tree | 2519948f1eb8c372192ed4fd8805adc71da8433d /playbooks/gce/openshift-cluster | |
| parent | c85e91fdca031eba06481a24f74aa076ae9a4d38 (diff) | |
Configuration updates for latest builds and major refactor
Configuration updates for latest builds
- Switch to using create-node-config
- Switch sdn services to use etcd over SSL
- This re-uses the client certificate deployed on each node
- Additional node registration changes
- Do not assume that metadata service is available in openshift_facts module
- Call systemctl daemon-reload after installing openshift-master, openshift-sdn-master, openshift-node, and openshift-sdn-node (sketched after this list)
- Fix bug overriding openshift_hostname and openshift_public_hostname in byo playbooks
- Start moving generated configs to /etc/openshift
- Some custom module cleanup
- Add known issue with ansible-1.9 to README_OSE.md
- Genericize the kubernetes_register_node module
- Default to using kubectl for commands
- Allow for overriding kubectl_cmd
- In openshift_register_node role, override kubectl_cmd to openshift_kube
- Set default openshift_registry_url for enterprise when deployment_type is enterprise
- Fix openshift_register_node for client config change
- Ensure that master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master and node
- Allow non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update the byo inventory to add additional comments
- Update node cert/config sync to work with a non-root user using sudo
- Move node config/certs to /etc/openshift/node
- Don't use a path for mktemp; addresses https://github.com/openshift/openshift-ansible/issues/154
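
The daemon-reload step called out above can be sketched as follows. This is a minimal illustration, not the exact tasks from this commit; the play target and handler name are assumptions, and only the openshift-node package (one of the four listed above) is shown.

```yaml
# Minimal sketch, assuming a handler-based approach; the handler name and play
# target are illustrative, not taken from this commit. Installing openshift-node
# drops new systemd unit files, so systemd is told to reload them before the
# service is started.
- hosts: nodes
  tasks:
  - name: Install openshift-node
    yum: pkg=openshift-node state=installed
    notify: reload systemd units
  handlers:
  - name: reload systemd units
    command: systemctl daemon-reload
```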
Create common playbooks
- create common/openshift-master/config.yml
- create common/openshift-node/config.yml
- update playbooks to use the new common playbooks (see the include example after this list)
- update launch playbooks to call update playbooks
- fix openshift_registry and openshift_node_ip usage
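
The hand-off from a provider playbook to the new common playbooks looks like this; the snippet is taken from the new playbooks/gce/openshift-cluster/config.yml in the diff further down this page.

```yaml
# From the new GCE config.yml added by this commit: after the provider-specific
# plays evaluate the oo_* host groups, the remaining work is delegated to the
# common cluster config playbook.
- include: ../../common/openshift-cluster/config.yml
  vars:
    openshift_cluster_id: "{{ cluster_id }}"
    openshift_debug_level: 4
    openshift_deployment_type: "{{ deployment_type }}"
    openshift_hostname: "{{ gce_private_ip }}"
```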
Set default deployment type to origin
- openshift_repo updates for enabling origin deployments
- also separate repo and gpgkey file structure
- remove kubernetes repo since it isn't currently needed
- full deployment type support for bin/cluster
- honor OS_DEPLOYMENT_TYPE env variable
- add --deployment-type option, which will override OS_DEPLOYMENT_TYPE if set
- if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to origin installs (see the deployment_vars example after this list)
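
The deployment-type plumbing is driven by a per-provider deployment_vars map; the GCE version added by this commit (see vars.yml in the diff below) is:

```yaml
# From the new playbooks/gce/openshift-cluster/vars.yml: each deployment type
# selects a machine image plus the ssh user and sudo settings that the add_host
# tasks read via deployment_vars[deployment_type].
deployment_vars:
  origin:
    image: centos-7
    ssh_user:
    sudo: yes
  online:
    image: libra-rhel7
    ssh_user: root
    sudo: no
  enterprise:
    image: rhel-7
    ssh_user:
    sudo: yes
```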
Additional changes:
- Add a separate config action to bin/cluster that runs the ansible configuration but does not update packages
- Further reduce duplication in the cluster playbooks
- Rename task files in the playbooks directories to include "tasks" in their names for clarity
- Update the aws/gce scripts to use a directory for inventory (otherwise, when dynamic inventory returns no hosts, an error occurs)
libvirt refactor and update
- add libvirt dynamic inventory
- updates to use dynamic inventory for libvirt
Diffstat (limited to 'playbooks/gce/openshift-cluster')
| Mode | File | Lines changed |
|---|---|---|
| -rw-r--r-- | playbooks/gce/openshift-cluster/config.yml | 37 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/launch.yml | 72 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/list.yml | 15 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/tasks/launch_instances.yml (renamed from playbooks/gce/openshift-cluster/launch_instances.yml) | 26 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/terminate.yml | 22 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/update.yml | 25 |
| -rw-r--r-- | playbooks/gce/openshift-cluster/vars.yml | 14 |
7 files changed, 126 insertions, 85 deletions
```diff
diff --git a/playbooks/gce/openshift-cluster/config.yml b/playbooks/gce/openshift-cluster/config.yml
new file mode 100644
index 000000000..8b8490246
--- /dev/null
+++ b/playbooks/gce/openshift-cluster/config.yml
@@ -0,0 +1,37 @@
+---
+# TODO: fix firewall related bug with GCE and origin, since GCE is overriding
+# /etc/sysconfig/iptables
+- name: Populate oo_masters_to_config host group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_masters_to_config
+    add_host:
+      name: "{{ item }}"
+      groups: oo_masters_to_config
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-master"] | default([])
+  - name: Evaluate oo_nodes_to_config
+    add_host:
+      name: "{{ item }}"
+      groups: oo_nodes_to_config
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-node"] | default([])
+  - name: Evaluate oo_first_master
+    add_host:
+      name: "{{ groups['tag_env-host-type-' ~ cluster_id ~ '-openshift-master'][0] }}"
+      groups: oo_first_master
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    when: "'tag_env-host-type-{{ cluster_id }}-openshift-master' in groups"
+
+- include: ../../common/openshift-cluster/config.yml
+  vars:
+    openshift_cluster_id: "{{ cluster_id }}"
+    openshift_debug_level: 4
+    openshift_deployment_type: "{{ deployment_type }}"
+    openshift_hostname: "{{ gce_private_ip }}"
diff --git a/playbooks/gce/openshift-cluster/launch.yml b/playbooks/gce/openshift-cluster/launch.yml
index 14cdd2537..34a5a0b94 100644
--- a/playbooks/gce/openshift-cluster/launch.yml
+++ b/playbooks/gce/openshift-cluster/launch.yml
@@ -4,59 +4,25 @@
   connection: local
   gather_facts: no
   vars_files:
-    - vars.yml
+  - vars.yml
   tasks:
-    - set_fact: k8s_type="master"
-
-    - name: Generate master instance names(s)
-      set_fact: scratch={{ cluster_id }}-{{ k8s_type }}-{{ '%05x' |format( 1048576 |random) }}
-      register: master_names_output
-      with_sequence: start=1 end={{ num_masters }}
-
-    # These set_fact's cannot be combined
-    - set_fact:
-        master_names_string: "{% for item in master_names_output.results %}{{ item.ansible_facts.scratch }} {% endfor %}"
-
-    - set_fact:
-        master_names: "{{ master_names_string.strip().split(' ') }}"
-
-    - include: launch_instances.yml
-      vars:
-        instances: "{{ master_names }}"
-        cluster: "{{ cluster_id }}"
-        type: "{{ k8s_type }}"
-
-    - set_fact: k8s_type="node"
-
-    - name: Generate node instance names(s)
-      set_fact: scratch={{ cluster_id }}-{{ k8s_type }}-{{ '%05x' |format( 1048576 |random) }}
-      register: node_names_output
-      with_sequence: start=1 end={{ num_nodes }}
-
-    # These set_fact's cannot be combined
-    - set_fact:
-        node_names_string: "{% for item in node_names_output.results %}{{ item.ansible_facts.scratch }} {% endfor %}"
-
-    - set_fact:
-        node_names: "{{ node_names_string.strip().split(' ') }}"
-
-    - include: launch_instances.yml
-      vars:
-        instances: "{{ node_names }}"
-        cluster: "{{ cluster_id }}"
-        type: "{{ k8s_type }}"
-
-- hosts: "tag_env-{{ cluster_id }}"
-  roles:
-  - openshift_repos
-  - os_update_latest
-
-- include: ../openshift-master/config.yml
-  vars:
-    oo_host_group_exp: "groups[\"tag_env-host-type-{{ cluster_id }}-openshift-master\"]"
-
-- include: ../openshift-node/config.yml
-  vars:
-    oo_host_group_exp: "groups[\"tag_env-host-type-{{ cluster_id }}-openshift-node\"]"
+  - fail: msg="Deployment type not supported for libvirt provider yet"
+    when: deployment_type == 'enterprise'
+
+  - include: ../../common/openshift-cluster/set_master_launch_facts_tasks.yml
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ master_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+
+  - include: ../../common/openshift-cluster/set_node_launch_facts_tasks.yml
+  - include: tasks/launch_instances.yml
+    vars:
+      instances: "{{ node_names }}"
+      cluster: "{{ cluster_id }}"
+      type: "{{ k8s_type }}"
+
+- include: update.yml

 - include: list.yml
diff --git a/playbooks/gce/openshift-cluster/list.yml b/playbooks/gce/openshift-cluster/list.yml
index 1124b0ea3..bab2fb9f8 100644
--- a/playbooks/gce/openshift-cluster/list.yml
+++ b/playbooks/gce/openshift-cluster/list.yml
@@ -2,16 +2,23 @@
 - name: Generate oo_list_hosts group
   hosts: localhost
   gather_facts: no
+  vars_files:
+  - vars.yml
   tasks:
   - set_fact: scratch_group=tag_env-{{ cluster_id }}
     when: cluster_id != ''
   - set_fact: scratch_group=all
-    when: scratch_group is not defined
-  - add_host: name={{ item }} groups=oo_list_hosts
-    with_items: groups[scratch_group] | difference(['localhost']) | difference(groups.status_terminated)
+    when: cluster_id == ''
+  - add_host:
+      name: "{{ item }}"
+      groups: oo_list_hosts
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)

 - name: List Hosts
   hosts: oo_list_hosts
   gather_facts: no
   tasks:
-  - debug: msg="public:{{hostvars[inventory_hostname].gce_public_ip}} private:{{hostvars[inventory_hostname].gce_private_ip}}"
+  - debug:
+      msg: "public ip:{{ hostvars[inventory_hostname].gce_public_ip }} private ip:{{ hostvars[inventory_hostname].gce_private_ip }} deployment-type: {{ hostvars[inventory_hostname].group_names | oo_get_deployment_type_from_groups }}"
diff --git a/playbooks/gce/openshift-cluster/launch_instances.yml b/playbooks/gce/openshift-cluster/tasks/launch_instances.yml
index b4f33bd87..a68edefae 100644
--- a/playbooks/gce/openshift-cluster/launch_instances.yml
+++ b/playbooks/gce/openshift-cluster/tasks/launch_instances.yml
@@ -2,41 +2,39 @@
 # TODO: when we are ready to go to ansible 1.9+ support only, we can update to
 # the gce task to use the disk_auto_delete parameter to avoid having to delete
 # the disk as a separate step on termination
-
-- set_fact:
-    machine_type: "{{ lookup('env', 'gce_machine_type') |default('n1-standard-1', true) }}"
-    machine_image: "{{ lookup('env', 'gce_machine_image') |default('libra-rhel7', true) }}"
-
 - name: Launch instance(s)
   gce:
     instance_names: "{{ instances }}"
-    machine_type: "{{ machine_type }}"
-    image: "{{ machine_image }}"
+    machine_type: "{{ lookup('env', 'gce_machine_type') | default('n1-standard-1', true) }}"
+    image: "{{ lookup('env', 'gce_machine_image') | default(deployment_vars[deployment_type].image, true) }}"
     service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
     pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
    project_id: "{{ lookup('env', 'gce_project_id') }}"
     tags:
-    - "created-by-{{ lookup('env', 'LOGNAME') |default(cluster, true) }}"
-    - "env-{{ cluster }}"
-    - "host-type-{{ type }}"
-    - "env-host-type-{{ cluster }}-openshift-{{ type }}"
+    - created-by-{{ lookup('env', 'LOGNAME') |default(cluster, true) }}
+    - env-{{ cluster }}
+    - host-type-{{ type }}
+    - env-host-type-{{ cluster }}-openshift-{{ type }}
+    - deployment-type-{{ deployment_type }}
   register: gce

 - name: Add new instances to groups and set variables needed
   add_host:
     hostname: "{{ item.name }}"
     ansible_ssh_host: "{{ item.public_ip }}"
+    ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+    ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
     groups: "{{ item.tags | oo_prepend_strings_in_list('tag_') | join(',') }}"
     gce_public_ip: "{{ item.public_ip }}"
     gce_private_ip: "{{ item.private_ip }}"
   with_items: gce.instance_data

 - name: Wait for ssh
-  wait_for: "port=22 host={{ item.public_ip }}"
+  wait_for: port=22 host={{ item.public_ip }}
   with_items: gce.instance_data

-- name: Wait for root user setup
-  command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null root@{{ item.public_ip }} echo root user is setup"
+- name: Wait for user setup
+  command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null {{ hostvars[item.name].ansible_ssh_user }}@{{ item.public_ip }} echo {{ hostvars[item.name].ansible_ssh_user }} user is setup"
   register: result
   until: result.rc == 0
   retries: 20
diff --git a/playbooks/gce/openshift-cluster/terminate.yml b/playbooks/gce/openshift-cluster/terminate.yml
index 0281ae953..abe6a4c95 100644
--- a/playbooks/gce/openshift-cluster/terminate.yml
+++ b/playbooks/gce/openshift-cluster/terminate.yml
@@ -1,20 +1,34 @@
 ---
 - name: Terminate instance(s)
   hosts: localhost
-
+  gather_facts: no
   vars_files:
-    - vars.yml
+  - vars.yml
+  tasks:
+  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-node
+  - add_host:
+      name: "{{ item }}"
+      groups: oo_nodes_to_terminate
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
+
+  - set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-master
+  - add_host:
+      name: "{{ item }}"
+      groups: oo_masters_to_terminate
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)

 - include: ../openshift-node/terminate.yml
   vars:
-    oo_host_group_exp: 'groups["tag_env-host-type-{{ cluster_id }}-openshift-node"]'
     gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
     gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
     gce_project_id: "{{ lookup('env', 'gce_project_id') }}"

 - include: ../openshift-master/terminate.yml
   vars:
-    oo_host_group_exp: 'groups["tag_env-host-type-{{ cluster_id }}-openshift-master"]'
     gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
     gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
     gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
diff --git a/playbooks/gce/openshift-cluster/update.yml b/playbooks/gce/openshift-cluster/update.yml
index 973e4c3ef..9ebf39a13 100644
--- a/playbooks/gce/openshift-cluster/update.yml
+++ b/playbooks/gce/openshift-cluster/update.yml
@@ -1,13 +1,18 @@
 ---
-- hosts: "tag_env-{{ cluster_id }}"
-  roles:
-  - openshift_repos
-  - os_update_latest
+- name: Populate oo_hosts_to_update group
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+  - vars.yml
+  tasks:
+  - name: Evaluate oo_hosts_to_update
+    add_host:
+      name: "{{ item }}"
+      groups: oo_hosts_to_update
+      ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
+      ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
+    with_items: groups["tag_env-host-type-{{ cluster_id }}-openshift-master"] | union(groups["tag_env-host-type-{{ cluster_id }}-openshift-node"]) | default([])

-- include: ../openshift-master/config.yml
-  vars:
-    oo_host_group_exp: "groups[\"tag_env-host-type-{{ cluster_id }}-openshift-master\"]"
+- include: ../../common/openshift-cluster/update_repos_and_packages.yml

-- include: ../openshift-node/config.yml
-  vars:
-    oo_host_group_exp: "groups[\"tag_env-host-type-{{ cluster_id }}-openshift-node\"]"
+- include: config.yml
diff --git a/playbooks/gce/openshift-cluster/vars.yml b/playbooks/gce/openshift-cluster/vars.yml
index ed97d539c..ae33083b9 100644
--- a/playbooks/gce/openshift-cluster/vars.yml
+++ b/playbooks/gce/openshift-cluster/vars.yml
@@ -1 +1,15 @@
 ---
+deployment_vars:
+  origin:
+    image: centos-7
+    ssh_user:
+    sudo: yes
+  online:
+    image: libra-rhel7
+    ssh_user: root
+    sudo: no
+  enterprise:
+    image: rhel-7
+    ssh_user:
+    sudo: yes
```
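
For orientation, the tail of the new launch.yml ties the GCE playbooks together: update.yml evaluates oo_hosts_to_update and runs the common repo/package update before re-running config.yml, and list.yml prints the resulting hosts.

```yaml
# Last two includes of the new launch.yml shown above; the comments summarize
# what each included playbook does after this commit.
- include: update.yml   # evaluate oo_hosts_to_update, update repos/packages, then include config.yml

- include: list.yml     # print public/private IPs and deployment type for the cluster hosts
```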