20 files changed, 338 insertions, 184 deletions
diff --git a/.tito/packages/openshift-ansible b/.tito/packages/openshift-ansible
index d6dd5a3c8..f4ea673a5 100644
--- a/.tito/packages/openshift-ansible
+++ b/.tito/packages/openshift-ansible
@@ -1 +1 @@
-3.9.0-0.23.0 ./
+3.9.0-0.24.0 ./
diff --git a/openshift-ansible.spec b/openshift-ansible.spec
index 719e54eb9..ed730bb7f 100644
--- a/openshift-ansible.spec
+++ b/openshift-ansible.spec
@@ -10,7 +10,7 @@
 Name:           openshift-ansible
 Version:        3.9.0
-Release:        0.23.0%{?dist}
+Release:        0.24.0%{?dist}
 Summary:        Openshift and Atomic Enterprise Ansible
 License:        ASL 2.0
 URL:            https://github.com/openshift/openshift-ansible
@@ -204,6 +204,50 @@ Atomic OpenShift Utilities includes
 %changelog
+* Wed Jan 24 2018 Jenkins CD Merge Bot <smunilla@redhat.com> 3.9.0-0.24.0
+- Update CF 4.6 Beta templates in openshift_management directory
+  (simaishi@redhat.com)
+- installer: increase content width for commands, which may output URLs
+  (vrutkovs@redhat.com)
+- Only rollout console if config changed (spadgett@redhat.com)
+- Protect master installed version during node upgrades (mgugino@redhat.com)
+- [1506866] Update haproxy.cfg.j2 (rteague@redhat.com)
+- Split control plane and component install in deploy_cluster
+  (ccoleman@redhat.com)
+- Add clusterResourceOverridesEnabled to console config (spadgett@redhat.com)
+- [1537105] Add openshift_facts to flannel role (rteague@redhat.com)
+- PyYAML is required by openshift_facts on nodes (ccoleman@redhat.com)
+- Move origin-gce roles and playbooks into openshift-ansible
+  (ccoleman@redhat.com)
+- Directly select the ansible version (ccoleman@redhat.com)
+- use non-deprecated REGISTRY_OPENSHIFT_SERVER_ADDR variable to set the
+  registry hostname (bparees@redhat.com)
+- update Dockerfile to add boto3 dependency (jdiaz@redhat.com)
+- Lowercase node names when creating certificates (vrutkovs@redhat.com)
+- NFS Storage: make sure openshift_hosted_*_storage_nfs_directory are quoted
+  (vrutkovs@redhat.com)
+- Fix etcd scaleup playbook (mgugino@redhat.com)
+- Bug 1524805- ServiceCatalog now works disconnected (fabian@fabianism.us)
+- [1506750] Ensure proper hostname check override (rteague@redhat.com)
+- failed_when lists are implicitely ANDs, not ORs (vrutkovs@redhat.com)
+- un-hardcode default subnet az (jdiaz@redhat.com)
+- Ensure that node names are lowerecased before matching (sdodson@redhat.com)
+- Bug 1534020 - Only set logging and metrics URLs if console config map exists
+  (spadgett@redhat.com)
+- Add templates to v3.9 (simaishi@redhat.com)
+- Use Beta repo path (simaishi@redhat.com)
+- CF 4.6 templates (simaishi@redhat.com)
+- Add ability to mount volumes into system container nodes (mgugino@redhat.com)
+- Fix to master-internal elb scheme (mazzystr@gmail.com)
+- Allow 5 etcd hosts (sdodson@redhat.com)
+- Remove unused symlink (sdodson@redhat.com)
+- docker_creds: fix python3 exception (gscrivan@redhat.com)
+- docker_creds: fix python3 exception (gscrivan@redhat.com)
+- docker: use image from CentOS and Fedora registries (gscrivan@redhat.com)
+- crio: use Docker and CentOS registries for the image (gscrivan@redhat.com)
+- The provision_install file ends in yml not yaml! Ansible requirement
+  clarification. (mbruzek@gmail.com)
+
 * Tue Jan 23 2018 Jenkins CD Merge Bot <smunilla@redhat.com> 3.9.0-0.23.0
 - docker_image_availability: enable skopeo to use proxies (lmeyer@redhat.com)
 - Install base_packages earlier (mgugino@redhat.com)
diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml
index e89f06f17..080372c81 100644
--- a/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml
+++ b/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml
@@ -310,13 +310,8 @@
   - import_role:
       name: openshift_node
       tasks_from: upgrade.yml
-  - name: Set node schedulability
-    oc_adm_manage_node:
-      node: "{{ openshift.node.nodename | lower }}"
-      schedulable: True
-    delegate_to: "{{ groups.oo_first_master.0 }}"
-    retries: 10
-    delay: 5
-    register: node_schedulable
-    until: node_schedulable is succeeded
-    when: node_unschedulable is changed
+  - import_role:
+      name: openshift_manage_node
+      tasks_from: config.yml
+    vars:
+      openshift_master_host: "{{ groups.oo_first_master.0 }}"
diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
index 850442b3b..915fae9fd 100644
--- a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
+++ b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
@@ -50,16 +50,11 @@
   - import_role:
       name: openshift_node
      tasks_from: upgrade.yml
-  - name: Set node schedulability
-    oc_adm_manage_node:
-      node: "{{ openshift.node.nodename | lower }}"
-      schedulable: True
-    delegate_to: "{{ groups.oo_first_master.0 }}"
-    retries: 10
-    delay: 5
-    register: node_schedulable
-    until: node_schedulable is succeeded
-    when: node_unschedulable is changed
+  - import_role:
+      name: openshift_manage_node
+      tasks_from: config.yml
+    vars:
+      openshift_master_host: "{{ groups.oo_first_master.0 }}"
 
 - name: Re-enable excluders
   hosts: oo_nodes_to_upgrade:!oo_masters_to_config
diff --git a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
index da7e7b1da..a38b95c1d 100644
--- a/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
+++ b/roles/installer_checkpoint/callback_plugins/installer_checkpoint.py
@@ -127,6 +127,10 @@ class CallbackModule(CallbackBase):
                 self._display.display(
                     '\tThis phase can be restarted by running: {}'.format(
                         phase_attributes[phase]['playbook']))
+                if 'message' in stats.custom['_run'][phase]:
+                    self._display.display(
+                        '\t{}'.format(
+                            stats.custom['_run'][phase]['message']))
 
         self._display.display("", screen_only=True)
diff --git a/roles/lib_utils/library/swapoff.py b/roles/lib_utils/library/swapoff.py
new file mode 100644
index 000000000..925eeb17d
--- /dev/null
+++ b/roles/lib_utils/library/swapoff.py
@@ -0,0 +1,137 @@
+#!/usr/bin/env python
+# pylint: disable=missing-docstring
+#
+# Copyright 2017 Red Hat, Inc. and/or its affiliates
+# and other contributors as indicated by the @author tags.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import subprocess
+
+from ansible.module_utils.basic import AnsibleModule
+
+
+DOCUMENTATION = '''
+---
+module: swapoff
+
+short_description: Disable swap and comment from /etc/fstab
+
+version_added: "2.4"
+
+description:
+    - This module disables swap and comments entries from /etc/fstab
+
+author:
+    - "Michael Gugino <mgugino@redhat.com>"
+'''
+
+EXAMPLES = '''
+# Pass in a message
+- name: Disable Swap
+  swapoff: {}
+'''
+
+
+def check_swap_in_fstab(module):
+    '''Check for uncommented swap entries in fstab'''
+    res = subprocess.call(['grep', '^[^#].*swap', '/etc/fstab'])
+
+    if res == 2:
+        # rc 2 == cannot open file.
+        result = {'failed': True,
+                  'changed': False,
+                  'msg': 'unable to read /etc/fstab',
+                  'state': 'unknown'}
+        module.fail_json(**result)
+    elif res == 1:
+        # No grep match, fstab looks good.
+        return False
+    elif res == 0:
+        # There is an uncommented entry for fstab.
+        return True
+    else:
+        # Some other grep error code, we shouldn't get here.
+        result = {'failed': True,
+                  'changed': False,
+                  'msg': 'unknow problem with grep "^[^#].*swap" /etc/fstab ',
+                  'state': 'unknown'}
+        module.fail_json(**result)
+
+
+def check_swapon_status(module):
+    '''Check if swap is actually in use.'''
+    try:
+        res = subprocess.check_output(['swapon', '--show'])
+    except subprocess.CalledProcessError:
+        # Some other grep error code, we shouldn't get here.
+        result = {'failed': True,
+                  'changed': False,
+                  'msg': 'unable to execute swapon --show',
+                  'state': 'unknown'}
+        module.fail_json(**result)
+    return 'NAME' in str(res)
+
+
+def comment_swap_fstab(module):
+    '''Comment out swap lines in /etc/fstab'''
+    res = subprocess.call(['sed', '-i.bak', 's/^[^#].*swap.*/#&/', '/etc/fstab'])
+    if res:
+        result = {'failed': True,
+                  'changed': False,
+                  'msg': 'sed failed to comment swap in /etc/fstab',
+                  'state': 'unknown'}
+        module.fail_json(**result)
+
+
+def run_swapoff(module, changed):
+    '''Run swapoff command'''
+    res = subprocess.call(['swapoff', '--all'])
+    if res:
+        result = {'failed': True,
+                  'changed': changed,
+                  'msg': 'swapoff --all returned {}'.format(str(res)),
+                  'state': 'unknown'}
+        module.fail_json(**result)
+
+
+def run_module():
+    '''Run this module'''
+    module = AnsibleModule(
+        supports_check_mode=False,
+        argument_spec={}
+    )
+    changed = False
+
+    swap_fstab_res = check_swap_in_fstab(module)
+    swap_is_inuse_res = check_swapon_status(module)
+
+    if swap_fstab_res:
+        comment_swap_fstab(module)
+        changed = True
+
+    if swap_is_inuse_res:
+        run_swapoff(module, changed)
+        changed = True
+
+    result = {'changed': changed}
+
+    module.exit_json(**result)
+
+
+def main():
+    run_module()
+
+
+if __name__ == '__main__':
+    main()
diff --git a/roles/openshift_facts/library/openshift_facts.py b/roles/openshift_facts/library/openshift_facts.py
index 26f0525e9..d6d31effd 100755
--- a/roles/openshift_facts/library/openshift_facts.py
+++ b/roles/openshift_facts/library/openshift_facts.py
@@ -1430,9 +1430,6 @@ class OpenShiftFacts(object):
                                   dynamic_provisioning_enabled=True,
                                   max_requests_inflight=500)
 
-        if 'node' in roles:
-            defaults['node'] = dict(labels={})
-
         if 'cloudprovider' in roles:
             defaults['cloudprovider'] = dict(kind=None)
diff --git a/roles/openshift_hosted/defaults/main.yml b/roles/openshift_hosted/defaults/main.yml
index f40085976..610de4f91 100644
--- a/roles/openshift_hosted/defaults/main.yml
+++ b/roles/openshift_hosted/defaults/main.yml
@@ -109,3 +109,5 @@ openshift_push_via_dns: False
 # NOTE: settting openshift_docker_hosted_registry_insecure may affect other roles
 openshift_hosted_docker_registry_insecure_default: "{{ openshift_docker_hosted_registry_insecure | default(False) }}"
 openshift_hosted_docker_registry_insecure: "{{ openshift_hosted_docker_registry_insecure_default }}"
+
+openshift_hosted_registry_storage_azure_blob_realm: core.windows.net
diff --git a/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml b/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml
index 6bce13d1d..879459cf6 100644
--- a/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml
+++ b/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml
@@ -1,91 +1,113 @@
 ---
-# Disable external communication for {{ _cluster_component }}
-- name: Disable external communication for logging-{{ _cluster_component }}
-  oc_service:
-    state: present
-    name: "logging-{{ _cluster_component }}"
-    namespace: "{{ openshift_logging_elasticsearch_namespace }}"
-    selector:
-      component: "{{ _cluster_component }}"
-      provider: openshift
-      connection: blocked
-    labels:
-      logging-infra: 'support'
-    ports:
-    - port: 9200
-      targetPort: "restapi"
-  when:
-  - full_restart_cluster | bool
-
 ## get all pods for the cluster
 - command: >
     oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[?(@.status.phase==\"Running\")].metadata.name}
   register: _cluster_pods
 
-- name: "Disable shard balancing for logging-{{ _cluster_component }} cluster"
-  command: >
-    oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'
-  register: _disable_output
-  changed_when: "'\"acknowledged\":true' in _disable_output.stdout"
+### Check for cluster state before making changes -- if its red then we don't want to continue
+- name: "Checking current health for {{ _es_node }} cluster"
+  shell: >
+    oc exec "{{ _cluster_pods.stdout.split(' ')[0] }}" -c elasticsearch -n "{{ openshift_logging_elasticsearch_namespace }}" -- es_cluster_health
+  register: _pod_status
   when: _cluster_pods.stdout_lines | count > 0
 
-# Flush ES
-- name: "Flushing for logging-{{ _cluster_component }} cluster"
-  command: >
-    oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_flush/synced'
-  register: _flush_output
-  changed_when: "'\"acknowledged\":true' in _flush_output.stdout"
-  when:
+- when:
+  - _pod_status.stdout is defined
+  - (_pod_status.stdout | from_json)['status'] in ['red']
+  block:
+  - name: Set Logging message to manually restart
+    run_once: true
+    set_stats:
+      data:
+        installer_phase_logging:
+          message: "Cluster logging-{{ _cluster_component }} was in a red state and will not be automatically restarted. Please see documentation regarding doing a {{ 'full' if full_restart_cluster | bool else 'rolling'}} cluster restart."
+
+  - debug: msg="Cluster logging-{{ _cluster_component }} was in a red state and will not be automatically restarted. Please see documentation regarding doing a {{ 'full' if full_restart_cluster | bool else 'rolling'}} cluster restart."
+
+- when: _pod_status.stdout is undefined or (_pod_status.stdout | from_json)['status'] in ['green', 'yellow']
+  block:
+  # Disable external communication for {{ _cluster_component }}
+  - name: Disable external communication for logging-{{ _cluster_component }}
+    oc_service:
+      state: present
+      name: "logging-{{ _cluster_component }}"
+      namespace: "{{ openshift_logging_elasticsearch_namespace }}"
+      selector:
+        component: "{{ _cluster_component }}"
+        provider: openshift
+        connection: blocked
+      labels:
+        logging-infra: 'support'
+      ports:
+      - port: 9200
+        targetPort: "restapi"
+    when:
+    - full_restart_cluster | bool
+
+  - name: "Disable shard balancing for logging-{{ _cluster_component }} cluster"
+    command: >
+      oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'
+    register: _disable_output
+    changed_when: "'\"acknowledged\":true' in _disable_output.stdout"
+    when: _cluster_pods.stdout_lines | count > 0
+
+  # Flush ES
+  - name: "Flushing for logging-{{ _cluster_component }} cluster"
+    command: >
+      oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_flush/synced'
+    register: _flush_output
+    changed_when: "'\"acknowledged\":true' in _flush_output.stdout"
+    when:
     - _cluster_pods.stdout_lines | count > 0
     - full_restart_cluster | bool
 
-- command: >
-    oc get dc -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
-  register: _cluster_dcs
+  - command: >
+      oc get dc -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
+    register: _cluster_dcs
 
-## restart all dcs for full restart
-- name: "Restart ES node {{ _es_node }}"
-  include_tasks: restart_es_node.yml
-  with_items: "{{ _cluster_dcs }}"
-  loop_control:
-    loop_var: _es_node
-  when:
+  ## restart all dcs for full restart
+  - name: "Restart ES node {{ _es_node }}"
+    include_tasks: restart_es_node.yml
+    with_items: "{{ _cluster_dcs }}"
+    loop_control:
+      loop_var: _es_node
+    when:
     - full_restart_cluster | bool
 
-## restart the node if it's dc is in the list of nodes to restart?
-- name: "Restart ES node {{ _es_node }}"
-  include_tasks: restart_es_node.yml
-  with_items: "{{ _restart_logging_nodes }}"
-  loop_control:
-    loop_var: _es_node
-  when:
+  ## restart the node if it's dc is in the list of nodes to restart?
+ - name: "Restart ES node {{ _es_node }}" + include_tasks: restart_es_node.yml + with_items: "{{ _restart_logging_nodes }}" + loop_control: + loop_var: _es_node + when: - not full_restart_cluster | bool - _es_node in _cluster_dcs.stdout -## we may need a new first pod to run against -- fetch them all again -- command: > - oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[?(@.status.phase==\"Running\")].metadata.name} - register: _cluster_pods + ## we may need a new first pod to run against -- fetch them all again + - command: > + oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[?(@.status.phase==\"Running\")].metadata.name} + register: _cluster_pods -- name: "Enable shard balancing for logging-{{ _cluster_component }} cluster" - command: > - oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "all" } }' - register: _enable_output - changed_when: "'\"acknowledged\":true' in _enable_output.stdout" + - name: "Enable shard balancing for logging-{{ _cluster_component }} cluster" + command: > + oc exec {{ _cluster_pods.stdout.split(' ')[0] }} -c elasticsearch -n {{ openshift_logging_elasticsearch_namespace }} -- {{ __es_local_curl }} -XPUT 'https://localhost:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable" : "all" } }' + register: _enable_output + changed_when: "'\"acknowledged\":true' in _enable_output.stdout" -# Reenable external communication for {{ _cluster_component }} -- name: Reenable external communication for logging-{{ _cluster_component }} - oc_service: - state: present - name: "logging-{{ _cluster_component }}" - namespace: "{{ openshift_logging_elasticsearch_namespace }}" - selector: - component: "{{ _cluster_component }}" - provider: openshift - labels: - logging-infra: 'support' - ports: + # Reenable external communication for {{ _cluster_component }} + - name: Reenable external communication for logging-{{ _cluster_component }} + oc_service: + state: present + name: "logging-{{ _cluster_component }}" + namespace: "{{ openshift_logging_elasticsearch_namespace }}" + selector: + component: "{{ _cluster_component }}" + provider: openshift + labels: + logging-infra: 'support' + ports: - port: 9200 targetPort: "restapi" - when: + when: - full_restart_cluster | bool diff --git a/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml b/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml index 6d0df40c8..fe15e40fd 100644 --- a/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml +++ b/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml @@ -26,12 +26,12 @@ - name: "Waiting for ES to be ready for {{ _es_node }}" shell: > - oc exec "{{ _pod }}" -c elasticsearch -n "{{ openshift_logging_elasticsearch_namespace }}" -- {{ __es_local_curl }} https://localhost:9200/_cat/health | cut -d' ' -f4 + oc exec "{{ _pod }}" -c elasticsearch -n "{{ openshift_logging_elasticsearch_namespace }}" -- es_cluster_health with_items: "{{ _pods.stdout.split(' ') }}" loop_control: loop_var: _pod register: _pod_status - until: _pod_status.stdout in ['green', 'yellow'] + until: (_pod_status.stdout | from_json)['status'] in ['green', 'yellow'] retries: 60 delay: 5 
   changed_when: false
diff --git a/roles/openshift_manage_node/defaults/main.yml b/roles/openshift_manage_node/defaults/main.yml
index f0e728a3f..00e04b9f2 100644
--- a/roles/openshift_manage_node/defaults/main.yml
+++ b/roles/openshift_manage_node/defaults/main.yml
@@ -4,3 +4,6 @@ openshift_manage_node_is_master: False
 
 # Default is to be schedulable except for master nodes.
 l_openshift_manage_schedulable: "{{ openshift_schedulable | default(not openshift_manage_node_is_master) }}"
+
+openshift_master_node_labels:
+  node-role.kubernetes.io/master: 'true'
diff --git a/roles/openshift_manage_node/tasks/config.yml b/roles/openshift_manage_node/tasks/config.yml
new file mode 100644
index 000000000..4f00351b5
--- /dev/null
+++ b/roles/openshift_manage_node/tasks/config.yml
@@ -0,0 +1,27 @@
+---
+- name: Set node schedulability
+  oc_adm_manage_node:
+    node: "{{ openshift.node.nodename | lower }}"
+    schedulable: "{{ 'true' if l_openshift_manage_schedulable | bool else 'false' }}"
+  retries: 10
+  delay: 5
+  register: node_schedulable
+  until: node_schedulable is succeeded
+  when: "'nodename' in openshift.node"
+  delegate_to: "{{ openshift_master_host }}"
+
+- name: Label nodes
+  oc_label:
+    name: "{{ openshift.node.nodename }}"
+    kind: node
+    state: add
+    labels: "{{ l_all_labels | lib_utils_oo_dict_to_list_of_dict }}"
+    namespace: default
+  when:
+  - "'nodename' in openshift.node"
+  - l_all_labels != {}
+  delegate_to: "{{ openshift_master_host }}"
+  vars:
+    l_node_labels: "{{ openshift_node_labels | default({}) }}"
+    l_master_labels: "{{ ('oo_masters_to_config' in group_names) | ternary(openshift_master_node_labels, {}) }}"
+    l_all_labels: "{{ l_node_labels | combine(l_master_labels) }}"
diff --git a/roles/openshift_manage_node/tasks/main.yml b/roles/openshift_manage_node/tasks/main.yml
index 9251d380b..154e2b45f 100644
--- a/roles/openshift_manage_node/tasks/main.yml
+++ b/roles/openshift_manage_node/tasks/main.yml
@@ -34,25 +34,4 @@
   when: "'nodename' in openshift.node"
   delegate_to: "{{ openshift_master_host }}"
 
-- name: Set node schedulability
-  oc_adm_manage_node:
-    node: "{{ openshift.node.nodename | lower }}"
-    schedulable: "{{ 'true' if l_openshift_manage_schedulable | bool else 'false' }}"
-  retries: 10
-  delay: 5
-  register: node_schedulable
-  until: node_schedulable is succeeded
-  when: "'nodename' in openshift.node"
-  delegate_to: "{{ openshift_master_host }}"
-
-- name: Label nodes
-  oc_label:
-    name: "{{ openshift.node.nodename }}"
-    kind: node
-    state: add
-    labels: "{{ openshift_node_labels | lib_utils_oo_dict_to_list_of_dict }}"
-    namespace: default
-  when:
-  - "'nodename' in openshift.node"
-  - openshift_node_labels | default({}) != {}
-  delegate_to: "{{ openshift_master_host }}"
+- include_tasks: config.yml
diff --git a/roles/openshift_node/tasks/main.yml b/roles/openshift_node/tasks/main.yml
index 754ecacaf..f56f24e12 100644
--- a/roles/openshift_node/tasks/main.yml
+++ b/roles/openshift_node/tasks/main.yml
@@ -14,33 +14,11 @@
 
 #### Disable SWAP #####
 # https://docs.openshift.com/container-platform/3.4/admin_guide/overcommit.html#disabling-swap-memory
-- name: Check for swap usage
-  command: grep "^[^#].*swap" /etc/fstab
-  # grep: match any lines which don't begin with '#' and contain 'swap'
-  changed_when: false
-  failed_when: false
-  register: swap_result
-
-- when:
-  - swap_result.stdout_lines | length > 0
-  - openshift_disable_swap | default(true) | bool
-  block:
-  - name: Disable swap
-    command: swapoff --all
-
-  - name: Remove swap entries from /etc/fstab
-    replace:
-      dest: /etc/fstab
-      regexp: '(^[^#].*swap.*)'
-      replace: '# \1'
-      backup: yes
-
-  - name: Add notice about disabling swap
-    lineinfile:
-      dest: /etc/fstab
-      line: '# OpenShift-Ansible Installer disabled swap per overcommit guidelines'
-      state: present
-#### End Disable Swap Block ####
+# swapoff is a custom module in lib_utils that comments out swap entries in
+# /etc/fstab and runs swapoff -a, if necessary.
+- name: Disable swap
+  swapoff: {}
+  when: openshift_disable_swap | default(true) | bool
 
 - name: include node installer
   include_tasks: install.yml
diff --git a/roles/openshift_node/tasks/upgrade/config_changes.yml b/roles/openshift_node/tasks/upgrade/config_changes.yml
index dd9183382..15ac76f7d 100644
--- a/roles/openshift_node/tasks/upgrade/config_changes.yml
+++ b/roles/openshift_node/tasks/upgrade/config_changes.yml
@@ -27,28 +27,12 @@
     path: "/var/lib/cni/networks/openshift-sdn/"
     state: absent
 
-# Disable Swap Block (pre)
-- block:
-  - name: Remove swap entries from /etc/fstab
-    replace:
-      dest: /etc/fstab
-      regexp: '(^[^#].*swap.*)'
-      replace: '# \1'
-      backup: yes
-
-  - name: Add notice about disabling swap
-    lineinfile:
-      dest: /etc/fstab
-      line: '# OpenShift-Ansible Installer disabled swap per overcommit guidelines'
-      state: present
-
-  - name: Disable swap
-    command: swapoff --all
-
-  when:
-  - openshift_node_upgrade_swap_result | default(False) | bool
-  - openshift_disable_swap | default(true) | bool
-# End Disable Swap Block
+# https://docs.openshift.com/container-platform/3.4/admin_guide/overcommit.html#disabling-swap-memory
+# swapoff is a custom module in lib_utils that comments out swap entries in
+# /etc/fstab and runs swapoff -a, if necessary.
+- name: Disable swap
+  swapoff: {}
+  when: openshift_disable_swap | default(true) | bool
 
 - name: Apply 3.6 dns config changes
   yedit:
diff --git a/roles/openshift_node/tasks/upgrade_pre.yml b/roles/openshift_node/tasks/upgrade_pre.yml
index 3ae7dc6b6..aa1a75100 100644
--- a/roles/openshift_node/tasks/upgrade_pre.yml
+++ b/roles/openshift_node/tasks/upgrade_pre.yml
@@ -41,16 +41,3 @@
   vars:
     openshift_version: "{{ openshift_pkg_version | default('') }}"
   when: not openshift_is_containerized | bool
-
-# https://docs.openshift.com/container-platform/3.4/admin_guide/overcommit.html#disabling-swap-memory
-- name: Check for swap usage
-  command: grep "^[^#].*swap" /etc/fstab
-  # grep: match any lines which don't begin with '#' and contain 'swap'
-  changed_when: false
-  failed_when: false
-  register: swap_result
-
-# Set this fact here so we can use it during the next play, which is serial.
-- name: set_fact swap_result
-  set_fact:
-    openshift_node_upgrade_swap_result: "{{ swap_result.stdout_lines | length > 0 | bool }}"
diff --git a/roles/openshift_node/templates/node.service.j2 b/roles/openshift_node/templates/node.service.j2
index 777f4a449..7405cfd73 100644
--- a/roles/openshift_node/templates/node.service.j2
+++ b/roles/openshift_node/templates/node.service.j2
@@ -6,7 +6,7 @@ After=ovsdb-server.service
 After=ovs-vswitchd.service
 Wants={{ openshift_docker_service_name }}.service
 Documentation=https://github.com/openshift/origin
-Requires=dnsmasq.service
+Wants=dnsmasq.service
 After=dnsmasq.service
 {% if openshift_use_crio | bool %}Wants=cri-o.service{% endif %}
diff --git a/roles/openshift_node/templates/openshift.docker.node.service b/roles/openshift_node/templates/openshift.docker.node.service
index ae7b147a6..23823e3e5 100644
--- a/roles/openshift_node/templates/openshift.docker.node.service
+++ b/roles/openshift_node/templates/openshift.docker.node.service
@@ -13,7 +13,7 @@ After=ovs-vswitchd.service
 Wants={{ openshift_service_type }}-master.service
 Requires={{ openshift_service_type }}-node-dep.service
 After={{ openshift_service_type }}-node-dep.service
-Requires=dnsmasq.service
+Wants=dnsmasq.service
 After=dnsmasq.service
 
 [Service]
diff --git a/roles/openshift_prometheus/README.md b/roles/openshift_prometheus/README.md
index 1ebeacabf..6079e6016 100644
--- a/roles/openshift_prometheus/README.md
+++ b/roles/openshift_prometheus/README.md
@@ -31,7 +31,7 @@ For default values, see [`defaults/main.yaml`](defaults/main.yaml).
 e.g
 
 ```
-openshift_prometheus_args=['--storage.tsdb.retention=6h', '--storage.tsdb.min-block-duration=5s', '--storage.tsdb.max-block-duration=6m']
+openshift_prometheus_args=['--storage.tsdb.retention=6h', '--query.timeout=2m']
 ```
 
 ## PVC related variables
diff --git a/roles/openshift_prometheus/defaults/main.yaml b/roles/openshift_prometheus/defaults/main.yaml
index e30108d2c..1b21c4739 100644
--- a/roles/openshift_prometheus/defaults/main.yaml
+++ b/roles/openshift_prometheus/defaults/main.yaml
@@ -14,7 +14,7 @@ openshift_prometheus_node_selector: {"region":"infra"}
 openshift_prometheus_additional_rules_file: null
 
 #prometheus application arguments
-openshift_prometheus_args: ['--storage.tsdb.retention=6h', '--storage.tsdb.min-block-duration=2m']
+openshift_prometheus_args: ['--storage.tsdb.retention=6h']
 
 # storage
 # One of ['emptydir', 'pvc']