213 files changed, 2769 insertions, 946 deletions
diff --git a/.tito/packages/openshift-ansible b/.tito/packages/openshift-ansible index 252f0b950..200f8d7f3 100644 --- a/.tito/packages/openshift-ansible +++ b/.tito/packages/openshift-ansible @@ -1 +1 @@ -3.6.59-1 ./ +3.6.68-1 ./ diff --git a/.travis.yml b/.travis.yml index 245202139..1c549cec9 100644 --- a/.travis.yml +++ b/.travis.yml @@ -13,6 +13,7 @@ python:    - "3.5"  install: +  - pip install --upgrade pip    - pip install tox-travis coveralls  script: @@ -29,17 +29,34 @@ tito build --rpm  To build a container image of `openshift-ansible` using standalone **Docker**:          cd openshift-ansible -        docker build -t openshift/openshift-ansible . +        docker build -f images/installer/Dockerfile -t openshift/openshift-ansible . -Alternatively this can be built using on **OpenShift** using a [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html) with this command: +### Building on OpenShift + +To build an openshift-ansible image using an **OpenShift** [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html), the straightforward command would be:          oc new-build docker.io/aweiteka/playbook2image~https://github.com/openshift/openshift-ansible -The progress of the build can be monitored with: +However, because the `Dockerfile` for this repository is not in the top level directory, and because we can't change the build context to the `images/installer` path as it would cause the build to fail, the `oc new-build` command above will create a build configuration using the *source to image* strategy, which is the default approach of the [playbook2image](https://github.com/openshift/playbook2image) base image. This does build an image successfully, but unfortunately the resulting image will be missing some customizations that are handled by the [Dockerfile](images/installer/Dockerfile) in this repo. + +At the time of this writing there is no straightforward option to [set the dockerfilePath](https://docs.openshift.org/latest/dev_guide/builds/build_strategies.html#dockerfile-path) of a `docker` build strategy with `oc new-build`. The alternatives to achieve this are: + +- Use the simple `oc new-build` command above to generate the BuildConfig and ImageStream objects, and then manually edit the generated build configuration to change its strategy to `dockerStrategy` and set `dockerfilePath` to `images/installer/Dockerfile`; a sketch of the edited strategy stanza follows this list.
+ +- Download and pass the `Dockerfile` to `oc new-build` with the `-D` option: + +``` +curl -s https://raw.githubusercontent.com/openshift/openshift-ansible/master/images/installer/Dockerfile | +     oc new-build -D - \ +        --docker-image=docker.io/aweiteka/playbook2image \ +	    https://github.com/openshift/openshift-ansible +``` + +Once a build is started, the progress of the build can be monitored with:          oc logs -f bc/openshift-ansible -Once built, the image will be visible in the Image Stream created by the same command: +Once built, the image will be visible in the Image Stream created by `oc new-build`:          oc describe imagestream openshift-ansible diff --git a/README_CONTAINER_IMAGE.md b/README_CONTAINER_IMAGE.md index b78073100..e8e6efb79 100644 --- a/README_CONTAINER_IMAGE.md +++ b/README_CONTAINER_IMAGE.md @@ -1,6 +1,6 @@  # Containerized openshift-ansible to run playbooks -The [Dockerfile](Dockerfile) in this repository uses the [playbook2image](https://github.com/aweiteka/playbook2image) source-to-image base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions. +The [Dockerfile](images/installer/Dockerfile) in this repository uses the [playbook2image](https://github.com/openshift/playbook2image) source-to-image base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.  The image is designed to **run as a non-root user**. The container's UID is mapped to the username `default` at runtime. Therefore, the container's environment reflects that user's settings, and the configuration should match that. For example `$HOME` is `/opt/app-root/src`, so ssh keys are expected to be under `/opt/app-root/src/.ssh`. If you ran a container as `root` you would have to adjust the container's configuration accordingly, e.g. by placing ssh keys under `/root/.ssh` instead. Nevertheless, the expectation is that containers will be run as non-root; for example, this container image can be run inside OpenShift under the default `restricted` [security context constraint](https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints). @@ -8,7 +8,7 @@ The image is designed to **run as a non-root user**. The container's UID is mapp  ## Usage -The `playbook2image` base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/aweiteka/playbook2image) documentation. +The `playbook2image` base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/openshift/playbook2image) documentation.  At the very least, when running a container you must specify: diff --git a/bin/cluster b/bin/cluster index b9b2ab15f..f77eb36ad 100755 --- a/bin/cluster +++ b/bin/cluster @@ -1,5 +1,4 @@  #!/usr/bin/env python2 -# vim: expandtab:tabstop=4:shiftwidth=4  import argparse  import ConfigParser diff --git a/docs/best_practices_guide.adoc b/docs/best_practices_guide.adoc index 4ecd535e4..e66c5addb 100644 --- a/docs/best_practices_guide.adoc +++ b/docs/best_practices_guide.adoc @@ -14,25 +14,6 @@ This guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119].  
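For the manual-edit alternative in the BUILD.md section above, the resulting build configuration would contain a strategy stanza roughly like the following. This is a minimal sketch, assuming the generated BuildConfig is named `openshift-ansible`; it is not part of the diff itself:

```yaml
# Fragment of the BuildConfig after `oc edit bc/openshift-ansible`
# (illustrative; the object name is an assumption)
strategy:
  type: Docker
  dockerStrategy:
    dockerfilePath: images/installer/Dockerfile
```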
== Python -=== Python Source Files - -''' -[[Python-source-files-MUST-contain-the-following-vim-mode-line]] -[cols="2v,v"] -|=== -| <<Python-source-files-MUST-contain-the-following-vim-mode-line, Rule>> -| Python source files MUST contain the following vim mode line. -|=== - -[source] ----- -# vim: expandtab:tabstop=4:shiftwidth=4 ----- - -Since most developers contributing to this repository use vim, this rule helps to promote consistency. - -If mode lines for other editors are needed, please open a GitHub issue. -  === Method Signatures  ''' diff --git a/docs/pull_requests.md b/docs/pull_requests.md index 953563fb2..fcc3e275c 100644 --- a/docs/pull_requests.md +++ b/docs/pull_requests.md @@ -43,6 +43,15 @@ simplifying the workflow towards a single infrastructure in the future.    job is also posted to the Pull Request as comments and summarized at the    bottom of the Pull Request page. +### Fedora tests + +There are a set of tests that run on Fedora infrastructure. They are started +automatically with every pull request. + +They are implemented using the [`redhat-ci` framework](https://github.com/jlebon/redhat-ci). + +To re-run tests, write a comment containing `bot, retest this please`. +  ## Triggering merge  After a PR is properly reviewed and a set of diff --git a/filter_plugins/oo_filters.py b/filter_plugins/oo_filters.py index d61184c48..8b279981d 100644 --- a/filter_plugins/oo_filters.py +++ b/filter_plugins/oo_filters.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  # pylint: disable=too-many-lines  """  Custom filters for use in openshift-ansible diff --git a/filter_plugins/openshift_node.py b/filter_plugins/openshift_node.py index 8c7302052..cad95ea6d 100644 --- a/filter_plugins/openshift_node.py +++ b/filter_plugins/openshift_node.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Custom filters for use in openshift-node  ''' diff --git a/filter_plugins/openshift_version.py b/filter_plugins/openshift_version.py index 1403e9dcc..809e82488 100644 --- a/filter_plugins/openshift_version.py +++ b/filter_plugins/openshift_version.py @@ -1,7 +1,5 @@  #!/usr/bin/python -  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  """  Custom version comparison filters for use in openshift-ansible  """ diff --git a/hack/build-images.sh b/hack/build-images.sh index f6210e239..3e9896caa 100755 --- a/hack/build-images.sh +++ b/hack/build-images.sh @@ -10,7 +10,7 @@ source_root=$(dirname "${0}")/..  prefix="openshift/openshift-ansible"  version="latest"  verbose=false -options="" +options="-f images/installer/Dockerfile"  help=false  for args in "$@" diff --git a/images/installer/Dockerfile b/images/installer/Dockerfile index 1df887f32..f6af018ca 100644 --- a/images/installer/Dockerfile +++ b/images/installer/Dockerfile @@ -46,6 +46,6 @@ ADD . /tmp/src  RUN /usr/libexec/s2i/assemble  # Add files for running as a system container -COPY system-container/root / +COPY images/installer/system-container/root /  CMD [ "/usr/libexec/s2i/run" ] diff --git a/inventory/byo/hosts.byo.native-glusterfs.example b/inventory/byo/hosts.byo.native-glusterfs.example new file mode 100644 index 000000000..2dbb57d40 --- /dev/null +++ b/inventory/byo/hosts.byo.native-glusterfs.example @@ -0,0 +1,51 @@ +# This is an example of a bring your own (byo) host inventory for a cluster +# with natively hosted, containerized GlusterFS storage. 
+# +# This inventory may be used with the byo/config.yml playbook to deploy a new +# cluster with GlusterFS storage, which will use that storage to create a +# volume that will provide backend storage for a hosted Docker registry. +# +# This inventory may also be used with byo/openshift-glusterfs/config.yml to +# deploy GlusterFS storage on an existing cluster. With this playbook, the +# registry backend volume will be created but the administrator must then +# either deploy a hosted registry or change an existing hosted registry to use +# that volume. +# +# There are additional configuration parameters that can be specified to +# control the deployment and state of a GlusterFS cluster. Please see the +# documentation in playbooks/byo/openshift-glusterfs/README.md and +# roles/openshift_storage_glusterfs/README.md for additional details. + +[OSEv3:children] +masters +nodes +# Specify there will be GlusterFS nodes +glusterfs + +[OSEv3:vars] +ansible_ssh_user=root +deployment_type=origin +# Specify that we want to use GlusterFS storage for a hosted registry +openshift_hosted_registry_storage_kind=glusterfs + +[masters] +master  node=True storage=True master=True + +[nodes] +master  node=True storage=True master=True openshift_schedulable=False +# A hosted registry, by default, will only be deployed on nodes labeled +# "region=infra". +node0   node=True openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True +node1   node=True openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True +node2   node=True openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True + +# Specify the glusterfs group, which contains the nodes that will host +# GlusterFS storage pods. At a minimum, each node must have a +# "glusterfs_devices" variable defined. This variable is a list of block +# devices the node will have access to that is intended solely for use as +# GlusterFS storage. These block devices must be bare (e.g. have no data, not +# be marked as LVM PVs), and will be formatted. +[glusterfs] +node0  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]' +node1  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]' +node2  glusterfs_devices='[ "/dev/vdb", "/dev/vdc", "/dev/vdd" ]' diff --git a/inventory/byo/hosts.origin.example b/inventory/byo/hosts.origin.example index db6ac12ce..6641d6dc8 100644 --- a/inventory/byo/hosts.origin.example +++ b/inventory/byo/hosts.origin.example @@ -30,17 +30,17 @@ openshift_deployment_type=origin  # use this to lookup the latest exact version of the container images, which is the tag actually used to configure  # the cluster. For RPM installations we just verify the version detected in your configured repos matches this  # release. -openshift_release=v1.5 +openshift_release=v3.6  # Specify an exact container image tag to install or configure.  # WARNING: This value will be used for all hosts in containerized environments, even those that have another version installed.  # This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up. -#openshift_image_tag=v1.5.0 +#openshift_image_tag=v3.6.0  # Specify an exact rpm version to install or configure.  # WARNING: This value will be used for all hosts in RPM based environments, even those that have another version installed.  # This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up. 
-#openshift_pkg_version=-1.5.0 +#openshift_pkg_version=-3.6.0  # Install the openshift examples  #openshift_install_examples=true @@ -345,7 +345,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #  selector: type=router1  #  images: "openshift3/ose-${component}:${version}"  #  edits: [] -#  certificates: +#  certificate:  #    certfile: /path/to/certificate/abc.crt  #    keyfile: /path/to/certificate/abc.key  #    cafile: /path/to/certificate/ca.crt @@ -359,7 +359,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #  serviceaccount: router  #  selector: type=router2  #  images: "openshift3/ose-${component}:${version}" -#  certificates: +#  certificate:  #    certfile: /path/to/certificate/xyz.crt  #    keyfile: /path/to/certificate/xyz.key  #    cafile: /path/to/certificate/ca.crt @@ -438,9 +438,6 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57  #openshift_hosted_registry_storage_volume_size=10Gi  # -# Native GlusterFS Registry Storage -#openshift_hosted_registry_storage_kind=glusterfs -#  # AWS S3  # S3 bucket must already exist.  #openshift_hosted_registry_storage_kind=object @@ -571,7 +568,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_hosted_logging_elasticsearch_cluster_size=1  # Configure the prefix and version for the component images  #openshift_hosted_logging_deployer_prefix=docker.io/openshift/origin- -#openshift_hosted_logging_deployer_version=1.5.0 +#openshift_hosted_logging_deployer_version=3.6.0  # Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')  # os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant' @@ -773,7 +770,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_node_env_vars={"ENABLE_HTTP2": "true"}  # Enable API service auditing, available as of 1.3 -#openshift_master_audit_config={"basicAuditEnabled": true} +#openshift_master_audit_config={"enabled": true}  # Enable origin repos that point at Centos PAAS SIG, defaults to true, only used  # by deployment_type=origin diff --git a/inventory/byo/hosts.ose.example b/inventory/byo/hosts.ose.example index 42097a593..e57b831f3 100644 --- a/inventory/byo/hosts.ose.example +++ b/inventory/byo/hosts.ose.example @@ -30,17 +30,17 @@ openshift_deployment_type=openshift-enterprise  # use this to lookup the latest exact version of the container images, which is the tag actually used to configure  # the cluster. For RPM installations we just verify the version detected in your configured repos matches this  # release. -openshift_release=v3.5 +openshift_release=v3.6  # Specify an exact container image tag to install or configure.  # WARNING: This value will be used for all hosts in containerized environments, even those that have another version installed.  # This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up. -#openshift_image_tag=v3.5.0 +#openshift_image_tag=v3.6.0  # Specify an exact rpm version to install or configure.  # WARNING: This value will be used for all hosts in RPM based environments, even those that have another version installed.  # This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up. 
-#openshift_pkg_version=-3.5.0 +#openshift_pkg_version=-3.6.0  # Install the openshift examples  #openshift_install_examples=true @@ -345,7 +345,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #  selector: type=router1  #  images: "openshift3/ose-${component}:${version}"  #  edits: [] -#  certificates: +#  certificate:  #    certfile: /path/to/certificate/abc.crt  #    keyfile: /path/to/certificate/abc.key  #    cafile: /path/to/certificate/ca.crt @@ -359,7 +359,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #  serviceaccount: router  #  selector: type=router2  #  images: "openshift3/ose-${component}:${version}" -#  certificates: +#  certificate:  #    certfile: /path/to/certificate/xyz.crt  #    keyfile: /path/to/certificate/xyz.key  #    cafile: /path/to/certificate/ca.crt @@ -438,9 +438,6 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57  #openshift_hosted_registry_storage_volume_size=10Gi  # -# Native GlusterFS Registry Storage -#openshift_hosted_registry_storage_kind=glusterfs -#  # AWS S3  #  # S3 bucket must already exist. @@ -572,7 +569,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_hosted_logging_elasticsearch_cluster_size=1  # Configure the prefix and version for the component images  #openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/ -#openshift_hosted_logging_deployer_version=3.5.0 +#openshift_hosted_logging_deployer_version=3.6.0  # Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')  # os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant' @@ -774,7 +771,7 @@ openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true',  #openshift_node_env_vars={"ENABLE_HTTP2": "true"}  # Enable API service auditing, available as of 3.2 -#openshift_master_audit_config={"basicAuditEnabled": true} +#openshift_master_audit_config={"enabled": true}  # Validity of the auto-generated OpenShift certificates in days.  # See also openshift_hosted_registry_cert_expire_days above. 
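Both inventory examples above move `openshift_master_audit_config` from the old `basicAuditEnabled` key to `enabled`. The dict supplied there ends up in the master configuration; a minimal sketch of the corresponding `master-config.yaml` fragment, where every key except `enabled` is an assumed audit option rather than a value taken from this diff:

```yaml
# Illustrative master-config.yaml fragment; only `enabled` comes from the
# inventory examples above, the remaining keys are assumptions
auditConfig:
  enabled: true
  auditFilePath: /var/log/origin/audit.log
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
```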
diff --git a/library/kubeclient_ca.py b/library/kubeclient_ca.py index 163624a76..a89a5574f 100644 --- a/library/kubeclient_ca.py +++ b/library/kubeclient_ca.py @@ -1,7 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4 -  ''' kubeclient_ca ansible module '''  import base64 diff --git a/library/modify_yaml.py b/library/modify_yaml.py index 8706e80c2..9b8f9ba33 100755 --- a/library/modify_yaml.py +++ b/library/modify_yaml.py @@ -1,7 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4 -  ''' modify_yaml ansible module '''  import yaml diff --git a/lookup_plugins/oo_option.py b/lookup_plugins/oo_option.py index 7909d0092..4581cb6b8 100644 --- a/lookup_plugins/oo_option.py +++ b/lookup_plugins/oo_option.py @@ -1,7 +1,5 @@  #!/usr/bin/env python2  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4 -  '''  oo_option lookup plugin for openshift-ansible diff --git a/openshift-ansible.spec b/openshift-ansible.spec index fe845df03..19e6356e7 100644 --- a/openshift-ansible.spec +++ b/openshift-ansible.spec @@ -9,7 +9,7 @@  %global __requires_exclude ^/usr/bin/ansible-playbook$  Name:           openshift-ansible -Version:        3.6.59 +Version:        3.6.68  Release:        1%{?dist}  Summary:        Openshift and Atomic Enterprise Ansible  License:        ASL 2.0 @@ -274,6 +274,75 @@ Atomic OpenShift Utilities includes  %changelog +* Sat May 13 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.68-1 +- Updating registry-console image version during a post_control_plane upgrade +  (ewolinet@redhat.com) +- Remove userland-proxy-path from daemon.json (smilner@redhat.com) +- Fix whitespace issues in custom template (smilner@redhat.com) +- Always add proxy items to atomic.conf (smilner@redhat.com) +- Move container-engine systemd environment to updated location +  (smilner@redhat.com) +- doc: Add link to daemon.json upstream doc (smilner@redhat.com) +- Remove unused daemon.json keys (smilner@redhat.com) +- bug 1448860. Change recovery_after_nodes to match node_quorum +  (jcantril@redhat.com) +- bug 1441369. Kibana memory limits bug 1439451. 
Kibana crash +  (jcantril@redhat.com) +- Extend repoquery command (of lib_utils role) to ignore excluders +  (jchaloup@redhat.com) +- lower case in /etc/daemon.json and correct block-registry (ghuang@redhat.com) +- Fix for yedit custom separators (mwoodson@redhat.com) +- Updating 3.6 enterprise registry-console template image version +  (ewolinet@redhat.com) +- Default to iptables on master (sdodson@redhat.com) +- Rename blocked-registries to block-registries (smilner@redhat.com) +- Ensure true is lowercase in daemon.json (smilner@redhat.com) +- use docker_log_driver and /etc/docker/daemon.json to determine log driver +  (rmeggins@redhat.com) +- Temporarily revert to OSEv3 host group usage (rteague@redhat.com) +- Add service file templates for master and node (smilner@redhat.com) +- Update systemd units to use proper container service name +  (smilner@redhat.com) +- polish etcd_common role (jchaloup@redhat.com) +- Note existence of Fedora tests and how to rerun (rhcarvalho@gmail.com) +- Fix for OpenShift SDN Check (vincent.schwarzer@yahoo.de) +- Updating oc_obj to use get instead of getattr (ewolinet@redhat.com) +- Updating size suffix for metrics in role (ewolinet@redhat.com) +- GlusterFS: Allow swapping an existing registry's backend storage +  (jarrpa@redhat.com) +- GlusterFS: Allow for a separate registry-specific playbook +  (jarrpa@redhat.com) +- GlusterFS: Improve role documentation (jarrpa@redhat.com) +- hosted_registry: Get correct pod selector for GlusterFS storage +  (jarrpa@redhat.com) +- hosted registry: Fix typo (jarrpa@redhat.com) +- run excluders over selected set of hosts during control_plane/node upgrade +  (jchaloup@redhat.com) +- Reserve kubernetes and 'kubernetes-' prefixed namespaces +  (jliggitt@redhat.com) +- oc_volume: Add missing parameter documentation (jarrpa@redhat.com) + +* Wed May 10 2017 Scott Dodson <sdodson@redhat.com> 3.6.67-1 +- byo: correct option name (gscrivan@redhat.com) +- Fail if rpm version != docker image version (jchaloup@redhat.com) +- Perform package upgrades in one transaction (sdodson@redhat.com) +- Properly fail if OpenShift RPM version is undefined (rteague@redhat.com) + +* Wed May 10 2017 Scott Dodson <sdodson@redhat.com> 3.6.66-1 +- Fix issue with Travis-CI using old pip version (rteague@redhat.com) +- Remove vim configuration from Python files (rhcarvalho@gmail.com) +- Use local variables for daemon.json template (smilner@redhat.com) +- Fix additional master cert & client config creation. 
(abutcher@redhat.com) + +* Tue May 09 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.62-1 +-  + +* Tue May 09 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.61-1 +-  + +* Mon May 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.60-1 +-  +  * Mon May 08 2017 Jenkins CD Merge Bot <tdawson@redhat.com> 3.6.59-1  - Updating logging and metrics to restart api, ha and controllers when updating    master config (ewolinet@redhat.com) diff --git a/playbooks/adhoc/grow_docker_vg/filter_plugins/grow_docker_vg_filters.py b/playbooks/adhoc/grow_docker_vg/filter_plugins/grow_docker_vg_filters.py index daff68fbe..cacd0b0f3 100644 --- a/playbooks/adhoc/grow_docker_vg/filter_plugins/grow_docker_vg_filters.py +++ b/playbooks/adhoc/grow_docker_vg/filter_plugins/grow_docker_vg_filters.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Custom filters for use in openshift-ansible  ''' diff --git a/playbooks/adhoc/uninstall.yml b/playbooks/adhoc/uninstall.yml index beaf20b07..1c8257162 100644 --- a/playbooks/adhoc/uninstall.yml +++ b/playbooks/adhoc/uninstall.yml @@ -305,8 +305,15 @@    - shell: systemctl daemon-reload      changed_when: False +  - name: restart container-engine +    service: name=container-engine state=restarted +    ignore_errors: true +    register: container_engine +    - name: restart docker      service: name=docker state=restarted +    ignore_errors: true +    when: not (container_engine | changed)    - name: restart NetworkManager      service: name=NetworkManager state=restarted diff --git a/playbooks/byo/openshift-cluster/cluster_hosts.yml b/playbooks/byo/openshift-cluster/cluster_hosts.yml index 268a65415..9d086b7b6 100644 --- a/playbooks/byo/openshift-cluster/cluster_hosts.yml +++ b/playbooks/byo/openshift-cluster/cluster_hosts.yml @@ -15,6 +15,8 @@ g_nfs_hosts: "{{ groups.nfs | default([]) }}"  g_glusterfs_hosts: "{{ groups.glusterfs | default([]) }}" +g_glusterfs_registry_hosts: "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}" +  g_all_hosts: "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts)                   | union(g_lb_hosts) | union(g_nfs_hosts)                   | union(g_new_node_hosts)| union(g_new_master_hosts) diff --git a/playbooks/byo/openshift-glusterfs/README.md b/playbooks/byo/openshift-glusterfs/README.md new file mode 100644 index 000000000..f62aea229 --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/README.md @@ -0,0 +1,98 @@ +# OpenShift GlusterFS Playbooks + +These playbooks are intended to enable the use of GlusterFS volumes by pods in +OpenShift. While they try to provide a sane set of defaults they do cover a +variety of scenarios and configurations, so read carefully. :) + +## Playbook: config.yml + +This is the main playbook that integrates GlusterFS into a new or existing +OpenShift cluster. It will also, if specified, configure a hosted Docker +registry with GlusterFS backend storage. + +This playbook requires the `glusterfs` group to exist in the Ansible inventory +file. The hosts in this group are the nodes of the GlusterFS cluster. + + * If this is a newly configured cluster each host must have a +   `glusterfs_devices` variable defined, each of which must be a list of block +   storage devices intended for use only by the GlusterFS cluster. If this is +   also an external GlusterFS cluster, you must specify +   `openshift_storage_glusterfs_is_native=False`. 
If the cluster is to be +   managed by an external heketi service you must also specify +   `openshift_storage_glusterfs_heketi_is_native=False` and +   `openshift_storage_glusterfs_heketi_url=<URL>` with the URL to the heketi +   service. All these variables are specified in `[OSEv3:vars]`. + * If this is an existing cluster you do not need to specify a list of block +   devices, but you must specify the following variables in `[OSEv3:vars]`: +   * `openshift_storage_glusterfs_is_missing=False` +   * `openshift_storage_glusterfs_heketi_is_missing=False` + +By default, pods for a native GlusterFS cluster will be created in the +`default` namespace. To change this, specify +`openshift_storage_glusterfs_namespace=<other namespace>` in `[OSEv3:vars]`. + +To configure the deployment of a Docker registry with GlusterFS backend +storage, specify `openshift_hosted_registry_storage_kind=glusterfs` in +`[OSEv3:vars]`. To create a separate GlusterFS cluster for use only by the +registry, specify a `glusterfs_registry` group that is populated in the same +way as `glusterfs`, with the nodes for the separate cluster. If no +`glusterfs_registry` group is specified, the cluster defined by the `glusterfs` +group will be used. + +To swap an existing hosted registry's backend storage for a GlusterFS volume, +specify `openshift_hosted_registry_storage_glusterfs_swap=True`. To +additionally copy any existing contents from an existing hosted registry, +specify `openshift_hosted_registry_storage_glusterfs_swapcopy=True`. + +**NOTE:** For each namespace that is to have access to GlusterFS volumes an +Endpoints resource pointing to the GlusterFS cluster nodes and a corresponding +Service resource must be created. If dynamic provisioning using StorageClasses +is configured, these resources are created automatically in the namespaces that +require them. This playbook also takes care of creating these resources in the +namespaces used for deployment; a sketch of these resources follows this README. + +An example of a minimal inventory file: +``` +[OSEv3:children] +masters +nodes +glusterfs + +[OSEv3:vars] +ansible_ssh_user=root +deployment_type=origin + +[masters] +master + +[nodes] +node0 +node1 +node2 + +[glusterfs] +node0 glusterfs_devices='[ "/dev/sdb" ]' +node1 glusterfs_devices='[ "/dev/sdb", "/dev/sdc" ]' +node2 glusterfs_devices='[ "/dev/sdd" ]' +``` + +## Playbook: registry.yml + +This playbook is intended for admins who want to deploy a hosted Docker +registry with GlusterFS backend storage on an existing OpenShift cluster. It +has all the same requirements and behaviors as `config.yml`. + +## Role: openshift_storage_glusterfs + +The bulk of the work is done by the `openshift_storage_glusterfs` role. This +role can handle the deployment of GlusterFS (if it is to be hosted on the +OpenShift cluster), the registration of GlusterFS nodes (hosted or standalone), +and (if specified) integration as backend storage for a hosted Docker registry. + +See the documentation in the role's directory for further details. + +## Role: openshift_hosted + +The `openshift_hosted` role recognizes `glusterfs` as a possible storage +backend for a hosted Docker registry. It will also, if configured, handle the +swap of an existing registry's backend storage to a GlusterFS volume. 
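As referenced in the NOTE above, a minimal sketch of the Endpoints and Service resources that expose a GlusterFS cluster to a namespace; the names and IP addresses are placeholders, not values from this repository:

```yaml
# Illustrative only: Endpoints listing the GlusterFS nodes, plus a Service
# so the endpoints persist (names and IPs are placeholders)
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.10.11
  - ip: 192.168.10.12
  - ip: 192.168.10.13
  ports:
  - port: 1   # required by the schema but unused for GlusterFS
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
```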
diff --git a/playbooks/byo/openshift-glusterfs/config.yml b/playbooks/byo/openshift-glusterfs/config.yml new file mode 100644 index 000000000..3f11f3991 --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/config.yml @@ -0,0 +1,10 @@ +--- +- include: ../openshift-cluster/initialize_groups.yml +  tags: +  - always + +- include: ../../common/openshift-cluster/std_include.yml +  tags: +  - always + +- include: ../../common/openshift-glusterfs/config.yml diff --git a/playbooks/byo/openshift-glusterfs/filter_plugins b/playbooks/byo/openshift-glusterfs/filter_plugins new file mode 120000 index 000000000..99a95e4ca --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/filter_plugins @@ -0,0 +1 @@ +../../../filter_plugins
\ No newline at end of file diff --git a/playbooks/byo/openshift-glusterfs/lookup_plugins b/playbooks/byo/openshift-glusterfs/lookup_plugins new file mode 120000 index 000000000..ac79701db --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/lookup_plugins @@ -0,0 +1 @@ +../../../lookup_plugins
\ No newline at end of file diff --git a/playbooks/byo/openshift-glusterfs/registry.yml b/playbooks/byo/openshift-glusterfs/registry.yml new file mode 100644 index 000000000..6ee6febdb --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/registry.yml @@ -0,0 +1,10 @@ +--- +- include: ../openshift-cluster/initialize_groups.yml +  tags: +  - always + +- include: ../../common/openshift-cluster/std_include.yml +  tags: +  - always + +- include: ../../common/openshift-glusterfs/registry.yml diff --git a/playbooks/byo/openshift-glusterfs/roles b/playbooks/byo/openshift-glusterfs/roles new file mode 120000 index 000000000..20c4c58cf --- /dev/null +++ b/playbooks/byo/openshift-glusterfs/roles @@ -0,0 +1 @@ +../../../roles
\ No newline at end of file diff --git a/playbooks/byo/openshift-preflight/check.yml b/playbooks/byo/openshift-preflight/check.yml index 04a55308a..eb763221f 100644 --- a/playbooks/byo/openshift-preflight/check.yml +++ b/playbooks/byo/openshift-preflight/check.yml @@ -1,8 +1,9 @@  ---  - include: ../openshift-cluster/initialize_groups.yml -- hosts: g_all_hosts -  name: run OpenShift health checks +- name: Run OpenShift health checks +  # Temporarily reverting to OSEv3 until group standardization is complete +  hosts: OSEv3    roles:      - openshift_health_checker    post_tasks: diff --git a/playbooks/byo/openshift_facts.yml b/playbooks/byo/openshift_facts.yml index 75b606e61..a8c1c3a88 100644 --- a/playbooks/byo/openshift_facts.yml +++ b/playbooks/byo/openshift_facts.yml @@ -8,7 +8,8 @@    - always  - name: Gather Cluster facts -  hosts: g_all_hosts +  # Temporarily reverting to OSEv3 until group standardization is complete +  hosts: OSEv3    roles:    - openshift_facts    tasks: diff --git a/playbooks/byo/rhel_subscribe.yml b/playbooks/byo/rhel_subscribe.yml index aec87cf82..1b14ff32e 100644 --- a/playbooks/byo/rhel_subscribe.yml +++ b/playbooks/byo/rhel_subscribe.yml @@ -4,7 +4,8 @@    - always  - name: Subscribe hosts, update repos and update OS packages -  hosts: g_all_hosts +  # Temporarily reverting to OSEv3 until group standardization is complete +  hosts: OSEv3    roles:    - role: rhel_subscribe      when: deployment_type in ['atomic-enterprise', 'enterprise', 'openshift-enterprise'] and diff --git a/playbooks/common/openshift-cluster/config.yml b/playbooks/common/openshift-cluster/config.yml index 239bb211b..1482b3a3f 100644 --- a/playbooks/common/openshift-cluster/config.yml +++ b/playbooks/common/openshift-cluster/config.yml @@ -3,9 +3,15 @@    tags:    - always -- include: disable_excluder.yml +- name: Disable excluders +  hosts: oo_masters_to_config:oo_nodes_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: disable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"  - include: ../openshift-etcd/config.yml    tags: @@ -39,6 +45,12 @@    tags:    - hosted -- include: reset_excluder.yml +- name: Re-enable excluder if it was previously enabled +  hosts: oo_masters_to_config:oo_nodes_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: enable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" diff --git a/playbooks/common/openshift-cluster/disable_excluder.yml b/playbooks/common/openshift-cluster/disable_excluder.yml deleted file mode 100644 index f664c51c9..000000000 --- a/playbooks/common/openshift-cluster/disable_excluder.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -- name: Disable excluders -  hosts: oo_masters_to_config:oo_nodes_to_config -  gather_facts: no -  tasks: - -  # During installation the excluders are installed with present state. 
-  # So no pre-validation check here as the excluders are either to be installed (present = latest) -  # or they are not going to be updated if already installed - -  # disable excluders based on their status -  - include_role: -      name: openshift_excluder -      tasks_from: disable -    vars: -      openshift_excluder_package_state: present -      docker_excluder_package_state: present diff --git a/playbooks/common/openshift-cluster/evaluate_groups.yml b/playbooks/common/openshift-cluster/evaluate_groups.yml index 17a177644..46932b27f 100644 --- a/playbooks/common/openshift-cluster/evaluate_groups.yml +++ b/playbooks/common/openshift-cluster/evaluate_groups.yml @@ -155,5 +155,5 @@        groups: oo_glusterfs_to_config        ansible_ssh_user: "{{ g_ssh_user | default(omit) }}"        ansible_become: "{{ g_sudo | default(omit) }}" -    with_items: "{{ g_glusterfs_hosts | default([]) }}" +    with_items: "{{ g_glusterfs_hosts | union(g_glusterfs_registry_hosts) | default([]) }}"      changed_when: no diff --git a/playbooks/common/openshift-cluster/reset_excluder.yml b/playbooks/common/openshift-cluster/reset_excluder.yml deleted file mode 100644 index eaa8ce39c..000000000 --- a/playbooks/common/openshift-cluster/reset_excluder.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- name: Re-enable excluder if it was previously enabled -  hosts: oo_masters_to_config:oo_nodes_to_config -  gather_facts: no -  tasks: -  - include_role: -      name: openshift_excluder -      tasks_from: enable diff --git a/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml b/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml deleted file mode 100644 index 02042c1ef..000000000 --- a/playbooks/common/openshift-cluster/upgrades/disable_excluder.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -- name: Record excluder state and disable -  hosts: oo_masters_to_config:oo_nodes_to_config -  gather_facts: no -  tasks: -  # verify the excluders can be upgraded -  - include_role: -      name: openshift_excluder -      tasks_from: verify_upgrade - -  # disable excluders based on their status -  - include_role: -      name: openshift_excluder -      tasks_from: disable -    vars: -      openshift_excluder_package_state: latest -      docker_excluder_package_state: latest diff --git a/playbooks/common/openshift-cluster/upgrades/disable_master_excluders.yml b/playbooks/common/openshift-cluster/upgrades/disable_master_excluders.yml new file mode 100644 index 000000000..800621857 --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/disable_master_excluders.yml @@ -0,0 +1,12 @@ +--- +- name: Disable excluders +  hosts: oo_masters_to_config +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: disable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" +    r_openshift_excluder_verify_upgrade: true +    r_openshift_excluder_upgrade_target: "{{ openshift_upgrade_target }}" +    r_openshift_excluder_package_state: latest +    r_openshift_excluder_docker_package_state: latest diff --git a/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml b/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml new file mode 100644 index 000000000..7988e97ab --- /dev/null +++ b/playbooks/common/openshift-cluster/upgrades/disable_node_excluders.yml @@ -0,0 +1,12 @@ +--- +- name: Disable excluders +  hosts: oo_nodes_to_config +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: disable +    
r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" +    r_openshift_excluder_verify_upgrade: true +    r_openshift_excluder_upgrade_target: "{{ openshift_upgrade_target }}" +    r_openshift_excluder_package_state: latest +    r_openshift_excluder_docker_package_state: latest diff --git a/playbooks/common/openshift-cluster/upgrades/etcd/files/etcdctl.sh b/playbooks/common/openshift-cluster/upgrades/etcd/files/etcdctl.sh deleted file mode 120000 index 641e04e44..000000000 --- a/playbooks/common/openshift-cluster/upgrades/etcd/files/etcdctl.sh +++ /dev/null @@ -1 +0,0 @@ -../roles/etcd/files/etcdctl.sh
\ No newline at end of file diff --git a/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml index 45e301315..54f9e21a1 100644 --- a/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/etcd/upgrade.yml @@ -2,43 +2,61 @@  - name: Determine etcd version    hosts: oo_etcd_hosts_to_upgrade    tasks: -  - name: Record RPM based etcd version -    command: rpm -qa --qf '%{version}' etcd\* -    args: -      warn: no -    register: etcd_rpm_version -    failed_when: false +  - block: +    - name: Record RPM based etcd version +      command: rpm -qa --qf '%{version}' etcd\* +      args: +        warn: no +      register: etcd_rpm_version +      failed_when: false +      # AUDIT:changed_when: `false` because we are only inspecting +      # state, not manipulating anything +      changed_when: false + +    - debug: +        msg: "Etcd rpm version {{ etcd_rpm_version.stdout }} detected"      when: not openshift.common.is_containerized | bool -    # AUDIT:changed_when: `false` because we are only inspecting -    # state, not manipulating anything -    changed_when: false - -  - name: Record containerized etcd version -    command: docker exec etcd_container rpm -qa --qf '%{version}' etcd\* -    register: etcd_container_version -    failed_when: false -    when: openshift.common.is_containerized | bool -    # AUDIT:changed_when: `false` because we are only inspecting -    # state, not manipulating anything -    changed_when: false - -  - name: Record containerized etcd version -    command: docker exec etcd_container rpm -qa --qf '%{version}' etcd\* -    register: etcd_container_version -    failed_when: false -    when: openshift.common.is_containerized | bool and not openshift.common.is_etcd_system_container | bool -    # AUDIT:changed_when: `false` because we are only inspecting -    # state, not manipulating anything -    changed_when: false - -  - name: Record containerized etcd version -    command: runc exec etcd_container rpm -qa --qf '%{version}' etcd\* -    register: etcd_container_version -    failed_when: false -    when: openshift.common.is_containerized | bool and openshift.common.is_etcd_system_container | bool -    # AUDIT:changed_when: `false` because we are only inspecting -    # state, not manipulating anything -    changed_when: false + +  - block: +    - name: Record containerized etcd version (docker) +      command: docker exec etcd_container rpm -qa --qf '%{version}' etcd\* +      register: etcd_container_version_docker +      failed_when: false +      # AUDIT:changed_when: `false` because we are only inspecting +      # state, not manipulating anything +      changed_when: false +      when: +      - not openshift.common.is_etcd_system_container | bool + +      # Given that a registered variable is set even if the when condition +      # is false, we need to set etcd_container_version separately +    - set_fact: +        etcd_container_version: "{{ etcd_container_version_docker.stdout }}" +      when: +      - not openshift.common.is_etcd_system_container | bool + +    - name: Record containerized etcd version (runc) +      command: runc exec etcd_container rpm -qa --qf '%{version}' etcd\* +      register: etcd_container_version_runc +      failed_when: false +      # AUDIT:changed_when: `false` because we are only inspecting +      # state, not manipulating anything +      changed_when: false +      when: +      - 
openshift.common.is_etcd_system_container | bool + +      # Given that a registered variable is set even if the when condition +      # is false, we need to set etcd_container_version separately +    - set_fact: +        etcd_container_version: "{{ etcd_container_version_runc.stdout }}" +      when: +      - openshift.common.is_etcd_system_container | bool + +    - debug: +        msg: "Etcd containerized version {{ etcd_container_version }} detected" + +    when: +    - openshift.common.is_containerized | bool  # I really dislike this copy/pasta but I wasn't able to find a way to get it to loop  # through hosts, then loop through tasks only when appropriate @@ -67,7 +85,7 @@      upgrade_version: 2.2.5    tasks:    - include: containerized_tasks.yml -    when: etcd_container_version.stdout | default('99') | version_compare('2.2','<') and openshift.common.is_containerized | bool +    when: etcd_container_version | default('99') | version_compare('2.2','<') and openshift.common.is_containerized | bool  - name: Upgrade RPM hosts to 2.3    hosts: oo_etcd_hosts_to_upgrade @@ -85,7 +103,7 @@      upgrade_version: 2.3.7    tasks:    - include: containerized_tasks.yml -    when: etcd_container_version.stdout | default('99') | version_compare('2.3','<') and openshift.common.is_containerized | bool +    when: etcd_container_version | default('99') | version_compare('2.3','<') and openshift.common.is_containerized | bool  - name: Upgrade RPM hosts to 3.0    hosts: oo_etcd_hosts_to_upgrade @@ -103,7 +121,7 @@      upgrade_version: 3.0.15    tasks:    - include: containerized_tasks.yml -    when: etcd_container_version.stdout | default('99') | version_compare('3.0','<') and openshift.common.is_containerized | bool +    when: etcd_container_version | default('99') | version_compare('3.0','<') and openshift.common.is_containerized | bool  - name: Upgrade fedora to latest    hosts: oo_etcd_hosts_to_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py b/playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py index 673f11889..4eac8b067 100755 --- a/playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py +++ b/playbooks/common/openshift-cluster/upgrades/library/openshift_upgrade_config.py @@ -1,7 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4 -  """Ansible module for modifying OpenShift configs during an upgrade"""  import os diff --git a/playbooks/common/openshift-cluster/upgrades/post_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/post_control_plane.yml index 0d7cdb227..9b76f1dd0 100644 --- a/playbooks/common/openshift-cluster/upgrades/post_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/post_control_plane.yml @@ -9,6 +9,8 @@                           replace ( '${version}', openshift_image_tag ) }}"      router_image: "{{ openshift.master.registry_url | replace( '${component}', 'haproxy-router' ) |                        replace ( '${version}', openshift_image_tag ) }}" +    registry_console_image: "{{ openshift.master.registry_url | replace ( '${component}', 'registry-console') | +                                replace ( '${version}', openshift.common.short_version ) }}"    pre_tasks:    - name: Load lib_openshift modules @@ -61,6 +63,26 @@      when:      - _default_registry.results.results[0] != {} +  - name: Check for registry-console +    oc_obj: +      state: list +      kind: dc +      name: registry-console +    register: 
_registry_console +    when: +    - openshift.common.deployment_type != 'origin' + +  - name: Update registry-console image to current version +    oc_edit: +      kind: dc +      name: registry-console +      namespace: default +      content: +        spec.template.spec.containers[0].image: "{{ registry_console_image }}" +    when: +    - openshift.common.deployment_type != 'origin' +    - _registry_console.results.results[0] != {} +    roles:    - openshift_manageiq    # Create the new templates shipped in 3.2, existing templates are left @@ -97,6 +119,12 @@      - not grep_plugin_order_override | skipped      - grep_plugin_order_override.rc == 0 -- include: ../reset_excluder.yml +- name: Re-enable excluder if it was previously enabled +  hosts: oo_masters_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: enable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" diff --git a/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml b/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml index c83923dae..6a9f88707 100644 --- a/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml +++ b/playbooks/common/openshift-cluster/upgrades/pre/verify_upgrade_targets.yml @@ -1,21 +1,13 @@  ---  - name: Verify upgrade targets    hosts: oo_masters_to_config:oo_nodes_to_upgrade -  vars: -    openshift_docker_hosted_registry_network: "{{ hostvars[groups.oo_first_master.0].openshift.common.portal_net }}" -  pre_tasks: -  - fail: + +  tasks: +  - name: Fail when OpenShift is not installed +    fail:        msg: Verify OpenShift is already installed      when: openshift.common.version is not defined -  - fail: -      msg: Verify the correct version was found -    when: verify_upgrade_version is defined and openshift_version != verify_upgrade_version - -  - set_fact: -      g_new_service_name: "{{ 'origin' if deployment_type =='origin' else 'atomic-openshift' }}" -    when: not openshift.common.is_containerized | bool -    - name: Verify containers are available for upgrade      command: >        docker pull {{ openshift.common.cli_image }}:{{ openshift_image_tag }} @@ -23,19 +15,31 @@      changed_when: "'Downloaded newer image' in pull_result.stdout"      when: openshift.common.is_containerized | bool -  - name: Check latest available OpenShift RPM version -    command: > -      {{ repoquery_cmd }} --qf '%{version}' "{{ openshift.common.service_type }}" -    failed_when: false -    changed_when: false -    register: avail_openshift_version -    when: not openshift.common.is_containerized | bool +  - when: not openshift.common.is_containerized | bool +    block: +    - name: Check latest available OpenShift RPM version +      command: > +        {{ repoquery_cmd }} --qf '%{version}' "{{ openshift.common.service_type }}" +      failed_when: false +      changed_when: false +      register: avail_openshift_version -  - name: Verify OpenShift RPMs are available for upgrade -    fail: -      msg: "OpenShift {{ avail_openshift_version.stdout }} is available, but {{ openshift_upgrade_target }} or greater is required" -    when: not openshift.common.is_containerized | bool and not avail_openshift_version | skipped and avail_openshift_version.stdout | default('0.0', True) | version_compare(openshift_release, '<') +    - name: Fail when unable to determine available OpenShift RPM version +      fail: +        msg: "Unable to determine available OpenShift 
RPM version" +      when: +      - avail_openshift_version.stdout == '' -  - fail: +    - name: Verify OpenShift RPMs are available for upgrade +      fail: +        msg: "OpenShift {{ avail_openshift_version.stdout }} is available, but {{ openshift_upgrade_target }} or greater is required" +      when: +      - not avail_openshift_version | skipped +      - avail_openshift_version.stdout | default('0.0', True) | version_compare(openshift_release, '<') + +  - name: Fail when openshift version does not meet minium requirement for Origin upgrade +    fail:        msg: "This upgrade playbook must be run against OpenShift {{ openshift_upgrade_min }} or later" -    when: deployment_type == 'origin' and openshift.common.version | version_compare(openshift_upgrade_min,'<') +    when: +    - deployment_type == 'origin' +    - openshift.common.version | version_compare(openshift_upgrade_min,'<') diff --git a/playbooks/common/openshift-cluster/upgrades/rpm_upgrade.yml b/playbooks/common/openshift-cluster/upgrades/rpm_upgrade.yml index 03ac02e9f..164baca81 100644 --- a/playbooks/common/openshift-cluster/upgrades/rpm_upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/rpm_upgrade.yml @@ -1,27 +1,39 @@  --- -# We verified latest rpm available is suitable, so just yum update. +# When we update package "a-${version}" and a requires b >= ${version} if we +# don't specify the version of b yum will choose the latest version of b +# available and the whole set of dependencies end up at the latest version. +# Since the package module, unlike the yum module, doesn't flatten a list +# of packages into one transaction we need to do that explicitly. The ansible +# core team tells us not to rely on yum module transaction flattening anyway. + +# TODO: If the sdn package isn't already installed this will install it, we +# should fix that -# Master package upgrade ends up depending on node and sdn packages, we need to be explicit -# with all versions to avoid yum from accidentally jumping to something newer than intended:  - name: Upgrade master packages -  package: name={{ item }} state=present -  when: component == "master" -  with_items: -  - "{{ openshift.common.service_type }}{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-master{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-sdn-ovs{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-clients{{ openshift_pkg_version }}" +  package: name={{ master_pkgs | join(',') }} state=present +  vars: +    master_pkgs: +      - "{{ openshift.common.service_type }}{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-master{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-sdn-ovs{{ openshift_pkg_version}}" +      - "{{ openshift.common.service_type }}-clients{{ openshift_pkg_version }}" +      - "tuned-profiles-{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" +      - PyYAML +  when: +    - component == "master" +    - not openshift.common.is_atomic | bool  - name: Upgrade node packages -  package: name={{ item }} state=present -  when: component == "node" -  with_items: -  - "{{ openshift.common.service_type }}{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" -  - "{{ openshift.common.service_type }}-sdn-ovs{{ openshift_pkg_version 
}}" -  - "{{ openshift.common.service_type }}-clients{{ openshift_pkg_version }}" - -- name: Ensure python-yaml present for config upgrade -  package: name=PyYAML state=present -  when: not openshift.common.is_atomic | bool +  package: name={{ node_pkgs | join(',') }} state=present +  vars: +    node_pkgs: +      - "{{ openshift.common.service_type }}{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-sdn-ovs{{ openshift_pkg_version }}" +      - "{{ openshift.common.service_type }}-clients{{ openshift_pkg_version }}" +      - "tuned-profiles-{{ openshift.common.service_type }}-node{{ openshift_pkg_version }}" +      - PyYAML +  when: +    - component == "node" +    - not openshift.common.is_atomic | bool diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml index e9f894942..4d455fe0a 100644 --- a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml +++ b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml @@ -34,6 +34,9 @@    - openshift_facts    - docker    - openshift_node_upgrade +  - role: openshift_excluder +    r_openshift_excluder_action: enable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"    post_tasks:    - name: Set node schedulability @@ -46,7 +49,3 @@      register: node_schedulable      until: node_schedulable|succeeded      when: node_unschedulable|changed - -- include: ../reset_excluder.yml -  tags: -  - always diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml index be18c1edd..d81a13ef2 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade.yml @@ -46,7 +46,11 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml +  tags: +  - pre_upgrade + +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml index 20dffb44b..8a692d02b 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_control_plane.yml @@ -54,7 +54,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml index 14aaf70d6..2d30bba94 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_3/upgrade_nodes.yml @@ -47,7 +47,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml index 5d6455bef..e9ff47f32 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade.yml @@ -46,7 +46,11 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml +  tags: +  - 
pre_upgrade + +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml index c76920586..d4ae8d8b4 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_control_plane.yml @@ -54,7 +54,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml index f397f6015..ae205b172 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_4/upgrade_nodes.yml @@ -47,7 +47,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml index 7cedfb1ca..1269634d1 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade.yml @@ -46,12 +46,14 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml +  tags: +  - pre_upgrade + +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade -# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. -#       So it is necessary to run the play after running disable_excluder.yml.  - include: ../../initialize_openshift_version.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml index 0198074ed..21c075678 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_control_plane.yml @@ -54,7 +54,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml index 2b16875f4..e67e169fc 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_5/upgrade_nodes.yml @@ -47,7 +47,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml index 4604bdc8b..a1b1f3301 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade.yml @@ -46,12 +46,14 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml +  tags: +  - pre_upgrade + +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade -# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play. 
-#       So it is necassary to run the play after running disable_excluder.yml.  - include: ../../initialize_openshift_version.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml index a09097ed9..af6e1f71b 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml @@ -54,7 +54,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_master_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml index 7640f2116..285c18b7b 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml @@ -47,7 +47,7 @@    tags:    - pre_upgrade -- include: ../disable_excluder.yml +- include: ../disable_node_excluders.yml    tags:    - pre_upgrade diff --git a/playbooks/common/openshift-cluster/upgrades/v3_6/validator.yml b/playbooks/common/openshift-cluster/upgrades/v3_6/validator.yml index ac5704f69..78c1767b8 100644 --- a/playbooks/common/openshift-cluster/upgrades/v3_6/validator.yml +++ b/playbooks/common/openshift-cluster/upgrades/v3_6/validator.yml @@ -7,4 +7,6 @@    hosts: oo_first_master    roles:    - { role: lib_openshift } -  tasks: [] +  tasks: +  - name: Check for invalid namespaces and SDN errors +    oc_objectvalidator: diff --git a/playbooks/common/openshift-glusterfs/config.yml b/playbooks/common/openshift-glusterfs/config.yml index 75faf5ba8..1efdfb336 100644 --- a/playbooks/common/openshift-glusterfs/config.yml +++ b/playbooks/common/openshift-glusterfs/config.yml @@ -12,7 +12,9 @@      - service: glusterfs_bricks        port: "49152-49251/tcp"    roles: -  - os_firewall +  - role: os_firewall +    when: +    - openshift_storage_glusterfs_is_native | default(True)  - name: Configure GlusterFS    hosts: oo_first_master diff --git a/playbooks/common/openshift-glusterfs/registry.yml b/playbooks/common/openshift-glusterfs/registry.yml new file mode 100644 index 000000000..80cf7529e --- /dev/null +++ b/playbooks/common/openshift-glusterfs/registry.yml @@ -0,0 +1,49 @@ +--- +- include: config.yml + +- name: Initialize GlusterFS registry PV and PVC vars +  hosts: oo_first_master +  tags: hosted +  tasks: +  - set_fact: +      glusterfs_pv: [] +      glusterfs_pvc: [] + +  - set_fact: +      glusterfs_pv: +      - name: "{{ openshift.hosted.registry.storage.volume.name }}-glusterfs-volume" +        capacity: "{{ openshift.hosted.registry.storage.volume.size }}" +        access_modes: "{{ openshift.hosted.registry.storage.access.modes }}" +        storage: +          glusterfs: +            endpoints: "{{ openshift.hosted.registry.storage.glusterfs.endpoints }}" +            path: "{{ openshift.hosted.registry.storage.glusterfs.path }}" +            readOnly: "{{ openshift.hosted.registry.storage.glusterfs.readOnly }}" +      glusterfs_pvc: +      - name: "{{ openshift.hosted.registry.storage.volume.name }}-glusterfs-claim" +        capacity: "{{ openshift.hosted.registry.storage.volume.size }}" +        access_modes: "{{ openshift.hosted.registry.storage.access.modes }}" +    when: openshift.hosted.registry.storage.glusterfs.swap + +- name: Create persistent volumes +  hosts: oo_first_master +  
tags: +  - hosted +  vars: +    persistent_volumes: "{{ hostvars[groups.oo_first_master.0] | oo_persistent_volumes(groups, glusterfs_pv) }}" +    persistent_volume_claims: "{{ hostvars[groups.oo_first_master.0] | oo_persistent_volume_claims(glusterfs_pvc) }}" +  roles: +  - role: openshift_persistent_volumes +    when: persistent_volumes | union(glusterfs_pv) | length > 0 or persistent_volume_claims | union(glusterfs_pvc) | length > 0 + +- name: Create Hosted Resources +  hosts: oo_first_master +  tags: +  - hosted +  pre_tasks: +  - set_fact: +      openshift_hosted_router_registryurl: "{{ hostvars[groups.oo_first_master.0].openshift.master.registry_url }}" +      openshift_hosted_registry_registryurl: "{{ hostvars[groups.oo_first_master.0].openshift.master.registry_url }}" +    when: "'master' in hostvars[groups.oo_first_master.0].openshift and 'registry_url' in hostvars[groups.oo_first_master.0].openshift.master" +  roles: +  - role: openshift_hosted diff --git a/playbooks/common/openshift-master/scaleup.yml b/playbooks/common/openshift-master/scaleup.yml index ab0045a39..bc61ee9bb 100644 --- a/playbooks/common/openshift-master/scaleup.yml +++ b/playbooks/common/openshift-master/scaleup.yml @@ -60,9 +60,15 @@    - openshift_facts    - openshift_docker -- include: ../openshift-cluster/disable_excluder.yml +- name: Disable excluders +  hosts: oo_masters_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: disable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"  - include: ../openshift-master/config.yml @@ -70,6 +76,12 @@  - include: ../openshift-node/config.yml -- include: ../openshift-cluster/reset_excluder.yml +- name: Re-enable excluder if it was previously enabled +  hosts: oo_masters_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: enable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" diff --git a/playbooks/common/openshift-node/restart.yml b/playbooks/common/openshift-node/restart.yml index 441b100e9..01cf948e0 100644 --- a/playbooks/common/openshift-node/restart.yml +++ b/playbooks/common/openshift-node/restart.yml @@ -51,7 +51,7 @@      register: node_output      delegate_to: "{{ groups.oo_first_master.0 }}"      when: inventory_hostname in groups.oo_nodes_to_config -    until: node_output.results.results[0].status.conditions | selectattr('type', 'match', '^Ready$') | map(attribute='status') | join | bool == True +    until: node_output.results.returncode == 0 and node_output.results.results[0].status.conditions | selectattr('type', 'match', '^Ready$') | map(attribute='status') | join | bool == True      # Give the node two minutes to come back online.      
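With the strengthened `until` condition above, a polling attempt only counts as success once the client call itself returned 0, so transient lookup failures keep the loop running instead of evaluating the Ready test against an empty result. The `retries: 24` / `delay: 5` pair that follows is what produces the two-minute window the comment promises (24 × 5 s = 120 s). A simplified, self-contained sketch of the same wait pattern (plain `command` form assumed here; the playbook itself goes through a lib_openshift module):

```
# Simplified sketch of the readiness wait: poll until the command
# succeeds AND the node reports Ready; 24 retries x 5s delay = 120s.
- name: Wait for node to report Ready
  command: >
    {{ openshift.common.client_binary }} get node
    {{ openshift.node.nodename }} -o json
  register: node_out
  delegate_to: "{{ groups.oo_first_master.0 }}"
  until: >
    node_out.rc == 0 and
    ((node_out.stdout | from_json).status.conditions
     | selectattr('type', 'match', '^Ready$')
     | map(attribute='status') | join | bool)
  retries: 24
  delay: 5
```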
retries: 24      delay: 5 diff --git a/playbooks/common/openshift-node/scaleup.yml b/playbooks/common/openshift-node/scaleup.yml index c31aca62b..40da8990d 100644 --- a/playbooks/common/openshift-node/scaleup.yml +++ b/playbooks/common/openshift-node/scaleup.yml @@ -27,12 +27,24 @@    - openshift_facts    - openshift_docker -- include: ../openshift-cluster/disable_excluder.yml +- name: Disable excluders +  hosts: oo_nodes_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: disable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"  - include: ../openshift-node/config.yml -- include: ../openshift-cluster/reset_excluder.yml +- name: Re-enable excluder if it was previously enabled +  hosts: oo_nodes_to_config    tags:    - always +  gather_facts: no +  roles: +  - role: openshift_excluder +    r_openshift_excluder_action: enable +    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}" diff --git a/requirements.txt b/requirements.txt index 1996a967d..734ee6201 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,7 +1,7 @@  # Versions are pinned to prevent pypi releases arbitrarily breaking  # tests with new APIs/semantics. We want to update versions deliberately.  ansible==2.2.2.0 -boto==2.45.0 +boto==2.34.0  click==6.7  pyOpenSSL==16.2.0  # We need to disable ruamel.yaml for now because of test failures diff --git a/roles/calico/README.md b/roles/calico/README.md index 99e870521..9b9458bfa 100644 --- a/roles/calico/README.md +++ b/roles/calico/README.md @@ -20,6 +20,15 @@ To install, set the following inventory configuration parameters:  * `openshift_use_openshift_sdn=False`  * `os_sdn_network_plugin_name='cni'` +## Additional Calico/Node and Felix Configuration Options + +Additional parameters that can be defined in the inventory are: + +| Environment | Description | Schema | Default |    +|---------|----------------------|---------|---------| +|CALICO_IPV4POOL_CIDR|	The IPv4 Pool to create if none exists at start up. It is invalid to define this variable and NO_DEFAULT_POOLS.	|IPv4 CIDR	| 192.168.0.0/16 | +| CALICO_IPV4POOL_IPIP | IPIP Mode to use for the IPv4 POOL created at start up.	
| off, always, cross-subnet	| always | +| CALICO_LOG_DIR | Directory on the host machine where Calico Logs are written.| String	| /var/log/calico |  ### Contact Information diff --git a/roles/calico/defaults/main.yaml b/roles/calico/defaults/main.yaml index a81fc3af7..03c612982 100644 --- a/roles/calico/defaults/main.yaml +++ b/roles/calico/defaults/main.yaml @@ -4,7 +4,17 @@ etcd_endpoints: "{{ hostvars[groups.oo_first_master.0].openshift.master.etcd_url  cni_conf_dir: "/etc/cni/net.d/"  cni_bin_dir: "/opt/cni/bin/" +cni_url: "https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz"  calico_etcd_ca_cert_file: "/etc/origin/calico/calico.etcd-ca.crt"  calico_etcd_cert_file: "/etc/origin/calico/calico.etcd-client.crt"  calico_etcd_key_file: "/etc/origin/calico/calico.etcd-client.key" + +calico_url_cni: "https://github.com/projectcalico/cni-plugin/releases/download/v1.5.5/calico" +calico_url_ipam: "https://github.com/projectcalico/cni-plugin/releases/download/v1.5.5/calico-ipam" + +calico_ipv4pool_ipip: "always" +calico_ipv4pool_cidr: "192.168.0.0/16" + +calico_log_dir: "/var/log/calico" +calico_node_image: "calico/node:v1.1.0" diff --git a/roles/calico/tasks/main.yml b/roles/calico/tasks/main.yml index 287fed321..fa5e338b3 100644 --- a/roles/calico/tasks/main.yml +++ b/roles/calico/tasks/main.yml @@ -7,7 +7,7 @@      etcd_ca_host: "{{ groups.oo_etcd_to_config.0 }}"      etcd_cert_subdir: "openshift-calico-{{ openshift.common.hostname }}" -- name: Assure the calico certs have been generated +- name: Calico Node | Assure the calico certs have been generated    stat:      path: "{{ item }}"    with_items: @@ -15,12 +15,12 @@    - "{{ calico_etcd_cert_file}}"    - "{{ calico_etcd_key_file }}" -- name: Configure Calico service unit file +- name: Calico Node | Configure Calico service unit file    template:      dest: "/lib/systemd/system/calico.service"      src: calico.service.j2 -- name: Enable calico +- name: Calico Node | Enable calico    become: yes    systemd:      name: calico @@ -29,46 +29,46 @@      enabled: yes    register: start_result -- name: Assure CNI conf dir exists +- name: Calico Node | Assure CNI conf dir exists    become: yes    file: path="{{ cni_conf_dir }}" state=directory -- name: Generate Calico CNI config +- name: Calico Node | Generate Calico CNI config    become: yes    template: -    src: "calico.conf.j2" +    src: "10-calico.conf.j2"      dest: "{{ cni_conf_dir }}/10-calico.conf" -- name: Assures Kuberentes CNI bin dir exists +- name: Calico Node | Assure Kubernetes CNI bin dir exists    become: yes    file: path="{{ cni_bin_dir }}" state=directory -- name: Download Calico CNI Plugin +- name: Calico Node | Download Calico CNI Plugin    become: yes    get_url: -    url: https://github.com/projectcalico/cni-plugin/releases/download/v1.5.5/calico +    url: "{{ calico_url_cni }}"      dest: "{{ cni_bin_dir }}"      mode: a+x -- name: Download Calico IPAM Plugin +- name: Calico Node | Download Calico IPAM Plugin    become: yes    get_url: -    url: https://github.com/projectcalico/cni-plugin/releases/download/v1.5.5/calico-ipam +    url: "{{ calico_url_ipam }}"      dest: "{{ cni_bin_dir }}"      mode: a+x -- name: Download and unzip standard CNI plugins +- name: Calico Node | Download and extract standard CNI plugins    become: yes    unarchive:      remote_src: True -    src: https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz +    src: "{{ cni_url }}"      dest: "{{ cni_bin_dir }}" -- name: Assure 
Calico conf dir exists +- name: Calico Node | Assure Calico conf dir exists    become: yes    file: path=/etc/calico/ state=directory -- name: Set calicoctl.cfg +- name: Calico Node | Set calicoctl.cfg    template: -    src: calico.cfg.j2 +    src: calicoctl.cfg.j2      dest: "/etc/calico/calicoctl.cfg" diff --git a/roles/calico/templates/calico.cfg.j2 b/roles/calico/templates/10-calico.cfg.j2 index 722385ed8..722385ed8 100644 --- a/roles/calico/templates/calico.cfg.j2 +++ b/roles/calico/templates/10-calico.cfg.j2 diff --git a/roles/calico/templates/calico.service.j2 b/roles/calico/templates/calico.service.j2 index b882a5597..719d7ba0d 100644 --- a/roles/calico/templates/calico.service.j2 +++ b/roles/calico/templates/calico.service.j2 @@ -1,7 +1,7 @@  [Unit]  Description=calico -After=docker.service -Requires=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service  [Service]  Restart=always @@ -10,7 +10,8 @@ ExecStart=/usr/bin/docker run --net=host --privileged \   --name=calico-node \   -e WAIT_FOR_DATASTORE=true \   -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \ - -e CALICO_IPV4POOL_IPIP=always \ + -e CALICO_IPV4POOL_IPIP={{ calico_ipv4pool_ipip }} \ + -e CALICO_IPV4POOL_CIDR={{ calico_ipv4pool_cidr }} \   -e FELIX_IPV6SUPPORT=false \   -e ETCD_ENDPOINTS={{ etcd_endpoints }} \   -v /etc/origin/calico:/etc/origin/calico \ @@ -18,10 +19,11 @@ ExecStart=/usr/bin/docker run --net=host --privileged \   -e ETCD_CERT_FILE={{ calico_etcd_cert_file }} \   -e ETCD_KEY_FILE={{ calico_etcd_key_file }} \   -e NODENAME={{ openshift.common.hostname }} \ - -v /var/log/calico:/var/log/calico \ + -v {{ calico_log_dir }}:/var/log/calico\   -v /lib/modules:/lib/modules \   -v /var/run/calico:/var/run/calico \ - calico/node:v1.1.0 + {{ calico_node_image }} +  ExecStop=-/usr/bin/docker stop calico-node diff --git a/roles/calico/templates/calico.conf.j2 b/roles/calico/templates/calicoctl.conf.j2 index 3c8c6b046..3c8c6b046 100644 --- a/roles/calico/templates/calico.conf.j2 +++ b/roles/calico/templates/calicoctl.conf.j2 diff --git a/roles/calico_master/README.md b/roles/calico_master/README.md index 2d34a967c..6f5ed0664 100644 --- a/roles/calico_master/README.md +++ b/roles/calico_master/README.md @@ -21,6 +21,18 @@ To install, set the following inventory configuration parameters:  * `os_sdn_network_plugin_name='cni'` + +## Additional Calico/Node and Felix Configuration Options + +Additional parameters that can be defined in the inventory are: + + +| Environment | Description | Schema | Default |    +|---------|----------------------|---------|---------| +|CALICO_IPV4POOL_CIDR|	The IPv4 Pool to create if none exists at start up. It is invalid to define this variable and NO_DEFAULT_POOLS.	|IPv4 CIDR	| 192.168.0.0/16 | +| CALICO_IPV4POOL_IPIP | IPIP Mode to use for the IPv4 POOL created at start up.	
| off, always, cross-subnet	| always | +| CALICO_LOG_DIR | Directory on the host machine where Calico Logs are written.| String	| /var/log/calico | +  ### Contact Information  Author: Dan Osborne <dan@projectcalico.org> diff --git a/roles/calico_master/defaults/main.yaml b/roles/calico_master/defaults/main.yaml index db0d17884..5b324bce5 100644 --- a/roles/calico_master/defaults/main.yaml +++ b/roles/calico_master/defaults/main.yaml @@ -1,2 +1,6 @@  ---  kubeconfig: "{{ openshift.common.config_base }}/master/openshift-master.kubeconfig" + +calicoctl_bin_dir: "/usr/local/bin/" + +calico_url_calicoctl: "https://github.com/projectcalico/calicoctl/releases/download/v1.1.3/calicoctl" diff --git a/roles/calico_master/tasks/main.yml b/roles/calico_master/tasks/main.yml index 3358abe23..8ddca26d6 100644 --- a/roles/calico_master/tasks/main.yml +++ b/roles/calico_master/tasks/main.yml @@ -1,5 +1,5 @@  --- -- name: Assure the calico certs have been generated +- name: Calico Master | Assure the calico certs have been generated    stat:      path: "{{ item }}"    with_items: @@ -7,17 +7,17 @@    - "{{ calico_etcd_cert_file}}"    - "{{ calico_etcd_key_file }}" -- name: Create temp directory for policy controller definition +- name: Calico Master | Create temp directory for policy controller definition    command: mktemp -d /tmp/openshift-ansible-XXXXXXX    register: mktemp    changed_when: False -- name: Write Calico Policy Controller definition +- name: Calico Master | Write Calico Policy Controller definition    template:      dest: "{{ mktemp.stdout }}/calico-policy-controller.yml"      src: calico-policy-controller.yml.j2 -- name: Launch Calico Policy Controller +- name: Calico Master | Launch Calico Policy Controller    command: >      {{ openshift.common.client_binary }} create      -f {{ mktemp.stdout }}/calico-policy-controller.yml @@ -26,16 +26,23 @@    failed_when: ('already exists' not in calico_create_output.stderr) and ('created' not in calico_create_output.stdout)    changed_when: ('created' in calico_create_output.stdout) -- name: Delete temp directory +- name: Calico Master | Delete temp directory    file:      name: "{{ mktemp.stdout }}"      state: absent    changed_when: False -- name: oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:calico +- name: Calico Master | oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:calico    oc_adm_policy_user:      user: system:serviceaccount:kube-system:calico      resource_kind: scc      resource_name: privileged      state: present + +- name: Download Calicoctl +  become: yes +  get_url: +    url: "{{ calico_url_calicoctl }}" +    dest: "{{ calicoctl_bin_dir }}" +    mode: a+x diff --git a/roles/contiv/templates/aci-gw.service b/roles/contiv/templates/aci-gw.service index 8e4b66fbe..4506d2231 100644 --- a/roles/contiv/templates/aci-gw.service +++ b/roles/contiv/templates/aci-gw.service @@ -1,6 +1,6 @@  [Unit]  Description=Contiv ACI gw -After=auditd.service systemd-user-sessions.service time-sync.target docker.service +After=auditd.service systemd-user-sessions.service time-sync.target {{ openshift.docker.service_name }}.service  [Service]  ExecStart={{ bin_dir }}/aci_gw.sh start diff --git a/roles/dns/templates/named.service.j2 b/roles/dns/templates/named.service.j2 index 566739f25..6e0a7a640 100644 --- a/roles/dns/templates/named.service.j2 +++ b/roles/dns/templates/named.service.j2 @@ -1,7 +1,7 @@  [Unit] -Requires=docker.service -After=docker.service -PartOf=docker.service +Requires={{ 
openshift.docker.service_name }}.service +After={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  Type=simple @@ -12,4 +12,4 @@ ExecStart=/usr/bin/docker run --name bind -p 53:53/udp -v /var/log:/var/log -v /  ExecStop=/usr/bin/docker stop bind  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/docker/README.md b/roles/docker/README.md index f25ca03cd..4a9f21f22 100644 --- a/roles/docker/README.md +++ b/roles/docker/README.md @@ -3,6 +3,8 @@ Docker  Ensures docker package or system container is installed, and optionally raises timeout for systemd-udevd.service to 5 minutes. +daemon.json items may be found at https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file +  Requirements  ------------ diff --git a/roles/docker/tasks/package_docker.yml b/roles/docker/tasks/package_docker.yml index 10fb5772c..e101730d2 100644 --- a/roles/docker/tasks/package_docker.yml +++ b/roles/docker/tasks/package_docker.yml @@ -46,7 +46,7 @@      template:        dest: "{{ docker_systemd_dir }}/custom.conf"        src: custom.conf.j2 -  when: not os_firewall_use_firewalld | default(True) | bool +  when: not os_firewall_use_firewalld | default(False) | bool  - stat: path=/etc/sysconfig/docker    register: docker_check diff --git a/roles/docker/tasks/systemcontainer_docker.yml b/roles/docker/tasks/systemcontainer_docker.yml index b0d0632b0..3af3e00b2 100644 --- a/roles/docker/tasks/systemcontainer_docker.yml +++ b/roles/docker/tasks/systemcontainer_docker.yml @@ -27,27 +27,51 @@      state: present    when: not openshift.common.is_atomic | bool -# If we are on atomic, set http_proxy and https_proxy in /etc/atomic.conf +# Make sure Docker is installed so we are able to use the client +- name: Install Docker so we can use the client +  package: name=docker{{ '-' + docker_version if docker_version is defined else '' }} state=present +  when: not openshift.common.is_atomic | bool + +# Make sure docker is disabled. Errors are ignored. 
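The proxy edits below key each `lineinfile` on a regexp, so an existing entry, commented or not and written with either `:` or `=`, is rewritten in place rather than a duplicate being appended. Schematically (one entry shown; the proxy values are illustrative):

```
# Matches and replaces lines such as:
#   http_proxy: http://old-proxy:3128
#   #http_proxy=http://old-proxy:3128
# instead of appending a second http_proxy entry.
- lineinfile:
    dest: /etc/atomic.conf
    regexp: "^#?http_proxy[:=]{1}"
    line: "http_proxy: {{ openshift.common.http_proxy | default('') }}"
```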
+- name: Disable Docker +  systemd: +    name: docker +    enabled: no +    state: stopped +    daemon_reload: yes +  ignore_errors: True + +# Set http_proxy, https_proxy, and no_proxy in /etc/atomic.conf +# regexp: the line starts with or without #, followed by the string +#         http_proxy, then either : or =  - block:      - name: Add http_proxy to /etc/atomic.conf        lineinfile: -        path: /etc/atomic.conf -        line: "http_proxy={{ openshift.common.http_proxy | default('') }}" +        dest: /etc/atomic.conf +        regexp: "^#?http_proxy[:=]{1}" +        line: "http_proxy: {{ openshift.common.http_proxy | default('') }}"        when:          - openshift.common.http_proxy is defined          - openshift.common.http_proxy != ''      - name: Add https_proxy to /etc/atomic.conf        lineinfile: -        path: /etc/atomic.conf -        line: "https_proxy={{ openshift.common.https_proxy | default('') }}" +        dest: /etc/atomic.conf +        regexp: "^#?https_proxy[:=]{1}" +        line: "https_proxy: {{ openshift.common.https_proxy | default('') }}"        when:          - openshift.common.https_proxy is defined          - openshift.common.https_proxy != '' -  when: openshift.common.is_atomic | bool - +    - name: Add no_proxy to /etc/atomic.conf +      lineinfile: +        dest: /etc/atomic.conf +        regexp: "^#?no_proxy[:=]{1}" +        line: "no_proxy: {{ openshift.common.no_proxy | default('') }}" +      when: +        - openshift.common.no_proxy is defined +        - openshift.common.no_proxy != ''  - block: @@ -77,23 +101,17 @@        set_fact:          l_docker_image: "{{ l_docker_image_prepend }}/{{ openshift.docker.service_name }}:latest" +# NOTE: no_proxy added as a workaround until https://github.com/projectatomic/atomic/pull/999 is released  - name: Pre-pull Container Enginer System Container image    command: "atomic pull --storage ostree {{ l_docker_image }}"    changed_when: false +  environment: +    NO_PROXY: "{{ openshift.common.no_proxy | default('') }}" -# Make sure docker is disabled Errors are ignored as docker may not -# be installed. 
-- name: Disable Docker -  systemd: -    name: docker -    enabled: no -    state: stopped -    daemon_reload: yes -  ignore_errors: True -- name: Ensure docker.service.d directory exists +- name: Ensure container-engine.service.d directory exists    file: -    path: "{{ docker_systemd_dir }}" +    path: "{{ container_engine_systemd_dir }}"      state: directory  - name: Ensure /etc/docker directory exists @@ -111,9 +129,18 @@  - name: Configure Container Engine Service File    template: -    dest: "{{ docker_systemd_dir }}/custom.conf" +    dest: "{{ container_engine_systemd_dir }}/custom.conf"      src: systemcontainercustom.conf.j2 +# Set local versions of facts that must be in json format for daemon.json +# NOTE: When jinja2.9+ is used the daemon.json file can move to using tojson +- set_fact: +    l_docker_insecure_registries: "{{ docker_insecure_registries | default([]) | to_json }}" +    l_docker_log_options: "{{ docker_log_options | default({}) | to_json }}" +    l_docker_additional_registries: "{{ docker_additional_registries | default([]) | to_json }}" +    l_docker_blocked_registries: "{{ docker_blocked_registries | default([]) | to_json }}" +    l_docker_selinux_enabled: "{{ docker_selinux_enabled | default(true) | to_json }}" +  # Configure container-engine using the daemon.json file  - name: Configure Container Engine    template: diff --git a/roles/docker/templates/daemon.json b/roles/docker/templates/daemon.json index 30a1b30f4..a41b7cdbd 100644 --- a/roles/docker/templates/daemon.json +++ b/roles/docker/templates/daemon.json @@ -1,66 +1,20 @@ -  { -    "api-cors-header": "",      "authorization-plugins": ["rhel-push-plugin"], -    "bip": "", -    "bridge": "", -    "cgroup-parent": "", -    "cluster-store": "", -    "cluster-store-opts": {}, -    "cluster-advertise": "", -    "debug": true, -    "default-gateway": "", -    "default-gateway-v6": "",      "default-runtime": "oci", -    "containerd": "/var/run/containerd.sock", -    "default-ulimits": {}, +    "containerd": "/run/containerd.sock",      "disable-legacy-registry": false, -    "dns": [], -    "dns-opts": [], -    "dns-search": [],      "exec-opts": ["native.cgroupdriver=systemd"], -    "exec-root": "", -    "fixed-cidr": "", -    "fixed-cidr-v6": "", -    "graph": "", -    "group": "", -    "hosts": [], -    "icc": false, -    "insecure-registries": {{ docker_insecure_registries|default([]) }}, -    "ip": "0.0.0.0", -    "iptables": false, -    "ipv6": false, -    "ip-forward": false, -    "ip-masq": false, -    "labels": [], -    "live-restore": true, +    "insecure-registries": {{ l_docker_insecure_registries }},  {% if docker_log_driver is defined  %}      "log-driver": "{{ docker_log_driver }}", -{% endif %} -    "log-level": "", -    "log-opts": {{ docker_log_options|default({}) }}, -    "max-concurrent-downloads": 3, -    "max-concurrent-uploads": 5, -    "mtu": 0, -    "oom-score-adjust": -500, -    "pidfile": "", -    "raw-logs": false, -    "registry-mirrors": [], +{%- endif %} +    "log-opts": {{ l_docker_log_options }},      "runtimes": {  	"oci": {  	    "path": "/usr/libexec/docker/docker-runc-current"  	}      }, -    "selinux-enabled": {{ docker_selinux_enabled|default(true) }}, -    "storage-driver": "", -    "storage-opts": [], -    "tls": true, -    "tlscacert": "", -    "tlscert": "", -    "tlskey": "", -    "tlsverify": true, -    "userns-remap": "", -    "add-registry": {{  docker_additional_registries|default([]) }}, -    "blocked-registries": {{ docker_blocked_registries|default([]) }}, -  
  "userland-proxy-path": "/usr/libexec/docker/docker-proxy-current" +    "selinux-enabled": {{ l_docker_selinux_enabled | lower }}, +    "add-registry": {{ l_docker_additional_registries }}, +    "block-registry": {{ l_docker_blocked_registries }}  } diff --git a/roles/docker/templates/systemcontainercustom.conf.j2 b/roles/docker/templates/systemcontainercustom.conf.j2 index a4fb01d2b..86eebfba6 100644 --- a/roles/docker/templates/systemcontainercustom.conf.j2 +++ b/roles/docker/templates/systemcontainercustom.conf.j2 @@ -1,16 +1,16 @@  # {{ ansible_managed }}  [Service] -{%- if "http_proxy" in openshift.common %} -ENVIRONMENT=HTTP_PROXY={{ docker_http_proxy }} -{%- endif -%} -{%- if "https_proxy" in openshift.common %} -ENVIRONMENT=HTTPS_PROXY={{ docker_http_proxy }} -{%- endif -%} -{%- if "no_proxy" in openshift.common %} -ENVIRONMENT=NO_PROXY={{ docker_no_proxy }} -{%- endif %} -{%- if os_firewall_use_firewalld|default(true) %} +{% if "http_proxy" in openshift.common %} +Environment=HTTP_PROXY={{ docker_http_proxy }} +{% endif -%} +{% if "https_proxy" in openshift.common %} +Environment=HTTPS_PROXY={{ docker_http_proxy }} +{% endif -%} +{% if "no_proxy" in openshift.common %} +Environment=NO_PROXY={{ docker_no_proxy }} +{% endif %} +{%- if os_firewall_use_firewalld|default(false) %}  [Unit]  Wants=iptables.service  After=iptables.service diff --git a/roles/docker/vars/main.yml b/roles/docker/vars/main.yml index 0082ded1e..4e940b7f5 100644 --- a/roles/docker/vars/main.yml +++ b/roles/docker/vars/main.yml @@ -1,4 +1,5 @@  ---  docker_systemd_dir: /etc/systemd/system/docker.service.d +container_engine_systemd_dir: /etc/systemd/system/container-engine.service.d  docker_conf_dir: /etc/docker/  udevw_udevd_dir: /etc/systemd/system/systemd-udevd.service.d diff --git a/roles/etcd/templates/etcd.docker.service b/roles/etcd/templates/etcd.docker.service index c8ceaa6ba..adeca7a91 100644 --- a/roles/etcd/templates/etcd.docker.service +++ b/roles/etcd/templates/etcd.docker.service @@ -1,8 +1,8 @@  [Unit]  Description=The Etcd Server container -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  EnvironmentFile={{ etcd_conf_file }} @@ -14,4 +14,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/etcd_client_certificates/tasks/main.yml b/roles/etcd_client_certificates/tasks/main.yml index 450b65209..bbd29ece1 100644 --- a/roles/etcd_client_certificates/tasks/main.yml +++ b/roles/etcd_client_certificates/tasks/main.yml @@ -84,7 +84,6 @@    register: g_etcd_client_mktemp    changed_when: False    when: etcd_client_certs_missing | bool -  delegate_to: localhost    become: no  - name: Create a tarball of the etcd certs @@ -133,8 +132,7 @@    when: etcd_client_certs_missing | bool  - name: Delete temporary directory -  file: name={{ g_etcd_client_mktemp.stdout }} state=absent +  local_action: file path="{{ g_etcd_client_mktemp.stdout }}" state=absent    changed_when: False    when: etcd_client_certs_missing | bool -  delegate_to: localhost    become: no diff --git a/roles/etcd_common/README.md b/roles/etcd_common/README.md index 131a01490..d1c3a6602 100644 --- a/roles/etcd_common/README.md +++ b/roles/etcd_common/README.md @@ -1,17 +1,21 @@  etcd_common  ======================== -TODO +Common resources for dependent etcd roles. E.g. 
default variables for: +* config directories +* certificates +* ports +* other settings -Requirements ------------- - -TODO +Or `delegated_serial_command` ansible module for executing a command on a remote node. E.g. -Role Variables --------------- +```yaml +- delegated_serial_command: +    command: /usr/bin/make_database.sh arg1 arg2 +    creates: /path/to/database +``` -TODO +Or etcdctl.yml playbook for installation of `etcdctl` aliases on a node (see example).  Dependencies  ------------ @@ -21,7 +25,22 @@ openshift-repos  Example Playbook  ---------------- -TODO +**Drop etcdctl aliases** + +```yaml +- include_role: +    name: etcd_common +    tasks_from: etcdctl +``` + +**Get access to common variables** + +```yaml +# meta.yml of etcd +... +dependencies: +- { role: etcd_common } +```  License  ------- diff --git a/roles/etcd_server_certificates/tasks/main.yml b/roles/etcd_server_certificates/tasks/main.yml index 956f5cc55..3ac7f3401 100644 --- a/roles/etcd_server_certificates/tasks/main.yml +++ b/roles/etcd_server_certificates/tasks/main.yml @@ -107,7 +107,6 @@    register: g_etcd_server_mktemp    changed_when: False    when: etcd_server_certs_missing | bool -  delegate_to: localhost  - name: Create a tarball of the etcd certs    command: > @@ -176,11 +175,10 @@    when: etcd_server_certs_missing | bool  - name: Delete temporary directory -  file: name={{ g_etcd_server_mktemp.stdout }} state=absent +  local_action: file path="{{ g_etcd_server_mktemp.stdout }}" state=absent    become: no    changed_when: False    when: etcd_server_certs_missing | bool -  delegate_to: localhost  - name: Validate permissions on certificate files    file: diff --git a/roles/lib_openshift/library/oc_adm_ca_server_cert.py b/roles/lib_openshift/library/oc_adm_ca_server_cert.py index 7039a0cec..a6273cfe4 100644 --- a/roles/lib_openshift/library/oc_adm_ca_server_cert.py +++ b/roles/lib_openshift/library/oc_adm_ca_server_cert.py @@ -166,7 +166,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments @@ -1534,6 +1534,10 @@ class CAServerCert(OpenShiftCLI):      def run_ansible(params, check_mode):          '''run the idempotent ansible code''' +        # Filter non-strings from hostnames list s.t. the omit filter +        # may be used to conditionally add a hostname. 
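On the caller side, this filtering enables tasks that assemble `hostnames` with optional entries, per the intent stated in the comment above. A hypothetical invocation (paths and variable names are illustrative, not taken from this repo):

```
# Hypothetical oc_adm_ca_server_cert caller: the last hostnames entry
# is only a usable string when the variable is defined; the module now
# drops non-string entries instead of failing. Names are illustrative.
- oc_adm_ca_server_cert:
    signer_cert: /etc/origin/master/ca.crt
    signer_key: /etc/origin/master/ca.key
    signer_serial: /etc/origin/master/ca.serial.txt
    cert: /etc/origin/master/named_certificates/registry.crt
    key: /etc/origin/master/named_certificates/registry.key
    hostnames:
    - docker-registry.default.svc.cluster.local
    - "{{ openshift_hosted_registry_routehost | default(omit) }}"
```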
+        params['hostnames'] = [host for host in params['hostnames'] if isinstance(host, string_types)] +          config = CAServerCertConfig(params['kubeconfig'],                                      params['debug'],                                      {'cert':          {'value': params['cert'], 'include': True}, @@ -1583,6 +1587,10 @@ class CAServerCert(OpenShiftCLI):  # -*- -*- -*- Begin included fragment: ansible/oc_adm_ca_server_cert.py -*- -*- -*- + +# pylint: disable=wrong-import-position +from ansible.module_utils.six import string_types +  def main():      '''      ansible oc adm module for ca create-server-cert diff --git a/roles/lib_openshift/library/oc_adm_manage_node.py b/roles/lib_openshift/library/oc_adm_manage_node.py index ae5806137..7493b5c3d 100644 --- a/roles/lib_openshift/library/oc_adm_manage_node.py +++ b/roles/lib_openshift/library/oc_adm_manage_node.py @@ -152,7 +152,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_adm_policy_group.py b/roles/lib_openshift/library/oc_adm_policy_group.py index 36eb294a8..5e72f5954 100644 --- a/roles/lib_openshift/library/oc_adm_policy_group.py +++ b/roles/lib_openshift/library/oc_adm_policy_group.py @@ -138,7 +138,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_adm_policy_user.py b/roles/lib_openshift/library/oc_adm_policy_user.py index bedd45922..371a3953b 100644 --- a/roles/lib_openshift/library/oc_adm_policy_user.py +++ b/roles/lib_openshift/library/oc_adm_policy_user.py @@ -138,7 +138,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_adm_registry.py b/roles/lib_openshift/library/oc_adm_registry.py index c6fa85f90..7240521c6 100644 --- a/roles/lib_openshift/library/oc_adm_registry.py +++ b/roles/lib_openshift/library/oc_adm_registry.py @@ -256,7 +256,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_adm_router.py b/roles/lib_openshift/library/oc_adm_router.py index 8a4f93372..a54c62cd4 100644 --- a/roles/lib_openshift/library/oc_adm_router.py +++ b/roles/lib_openshift/library/oc_adm_router.py @@ -281,7 +281,7 
@@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_clusterrole.py b/roles/lib_openshift/library/oc_clusterrole.py index d81c29784..78c72ef26 100644 --- a/roles/lib_openshift/library/oc_clusterrole.py +++ b/roles/lib_openshift/library/oc_clusterrole.py @@ -130,7 +130,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_configmap.py b/roles/lib_openshift/library/oc_configmap.py index bdcb3f278..c88f56fc6 100644 --- a/roles/lib_openshift/library/oc_configmap.py +++ b/roles/lib_openshift/library/oc_configmap.py @@ -136,7 +136,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_edit.py b/roles/lib_openshift/library/oc_edit.py index be1b3a01e..17e3f7dde 100644 --- a/roles/lib_openshift/library/oc_edit.py +++ b/roles/lib_openshift/library/oc_edit.py @@ -180,7 +180,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_env.py b/roles/lib_openshift/library/oc_env.py index 4ac6e4aeb..18ab97bc0 100644 --- a/roles/lib_openshift/library/oc_env.py +++ b/roles/lib_openshift/library/oc_env.py @@ -147,7 +147,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_group.py b/roles/lib_openshift/library/oc_group.py index b6f058340..88c6ef209 100644 --- a/roles/lib_openshift/library/oc_group.py +++ b/roles/lib_openshift/library/oc_group.py @@ -120,7 +120,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: 
disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_image.py b/roles/lib_openshift/library/oc_image.py index c094c9472..45860cbe5 100644 --- a/roles/lib_openshift/library/oc_image.py +++ b/roles/lib_openshift/library/oc_image.py @@ -139,7 +139,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_label.py b/roles/lib_openshift/library/oc_label.py index a76dd44c4..65923a698 100644 --- a/roles/lib_openshift/library/oc_label.py +++ b/roles/lib_openshift/library/oc_label.py @@ -156,7 +156,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_obj.py b/roles/lib_openshift/library/oc_obj.py index e12137b51..1d75a21b9 100644 --- a/roles/lib_openshift/library/oc_obj.py +++ b/roles/lib_openshift/library/oc_obj.py @@ -159,7 +159,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments @@ -1548,7 +1548,7 @@ class OCObject(OpenShiftCLI):          if state == 'absent':              # verify its not in our results              if (params['name'] is not None or params['selector'] is not None) and \ -               (len(api_rval['results']) == 0 or len(api_rval['results'][0].getattr('items', [])) == 0): +               (len(api_rval['results']) == 0 or len(api_rval['results'][0].get('items', [])) == 0):                  return {'changed': False, 'state': state}              if check_mode: diff --git a/roles/lib_openshift/library/oc_objectvalidator.py b/roles/lib_openshift/library/oc_objectvalidator.py index aeb4e5686..72add01f4 100644 --- a/roles/lib_openshift/library/oc_objectvalidator.py +++ b/roles/lib_openshift/library/oc_objectvalidator.py @@ -91,7 +91,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments @@ -1398,8 +1398,10 @@ class OCObjectValidator(OpenShiftCLI):              # check if it uses a reserved name              name = namespace['metadata']['name']              if not any((name == 'kube', +                        name == 'kubernetes',                          name == 'openshift',                          name.startswith('kube-'), +                        name.startswith('kubernetes-'),                          name.startswith('openshift-'),)):                  
return False diff --git a/roles/lib_openshift/library/oc_process.py b/roles/lib_openshift/library/oc_process.py index f7aa8c0d2..8e1ffe90f 100644 --- a/roles/lib_openshift/library/oc_process.py +++ b/roles/lib_openshift/library/oc_process.py @@ -148,7 +148,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_project.py b/roles/lib_openshift/library/oc_project.py index b044a47ce..a06852fd8 100644 --- a/roles/lib_openshift/library/oc_project.py +++ b/roles/lib_openshift/library/oc_project.py @@ -145,7 +145,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_pvc.py b/roles/lib_openshift/library/oc_pvc.py index 8604cc2f3..79673452d 100644 --- a/roles/lib_openshift/library/oc_pvc.py +++ b/roles/lib_openshift/library/oc_pvc.py @@ -140,7 +140,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_route.py b/roles/lib_openshift/library/oc_route.py index fef48daf0..ad705a6c5 100644 --- a/roles/lib_openshift/library/oc_route.py +++ b/roles/lib_openshift/library/oc_route.py @@ -190,7 +190,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_scale.py b/roles/lib_openshift/library/oc_scale.py index 384df0ee3..291ac8b19 100644 --- a/roles/lib_openshift/library/oc_scale.py +++ b/roles/lib_openshift/library/oc_scale.py @@ -134,7 +134,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_secret.py b/roles/lib_openshift/library/oc_secret.py index 443750c5d..df28df2bc 100644 --- a/roles/lib_openshift/library/oc_secret.py +++ b/roles/lib_openshift/library/oc_secret.py @@ -180,7 +180,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      
re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_service.py b/roles/lib_openshift/library/oc_service.py index 7537bdb5b..e98f83cc3 100644 --- a/roles/lib_openshift/library/oc_service.py +++ b/roles/lib_openshift/library/oc_service.py @@ -186,7 +186,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_serviceaccount.py b/roles/lib_openshift/library/oc_serviceaccount.py index 03a4dd3b9..f00e9e4f6 100644 --- a/roles/lib_openshift/library/oc_serviceaccount.py +++ b/roles/lib_openshift/library/oc_serviceaccount.py @@ -132,7 +132,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_serviceaccount_secret.py b/roles/lib_openshift/library/oc_serviceaccount_secret.py index db1010694..6691495a6 100644 --- a/roles/lib_openshift/library/oc_serviceaccount_secret.py +++ b/roles/lib_openshift/library/oc_serviceaccount_secret.py @@ -132,7 +132,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_user.py b/roles/lib_openshift/library/oc_user.py index c3885c1ac..72f2fbf03 100644 --- a/roles/lib_openshift/library/oc_user.py +++ b/roles/lib_openshift/library/oc_user.py @@ -192,7 +192,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_version.py b/roles/lib_openshift/library/oc_version.py index 5c4596c09..bc3340a94 100644 --- a/roles/lib_openshift/library/oc_version.py +++ b/roles/lib_openshift/library/oc_version.py @@ -104,7 +104,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/library/oc_volume.py 
b/roles/lib_openshift/library/oc_volume.py index 5a507348c..9dec0a6d4 100644 --- a/roles/lib_openshift/library/oc_volume.py +++ b/roles/lib_openshift/library/oc_volume.py @@ -80,6 +80,18 @@ options:      required: false      default: False      aliases: [] +  name: +    description: +    - Name of the object that is being queried. +    required: false +    default: None +    aliases: [] +  vol_name: +    description: +    - Name of the volume that is being queried. +    required: false +    default: None +    aliases: []    namespace:      description:      - The name of the namespace where the object lives @@ -169,7 +181,7 @@ class YeditException(Exception):  # pragma: no cover  class Yedit(object):  # pragma: no cover      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_openshift/src/ansible/oc_adm_ca_server_cert.py b/roles/lib_openshift/src/ansible/oc_adm_ca_server_cert.py index 10f1c9b4b..fc394cb43 100644 --- a/roles/lib_openshift/src/ansible/oc_adm_ca_server_cert.py +++ b/roles/lib_openshift/src/ansible/oc_adm_ca_server_cert.py @@ -1,6 +1,10 @@  # pylint: skip-file  # flake8: noqa + +# pylint: disable=wrong-import-position +from ansible.module_utils.six import string_types +  def main():      '''      ansible oc adm module for ca create-server-cert diff --git a/roles/lib_openshift/src/class/oc_adm_ca_server_cert.py b/roles/lib_openshift/src/class/oc_adm_ca_server_cert.py index cf99a6584..37a64e4ef 100644 --- a/roles/lib_openshift/src/class/oc_adm_ca_server_cert.py +++ b/roles/lib_openshift/src/class/oc_adm_ca_server_cert.py @@ -96,6 +96,10 @@ class CAServerCert(OpenShiftCLI):      def run_ansible(params, check_mode):          '''run the idempotent ansible code''' +        # Filter non-strings from hostnames list s.t. the omit filter +        # may be used to conditionally add a hostname. 
+        params['hostnames'] = [host for host in params['hostnames'] if isinstance(host, string_types)] +          config = CAServerCertConfig(params['kubeconfig'],                                      params['debug'],                                      {'cert':          {'value': params['cert'], 'include': True}, diff --git a/roles/lib_openshift/src/class/oc_obj.py b/roles/lib_openshift/src/class/oc_obj.py index 89ee2f5a0..6f0da3d5c 100644 --- a/roles/lib_openshift/src/class/oc_obj.py +++ b/roles/lib_openshift/src/class/oc_obj.py @@ -117,7 +117,7 @@ class OCObject(OpenShiftCLI):          if state == 'absent':              # verify its not in our results              if (params['name'] is not None or params['selector'] is not None) and \ -               (len(api_rval['results']) == 0 or len(api_rval['results'][0].getattr('items', [])) == 0): +               (len(api_rval['results']) == 0 or len(api_rval['results'][0].get('items', [])) == 0):                  return {'changed': False, 'state': state}              if check_mode: diff --git a/roles/lib_openshift/src/class/oc_objectvalidator.py b/roles/lib_openshift/src/class/oc_objectvalidator.py index 43f6cac67..c9fd3b532 100644 --- a/roles/lib_openshift/src/class/oc_objectvalidator.py +++ b/roles/lib_openshift/src/class/oc_objectvalidator.py @@ -35,8 +35,10 @@ class OCObjectValidator(OpenShiftCLI):              # check if it uses a reserved name              name = namespace['metadata']['name']              if not any((name == 'kube', +                        name == 'kubernetes',                          name == 'openshift',                          name.startswith('kube-'), +                        name.startswith('kubernetes-'),                          name.startswith('openshift-'),)):                  return False diff --git a/roles/lib_openshift/src/doc/volume b/roles/lib_openshift/src/doc/volume index 1d04afeef..43ff78c9f 100644 --- a/roles/lib_openshift/src/doc/volume +++ b/roles/lib_openshift/src/doc/volume @@ -29,6 +29,18 @@ options:      required: false      default: False      aliases: [] +  name: +    description: +    - Name of the object that is being queried. +    required: false +    default: None +    aliases: [] +  vol_name: +    description: +    - Name of the volume that is being queried. 
+    required: false +    default: None +    aliases: []    namespace:      description:      - The name of the namespace where the object lives diff --git a/roles/lib_openshift/src/test/integration/filter_plugins/filters.py b/roles/lib_openshift/src/test/integration/filter_plugins/filters.py index 6990a11a8..f350bd25d 100644 --- a/roles/lib_openshift/src/test/integration/filter_plugins/filters.py +++ b/roles/lib_openshift/src/test/integration/filter_plugins/filters.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Custom filters for use in testing  ''' diff --git a/roles/lib_utils/library/repoquery.py b/roles/lib_utils/library/repoquery.py index ee98470b0..95a305b58 100644 --- a/roles/lib_utils/library/repoquery.py +++ b/roles/lib_utils/library/repoquery.py @@ -34,6 +34,7 @@ import json  # noqa: F401  import os  # noqa: F401  import re  # noqa: F401  import shutil  # noqa: F401 +import tempfile  # noqa: F401  try:      import ruamel.yaml as yaml  # noqa: F401 @@ -421,15 +422,16 @@ class RepoqueryCLI(object):  class Repoquery(RepoqueryCLI):      ''' Class to wrap the repoquery      ''' -    # pylint: disable=too-many-arguments +    # pylint: disable=too-many-arguments,too-many-instance-attributes      def __init__(self, name, query_type, show_duplicates, -                 match_version, verbose): +                 match_version, ignore_excluders, verbose):          ''' Constructor for YumList '''          super(Repoquery, self).__init__(None)          self.name = name          self.query_type = query_type          self.show_duplicates = show_duplicates          self.match_version = match_version +        self.ignore_excluders = ignore_excluders          self.verbose = verbose          if self.match_version: @@ -437,6 +439,8 @@ class Repoquery(RepoqueryCLI):          self.query_format = "%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release}" +        self.tmp_file = None +      def build_cmd(self):          ''' build the repoquery cmd options ''' @@ -448,6 +452,9 @@ class Repoquery(RepoqueryCLI):          if self.show_duplicates:              repo_cmd.append('--show-duplicates') +        if self.ignore_excluders: +            repo_cmd.append('--config=' + self.tmp_file.name) +          repo_cmd.append(self.name)          return repo_cmd @@ -458,7 +465,7 @@ class Repoquery(RepoqueryCLI):          version_dict = defaultdict(dict) -        for version in query_output.split('\n'): +        for version in query_output.decode().split('\n'):              pkg_info = version.split("|")              pkg_version = {} @@ -519,6 +526,20 @@ class Repoquery(RepoqueryCLI):      def repoquery(self):          '''perform a repoquery ''' +        if self.ignore_excluders: +            # Duplicate yum.conf and reset exclude= line to an empty string +            # to clear a list of all excluded packages +            self.tmp_file = tempfile.NamedTemporaryFile() + +            with open("/etc/yum.conf", "r") as file_handler: +                yum_conf_lines = file_handler.readlines() + +            yum_conf_lines = ["exclude=" if l.startswith("exclude=") else l for l in yum_conf_lines] + +            with open(self.tmp_file.name, "w") as file_handler: +                file_handler.writelines(yum_conf_lines) +                file_handler.flush() +          repoquery_cmd = self.build_cmd()          rval = self._repoquery_cmd(repoquery_cmd, True, 'raw') @@ -541,6 +562,9 @@ class Repoquery(RepoqueryCLI):          else:              
rval['package_found'] = False +        if self.ignore_excluders: +            self.tmp_file.close() +          return rval      @staticmethod @@ -552,6 +576,7 @@ class Repoquery(RepoqueryCLI):              params['query_type'],              params['show_duplicates'],              params['match_version'], +            params['ignore_excluders'],              params['verbose'],          ) @@ -592,6 +617,7 @@ def main():              verbose=dict(default=False, required=False, type='bool'),              show_duplicates=dict(default=False, required=False, type='bool'),              match_version=dict(default=None, required=False, type='str'), +            ignore_excluders=dict(default=False, required=False, type='bool'),          ),          supports_check_mode=False,          required_if=[('show_duplicates', True, ['name'])], diff --git a/roles/lib_utils/library/yedit.py b/roles/lib_utils/library/yedit.py index 9adaeeb52..baf72fe47 100644 --- a/roles/lib_utils/library/yedit.py +++ b/roles/lib_utils/library/yedit.py @@ -34,6 +34,7 @@ import json  # noqa: F401  import os  # noqa: F401  import re  # noqa: F401  import shutil  # noqa: F401 +import tempfile  # noqa: F401  try:      import ruamel.yaml as yaml  # noqa: F401 @@ -212,7 +213,7 @@ class YeditException(Exception):  class Yedit(object):      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_utils/src/ansible/repoquery.py b/roles/lib_utils/src/ansible/repoquery.py index cb4efa6c1..40773b1c1 100644 --- a/roles/lib_utils/src/ansible/repoquery.py +++ b/roles/lib_utils/src/ansible/repoquery.py @@ -18,6 +18,7 @@ def main():              verbose=dict(default=False, required=False, type='bool'),              show_duplicates=dict(default=False, required=False, type='bool'),              match_version=dict(default=None, required=False, type='str'), +            ignore_excluders=dict(default=False, required=False, type='bool'),          ),          supports_check_mode=False,          required_if=[('show_duplicates', True, ['name'])], diff --git a/roles/lib_utils/src/class/repoquery.py b/roles/lib_utils/src/class/repoquery.py index 82adcada5..e997780ad 100644 --- a/roles/lib_utils/src/class/repoquery.py +++ b/roles/lib_utils/src/class/repoquery.py @@ -5,15 +5,16 @@  class Repoquery(RepoqueryCLI):      ''' Class to wrap the repoquery      ''' -    # pylint: disable=too-many-arguments +    # pylint: disable=too-many-arguments,too-many-instance-attributes      def __init__(self, name, query_type, show_duplicates, -                 match_version, verbose): +                 match_version, ignore_excluders, verbose):          ''' Constructor for YumList '''          super(Repoquery, self).__init__(None)          self.name = name          self.query_type = query_type          self.show_duplicates = show_duplicates          self.match_version = match_version +        self.ignore_excluders = ignore_excluders          self.verbose = verbose          if self.match_version: @@ -21,6 +22,8 @@ class Repoquery(RepoqueryCLI):          self.query_format = "%{version}|%{release}|%{arch}|%{repo}|%{version}-%{release}" +        self.tmp_file = None +      def build_cmd(self):          ''' build the repoquery cmd options ''' @@ -32,6 +35,9 @@ class Repoquery(RepoqueryCLI):          if self.show_duplicates:          
    repo_cmd.append('--show-duplicates') +        if self.ignore_excluders: +            repo_cmd.append('--config=' + self.tmp_file.name) +          repo_cmd.append(self.name)          return repo_cmd @@ -42,7 +48,7 @@ class Repoquery(RepoqueryCLI):          version_dict = defaultdict(dict) -        for version in query_output.split('\n'): +        for version in query_output.decode().split('\n'):              pkg_info = version.split("|")              pkg_version = {} @@ -103,6 +109,20 @@ class Repoquery(RepoqueryCLI):      def repoquery(self):          '''perform a repoquery ''' +        if self.ignore_excluders: +            # Duplicate yum.conf and reset exclude= line to an empty string +            # to clear a list of all excluded packages +            self.tmp_file = tempfile.NamedTemporaryFile() + +            with open("/etc/yum.conf", "r") as file_handler: +                yum_conf_lines = file_handler.readlines() + +            yum_conf_lines = ["exclude=" if l.startswith("exclude=") else l for l in yum_conf_lines] + +            with open(self.tmp_file.name, "w") as file_handler: +                file_handler.writelines(yum_conf_lines) +                file_handler.flush() +          repoquery_cmd = self.build_cmd()          rval = self._repoquery_cmd(repoquery_cmd, True, 'raw') @@ -125,6 +145,9 @@ class Repoquery(RepoqueryCLI):          else:              rval['package_found'] = False +        if self.ignore_excluders: +            self.tmp_file.close() +          return rval      @staticmethod @@ -136,6 +159,7 @@ class Repoquery(RepoqueryCLI):              params['query_type'],              params['show_duplicates'],              params['match_version'], +            params['ignore_excluders'],              params['verbose'],          ) diff --git a/roles/lib_utils/src/class/yedit.py b/roles/lib_utils/src/class/yedit.py index e0a27012f..957c35a06 100644 --- a/roles/lib_utils/src/class/yedit.py +++ b/roles/lib_utils/src/class/yedit.py @@ -11,7 +11,7 @@ class YeditException(Exception):  class Yedit(object):      ''' Class to modify yaml files '''      re_valid_key = r"(((\[-?\d+\])|([0-9a-zA-Z%s/_-]+)).?)+$" -    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z%s/_-]+)" +    re_key = r"(?:\[(-?\d+)\])|([0-9a-zA-Z{}/_-]+)"      com_sep = set(['.', '#', '|', ':'])      # pylint: disable=too-many-arguments diff --git a/roles/lib_utils/src/lib/import.py b/roles/lib_utils/src/lib/import.py index b0ab7c92c..567f8c9e0 100644 --- a/roles/lib_utils/src/lib/import.py +++ b/roles/lib_utils/src/lib/import.py @@ -9,6 +9,7 @@ import json  # noqa: F401  import os  # noqa: F401  import re  # noqa: F401  import shutil  # noqa: F401 +import tempfile  # noqa: F401  try:      import ruamel.yaml as yaml  # noqa: F401 diff --git a/roles/lib_utils/src/test/unit/test_repoquery.py b/roles/lib_utils/src/test/unit/test_repoquery.py index e39d9d83f..325f41dab 100755 --- a/roles/lib_utils/src/test/unit/test_repoquery.py +++ b/roles/lib_utils/src/test/unit/test_repoquery.py @@ -37,6 +37,7 @@ class RepoQueryTest(unittest.TestCase):              'verbose': False,              'show_duplicates': False,              'match_version': None, +            'ignore_excluders': False,          }          valid_stderr = '''Repo rhel-7-server-extras-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/3268107132875399464-key.pem @@ -44,7 +45,7 @@ class RepoQueryTest(unittest.TestCase):          # Return values of our mocked function call. These get returned once per call.          
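+        # The mocked stdout is a bytes literal because Repoquery now decodes
+        # raw repoquery output (query_output.decode()) before parsing it.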
mock_cmd.side_effect = [ -            (0, '4.2.46|21.el7_3|x86_64|rhel-7-server-rpms|4.2.46-21.el7_3', valid_stderr),  # first call to the mock +            (0, b'4.2.46|21.el7_3|x86_64|rhel-7-server-rpms|4.2.46-21.el7_3', valid_stderr),  # first call to the mock          ]          # Act diff --git a/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py b/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py index 577a14b9a..a2bc9ecdb 100644 --- a/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py +++ b/roles/openshift_certificate_expiry/filter_plugins/oo_cert_expiry.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  """  Custom filters for use in openshift-ansible  """ diff --git a/roles/openshift_cli/library/openshift_container_binary_sync.py b/roles/openshift_cli/library/openshift_container_binary_sync.py index 4ed3e1f01..57ac16602 100644 --- a/roles/openshift_cli/library/openshift_container_binary_sync.py +++ b/roles/openshift_cli/library/openshift_container_binary_sync.py @@ -1,8 +1,6 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  # pylint: disable=missing-docstring,invalid-name -#  import random  import tempfile diff --git a/roles/openshift_common/tasks/main.yml b/roles/openshift_common/tasks/main.yml index d9ccf87bc..51313a258 100644 --- a/roles/openshift_common/tasks/main.yml +++ b/roles/openshift_common/tasks/main.yml @@ -28,10 +28,18 @@    when: openshift_use_openshift_sdn | default(true) | bool and openshift_use_calico | default(false) | bool  - fail: -    msg: Calico cannot currently be used with Flannel in Openshift. Set either openshift_use_calico or openshift_use_flannel, but not both +    msg: The Calico playbook does not yet integrate with the Flannel playbook in Openshift. Set either openshift_use_calico or openshift_use_flannel, but not both.    when: openshift_use_calico | default(false) | bool and openshift_use_flannel | default(false) | bool  - fail: +    msg: Calico can not be used with Nuage in Openshift. Set either openshift_use_calico or openshift_use_nuage, but not both +  when: openshift_use_calico | default(false) | bool and openshift_use_nuage | default(false) | bool + +- fail: +    msg: Calico can not be used with Contiv in Openshift. Set either openshift_use_calico or openshift_use_contiv, but not both +  when: openshift_use_calico | default(false) | bool and openshift_use_contiv | default(false) | bool + +- fail:      msg: openshift_hostname must be 64 characters or less    when: openshift_hostname is defined and openshift_hostname | length > 64 diff --git a/roles/openshift_excluder/README.md b/roles/openshift_excluder/README.md index e048bd107..80cb88d45 100644 --- a/roles/openshift_excluder/README.md +++ b/roles/openshift_excluder/README.md @@ -1,47 +1,69 @@  OpenShift Excluder -================ +==================  Manages the excluder packages which add yum and dnf exclusions ensuring that -the packages we care about are not inadvertantly updated. See +the packages we care about are not inadvertently updated. 
See  https://github.com/openshift/origin/tree/master/contrib/excluder  Requirements  ------------ -openshift_facts +None -Facts ------ +Inventory Variables +------------------- -| Name                       | Default Value | Description                            | ------------------------------|---------------|----------------------------------------| -| enable_docker_excluder     | enable_excluders | Enable docker excluder. If not set, the docker excluder is ignored. | -| enable_openshift_excluder  | enable_excluders | Enable openshift excluder. If not set, the openshift excluder is ignored. | -| enable_excluders           | None             | Enable all excluders +| Name                                 | Default Value              | Description                            | +---------------------------------------|----------------------------|----------------------------------------| +| openshift_enable_excluders           | True                       | Enable all excluders                   | +| openshift_enable_docker_excluder     | openshift_enable_excluders | Enable docker excluder. If not set, the docker excluder is ignored. | +| openshift_enable_openshift_excluder  | openshift_enable_excluders | Enable openshift excluder. If not set, the openshift excluder is ignored. |  Role Variables  -------------- -None + +| Name                                      | Default | Choices         | Description                                                               | +|-------------------------------------------|---------|-----------------|---------------------------------------------------------------------------| +| r_openshift_excluder_action               | enable  | enable, disable | Action to perform when calling this role                                  | +| r_openshift_excluder_verify_upgrade       | false   | true, false     | When upgrading, this variable should be set to true when calling the role | +| r_openshift_excluder_package_state        | present | present, latest | Use 'latest' to upgrade openshift_excluder package                        | +| r_openshift_excluder_docker_package_state | present | present, latest | Use 'latest' to upgrade docker_excluder package                           | +| r_openshift_excluder_service_type         | None    |                 | (Required) Defined as openshift.common.service_type e.g. atomic-openshift | +| r_openshift_excluder_upgrade_target       | None    |                 | Required when r_openshift_excluder_verify_upgrade is true, defined as openshift_upgrade_target by Upgrade playbooks e.g. 
'3.6'|

 Dependencies
 ------------

-Tasks to include
-----------------
-
-- exclude: enable excluders (assuming excluders are installed)
-- unexclude: disable excluders (assuming excluders are installed)
-- install: install excluders (installation is followed by excluder enabling)
-- enable: enable excluders (optionally with installation step)
-- disabled: disable excluders (optionally with installation and status step, the status check that can override which excluder gets enabled/disabled)
-- status: determine status of excluders
+- lib_utils

 Example Playbook
 ----------------

+```yaml
+- name: Demonstrate OpenShift Excluder usage
+  hosts: oo_masters_to_config:oo_nodes_to_config
+  roles:
+  # Disable all excluders
+  - role: openshift_excluder
+    r_openshift_excluder_action: disable
+    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"
+  # Enable all excluders
+  - role: openshift_excluder
+    r_openshift_excluder_action: enable
+    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"
+  # Disable all excluders and verify appropriate excluder packages are available for upgrade
+  - role: openshift_excluder
+    r_openshift_excluder_action: disable
+    r_openshift_excluder_service_type: "{{ openshift.common.service_type }}"
+    r_openshift_excluder_verify_upgrade: true
+    r_openshift_excluder_upgrade_target: "{{ openshift_upgrade_target }}"
+    r_openshift_excluder_package_state: latest
+    r_openshift_excluder_docker_package_state: latest
+```

 TODO
 ----
+
 It should be possible to manage the two excluders independently, though that is not a hard requirement. Independent management is needed, however, to manage Docker on RHEL containerized hosts.

 License
diff --git a/roles/openshift_excluder/defaults/main.yml b/roles/openshift_excluder/defaults/main.yml
index 7c3ae2a86..d4f151142 100644
--- a/roles/openshift_excluder/defaults/main.yml
+++ b/roles/openshift_excluder/defaults/main.yml
@@ -1,6 +1,19 @@
 ---
 # keep the 'current' package or update to 'latest' if available?
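+# Note: role-scoped variables below carry the r_openshift_excluder_ prefix so
+# they cannot collide with inventory variables such as openshift_enable_excluders.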
-openshift_excluder_package_state: present -docker_excluder_package_state: present +r_openshift_excluder_package_state: present +r_openshift_excluder_docker_package_state: present -enable_excluders: true +# Legacy variables are included for backwards compatibility with v3.5 +# Inventory variables                   Legacy +# openshift_enable_excluders            enable_excluders +# openshift_enable_openshift_excluder   enable_openshift_excluder +# openshift_enable_docker_excluder      enable_docker_excluder +r_openshift_excluder_enable_excluders: "{{ openshift_enable_excluders | default(enable_excluders) | default(true) }}" +r_openshift_excluder_enable_openshift_excluder: "{{ openshift_enable_openshift_excluder | default(enable_openshift_excluder) | default(r_openshift_excluder_enable_excluders) }}" +r_openshift_excluder_enable_docker_excluder: "{{ openshift_enable_docker_excluder | default(enable_docker_excluder) | default(r_openshift_excluder_enable_excluders) }}" + +# Default action when calling this role +r_openshift_excluder_action: enable + +# When upgrading, this variable should be set to true when calling the role +r_openshift_excluder_verify_upgrade: false diff --git a/roles/openshift_excluder/meta/main.yml b/roles/openshift_excluder/meta/main.yml index 4d1c1efca..871081c19 100644 --- a/roles/openshift_excluder/meta/main.yml +++ b/roles/openshift_excluder/meta/main.yml @@ -1,7 +1,7 @@  ---  galaxy_info:    author: Scott Dodson -  description: OpenShift Examples +  description: OpenShift Excluder    company: Red Hat, Inc.    license: Apache License, Version 2.0    min_ansible_version: 2.2 @@ -12,5 +12,4 @@ galaxy_info:    categories:    - cloud  dependencies: -- { role: openshift_facts } -- { role: openshift_repos } +- role: lib_utils diff --git a/roles/openshift_excluder/tasks/disable.yml b/roles/openshift_excluder/tasks/disable.yml index 97044fff6..8d5a08874 100644 --- a/roles/openshift_excluder/tasks/disable.yml +++ b/roles/openshift_excluder/tasks/disable.yml @@ -1,47 +1,38 @@  --- -# input variables -# - excluder_package_state -# - docker_excluder_package_state -- include: init.yml +- when: r_openshift_excluder_verify_upgrade +  block: +  - name: Include verify_upgrade.yml when upgrading +    include: verify_upgrade.yml  # unexclude the current openshift/origin-excluder if it is installed so it can be updated -- include: unexclude.yml +- name: Disable OpenShift excluder so it can be updated +  include: unexclude.yml    vars:      unexclude_docker_excluder: false -    unexclude_openshift_excluder: "{{ openshift_excluder_on | bool }}" -  when: -  - not openshift.common.is_atomic | bool +    unexclude_openshift_excluder: "{{ r_openshift_excluder_enable_openshift_excluder }}"  # Install any excluder that is enabled -- include: install.yml -  vars: -    # Both docker_excluder_on and openshift_excluder_on are set in openshift_excluder->init task -    install_docker_excluder: "{{ docker_excluder_on | bool }}" -    install_openshift_excluder: "{{ openshift_excluder_on | bool }}" -  when: docker_excluder_on or openshift_excluder_on - -  # if the docker excluder is not enabled, we don't care about its status -  # it the docker excluder is enabled, we install it and in case its status is non-zero -  # it is enabled no matter what +- name: Include install.yml +  include: install.yml  # And finally adjust an excluder in order to update host components correctly. 
First  # exclude then unexclude -- block: -  - include: exclude.yml -    vars: -      # Enable the docker excluder only if it is overrided -      # BZ #1430612: docker excluders should be enabled even during installation and upgrade -      exclude_docker_excluder: "{{ docker_excluder_on | bool }}" -      # excluder is to be disabled by default -      exclude_openshift_excluder: false -  # All excluders that are to be disabled are disabled -  - include: unexclude.yml -    vars: -      # If the docker override  is not set, default to the generic behaviour -      # BZ #1430612: docker excluders should be enabled even during installation and upgrade -      unexclude_docker_excluder: false -      # disable openshift excluder is never overrided to be enabled -      # disable it if the docker excluder is enabled -      unexclude_openshift_excluder: "{{ openshift_excluder_on | bool }}" -  when: -  - not openshift.common.is_atomic | bool +- name: Include exclude.yml +  include: exclude.yml +  vars: +    # Enable the docker excluder only if it is overridden +    # BZ #1430612: docker excluders should be enabled even during installation and upgrade +    exclude_docker_excluder: "{{ r_openshift_excluder_enable_docker_excluder }}" +    # excluder is to be disabled by default +    exclude_openshift_excluder: false + +# All excluders that are to be disabled are disabled +- name: Include unexclude.yml +  include: unexclude.yml +  vars: +    # If the docker override  is not set, default to the generic behaviour +    # BZ #1430612: docker excluders should be enabled even during installation and upgrade +    unexclude_docker_excluder: false +    # disable openshift excluder is never overridden to be enabled +    # disable it if the docker excluder is enabled +    unexclude_openshift_excluder: "{{ r_openshift_excluder_enable_openshift_excluder }}" diff --git a/roles/openshift_excluder/tasks/enable.yml b/roles/openshift_excluder/tasks/enable.yml index e719325bc..fce44cfb5 100644 --- a/roles/openshift_excluder/tasks/enable.yml +++ b/roles/openshift_excluder/tasks/enable.yml @@ -1,18 +1,6 @@  --- -# input variables: -- block: -  - include: init.yml +- name: Install excluders +  include: install.yml -  - include: install.yml -    vars: -      install_docker_excluder: "{{ docker_excluder_on | bool }}" -      install_openshift_excluder: "{{ openshift_excluder_on | bool }}" -    when: docker_excluder_on or openshift_excluder_on | bool - -  - include: exclude.yml -    vars: -      exclude_docker_excluder: "{{ docker_excluder_on | bool }}" -      exclude_openshift_excluder: "{{ openshift_excluder_on | bool }}" - -  when: -  - not openshift.common.is_atomic | bool +- name: Enable excluders +  include: exclude.yml diff --git a/roles/openshift_excluder/tasks/exclude.yml b/roles/openshift_excluder/tasks/exclude.yml index ca18d343f..934f1b2d2 100644 --- a/roles/openshift_excluder/tasks/exclude.yml +++ b/roles/openshift_excluder/tasks/exclude.yml @@ -1,30 +1,22 @@  --- -# input variables: -# - exclude_docker_excluder -# - exclude_openshift_excluder -- block: +- name: Check for docker-excluder +  stat: +    path: /sbin/{{ r_openshift_excluder_service_type }}-docker-excluder +  register: docker_excluder_stat -  - name: Check for docker-excluder -    stat: -      path: /sbin/{{ openshift.common.service_type }}-docker-excluder -    register: docker_excluder_stat -  - name: Enable docker excluder -    command: "{{ openshift.common.service_type }}-docker-excluder exclude" -    when: -    - exclude_docker_excluder | default(false) 
| bool -    - docker_excluder_stat.stat.exists +- name: Enable docker excluder +  command: "{{ r_openshift_excluder_service_type }}-docker-excluder exclude" +  when: +  - r_openshift_excluder_enable_docker_excluder | bool +  - docker_excluder_stat.stat.exists -  - name: Check for openshift excluder -    stat: -      path: /sbin/{{ openshift.common.service_type }}-excluder -    register: openshift_excluder_stat -  - name: Enable openshift excluder -    command: "{{ openshift.common.service_type }}-excluder exclude" -    # if the openshift override is set, it means the openshift excluder is disabled no matter what -    # if the openshift override is not set, the excluder is set based on enable_openshift_excluder -    when: -    - exclude_openshift_excluder | default(false) | bool -    - openshift_excluder_stat.stat.exists +- name: Check for openshift excluder +  stat: +    path: /sbin/{{ r_openshift_excluder_service_type }}-excluder +  register: openshift_excluder_stat +- name: Enable openshift excluder +  command: "{{ r_openshift_excluder_service_type }}-excluder exclude"    when: -  - not openshift.common.is_atomic | bool +  - r_openshift_excluder_enable_openshift_excluder | bool +  - openshift_excluder_stat.stat.exists diff --git a/roles/openshift_excluder/tasks/init.yml b/roles/openshift_excluder/tasks/init.yml deleted file mode 100644 index 1ea18f363..000000000 --- a/roles/openshift_excluder/tasks/init.yml +++ /dev/null @@ -1,12 +0,0 @@ ---- -- name: Evalute if docker excluder is to be enabled -  set_fact: -    docker_excluder_on: "{{ enable_docker_excluder | default(enable_excluders) | bool }}" - -- debug: var=docker_excluder_on - -- name: Evalute if openshift excluder is to be enabled -  set_fact: -    openshift_excluder_on: "{{ enable_openshift_excluder | default(enable_excluders) | bool }}" - -- debug: var=openshift_excluder_on diff --git a/roles/openshift_excluder/tasks/install.yml b/roles/openshift_excluder/tasks/install.yml index 3490a613e..d09358bee 100644 --- a/roles/openshift_excluder/tasks/install.yml +++ b/roles/openshift_excluder/tasks/install.yml @@ -1,21 +1,14 @@  --- -# input Variables -# - install_docker_excluder -# - install_openshift_excluder -- block: - -  - name: Install docker excluder -    package: -      name: "{{ openshift.common.service_type }}-docker-excluder{{ openshift_pkg_version | default('') | oo_image_tag_to_rpm_version(include_dash=True) +  '*' }}" -      state: "{{ docker_excluder_package_state }}" -    when: -    - install_docker_excluder | default(true) | bool +- name: Install docker excluder +  package: +    name: "{{ r_openshift_excluder_service_type }}-docker-excluder{{ openshift_pkg_version | default('') | oo_image_tag_to_rpm_version(include_dash=True) +  '*' }}" +    state: "{{ r_openshift_excluder_docker_package_state }}" +  when: +  - r_openshift_excluder_enable_docker_excluder | bool -  - name: Install openshift excluder -    package: -      name: "{{ openshift.common.service_type }}-excluder{{ openshift_pkg_version | default('') | oo_image_tag_to_rpm_version(include_dash=True) + '*' }}" -      state: "{{ openshift_excluder_package_state }}" -    when: -    - install_openshift_excluder | default(true) | bool +- name: Install openshift excluder +  package: +    name: "{{ r_openshift_excluder_service_type }}-excluder{{ openshift_pkg_version | default('') | oo_image_tag_to_rpm_version(include_dash=True) + '*' }}" +    state: "{{ r_openshift_excluder_package_state }}"    when: -  - not openshift.common.is_atomic | bool +  - 
r_openshift_excluder_enable_openshift_excluder | bool diff --git a/roles/openshift_excluder/tasks/main.yml b/roles/openshift_excluder/tasks/main.yml new file mode 100644 index 000000000..db20b4012 --- /dev/null +++ b/roles/openshift_excluder/tasks/main.yml @@ -0,0 +1,38 @@ +--- +- name: Detecting Atomic Host Operating System +  stat: +    path: /run/ostree-booted +  register: ostree_booted + +- block: + +  - name: Debug r_openshift_excluder_enable_docker_excluder +    debug: +      var: r_openshift_excluder_enable_docker_excluder + +  - name: Debug r_openshift_excluder_enable_openshift_excluder +    debug: +      var: r_openshift_excluder_enable_openshift_excluder + +  - name: Fail if invalid openshift_excluder_action provided +    fail: +      msg: "openshift_excluder role can only be called with 'enable' or 'disable'" +    when: r_openshift_excluder_action not in ['enable', 'disable'] + +  - name: Fail if r_openshift_excluder_service_type is not defined +    fail: +      msg: "r_openshift_excluder_service_type must be specified for this role" +    when: r_openshift_excluder_service_type is not defined + +  - name: Fail if r_openshift_excluder_upgrade_target is not defined +    fail: +      msg: "r_openshift_excluder_upgrade_target must be provided when using this role for upgrades" +    when: +    - r_openshift_excluder_verify_upgrade | bool +    - r_openshift_excluder_upgrade_target is not defined + +  - name: Include main action task file +    include: "{{ r_openshift_excluder_action }}.yml" + +  when: +  - not ostree_booted.stat.exists | bool diff --git a/roles/openshift_excluder/tasks/unexclude.yml b/roles/openshift_excluder/tasks/unexclude.yml index 4df7f14b4..a5ce8d5c7 100644 --- a/roles/openshift_excluder/tasks/unexclude.yml +++ b/roles/openshift_excluder/tasks/unexclude.yml @@ -2,27 +2,25 @@  # input variables:  # - unexclude_docker_excluder  # - unexclude_openshift_excluder -- block: -  - name: Check for docker-excluder -    stat: -      path: /sbin/{{ openshift.common.service_type }}-docker-excluder -    register: docker_excluder_stat -  - name: disable docker excluder -    command: "{{ openshift.common.service_type }}-docker-excluder unexclude" -    when: -    - unexclude_docker_excluder | default(false) | bool -    - docker_excluder_stat.stat.exists +- name: Check for docker-excluder +  stat: +    path: /sbin/{{ r_openshift_excluder_service_type }}-docker-excluder +  register: docker_excluder_stat -  - name: Check for openshift excluder -    stat: -      path: /sbin/{{ openshift.common.service_type }}-excluder -    register: openshift_excluder_stat -  - name: disable openshift excluder -    command: "{{ openshift.common.service_type }}-excluder unexclude" -    when: -    - unexclude_openshift_excluder | default(false) | bool -    - openshift_excluder_stat.stat.exists +- name: disable docker excluder +  command: "{{ r_openshift_excluder_service_type }}-docker-excluder unexclude" +  when: +  - unexclude_docker_excluder | default(false) | bool +  - docker_excluder_stat.stat.exists + +- name: Check for openshift excluder +  stat: +    path: /sbin/{{ r_openshift_excluder_service_type }}-excluder +  register: openshift_excluder_stat +- name: disable openshift excluder +  command: "{{ r_openshift_excluder_service_type }}-excluder unexclude"    when: -  - not openshift.common.is_atomic | bool +  - unexclude_openshift_excluder | default(false) | bool +  - openshift_excluder_stat.stat.exists diff --git a/roles/openshift_excluder/tasks/verify_excluder.yml 
b/roles/openshift_excluder/tasks/verify_excluder.yml index 24a05d56e..c35639c1b 100644 --- a/roles/openshift_excluder/tasks/verify_excluder.yml +++ b/roles/openshift_excluder/tasks/verify_excluder.yml @@ -1,29 +1,32 @@  ---  # input variables: -# - repoquery_cmd  # - excluder -# - openshift_upgrade_target -- block: -  - name: Get available excluder version -    command: > -      {{ repoquery_cmd }} --qf '%{version}' "{{ excluder }}" -    register: excluder_version -    failed_when: false -    changed_when: false +- name: Get available excluder version +  repoquery: +    name: "{{ excluder }}" +    ignore_excluders: true +  register: repoquery_out -  - name: "{{ excluder }} version detected" -    debug: -      msg: "{{ excluder }}: {{ excluder_version.stdout }}" +- name: Fail when excluder package is not found +  fail: +    msg: "Package {{ excluder }} not found" +  when: not repoquery_out.results.package_found -  - name: Printing upgrade target version -    debug: -      msg: "{{ openshift_upgrade_target }}" +- name: Set fact excluder_version +  set_fact: +    excluder_version: "{{ repoquery_out.results.versions.available_versions.0 }}" -  - name: Check the available {{ excluder }} version is at most of the upgrade target version -    fail: -      msg: "Available {{ excluder }} version {{ excluder_version.stdout }} is higher than the upgrade target version" -    when: -    - "{{ excluder_version.stdout != '' }}" -    - "{{ excluder_version.stdout.split('.')[0:2] | join('.') | version_compare(openshift_upgrade_target.split('.')[0:2] | join('.'), '>', strict=True) }}" +- name: "{{ excluder }} version detected" +  debug: +    msg: "{{ excluder }}: {{ excluder_version }}" + +- name: Printing upgrade target version +  debug: +    msg: "{{ r_openshift_excluder_upgrade_target }}" + +- name: Check the available {{ excluder }} version is at most of the upgrade target version +  fail: +    msg: "Available {{ excluder }} version {{ excluder_version }} is higher than the upgrade target version"    when: -  - not openshift.common.is_atomic | bool +  - excluder_version != '' +  - excluder_version.split('.')[0:2] | join('.') | version_compare(r_openshift_excluder_upgrade_target.split('.')[0:2] | join('.'), '>', strict=True) diff --git a/roles/openshift_excluder/tasks/verify_upgrade.yml b/roles/openshift_excluder/tasks/verify_upgrade.yml index 6ea2130ac..42026664a 100644 --- a/roles/openshift_excluder/tasks/verify_upgrade.yml +++ b/roles/openshift_excluder/tasks/verify_upgrade.yml @@ -1,15 +1,12 @@  --- -# input variables -# - repoquery_cmd -# - openshift_upgrade_target -- include: init.yml - -- include: verify_excluder.yml +- name: Verify Docker Excluder version +  include: verify_excluder.yml    vars: -    excluder: "{{ openshift.common.service_type }}-docker-excluder" -  when: docker_excluder_on +    excluder: "{{ r_openshift_excluder_service_type }}-docker-excluder" +  when: r_openshift_excluder_enable_docker_excluder | bool -- include: verify_excluder.yml +- name: Verify OpenShift Excluder version +  include: verify_excluder.yml    vars: -    excluder: "{{ openshift.common.service_type }}-excluder" -  when: openshift_excluder_on +    excluder: "{{ r_openshift_excluder_service_type }}-excluder" +  when: r_openshift_excluder_enable_openshift_excluder | bool diff --git a/roles/openshift_facts/library/openshift_facts.py b/roles/openshift_facts/library/openshift_facts.py index 5ea902e2b..514c06500 100755 --- a/roles/openshift_facts/library/openshift_facts.py +++ 
b/roles/openshift_facts/library/openshift_facts.py @@ -1,7 +1,6 @@  #!/usr/bin/python  # pylint: disable=too-many-lines  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  # Reason: Disable pylint too-many-lines because we don't want to split up this file.  # Status: Permanently disabled to keep this module as self-contained as possible. @@ -1303,7 +1302,7 @@ def get_version_output(binary, version_cmd):  def get_docker_version_info():      """ Parses and returns the docker version info """      result = None -    if is_service_running('docker'): +    if is_service_running('docker') or is_service_running('container-engine'):          version_info = yaml.safe_load(get_version_output('/usr/bin/docker', 'version'))          if 'Server' in version_info:              result = { @@ -2168,7 +2167,9 @@ class OpenShiftFacts(object):                          glusterfs=dict(                              endpoints='glusterfs-registry-endpoints',                              path='glusterfs-registry-volume', -                            readOnly=False), +                            readOnly=False, +                            swap=False, +                            swapcopy=True),                          host=None,                          access=dict(                              modes=['ReadWriteMany'] diff --git a/roles/openshift_health_checker/callback_plugins/zz_failure_summary.py b/roles/openshift_health_checker/callback_plugins/zz_failure_summary.py index 208e81048..7bce7f107 100644 --- a/roles/openshift_health_checker/callback_plugins/zz_failure_summary.py +++ b/roles/openshift_health_checker/callback_plugins/zz_failure_summary.py @@ -1,4 +1,3 @@ -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Ansible callback plugin.  ''' diff --git a/roles/openshift_health_checker/library/aos_version.py b/roles/openshift_health_checker/library/aos_version.py index a46589443..4460ec324 100755 --- a/roles/openshift_health_checker/library/aos_version.py +++ b/roles/openshift_health_checker/library/aos_version.py @@ -1,5 +1,4 @@  #!/usr/bin/python -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Ansible module for yum-based systems determining if multiple releases  of an OpenShift package are available, and if the release requested diff --git a/roles/openshift_health_checker/library/check_yum_update.py b/roles/openshift_health_checker/library/check_yum_update.py index 630ebc848..433795b67 100755 --- a/roles/openshift_health_checker/library/check_yum_update.py +++ b/roles/openshift_health_checker/library/check_yum_update.py @@ -1,5 +1,4 @@  #!/usr/bin/python -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Ansible module to test whether a yum update or install will succeed,  without actually performing it or running yum. 
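Side note on the `openshift_facts` change above: Docker version facts are now gathered when the engine runs as either the `docker` or the `container-engine` systemd unit. A minimal sketch of that kind of dual-unit probe, assuming `systemctl is-active` semantics; `service_is_active` and `docker_is_running` below are illustrative names, not the module's actual `is_service_running` helper:

```python
import subprocess


def service_is_active(unit):
    """Return True when the given systemd unit reports 'active'."""
    # `systemctl is-active --quiet` exits 0 only for an active unit.
    return subprocess.call(["systemctl", "is-active", "--quiet", unit]) == 0


def docker_is_running():
    # Mirror the fact-gathering change: accept either unit name.
    return service_is_active("docker") or service_is_active("container-engine")
```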
diff --git a/roles/openshift_health_checker/library/etcdkeysize.py b/roles/openshift_health_checker/library/etcdkeysize.py new file mode 100644 index 000000000..620e82d87 --- /dev/null +++ b/roles/openshift_health_checker/library/etcdkeysize.py @@ -0,0 +1,122 @@ +#!/usr/bin/python +"""Ansible module that recursively determines if the size of a key in an etcd cluster exceeds a given limit.""" + +from ansible.module_utils.basic import AnsibleModule + + +try: +    import etcd + +    IMPORT_EXCEPTION_MSG = None +except ImportError as err: +    IMPORT_EXCEPTION_MSG = str(err) + +    from collections import namedtuple +    EtcdMock = namedtuple("etcd", ["EtcdKeyNotFound"]) +    etcd = EtcdMock(KeyError) + + +# pylint: disable=too-many-arguments +def check_etcd_key_size(client, key, size_limit, total_size=0, depth=0, depth_limit=1000, visited=None): +    """Check size of an etcd path starting at given key. Returns tuple (string, bool)""" +    if visited is None: +        visited = set() + +    if key in visited: +        return 0, False + +    visited.add(key) + +    try: +        result = client.read(key, recursive=False) +    except etcd.EtcdKeyNotFound: +        return 0, False + +    size = 0 +    limit_exceeded = False + +    for node in result.leaves: +        if depth >= depth_limit: +            raise Exception("Maximum recursive stack depth ({}) exceeded.".format(depth_limit)) + +        if size_limit and total_size + size > size_limit: +            return size, True + +        if not node.dir: +            size += len(node.value) +            continue + +        key_size, limit_exceeded = check_etcd_key_size(client, node.key, +                                                       size_limit, +                                                       total_size + size, +                                                       depth + 1, +                                                       depth_limit, visited) +        size += key_size + +    max_limit_exceeded = limit_exceeded or (total_size + size > size_limit) +    return size, max_limit_exceeded + + +def main():  # pylint: disable=missing-docstring,too-many-branches +    module = AnsibleModule( +        argument_spec=dict( +            size_limit_bytes=dict(type="int", default=0), +            paths=dict(type="list", default=["/openshift.io/images"]), +            host=dict(type="str", default="127.0.0.1"), +            port=dict(type="int", default=4001), +            protocol=dict(type="str", default="http"), +            version_prefix=dict(type="str", default=""), +            allow_redirect=dict(type="bool", default=False), +            cert=dict(type="dict", default=""), +            ca_cert=dict(type="str", default=None), +        ), +        supports_check_mode=True +    ) + +    module.params["cert"] = ( +        module.params["cert"]["cert"], +        module.params["cert"]["key"], +    ) + +    size_limit = module.params.pop("size_limit_bytes") +    paths = module.params.pop("paths") + +    limit_exceeded = False + +    try: +        # pylint: disable=no-member +        client = etcd.Client(**module.params) +    except AttributeError as attrerr: +        msg = str(attrerr) +        if IMPORT_EXCEPTION_MSG: +            msg = IMPORT_EXCEPTION_MSG +            if "No module named etcd" in IMPORT_EXCEPTION_MSG: +                # pylint: disable=redefined-variable-type +                msg = ('Unable to import the python "etcd" dependency. 
' +                       'Make sure python-etcd is installed on the host.') + +        module.exit_json( +            failed=True, +            changed=False, +            size_limit_exceeded=limit_exceeded, +            msg=msg, +        ) + +        return + +    size = 0 +    for path in paths: +        path_size, limit_exceeded = check_etcd_key_size(client, path, size_limit - size) +        size += path_size + +        if limit_exceeded: +            break + +    module.exit_json( +        changed=False, +        size_limit_exceeded=limit_exceeded, +    ) + + +if __name__ == '__main__': +    main() diff --git a/roles/openshift_health_checker/openshift_checks/etcd_imagedata_size.py b/roles/openshift_health_checker/openshift_checks/etcd_imagedata_size.py new file mode 100644 index 000000000..c04a69765 --- /dev/null +++ b/roles/openshift_health_checker/openshift_checks/etcd_imagedata_size.py @@ -0,0 +1,84 @@ +""" +Ansible module for determining if the size of OpenShift image data exceeds a specified limit in an etcd cluster. +""" + +from openshift_checks import OpenShiftCheck, OpenShiftCheckException, get_var + + +class EtcdImageDataSize(OpenShiftCheck): +    """Check that total size of OpenShift image data does not exceed the recommended limit in an etcd cluster""" + +    name = "etcd_imagedata_size" +    tags = ["etcd"] + +    def run(self, tmp, task_vars): +        etcd_mountpath = self._get_etcd_mountpath(get_var(task_vars, "ansible_mounts")) +        etcd_avail_diskspace = etcd_mountpath["size_available"] +        etcd_total_diskspace = etcd_mountpath["size_total"] + +        etcd_imagedata_size_limit = get_var(task_vars, +                                            "etcd_max_image_data_size_bytes", +                                            default=int(0.5 * float(etcd_total_diskspace - etcd_avail_diskspace))) + +        etcd_is_ssl = get_var(task_vars, "openshift", "master", "etcd_use_ssl", default=False) +        etcd_port = get_var(task_vars, "openshift", "master", "etcd_port", default=2379) +        etcd_hosts = get_var(task_vars, "openshift", "master", "etcd_hosts") + +        config_base = get_var(task_vars, "openshift", "common", "config_base") + +        cert = task_vars.get("etcd_client_cert", config_base + "/master/master.etcd-client.crt") +        key = task_vars.get("etcd_client_key", config_base + "/master/master.etcd-client.key") +        ca_cert = task_vars.get("etcd_client_ca_cert", config_base + "/master/master.etcd-ca.crt") + +        for etcd_host in list(etcd_hosts): +            args = { +                "size_limit_bytes": etcd_imagedata_size_limit, +                "paths": ["/openshift.io/images", "/openshift.io/imagestreams"], +                "host": etcd_host, +                "port": etcd_port, +                "protocol": "https" if etcd_is_ssl else "http", +                "version_prefix": "/v2", +                "allow_redirect": True, +                "ca_cert": ca_cert, +                "cert": { +                    "cert": cert, +                    "key": key, +                }, +            } + +            etcdkeysize = self.module_executor("etcdkeysize", args, task_vars) + +            if etcdkeysize.get("rc", 0) != 0 or etcdkeysize.get("failed"): +                msg = 'Failed to retrieve stats for etcd host "{host}": {reason}' +                reason = etcdkeysize.get("msg") +                if etcdkeysize.get("module_stderr"): +                    reason = etcdkeysize["module_stderr"] + +                msg = msg.format(host=etcd_host, 
reason=reason)
+                return {"failed": True, "changed": False, "msg": msg}
+
+            if etcdkeysize["size_limit_exceeded"]:
+                limit = self._to_gigabytes(etcd_imagedata_size_limit)
+                msg = ("The size of OpenShift image data stored in etcd host "
+                       "\"{host}\" exceeds the maximum recommended limit of {limit:.2f} GB. "
+                       "Use the `oadm prune images` command to clean up unused Docker images.")
+                return {"failed": True, "msg": msg.format(host=etcd_host, limit=limit)}
+
+        return {"changed": False}
+
+    @staticmethod
+    def _get_etcd_mountpath(ansible_mounts):
+        valid_etcd_mount_paths = ["/var/lib/etcd", "/var/lib", "/var", "/"]
+
+        mount_for_path = {mnt.get("mount"): mnt for mnt in ansible_mounts}
+        for path in valid_etcd_mount_paths:
+            if path in mount_for_path:
+                return mount_for_path[path]
+
+        paths = ', '.join(sorted(mount_for_path)) or 'none'
+        msg = "Unable to determine a valid etcd mountpath. Paths mounted: {}.".format(paths)
+        raise OpenShiftCheckException(msg)
+
+    @staticmethod
+    def _to_gigabytes(byte_size):
+        return float(byte_size) / 10.0**9
diff --git a/roles/openshift_health_checker/openshift_checks/etcd_volume.py b/roles/openshift_health_checker/openshift_checks/etcd_volume.py
new file mode 100644
index 000000000..7452c9cc1
--- /dev/null
+++ b/roles/openshift_health_checker/openshift_checks/etcd_volume.py
@@ -0,0 +1,58 @@
+"""A health check for OpenShift clusters."""
+
+from openshift_checks import OpenShiftCheck, OpenShiftCheckException, get_var
+
+
+class EtcdVolume(OpenShiftCheck):
+    """Ensures etcd storage usage does not exceed a given threshold."""
+
+    name = "etcd_volume"
+    tags = ["etcd", "health"]
+
+    # Default device usage threshold. Value should be in the range [0, 100].
+    default_threshold_percent = 90
+    # Where to find etcd data, higher priority first.
+    supported_mount_paths = ["/var/lib/etcd", "/var/lib", "/var", "/"]
+
+    @classmethod
+    def is_active(cls, task_vars):
+        etcd_hosts = get_var(task_vars, "groups", "etcd", default=[]) or get_var(task_vars, "groups", "masters",
+                                                                                 default=[]) or []
+        is_etcd_host = get_var(task_vars, "ansible_ssh_host") in etcd_hosts
+        return super(EtcdVolume, cls).is_active(task_vars) and is_etcd_host
+
+    def run(self, tmp, task_vars):
+        mount_info = self._etcd_mount_info(task_vars)
+        available = mount_info["size_available"]
+        total = mount_info["size_total"]
+        used = total - available
+
+        threshold = get_var(
+            task_vars,
+            "etcd_device_usage_threshold_percent",
+            default=self.default_threshold_percent
+        )
+
+        used_percent = 100.0 * used / total
+
+        if used_percent > threshold:
+            device = mount_info.get("device", "unknown")
+            mount = mount_info.get("mount", "unknown")
+            msg = "etcd storage usage ({:.1f}%) is above threshold ({:.1f}%). 
Device: {}, mount: {}.".format( +                used_percent, threshold, device, mount +            ) +            return {"failed": True, "msg": msg} + +        return {"changed": False} + +    def _etcd_mount_info(self, task_vars): +        ansible_mounts = get_var(task_vars, "ansible_mounts") +        mounts = {mnt.get("mount"): mnt for mnt in ansible_mounts} + +        for path in self.supported_mount_paths: +            if path in mounts: +                return mounts[path] + +        paths = ', '.join(sorted(mounts)) or 'none' +        msg = "Unable to find etcd storage mount point. Paths mounted: {}.".format(paths) +        raise OpenShiftCheckException(msg) diff --git a/roles/openshift_health_checker/test/etcd_imagedata_size_test.py b/roles/openshift_health_checker/test/etcd_imagedata_size_test.py new file mode 100644 index 000000000..df9d52d41 --- /dev/null +++ b/roles/openshift_health_checker/test/etcd_imagedata_size_test.py @@ -0,0 +1,328 @@ +import pytest + +from collections import namedtuple +from openshift_checks.etcd_imagedata_size import EtcdImageDataSize, OpenShiftCheckException +from etcdkeysize import check_etcd_key_size + + +def fake_etcd_client(root): +    fake_nodes = dict() +    fake_etcd_node(root, fake_nodes) + +    clientclass = namedtuple("client", ["read"]) +    return clientclass(lambda key, recursive: fake_etcd_result(fake_nodes[key])) + + +def fake_etcd_result(fake_node): +    resultclass = namedtuple("result", ["leaves"]) +    if not fake_node.dir: +        return resultclass([fake_node]) + +    return resultclass(fake_node.leaves) + + +def fake_etcd_node(node, visited): +    min_req_fields = ["dir", "key"] +    fields = list(node) +    leaves = [] + +    if node["dir"] and node.get("leaves"): +        for leaf in node["leaves"]: +            leaves.append(fake_etcd_node(leaf, visited)) + +    if len(set(min_req_fields) - set(fields)) > 0: +        raise ValueError("fake etcd nodes require at least {} fields.".format(min_req_fields)) + +    if node.get("leaves"): +        node["leaves"] = leaves + +    nodeclass = namedtuple("node", fields) +    nodeinst = nodeclass(**node) +    visited[nodeinst.key] = nodeinst + +    return nodeinst + + +@pytest.mark.parametrize('ansible_mounts,extra_words', [ +    ([], ['none']),  # empty ansible_mounts +    ([{'mount': '/mnt'}], ['/mnt']),  # missing relevant mount paths +]) +def test_cannot_determine_available_mountpath(ansible_mounts, extra_words): +    task_vars = dict( +        ansible_mounts=ansible_mounts, +    ) +    check = EtcdImageDataSize(execute_module=fake_execute_module) + +    with pytest.raises(OpenShiftCheckException) as excinfo: +        check.run(tmp=None, task_vars=task_vars) + +    for word in 'determine valid etcd mountpath'.split() + extra_words: +        assert word in str(excinfo.value) + + +@pytest.mark.parametrize('ansible_mounts,tree,size_limit,should_fail,extra_words', [ +    ( +        # test that default image size limit evals to 1/2 * (total size in use) +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        {"dir": False, "key": "/", "value": "1234"}, +        None, +        False, +        [], +    ), +    ( +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 48 * 10**9, +        }], +        {"dir": False, "key": "/", "value": "1234"}, +        None, +        False, +        [], +    ), +    ( +        # set max size limit for image data to be 
below total node value +        # total node value is defined as the sum of the value field +        # from every node +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 48 * 10**9, +        }], +        {"dir": False, "key": "/", "value": "12345678"}, +        7, +        True, +        ["exceeds the maximum recommended limit", "0.00 GB"], +    ), +    ( +        [{ +            'mount': '/', +            'size_available': 48 * 10**9 - 1, +            'size_total': 48 * 10**9, +        }], +        {"dir": False, "key": "/", "value": "1234"}, +        None, +        True, +        ["exceeds the maximum recommended limit", "0.00 GB"], +    ) +]) +def test_check_etcd_key_size_calculates_correct_limit(ansible_mounts, tree, size_limit, should_fail, extra_words): +    def execute_module(module_name, args, tmp=None, task_vars=None): +        if module_name != "etcdkeysize": +            return { +                "changed": False, +            } + +        client = fake_etcd_client(tree) +        s, limit_exceeded = check_etcd_key_size(client, tree["key"], args["size_limit_bytes"]) + +        return {"size_limit_exceeded": limit_exceeded} + +    task_vars = dict( +        etcd_max_image_data_size_bytes=size_limit, +        ansible_mounts=ansible_mounts, +        openshift=dict( +            master=dict(etcd_hosts=["localhost"]), +            common=dict(config_base="/var/lib/origin") +        ) +    ) +    if size_limit is None: +        task_vars.pop("etcd_max_image_data_size_bytes") + +    check = EtcdImageDataSize(execute_module=execute_module).run(tmp=None, task_vars=task_vars) + +    if should_fail: +        assert check["failed"] + +        for word in extra_words: +            assert word in check["msg"] +    else: +        assert not check.get("failed", False) + + +@pytest.mark.parametrize('ansible_mounts,tree,root_path,expected_size,extra_words', [ +    ( +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        # test recursive size check on tree with height > 1 +        { +            "dir": True, +            "key": "/", +            "leaves": [ +                {"dir": False, "key": "/foo1", "value": "1234"}, +                {"dir": False, "key": "/foo2", "value": "1234"}, +                {"dir": False, "key": "/foo3", "value": "1234"}, +                {"dir": False, "key": "/foo4", "value": "1234"}, +                { +                    "dir": True, +                    "key": "/foo5", +                    "leaves": [ +                        {"dir": False, "key": "/foo/bar1", "value": "56789"}, +                        {"dir": False, "key": "/foo/bar2", "value": "56789"}, +                        {"dir": False, "key": "/foo/bar3", "value": "56789"}, +                        { +                            "dir": True, +                            "key": "/foo/bar4", +                            "leaves": [ +                                {"dir": False, "key": "/foo/bar/baz1", "value": "123"}, +                                {"dir": False, "key": "/foo/bar/baz2", "value": "123"}, +                            ] +                        }, +                    ] +                }, +            ] +        }, +        "/", +        37, +        [], +    ), +    ( +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        # test correct sub-tree size 
calculation +        { +            "dir": True, +            "key": "/", +            "leaves": [ +                {"dir": False, "key": "/foo1", "value": "1234"}, +                {"dir": False, "key": "/foo2", "value": "1234"}, +                {"dir": False, "key": "/foo3", "value": "1234"}, +                {"dir": False, "key": "/foo4", "value": "1234"}, +                { +                    "dir": True, +                    "key": "/foo5", +                    "leaves": [ +                        {"dir": False, "key": "/foo/bar1", "value": "56789"}, +                        {"dir": False, "key": "/foo/bar2", "value": "56789"}, +                        {"dir": False, "key": "/foo/bar3", "value": "56789"}, +                        { +                            "dir": True, +                            "key": "/foo/bar4", +                            "leaves": [ +                                {"dir": False, "key": "/foo/bar/baz1", "value": "123"}, +                                {"dir": False, "key": "/foo/bar/baz2", "value": "123"}, +                            ] +                        }, +                    ] +                }, +            ] +        }, +        "/foo5", +        21, +        [], +    ), +    ( +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        # test that a non-existing key is handled correctly +        { +            "dir": False, +            "key": "/", +            "value": "1234", +        }, +        "/missing", +        0, +        [], +    ), +    ( +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        # test etcd cycle handling +        { +            "dir": True, +            "key": "/", +            "leaves": [ +                {"dir": False, "key": "/foo1", "value": "1234"}, +                {"dir": False, "key": "/foo2", "value": "1234"}, +                {"dir": False, "key": "/foo3", "value": "1234"}, +                {"dir": False, "key": "/foo4", "value": "1234"}, +                { +                    "dir": True, +                    "key": "/", +                    "leaves": [ +                        {"dir": False, "key": "/foo1", "value": "1"}, +                    ], +                }, +            ] +        }, +        "/", +        16, +        [], +    ), +]) +def test_etcd_key_size_check_calculates_correct_size(ansible_mounts, tree, root_path, expected_size, extra_words): +    def execute_module(module_name, args, tmp=None, task_vars=None): +        if module_name != "etcdkeysize": +            return { +                "changed": False, +            } + +        client = fake_etcd_client(tree) +        size, limit_exceeded = check_etcd_key_size(client, root_path, args["size_limit_bytes"]) + +        assert size == expected_size +        return { +            "size_limit_exceeded": limit_exceeded, +        } + +    task_vars = dict( +        ansible_mounts=ansible_mounts, +        openshift=dict( +            master=dict(etcd_hosts=["localhost"]), +            common=dict(config_base="/var/lib/origin") +        ) +    ) + +    check = EtcdImageDataSize(execute_module=execute_module).run(tmp=None, task_vars=task_vars) +    assert not check.get("failed", False) + + +def test_etcdkeysize_module_failure(): +    def execute_module(module_name, tmp=None, task_vars=None): +        if module_name != "etcdkeysize": +            return { +                
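+                # any module other than etcdkeysize is faked as a successful no-op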
"changed": False, +            } + +        return { +            "rc": 1, +            "module_stderr": "failure", +        } + +    task_vars = dict( +        ansible_mounts=[{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9, +        }], +        openshift=dict( +            master=dict(etcd_hosts=["localhost"]), +            common=dict(config_base="/var/lib/origin") +        ) +    ) + +    check = EtcdImageDataSize(execute_module=execute_module).run(tmp=None, task_vars=task_vars) + +    assert check["failed"] +    for word in "Failed to retrieve stats": +        assert word in check["msg"] + + +def fake_execute_module(*args): +    raise AssertionError('this function should not be called') diff --git a/roles/openshift_health_checker/test/etcd_volume_test.py b/roles/openshift_health_checker/test/etcd_volume_test.py new file mode 100644 index 000000000..917045526 --- /dev/null +++ b/roles/openshift_health_checker/test/etcd_volume_test.py @@ -0,0 +1,149 @@ +import pytest + +from openshift_checks.etcd_volume import EtcdVolume, OpenShiftCheckException + + +@pytest.mark.parametrize('ansible_mounts,extra_words', [ +    ([], ['none']),  # empty ansible_mounts +    ([{'mount': '/mnt'}], ['/mnt']),  # missing relevant mount paths +]) +def test_cannot_determine_available_disk(ansible_mounts, extra_words): +    task_vars = dict( +        ansible_mounts=ansible_mounts, +    ) +    check = EtcdVolume(execute_module=fake_execute_module) + +    with pytest.raises(OpenShiftCheckException) as excinfo: +        check.run(tmp=None, task_vars=task_vars) + +    for word in 'Unable to find etcd storage mount point'.split() + extra_words: +        assert word in str(excinfo.value) + + +@pytest.mark.parametrize('size_limit,ansible_mounts', [ +    ( +        # if no size limit is specified, expect max usage +        # limit to default to 90% of size_total +        None, +        [{ +            'mount': '/', +            'size_available': 40 * 10**9, +            'size_total': 80 * 10**9 +        }], +    ), +    ( +        1, +        [{ +            'mount': '/', +            'size_available': 30 * 10**9, +            'size_total': 30 * 10**9, +        }], +    ), +    ( +        20000000000, +        [{ +            'mount': '/', +            'size_available': 20 * 10**9, +            'size_total': 40 * 10**9, +        }], +    ), +    ( +        5000000000, +        [{ +            # not enough space on / ... +            'mount': '/', +            'size_available': 0, +            'size_total': 0, +        }, { +            # not enough space on /var/lib ... +            'mount': '/var/lib', +            'size_available': 2 * 10**9, +            'size_total': 21 * 10**9, +        }, { +            # ... 
but enough on /var/lib/etcd +            'mount': '/var/lib/etcd', +            'size_available': 36 * 10**9, +            'size_total': 40 * 10**9 +        }], +    ) +]) +def test_succeeds_with_recommended_disk_space(size_limit, ansible_mounts): +    task_vars = dict( +        etcd_device_usage_threshold_percent=size_limit, +        ansible_mounts=ansible_mounts, +    ) + +    if task_vars["etcd_device_usage_threshold_percent"] is None: +        task_vars.pop("etcd_device_usage_threshold_percent") + +    check = EtcdVolume(execute_module=fake_execute_module) +    result = check.run(tmp=None, task_vars=task_vars) + +    assert not result.get('failed', False) + + +@pytest.mark.parametrize('size_limit_percent,ansible_mounts,extra_words', [ +    ( +        # if no size limit is specified, expect max usage +        # limit to default to 90% of size_total +        None, +        [{ +            'mount': '/', +            'size_available': 1 * 10**9, +            'size_total': 100 * 10**9, +        }], +        ['99.0%'], +    ), +    ( +        70.0, +        [{ +            'mount': '/', +            'size_available': 1 * 10**6, +            'size_total': 5 * 10**9, +        }], +        ['100.0%'], +    ), +    ( +        40.0, +        [{ +            'mount': '/', +            'size_available': 2 * 10**9, +            'size_total': 6 * 10**9, +        }], +        ['66.7%'], +    ), +    ( +        None, +        [{ +            # enough space on /var ... +            'mount': '/var', +            'size_available': 20 * 10**9, +            'size_total': 20 * 10**9, +        }, { +            # .. but not enough on /var/lib +            'mount': '/var/lib', +            'size_available': 1 * 10**9, +            'size_total': 20 * 10**9, +        }], +        ['95.0%'], +    ), +]) +def test_fails_with_insufficient_disk_space(size_limit_percent, ansible_mounts, extra_words): +    task_vars = dict( +        etcd_device_usage_threshold_percent=size_limit_percent, +        ansible_mounts=ansible_mounts, +    ) + +    if task_vars["etcd_device_usage_threshold_percent"] is None: +        task_vars.pop("etcd_device_usage_threshold_percent") + +    check = EtcdVolume(execute_module=fake_execute_module) +    result = check.run(tmp=None, task_vars=task_vars) + +    assert result['failed'] +    for word in extra_words: +        assert word in result['msg'] + + +def fake_execute_module(*args): +    raise AssertionError('this function should not be called') diff --git a/roles/openshift_hosted/README.md b/roles/openshift_hosted/README.md index 6d576df71..3e5d7f860 100644 --- a/roles/openshift_hosted/README.md +++ b/roles/openshift_hosted/README.md @@ -28,6 +28,14 @@ From this role:  | openshift_hosted_registry_selector    | region=infra                             | Node selector used when creating registry. The OpenShift registry will only be deployed to nodes matching this selector. |  | openshift_hosted_registry_cert_expire_days | `730` (2 years)                     | Validity of the certificates in days. Works only with OpenShift version 1.5 (3.5) and later.                             
| +If you specify `openshift_hosted_registry_kind=glusterfs`, the following +variables also control configuration behavior: + +| Name                                         | Default value | Description                                                                  | +|----------------------------------------------|---------------|------------------------------------------------------------------------------| +| openshift_hosted_registry_glusterfs_swap     | False         | Whether to swap an existing registry's storage volume for a GlusterFS volume | +| openshift_hosted_registry_glusterfs_swapcopy | True          | If swapping, also copy the current contents of the registry volume           | +  Dependencies  ------------ diff --git a/roles/openshift_hosted/defaults/main.yml b/roles/openshift_hosted/defaults/main.yml index e7e62e5e4..089054e2f 100644 --- a/roles/openshift_hosted/defaults/main.yml +++ b/roles/openshift_hosted/defaults/main.yml @@ -30,3 +30,8 @@ openshift_hosted_routers:  openshift_hosted_router_certificate: {}  openshift_hosted_registry_cert_expire_days: 730  openshift_hosted_router_create_certificate: False + +os_firewall_allow: +- service: Docker Registry Port +  port: 5000/tcp +  when: openshift.common.use_calico | bool diff --git a/roles/openshift_hosted/meta/main.yml b/roles/openshift_hosted/meta/main.yml index 9626c23c1..9e3f37130 100644 --- a/roles/openshift_hosted/meta/main.yml +++ b/roles/openshift_hosted/meta/main.yml @@ -15,3 +15,8 @@ dependencies:  - role: openshift_cli  - role: openshift_hosted_facts  - role: lib_openshift +- role: os_firewall +  os_firewall_allow: +  - service: Docker Registry Port +    port: 5000/tcp +  when: openshift.common.use_calico | bool diff --git a/roles/openshift_hosted/tasks/registry/registry.yml b/roles/openshift_hosted/tasks/registry/registry.yml index 6e691c26f..751489958 100644 --- a/roles/openshift_hosted/tasks/registry/registry.yml +++ b/roles/openshift_hosted/tasks/registry/registry.yml @@ -61,7 +61,7 @@      name: "{{ openshift_hosted_registry_serviceaccount }}"      namespace: "{{ openshift_hosted_registry_namespace }}" -- name: Grant the registry serivce account access to the appropriate scc +- name: Grant the registry service account access to the appropriate scc    oc_adm_policy_user:      user: "system:serviceaccount:{{ openshift_hosted_registry_namespace }}:{{ openshift_hosted_registry_serviceaccount }}"      namespace: "{{ openshift_hosted_registry_namespace }}" @@ -126,4 +126,4 @@  - include: storage/glusterfs.yml    when: -  - openshift.hosted.registry.storage.kind | default(none) == 'glusterfs' +  - openshift.hosted.registry.storage.kind | default(none) == 'glusterfs' or openshift.hosted.registry.storage.glusterfs.swap diff --git a/roles/openshift_hosted/tasks/registry/storage/glusterfs.yml b/roles/openshift_hosted/tasks/registry/storage/glusterfs.yml index b18b24266..e6bb196b8 100644 --- a/roles/openshift_hosted/tasks/registry/storage/glusterfs.yml +++ b/roles/openshift_hosted/tasks/registry/storage/glusterfs.yml @@ -1,10 +1,18 @@  --- +- name: Get registry DeploymentConfig +  oc_obj: +    namespace: "{{ openshift_hosted_registry_namespace }}" +    state: list +    kind: dc +    name: "{{ openshift_hosted_registry_name }}" +  register: registry_dc +  - name: Wait for registry pods    oc_obj:      namespace: "{{ openshift_hosted_registry_namespace }}"      state: list      kind: pod -    selector: "{{ openshift_hosted_registry_name }}={{ openshift_hosted_registry_namespace }}" +    selector: "{% for label, 
value in registry_dc.results.results[0].spec.selector.iteritems() %}{{ label }}={{ value }}{% if not loop.last %},{% endif %}{% endfor %}"    register: registry_pods    until:    - "registry_pods.results.results[0]['items'] | count > 0" @@ -38,6 +46,39 @@      mode: "2775"      recurse: True +- block: +  - name: Activate registry maintenance mode +    oc_env: +      namespace: "{{ openshift_hosted_registry_namespace }}" +      name: "{{ openshift_hosted_registry_name }}" +      env_vars: +      - REGISTRY_STORAGE_MAINTENANCE_READONLY_ENABLED: 'true' + +  - name: Get first registry pod name +    set_fact: +      registry_pod_name: "{{ registry_pods.results.results[0]['items'][0].metadata.name }}" + +  - name: Copy current registry contents to new GlusterFS volume +    command: "oc rsync {{ registry_pod_name }}:/registry/ {{ mktemp.stdout }}/" +    when: openshift.hosted.registry.storage.glusterfs.swapcopy + +  - name: Swap new GlusterFS registry volume +    oc_volume: +      namespace: "{{ openshift_hosted_registry_namespace }}" +      name: "{{ openshift_hosted_registry_name }}" +      vol_name: registry-storage +      mount_type: pvc +      claim_name: "{{ openshift.hosted.registry.storage.volume.name }}-glusterfs-claim" + +  - name: Deactivate registry maintenance mode +    oc_env: +      namespace: "{{ openshift_hosted_registry_namespace }}" +      name: "{{ openshift_hosted_registry_name }}" +      state: absent +      env_vars: +      - REGISTRY_STORAGE_MAINTENANCE_READONLY_ENABLED: 'true' +  when: openshift.hosted.registry.storage.glusterfs.swap +  - name: Unmount registry volume    mount:      state: unmounted diff --git a/roles/openshift_hosted_templates/files/v3.6/enterprise/registry-console.yaml b/roles/openshift_hosted_templates/files/v3.6/enterprise/registry-console.yaml index 28feac4e6..8fe02444e 100644 --- a/roles/openshift_hosted_templates/files/v3.6/enterprise/registry-console.yaml +++ b/roles/openshift_hosted_templates/files/v3.6/enterprise/registry-console.yaml @@ -103,9 +103,9 @@ parameters:    - description: 'Specify "registry/repository" prefix for container image; e.g. for "registry.access.redhat.com/openshift3/registry-console:latest", set prefix "registry.access.redhat.com/openshift3/"'      name: IMAGE_PREFIX      value: "registry.access.redhat.com/openshift3/" -  - description: 'Specify image version; e.g. for "registry.access.redhat.com/openshift3/registry-console:3.5", set version "3.5"' +  - description: 'Specify image version; e.g. for "registry.access.redhat.com/openshift3/registry-console:3.6", set version "3.6"'      name: IMAGE_VERSION -    value: "3.5" +    value: "3.6"    - description: "The public URL for the Openshift OAuth Provider, e.g. 
https://openshift.example.com:8443"      name: OPENSHIFT_OAUTH_PROVIDER_URL      required: true diff --git a/roles/openshift_loadbalancer/templates/haproxy.docker.service.j2 b/roles/openshift_loadbalancer/templates/haproxy.docker.service.j2 index 5385df3b7..72182fcdd 100644 --- a/roles/openshift_loadbalancer/templates/haproxy.docker.service.j2 +++ b/roles/openshift_loadbalancer/templates/haproxy.docker.service.j2 @@ -1,7 +1,7 @@  [Unit] -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  ExecStartPre=-/usr/bin/docker rm -f openshift_loadbalancer @@ -14,4 +14,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_logging/README.md b/roles/openshift_logging/README.md index cba0f2de8..3c410eff2 100644 --- a/roles/openshift_logging/README.md +++ b/roles/openshift_logging/README.md @@ -97,3 +97,30 @@ same as above for their non-ops counterparts, but apply to the OPS cluster insta  - `openshift_logging_kibana_ops_proxy_cpu_limit`: The amount of CPU to allocate to Kibana proxy or unset if not specified.  - `openshift_logging_kibana_ops_proxy_memory_limit`: The amount of memory to allocate to Kibana proxy or unset if not specified.  - `openshift_logging_kibana_ops_replica_count`: The number of replicas Kibana ops should be scaled up to. Defaults to 1. + +Elasticsearch can be exposed to external clients outside of the cluster. +- `openshift_logging_es_allow_external`: True (default is False) - if this is +  True, Elasticsearch will be exposed as a Route +- `openshift_logging_es_hostname`: The external facing hostname to use for +  the route and the TLS server certificate (default is "es." + +  `openshift_master_default_subdomain`) +- `openshift_logging_es_cert`: The location of the certificate Elasticsearch +  uses for the external TLS server cert (default is a generated cert) +- `openshift_logging_es_key`: The location of the key Elasticsearch +  uses for the external TLS server cert (default is a generated key) +- `openshift_logging_es_ca_ext`: The location of the CA cert for the cert +  Elasticsearch uses for the external TLS server cert (default is the internal +  CA) +The same options are available for the Elasticsearch OPS deployment, if an OPS cluster is in use: +- `openshift_logging_es_ops_allow_external`: True (default is False) - if this is +  True, Elasticsearch will be exposed as a Route +- `openshift_logging_es_ops_hostname`: The external facing hostname to use for +  the route and the TLS server certificate (default is "es-ops." 
+ +  `openshift_master_default_subdomain`) +- `openshift_logging_es_ops_cert`: The location of the certificate Elasticsearch +  uses for the external TLS server cert (default is a generated cert) +- `openshift_logging_es_ops_key`: The location of the key Elasticsearch +  uses for the external TLS server cert (default is a generated key) +- `openshift_logging_es_ops_ca_ext`: The location of the CA cert for the cert +  Elasticsearch uses for the external TLS server cert (default is the internal +  CA) diff --git a/roles/openshift_logging/defaults/main.yml b/roles/openshift_logging/defaults/main.yml index c05cc5f98..837c54067 100644 --- a/roles/openshift_logging/defaults/main.yml +++ b/roles/openshift_logging/defaults/main.yml @@ -26,10 +26,10 @@ openshift_logging_curator_ops_nodeselector: "{{ openshift_hosted_logging_curator  openshift_logging_kibana_hostname: "{{ openshift_hosted_logging_hostname | default('kibana.' ~ (openshift_master_default_subdomain | default('router.default.svc.cluster.local', true))) }}"  openshift_logging_kibana_cpu_limit: null -openshift_logging_kibana_memory_limit: null +openshift_logging_kibana_memory_limit: 736Mi  openshift_logging_kibana_proxy_debug: false  openshift_logging_kibana_proxy_cpu_limit: null -openshift_logging_kibana_proxy_memory_limit: null +openshift_logging_kibana_proxy_memory_limit: 96Mi  openshift_logging_kibana_replica_count: 1  openshift_logging_kibana_edge_term_policy: Redirect @@ -50,10 +50,10 @@ openshift_logging_kibana_ca: ""  openshift_logging_kibana_ops_hostname: "{{ openshift_hosted_logging_ops_hostname | default('kibana-ops.' ~ (openshift_master_default_subdomain | default('router.default.svc.cluster.local', true))) }}"  openshift_logging_kibana_ops_cpu_limit: null -openshift_logging_kibana_ops_memory_limit: null +openshift_logging_kibana_ops_memory_limit: 736Mi  openshift_logging_kibana_ops_proxy_debug: false  openshift_logging_kibana_ops_proxy_cpu_limit: null -openshift_logging_kibana_ops_proxy_memory_limit: null +openshift_logging_kibana_ops_proxy_memory_limit: 96Mi  openshift_logging_kibana_ops_replica_count: 1  #The absolute path on the control node to the cert file to use @@ -72,7 +72,7 @@ openshift_logging_fluentd_nodeselector: "{{ openshift_hosted_logging_fluentd_nod  openshift_logging_fluentd_cpu_limit: 100m  openshift_logging_fluentd_memory_limit: 512Mi  openshift_logging_fluentd_es_copy: false -openshift_logging_fluentd_use_journal: "{{ openshift_hosted_logging_use_journal | default('') }}" +openshift_logging_fluentd_use_journal: "{{ openshift_hosted_logging_use_journal if openshift_hosted_logging_use_journal is defined else (docker_log_driver == 'journald') | ternary(True, False) if docker_log_driver is defined else (openshift.docker.log_driver == 'journald') | ternary(True, False) if openshift.docker.log_driver is defined else openshift.docker.options | search('--log-driver=journald') if openshift.docker.options is defined else default(omit) }}"  openshift_logging_fluentd_journal_source: "{{ openshift_hosted_logging_journal_source | default('') }}"  openshift_logging_fluentd_journal_read_from_head: "{{ openshift_hosted_logging_journal_read_from_head | default('') }}"  openshift_logging_fluentd_hosts: ['--all'] @@ -99,6 +99,22 @@ openshift_logging_es_config: {}  openshift_logging_es_number_of_shards: 1  openshift_logging_es_number_of_replicas: 0 +# for exposing es to external (outside of the cluster) clients +openshift_logging_es_allow_external: False +openshift_logging_es_hostname: "{{ 'es.' 
~ (openshift_master_default_subdomain | default('router.default.svc.cluster.local', true)) }}" + +#The absolute path on the control node to the cert file to use +#for the public facing es certs +openshift_logging_es_cert: "" + +#The absolute path on the control node to the key file to use +#for the public facing es certs +openshift_logging_es_key: "" + +#The absolute path on the control node to the CA file to use +#for the public facing es certs +openshift_logging_es_ca_ext: "" +  # allow cluster-admin or cluster-reader to view operations index  openshift_logging_es_ops_allow_cluster_reader: False @@ -118,6 +134,22 @@ openshift_logging_es_ops_recover_after_time: 5m  openshift_logging_es_ops_storage_group: "{{ openshift_hosted_logging_elasticsearch_storage_group | default('65534') }}"  openshift_logging_es_ops_nodeselector: "{{ openshift_hosted_logging_elasticsearch_ops_nodeselector | default('') | map_from_pairs }}" +# for exposing es-ops to external (outside of the cluster) clients +openshift_logging_es_ops_allow_external: False +openshift_logging_es_ops_hostname: "{{ 'es-ops.' ~ (openshift_master_default_subdomain | default('router.default.svc.cluster.local', true)) }}" + +#The absolute path on the control node to the cert file to use +#for the public facing es-ops certs +openshift_logging_es_ops_cert: "" + +#The absolute path on the control node to the key file to use +#for the public facing es-ops certs +openshift_logging_es_ops_key: "" + +#The absolute path on the control node to the CA file to use +#for the public facing es-ops certs +openshift_logging_es_ops_ca_ext: "" +  # storage related defaults  openshift_logging_storage_access_modes: "{{ openshift_hosted_logging_storage_access_modes | default(['ReadWriteOnce']) }}" diff --git a/roles/openshift_logging/library/openshift_logging_facts.py b/roles/openshift_logging/library/openshift_logging_facts.py index 64bc33435..a55e72725 100644 --- a/roles/openshift_logging/library/openshift_logging_facts.py +++ b/roles/openshift_logging/library/openshift_logging_facts.py @@ -37,7 +37,7 @@ LOGGING_INFRA_KEY = "logging-infra"  # selectors for filtering resources  DS_FLUENTD_SELECTOR = LOGGING_INFRA_KEY + "=" + "fluentd"  LOGGING_SELECTOR = LOGGING_INFRA_KEY + "=" + "support" -ROUTE_SELECTOR = "component=support, logging-infra=support, provider=openshift" +ROUTE_SELECTOR = "component=support,logging-infra=support,provider=openshift"  COMPONENTS = ["kibana", "curator", "elasticsearch", "fluentd", "kibana_ops", "curator_ops", "elasticsearch_ops"] diff --git a/roles/openshift_logging/tasks/generate_certs.yaml b/roles/openshift_logging/tasks/generate_certs.yaml index b34df018d..46a7e82c6 100644 --- a/roles/openshift_logging/tasks/generate_certs.yaml +++ b/roles/openshift_logging/tasks/generate_certs.yaml @@ -60,6 +60,24 @@      - procure_component: mux    when: openshift_logging_use_mux +- include: procure_server_certs.yaml +  loop_control: +    loop_var: cert_info +  with_items: +    - procure_component: es +      hostnames: "es, {{openshift_logging_es_hostname}}" +  when: openshift_logging_es_allow_external | bool + +- include: procure_server_certs.yaml +  loop_control: +    loop_var: cert_info +  with_items: +    - procure_component: es-ops +      hostnames: "es-ops, {{openshift_logging_es_ops_hostname}}" +  when: +    - openshift_logging_es_allow_external | bool +    - openshift_logging_use_ops | bool +  - name: Copy proxy TLS configuration file    copy: src=server-tls.json dest={{generated_certs_dir}}/server-tls.json    when: server_tls_json is 
undefined @@ -108,6 +126,14 @@      loop_var: node_name    when: openshift_logging_use_mux +- name: Generate PEM cert for Elasticsearch external route +  include: generate_pems.yaml component={{node_name}} +  with_items: +    - system.logging.es +  loop_control: +    loop_var: node_name +  when: openshift_logging_es_allow_external | bool +  - name: Creating necessary JKS certs    include: generate_jks.yaml diff --git a/roles/openshift_logging/tasks/generate_routes.yaml b/roles/openshift_logging/tasks/generate_routes.yaml index f76bb3a0a..ae9a8e023 100644 --- a/roles/openshift_logging/tasks/generate_routes.yaml +++ b/roles/openshift_logging/tasks/generate_routes.yaml @@ -75,3 +75,95 @@        provider: openshift    when: openshift_logging_use_ops | bool    changed_when: no + +- set_fact: es_key={{ lookup('file', openshift_logging_es_key) | b64encode }} +  when: +  - openshift_logging_es_key | trim | length > 0 +  - openshift_logging_es_allow_external | bool +  changed_when: false + +- set_fact: es_cert={{ lookup('file', openshift_logging_es_cert)| b64encode  }} +  when: +  - openshift_logging_es_cert | trim | length > 0 +  - openshift_logging_es_allow_external | bool +  changed_when: false + +- set_fact: es_ca={{ lookup('file', openshift_logging_es_ca_ext)| b64encode  }} +  when: +  - openshift_logging_es_ca_ext | trim | length > 0 +  - openshift_logging_es_allow_external | bool +  changed_when: false + +- set_fact: es_ca={{key_pairs | entry_from_named_pair('ca_file') }} +  when: +  - es_ca is not defined +  - openshift_logging_es_allow_external | bool +  changed_when: false + +- name: Generating Elasticsearch logging routes +  template: src=route_reencrypt.j2 dest={{mktemp.stdout}}/templates/logging-logging-es-route.yaml +  tags: routes +  vars: +    obj_name: "logging-es" +    route_host: "{{openshift_logging_es_hostname}}" +    service_name: "logging-es" +    tls_key: "{{es_key | default('') | b64decode}}" +    tls_cert: "{{es_cert | default('') | b64decode}}" +    tls_ca_cert: "{{es_ca | b64decode}}" +    tls_dest_ca_cert: "{{key_pairs | entry_from_named_pair('ca_file')| b64decode }}" +    edge_term_policy: "{{openshift_logging_es_edge_term_policy | default('') }}" +    labels: +      component: support +      logging-infra: support +      provider: openshift +  changed_when: no +  when: openshift_logging_es_allow_external | bool + +- set_fact: es_ops_key={{ lookup('file', openshift_logging_es_ops_key) | b64encode }} +  when: +  - openshift_logging_es_ops_allow_external | bool +  - openshift_logging_use_ops | bool +  - "{{ openshift_logging_es_ops_key | trim | length > 0 }}" +  changed_when: false + +- set_fact: es_ops_cert={{ lookup('file', openshift_logging_es_ops_cert)| b64encode  }} +  when: +  - openshift_logging_es_ops_allow_external | bool +  - openshift_logging_use_ops | bool +  - "{{openshift_logging_es_ops_cert | trim | length > 0}}" +  changed_when: false + +- set_fact: es_ops_ca={{ lookup('file', openshift_logging_es_ops_ca_ext)| b64encode  }} +  when: +  - openshift_logging_es_ops_allow_external | bool +  - openshift_logging_use_ops | bool +  - "{{openshift_logging_es_ops_ca_ext | trim | length > 0}}" +  changed_when: false + +- set_fact: es_ops_ca={{key_pairs | entry_from_named_pair('ca_file') }} +  when: +  - openshift_logging_es_ops_allow_external | bool +  - openshift_logging_use_ops | bool +  - es_ops_ca is not defined +  changed_when: false + +- name: Generating Elasticsearch logging ops routes +  template: src=route_reencrypt.j2 
dest={{mktemp.stdout}}/templates/logging-logging-es-ops-route.yaml +  tags: routes +  vars: +    obj_name: "logging-es-ops" +    route_host: "{{openshift_logging_es_ops_hostname}}" +    service_name: "logging-es-ops" +    tls_key: "{{es_ops_key | default('') | b64decode}}" +    tls_cert: "{{es_ops_cert | default('') | b64decode}}" +    tls_ca_cert: "{{es_ops_ca | b64decode}}" +    tls_dest_ca_cert: "{{key_pairs | entry_from_named_pair('ca_file')| b64decode }}" +    edge_term_policy: "{{openshift_logging_es_ops_edge_term_policy | default('') }}" +    labels: +      component: support +      logging-infra: support +      provider: openshift +  when: +  - openshift_logging_es_ops_allow_external | bool +  - openshift_logging_use_ops | bool +  changed_when: no diff --git a/roles/openshift_logging/tasks/generate_secrets.yaml b/roles/openshift_logging/tasks/generate_secrets.yaml index c1da49fd8..b629bd995 100644 --- a/roles/openshift_logging/tasks/generate_secrets.yaml +++ b/roles/openshift_logging/tasks/generate_secrets.yaml @@ -99,3 +99,31 @@    when: logging_es_secret.stdout is defined    check_mode: no    changed_when: no + +- name: Retrieving the cert to use when generating secrets for Elasticsearch external route +  slurp: src="{{generated_certs_dir}}/{{item.file}}" +  register: es_key_pairs +  with_items: +    - { name: "ca_file", file: "ca.crt" } +    - { name: "es_key", file: "system.logging.es.key"} +    - { name: "es_cert", file: "system.logging.es.crt"} +  when: openshift_logging_es_allow_external | bool + +- name: Generating secrets for Elasticsearch external route +  template: src=secret.j2 dest={{mktemp.stdout}}/templates/{{secret_name}}-secret.yaml +  vars: +    secret_name: "logging-{{component}}" +    secret_key_file: "{{component}}_key" +    secret_cert_file: "{{component}}_cert" +    secrets: +      - {key: ca, value: "{{es_key_pairs | entry_from_named_pair('ca_file')| b64decode }}"} +      - {key: key, value: "{{es_key_pairs | entry_from_named_pair(secret_key_file)| b64decode }}"} +      - {key: cert, value: "{{es_key_pairs | entry_from_named_pair(secret_cert_file)| b64decode }}"} +    secret_keys: ["ca", "cert", "key"] +  with_items: +    - es +  loop_control: +    loop_var: component +  check_mode: no +  changed_when: no +  when: openshift_logging_es_allow_external | bool diff --git a/roles/openshift_logging/tasks/main.yaml b/roles/openshift_logging/tasks/main.yaml index 387da618d..3d8cd3410 100644 --- a/roles/openshift_logging/tasks/main.yaml +++ b/roles/openshift_logging/tasks/main.yaml @@ -28,6 +28,7 @@    register: local_tmp    changed_when: False    check_mode: no +  become: no  - debug: msg="Created local temp dir {{local_tmp.stdout}}" diff --git a/roles/openshift_logging/templates/elasticsearch.yml.j2 b/roles/openshift_logging/templates/elasticsearch.yml.j2 index 93c4d854c..355642cb7 100644 --- a/roles/openshift_logging/templates/elasticsearch.yml.j2 +++ b/roles/openshift_logging/templates/elasticsearch.yml.j2 @@ -28,11 +28,10 @@ cloud:  discovery:    type: kubernetes    zen.ping.multicast.enabled: false -  zen.minimum_master_nodes: {{es_min_masters}} +  zen.minimum_master_nodes: ${NODE_QUORUM}  gateway: -  expected_master_nodes: ${NODE_QUORUM} -  recover_after_nodes: ${RECOVER_AFTER_NODES} +  recover_after_nodes: ${NODE_QUORUM}    expected_nodes: ${RECOVER_EXPECTED_NODES}    recover_after_time: ${RECOVER_AFTER_TIME} diff --git a/roles/openshift_logging/templates/es.j2 b/roles/openshift_logging/templates/es.j2 index f89855bf5..680c16cf4 100644 --- 
a/roles/openshift_logging/templates/es.j2 +++ b/roles/openshift_logging/templates/es.j2 @@ -78,9 +78,6 @@ spec:                name: "NODE_QUORUM"                value: "{{es_node_quorum | int}}"              - -              name: "RECOVER_AFTER_NODES" -              value: "{{es_recover_after_nodes}}" -            -                name: "RECOVER_EXPECTED_NODES"                value: "{{es_recover_expected_nodes}}"              - diff --git a/roles/openshift_logging/templates/fluentd.j2 b/roles/openshift_logging/templates/fluentd.j2 index d13691259..5c93d823e 100644 --- a/roles/openshift_logging/templates/fluentd.j2 +++ b/roles/openshift_logging/templates/fluentd.j2 @@ -59,6 +59,9 @@ spec:          - name: dockercfg            mountPath: /etc/sysconfig/docker            readOnly: true +        - name: dockerdaemoncfg +          mountPath: /etc/docker +          readOnly: true  {% if openshift_logging_use_mux_client | bool %}          - name: muxcerts            mountPath: /etc/fluent/muxkeys @@ -154,6 +157,9 @@ spec:        - name: dockercfg          hostPath:            path: /etc/sysconfig/docker +      - name: dockerdaemoncfg +        hostPath: +          path: /etc/docker  {% if openshift_logging_use_mux_client | bool %}        - name: muxcerts          secret: diff --git a/roles/openshift_logging/templates/kibana.j2 b/roles/openshift_logging/templates/kibana.j2 index e6ecf82ff..25fab9ac4 100644 --- a/roles/openshift_logging/templates/kibana.j2 +++ b/roles/openshift_logging/templates/kibana.j2 @@ -44,15 +44,19 @@ spec:  {% if kibana_cpu_limit is not none %}                cpu: "{{kibana_cpu_limit}}"  {% endif %} -{% if kibana_memory_limit is not none %} -              memory: "{{kibana_memory_limit}}" -{% endif %} +              memory: "{{kibana_memory_limit | default('736Mi') }}"  {% endif %}            env:              - name: "ES_HOST"                value: "{{es_host}}"              - name: "ES_PORT"                value: "{{es_port}}" +            - +              name: "KIBANA_MEMORY_LIMIT" +              valueFrom: +                resourceFieldRef: +                  containerName: kibana +                  resource: limits.memory            volumeMounts:              - name: kibana                mountPath: /etc/kibana/keys @@ -67,9 +71,7 @@ spec:  {% if kibana_proxy_cpu_limit is not none %}                cpu: "{{kibana_proxy_cpu_limit}}"  {% endif %} -{% if kibana_proxy_memory_limit is not none %} -              memory: "{{kibana_proxy_memory_limit}}" -{% endif %} +              memory: "{{kibana_proxy_memory_limit | default('96Mi') }}"  {% endif %}            ports:              - @@ -103,6 +105,27 @@ spec:              -               name: "OAP_DEBUG"               value: "{{openshift_logging_kibana_proxy_debug}}" +            - +             name: "OAP_OAUTH_SECRET_FILE" +             value: "/secret/oauth-secret" +            - +             name: "OAP_SERVER_CERT_FILE" +             value: "/secret/server-cert" +            - +             name: "OAP_SERVER_KEY_FILE" +             value: "/secret/server-key" +            - +             name: "OAP_SERVER_TLS_FILE" +             value: "/secret/server-tls.json" +            - +             name: "OAP_SESSION_SECRET_FILE" +             value: "/secret/session-secret" +            - +             name: "OCP_AUTH_PROXY_MEMORY_LIMIT" +             valueFrom: +               resourceFieldRef: +                 containerName: kibana-proxy +                 resource: limits.memory            volumeMounts:              - 
name: kibana-proxy                mountPath: /secret diff --git a/roles/openshift_logging/vars/main.yaml b/roles/openshift_logging/vars/main.yaml index e06625e3f..e561b41e2 100644 --- a/roles/openshift_logging/vars/main.yaml +++ b/roles/openshift_logging/vars/main.yaml @@ -1,12 +1,8 @@  ---  openshift_master_config_dir: "{{ openshift.common.config_base }}/master" -es_node_quorum: "{{openshift_logging_es_cluster_size|int/2 + 1}}" -es_min_masters_default: "{{ (openshift_logging_es_cluster_size | int / 2 | round(0,'floor') + 1) | int }}" -es_min_masters: "{{ (openshift_logging_es_cluster_size == 1) | ternary(1, es_min_masters_default)}}" -es_recover_after_nodes: "{{openshift_logging_es_cluster_size|int - 1}}" -es_recover_expected_nodes: "{{openshift_logging_es_cluster_size|int}}" -es_ops_node_quorum: "{{openshift_logging_es_ops_cluster_size|int/2 + 1}}" -es_ops_recover_after_nodes: "{{openshift_logging_es_ops_cluster_size|int - 1}}" -es_ops_recover_expected_nodes: "{{openshift_logging_es_ops_cluster_size|int}}" +es_node_quorum: "{{ (openshift_logging_es_cluster_size | int/2 | round(0,'floor') + 1) | int}}" +es_recover_expected_nodes: "{{openshift_logging_es_cluster_size | int}}" +es_ops_node_quorum: "{{ (openshift_logging_es_ops_cluster_size | int/2 | round(0,'floor') + 1) | int}}" +es_ops_recover_expected_nodes: "{{openshift_logging_es_ops_cluster_size | int}}"  es_log_appenders: ['file', 'console'] diff --git a/roles/openshift_master/files/atomic-openshift-master.service b/roles/openshift_master/files/atomic-openshift-master.service new file mode 100644 index 000000000..02af4dd16 --- /dev/null +++ b/roles/openshift_master/files/atomic-openshift-master.service @@ -0,0 +1,23 @@ +[Unit] +Description=Atomic OpenShift Master +Documentation=https://github.com/openshift/origin +After=network-online.target +After=etcd.service +Before=atomic-openshift-node.service +Requires=network-online.target + +[Service] +Type=notify +EnvironmentFile=/etc/sysconfig/atomic-openshift-master +Environment=GOTRACEBACK=crash +ExecStart=/usr/bin/openshift start master --config=${CONFIG_FILE} $OPTIONS +LimitNOFILE=131072 +LimitCORE=infinity +WorkingDirectory=/var/lib/origin/ +SyslogIdentifier=atomic-openshift-master +Restart=always +RestartSec=5s + +[Install] +WantedBy=multi-user.target +WantedBy=atomic-openshift-node.service diff --git a/roles/openshift_master/files/origin-master.service b/roles/openshift_master/files/origin-master.service new file mode 100644 index 000000000..cf79dda02 --- /dev/null +++ b/roles/openshift_master/files/origin-master.service @@ -0,0 +1,23 @@ +[Unit] +Description=Origin Master Service +Documentation=https://github.com/openshift/origin +After=network-online.target +After=etcd.service +Before=origin-node.service +Requires=network-online.target + +[Service] +Type=notify +EnvironmentFile=/etc/sysconfig/origin-master +Environment=GOTRACEBACK=crash +ExecStart=/usr/bin/openshift start master --config=${CONFIG_FILE} $OPTIONS +LimitNOFILE=131072 +LimitCORE=infinity +WorkingDirectory=/var/lib/origin/ +SyslogIdentifier=origin-master +Restart=always +RestartSec=5s + +[Install] +WantedBy=multi-user.target +WantedBy=origin-node.service diff --git a/roles/openshift_master/tasks/files b/roles/openshift_master/tasks/files new file mode 120000 index 000000000..feb122881 --- /dev/null +++ b/roles/openshift_master/tasks/files @@ -0,0 +1 @@ +../files
\ No newline at end of file diff --git a/roles/openshift_master/tasks/systemd_units.yml b/roles/openshift_master/tasks/systemd_units.yml index 58fabddeb..dfc255b3d 100644 --- a/roles/openshift_master/tasks/systemd_units.yml +++ b/roles/openshift_master/tasks/systemd_units.yml @@ -32,6 +32,15 @@    - not openshift.common.is_master_system_container | bool    register: create_master_unit_file +- name: Install Master service file +  copy: +    dest: "/etc/systemd/system/{{ openshift.common.service_type }}-master.service" +    src: "{{ openshift.common.service_type }}-master.service" +  register: create_master_unit_file +  when: +  - not openshift.common.is_containerized | bool +  - (openshift.master.ha is not defined or not openshift.master.ha) | bool +  - command: systemctl daemon-reload    when: create_master_unit_file | changed diff --git a/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-api.service.j2 b/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-api.service.j2 index 155abd970..897ee7285 100644 --- a/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-api.service.j2 +++ b/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-api.service.j2 @@ -4,9 +4,9 @@ Documentation=https://github.com/openshift/origin  After=etcd_container.service  Wants=etcd_container.service  Before={{ openshift.common.service_type }}-node.service -After=docker.service -PartOf=docker.service -Requires=docker.service +After={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service  [Service]  EnvironmentFile=/etc/sysconfig/{{ openshift.common.service_type }}-master-api @@ -23,5 +23,5 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service  WantedBy={{ openshift.common.service_type }}-node.service diff --git a/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-controllers.service.j2 b/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-controllers.service.j2 index 088e8db43..451f3436a 100644 --- a/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-controllers.service.j2 +++ b/roles/openshift_master/templates/docker-cluster/atomic-openshift-master-controllers.service.j2 @@ -3,9 +3,9 @@ Description=Atomic OpenShift Master Controllers  Documentation=https://github.com/openshift/origin  Wants={{ openshift.common.service_type }}-master-api.service  After={{ openshift.common.service_type }}-master-api.service -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  EnvironmentFile=/etc/sysconfig/{{ openshift.common.service_type }}-master-controllers @@ -22,4 +22,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_master/templates/master_docker/master.docker.service.j2 b/roles/openshift_master/templates/master_docker/master.docker.service.j2 index 13381cd1a..7f40cb042 100644 --- a/roles/openshift_master/templates/master_docker/master.docker.service.j2 +++ b/roles/openshift_master/templates/master_docker/master.docker.service.j2 @@ -1,7 +1,7 @@  [Unit] -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ 
openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  After=etcd_container.service  Wants=etcd_container.service @@ -15,4 +15,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_master_certificates/tasks/main.yml b/roles/openshift_master_certificates/tasks/main.yml index 33a0af07f..9706da24b 100644 --- a/roles/openshift_master_certificates/tasks/main.yml +++ b/roles/openshift_master_certificates/tasks/main.yml @@ -64,7 +64,7 @@      --signer-key={{ openshift_ca_key }}      --signer-serial={{ openshift_ca_serial }}      --overwrite=false -  when: inventory_hostname != openshift_ca_host +  when: item != openshift_ca_host    with_items: "{{ hostvars                    | oo_select_keys(groups['oo_masters_to_config'])                    | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True}) }}" @@ -95,7 +95,7 @@    with_items: "{{ hostvars                    | oo_select_keys(groups['oo_masters_to_config'])                    | oo_collect(attribute='inventory_hostname', filters={'master_certs_missing':True}) }}" -  when: inventory_hostname != openshift_ca_host +  when: item != openshift_ca_host    delegate_to: "{{ openshift_ca_host }}"    run_once: true @@ -124,7 +124,6 @@    register: g_master_certs_mktemp    changed_when: False    when: master_certs_missing | bool -  delegate_to: localhost    become: no  - name: Create a tarball of the master certs @@ -158,10 +157,10 @@      dest: "{{ openshift_master_config_dir }}"    when: master_certs_missing | bool and inventory_hostname != openshift_ca_host -- file: name={{ g_master_certs_mktemp.stdout }} state=absent +- name: Delete local temp directory +  local_action: file path="{{ g_master_certs_mktemp.stdout }}" state=absent    changed_when: False    when: master_certs_missing | bool -  delegate_to: localhost    become: no  - name: Lookup default group for ansible_ssh_user diff --git a/roles/openshift_master_facts/defaults/main.yml b/roles/openshift_master_facts/defaults/main.yml index f1cbbeb2d..a80313505 100644 --- a/roles/openshift_master_facts/defaults/main.yml +++ b/roles/openshift_master_facts/defaults/main.yml @@ -1,2 +1,24 @@  ---  openshift_master_default_subdomain: "{{ lookup('oo_option', 'openshift_master_default_subdomain') | default(None, true) }}" +openshift_master_admission_plugin_config: +  openshift.io/ImagePolicy: +    configuration: +      kind: ImagePolicyConfig +      apiVersion: v1 +      # To require that all images running on the platform be imported first, you may uncomment the +      # following rule. Any image that refers to a registry outside of OpenShift will be rejected unless it +      # points directly to an image digest (myregistry.com/myrepo/image@sha256:ea83bcf...) and that +      # digest has been imported via the import-image flow. +      #resolveImages: Required +      executionRules: +      - name: execution-denied +        # Reject all images that have the annotation images.openshift.io/deny-execution set to true. 
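+        # Editor's illustration (an assumption, not part of this role's shipped
+        # defaults): an image flagged by such infrastructure would carry the
+        # annotation in its image metadata, roughly:
+        #   metadata:
+        #     annotations:
+        #       images.openshift.io/deny-execution: "true"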
+        # This annotation may be set by infrastructure that wishes to flag particular images as dangerous +        onResources: +        - resource: pods +        - resource: builds +        reject: true +        matchImageAnnotations: +        - key: images.openshift.io/deny-execution +          value: "true" +        skipOnResolutionFailure: true diff --git a/roles/openshift_master_facts/filter_plugins/openshift_master.py b/roles/openshift_master_facts/filter_plugins/openshift_master.py index e570392ff..e767772ce 100644 --- a/roles/openshift_master_facts/filter_plugins/openshift_master.py +++ b/roles/openshift_master_facts/filter_plugins/openshift_master.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  '''  Custom filters for use in openshift-master  ''' @@ -469,7 +468,8 @@ class GitHubIdentityProvider(IdentityProviderOauthBase):      """      def __init__(self, api_version, idp):          IdentityProviderOauthBase.__init__(self, api_version, idp) -        self._optional += [['organizations']] +        self._optional += [['organizations'], +                           ['teams']]  class FilterModule(object): @@ -496,6 +496,7 @@ class FilterModule(object):          return u(yaml.dump([idp.to_dict() for idp in idp_list],                             allow_unicode=True,                             default_flow_style=False, +                           width=float("inf"),                             Dumper=AnsibleDumper))      @staticmethod diff --git a/roles/openshift_master_facts/tasks/main.yml b/roles/openshift_master_facts/tasks/main.yml index f048e0aef..79f054b42 100644 --- a/roles/openshift_master_facts/tasks/main.yml +++ b/roles/openshift_master_facts/tasks/main.yml @@ -92,7 +92,7 @@        master_count: "{{ openshift_master_count | default(None) }}"        controller_lease_ttl: "{{ osm_controller_lease_ttl | default(None) }}"        master_image: "{{ osm_image | default(None) }}" -      admission_plugin_config: "{{openshift_master_admission_plugin_config | default(None) }}" +      admission_plugin_config: "{{openshift_master_admission_plugin_config }}"        kube_admission_plugin_config: "{{openshift_master_kube_admission_plugin_config | default(None) }}"  # deprecated, merged with admission_plugin_config        oauth_template: "{{ openshift_master_oauth_template | default(None) }}"  # deprecated in origin 1.2 / OSE 3.2        oauth_templates: "{{ openshift_master_oauth_templates | default(None) }}" diff --git a/roles/openshift_metrics/README.md b/roles/openshift_metrics/README.md index f4c61a75e..84503217b 100644 --- a/roles/openshift_metrics/README.md +++ b/roles/openshift_metrics/README.md @@ -76,7 +76,7 @@ openshift_metrics_<COMPONENT>_(limits|requests)_(memory|cpu): <VALUE>  ```  e.g  ``` -openshift_metrics_cassandra_limits_memory: 1G +openshift_metrics_cassandra_limits_memory: 1Gi  openshift_metrics_hawkular_requests_cpu: 100  ``` diff --git a/roles/openshift_metrics/tasks/main.yaml b/roles/openshift_metrics/tasks/main.yaml index e9389c78d..9af10a849 100644 --- a/roles/openshift_metrics/tasks/main.yaml +++ b/roles/openshift_metrics/tasks/main.yaml @@ -33,6 +33,7 @@    local_action: command mktemp -d    register: local_tmp    changed_when: False +  become: false  - name: Copy the admin client config(s)    command: > diff --git a/roles/openshift_node/defaults/main.yml b/roles/openshift_node/defaults/main.yml index bd95f8526..5904ca9bc 100644 --- a/roles/openshift_node/defaults/main.yml +++ 
b/roles/openshift_node/defaults/main.yml @@ -8,4 +8,7 @@ os_firewall_allow:    port: 443/tcp  - service: OpenShift OVS sdn    port: 4789/udp -  when: openshift.node.use_openshift_sdn | bool +  when: openshift.common.use_openshift_sdn | bool +- service: Calico BGP Port +  port: 179/tcp +  when: openshift.common.use_calico | bool diff --git a/roles/openshift_node/meta/main.yml b/roles/openshift_node/meta/main.yml index 0da41d0c1..3b7e8126a 100644 --- a/roles/openshift_node/meta/main.yml +++ b/roles/openshift_node/meta/main.yml @@ -33,6 +33,12 @@ dependencies:    when: openshift.common.use_openshift_sdn | bool  - role: os_firewall    os_firewall_allow: +  - service: Calico BGP Port +    port: 179/tcp +  when: openshift.common.use_calico | bool + +- role: os_firewall +  os_firewall_allow:    - service: Kubernetes service NodePort TCP      port: "{{ openshift_node_port_range | default('') }}/tcp"    - service: Kubernetes service NodePort UDP diff --git a/roles/openshift_node/tasks/systemd_units.yml b/roles/openshift_node/tasks/systemd_units.yml index 52482d09b..f58c803c4 100644 --- a/roles/openshift_node/tasks/systemd_units.yml +++ b/roles/openshift_node/tasks/systemd_units.yml @@ -25,6 +25,13 @@    - openshift.common.is_containerized | bool    - not openshift.common.is_node_system_container | bool +- name: Install Node service file +  template: +    dest: "/etc/systemd/system/{{ openshift.common.service_type }}-node.service" +    src: "{{ openshift.common.service_type }}-node.service.j2" +  register: install_node_result +  when: not openshift.common.is_containerized | bool +  - name: Create the openvswitch service env file    template:      src: openvswitch.sysconfig.j2 @@ -115,6 +122,5 @@  - name: Reload systemd units    command: systemctl daemon-reload -  when: (openshift.common.is_containerized | bool and (install_node_result | changed or install_ovs_sysconfig | changed or install_node_dep_result | changed)) or install_oom_fix_result | changed    notify:    - restart node diff --git a/roles/openshift_node/templates/atomic-openshift-node.service.j2 b/roles/openshift_node/templates/atomic-openshift-node.service.j2 new file mode 100644 index 000000000..80232094a --- /dev/null +++ b/roles/openshift_node/templates/atomic-openshift-node.service.j2 @@ -0,0 +1,22 @@ +[Unit] +Description=Atomic OpenShift Node +After={{ openshift.docker.service_name }}.service +After=openvswitch.service +Wants={{ openshift.docker.service_name }}.service +Documentation=https://github.com/openshift/origin + +[Service] +Type=notify +EnvironmentFile=/etc/sysconfig/atomic-openshift-node +Environment=GOTRACEBACK=crash +ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS +LimitNOFILE=65536 +LimitCORE=infinity +WorkingDirectory=/var/lib/origin/ +SyslogIdentifier=atomic-openshift-node +Restart=always +RestartSec=5s +OOMScoreAdjust=-999 + +[Install] +WantedBy=multi-user.target diff --git a/roles/openshift_node/templates/openshift.docker.node.dep.service b/roles/openshift_node/templates/openshift.docker.node.dep.service index 0fb34cffd..4c47f8c0d 100644 --- a/roles/openshift_node/templates/openshift.docker.node.dep.service +++ b/roles/openshift_node/templates/openshift.docker.node.dep.service @@ -1,6 +1,6 @@  [Unit] -Requires=docker.service -After=docker.service +Requires={{ openshift.docker.service_name }}.service +After={{ openshift.docker.service_name }}.service  PartOf={{ openshift.common.service_type }}-node.service  Before={{ openshift.common.service_type }}-node.service diff --git 
a/roles/openshift_node/templates/openshift.docker.node.service b/roles/openshift_node/templates/openshift.docker.node.service index c42bdb7c3..d89b64b06 100644 --- a/roles/openshift_node/templates/openshift.docker.node.service +++ b/roles/openshift_node/templates/openshift.docker.node.service @@ -1,11 +1,11 @@  [Unit]  After={{ openshift.common.service_type }}-master.service -After=docker.service +After={{ openshift.docker.service_name }}.service  After=openvswitch.service -PartOf=docker.service -Requires=docker.service +PartOf={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service  {% if openshift.common.use_openshift_sdn %} -Requires=openvswitch.service +Wants=openvswitch.service  After=ovsdb-server.service  After=ovs-vswitchd.service  {% endif %} @@ -25,4 +25,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_node/templates/openvswitch.docker.service b/roles/openshift_node/templates/openvswitch.docker.service index 1e1f8967d..34aaaabd6 100644 --- a/roles/openshift_node/templates/openvswitch.docker.service +++ b/roles/openshift_node/templates/openvswitch.docker.service @@ -1,7 +1,7 @@  [Unit] -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  EnvironmentFile=/etc/sysconfig/openvswitch @@ -14,4 +14,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_node/templates/origin-node.service.j2 b/roles/openshift_node/templates/origin-node.service.j2 new file mode 100644 index 000000000..8047301e6 --- /dev/null +++ b/roles/openshift_node/templates/origin-node.service.j2 @@ -0,0 +1,21 @@ +[Unit] +Description=Origin Node +After={{ openshift.docker.service_name }}.service +Wants={{ openshift.docker.service_name }}.service +Documentation=https://github.com/openshift/origin + +[Service] +Type=notify +EnvironmentFile=/etc/sysconfig/origin-node +Environment=GOTRACEBACK=crash +ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS +LimitNOFILE=65536 +LimitCORE=infinity +WorkingDirectory=/var/lib/origin/ +SyslogIdentifier=origin-node +Restart=always +RestartSec=5s +OOMScoreAdjust=-999 + +[Install] +WantedBy=multi-user.target diff --git a/roles/openshift_node_certificates/tasks/main.yml b/roles/openshift_node_certificates/tasks/main.yml index 9120915b2..1a775178d 100644 --- a/roles/openshift_node_certificates/tasks/main.yml +++ b/roles/openshift_node_certificates/tasks/main.yml @@ -103,7 +103,6 @@    register: node_cert_mktemp    changed_when: False    when: node_certs_missing | bool -  delegate_to: localhost    become: no  - name: Create a tarball of the node config directories @@ -141,10 +140,10 @@      dest: "{{ openshift_node_cert_dir }}"    when: node_certs_missing | bool -- file: name={{ node_cert_mktemp.stdout }} state=absent +- name: Delete local temp directory +  local_action: file path="{{ node_cert_mktemp.stdout }}" state=absent    changed_when: False    when: node_certs_missing | bool -  delegate_to: localhost    become: no  - name: Copy OpenShift CA to system CA trust diff --git a/roles/openshift_node_upgrade/tasks/main.yml b/roles/openshift_node_upgrade/tasks/main.yml index 94c97d0a5..7231bdb9d 100644 --- 
a/roles/openshift_node_upgrade/tasks/main.yml +++ b/roles/openshift_node_upgrade/tasks/main.yml @@ -127,6 +127,12 @@    - openshift_disable_swap | default(true) | bool    # End Disable Swap Block +- name: Reset selinux context +  command: restorecon -RF {{ openshift.common.data_dir }}/openshift.local.volumes +  when: +  - ansible_selinux is defined +  - ansible_selinux.status == 'enabled' +  # Restart all services  - include: restart.yml @@ -137,7 +143,7 @@      name: "{{ openshift.common.hostname | lower }}"    register: node_output    delegate_to: "{{ groups.oo_first_master.0 }}" -  until: node_output.results.results[0].status.conditions | selectattr('type', 'match', '^Ready$') | map(attribute='status') | join | bool == True +  until: node_output.results.returncode == 0 and node_output.results.results[0].status.conditions | selectattr('type', 'match', '^Ready$') | map(attribute='status') | join | bool == True    # Give the node two minutes to come back online.    retries: 24    delay: 5 diff --git a/roles/openshift_node_upgrade/templates/openshift.docker.node.dep.service b/roles/openshift_node_upgrade/templates/openshift.docker.node.dep.service index 0fb34cffd..4c47f8c0d 100644 --- a/roles/openshift_node_upgrade/templates/openshift.docker.node.dep.service +++ b/roles/openshift_node_upgrade/templates/openshift.docker.node.dep.service @@ -1,6 +1,6 @@  [Unit] -Requires=docker.service -After=docker.service +Requires={{ openshift.docker.service_name }}.service +After={{ openshift.docker.service_name }}.service  PartOf={{ openshift.common.service_type }}-node.service  Before={{ openshift.common.service_type }}-node.service diff --git a/roles/openshift_node_upgrade/templates/openshift.docker.node.service b/roles/openshift_node_upgrade/templates/openshift.docker.node.service index 0ff398152..2a099301a 100644 --- a/roles/openshift_node_upgrade/templates/openshift.docker.node.service +++ b/roles/openshift_node_upgrade/templates/openshift.docker.node.service @@ -1,11 +1,11 @@  [Unit]  After={{ openshift.common.service_type }}-master.service -After=docker.service +After={{ openshift.docker.service_name }}.service  After=openvswitch.service -PartOf=docker.service -Requires=docker.service +PartOf={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service  {% if openshift.common.use_openshift_sdn %} -Requires=openvswitch.service +Wants=openvswitch.service  {% endif %}  Wants={{ openshift.common.service_type }}-master.service  Requires={{ openshift.common.service_type }}-node-dep.service @@ -23,4 +23,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git a/roles/openshift_node_upgrade/templates/openvswitch.docker.service b/roles/openshift_node_upgrade/templates/openvswitch.docker.service index 1e1f8967d..34aaaabd6 100644 --- a/roles/openshift_node_upgrade/templates/openvswitch.docker.service +++ b/roles/openshift_node_upgrade/templates/openvswitch.docker.service @@ -1,7 +1,7 @@  [Unit] -After=docker.service -Requires=docker.service -PartOf=docker.service +After={{ openshift.docker.service_name }}.service +Requires={{ openshift.docker.service_name }}.service +PartOf={{ openshift.docker.service_name }}.service  [Service]  EnvironmentFile=/etc/sysconfig/openvswitch @@ -14,4 +14,4 @@ Restart=always  RestartSec=5s  [Install] -WantedBy=docker.service +WantedBy={{ openshift.docker.service_name }}.service diff --git 
a/roles/openshift_repos/files/origin/repos/openshift-ansible-centos-paas-sig.repo b/roles/openshift_repos/files/origin/repos/openshift-ansible-centos-paas-sig.repo index 124bff09d..09364c26f 100644 --- a/roles/openshift_repos/files/origin/repos/openshift-ansible-centos-paas-sig.repo +++ b/roles/openshift_repos/files/origin/repos/openshift-ansible-centos-paas-sig.repo @@ -3,7 +3,7 @@ name=CentOS OpenShift Origin  baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/  enabled=1  gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/openshift-ansible-CentOS-SIG-PaaS +gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS  [centos-openshift-origin-testing]  name=CentOS OpenShift Origin Testing diff --git a/roles/openshift_repos/tasks/main.yaml b/roles/openshift_repos/tasks/main.yaml index 9a9436fcb..023b1a9b7 100644 --- a/roles/openshift_repos/tasks/main.yaml +++ b/roles/openshift_repos/tasks/main.yaml @@ -24,15 +24,19 @@      - openshift_additional_repos | length == 0      notify: refresh cache +  # Note: OpenShift repositories under CentOS may be shipped through the +  # "centos-release-openshift-origin" package, which configures the repository. +  # This task matches the file names provided by that package so that the repo file +  # and GPG key are not installed twice under different names, keeping the task idempotent.    - name: Configure origin gpg keys if needed      copy:        src: "{{ item.src }}"        dest: "{{ item.dest }}"      with_items:      - src: origin/gpg_keys/openshift-ansible-CentOS-SIG-PaaS -      dest: /etc/pki/rpm-gpg/ +      dest: /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS      - src: origin/repos/openshift-ansible-centos-paas-sig.repo -      dest: /etc/yum.repos.d/ +      dest: /etc/yum.repos.d/CentOS-OpenShift-Origin.repo      notify: refresh cache      when:      - ansible_os_family == "RedHat" diff --git a/roles/openshift_storage_glusterfs/README.md b/roles/openshift_storage_glusterfs/README.md index cf0fb94c9..7b310dbf8 100644 --- a/roles/openshift_storage_glusterfs/README.md +++ b/roles/openshift_storage_glusterfs/README.md @@ -8,10 +8,24 @@ Requirements  * Ansible 2.2 +Host Groups +----------- + +The following group is expected to be populated for this role to run: + +* `[glusterfs]` + +Additionally, the following group may be specified either in addition to or +instead of the above group to deploy a GlusterFS cluster for use by a natively +hosted Docker registry: + +* `[glusterfs_registry]` +  Role Variables  -------------- -From this role: +This role has the following variables that control the integration of a +GlusterFS cluster into a new or existing OpenShift cluster:  | Name                                             | Default value           |                                         |  |--------------------------------------------------|-------------------------|-----------------------------------------| @@ -31,6 +45,25 @@ From this role:  | openshift_storage_glusterfs_heketi_url           | Undefined               | URL for the heketi REST API, dynamically determined in native mode  | openshift_storage_glusterfs_heketi_wipe          | False                   | Destroy any existing heketi resources, defaults to the value of `openshift_storage_glusterfs_wipe` +Each role variable also has a corresponding variable to optionally configure a +separate GlusterFS cluster for use as storage for an integrated Docker +registry. 
These variables start with the prefix +`openshift_storage_glusterfs_registry_` and, for the most part, default to the +values in their corresponding non-registry variables. The following variables +are an exception: + +| Name                                              | Default value         |                                         | +|---------------------------------------------------|-----------------------|-----------------------------------------| +| openshift_storage_glusterfs_registry_namespace    | registry namespace    | Default is to use the hosted registry's namespace, otherwise 'default' +| openshift_storage_glusterfs_registry_nodeselector | 'storagenode=registry'| This allows for the logical separation of the registry GlusterFS cluster from any regular-use GlusterFS clusters + +Additionally, this role's behavior responds to the following registry-specific +variable: + +| Name                                         | Default value | Description                                                                  | +|----------------------------------------------|---------------|------------------------------------------------------------------------------| +| openshift_hosted_registry_glusterfs_swap     | False         | Whether to swap an existing registry's storage volume for a GlusterFS volume | +  Dependencies  ------------ @@ -47,6 +80,7 @@ Example Playbook    hosts: oo_first_master    roles:    - role: openshift_storage_glusterfs +    when: groups.oo_glusterfs_to_config | default([]) | count > 0  ```  License diff --git a/roles/openshift_storage_glusterfs/defaults/main.yml b/roles/openshift_storage_glusterfs/defaults/main.yml index ade850747..ebe9ca30b 100644 --- a/roles/openshift_storage_glusterfs/defaults/main.yml +++ b/roles/openshift_storage_glusterfs/defaults/main.yml @@ -2,7 +2,7 @@  openshift_storage_glusterfs_timeout: 300  openshift_storage_glusterfs_namespace: 'default'  openshift_storage_glusterfs_is_native: True -openshift_storage_glusterfs_nodeselector: "{{ openshift_storage_glusterfs_nodeselector_label | default('storagenode=glusterfs') | map_from_pairs }}" +openshift_storage_glusterfs_nodeselector: 'storagenode=glusterfs'  openshift_storage_glusterfs_image: "{{ 'rhgs3/rhgs-server-rhel7' | quote if deployment_type == 'openshift-enterprise' else 'gluster/gluster-centos' | quote }}"  openshift_storage_glusterfs_version: 'latest'  openshift_storage_glusterfs_wipe: False @@ -15,3 +15,22 @@ openshift_storage_glusterfs_heketi_admin_key: ''  openshift_storage_glusterfs_heketi_user_key: ''  openshift_storage_glusterfs_heketi_topology_load: True  openshift_storage_glusterfs_heketi_wipe: "{{ openshift_storage_glusterfs_wipe }}" +openshift_storage_glusterfs_heketi_url: "{{ omit }}" + +openshift_storage_glusterfs_registry_timeout: "{{ openshift_storage_glusterfs_timeout }}" +openshift_storage_glusterfs_registry_namespace: "{{ openshift.hosted.registry.namespace | default('default') }}" +openshift_storage_glusterfs_registry_is_native: "{{ openshift_storage_glusterfs_is_native }}" +openshift_storage_glusterfs_registry_nodeselector: 'storagenode=registry' +openshift_storage_glusterfs_registry_image: "{{ openshift_storage_glusterfs_image }}" +openshift_storage_glusterfs_registry_version: "{{ openshift_storage_glusterfs_version }}" +openshift_storage_glusterfs_registry_wipe: "{{ openshift_storage_glusterfs_wipe }}" +openshift_storage_glusterfs_registry_heketi_is_native: "{{ openshift_storage_glusterfs_heketi_is_native }}" 
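+# Every registry_* default here mirrors its non-registry counterpart, and any of
+# them can be overridden individually from the inventory. A minimal sketch (the
+# label value is hypothetical, not part of this role):
+#   openshift_storage_glusterfs_registry_nodeselector: 'storagenode=registry-east'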
+openshift_storage_glusterfs_registry_heketi_is_missing: "{{ openshift_storage_glusterfs_heketi_is_missing }}" +openshift_storage_glusterfs_registry_heketi_deploy_is_missing: "{{ openshift_storage_glusterfs_heketi_deploy_is_missing }}" +openshift_storage_glusterfs_registry_heketi_image: "{{ openshift_storage_glusterfs_heketi_image }}" +openshift_storage_glusterfs_registry_heketi_version: "{{ openshift_storage_glusterfs_heketi_version }}" +openshift_storage_glusterfs_registry_heketi_admin_key: "{{ openshift_storage_glusterfs_heketi_admin_key }}" +openshift_storage_glusterfs_registry_heketi_user_key: "{{ openshift_storage_glusterfs_heketi_user_key }}" +openshift_storage_glusterfs_registry_heketi_topology_load: "{{ openshift_storage_glusterfs_heketi_topology_load }}" +openshift_storage_glusterfs_registry_heketi_wipe: "{{ openshift_storage_glusterfs_heketi_wipe }}" +openshift_storage_glusterfs_registry_heketi_url: "{{ openshift_storage_glusterfs_heketi_url | default(omit) }}" diff --git a/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml b/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml new file mode 100644 index 000000000..fa5fa2cb0 --- /dev/null +++ b/roles/openshift_storage_glusterfs/tasks/glusterfs_common.yml @@ -0,0 +1,166 @@ +--- +- name: Verify target namespace exists +  oc_project: +    state: present +    name: "{{ glusterfs_namespace }}" +  when: glusterfs_is_native or glusterfs_heketi_is_native + +- include: glusterfs_deploy.yml +  when: glusterfs_is_native + +- name: Make sure heketi-client is installed +  package: name=heketi-client state=present + +- name: Delete pre-existing heketi resources +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    kind: "{{ item.kind }}" +    name: "{{ item.name | default(omit) }}" +    selector: "{{ item.selector | default(omit) }}" +    state: absent +  with_items: +  - kind: "template,route,service,dc,jobs,secret" +    selector: "deploy-heketi" +  - kind: "template,route,service,dc" +    name: "heketi" +  - kind: "svc,ep" +    name: "heketi-storage-endpoints" +  - kind: "sa" +    name: "heketi-service-account" +  failed_when: False +  when: glusterfs_heketi_wipe + +- name: Wait for deploy-heketi pods to terminate +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    kind: pod +    state: list +    selector: "glusterfs=deploy-heketi-pod" +  register: heketi_pod +  until: "heketi_pod.results.results[0]['items'] | count == 0" +  delay: 10 +  retries: "{{ (glusterfs_timeout / 10) | int }}" +  when: glusterfs_heketi_wipe + +- name: Wait for heketi pods to terminate +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    kind: pod +    state: list +    selector: "glusterfs=heketi-pod" +  register: heketi_pod +  until: "heketi_pod.results.results[0]['items'] | count == 0" +  delay: 10 +  retries: "{{ (glusterfs_timeout / 10) | int }}" +  when: glusterfs_heketi_wipe + +- name: Create heketi service account +  oc_serviceaccount: +    namespace: "{{ glusterfs_namespace }}" +    name: heketi-service-account +    state: present +  when: glusterfs_heketi_is_native + +- name: Add heketi service account to privileged SCC +  oc_adm_policy_user: +    user: "system:serviceaccount:{{ glusterfs_namespace }}:heketi-service-account" +    resource_kind: scc +    resource_name: privileged +    state: present +  when: glusterfs_heketi_is_native + +- name: Allow heketi service account to view/edit pods +  oc_adm_policy_user: +    user: "system:serviceaccount:{{ glusterfs_namespace }}:heketi-service-account" +    resource_kind: role 
+    resource_name: edit +    state: present +  when: glusterfs_heketi_is_native + +- name: Check for existing deploy-heketi pod +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    state: list +    kind: pod +    selector: "glusterfs=deploy-heketi-pod,deploy-heketi=support" +  register: heketi_pod +  when: glusterfs_heketi_is_native + +- name: Check if need to deploy deploy-heketi +  set_fact: +    glusterfs_heketi_deploy_is_missing: False +  when: +  - "glusterfs_heketi_is_native" +  - "heketi_pod.results.results[0]['items'] | count > 0" +  # deploy-heketi is not missing when there are one or more pods with matching labels whose 'Ready' status is True +  - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count > 0" + +- name: Check for existing heketi pod +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    state: list +    kind: pod +    selector: "glusterfs=heketi-pod" +  register: heketi_pod +  when: glusterfs_heketi_is_native + +- name: Check if need to deploy heketi +  set_fact: +    glusterfs_heketi_is_missing: False +  when: +  - "glusterfs_heketi_is_native" +  - "heketi_pod.results.results[0]['items'] | count > 0" +  # heketi is not missing when there are one or more pods with matching labels whose 'Ready' status is True +  - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count > 0" + +- include: heketi_deploy_part1.yml +  when: +  - glusterfs_heketi_is_native +  - glusterfs_heketi_deploy_is_missing +  - glusterfs_heketi_is_missing + +- name: Determine heketi URL +  oc_obj: +    namespace: "{{ glusterfs_namespace }}" +    state: list +    kind: ep +    selector: "glusterfs in (deploy-heketi-service, heketi-service)" +  register: heketi_url +  until: +  - "heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip != ''" +  - "heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port != ''" +  delay: 10 +  retries: "{{ (glusterfs_timeout / 10) | int }}" +  when: +  - glusterfs_heketi_is_native +  - glusterfs_heketi_url is undefined + +- name: Set heketi URL +  set_fact: +    glusterfs_heketi_url: "{{ heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip }}:{{ heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port }}" +  when: +  - glusterfs_heketi_is_native +  - glusterfs_heketi_url is undefined + +- name: Verify heketi service +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' cluster list" +  changed_when: False + +- name: Generate topology file +  template: +    src: "{{ openshift.common.examples_content_version }}/topology.json.j2" +    dest: "{{ mktemp.stdout }}/topology.json" +  when: +  - glusterfs_heketi_topology_load + +- name: Load heketi topology +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' topology load --json={{ mktemp.stdout }}/topology.json 2>&1" +  register: topology_load +  failed_when: "topology_load.rc != 0 or 'Unable' in topology_load.stdout" +  when: +  - glusterfs_heketi_topology_load + +- include: heketi_deploy_part2.yml +  when: +  - glusterfs_heketi_is_native +  - glusterfs_heketi_is_missing diff --git a/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml 
b/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml new file mode 100644 index 000000000..451990240 --- /dev/null +++ b/roles/openshift_storage_glusterfs/tasks/glusterfs_config.yml @@ -0,0 +1,22 @@ +--- +- set_fact: +    glusterfs_timeout: "{{ openshift_storage_glusterfs_timeout }}" +    glusterfs_namespace: "{{ openshift_storage_glusterfs_namespace }}" +    glusterfs_is_native: "{{ openshift_storage_glusterfs_is_native }}" +    glusterfs_nodeselector: "{{ openshift_storage_glusterfs_nodeselector | map_from_pairs }}" +    glusterfs_image: "{{ openshift_storage_glusterfs_image }}" +    glusterfs_version: "{{ openshift_storage_glusterfs_version }}" +    glusterfs_wipe: "{{ openshift_storage_glusterfs_wipe }}" +    glusterfs_heketi_is_native: "{{ openshift_storage_glusterfs_heketi_is_native }}" +    glusterfs_heketi_is_missing: "{{ openshift_storage_glusterfs_heketi_is_missing }}" +    glusterfs_heketi_deploy_is_missing: "{{ openshift_storage_glusterfs_heketi_deploy_is_missing }}" +    glusterfs_heketi_image: "{{ openshift_storage_glusterfs_heketi_image }}" +    glusterfs_heketi_version: "{{ openshift_storage_glusterfs_heketi_version }}" +    glusterfs_heketi_admin_key: "{{ openshift_storage_glusterfs_heketi_admin_key }}" +    glusterfs_heketi_user_key: "{{ openshift_storage_glusterfs_heketi_user_key }}" +    glusterfs_heketi_topology_load: "{{ openshift_storage_glusterfs_heketi_topology_load }}" +    glusterfs_heketi_wipe: "{{ openshift_storage_glusterfs_heketi_wipe }}" +    glusterfs_heketi_url: "{{ openshift_storage_glusterfs_heketi_url }}" +    glusterfs_nodes: "{{ g_glusterfs_hosts }}" + +- include: glusterfs_common.yml diff --git a/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml b/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml index 2b35e5137..579112349 100644 --- a/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml +++ b/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml @@ -1,44 +1,44 @@  ---  - assert: -    that: "openshift_storage_glusterfs_nodeselector.keys() | count == 1" +    that: "glusterfs_nodeselector.keys() | count == 1"      msg: Only one GlusterFS nodeselector key pair should be provided  - assert: -    that: "groups.oo_glusterfs_to_config | count >= 3" +    that: "glusterfs_nodes | count >= 3"      msg: There must be at least three GlusterFS nodes specified  - name: Delete pre-existing GlusterFS resources    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: "template,daemonset"      name: glusterfs      state: absent -  when: openshift_storage_glusterfs_wipe +  when: glusterfs_wipe  - name: Unlabel any existing GlusterFS nodes    oc_label:      name: "{{ item }}"      kind: node      state: absent -    labels: "{{ openshift_storage_glusterfs_nodeselector | oo_dict_to_list_of_dict }}" +    labels: "{{ glusterfs_nodeselector | oo_dict_to_list_of_dict }}"    with_items: "{{ groups.all }}" -  when: openshift_storage_glusterfs_wipe +  when: glusterfs_wipe  - name: Delete pre-existing GlusterFS config    file:      path: /var/lib/glusterd      state: absent    delegate_to: "{{ item }}" -  with_items: "{{ groups.oo_glusterfs_to_config | default([]) }}" -  when: openshift_storage_glusterfs_wipe +  with_items: "{{ glusterfs_nodes | default([]) }}" +  when: glusterfs_wipe  - name: Get GlusterFS storage devices state    command: "pvdisplay -C --noheadings -o pv_name,vg_name {% for device in hostvars[item].glusterfs_devices %}{{ device }} {% endfor %}"  
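  # glusterfs_devices is read from each host's inventory variables; a minimal
  # sketch of such a per-host entry (hostname and device paths are hypothetical):
  #   node1.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'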
  register: devices_info    delegate_to: "{{ item }}" -  with_items: "{{ groups.oo_glusterfs_to_config | default([]) }}" +  with_items: "{{ glusterfs_nodes | default([]) }}"    failed_when: False -  when: openshift_storage_glusterfs_wipe +  when: glusterfs_wipe    # Runs "vgremove -fy <vg>; pvremove -fy <pv>" for every device found to be a physical volume.  - name: Clear GlusterFS storage device contents @@ -46,12 +46,12 @@    delegate_to: "{{ item.item }}"    with_items: "{{ devices_info.results }}"    when: -  - openshift_storage_glusterfs_wipe +  - glusterfs_wipe    - item.stdout_lines | count > 0  - name: Add service accounts to privileged SCC    oc_adm_policy_user: -    user: "system:serviceaccount:{{ openshift_storage_glusterfs_namespace }}:{{ item }}" +    user: "system:serviceaccount:{{ glusterfs_namespace }}:{{ item }}"      resource_kind: scc      resource_name: privileged      state: present @@ -64,8 +64,8 @@      name: "{{ glusterfs_host }}"      kind: node      state: add -    labels: "{{ openshift_storage_glusterfs_nodeselector | oo_dict_to_list_of_dict }}" -  with_items: "{{ groups.oo_glusterfs_to_config | default([]) }}" +    labels: "{{ glusterfs_nodeselector | oo_dict_to_list_of_dict }}" +  with_items: "{{ glusterfs_nodes | default([]) }}"    loop_control:      loop_var: glusterfs_host @@ -76,7 +76,7 @@  - name: Create GlusterFS template    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: template      name: glusterfs      state: present @@ -85,16 +85,16 @@  - name: Deploy GlusterFS pods    oc_process: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      template_name: "glusterfs"      create: True      params: -      IMAGE_NAME: "{{ openshift_storage_glusterfs_image }}" -      IMAGE_VERSION: "{{ openshift_storage_glusterfs_version }}" +      IMAGE_NAME: "{{ glusterfs_image }}" +      IMAGE_VERSION: "{{ glusterfs_version }}"  - name: Wait for GlusterFS pods    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: pod      state: list      selector: "glusterfs-node=pod" @@ -102,6 +102,6 @@    until:    - "glusterfs_pods.results.results[0]['items'] | count > 0"    # There must be as many pods with 'Ready' status True as there are nodes expecting those pods -  - "glusterfs_pods.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count == groups.oo_glusterfs_to_config | count" +  - "glusterfs_pods.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count == glusterfs_nodes | count"    delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" +  retries: "{{ (glusterfs_timeout / 10) | int }}" diff --git a/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml b/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml index 6d02d2090..392f4b65b 100644 --- a/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml +++ b/roles/openshift_storage_glusterfs/tasks/glusterfs_registry.yml @@ -1,7 +1,30 @@  --- +- set_fact: +    glusterfs_timeout: "{{ openshift_storage_glusterfs_registry_timeout }}" +    glusterfs_namespace: "{{ openshift_storage_glusterfs_registry_namespace }}" +    glusterfs_is_native: "{{ 
openshift_storage_glusterfs_registry_is_native }}" +    glusterfs_nodeselector: "{{ openshift_storage_glusterfs_registry_nodeselector | map_from_pairs }}" +    glusterfs_image: "{{ openshift_storage_glusterfs_registry_image }}" +    glusterfs_version: "{{ openshift_storage_glusterfs_registry_version }}" +    glusterfs_wipe: "{{ openshift_storage_glusterfs_registry_wipe }}" +    glusterfs_heketi_is_native: "{{ openshift_storage_glusterfs_registry_heketi_is_native }}" +    glusterfs_heketi_is_missing: "{{ openshift_storage_glusterfs_registry_heketi_is_missing }}" +    glusterfs_heketi_deploy_is_missing: "{{ openshift_storage_glusterfs_registry_heketi_deploy_is_missing }}" +    glusterfs_heketi_image: "{{ openshift_storage_glusterfs_registry_heketi_image }}" +    glusterfs_heketi_version: "{{ openshift_storage_glusterfs_registry_heketi_version }}" +    glusterfs_heketi_admin_key: "{{ openshift_storage_glusterfs_registry_heketi_admin_key }}" +    glusterfs_heketi_user_key: "{{ openshift_storage_glusterfs_registry_heketi_user_key }}" +    glusterfs_heketi_topology_load: "{{ openshift_storage_glusterfs_registry_heketi_topology_load }}" +    glusterfs_heketi_wipe: "{{ openshift_storage_glusterfs_registry_heketi_wipe }}" +    glusterfs_heketi_url: "{{ openshift_storage_glusterfs_registry_heketi_url }}" +    glusterfs_nodes: "{{ g_glusterfs_registry_hosts }}" + +- include: glusterfs_common.yml +  when: g_glusterfs_registry_hosts != g_glusterfs_hosts +  - name: Delete pre-existing GlusterFS registry resources    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: "{{ item.kind }}"      name: "{{ item.name | default(omit) }}"      selector: "{{ item.selector | default(omit) }}" @@ -23,7 +46,7 @@  - name: Create GlusterFS registry endpoints    oc_obj: -    namespace: "{{ openshift.hosted.registry.namespace | default('default') }}" +    namespace: "{{ glusterfs_namespace }}"      state: present      kind: endpoints      name: glusterfs-registry-endpoints @@ -32,7 +55,7 @@  - name: Create GlusterFS registry service    oc_obj: -    namespace: "{{ openshift.hosted.registry.namespace | default('default') }}" +    namespace: "{{ glusterfs_namespace }}"      state: present      kind: service      name: glusterfs-registry-endpoints @@ -40,9 +63,9 @@      - "{{ mktemp.stdout }}/glusterfs-registry-service.yml"  - name: Check if GlusterFS registry volume exists -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' volume list" +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' volume list"    register: registry_volume  - name: Create GlusterFS registry volume -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' volume create --size={{ openshift.hosted.registry.storage.volume.size | replace('Gi','') }} --name={{ openshift.hosted.registry.storage.glusterfs.path }}" -  when: "'openshift.hosted.registry.storage.glusterfs.path' not in registry_volume.stdout" +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' volume create --size={{ openshift.hosted.registry.storage.volume.size | replace('Gi','') }} --name={{ openshift.hosted.registry.storage.glusterfs.path }}" +  when: 
"openshift.hosted.registry.storage.glusterfs.path not in registry_volume.stdout" diff --git a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part1.yml b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part1.yml index 76ae1db75..c14fcfb15 100644 --- a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part1.yml +++ b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part1.yml @@ -8,7 +8,7 @@  - name: Create deploy-heketi resources    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: template      name: deploy-heketi      state: present @@ -17,18 +17,18 @@  - name: Deploy deploy-heketi pod    oc_process: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      template_name: "deploy-heketi"      create: True      params: -      IMAGE_NAME: "{{ openshift_storage_glusterfs_heketi_image }}" -      IMAGE_VERSION: "{{ openshift_storage_glusterfs_heketi_version }}" -      HEKETI_USER_KEY: "{{ openshift_storage_glusterfs_heketi_user_key }}" -      HEKETI_ADMIN_KEY: "{{ openshift_storage_glusterfs_heketi_admin_key }}" +      IMAGE_NAME: "{{ glusterfs_heketi_image }}" +      IMAGE_VERSION: "{{ glusterfs_heketi_version }}" +      HEKETI_USER_KEY: "{{ glusterfs_heketi_user_key }}" +      HEKETI_ADMIN_KEY: "{{ glusterfs_heketi_admin_key }}"  - name: Wait for deploy-heketi pod    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: pod      state: list      selector: "glusterfs=deploy-heketi-pod,deploy-heketi=support" @@ -38,4 +38,4 @@    # Pod's 'Ready' status must be True    - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count == 1"    delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" +  retries: "{{ (glusterfs_timeout / 10) | int }}" diff --git a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml index 778b5a673..64410a9ab 100644 --- a/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml +++ b/roles/openshift_storage_glusterfs/tasks/heketi_deploy_part2.yml @@ -1,6 +1,6 @@  ---  - name: Create heketi DB volume -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' setup-openshift-heketi-storage --listfile {{ mktemp.stdout }}/heketi-storage.json" +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' setup-openshift-heketi-storage --listfile {{ mktemp.stdout }}/heketi-storage.json"    register: setup_storage    failed_when: False @@ -13,12 +13,12 @@  # Need `command` here because heketi-storage.json contains multiple objects.  
- name: Copy heketi DB to GlusterFS volume -  command: "{{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig create -f {{ mktemp.stdout }}/heketi-storage.json -n {{ openshift_storage_glusterfs_namespace }}" +  command: "{{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig create -f {{ mktemp.stdout }}/heketi-storage.json -n {{ glusterfs_namespace }}"    when: setup_storage.rc == 0  - name: Wait for copy job to finish    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: job      state: list      name: "heketi-storage-copy-job" @@ -28,7 +28,7 @@    # Pod's 'Complete' status must be True    - "heketi_job.results.results | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Complete'}) | map('bool') | select | list | count == 1"    delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" +  retries: "{{ (glusterfs_timeout / 10) | int }}"    failed_when:    - "'results' in heketi_job.results"    - "heketi_job.results.results | count > 0" @@ -38,7 +38,7 @@  - name: Delete deploy resources    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: "{{ item.kind }}"      name: "{{ item.name | default(omit) }}"      selector: "{{ item.selector | default(omit) }}" @@ -55,7 +55,7 @@  - name: Create heketi resources    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: template      name: heketi      state: present @@ -64,18 +64,18 @@  - name: Deploy heketi pod    oc_process: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      template_name: "heketi"      create: True      params: -      IMAGE_NAME: "{{ openshift_storage_glusterfs_heketi_image }}" -      IMAGE_VERSION: "{{ openshift_storage_glusterfs_heketi_version }}" -      HEKETI_USER_KEY: "{{ openshift_storage_glusterfs_heketi_user_key }}" -      HEKETI_ADMIN_KEY: "{{ openshift_storage_glusterfs_heketi_admin_key }}" +      IMAGE_NAME: "{{ glusterfs_heketi_image }}" +      IMAGE_VERSION: "{{ glusterfs_heketi_version }}" +      HEKETI_USER_KEY: "{{ glusterfs_heketi_user_key }}" +      HEKETI_ADMIN_KEY: "{{ glusterfs_heketi_admin_key }}"  - name: Wait for heketi pod    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      kind: pod      state: list      selector: "glusterfs=heketi-pod" @@ -85,11 +85,11 @@    # Pod's 'Ready' status must be True    - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count == 1"    delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" +  retries: "{{ (glusterfs_timeout / 10) | int }}"  - name: Determine heketi URL    oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" +    namespace: "{{ glusterfs_namespace }}"      state: list      kind: ep      selector: "glusterfs=heketi-service" @@ -98,12 +98,12 @@    - "heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip != ''"    - "heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port != ''"    delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" +  retries: "{{ (glusterfs_timeout / 10) | int }}"  - 
name: Set heketi URL    set_fact: -    openshift_storage_glusterfs_heketi_url: "{{ heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip }}:{{ heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port }}" +    glusterfs_heketi_url: "{{ heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip }}:{{ heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port }}"  - name: Verify heketi service -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' cluster list" +  command: "heketi-cli -s http://{{ glusterfs_heketi_url }} --user admin --secret '{{ glusterfs_heketi_admin_key }}' cluster list"    changed_when: False diff --git a/roles/openshift_storage_glusterfs/tasks/main.yml b/roles/openshift_storage_glusterfs/tasks/main.yml index 71c4a2732..ebd8db453 100644 --- a/roles/openshift_storage_glusterfs/tasks/main.yml +++ b/roles/openshift_storage_glusterfs/tasks/main.yml @@ -5,174 +5,14 @@    changed_when: False    check_mode: no -- name: Verify target namespace exists -  oc_project: -    state: present -    name: "{{ openshift_storage_glusterfs_namespace }}" -  when: openshift_storage_glusterfs_is_native or openshift_storage_glusterfs_heketi_is_native - -- include: glusterfs_deploy.yml -  when: openshift_storage_glusterfs_is_native - -- name: Make sure heketi-client is installed -  package: name=heketi-client state=present - -- name: Delete pre-existing heketi resources -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    kind: "{{ item.kind }}" -    name: "{{ item.name | default(omit) }}" -    selector: "{{ item.selector | default(omit) }}" -    state: absent -  with_items: -  - kind: "template,route,service,jobs,dc,secret" -    selector: "deploy-heketi" -  - kind: "template,route,dc,service" -    name: "heketi" -  - kind: "svc,ep" -    name: "heketi-storage-endpoints" -  - kind: "sa" -    name: "heketi-service-account" -  failed_when: False -  when: openshift_storage_glusterfs_heketi_wipe - -- name: Wait for deploy-heketi pods to terminate -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    kind: pod -    state: list -    selector: "glusterfs=deploy-heketi-pod" -  register: heketi_pod -  until: "heketi_pod.results.results[0]['items'] | count == 0" -  delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" -  when: openshift_storage_glusterfs_heketi_wipe - -- name: Wait for heketi pods to terminate -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    kind: pod -    state: list -    selector: "glusterfs=heketi-pod" -  register: heketi_pod -  until: "heketi_pod.results.results[0]['items'] | count == 0" -  delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" -  when: openshift_storage_glusterfs_heketi_wipe - -- name: Create heketi service account -  oc_serviceaccount: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    name: heketi-service-account -    state: present -  when: openshift_storage_glusterfs_heketi_is_native - -- name: Add heketi service account to privileged SCC -  oc_adm_policy_user: -    user: "system:serviceaccount:{{ openshift_storage_glusterfs_namespace }}:heketi-service-account" -    resource_kind: scc -    resource_name: privileged -    state: present -  when: openshift_storage_glusterfs_heketi_is_native - -- name: Allow heketi service account to view/edit pods -  oc_adm_policy_user: -    user: 
"system:serviceaccount:{{ openshift_storage_glusterfs_namespace }}:heketi-service-account" -    resource_kind: role -    resource_name: edit -    state: present -  when: openshift_storage_glusterfs_heketi_is_native - -- name: Check for existing deploy-heketi pod -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    state: list -    kind: pod -    selector: "glusterfs=deploy-heketi-pod,deploy-heketi=support" -  register: heketi_pod -  when: openshift_storage_glusterfs_heketi_is_native - -- name: Check if need to deploy deploy-heketi -  set_fact: -    openshift_storage_glusterfs_heketi_deploy_is_missing: False -  when: -  - "openshift_storage_glusterfs_heketi_is_native" -  - "heketi_pod.results.results[0]['items'] | count > 0" -  # deploy-heketi is not missing when there are one or more pods with matching labels whose 'Ready' status is True -  - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count > 0" - -- name: Check for existing heketi pod -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    state: list -    kind: pod -    selector: "glusterfs=heketi-pod" -  register: heketi_pod -  when: openshift_storage_glusterfs_heketi_is_native - -- name: Check if need to deploy heketi -  set_fact: -    openshift_storage_glusterfs_heketi_is_missing: False +- include: glusterfs_config.yml    when: -  - "openshift_storage_glusterfs_heketi_is_native" -  - "heketi_pod.results.results[0]['items'] | count > 0" -  # heketi is not missing when there are one or more pods with matching labels whose 'Ready' status is True -  - "heketi_pod.results.results[0]['items'] | oo_collect(attribute='status.conditions') | oo_collect(attribute='status', filters={'type': 'Ready'}) | map('bool') | select | list | count > 0" - -- include: heketi_deploy_part1.yml -  when: -  - openshift_storage_glusterfs_heketi_is_native -  - openshift_storage_glusterfs_heketi_deploy_is_missing -  - openshift_storage_glusterfs_heketi_is_missing - -- name: Determine heketi URL -  oc_obj: -    namespace: "{{ openshift_storage_glusterfs_namespace }}" -    state: list -    kind: ep -    selector: "glusterfs in (deploy-heketi-service, heketi-service)" -  register: heketi_url -  until: -  - "heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip != ''" -  - "heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port != ''" -  delay: 10 -  retries: "{{ (openshift_storage_glusterfs_timeout / 10) | int }}" -  when: -  - openshift_storage_glusterfs_heketi_is_native -  - openshift_storage_glusterfs_heketi_url is undefined - -- name: Set heketi URL -  set_fact: -    openshift_storage_glusterfs_heketi_url: "{{ heketi_url.results.results[0]['items'][0].subsets[0].addresses[0].ip }}:{{ heketi_url.results.results[0]['items'][0].subsets[0].ports[0].port }}" -  when: -  - openshift_storage_glusterfs_heketi_is_native -  - openshift_storage_glusterfs_heketi_url is undefined - -- name: Verify heketi service -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' cluster list" -  changed_when: False - -- name: Generate topology file -  template: -    src: "{{ openshift.common.examples_content_version }}/topology.json.j2" -    dest: "{{ mktemp.stdout }}/topology.json" -  when: -  - openshift_storage_glusterfs_is_native -  - openshift_storage_glusterfs_heketi_topology_load - 
-- name: Load heketi topology -  command: "heketi-cli -s http://{{ openshift_storage_glusterfs_heketi_url }} --user admin --secret '{{ openshift_storage_glusterfs_heketi_admin_key }}' topology load --json={{ mktemp.stdout }}/topology.json 2>&1" -  register: topology_load -  failed_when: topology_load.rc != 0 or 'Unable' in topology_load.stdout -  when: -  - openshift_storage_glusterfs_is_native -  - openshift_storage_glusterfs_heketi_topology_load - -- include: heketi_deploy_part2.yml -  when: openshift_storage_glusterfs_heketi_is_native and openshift_storage_glusterfs_heketi_is_missing +  - g_glusterfs_hosts | default([]) | count > 0  - include: glusterfs_registry.yml -  when: openshift.hosted.registry.storage.kind == 'glusterfs' +  when: +  - g_glusterfs_registry_hosts | default([]) | count > 0 +  - "openshift.hosted.registry.storage.kind == 'glusterfs' or openshift.hosted.registry.glusterfs.swap"  - name: Delete temp directory    file: diff --git a/roles/openshift_storage_glusterfs/templates/v3.6/glusterfs-registry-endpoints.yml.j2 b/roles/openshift_storage_glusterfs/templates/v3.6/glusterfs-registry-endpoints.yml.j2 index d72d085c9..605627ab5 100644 --- a/roles/openshift_storage_glusterfs/templates/v3.6/glusterfs-registry-endpoints.yml.j2 +++ b/roles/openshift_storage_glusterfs/templates/v3.6/glusterfs-registry-endpoints.yml.j2 @@ -4,7 +4,7 @@ metadata:    name: glusterfs-registry-endpoints  subsets:  - addresses: -{% for node in groups.oo_glusterfs_to_config %} +{% for node in glusterfs_nodes %}    - ip: {{ hostvars[node].glusterfs_ip | default(hostvars[node].openshift.common.ip) }}  {% endfor %}    ports: diff --git a/roles/openshift_storage_glusterfs/templates/v3.6/topology.json.j2 b/roles/openshift_storage_glusterfs/templates/v3.6/topology.json.j2 index eb5b4544f..33d8f9b36 100644 --- a/roles/openshift_storage_glusterfs/templates/v3.6/topology.json.j2 +++ b/roles/openshift_storage_glusterfs/templates/v3.6/topology.json.j2 @@ -1,7 +1,7 @@  {    "clusters": [  {%- set clusters = {} -%} -{%- for node in groups.oo_glusterfs_to_config -%} +{%- for node in glusterfs_nodes -%}    {%- set cluster = hostvars[node].glusterfs_cluster if 'glusterfs_cluster' in node else '1' -%}    {%- if cluster in clusters -%}      {%- set _dummy = clusters[cluster].append(node) -%} diff --git a/roles/openshift_version/meta/main.yml b/roles/openshift_version/meta/main.yml index 37c80c29e..ca896addd 100644 --- a/roles/openshift_version/meta/main.yml +++ b/roles/openshift_version/meta/main.yml @@ -16,3 +16,4 @@ dependencies:  - role: openshift_docker_facts  - role: docker    when: openshift.common.is_containerized | default(False) | bool and not skip_docker_role | default(False) | bool +- role: lib_utils diff --git a/roles/openshift_version/tasks/main.yml b/roles/openshift_version/tasks/main.yml index fa9b20e92..f2f4d16f0 100644 --- a/roles/openshift_version/tasks/main.yml +++ b/roles/openshift_version/tasks/main.yml @@ -3,6 +3,7 @@  - set_fact:      is_containerized: "{{ openshift.common.is_containerized | default(False) | bool }}" +    is_atomic: "{{ openshift.common.is_atomic | default(False) | bool }}"  # Block attempts to install origin without specifying some kind of version information.  
# This is because the latest tags for origin are usually alpha builds, which should not @@ -90,6 +91,26 @@    include: set_version_containerized.yml    when: is_containerized | bool +- block: +  - name: Get available {{ openshift.common.service_type}} version +    repoquery: +      name: "{{ openshift.common.service_type}}" +      ignore_excluders: true +    register: rpm_results +  - fail: +      msg: "Package {{ openshift.common.service_type}} not found" +    when: not rpm_results.results.package_found +  - set_fact: +      openshift_rpm_version: "{{ rpm_results.results.versions.available_versions.0 | default('0.0', True) }}" +  - name: Fail if rpm version and docker image version are different +    fail: +      msg: "OCP rpm version {{ openshift_rpm_version }} is different from OCP image version {{ openshift_version }}" +    # Both versions have the same string representation +    when: openshift_rpm_version != openshift_version +  when: +  - is_containerized | bool +  - not is_atomic | bool +  # Warn if the user has provided an openshift_image_tag but is not doing a containerized install  # NOTE: This will need to be modified/removed for future container + rpm installations work.  - name: Warn if openshift_image_tag is defined when not doing a containerized install diff --git a/roles/openshift_version/tasks/set_version_rpm.yml b/roles/openshift_version/tasks/set_version_rpm.yml index c7604af1a..c40777bf1 100644 --- a/roles/openshift_version/tasks/set_version_rpm.yml +++ b/roles/openshift_version/tasks/set_version_rpm.yml @@ -7,42 +7,18 @@    - openshift_pkg_version is defined    - openshift_version is not defined -# if {{ openshift.common.service_type}}-excluder is enabled, -# the repoquery for {{ openshift.common.service_type}} will not work. -# Thus, create a temporary yum,conf file where exclude= is set to an empty list -- name: Create temporary yum.conf file -  command: mktemp -d /tmp/yum.conf.XXXXXX -  register: yum_conf_temp_file_result +- block: +  - name: Get available {{ openshift.common.service_type}} version +    repoquery: +      name: "{{ openshift.common.service_type}}" +      ignore_excluders: true +    register: rpm_results -- set_fact: -    yum_conf_temp_file: "{{yum_conf_temp_file_result.stdout}}/yum.conf" +  - fail: +      msg: "Package {{ openshift.common.service_type}} not found" +    when: not rpm_results.results.package_found -- name: Copy yum.conf into the temporary file -  copy: -    src: /etc/yum.conf -    dest: "{{ yum_conf_temp_file }}" -    remote_src: True - -- name: Clear the exclude= list in the temporary yum.conf -  lineinfile: -    # since ansible 2.3 s/dest/path -    dest: "{{ yum_conf_temp_file }}" -    regexp: '^exclude=' -    line: 'exclude=' - -- name: Gather common package version -  command: > -    {{ repoquery_cmd }} --config "{{ yum_conf_temp_file }}" --qf '%{version}' "{{ openshift.common.service_type}}" -  register: common_version -  failed_when: false -  changed_when: false -  when: openshift_version is not defined - -- name: Delete the temporary yum.conf -  file: -    path: "{{ yum_conf_temp_file_result.stdout }}" -    state: absent - -- set_fact: -    openshift_version: "{{ common_version.stdout | default('0.0', True) }}" -  when: openshift_version is not defined +  - set_fact: +      openshift_version: "{{ rpm_results.results.versions.available_versions.0 | default('0.0', True) }}" +  when: +  - openshift_version is not defined diff --git a/roles/os_firewall/README.md b/roles/os_firewall/README.md index 43db3cc74..e7ef544f4 100644 --- 
a/roles/os_firewall/README.md +++ b/roles/os_firewall/README.md @@ -17,7 +17,7 @@ Role Variables  | Name                      | Default |                                        |  |---------------------------|---------|----------------------------------------| -| os_firewall_use_firewalld | True    | If false, use iptables                 | +| os_firewall_use_firewalld | False   | If false, use iptables                 |  | os_firewall_allow         | []      | List of service,port mappings to allow |  | os_firewall_deny          | []      | List of service, port mappings to deny | diff --git a/roles/os_firewall/defaults/main.yml b/roles/os_firewall/defaults/main.yml index 4c544122f..01859e5fc 100644 --- a/roles/os_firewall/defaults/main.yml +++ b/roles/os_firewall/defaults/main.yml @@ -2,6 +2,6 @@  os_firewall_enabled: True  # firewalld is not supported on Atomic Host  # https://bugzilla.redhat.com/show_bug.cgi?id=1403331 -os_firewall_use_firewalld: "{{ False if openshift.common.is_atomic | bool else True }}" +os_firewall_use_firewalld: "{{ False }}"  os_firewall_allow: []  os_firewall_deny: [] diff --git a/roles/os_firewall/library/os_firewall_manage_iptables.py b/roles/os_firewall/library/os_firewall_manage_iptables.py index 8d4878fa7..aeee3ede8 100755 --- a/roles/os_firewall/library/os_firewall_manage_iptables.py +++ b/roles/os_firewall/library/os_firewall_manage_iptables.py @@ -1,6 +1,5 @@  #!/usr/bin/python  # -*- coding: utf-8 -*- -# vim: expandtab:tabstop=4:shiftwidth=4  # pylint: disable=fixme, missing-docstring  import subprocess
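
With the os_firewall changes above, firewalld is no longer enabled by default anywhere and hosts fall back to iptables unless the inventory opts back in. A minimal sketch of such an override, assuming a hypothetical `group_vars/OSEv3.yml` (the allow entry is illustrative, not taken from this diff):

```yaml
# group_vars/OSEv3.yml (hypothetical): opt back in to firewalld on non-Atomic hosts
os_firewall_use_firewalld: True

# os_firewall_allow is a list of service,port mappings, per the role's README
os_firewall_allow:
- service: heketi rest api   # illustrative service label
  port: 8080/tcp
```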
