-rw-r--r--  .dockerignore                                                 |  4
-rw-r--r--  BUILD.md                                                      | 29
-rw-r--r--  README_CONTAINER_IMAGE.md                                     |  6
-rw-r--r--  images/installer/Dockerfile                                   | 62
-rw-r--r--  images/installer/Dockerfile.rhel7                             | 67
-rw-r--r--  images/installer/README_CONTAINER_IMAGE.md                    | 27  (renamed from images/installer/system-container/README.md)
-rw-r--r--  images/installer/root/exports/config.json.template            |  0  (renamed from images/installer/system-container/root/exports/config.json.template)
-rw-r--r--  images/installer/root/exports/manifest.json                   |  0  (renamed from images/installer/system-container/root/exports/manifest.json)
-rw-r--r--  images/installer/root/exports/service.template                |  0  (renamed from images/installer/system-container/root/exports/service.template)
-rw-r--r--  images/installer/root/exports/tmpfiles.template               |  0  (renamed from images/installer/system-container/root/exports/tmpfiles.template)
-rwxr-xr-x  images/installer/root/usr/local/bin/entrypoint                | 17
-rwxr-xr-x  images/installer/root/usr/local/bin/run                       | 46
-rwxr-xr-x  images/installer/root/usr/local/bin/run-system-container.sh   |  0  (renamed from images/installer/system-container/root/usr/local/bin/run-system-container.sh)
-rwxr-xr-x  images/installer/root/usr/local/bin/usage                     | 33
-rwxr-xr-x  images/installer/root/usr/local/bin/usage.ocp                 | 33
-rwxr-xr-x  images/installer/root/usr/local/bin/user_setup                | 17
-rw-r--r--  playbooks/byo/openshift-checks/README.md                      | 11
-rwxr-xr-x  roles/openshift_facts/library/openshift_facts.py              | 83
-rw-r--r--  roles/openshift_service_catalog/files/kubeservicecatalog_roles_bindings.yml |  6
-rw-r--r--  roles/openshift_storage_nfs/tasks/main.yml                    |  2
-rw-r--r--  roles/openshift_storage_nfs/templates/exports.j2              |  1
21 files changed, 257 insertions(+), 187 deletions(-)
diff --git a/.dockerignore b/.dockerignore
index 968811df5..0a70c5bfa 100644
--- a/.dockerignore
+++ b/.dockerignore
@@ -1,8 +1,12 @@
.*
bin
docs
+hack
+inventory
test
utils
**/*.md
*.spec
+*.ini
+*.txt
setup*
diff --git a/BUILD.md b/BUILD.md
index d9de3f5aa..1c270db23 100644
--- a/BUILD.md
+++ b/BUILD.md
@@ -33,35 +33,6 @@ To build a container image of `openshift-ansible` using standalone **Docker**:
cd openshift-ansible
docker build -f images/installer/Dockerfile -t openshift-ansible .
-### Building on OpenShift
-
-To build an openshift-ansible image using an **OpenShift** [build and image stream](https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html) the straightforward command would be:
-
- oc new-build registry.centos.org/openshift/playbook2image~https://github.com/openshift/openshift-ansible
-
-However: because the `Dockerfile` for this repository is not in the top level directory, and because we can't change the build context to the `images/installer` path as it would cause the build to fail, the `oc new-app` command above will create a build configuration using the *source to image* strategy, which is the default approach of the [playbook2image](https://github.com/openshift/playbook2image) base image. This does build an image successfully, but unfortunately the resulting image will be missing some customizations that are handled by the [Dockerfile](images/installer/Dockerfile) in this repo.
-
-At the time of this writing there is no straightforward option to [set the dockerfilePath](https://docs.openshift.org/latest/dev_guide/builds/build_strategies.html#dockerfile-path) of a `docker` build strategy with `oc new-build`. The alternatives to achieve this are:
-
-- Use the simple `oc new-build` command above to generate the BuildConfig and ImageStream objects, and then manually edit the generated build configuration to change its strategy to `dockerStrategy` and set `dockerfilePath` to `images/installer/Dockerfile`.
-
-- Download and pass the `Dockerfile` to `oc new-build` with the `-D` option:
-
-```
-curl -s https://raw.githubusercontent.com/openshift/openshift-ansible/master/images/installer/Dockerfile |
- oc new-build -D - \
- --docker-image=registry.centos.org/openshift/playbook2image \
- https://github.com/openshift/openshift-ansible
-```
-
-Once a build is started, the progress of the build can be monitored with:
-
- oc logs -f bc/openshift-ansible
-
-Once built, the image will be visible in the Image Stream created by `oc new-app`:
-
- oc describe imagestream openshift-ansible
-
## Build the Atomic System Container
A system container runs using runC instead of Docker and it is managed
diff --git a/README_CONTAINER_IMAGE.md b/README_CONTAINER_IMAGE.md
index cf3b432df..a2151352d 100644
--- a/README_CONTAINER_IMAGE.md
+++ b/README_CONTAINER_IMAGE.md
@@ -1,6 +1,6 @@
# Containerized openshift-ansible to run playbooks
-The [Dockerfile](images/installer/Dockerfile) in this repository uses the [playbook2image](https://github.com/openshift/playbook2image) source-to-image base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.
+The [Dockerfile](images/installer/Dockerfile) in this repository can be used to build a containerized `openshift-ansible`. The resulting image can run any of the provided playbooks. See [BUILD.md](BUILD.md) for image build instructions.
The image is designed to **run as a non-root user**. The container's UID is mapped to the username `default` at runtime. Therefore, the container's environment reflects that user's settings, and the configuration should match that. For example `$HOME` is `/opt/app-root/src`, so ssh keys are expected to be under `/opt/app-root/src/.ssh`. If you ran a container as `root` you would have to adjust the container's configuration accordingly, e.g. by placing ssh keys under `/root/.ssh` instead. Nevertheless, the expectation is that containers will be run as non-root; for example, this container image can be run inside OpenShift under the default `restricted` [security context constraint](https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints).
@@ -14,8 +14,6 @@ This provides consistency with other images used by the platform and it's also a
## Usage
-The `playbook2image` base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/openshift/playbook2image) documentation.
-
At the very least, when running a container you must specify:
1. An **inventory**. This can be a location inside the container (possibly mounted as a volume) with a path referenced via the `INVENTORY_FILE` environment variable. Alternatively you can serve the inventory file from a web server and use the `INVENTORY_URL` environment variable to fetch it, or `DYNAMIC_SCRIPT_URL` to download a script that provides a dynamic inventory.
@@ -52,8 +50,6 @@ Here is a detailed explanation of the options used in the command above:
Further usage examples are available in the [examples directory](examples/) with samples of how to use the image from within OpenShift.
-Additional usage information for images built from `playbook2image` like this one can be found in the [playbook2image examples](https://github.com/openshift/playbook2image/tree/master/examples).
-
## Running openshift-ansible as a System Container
Building the System Container: See the [BUILD.md](BUILD.md).
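For illustration, the `INVENTORY_URL` option described above can replace the bind-mounted inventory file; a minimal sketch, assuming the image is tagged `openshift/origin-ansible` and that the URL serves a valid Ansible inventory (both the URL and tag are placeholders):

```
docker run -tu `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
    -e INVENTORY_URL=https://example.com/ansible/hosts \
    -e PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
    -e OPTS="-v" \
    openshift/origin-ansible
```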
diff --git a/images/installer/Dockerfile b/images/installer/Dockerfile
index 915dfe377..d03f33a1d 100644
--- a/images/installer/Dockerfile
+++ b/images/installer/Dockerfile
@@ -1,10 +1,18 @@
-# Using playbook2image as a base
-# See https://github.com/openshift/playbook2image for details on the image
-# including documentation for the settings/env vars referenced below
-FROM registry.centos.org/openshift/playbook2image:latest
+FROM centos:7
MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com>
+USER root
+
+# install ansible and deps
+RUN INSTALL_PKGS="python-lxml pyOpenSSL python2-cryptography openssl java-1.8.0-openjdk-headless httpd-tools openssh-clients" \
+ && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
+ && EPEL_PKGS="ansible python-passlib python2-boto" \
+ && yum install -y epel-release \
+ && yum install -y --setopt=tsflags=nodocs $EPEL_PKGS \
+ && rpm -q $INSTALL_PKGS $EPEL_PKGS \
+ && yum clean all
+
LABEL name="openshift/origin-ansible" \
summary="OpenShift's installation and configuration tool" \
description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
@@ -12,40 +20,24 @@ LABEL name="openshift/origin-ansible" \
io.k8s.display-name="openshift-ansible" \
io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
io.openshift.expose-services="" \
- io.openshift.tags="openshift,install,upgrade,ansible"
+ io.openshift.tags="openshift,install,upgrade,ansible" \
+ atomic.run="once"
-USER root
+ENV USER_UID=1001 \
+ HOME=/opt/app-root/src \
+ WORK_DIR=/usr/share/ansible/openshift-ansible \
+ OPTS="-v"
-# Create a symlink to /opt/app-root/src so that files under /usr/share/ansible are accessible.
-# This is required since the system-container uses by default the playbook under
-# /usr/share/ansible/openshift-ansible. With this change we won't need to keep two different
-# configurations for the two images.
-RUN mkdir -p /usr/share/ansible/ && ln -s /opt/app-root/src /usr/share/ansible/openshift-ansible
+# Add image scripts and files for running as a system container
+COPY images/installer/root /
+# Include playbooks, roles, plugins, etc. from this repo
+COPY . ${WORK_DIR}
-RUN INSTALL_PKGS="skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
- yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
- rpm -V $INSTALL_PKGS && \
- yum clean all
+RUN /usr/local/bin/user_setup \
+ && rm /usr/local/bin/usage.ocp
USER ${USER_UID}
-# The playbook to be run is specified via the PLAYBOOK_FILE env var.
-# This sets a default of openshift_facts.yml as it's an informative playbook
-# that can help test that everything is set properly (inventory, sshkeys)
-ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
- OPTS="-v" \
- INSTALL_OC=true
-
-# playbook2image's assemble script expects the source to be available in
-# /tmp/src (as per the source-to-image specs) so we import it there
-ADD . /tmp/src
-
-# Running the 'assemble' script provided by playbook2image will install
-# dependencies specified in requirements.txt and install the 'oc' client
-# as per the INSTALL_OC environment setting above
-RUN /usr/libexec/s2i/assemble
-
-# Add files for running as a system container
-COPY images/installer/system-container/root /
-
-CMD [ "/usr/libexec/s2i/run" ]
+WORKDIR ${WORK_DIR}
+ENTRYPOINT [ "/usr/local/bin/entrypoint" ]
+CMD [ "/usr/local/bin/run" ]
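A sketch of how the reworked image behaves, assuming it is built with the command from BUILD.md and tagged `openshift-ansible` (tag name is an assumption). Since the `ENV PLAYBOOK_FILE` default was dropped, running the container without setting `PLAYBOOK_FILE` is expected to fall through `/usr/local/bin/run` into the usage text:

```
docker build -f images/installer/Dockerfile -t openshift-ansible .

# No PLAYBOOK_FILE set: /usr/local/bin/run should exec /usr/local/bin/usage
docker run -it --rm openshift-ansible
```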
diff --git a/images/installer/Dockerfile.rhel7 b/images/installer/Dockerfile.rhel7
index f861d4bcf..3110f409c 100644
--- a/images/installer/Dockerfile.rhel7
+++ b/images/installer/Dockerfile.rhel7
@@ -1,55 +1,46 @@
-FROM openshift3/playbook2image
+FROM rhel7.3:7.3-released
MAINTAINER OpenShift Team <dev@lists.openshift.redhat.com>
-# override env vars from base image
-ENV SUMMARY="OpenShift's installation and configuration tool" \
- DESCRIPTION="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster"
+USER root
+
+# Playbooks, roles, and their dependencies are installed from packages.
+RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto openssl java-1.8.0-openjdk-headless httpd-tools" \
+ && yum repolist > /dev/null \
+ && yum-config-manager --enable rhel-7-server-ose-3.6-rpms \
+ && yum-config-manager --enable rhel-7-server-rh-common-rpms \
+ && yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS \
+ && rpm -q $INSTALL_PKGS \
+ && yum clean all
LABEL name="openshift3/ose-ansible" \
- summary="$SUMMARY" \
- description="$DESCRIPTION" \
+ summary="OpenShift's installation and configuration tool" \
+ description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
url="https://github.com/openshift/openshift-ansible" \
io.k8s.display-name="openshift-ansible" \
- io.k8s.description="$DESCRIPTION" \
+ io.k8s.description="A containerized openshift-ansible image to let you run playbooks to install, upgrade, maintain and check an OpenShift cluster" \
io.openshift.expose-services="" \
io.openshift.tags="openshift,install,upgrade,ansible" \
com.redhat.component="aos3-installation-docker" \
version="v3.6.0" \
release="1" \
- architecture="x86_64"
-
-# Playbooks, roles and their dependencies are installed from packages.
-# Unlike in Dockerfile, we don't invoke the 'assemble' script here
-# because all content and dependencies (like 'oc') is already
-# installed via yum.
-USER root
-RUN INSTALL_PKGS="atomic-openshift-utils atomic-openshift-clients python-boto skopeo openssl java-1.8.0-openjdk-headless httpd-tools" && \
- yum repolist > /dev/null && \
- yum-config-manager --enable rhel-7-server-ose-3.6-rpms && \
- yum-config-manager --enable rhel-7-server-rh-common-rpms && \
- yum install -y $INSTALL_PKGS && \
- yum clean all
-
-# The symlinks below are a (hopefully temporary) hack to work around the fact that this
-# image is based on python s2i which uses the python27 SCL instead of system python,
-# and so the system python modules we need would otherwise not be in the path.
-RUN ln -s /usr/lib/python2.7/site-packages/{boto,passlib} /opt/app-root/lib64/python2.7/
-
-USER ${USER_UID}
+ architecture="x86_64" \
+ atomic.run="once"
-# The playbook to be run is specified via the PLAYBOOK_FILE env var.
-# This sets a default of openshift_facts.yml as it's an informative playbook
-# that can help test that everything is set properly (inventory, sshkeys).
-# As the playbooks are installed via packages instead of being copied to
-# $APP_HOME by the 'assemble' script, we set the WORK_DIR env var to the
-# location of openshift-ansible.
-ENV PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
- ANSIBLE_CONFIG=/usr/share/atomic-openshift-utils/ansible.cfg \
+ENV USER_UID=1001 \
+ HOME=/opt/app-root/src \
WORK_DIR=/usr/share/ansible/openshift-ansible \
+ ANSIBLE_CONFIG=/usr/share/atomic-openshift-utils/ansible.cfg \
OPTS="-v"
-# Add files for running as a system container
-COPY system-container/root /
+# Add image scripts and files for running as a system container
+COPY root /
+
+RUN /usr/local/bin/user_setup \
+ && mv /usr/local/bin/usage{.ocp,}
+
+USER ${USER_UID}
-CMD [ "/usr/libexec/s2i/run" ]
+WORKDIR ${WORK_DIR}
+ENTRYPOINT [ "/usr/local/bin/entrypoint" ]
+CMD [ "/usr/local/bin/run" ]
diff --git a/images/installer/system-container/README.md b/images/installer/README_CONTAINER_IMAGE.md
index fbcd47c4a..bc1ebb4a8 100644
--- a/images/installer/system-container/README.md
+++ b/images/installer/README_CONTAINER_IMAGE.md
@@ -1,17 +1,34 @@
-# System container installer
+ORIGIN-ANSIBLE IMAGE INSTALLER
+===============================
+
+Contains Dockerfile information for building an openshift/origin-ansible image
+based on `centos:7` or `rhel7.3:7.3-released`.
+
+Read additional setup information for this image at: https://hub.docker.com/r/openshift/origin-ansible/
+
+Read additional information about the `openshift/origin-ansible` image at: https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md
+
+Also contains necessary components for running the installer using an Atomic System Container.
+
+
+System container installer
+==========================
These files are needed to run the installer using an [Atomic System container](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/).
+These files can be found under `root/exports`:
* config.json.template - Template of the configuration file used for running containers.
-* manifest.json - Used to define various settings for the system container, such as the default values to use for the installation.
-
-* run-system-container.sh - Entrypoint to the container.
+* manifest.json - Used to define various settings for the system container, such as the default values to use for the installation.
* service.template - Template file for the systemd service.
* tmpfiles.template - Template file for systemd-tmpfiles.
+These files can be found under `root/usr/local/bin`:
+
+* run-system-container.sh - Entrypoint to the container.
+
## Options
These options may be set via the ``atomic`` ``--set`` flag. For defaults see ``root/exports/manifest.json``
@@ -28,4 +45,4 @@ These options may be set via the ``atomic`` ``--set`` flag. For defaults see ``r
* ANSIBLE_CONFIG - Full path for the ansible configuration file to use inside the container
-* INVENTORY_FILE - Full path for the inventory to use from the host
+* INVENTORY_FILE - Full path for the inventory to use from the host
\ No newline at end of file
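For example, these options would typically be passed via the `atomic` CLI when installing the system container; a sketch under stated assumptions (the image has already been pulled, the host inventory lives at /etc/ansible/hosts, the container name is illustrative, and the generated systemd unit is assumed to follow the container name per service.template):

```
atomic install --system \
    --set INVENTORY_FILE=/etc/ansible/hosts \
    --set PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
    --name openshift-installer \
    docker.io/openshift/origin-ansible

systemctl start openshift-installer
```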
diff --git a/images/installer/system-container/root/exports/config.json.template b/images/installer/root/exports/config.json.template
index 739c0080f..739c0080f 100644
--- a/images/installer/system-container/root/exports/config.json.template
+++ b/images/installer/root/exports/config.json.template
diff --git a/images/installer/system-container/root/exports/manifest.json b/images/installer/root/exports/manifest.json
index 8b984d7a3..8b984d7a3 100644
--- a/images/installer/system-container/root/exports/manifest.json
+++ b/images/installer/root/exports/manifest.json
diff --git a/images/installer/system-container/root/exports/service.template b/images/installer/root/exports/service.template
index bf5316af6..bf5316af6 100644
--- a/images/installer/system-container/root/exports/service.template
+++ b/images/installer/root/exports/service.template
diff --git a/images/installer/system-container/root/exports/tmpfiles.template b/images/installer/root/exports/tmpfiles.template
index b1f6caf47..b1f6caf47 100644
--- a/images/installer/system-container/root/exports/tmpfiles.template
+++ b/images/installer/root/exports/tmpfiles.template
diff --git a/images/installer/root/usr/local/bin/entrypoint b/images/installer/root/usr/local/bin/entrypoint
new file mode 100755
index 000000000..777bf3f11
--- /dev/null
+++ b/images/installer/root/usr/local/bin/entrypoint
@@ -0,0 +1,17 @@
+#!/bin/bash -e
+#
+# This file serves as the main entrypoint to the openshift-ansible image.
+#
+# For more information see the documentation:
+# https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md
+
+
+# Patch /etc/passwd file with the current user info.
+# The current user's entry must be correctly defined in this file in order for
+# the `ssh` command to work within the created container.
+
+if ! whoami &>/dev/null; then
+ echo "${USER:-default}:x:$(id -u):$(id -g):Default User:$HOME:/sbin/nologin" >> /etc/passwd
+fi
+
+exec "$@"
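The effect of the passwd patch can be spot-checked by overriding the command with an arbitrary UID (the UID and image tag below are chosen purely for illustration); once the entry has been appended, `whoami` should resolve the UID to `default`:

```
docker run -it --rm -u 12345 openshift/origin-ansible whoami
```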
diff --git a/images/installer/root/usr/local/bin/run b/images/installer/root/usr/local/bin/run
new file mode 100755
index 000000000..9401ea118
--- /dev/null
+++ b/images/installer/root/usr/local/bin/run
@@ -0,0 +1,46 @@
+#!/bin/bash -e
+#
+# This file serves as the default command to the openshift-ansible image.
+# Runs a playbook with inventory as specified by environment variables.
+#
+# For more information see the documentation:
+# https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md
+
+# SOURCE and HOME DIRECTORY: /opt/app-root/src
+
+if [[ -z "${PLAYBOOK_FILE}" ]]; then
+ echo
+ echo "PLAYBOOK_FILE must be provided."
+ exec /usr/local/bin/usage
+fi
+
+INVENTORY="$(mktemp)"
+if [[ -v INVENTORY_FILE ]]; then
+ # Make a copy so that ALLOW_ANSIBLE_CONNECTION_LOCAL below
+ # does not attempt to modify the original
+ cp -a ${INVENTORY_FILE} ${INVENTORY}
+elif [[ -v INVENTORY_URL ]]; then
+ curl -o ${INVENTORY} ${INVENTORY_URL}
+elif [[ -v DYNAMIC_SCRIPT_URL ]]; then
+ curl -o ${INVENTORY} ${DYNAMIC_SCRIPT_URL}
+ chmod 755 ${INVENTORY}
+else
+ echo
+ echo "One of INVENTORY_FILE, INVENTORY_URL or DYNAMIC_SCRIPT_URL must be provided."
+ exec /usr/local/bin/usage
+fi
+INVENTORY_ARG="-i ${INVENTORY}"
+
+if [[ "$ALLOW_ANSIBLE_CONNECTION_LOCAL" = false ]]; then
+ sed -i s/ansible_connection=local// ${INVENTORY}
+fi
+
+if [[ -v VAULT_PASS ]]; then
+ VAULT_PASS_FILE=.vaultpass
+ echo ${VAULT_PASS} > ${VAULT_PASS_FILE}
+ VAULT_PASS_ARG="--vault-password-file ${VAULT_PASS_FILE}"
+fi
+
+cd ${WORK_DIR}
+
+exec ansible-playbook ${INVENTORY_ARG} ${VAULT_PASS_ARG} ${OPTS} ${PLAYBOOK_FILE}
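A hedged example exercising the optional variables handled above, VAULT_PASS and ALLOW_ANSIBLE_CONNECTION_LOCAL, alongside the required inventory and playbook (the inventory path, vault password, and image tag are placeholders):

```
docker run -tu `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
    -v /etc/ansible/hosts:/tmp/inventory:Z,ro \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
    -e VAULT_PASS=changeme \
    -e ALLOW_ANSIBLE_CONNECTION_LOCAL=false \
    openshift/origin-ansible
```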
diff --git a/images/installer/system-container/root/usr/local/bin/run-system-container.sh b/images/installer/root/usr/local/bin/run-system-container.sh
index 9ce7c7328..9ce7c7328 100755
--- a/images/installer/system-container/root/usr/local/bin/run-system-container.sh
+++ b/images/installer/root/usr/local/bin/run-system-container.sh
diff --git a/images/installer/root/usr/local/bin/usage b/images/installer/root/usr/local/bin/usage
new file mode 100755
index 000000000..3518d7f19
--- /dev/null
+++ b/images/installer/root/usr/local/bin/usage
@@ -0,0 +1,33 @@
+#!/bin/bash -e
+cat <<"EOF"
+
+The origin-ansible image provides several options to control the behaviour of the containers.
+For more details on these options see the documentation:
+
+ https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md
+
+At a minimum, when running a container using this image you must provide:
+
+* ssh keys so that Ansible can reach your hosts. These should be mounted as a volume under
+ /opt/app-root/src/.ssh
+* An inventory file. This can be mounted inside the container as a volume and specified with the
+ INVENTORY_FILE environment variable. Alternatively you can serve the inventory file from a web
+ server and use the INVENTORY_URL environment variable to fetch it.
+* The playbook to run. This is set using the PLAYBOOK_FILE environment variable.
+
+Here is an example of how to run a containerized origin-ansible with
+the openshift_facts playbook, which collects and displays facts about your
+OpenShift environment. The inventory and ssh keys are mounted as volumes
+(the latter requires setting the uid in the container and SELinux label
+in the key file via :Z so they can be accessed) and the PLAYBOOK_FILE
+environment variable is set to point to the playbook within the image:
+
+docker run -tu `id -u` \
+ -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
+ -v /etc/ansible/hosts:/tmp/inventory:Z,ro \
+ -e INVENTORY_FILE=/tmp/inventory \
+ -e OPTS="-v" \
+ -e PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
+ openshift/origin-ansible
+
+EOF
diff --git a/images/installer/root/usr/local/bin/usage.ocp b/images/installer/root/usr/local/bin/usage.ocp
new file mode 100755
index 000000000..50593af6e
--- /dev/null
+++ b/images/installer/root/usr/local/bin/usage.ocp
@@ -0,0 +1,33 @@
+#!/bin/bash -e
+cat <<"EOF"
+
+The ose-ansible image provides several options to control the behaviour of the containers.
+For more details on these options see the documentation:
+
+ https://github.com/openshift/openshift-ansible/blob/master/README_CONTAINER_IMAGE.md
+
+At a minimum, when running a container using this image you must provide:
+
+* ssh keys so that Ansible can reach your hosts. These should be mounted as a volume under
+ /opt/app-root/src/.ssh
+* An inventory file. This can be mounted inside the container as a volume and specified with the
+ INVENTORY_FILE environment variable. Alternatively you can serve the inventory file from a web
+ server and use the INVENTORY_URL environment variable to fetch it.
+* The playbook to run. This is set using the PLAYBOOK_FILE environment variable.
+
+Here is an example of how to run a containerized ose-ansible with
+the openshift_facts playbook, which collects and displays facts about your
+OpenShift environment. The inventory and ssh keys are mounted as volumes
+(the latter requires setting the uid in the container and SELinux label
+in the key file via :Z so they can be accessed) and the PLAYBOOK_FILE
+environment variable is set to point to the playbook within the image:
+
+docker run -tu `id -u` \
+ -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
+ -v /etc/ansible/hosts:/tmp/inventory:Z,ro \
+ -e INVENTORY_FILE=/tmp/inventory \
+ -e OPTS="-v" \
+ -e PLAYBOOK_FILE=playbooks/byo/openshift_facts.yml \
+ openshift3/ose-ansible
+
+EOF
diff --git a/images/installer/root/usr/local/bin/user_setup b/images/installer/root/usr/local/bin/user_setup
new file mode 100755
index 000000000..b76e60a4d
--- /dev/null
+++ b/images/installer/root/usr/local/bin/user_setup
@@ -0,0 +1,17 @@
+#!/bin/sh
+set -x
+
+# ensure $HOME exists and is accessible by group 0 (we don't know what the runtime UID will be)
+mkdir -p ${HOME}
+chown ${USER_UID}:0 ${HOME}
+chmod ug+rwx ${HOME}
+
+# runtime user will need to be able to self-insert in /etc/passwd
+chmod g+rw /etc/passwd
+
+# ensure that the ansible content is accessible
+chmod -R g+r ${WORK_DIR}
+find ${WORK_DIR} -type d -exec chmod g+x {} +
+
+# no need for this script to remain in the image after running
+rm $0
diff --git a/playbooks/byo/openshift-checks/README.md b/playbooks/byo/openshift-checks/README.md
index 4b2ff1f94..f0f14b268 100644
--- a/playbooks/byo/openshift-checks/README.md
+++ b/playbooks/byo/openshift-checks/README.md
@@ -39,7 +39,9 @@ against your inventory file. Here is the step-by-step:
$ cd openshift-ansible
```
-2. Run the appropriate playbook:
+2. Install the [dependencies](../../../README.md#setup)
+
+3. Run the appropriate playbook:
```console
$ ansible-playbook -i <inventory file> playbooks/byo/openshift-checks/pre-install.yml
@@ -57,9 +59,8 @@ against your inventory file. Here is the step-by-step:
$ ansible-playbook -i <inventory file> playbooks/byo/openshift-checks/certificate_expiry/default.yaml -v
```
-## Running via Docker image
+## Running in a container
This repository is built into a Docker image including Ansible so that it can
-be run anywhere Docker is available. Instructions for doing so may be found
-[in the README](../../README_CONTAINER_IMAGE.md).
-
+be run anywhere Docker is available, without the need to manually install dependencies.
+Instructions for doing so may be found [in the README](../../../README_CONTAINER_IMAGE.md).
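For instance, the pre-install checks playbook referenced above can be run from the container without installing dependencies on the control host; a sketch reusing the volume mounts from the usage example (the inventory path and image tag are assumptions):

```
docker run -tu `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z,ro \
    -v /etc/ansible/hosts:/tmp/inventory:Z,ro \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/openshift-checks/pre-install.yml \
    openshift/origin-ansible
```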
diff --git a/roles/openshift_facts/library/openshift_facts.py b/roles/openshift_facts/library/openshift_facts.py
index beef77896..4712ca3a8 100755
--- a/roles/openshift_facts/library/openshift_facts.py
+++ b/roles/openshift_facts/library/openshift_facts.py
@@ -1642,75 +1642,20 @@ def set_proxy_facts(facts):
"""
if 'common' in facts:
common = facts['common']
-
- ######################################################################
- # We can exit early now if we don't need to set any proxy facts
- proxy_params = ['no_proxy', 'https_proxy', 'http_proxy']
- # If any of the known Proxy Params (pp) are defined
- proxy_settings_defined = any(
- [True for pp in proxy_params if pp in common]
- )
-
- if not proxy_settings_defined:
- common['no_proxy'] = ''
- return facts
-
- # As of 3.6 if ANY of the proxy parameters are defined in the
- # inventory then we MUST add certain domains to the NO_PROXY
- # environment variable.
-
- ######################################################################
-
- # Spot to build up some data we may insert later
- raw_no_proxy_list = []
-
- # Automatic 3.6 NO_PROXY additions if a proxy is in use
- svc_cluster_name = ['.svc', '.' + common['dns_domain'], common['hostname']]
-
- # auto_hosts: Added to NO_PROXY list if any proxy params are
- # set in the inventory. This a list of the FQDNs of all
- # cluster hosts:
- auto_hosts = common['no_proxy_internal_hostnames'].split(',')
-
- # custom_no_proxy_hosts: If you define openshift_no_proxy in
- # inventory we automatically add those hosts to the list:
- if 'no_proxy' in common:
- custom_no_proxy_hosts = common['no_proxy'].split(',')
- else:
- custom_no_proxy_hosts = []
-
- # This should exist no matter what. Defaults to true.
- if 'generate_no_proxy_hosts' in common:
- generate_no_proxy_hosts = safe_get_bool(common['generate_no_proxy_hosts'])
-
- ######################################################################
-
- # You set a proxy var. Now we are obliged to add some things
- raw_no_proxy_list = svc_cluster_name + custom_no_proxy_hosts
-
- # You did not turn openshift_generate_no_proxy_hosts to False
- if generate_no_proxy_hosts:
- raw_no_proxy_list.extend(auto_hosts)
-
- ######################################################################
-
- # Was anything actually added? There should be something by now.
- processed_no_proxy_list = sort_unique(raw_no_proxy_list)
- if processed_no_proxy_list != list():
- common['no_proxy'] = ','.join(processed_no_proxy_list)
- else:
- # Somehow we got an empty list. This should have been
- # skipped by now in the 'return' earlier. If
- # common['no_proxy'] is DEFINED it will cause unexpected
- # behavior and bad templating. Ensure it does not
- # exist. Even an empty list or string will have undesired
- # side-effects.
- del common['no_proxy']
-
- ######################################################################
- # In case you were wondering, because 'common' is a reference
- # to the object facts['common'], there is no need to re-assign
- # it.
+ if 'http_proxy' in common or 'https_proxy' in common or 'no_proxy' in common:
+ if 'no_proxy' in common and isinstance(common['no_proxy'], string_types):
+ common['no_proxy'] = common['no_proxy'].split(",")
+ elif 'no_proxy' not in common:
+ common['no_proxy'] = []
+ if 'generate_no_proxy_hosts' in common and safe_get_bool(common['generate_no_proxy_hosts']):
+ if 'no_proxy_internal_hostnames' in common:
+ common['no_proxy'].extend(common['no_proxy_internal_hostnames'].split(','))
+ # We always add local dns domain and ourselves no matter what
+ common['no_proxy'].append('.' + common['dns_domain'])
+ common['no_proxy'].append('.svc')
+ common['no_proxy'].append(common['hostname'])
+ common['no_proxy'] = ','.join(sort_unique(common['no_proxy']))
+ facts['common'] = common
return facts
diff --git a/roles/openshift_service_catalog/files/kubeservicecatalog_roles_bindings.yml b/roles/openshift_service_catalog/files/kubeservicecatalog_roles_bindings.yml
index bcc7fb590..c30c15778 100644
--- a/roles/openshift_service_catalog/files/kubeservicecatalog_roles_bindings.yml
+++ b/roles/openshift_service_catalog/files/kubeservicecatalog_roles_bindings.yml
@@ -137,6 +137,12 @@ objects:
- serviceclasses
verbs:
- create
+ - delete
+ - update
+ - patch
+ - get
+ - list
+ - watch
- apiGroups:
- settings.k8s.io
resources:
diff --git a/roles/openshift_storage_nfs/tasks/main.yml b/roles/openshift_storage_nfs/tasks/main.yml
index 0d6b8b7d4..019ada2fb 100644
--- a/roles/openshift_storage_nfs/tasks/main.yml
+++ b/roles/openshift_storage_nfs/tasks/main.yml
@@ -30,7 +30,7 @@
- "{{ openshift.hosted.metrics }}"
- "{{ openshift.hosted.logging }}"
- "{{ openshift.hosted.loggingops }}"
-
+ - "{{ openshift.hosted.etcd }}"
- name: Configure exports
template:
diff --git a/roles/openshift_storage_nfs/templates/exports.j2 b/roles/openshift_storage_nfs/templates/exports.j2
index 8c6d4105c..7e8f70b23 100644
--- a/roles/openshift_storage_nfs/templates/exports.j2
+++ b/roles/openshift_storage_nfs/templates/exports.j2
@@ -2,3 +2,4 @@
{{ openshift.hosted.metrics.storage.nfs.directory }}/{{ openshift.hosted.metrics.storage.volume.name }} {{ openshift.hosted.metrics.storage.nfs.options }}
{{ openshift.hosted.logging.storage.nfs.directory }}/{{ openshift.hosted.logging.storage.volume.name }} {{ openshift.hosted.logging.storage.nfs.options }}
{{ openshift.hosted.loggingops.storage.nfs.directory }}/{{ openshift.hosted.loggingops.storage.volume.name }} {{ openshift.hosted.loggingops.storage.nfs.options }}
+{{ openshift.hosted.etcd.storage.nfs.directory }}/{{ openshift.hosted.etcd.storage.volume.name }} {{ openshift.hosted.etcd.storage.nfs.options }}