author    OpenShift Merge Robot <openshift-merge-robot@users.noreply.github.com>  2018-01-11 11:39:02 -0800
committer GitHub <noreply@github.com>  2018-01-11 11:39:02 -0800
commit    fdc5829d6ca252e1574278cd1dc50e932378d98d (patch)
tree      0f80b64e32b59e7c4a2735066aed5f4e363e5a8a /playbooks/openstack
parent    e347c3ed3cc23cdf3af1077dd4becc572a7dfad9 (diff)
parent    3d943161cedc2fc0d1690e03ece1e2504d4d6d74 (diff)
Merge pull request #6607 from tomassedovic/fix-cinder-pv
Automatic merge from submit-queue.

Fix Cinder Persistent Volume support

This documents how to use Cinder-backed persistent volumes with OpenStack. It needed a change to the dynamic inventory because the "openstack" cloudprovider plugin does in fact require internal name resolution: the `openshift_hostname` value must match the name of the Nova server. In addition, we need to be able to specify v2 of the Cinder API for now, as described in https://github.com/openshift/openshift-docs/issues/5730.
Diffstat (limited to 'playbooks/openstack')
-rw-r--r--  playbooks/openstack/advanced-configuration.md             | 106
-rw-r--r--  playbooks/openstack/sample-inventory/group_vars/OSEv3.yml |   1
-rwxr-xr-x  playbooks/openstack/sample-inventory/inventory.py         |   6
3 files changed, 111 insertions(+), 2 deletions(-)
diff --git a/playbooks/openstack/advanced-configuration.md b/playbooks/openstack/advanced-configuration.md
index 2c9b70b5f..2caf63592 100644
--- a/playbooks/openstack/advanced-configuration.md
+++ b/playbooks/openstack/advanced-configuration.md
@@ -372,6 +372,112 @@ In order to set a custom entrypoint, update `openshift_master_cluster_public_hos
Note that an empty hostname does not work, so if your domain is `openshift.example.com`,
you cannot set this value to simply `openshift.example.com`.
+
+## Using Cinder-backed Persistent Volumes
+
+You will need to set up OpenStack credentials. For example, put the following
+in your `inventory/group_vars/OSEv3.yml`:
+
+    openshift_cloudprovider_kind: openstack
+    openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
+    openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
+    openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
+    openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
+    openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"
+    openshift_cloudprovider_openstack_blockstorage_version: v2
+
+**NOTE**: You must specify the Block Storage version as `v2`, because
+OpenShift does not support the v3 API yet and automatic version detection is
+currently not working properly.
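+
+For reference, these variables end up in the cloud provider configuration file
+on the hosts. The path (`/etc/origin/cloudprovider/openstack.conf` in typical
+OpenShift 3.x installs) and the values below are illustrative assumptions, not
+taken from this repository, but the resulting file looks roughly like this
+sketch:
+
+```ini
+[Global]
+auth-url = https://keystone.example.com:5000/v3
+username = cloud-user
+password = secret
+tenant-name = my-project
+domain-name = Default
+
+[BlockStorage]
+bs-version = v2
+```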
+
+For more information, consult the [Configuring for OpenStack page in the OpenShift documentation][openstack-credentials].
+
+[openstack-credentials]: https://docs.openshift.org/latest/install_config/configuring_openstack.html#install-config-configuring-openstack
+
+**NOTE**: The OpenStack integration currently requires DNS to be configured
+and running, and the `openshift_hostname` variable must match the Nova server
+name for each node. The cluster deployment will fail otherwise. If you use the
+provided OpenStack dynamic inventory and configure the
+`openshift_openstack_dns_nameservers` Ansible variable, this is handled for
+you.
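+
+For example, with the sample inventory you could set something like the
+following (typically in `inventory/group_vars/all.yml`; the addresses below
+are placeholders):
+
+```yaml
+openshift_openstack_dns_nameservers:
+  - 192.0.2.10
+  - 192.0.2.11
+```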
+
+After a successful deployment, the cluster is configured for Cinder persistent
+volumes.
+
+### Validation
+
+1. Log in and create a new project (with `oc login` and `oc new-project`)
+2. Create a file called `cinder-claim.yaml` with the following contents:
+
+```yaml
+apiVersion: "v1"
+kind: "PersistentVolumeClaim"
+metadata:
+  name: "claim1"
+spec:
+  accessModes:
+    - "ReadWriteOnce"
+  resources:
+    requests:
+      storage: "1Gi"
+```
+3. Run `oc create -f cinder-claim.yaml` to create the Persistent Volume Claim object in OpenShift (see the note on storage classes after this list)
+4. Run `oc describe pvc claim1` to verify that the claim was created and its Status is `Bound`
+5. Run `openstack volume list`
+   * A new volume called `kubernetes-dynamic-pvc-UUID` should be created
+   * Its size should be `1` (matching the `1Gi` request)
+   * It should not be attached to any server
+6. Create a file called `mysql-pod.yaml` with the following contents:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mysql
+  labels:
+    name: mysql
+spec:
+  containers:
+    - resources:
+        limits:
+          cpu: 0.5
+      image: openshift/mysql-55-centos7
+      name: mysql
+      env:
+        - name: MYSQL_ROOT_PASSWORD
+          value: yourpassword
+        - name: MYSQL_USER
+          value: wp_user
+        - name: MYSQL_PASSWORD
+          value: wp_pass
+        - name: MYSQL_DATABASE
+          value: wp_db
+      ports:
+        - containerPort: 3306
+          name: mysql
+      volumeMounts:
+        - name: mysql-persistent-storage
+          mountPath: /var/lib/mysql/data
+  volumes:
+    - name: mysql-persistent-storage
+      persistentVolumeClaim:
+        claimName: claim1
+```
+
+7. Run `oc create -f mysql-pod.yaml` to create the pod
+8. Run `oc describe pod mysql`
+   * Its events should show that the pod has successfully attached the volume above
+   * It should show no errors
+   * `openstack volume list` should show the volume attached to an OpenShift app node
+   * NOTE: this can take several seconds
+9. After a while, `oc get pod` should show the `mysql` pod as running
+10. Run `oc delete pod mysql` to remove the pod
+    * The Cinder volume should no longer be attached
+11. Run `oc delete pvc claim1` to remove the volume claim
+    * The Cinder volume should be deleted
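+
+The claim above relies on a default storage class being available (the
+playbooks typically create one once the cloud provider is configured). If you
+prefer an explicitly named class, Kubernetes' `kubernetes.io/cinder`
+provisioner supports that too; a minimal sketch, where the class name and
+availability zone are placeholders:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: cinder-gold
+provisioner: kubernetes.io/cinder
+parameters:
+  availability: nova
+```
+
+A claim can then request this class with `storageClassName: cinder-gold` in
+its `spec`.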
+
## Creating and using a Cinder volume for the OpenShift registry
You can optionally have the playbooks create a Cinder volume and set
diff --git a/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
index 481807dc9..a8663f946 100644
--- a/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
+++ b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
@@ -20,6 +20,7 @@ openshift_hosted_registry_wait: True
#openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
#openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"
#openshift_cloudprovider_openstack_region: "{{ lookup('env', 'OS_REGION_NAME') }}"
+#openshift_cloudprovider_openstack_blockstorage_version: v2
## Use Cinder volume for Openshift registry:
diff --git a/playbooks/openstack/sample-inventory/inventory.py b/playbooks/openstack/sample-inventory/inventory.py
index 45cc4e15a..76e658eb7 100755
--- a/playbooks/openstack/sample-inventory/inventory.py
+++ b/playbooks/openstack/sample-inventory/inventory.py
@@ -89,13 +89,15 @@ def build_inventory():
# TODO(shadower): what about multiple networks?
if server.private_v4:
hostvars['private_v4'] = server.private_v4
+ hostvars['openshift_ip'] = server.private_v4
+
# NOTE(shadower): Yes, we set both hostname and IP to the private
# IP address for each node. OpenStack doesn't resolve nodes by
# name at all, so using a hostname here would require an internal
# DNS which would complicate the setup and potentially introduce
# performance issues.
- hostvars['openshift_ip'] = server.private_v4
- hostvars['openshift_hostname'] = server.private_v4
+ hostvars['openshift_hostname'] = server.metadata.get(
+ 'openshift_hostname', server.private_v4)
hostvars['openshift_public_hostname'] = server.name
if server.metadata['host-type'] == 'cns':
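With this change, `openshift_hostname` falls back to the server's private IP unless the Nova server carries an `openshift_hostname` key in its metadata. Assuming the standard OpenStack CLI, such a key can be set with, for example, `openstack server set --property openshift_hostname=<name> <server>`.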