Commit log, newest first. Each entry: subject  (author, date, files changed, lines -removed/+added); indented lines are the commit message body.
* Add dynamic inventory  (Tomas Sedovic, 2017-10-04, 4 files, -2/+112)
    This adds an `inventory.py` script to the `sample-inventory` that lists all
    the necessary servers and groups dynamically, skipping the
    `static_inventory` role as well as the `hosts` creation. It also adds an
    `os_cinder` lookup function which is necessary for a seamless Cinder
    OpenShift registry integration without a static inventory.
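    A minimal sketch of how the new `os_cinder` lookup might be consumed from
    group variables; the variable names below (`cinder_registry_volume` and the
    `openshift_hosted_registry_*` keys) are assumptions, not taken from the
    commit itself:

    ```yaml
    # Hypothetical group_vars snippet: resolve a pre-created Cinder volume for
    # the OpenShift registry via the os_cinder lookup mentioned above.
    openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', cinder_registry_volume).id }}"
    openshift_hosted_registry_storage_volume_size: "{{ lookup('os_cinder', cinder_registry_volume).size }}Gi"
    ```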
* Fixing various contrib changes causing CASL breakage (#771)  (Øystein Bedin, 2017-10-04, 2 files, -13/+13)
* rollback to remove package to support origin (#775)  (Ryan Cook, 2017-10-04, 0 files, -0/+0)
* Fix error when adding new nodes in Azure (number of application nodes > 8) (#773)  (schen1, 2017-10-04, 0 files, -0/+0)
* Required variables to create dedicated lv (#766)  (Eduardo Mínguez, 2017-10-03, 4 files, -7/+36)
    - Required variables to create dedicated lv
      https://bugzilla.redhat.com/show_bug.cgi?id=1490910#c11
    - Fixed lint and added distribution to checks
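    A sketch of the kind of variables such a change typically feeds into
    docker-storage-setup for a dedicated container logical volume; the exact
    variable names and values here are assumptions and not taken from the
    commit:

    ```yaml
    # Hypothetical defaults: options written to /etc/sysconfig/docker-storage-setup
    # so that a dedicated logical volume backs /var/lib/docker with overlay2.
    docker_storage_setup_options:
      STORAGE_DRIVER: overlay2
      CONTAINER_ROOT_LV_NAME: dockerlv
      CONTAINER_ROOT_LV_MOUNT_PATH: /var/lib/docker
      CONTAINER_ROOT_LV_SIZE: 100%FREE
    ```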
* Set node selector in openshift-infra namespace (#759)  (Peter Schiffer, 2017-10-03, 0 files, -0/+0)
    So the pods in this namespace are correctly scheduled on the infra nodes.
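    One common way to apply such a namespace node selector is the
    `openshift.io/node-selector` annotation. This is only a sketch; the
    selector value, group name, and task layout are assumed rather than taken
    from the commit:

    ```yaml
    # Hypothetical task: pin workloads in openshift-infra to the infra nodes.
    - name: set the node selector on the openshift-infra namespace
      command: >
        oc annotate --overwrite namespace openshift-infra
        openshift.io/node-selector=role=infra
      run_once: true
      delegate_to: "{{ groups['masters'][0] }}"  # assumes a 'masters' group
    ```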
* Use Ansible stable 2.3 instead of 2.2 (#738)  (bgeesaman, 2017-10-03, 0 files, -0/+0)
    To avoid a syntax error during origin greenfield deployments.
* timeout test (#762)  (Ryan Cook, 2017-10-03, 0 files, -0/+0)
* Merge pull request #763 from dav1x/provider-setup  (Davis Phillips, 2017-10-03, 0 files, -0/+0)
    WIP - add deploy host provider setup
  | * lint issues  (Davis Phillips, 2017-10-03, 0 files, -0/+0)
  | * lint issues  (Davis Phillips, 2017-10-03, 0 files, -0/+0)
  | * lint issues  (Davis Phillips, 2017-10-03, 0 files, -0/+0)
  | * add osp  (Davis Phillips, 2017-10-02, 0 files, -0/+0)
  | * check for provider and skip disable if it's defined  (Davis Phillips, 2017-10-02, 0 files, -0/+0)
  | * auto gen ssh key for rhv and vmw  (Davis Phillips, 2017-09-29, 0 files, -0/+0)
  | * adding rhv repo and package install  (Davis Phillips, 2017-09-28, 0 files, -0/+0)
  | * accidentally renamed ovirt  (Davis Phillips, 2017-09-28, 0 files, -0/+0)
  | * remove separate task for localhost  (Davis Phillips, 2017-09-28, 0 files, -0/+0)
  | * update deploy-host to support specified providers  (Davis Phillips, 2017-09-28, 0 files, -0/+0)
  | * adding default repos to rhsm vars  (Davis Phillips, 2017-09-27, 0 files, -0/+0)
  | * add provider setup  (Davis Phillips, 2017-09-27, 0 files, -0/+0)
* Merge pull request #764 from e-minguez/overlay2_vars_vmware  (Davis Phillips, 2017-10-03, 0 files, -0/+0)
    Required variables to create dedicated lv
  | * Fixed lint and added ansible_distribution check  (Eduardo Minguez, 2017-10-03, 0 files, -0/+0)
  | * Make it future proof  (Eduardo Minguez, 2017-09-29, 0 files, -0/+0)
  | * Required variables to create dedicated lv  (Eduardo Minguez, 2017-09-29, 0 files, -0/+0)
        https://bugzilla.redhat.com/show_bug.cgi?id=1490910#c11
* all systems need the atomic-openshift-node package anyways (#768)  (Ryan Cook, 2017-10-03, 0 files, -0/+0)
* Adding the option to use 'stack_state' to allow for easy de-provisioning (#754)  (Øystein Bedin, 2017-10-02, 4 files, -39/+56)
    - Adding 'openstack-stack-delete' role to allow for easy de-provisioning
    - Updated per etsauer's comments
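    A rough sketch of how a `stack_state` variable can drive both provisioning
    and tear-down through the `os_stack` module; the stack name, template path,
    and default shown here are assumptions:

    ```yaml
    # Hypothetical task: the same play provisions or deletes the Heat stack
    # depending on stack_state (e.g. -e stack_state=absent to de-provision).
    - name: create or delete the OpenShift Heat stack
      os_stack:
        name: openshift-cluster
        state: "{{ stack_state | default('present') }}"
        template: files/heat_stack.yaml
    ```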
* Adding role to clean up pvs (#769)  (Eric Sauer, 2017-10-02, 0 files, -0/+0)
* version bump for upgrade plays (#770)  (Ryan Cook, 2017-10-02, 0 files, -0/+0)
* Required variables to create dedicated lv (#765)  (Eduardo Mínguez, 2017-09-29, 0 files, -0/+0)
    https://bugzilla.redhat.com/show_bug.cgi?id=1490910#c11
* Fix public master cluster DNS record when using bastion (#752)  (Bogdan Dobrelya, 2017-09-26, 3 files, -0/+12)
    When using a bastion and a single master, add the bastion node's public IP
    as the public master's IP for the DNS record.
    Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Upscaling OpenShift application nodes (#571)  (Tlacenka, 2017-09-26, 6 files, -2/+143)
    - scale-up: playbook for upscaling app nodes
    - scale-up: removed debug
    - scale-up: made suggested changes
    - scale-up: indentation fix
    - upscaling: process split into two playbooks that are executed by a bash script
        - upscaling_run.sh: bash script, usage displayed using -h parameter
        - upscaling_pre-tasks: check that new value is higher, change inventory variable
        - upscaling_scale-up: rerun provisioning and installation, verify change
    - upscaling_run: fixed openshift-ansible-contrib directory name
    - upscaling_run: inventory can be entered as relative path
    - upscaling_scale-up: fixed formatting
    - upscaling: minor changes
    - upscaling: moved to .../provisioning/openstack directory, README updated, minor changes made
    - README: minor changes
    - README: formatting
    - upscaling: minor fix
    - upscaling: fix
    - upscaling: added customisations, fixes
        - openshift-ansible-contrib and openshift-ansible paths are customisable
        - fixed implicit incrementation by 1
    - upscaling: fixes
    - upscaling: fixes
    - upscaling: another fix
    - upscaling: another fix
    - upscaling: fix
    - upscaling: back to a single playbook, README updated
    - minor fix
    - pre_tasks: added labels for autoscaling
    - scale-up: fixes
    - scale-up: fixed host variables, post-verification is only based on labels
    - scale-up: added openshift-ansible path customisation
        - path has to be absolute, cannot contain '/' at the end
    - scale-up: fix
    - scale-up: debug removed
    - README: added docs on openshift_ansible_dir, note about bastion
    - static_inventory: newly added nodes are added to new_nodes group
        - note: re-running provisioning fails when trying to install docker
    - removing new line
    - scale-up: running byo/config.yml or scaleup.yml based on the situation
      (whether there is an existing deployment or not)
    - openstack.yml: indentation fix
    - added refresh inventory
    - upscaling: new_nodes only contains new nodes, it is not used during the first deployment
    - static_inventory: make sure that new nodes end up only in their new_nodes group
    - bug fixes
    - another fix
    - fixed condition
    - scale-up, static_inventory role: all app node data gathered before provisioning
    - upscaling: bug fixes
    - upscaling: more fixes
    - fixes
    - upscaling: fix
    - upscaling: fix
    - upscaling: another logic fix
    - bug fix for non-scaling deployments
* Rhv 3.6 disks (#756)  (Chandler Wilkerson, 2017-09-25, 0 files, -0/+0)
    - Clean up cluster definition
    - Changed disk sizes for 3.6
* epel URL fix for Vagrant (#544) (#755)  (Benjamin Gentil, 2017-09-25, 0 files, -0/+0)
* WIP: lowering required permissions for iam role (#748)  (Ryan Cook, 2017-09-21, 0 files, -0/+0)
    Lowering required permissions for iam role
* load balancer formatting fix (#745)  (tzumainn, 2017-09-21, 1 file, -4/+4)
* Set Ansible version in openstack CI for 2.3 (#750)  (Tomas Sedovic, 2017-09-21, 0 files, -0/+0)
    - Set Ansible version in openstack CI for 2.3
      Ansible 2.4 just came out and it breaks our playbooks. Let's pin the
      end-to-end CI version to 2.3 until we've figured it out.
    - Only use the deployed DNS for validation
      I think the openshift installation and teardown is broken now that the
      public DNS disables recursion. So we'll only use it for the validation
      steps and then turn it back off.
    - Use bash trap to clean up the DNS
    - Actually display the commit message
    - Use openshift-ansible-3.6.22-1 in openstack CI
      The commit following this tag is broken for us.
* Integrate SSO into 3.6 Ref Arch (#739)  (Glenn S West, 2017-09-20, 0 files, -0/+0)
    - Add Keycloak/SSO Support
    - Make sure sso install occurs after ocp is done
    - Add sso/keycloak to 3.6 ha ref arch
    - switch to localhost for initial part of setup-sso.yml
    - Change restart after sso
    - Change to same password and switch to upstream repo
* Better documentation (#744)  (Peter Schiffer, 2017-09-19, 0 files, -0/+0)
    - Better documentation
    - Grammar fixes
* Docker ansible host (#742)  (Tomas Sedovic, 2017-09-19, 1 file, -0/+32)
    - Document using a Docker image for Ansible host
    - Fix the markdown url syntax
    - Mention keystonerc as well
* Empty ssh (#729)  (Tomas Sedovic, 2017-09-19, 2 files, -2/+2)
    - Make `openstack_private_ssh_key` optional
      Before this, the deployer could not reasonably rely on their own SSH
      configuration or e.g. using the `--private-key` option to
      ansible-playbook, because we always wrote the `ansible_private_key_file`
      value in the static inventory. This change makes the
      `openstack_private_ssh_key` variable truly optional: if it's not set, the
      static inventory will not configure the SSH key and will just rely on the
      existing configuration.
    - Update the openstack e2e CI
      It no longer sets the SSH keys explicitly -- which should just work with
      the previous commit.
    - Put back the `openstack_ssh_public_key` in CI
      This is the option we actually need to keep. This should fix the CI
      failures.
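    A minimal sketch of how the inventory-writing step could leave the key
    unset when `openstack_private_ssh_key` is not provided; the task, file
    path, and layout below are assumptions:

    ```yaml
    # Hypothetical task: write the key path into the static inventory only when
    # the deployer supplied one; otherwise the ambient SSH configuration (or
    # --private-key on the command line) is used.
    - name: add the private key to the static inventory only when configured
      lineinfile:
        path: inventory/hosts
        line: "ansible_private_key_file={{ openstack_private_ssh_key }}"
      when: openstack_private_ssh_key is defined and openstack_private_ssh_key
    ```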
* Fix scaling up for 3.6 and RHEL (#741)  (Peter Schiffer, 2017-09-19, 0 files, -0/+0)
    - Parametrise openshift-emptydir-quota role
    - Fix scaling up for 3.6 and RHEL
* Use ansible installer role to set the node local quota (#736)  (Peter Schiffer, 2017-09-18, 0 files, -0/+0)
    - Use ansible installer role for setting the node local quota;
      try to sort the openshift vars in a better way
    - There is now only one google-compute-engine package
* Fixed typo (#735)  (Miguel P.C, 2017-09-15, 0 files, -0/+0)
* change of docker backend (#731)  (Ryan Cook, 2017-09-14, 0 files, -0/+0)
* Merge pull request #732 from tomassedovic/make-rhsm-registry-optional  (tzumainn, 2017-09-14, 2 files, -3/+4)
    Make rhsm registry optional for openstack
  | * Remove the `rhsm_register` value from inventory  (Tomas Sedovic, 2017-09-14, 1 file, -2/+3)
        It is now commented out since it's no longer necessary.
  | * Make the `rhsm_register` value optional  (Tomas Sedovic, 2017-09-14, 1 file, -1/+1)
        This was a regression -- it used to be optional (defaulting to False),
        but among some changes we ended up requiring it again.
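    A sketch of the kind of guard that makes registration optional again; apart
    from `rhsm_register`, the module parameters and variable names here are
    assumptions:

    ```yaml
    # Hypothetical task: only subscribe the host when rhsm_register is set,
    # defaulting to False as the commit above describes.
    - name: register the node with Red Hat Subscription Management
      redhat_subscription:
        state: present
        username: "{{ rhsm_username }}"
        password: "{{ rhsm_password }}"
        pool: "{{ rhsm_pool | default(omit) }}"
      when: rhsm_register | default(False) | bool
    ```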
* Merge pull request #730 from tomassedovic/always-refresh-hosts  (tzumainn, 2017-09-13, 1 file, -0/+8)
    Clear the previous inventory during provisioning
  | * Clear the previous inventory during provisioning  (Tomas Sedovic, 2017-09-13, 1 file, -0/+8)
        If there was a left-over inventory from a previous run that had nodes
        which were subsequently removed, these would still show up in Ansible's
        in-memory inventory and Ansible would fail trying to connect to them.
        This is because Ansible automatically loads the `inventory/hosts` file
        if it exists, and even if we overwrite it later, every node and group
        still remains in memory. By removing the inventory file and calling the
        `refresh_inventory` meta task, we make sure that any left-over values
        are removed.
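    A minimal sketch of that clean-up, assuming the static inventory lives at
    `inventory/hosts`:

    ```yaml
    # Remove any stale static inventory from a previous run, then reload the
    # in-memory inventory so nodes that no longer exist disappear from it.
    - name: remove the left-over static inventory
      file:
        path: inventory/hosts
        state: absent

    - name: refresh the in-memory inventory
      meta: refresh_inventory
    ```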