Commit message | Author | Age | Files | Lines

Fix invalid JSON format during Create Brownfield Infrastructure with an existing bastion

* Use an example for the rhsm pool name
* Add info about the GCP trial version to the README
* Include the openshift_disable_check var in the static inventory file
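For context, a hypothetical static-inventory fragment showing the kind of line this adds (the checks listed are illustrative, not necessarily the ones chosen by this commit):

```ini
[OSEv3:vars]
# Illustrative only: skip selected openshift-ansible pre-install checks
openshift_disable_check=disk_availability,memory_availability
```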

Move repeating pre_tasks to the pre-install (OpenShift Pre-Requisites) step.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>

* node labels: add checks for custom labels
  - README: add more info about customising labels
  - pre_tasks: add checks for label values, set to an empty dict if undefined
  - group_vars: move labels customisation from OSEv3 to all
* pre_tasks: tried a new approach to updating variables
* pre_tasks: variable update fixed
* pre_tasks: roll back upscaling changes (to be added in the upscaling PR)
* pre_tasks: blank line removed
* pre_tasks: add a check for an undefined variable (should not happen, though)
* pre_tasks: make sure regions are defined
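As a purely illustrative sketch of the labels customisation these commits describe (the variable name and label values below are assumptions, not taken from this change):

```yaml
# group_vars/all.yml -- hypothetical example; variable and label names are illustrative
openshift_cluster_node_labels:
  app:
    region: primary
  infra:
    region: infra
```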

* Add documentation regarding running custom post-provision tasks
* Moved the post-provision doc to the openstack README
* Added a reference to OSEv3 and clarified some text

Pin openshift-ansible in CI

This runs git status on the openshift-ansible repo at the end of the end-to-end
OpenStack CI. This is useful for investigating breakages later on (we'll
know exactly which commit of openshift-ansible to look at).

See issue #686
We'll pin this to unblock CI until that issue is properly resolved.

[WIP] Add docs and defaults for multi-master setup

Additionally, add the lb group, containing the lb nodes, to the
static inventory template. Include the lb group in the OSEv3 group
in order to apply the cluster group vars to it.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
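A hypothetical rendering of what the described template change produces (host names are placeholders):

```ini
# Hypothetical static inventory fragment; host names are placeholders
[lb]
lb-0.example.com

[OSEv3:children]
masters
etcd
nodes
lb
```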

* Don't reattach the pool
* Fix indentation

* Move docker storage to overlay
* Preinstall docker in the gold image
* Allow overwriting of DNS records
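For illustration, moving docker storage to overlay is typically done with a docker-storage-setup fragment along these lines (a sketch, not necessarily the exact change here):

```ini
# /etc/sysconfig/docker-storage-setup -- illustrative sketch
STORAGE_DRIVER=overlay2
```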

Update main.yaml for new directory structure

Without this change, the Azure deployment will fail because of the introduction of the 3.5 and 3.6 directories.

Change the path to include the 3.5 directory.

Without this change, the Azure deployment will fail because of the introduction of the 3.5 and 3.6 directories.
cc: @gwestredhat @e-minguez

We've enabled verbose logging too eagerly and it ends up spewing a lot
of useless output in the "waiting for the pods to come up" phase.

This allows our users to keep the ansible.cfg file in the inventory,
as well as to put e.g. LDAP certificates in it.
Fixes #481

* Update openshift_release in the sample inventory
This removes setting the version for OpenShift Origin, because only
the latest release is actually available. So if a new Origin release
comes out, the installation would fail.
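For illustration, the kind of pinned line being removed from the sample inventory would look something like this (the version value is a placeholder):

```ini
[OSEv3:vars]
# Removed: pinning openshift_release breaks Origin installs once a newer
# release ships, since only the latest Origin release stays available.
# openshift_release=v3.6
```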

Add a contributing guide

Hopefully this will make our pull requests and code reviews more
consistent (and it should help people getting started with code
reviews).

Merge back 3.5 support

Resync With Master

Add files for the 3.6 update and optimize some rhsm handling

* Resolve #660 items
* Only touch the new node

Need privileges to run a non-atomic command

The end-to-end scripts just started failing because the contents of the
tarball shipping the oc tool changed.
Since the binary is always available on the master node after a
successful deployment, let us just copy it from there.