Commit messages
remote heal action for OVS down
The variable name for deleting the temporary file was a bit
misleading, so it has been renamed to be more explicit.
Also, the path was hardcoded to /root/, which could be problematic when
the playbook is not run as root.
It would be nice to have options to be able to:
* Delete the temporary config file or not, so that it can be checked/modified directly
* Create the bucket or not, as you might not have the rights to do so
This commit allows both of those things, without changing the default
behavior of the playbook.
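A minimal sketch of how such opt-outs can be wired up, with hypothetical variable names (`create_bucket`, `clean_temp_file`, `temp_config_path`) standing in for whatever the commit actually uses:

```
# Sketch only: names are illustrative; the defaults preserve the old
# behavior (create the bucket, clean up the temporary config file).
- hosts: localhost
  connection: local
  vars:
    create_bucket: true
    clean_temp_file: true
  tasks:
    - name: Create the S3 bucket
      s3:
        bucket: "{{ aws_bucket_name }}"
        mode: create
      when: create_bucket | bool

    - name: Remove the temporary config file
      file:
        path: "{{ temp_config_path }}"   # hypothetical path variable
        state: absent
      when: clean_temp_file | bool
```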
Change etcd daemon name for atomic-host in playbooks/adhoc/uninstall.yml
* Update playbooks/adhoc/uninstall.yml
* etcd runs in a container on atomic-host and its name is etcd_container.
We have to stop the container with the right name on atomic-host.
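A rough sketch of the idea (not the literal diff), assuming a boolean fact like `is_atomic` is available to detect Atomic Host:

```
# Sketch: on Atomic Host etcd runs as a container named etcd_container,
# so the uninstall has to stop that name instead of the etcd service.
- name: Stop etcd
  service:
    name: "{{ 'etcd_container' if is_atomic | bool else 'etcd' }}"
    state: stopped
  failed_when: false
```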
s3_registry: no filter named 'lookup'
* Added a default for the lookup.
* According to [1], added default(,true) to avoid an empty string.
[1] https://github.com/openshift/openshift-ansible/blob/master/docs/best_practices_guide.adoc#filters
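A sketch of the resulting pattern (the variable and environment names are illustrative): `lookup` is a function rather than a filter, and the second argument to `default` makes an empty string fall back as well.

```
# Sketch: lookup() is called as a function; "default(..., true)" also
# replaces an empty result, not only an undefined variable.
aws_access_key: "{{ lookup('env', 'S3_ACCESS_KEY_ID') | default('changeme', true) }}"
```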
adhoc s3 registry - add auth part to the registry config sample
Without the auth part, after spawning the registry we were not able to authenticate:
```
docker login -u .. -p ... 172.30.234.98:5000
Error response from daemon: no successful auth challenge for http://172.30.234.98:5000/v2/ - errors: []
```
This simply adds that part to the registry config sample.
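For reference, a minimal sketch of what such an auth section can look like in a docker/distribution-style registry config when the registry delegates authentication to OpenShift; the exact stanza in the commit may differ:

```
# Sketch: auth section of the registry config sample.
auth:
  openshift:
    realm: openshift
```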
File playbooks/adhoc/s3_registry/s3_registry*
To be able to use a different bucket name and region, aws_bucket and aws_region are now available.
* Add variables for region and bucket to the j2 template
* Update the Usage comment
* Add defaults for aws_bucket_name and aws_bucket_region
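A sketch of what the j2 change amounts to, parameterizing the S3 storage stanza of the registry config (the surrounding keys follow the docker/distribution format and are illustrative here):

```
# Sketch of the s3_registry.j2 storage section: bucket and region come
# from variables instead of being hardcoded.
storage:
  s3:
    bucket: "{{ aws_bucket_name }}"
    region: "{{ aws_bucket_region }}"
```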
new role: added oso_monitoring tools role
Following on from #1107, the host group's name is OSEv3, not OSv3.
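For context, a minimal inventory sketch of the group in question (hostnames are placeholders):

```
# Sketch: the parent group must be spelled OSEv3, not OSv3.
[OSEv3:children]
masters
nodes

[masters]
master.example.com

[nodes]
node1.example.com
```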
Removing env-host-type in preparation for env and environment changes.
Enforce connection: local and become: no on all localhost plays
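A sketch of the enforced play header:

```
# Sketch: every play targeting localhost carries these directives so it
# neither opens an SSH connection nor escalates privileges.
- name: Example localhost play
  hosts: localhost
  connection: local
  become: no
  gather_facts: no
  tasks:
    - debug:
        msg: "runs locally, without privilege escalation"
```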
|
| | | |
|
| | | |
|
| |/
|/| |
|
|/
|
|
|
|
|
|
|
|
|
|
|
| |
- ansible bootstrap playbook for Fedora 23+
- add conditionals to handle yum vs dnf
- add Fedora OpenShift COPR
- update BYO host README for repo configs and fedora bootstrap
- fix typo in etcd README, remove unnecessary parens in openshift_node main.yml
- rebase on master, update package cache refresh handler for yum vs dnf
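A sketch of the yum-vs-dnf conditional pattern (the package name is illustrative):

```
# Sketch: branch on the discovered package manager fact; Fedora 23+
# ships dnf, older platforms still use yum.
- name: Install origin (dnf)
  dnf:
    name: origin
    state: present
  when: ansible_pkg_mgr == "dnf"

- name: Install origin (yum)
  yum:
    name: origin
    state: present
  when: ansible_pkg_mgr == "yum"
```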
This handles stage environments as well as the eventual change of aep3_beta to aep3.
- Split playbooks into two, one for 3.0 minor upgrades and one for 3.0 to 3.1
upgrades
- Move upgrade playbooks to common/openshift-cluster/upgrades from adhoc
- Added byo wrapper playbooks to set the groups based on the byo
conventions; other providers will need similar playbooks added eventually
- installer wrapper updates for the refactored upgrade playbooks
- call the new 3.0 to 3.1 upgrade playbook
- various fixes for edge cases I hit with a really old config lying
around
- fix output of host facts to show the connect_to value
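A sketch of what such a byo wrapper can look like, mapping byo inventory groups onto the generic group names the common upgrade playbook expects (the group names and include path are illustrative):

```
# Sketch of a byo wrapper playbook: translate byo group conventions
# into generic groups, then hand off to the common upgrade playbook.
- hosts: localhost
  connection: local
  become: no
  gather_facts: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: g_masters_group
      with_items: "{{ groups['masters'] | default([]) }}"

- include: ../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
```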
Instead of combining this with tasks to restart services, add a separate
started+enabled play for masters and nodes at the end of the playbook.
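A minimal sketch of such a trailing play (the service name is illustrative):

```
# Sketch: a dedicated play at the end of the upgrade ensuring services
# are running now and enabled at boot.
- name: Ensure master services are started and enabled
  hosts: masters
  tasks:
    - service:
        name: atomic-openshift-master
        state: started
        enabled: yes
```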
With the openshift to atomic-openshift renames, some services were not enabled
after upgrade. Added enabled directives to all service restart lines in the
upgrade playbook.
Remove upgrade playbook restriction on 3.0.2.
This incorrectly blocks 3.0.1 to 3.1 upgrades, which is a scenario we
should support.
Rather than assuming the etcd data dir, we now read it from master-config.yaml
if using embedded etcd, otherwise from etcd.conf.
Doing so required the use of PyYAML to parse the config file when gathering facts.
Fixed a discrepancy between the data_dir fact and the openshift-enterprise deployment_type.
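The real change lives in the Python fact-gathering code, but the idea can be sketched in playbook form (the config path and the etcdConfig key follow OpenShift's master-config.yaml layout; treat the details as illustrative):

```
# Sketch: derive the etcd data dir from master-config.yaml when using
# embedded etcd, instead of assuming a hardcoded path.
- slurp:
    src: /etc/openshift/master/master-config.yaml
  register: master_config

- set_fact:
    etcd_data_dir: "{{ (master_config.content | b64decode | from_yaml).etcdConfig.storageDirectory }}"
```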
Skip some 3.1 checks if doing a 3.0.x to 3.0.2 upgrade.
Improve the error message when oc whoami fails (i.e. openshift is down) during
pre-upgrade checks, rather than assuming the binary doesn't exist.
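A sketch of the improved pre-upgrade check (the wording and variable names are illustrative):

```
# Sketch: distinguish "oc whoami failed" (API down) from "oc missing".
- name: Verify the API responds
  command: oc whoami
  register: oc_whoami
  failed_when: false

- fail:
    msg: "oc whoami failed; OpenShift appears to be down. Bring the cluster up before running pre-upgrade checks."
  when: oc_whoami.rc != 0
```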