Add upgrade job step after the entire upgrade is performed

This allows us to refer to a group of checks using a single handle.

This approach should make it easier to add new checks without having to
write lots of YAML or work against Ansible (e.g. ignore_errors).
A single action plugin determines what checks to run for each host,
including the arguments to each check. A check is implemented as a class
with a run method, with the same signature as an action plugin and
module, and is normally backed by a regular Ansible module.
Each check is implemented as a separate Python file. This allows whoever
adds a new check to focus solely on a single Python module, and
potentially an Ansible module within library/ too.
All checks are automatically loaded, and only active checks that are
requested by the playbook get executed.
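
For illustration, here is a minimal sketch of how a playbook could
request checks through a single action plugin. The role name
`openshift_health_checker`, the plugin name `openshift_health_check`,
the `@preflight` group handle, and the `docker_storage` check name are
assumptions made for this example, not taken from the commit message.

    # Hypothetical playbook sketch: the role ships the action plugin and
    # the check classes; the plugin then decides, per host, which of the
    # requested checks are active and runs them with their arguments.
    - hosts: OSEv3
      name: run requested health checks
      roles:
        - openshift_health_checker       # assumed role providing the plugin
      post_tasks:
        - name: run checks
          action: openshift_health_check # assumed action plugin name
          args:
            # a single handle ('@preflight') refers to a group of checks;
            # individual checks can also be listed by name
            checks: ['@preflight', 'docker_storage']
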
Fixes Bug 1419893

It turned out that the playbook
`playbooks/byo/openshift-preflight/check.yml` would only work under a
certain `ansible.cfg` in which `roles/` was added to `roles_path`.
That was the case with the example config prior to
b804e70cdd0bc8601bfc87fcf3e34043223828ee.

Manage the excluder functionality

So that the excluder is disabled and reset within the scope of each of
those, in addition to the overall playbook.

Closes #3268

Correct usage of draining nodes

The add_host: task does not change any data on the host and, as standard
practice, has been configured with changed_when: False. This commit
standardizes that usage in the byo and common playbooks. Additionally,
task names are added to each task to improve troubleshooting.
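
As a sketch of the practice described above (the group and host names
here are made up for the example):

    # add_host only manipulates the in-memory inventory; it does not
    # change anything on the target machine, so it is marked
    # changed_when: False and given a descriptive name for easier
    # troubleshooting.
    - name: Add master hosts to the oo_masters_to_config group
      add_host:
        name: "{{ item }}"
        groups: oo_masters_to_config
      with_items: "{{ groups['masters'] | default([]) }}"
      changed_when: False
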
Correct consistency between upgrade playbooks

This was done far into the process, potentially leaving the user in a
difficult situation if they had not considered that they were running
the upgrade playbook on a host that would be restarted. Instead, check
the configuration and which host we're running on in pre-upgrade, and
allow the user to abort before making any substantial changes.
This is a step towards merging master upgrade into one serial process.

Logging deployer tasks

deployer image

Begin requiring Docker 1.12.

Building off the work done for Docker 1.10, we now require Docker 1.12
by default.
The upgrade process was already set to ensure you are running the latest
docker during upgrade, and the standalone docker upgrade playbook can
also be used if desired.
As before, you can override this Docker 1.12 requirement by setting
docker_version=1.10.3 (or similar), and you can skip the default docker
upgrade by setting docker_upgrade=False.
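
A sketch of the overrides mentioned above; the variable names and values
come from the commit message, while the group_vars placement is an
assumption:

    # e.g. in group_vars/OSEv3.yml (or the equivalent vars section of an
    # INI inventory)
    docker_version: "1.10.3"   # pin Docker instead of the new 1.12 default
    docker_upgrade: False      # skip the automatic Docker upgrade
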
Document playbook directories

Trying to improve the name: `init` needs to be loaded before calling the
other subroles.
We don't make `init` a dependency of `common`, `masters` and `nodes` to
avoid running the relatively slow `openshift_facts` multiple times.

Note: on a simple example run of ansible-playbook against a single
docker-based host, I saw the execution time jump from 7s to 17s. That's
unfortunate, but it is probably better to reuse openshift_facts than to
come up with new variables.
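
One way to picture the trade-off described above; the file names are
purely illustrative, since the actual layout is not shown in the
message:

    # Hypothetical top-level ordering: init.yml gathers facts
    # (openshift_facts) once up front, and the other entry points rely
    # on it having already run instead of declaring it as a dependency
    # themselves.
    - include: init.yml

    - include: common.yml

    - include: masters.yml

    - include: nodes.yml
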
Because that's the main playbook directory in use.

Closes #3070

Deprecate node 'evacuation' with 'drain'

* https://trello.com/c/TeaEB9fX/307-3-deprecate-node-evacuation

hook run.

* Added checks to `make ci` for YAML linting
* Modified y(a)ml files to pass lint checks

Scheduler var fix

Fixes #2738

Update scheduler defaults

Remove duplicate `when` key

Fix rare failure to deploy new registry/router after upgrade.

Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.
In some situations (small or single-host clusters) it appears possible
that the deployer pods are running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment fails
and the router/registry continue running the old version, despite the
deployment config being updated correctly.
This change re-orders the router/registry upgrade to follow node
upgrade. However, for a separate control plane upgrade, the
router/registry update still occurs at the end. This is because the
router/registry seem like they should logically be included in a
control plane upgrade, and presumably the user will not manually launch
node upgrade so quickly as to trigger an evac on the node in question.
The workaround for this problem, when it does occur, is simply to run:
oc deploy docker-registry --latest

Added a BYO playbook for configuring NetworkManager on nodes