Handlers normally only trigger at the end of the play, but in this case we
had just set our node schedulable again, resulting in it immediately
being taken down again.
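The usual fix is to flush handlers mid-play, so the restart completes before schedulability is flipped back. A minimal sketch, assuming illustrative task, handler, and path names rather than the playbook's actual ones:

```yaml
# Illustrative sketch: force pending handlers to run now rather than at
# the end of the play, so the node restart happens before we mark the
# node schedulable again.
- name: Apply node configuration
  template:
    src: node-config.yaml.j2            # illustrative path
    dest: /etc/origin/node/node-config.yaml
  notify: restart node

- meta: flush_handlers                  # run the queued "restart node" handler now

- name: Mark node schedulable again
  command: oadm manage-node {{ openshift.common.hostname }} --schedulable=true
```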
Secure registry for atomic registry deployment
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Previously we were setting schedulability to the state defined in the inventory,
without regard to whether the node had been manually made schedulable or
unschedulable. The right thing seems to be to record the state prior to upgrade
and set it back afterward.
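A sketch of that record-and-restore approach, assuming illustrative variable names and `oc`/`oadm` invocations rather than the playbook's actual tasks:

```yaml
- name: Record whether the node was schedulable before the upgrade
  command: >
    oc get node {{ openshift.common.hostname }}
    -o jsonpath='{.spec.unschedulable}'
  register: was_unschedulable

# ... upgrade tasks run here with the node drained ...

- name: Restore the pre-upgrade schedulability state
  command: >
    oadm manage-node {{ openshift.common.hostname }}
    --schedulable={{ 'false' if was_unschedulable.stdout == 'true' else 'true' }}
```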
In AWS where the master node was not part of the nodes and unschedulable
|
| | |
| | |
| | |
| | | |
in an unschedulable way
|
|\ \ \
| |_|/
|/| | |
Bug 1369410 - uninstall fail at task [restart docker] on atomic-host
* Moved the restarting of the docker and network services lower.
* Added /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf to the list of
files to be removed (I suspect the RPM uninstall handles this for
non-containerized installs).
* Sorted the file names.
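Such cleanup is typically a single `file: state=absent` loop over a sorted list; a minimal sketch (the second path is illustrative, not from the actual task):

```yaml
- name: Remove OpenShift files left behind by the uninstall
  file:
    path: "{{ item }}"
    state: absent
  with_items:
  # keep this list sorted so additions are easy to review
  - /etc/systemd/system/docker.service.d/docker-sdn-ovs.conf
  - /etc/sysconfig/origin-node          # illustrative entry
```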
Add run_once to repeatable actions
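`run_once: true` makes Ansible execute a task on a single host instead of once per host in the play; a hedged sketch with an illustrative task (the `oo_first_master` group name follows openshift-ansible inventory conventions):

```yaml
# Without run_once this would execute on every host in the play; with it,
# Ansible runs the task only once, on the first host or the delegate.
- name: Create the cluster router (needed once per cluster, not per host)
  command: oadm router --service-account=router
  run_once: true
  delegate_to: "{{ groups.oo_first_master.0 }}"
```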
Add the Registry deployment subtype as an option in the quick installer.
Metrics improvements
Metrics deployer now checks for route activation. As such, we need a router
in place before we install metrics.
Remove duplicate flannel registration
Signed-off-by: Adam Miller <maxamillion@fedoraproject.org>
Add warning at end of 3.3 upgrade if pluginOrderOverride is found.
Replace some virsh commands with the native virt_XXX Ansible module
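For example, a `shell` task driving `virsh` can become idempotent `virt` module calls; a sketch, assuming an illustrative `vm_name` variable:

```yaml
# Before: shelling out, with no change reporting or idempotence
# - shell: virsh destroy {{ vm_name }} && virsh undefine {{ vm_name }}

# After: the native module
- name: Power off the VM
  virt:
    name: "{{ vm_name }}"
    state: destroyed

- name: Undefine the VM
  virt:
    name: "{{ vm_name }}"
    command: undefine
```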
Fix etcd uninstall
Open OpenStack security group for the service node port range
With OpenShift 3.2, creating a service accessible from outside the
cluster via `nodePort` automatically opens the “local” `iptables`
firewall to allow incoming connections on the `nodePort` of the service.
In order to benefit from this improvement, the OpenStack security group
shouldn’t block those incoming connections.
This change opens, on the OpenStack nodes, the port range dedicated to
service node ports.
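With the `os_security_group_rule` module this is a single rule covering Kubernetes' default node-port range (30000-32767); a sketch, with the security group name as an assumption:

```yaml
- name: Allow inbound traffic on the service node port range
  os_security_group_rule:
    security_group: openshift-node      # illustrative group name
    protocol: tcp
    port_range_min: 30000
    port_range_max: 32767
    remote_ip_prefix: 0.0.0.0/0
```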
|
|\ \ \ \ \ \
| |_|/ / / /
|/| | | | | |
Fix the “node on master” feature
|
| |/ / / /
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
What we want to do is to add the master as a node if:
* `g_nodeonmaster` is set to true, and
* we are not in the case where we want to add new nodes.
The second test was done by only checking whether `g_new_node_hosts` was defined.
This was wrong because, in all cloud-provider setups, this variable was set
to the default value of an empty list (`[]`).
The test has been changed to use the `bool` filter so that it correctly evaluates
to false (and hence effectively adds the master as a node) when `g_new_node_hosts`
is the empty list.
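The distinction matters because a defined-but-empty list still passes a bare `is defined` check; a sketch of the condition before and after (the exact defaults are an assumption, not the playbook's literal lines):

```yaml
# Before: [] is defined, so this wrongly skipped adding the master on
# every cloud-provider setup.
# when: g_nodeonmaster | default(false) | bool and g_new_node_hosts is not defined

# After: the bool filter turns the empty list into false.
when: g_nodeonmaster | default(false) | bool and not (g_new_node_hosts | default([]) | bool)
```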
Fix standalone Docker upgrade missing symlink.
Some expressions now need to be enclosed inside `{{…}}`.
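For instance, a bare variable reference that older Ansible releases silently templated now has to be written out explicitly; an illustrative sketch with assumed variable names, not the actual changed lines:

```yaml
# Previously tolerated:
# vars:
#   node_labels: openshift_node_labels_default

# Now the expression must be templated explicitly:
vars:
  node_labels: "{{ openshift_node_labels_default }}"   # illustrative variable names
```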
|
| |/ /
|/| |
| | | |
Fixes #2317
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Prevents the network egress bug causing node restart to fail during the 3.3
upgrade (even though a separate fix is incoming for this).
The only catch is preventing the openshift_cli role, which requires Docker,
from triggering a potential upgrade, which we still don't want at this
point. To avoid this, we use the same variable to protect the installed
Docker version as we use in pre.yml.
Fixing openshift key error in case of node failure during run (ssh is…
Improvements for Docker 1.10+ Upgrade Image Nuking
In a parallel step prior to the real upgrade tasks, clear out all unused
Docker images on all hosts. This should be relatively safe to interrupt,
as no real upgrade steps have taken place yet.
Once into the actual upgrade, we again clear all images, only this time
with force, and after stopping and removing all containers.
Both rmi commands use a new and hopefully less error-prone invocation to
do the removal; this should avoid the missed orphans we were hitting before.
Added some logging around the current image count before and after this
step; most of the messages are only printed if we're crossing the 1.10
boundary, but one is not, just for additional information in your Ansible log.
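The two passes can be sketched as shell tasks (commands simplified; the actual tasks also log image counts around each step):

```yaml
# Pass 1, before the upgrade: remove only unused images. Interrupting
# here is relatively safe since nothing has been upgraded yet.
- name: Remove unused Docker images
  shell: docker images -q | xargs -r docker rmi

# Pass 2, during the upgrade: stop and remove every container, then
# force-remove all images so the 1.10 migration has nothing to convert.
- name: Remove all containers
  shell: docker ps -aq | xargs -r docker rm -f

- name: Force-remove all Docker images
  shell: docker images -q | xargs -r docker rmi -f
```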
This avoids the automatic image migration in 1.10, which can take a very
long time and potentially cause rpm db corruption.