We require ansible >= 2.2.0 now. Updating version checking playbook to
reflect this change.
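
Such a version gate typically amounts to a single assert against the built-in
ansible_version variable; the following is a minimal sketch under that
assumption, not the repository's actual check:
```
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
  - name: Verify the installed Ansible version
    assert:
      that:
      - ansible_version.full | version_compare('2.2.0', '>=')
      msg: "openshift-ansible requires Ansible >= 2.2.0"
```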

Remove duplicate when key

Fix rare failure to deploy new registry/router after upgrade.
Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.
In some situations (small or single-host clusters) it appears possible
that the deployer pods are still running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment is
marked failed and the router/registry continue running the old version,
despite the deployment config being updated correctly.
This change re-orders the router/registry upgrade to follow node
upgrade. However, for a separate control plane upgrade, the
router/registry update still occurs at the end. This is because the
router/registry seem like they should logically be included in a control
plane upgrade, and presumably the user will not manually launch the node
upgrade so quickly as to trigger an evacuation of the node in question.
The workaround for this problem, when it does occur, is simply to run:
oc deploy docker-registry --latest

Fixes Bug 1395945

Fix invalid embedded etcd fact in etcd upgrade playbook.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
We were getting a different failure here complaining that openshift was
not in the facts, as we had not loaded facts for the first master during
the playbook run. However, this check was recently used in
upgrade_control_plane and should be more reliable.

lhuard1A/fix_list_after_create_on_libvirt_and_openstack
Fix the list done after cluster creation on libvirt and OpenStack
Since 82449c6, the `list.yml` playbooks have been using cloud-provider-specific
variables to find the IPs of the VMs.
Those "cloud provider specific" variables are the ones provided by the dynamic
inventories.
But there was a problem when the `list.yml` playbooks were invoked from the
`launch.yml` ones because, in that case, the inventory does not come from the
dynamic inventory scripts but from the `add_host` done inside
`launch_instances.yml`.
Whereas the GCE and AWS `launch_instances.yml` correctly passed the variables
used by `list.yml` to `add_host`, libvirt and OpenStack were missing that.
Fixes #2856
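
For illustration, the fix amounts to having the libvirt and OpenStack
`launch_instances.yml` register in `add_host` the same kind of variables the
dynamic inventories provide; the variable, group, and loop names below are
placeholders, not the repository's exact ones:
```
- name: Add the new instance to the in-memory inventory
  add_host:
    name: "{{ item.name }}"
    groups: "{{ instance_groups }}"          # placeholder group expression
    ansible_ssh_host: "{{ item.public_ip }}" # connection address
    public_v4: "{{ item.public_ip }}"        # variables that list.yml reads
    private_v4: "{{ item.private_ip }}"
  with_items: "{{ launched_instances }}"     # placeholder registered result
```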

Merge admission plugin configs
Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
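
Illustratively, a master-config.yaml fragment changes roughly as follows (the
plugin name and its settings are only an example):
```
# Before (pre-1.3 layout):
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:
        configuration:
          apiVersion: v1
          kind: ExamplePluginConfig

# After (1.3 layout): the same entries merged one level up.
admissionConfig:
  pluginConfig:
    ExamplePlugin:
      configuration:
        apiVersion: v1
        kind: ExamplePluginConfig
```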

containerized.

Added a BYO playbook for configuring NetworkManager on nodes
In order to do a full install of OpenShift using the byo/config.yml
playbook, it is currently required that NetworkManager be installed
and configured on the nodes prior to the installation. This playbook
introduces a very simple default configuration that can be used to
install, configure and enable NetworkManager on the nodes.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
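
A minimal sketch of such a playbook, using only the stock package and service
modules (the actual BYO playbook may apply additional NetworkManager
configuration):
```
- hosts: nodes
  become: yes
  tasks:
  - name: Install NetworkManager
    package:
      name: NetworkManager
      state: present

  - name: Enable and start NetworkManager
    service:
      name: NetworkManager
      state: started
      enabled: yes
```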

Refactor to use Ansible package module
The Ansible package module will call the correct package manager for the
underlying OS.
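
In other words, tasks that previously hard-coded yum, dnf or apt can be written
once; a generic sketch:
```
# The package module dispatches to yum/dnf/apt/etc. based on the target OS.
- name: Install git on any supported distribution
  package:
    name: git
    state: present
```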

Allow Ansible to continue when a node is unreachable or fails.

Fix yum/subman version check on Atomic.

[openstack] allows timeout option for heat create stack

Check for bad versions of yum and subscription-manager.
Use of yum and repoquery will output an additional warning when using
newer versions of subscription-manager with older versions of yum
(RHEL 7.1). Installing/upgrading newer docker can pull this
subscription-manager in, resulting in problems with older versions of
Ansible and its yum module, as well as with any use of repoquery/yum
commands in our playbooks.
This change explicitly checks for the problem by using repoquery and
fails early if it is found. The check is run early in both config and upgrade.
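
A hedged sketch of this style of early check; the task names, the exact
repoquery invocation and the matched warning text below are placeholders
rather than the playbook's real implementation:
```
- name: Run repoquery to detect the yum/subscription-manager problem
  command: repoquery --plugins yum
  register: repoquery_out
  changed_when: false

- name: Fail early if repoquery printed the compatibility warning
  fail:
    msg: >
      Detected an incompatible yum/subscription-manager combination;
      update yum and subscription-manager, then re-run the playbook.
  when: "'Warning' in repoquery_out.stderr"   # placeholder match string
```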

Optimize the cloud-specific list.yml playbooks
by removing the need to gather facts on all VMs in order to list them.
Also prettify the output of the AWS list the same way it is done for the
other cloud providers.
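
The optimization boils down to something like the sketch below: skip fact
gathering entirely and read the addresses the inventory (dynamic script or
`add_host`) already supplies. The group and variable names are placeholders:
```
- name: List cluster hosts without gathering facts
  hosts: cluster_hosts            # placeholder group
  gather_facts: no
  tasks:
  - debug:
      msg: "{{ inventory_hostname }}: {{ hostvars[inventory_hostname].public_v4 | default('n/a') }}"
```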

Fix GCE cluster creation
Attempting to create a GCE cluster when the `gce.ini` configuration file
contains a non-default network leads to the following error:
```
TASK [Launch instance(s)] ******************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unexpected error attempting to create instance lenaic2-master-74f10, error: {'domain': 'global', 'message': \"Invalid value for field 'resource.networkInterfaces[0]': ''. Subnetwork should be specified for custom subnetmode network\", 'reason': 'invalid'}"}
```
The `subnetwork` parameter needs to be added and taken into account.
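
A sketch of the corresponding launch task, assuming the Ansible 2.2 gce module
(variable names are placeholders and the usual project/credential parameters
are omitted):
```
- name: Launch instance(s)
  gce:
    instance_names: "{{ instance_names }}"
    machine_type: "{{ machine_type }}"
    image: "{{ image }}"
    zone: "{{ zone }}"
    network: "{{ network }}"
    subnetwork: "{{ subnetwork }}"  # required when the network uses custom subnet mode
  register: gce_launch
```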

etcd upgrade playbooks
On Fedora we just blindly upgrade to the latest.
On RHEL we do stepwise upgrades: 2.0, 2.1, 2.2, 2.3, 3.0.
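
Illustratively, one RHEL step looks roughly like the tasks below; the package
name pattern, the restart between steps and the next_etcd_version variable are
assumptions, not the playbooks' exact mechanics:
```
- name: Upgrade etcd to the next intermediate release
  package:
    name: "etcd-{{ next_etcd_version }}*"   # e.g. '2.1', then '2.2', '2.3', '3.0'
    state: latest

- name: Restart etcd so the data store settles before the next step
  service:
    name: etcd
    state: restarted
```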

Includes bash functions for etcdctl2 and etcdctl3 which provide reasonable
defaults for etcdctl functions on a host that's configured with openshift_etcd.

Fix HA upgrade when fact cache deleted.
This variable is referenced in the systemd unit templates; this seems
like the easiest and most consistent fix.

Reconcile role bindings for jenkins pipeline during upgrade.
See https://github.com/openshift/origin/issues/11170 for more info.

Bug 1393663 - Failed to upgrade v3.2 to v3.3
upgrade.

Don't upgrade etcd on backup operations
Fixes Bug 1393187

Fix HA etcd upgrade when facts cache has been deleted.
The simplest way to reproduce this issue is to attempt an upgrade after
removing /etc/ansible/facts.d/openshift.fact. The actual cause in the
field is not entirely known, but critically it is possible for embedded_etcd
to default to true, causing the etcd fact lookup to check the wrong file
and fail silently, resulting in no etcd_data_dir fact being set.
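
For illustration, the fact in question distinguishes the embedded and external
data directories roughly like this; the directory defaults shown are
assumptions, not necessarily the repository's exact values:
```
- name: Set etcd data dir based on deployment type
  set_fact:
    # Paths below are typical defaults, used here only as an example.
    etcd_data_dir: "{{ '/var/lib/origin/openshift.local.etcd' if embedded_etcd | bool else '/var/lib/etcd/' }}"
```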