Commit message

Currently, docker_upgrade is ignored during
cluster upgrades.
This commit ensures that the variable is respected.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1543714
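
A minimal sketch of the pattern, assuming docker_upgrade is a boolean inventory flag; the task below is illustrative, not the playbook's actual upgrade logic:

    - name: Upgrade the Docker packages only when the user has not disabled it
      package:
        name: docker
        state: latest
      when: docker_upgrade | default(true) | bool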
Automatic merge from submit-queue.
[openstack] custom user commands for cloud-init
Allow specifying additional user commands to be executed on all Nova servers
provisioned via Heat.
An example use case is installing and starting os-collect-config agents
to put the Nova servers under configuration management driven by the
host OpenStack cloud's Heat services. This allows integration with other
deployment tools such as TripleO.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Document use cases for custom post-provision Ansible hooks
vs. cloud-init runcmd shell commands. Rename the variable to
openshift_openstack_cloud_init_runcmd.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
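
A hedged inventory sketch of the renamed variable in use; the value format (a list of shell commands handed to cloud-init's runcmd) is an assumption:

    # group_vars sketch: commands cloud-init runs once on every provisioned Nova server
    openshift_openstack_cloud_init_runcmd:
      - yum install -y os-collect-config
      - systemctl enable --now os-collect-config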
Allow specifying additional user commands to be executed on all Nova servers
provisioned via Heat.
An example use case is installing and starting os-collect-config agents
to put the Nova servers under configuration management driven by the
host OpenStack cloud's Heat services. This allows integration with other
deployment tools such as TripleO.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Automatic merge from submit-queue.
Removing prefix, replacing with cidr, pool_start and pool_end vars
The Heat template was hardcoded with a /24 CIDR, which limited customers to 251 IP addresses in the OpenStack subnet. This allows the user to configure the CIDR and the allocation pool start and end.
Addresses issue #6829 that I created last week.
@tomassedovic please take a look
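
For illustration, a minimal Heat snippet of the pattern this enables; parameter and resource names here are placeholders rather than the template's actual ones:

    heat_template_version: 2016-10-14
    parameters:
      subnet_cidr:
        type: string
        default: 192.168.99.0/24
      pool_start:
        type: string
      pool_end:
        type: string
    resources:
      openshift_subnet:
        type: OS::Neutron::Subnet
        properties:
          network: { get_resource: openshift_net }   # placeholder network resource
          cidr: { get_param: subnet_cidr }
          allocation_pools:
            - start: { get_param: pool_start }
              end: { get_param: pool_end }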
Automatic merge from submit-queue.
Limit host scope during plays
Many plays only target a select subset of hosts,
especially oo_first_master for components such
as logging and registry.
This commit limits the scope of most plays to
eliminate unnecessary task execution on node
groups. This will result in significant time
savings for large deployments.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1516526
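
A short sketch of the play-scoping pattern described above; the role reference is illustrative, not the exact play that changed:

    # Target only the first master instead of every host in the cluster.
    - name: Configure logging components
      hosts: oo_first_master
      roles:
        - role: openshift_logging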
Many plays only target a select subset of hosts,
especially oo_first_master for components such
as logging and registry.
This commit limits the scope of most plays to
eliminate unnecessary task execution on node
groups. This will result in significant time
savings for large deployments.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1516526
An entry-point playbook was imported by mistake.
This caused common init code to run again, which
is undesirable.
This commit changes the import to use the corresponding
'private' play, which does not call the init code.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1542855
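
A hedged sketch of the import change; the playbook paths are illustrative, not the exact files touched:

    # Before: importing the public entry point re-ran the shared init plays.
    # - import_playbook: ../../openshift-etcd/redeploy-certificates.yml
    # After: import the corresponding private play, which skips the init code.
    - import_playbook: ../../openshift-etcd/private/redeploy-certificates.yml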
Automatic merge from submit-queue.
Redeploy router certificates during upgrade only when secure.
Wrap the upgrade logic for redeploying certificates into another block so that insecure registries do not perform any certificate tasks.
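
A minimal sketch of the guarded-block pattern; the variable name and included task file are assumptions, not the playbook's actual identifiers:

    - name: Registry certificate redeploy, skipped entirely for insecure registries
      when: openshift_hosted_registry_secure | default(true) | bool   # assumed flag name
      block:
        - include_tasks: redeploy_registry_certificates.yml           # assumed task file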
Determine which host is the etcd CA host
The etcd CA host is now determined explicitly rather than assumed to be the
first host in the etcd host group.
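
A rough sketch of one way such a selection could work, assuming the CA lives at /etc/etcd/ca/ca.crt and the group is named 'etcd'; the project's actual logic may differ:

    - name: Check each etcd host for an existing CA
      stat:
        path: /etc/etcd/ca/ca.crt        # assumed CA location
      register: etcd_ca_stat

    - name: Prefer a host that already holds a CA, else fall back to the first etcd host
      run_once: true
      set_fact:
        etcd_ca_host: >-
          {{ groups['etcd']
             | map('extract', hostvars)
             | selectattr('etcd_ca_stat.stat.exists')
             | map(attribute='inventory_hostname')
             | list
             | first
             | default(groups['etcd'] | first, true) }}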
Automatic merge from submit-queue.
Changing the check for the number of etcd nodes
This playbook is called (via std_include.yml) when the scale-up playbook for either master or etcd is run. When scaling up the number of master/etcd nodes it is feasible, if not likely, that the number of etcd nodes is not 1, 3 or 5, so this check causes the scale-up to fail.
The two example scenarios driving this change are:
You have a cluster with 3 master nodes (each running etcd) and one of those masters fails. The failed master is removed from both the OpenShift cluster and the etcd cluster, and the inventory is updated to reflect the state of the cluster minus the failed node. You would then run the scale-up playbook to add a new master/etcd node to the cluster using an inventory containing an etcd group of just 2 nodes.
As above, but the cluster has 5 master nodes. If you lose a master node and update the inventory to reflect that, the inventory will contain an etcd group with 4 nodes.
@sdodson
Previously submitted as https://github.com/openshift/openshift-ansible/pull/6979
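
For context, a sketch of the kind of strict check being relaxed; the assertion below is illustrative, not the repository's exact sanity check:

    # Old strict check (sketch): fails for the 2- or 4-node scale-up inventories described above.
    - name: Ensure the etcd group contains 1, 3 or 5 hosts
      assert:
        that:
          - "groups['etcd'] | length in [1, 3, 5]"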
In making the initial change I introduced some spaces at the beginning of the line. Removing them.
Use wait_for_connection to validate that the SSH transport is alive
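
A minimal example of the module in use (the timeout value is arbitrary):

    - name: Wait until the SSH transport is usable before continuing
      wait_for_connection:
        timeout: 300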
Automatic merge from submit-queue.
Set up the docker excluder, if requested, before container_runtime is installed
This prevents possible container runtime upgrades during cluster
config.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1540800
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
This prevents possible container runtime upgrades during cluster
config.
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
Automatic merge from submit-queue.
[1540537] Add base package installation to upgrade playbooks
Hosts will need the Python ipaddress module installed if it was not
installed during the initial installation.
Bug 1540537
https://bugzilla.redhat.com/show_bug.cgi?id=1540537
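
A hedged sketch of the kind of task added; python-ipaddress (the RHEL 7 backport package) is an assumption about the exact package list used:

    - name: Ensure base packages required by the upgrade are present
      package:
        name: python-ipaddress    # provides the ipaddress module on Python 2 hosts
        state: present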
Hosts will need the Python ipaddress module installed if it was not
installed during the initial installation.
Bug 1540537
https://bugzilla.redhat.com/show_bug.cgi?id=1540537
Automatic merge from submit-queue.
Fix uninstall using openshift_prometheus_state=absent
This was broken in https://github.com/openshift/openshift-ansible/pull/6811
bz: https://bugzilla.redhat.com/show_bug.cgi?id=1540806
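
For reference, a sketch of how the uninstall is typically driven; the play targeting shown is an assumption:

    - name: Uninstall Prometheus
      hosts: oo_first_master
      roles:
        - role: openshift_prometheus
          openshift_prometheus_state: absent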
This was broken in https://github.com/openshift/openshift-ansible/pull/6811
bz: https://bugzilla.redhat.com/show_bug.cgi?id=1540806
Automatic merge from submit-queue.
3.9 upgrade: fix typos in restart masters procedure
* 'rolling_restart_mode' should be 'services', not 'service'
* use 'state: restarted' to properly restart services
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1540054
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
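
A short sketch of the corrected restart handling; the service names and mode variable follow common openshift-ansible conventions but should be read as assumptions here:

    - name: Restart master services
      service:
        name: "{{ item }}"
        state: restarted
      with_items:
        - "{{ openshift_service_type }}-master-api"
        - "{{ openshift_service_type }}-master-controllers"
      when: openshift_rolling_restart_mode | default('services') == 'services'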
* 'rolling_restart_mode' should be 'services', not 'service'
* use 'state: restarted' to properly restart services
Signed-off-by: Vadim Rutkovsky <vrutkovs@redhat.com>
Automatic merge from submit-queue.
Make sure to include upgrade_pre when upgrading master nodes
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1542399
add deprovisioning for ELB (and IAM certs)
Add playbooks to handle deleting ELBs and any IAM certs that may have been created during provisioning.
Redo ELB creation to remove the arbitrary wait and instead retry until ELB creation succeeds.
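
A hedged sketch of the retry-until pattern applied to ELB teardown; the module choice and variable names are assumptions:

    - name: Delete the provisioned ELB, retrying until AWS confirms removal
      ec2_elb_lb:
        name: "{{ openshift_aws_elb_name }}"     # assumed variable name
        state: absent
        region: "{{ openshift_aws_region }}"     # assumed variable name
      register: elb_result
      retries: 10
      delay: 15
      until: elb_result is succeeded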
Initial support for 3.10
Upgrades: pass openshift_manage_node_is_master to master nodes during upgrade
This ensures the required labels for masters are set.
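
A brief sketch of passing the flag when node management runs against masters during the upgrade; the group and role names follow openshift-ansible conventions but are assumptions here:

    - name: Update node configuration and labels on master nodes
      hosts: oo_masters_to_config
      roles:
        - role: openshift_manage_node
          openshift_manage_node_is_master: true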
Automatic merge from submit-queue.
Elasticsearch 5.x opt-in
FYI @richm @jcantrill
Automatic merge from submit-queue.
Move cert SAN update logic to openshift-etcd
Recent additions for checking certificate SAN validity were added to the upgrade playbooks and should be moved to the openshift-etcd playbooks so that the check is also performed when the openshift-etcd upgrade playbook is run directly, rather than only during a full control plane upgrade. Additionally, the formerly included playbook for redeploying certificates called the main entry-point playbook, which caused the initialization playbooks to run twice.
Use rollout instead of deploy (deprecated)
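
For illustration, the substitution expressed as an Ansible task; the deployment config name is a placeholder:

    # Previously: oc deploy dc/docker-registry --latest   (deprecated)
    - name: Trigger a new rollout of the registry deployment config
      command: oc rollout latest dc/docker-registry -n default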
add S3 bucket cleanup
Default to just cleaning out all of the objects in the S3 bucket (only if openshift_aws_create_s3 is 'true').
If you really, truly want to delete the S3 bucket and free up the bucket name, you can set openshift_aws_really_delete_s3_bucket to 'true' ('false' by default).
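
A hedged sketch of the opt-in destructive path; the bucket-name variable is an assumption, and the default objects-only cleanup is omitted:

    - name: Delete the S3 bucket and everything in it (opt-in only)
      s3_bucket:
        name: "{{ openshift_aws_s3_bucket_name }}"   # assumed variable name
        state: absent
        force: yes                                   # removes all objects before deleting the bucket
      when:
        - openshift_aws_create_s3 | default(true) | bool
        - openshift_aws_really_delete_s3_bucket | default(false) | bool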