Commit message
|
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
Added prerequisite for python-openstackclient installation
|
dependencies
|
python-openstackclient installation
|
* openshift-prep: the bash-completion and vim-enhanced packages are now optional, controlled by the install_debug_packages switch
* openshift-prep: newline removal
|
Set ansible_become for the OSEv3 group
|
openshift-ansible requires root on the cluster nodes but does not
explicitly set it in its playbooks (as we do), so let's set it in our
inventory instead of requiring `--become` to be passed to
`ansible-playbook`.
That simplifies the installation steps and lets us include the
provisioning and openshift-ansible playbooks in a single playbook.
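For illustration only (not part of the commit itself), a minimal sketch of
the idea expressed as a group variable, assuming the group is named OSEv3
as in openshift-ansible:

    # group_vars/OSEv3.yml -- illustrative location, not the repo's actual layout
    # Escalate to root for every play that targets the OSEv3 group, so that
    # `--become` no longer has to be passed to ansible-playbook.
    ansible_become: true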
|
* Add roles to create and delete an empty image (workaround)
The GCE API does not allow empty disks to be specified in instance templates; this is a workaround for that limitation. The version of libcloud currently available as an RPM on my build system also does not allow a family to be specified for this image. The impact of this is limited because the GCE API has a bug where specifying the image by its family does not work as expected anyway.
* Refactor disk creation to instance templates
There is currently a bug in the GCE API: when a non-boot disk's sourceImage is specified as a family, the sourceImage from the boot disk is used instead. To work around this we do not use a family to specify this sourceImage, even though that would be more appropriate.
* Instance group related pauses
We introduce two pauses:
1) Immediately after creating the "core" deployment, to give the instance groups time to become "complete". Ideally we would poll the API instead of waiting a fixed amount of time, but this is better than nothing.
2) A wait for the newly spawned instances to become reachable. Ideally we would use wait_for_connection for this, but the following bug keeps it from working for instances behind a bastion host: https://github.com/ansible/ansible/issues/23774
* Use cloud-init to configure attached data disks
* Cosmetic cleanup; removed some values that are defaults
Also drop the empty image family; there is no need to version this image.
* Query the instance group manager to see if instances are ready
* The empty image archive is very small, so no composite upload is needed
* Use a more robust check that instances are ready for SSH (see the sketch after this list)
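The tasks themselves are not reproduced here; a hedged sketch of what a
"more robust" SSH readiness check can look like in Ansible, with
`instance_ips` being an assumed variable name:

    # Illustrative only: poll TCP port 22 on each new instance rather than
    # sleeping for a fixed amount of time.
    - name: Wait until the spawned instances accept SSH connections
      wait_for:
        host: "{{ item }}"
        port: 22
        search_regex: OpenSSH   # wait for the SSH banner, not just an open port
        timeout: 300
      with_items: "{{ instance_ips }}"
      delegate_to: localhost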
|
* Check whether the deployment exists in a failed state
and delete it before continuing if it does (sketched below).
Resolves: #438
* Differentiate the gold image deployment when deploying origin
so that both gold images can be present in one GCP project.
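As a rough sketch only (the deployment name variable is an assumption, and
the actual test for the failed state, which parses the describe output, is
elided here):

    - name: Check whether the deployment already exists
      command: gcloud deployment-manager deployments describe {{ dm_deployment }}
      register: dm_describe
      failed_when: false
      changed_when: false

    - name: Delete the stale deployment so it can be recreated cleanly
      command: gcloud deployment-manager deployments delete {{ dm_deployment }} -q
      when: dm_describe.rc == 0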
|
* Add the static-inventory role, which configures the inventory/hosts
file at the given path, creating it if it does not exist.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
* subscription manager: added 10 retries with a 1 second delay (see the sketch below)
* subscription manager: added `until` conditions
* subscription manager: fixed a typo
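A hedged sketch of that retry pattern (the credential variables are
assumptions, not the role's real names):

    - name: Register the host with subscription-manager
      redhat_subscription:
        state: present
        username: "{{ rhsm_username }}"
        password: "{{ rhsm_password }}"
        pool: "{{ rhsm_pool }}"
      register: rhsm_result
      until: rhsm_result is succeeded
      retries: 10
      delay: 1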
|
Azure logging metrics and logging deployment in the post-installation step
|
* App logging enabled by default
* Ops logging disabled by default
* Elasticsearch HA by default
* Fluentd on all nodes/masters
* All the rest of the components deployed on infra nodes
* Dynamic storage
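These defaults roughly map onto openshift-ansible logging inventory
variables; a sketch with illustrative values only (consult the
openshift-ansible documentation for the authoritative names and defaults):

    openshift_logging_install_logging: true    # application logging on by default
    openshift_logging_use_ops: false           # dedicated ops logging off by default
    openshift_logging_es_cluster_size: 3       # HA Elasticsearch
    openshift_logging_es_nodeselector:
      region: infra                            # non-fluentd components on infra nodes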
|
* Add ssh keys to ovirt VM template
* Adjusted role path
* Adding .example to list of ignored inventory files
* Fixed ssh-key placement
* instance group and install code for OCP
* Added info about certs and qcow to README
* Not a vSphere
* Added load balancer to instance groups
* Added check for installing local satellite katello rpm
* Reorganized variables
* Formatting
* Playbook to output DNS entries in nsupdate format
* Hosts commented out for publishing
* Added variables file for user edit
* Moved variables around for centralized management by user
* Updated documentation
* Formatting
* Renaming to match style of repo
* Changing underscores to dashes for style
* Updated naming convention to match rest of repo
* Updated naming convention to match rest of repo
* Fixed link
* Resolving Lint issues
|
* Set up NetworkManager automatically
This removes the extra step of running
`openshift-ansible/playbooks/byo/openshift-node/network_manager.yml`
before installing OpenShift. In addition, that playbook relies on a
host group that the provisioning doesn't provide (oo_all_hosts).
Instead, we set up NetworkManager on CentOS nodes automatically and
restart it on RHEL (which is necessary for the nodes to pick up the
new DNS we configured the subnet with).
This makes the provisioning easier and more resilient (a sketch of the
idea follows this list).
* Apply the node-network-manager role to every node
It makes the code simpler and more consistent across distros.
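A minimal sketch of the distro handling described above (not the role's
literal tasks):

    # Install and enable NetworkManager on CentOS; restart it so the nodes
    # pick up the DNS configured on the subnet.
    - name: Install NetworkManager on CentOS
      package:
        name: NetworkManager
        state: present
      when: ansible_distribution == 'CentOS'

    - name: Enable and restart NetworkManager
      service:
        name: NetworkManager
        state: restarted
        enabled: yes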
|
Replace greaterthan and equalto in openstack-stack
|
These two Jinja filters were added in 2.8, which is notably not packaged in
CentOS and RHEL. This removes them in favour of the `==` and `>` operators,
which are available in Jinja 2.7.
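For illustration only (the variable is made up, not taken from
openstack-stack; the `greaterthan` usage is replaced by `>` in the same
way):

    - debug:
        msg: "this is an infra node"
      when: node_type is equalto 'infra'   # needs Jinja >= 2.8

    - debug:
        msg: "this is an infra node"
      when: node_type == 'infra'           # works with Jinja 2.7 as shipped in CentOS/RHEL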
|
* Refactor gcloud.sh script for DRY
Introduce run_playbook() fn so the rest of the script can be simplified.
* Move OCP variables to one place
|
* Switch the sample inventory to CentOS
This changes the image name and deployment types to use centos instead
of rhel and sets `rhsm_register` to false.
With these changes, the inventory should be immediately deployable
using the default values (assuming the image, network and flavor names
match).
Ideally, the upstream CI will eventually use this inventory with little to
no changes as well (the variables involved are sketched after this list).
* Specify the origin openshift_release
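A sketch of the kind of inventory values involved (the image name, release
and variable names here are illustrative, not copied from the sample
inventory):

    openstack_default_image_name: CentOS-7-x86_64-GenericCloud
    openshift_deployment_type: origin
    openshift_release: "3.6"    # example only; pick the release you target
    rhsm_register: false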
|
* Add default values for some OpenStack vars
Ansible shows errors when the `rhsm_register` and
`openstack_flat_secgrp` values are not present in the inventory, even
though they have sensible default values.
This makes them both default to false when they're not specified (see
the sketch below).
* Comment out the flat security group option in the inventory
It's no longer required to be there, so let's comment it out.
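A hedged sketch of the defaulting pattern (the fact names are made up):

    - name: Decide whether to register with RHSM
      set_fact:
        do_rhsm_register: "{{ rhsm_register | default(false) | bool }}"

    - name: Decide whether to use the flat security group layout
      set_fact:
        use_flat_secgrp: "{{ openstack_flat_secgrp | default(false) | bool }}"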
|
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
provisioning (#518)
* prerequisites.yml: check prerequisites on localhost needed for provisioning
provision.yml: includes prerequisites.yml
* prerequisites: indentation fixed
* prerequisites.yml: use the ansible_version variable and the OpenStack modules for Ansible (a version check of this kind is sketched after this list)
* prerequisites.yml: os_keypair is not suitable for this purpose
* prerequisites.yml: replaced the `openstack keypair` command with shade
- there is no Ansible module for this now
- os_keypair is not suitable for this purpose
- python-openstackclient dependency is not desirable
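A sketch of such a localhost prerequisites check (the minimum version shown
is an assumption, not the playbook's actual requirement):

    - name: Make sure the control host runs a recent enough Ansible
      hosts: localhost
      gather_facts: false
      tasks:
        - assert:
            that:
              - ansible_version.full is version('2.3', '>=')
            msg: "A newer Ansible release is required for the provisioning playbooks"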
|
setting enabled=yes for heketi
|
adding some fixes for annette issues
|
* GCP: Allow for custom VPC subnet
* Couple of cosmetic fixes to the PR #500
* Better description of config value
|
Add ISSUE/PR github templates
|
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
Disable swap on nodes
|
Enable dnsmasq or it fails resolving k8s svc
|
Manage packages to install/update for openstack provider
|
Allow the required-packages installation and the yum-update-all step to be
optionally disabled (see the sketch below).
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
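Illustrative toggles only (the variable and package-list names are
assumptions, not the role's real ones):

    - name: Install the packages required by openshift-ansible
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ required_packages }}"
      when: install_required_packages | default(true) | bool

    - name: Update all packages (optional)
      yum:
        name: '*'
        state: latest
      when: update_all_packages | default(false) | bool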
|
Persist DNS configuration for nodes for openstack provider
|
* Firstly, provision a Heat stack with the given public resolvers.
* After the DNS node is configured as an authoritative server,
switch the Heat stack's Neutron subnet to that resolver
(private_dns_server) so that it becomes the first entry pushed
into the hosts' /etc/resolv.conf. It will serve the cluster
domain requests for OpenShift nodes and workloads (one way to
express this switch is sketched below).
* Drop the post-provision /etc/resolv.conf nameserver hacks, as they are
no longer needed.
* Fix the DNS floating IPs output and add the private IPs output as well.
* Update the docs, clarify localhost vs. server requirements, and add the
required NetworkManager setup step.
* Use post-provision task names instead of comments.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
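One way to express the subnet switch directly with Ansible's os_subnet
module, as a sketch only (the playbooks may drive it through Heat instead,
and the variable names are assumptions):

    - name: Point the Neutron subnet at the cluster DNS node first
      os_subnet:
        state: present
        network_name: "{{ openstack_network_name }}"
        name: "{{ openstack_subnet_name }}"
        cidr: "{{ openstack_subnet_cidr }}"
        dns_nameservers: "{{ [private_dns_server] + public_dns_nameservers }}"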
|
Use wait_for_connection for the Heat nodes
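A minimal sketch of what that looks like in a play against the new nodes
(the values are illustrative):

    - name: Wait for the Heat-provisioned nodes to become reachable
      wait_for_connection:
        delay: 5
        timeout: 300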
|