| Commit message | Author | Age | Files | Lines |
For now, we should restrict the quick installer to a single master.
This should change in the near future.
- Split the upgrade playbooks into two: one for 3.0 minor upgrades and one for
  3.0 to 3.1 upgrades
- Move upgrade playbooks from adhoc to common/openshift/cluster/upgrades
- Added byo wrapper playbooks to set the groups based on the byo
  conventions; other providers will need similar playbooks added eventually
- Installer wrapper updates for the refactored upgrade playbooks
- Call the new 3.0 to 3.1 upgrade playbook
- Various fixes for edge cases I hit with a really old config lying
  around
- Fix output of host facts to show the connect_to value
atomic-openshift-installer: Remove question for container install
Removing the option for a container-based install from the quick
installer while it is in tech preview.
If this file exists on disk, the installer will use it if the user didn't
specify an ansible config file on the CLI.
Rename share directory to match the rpm name. (utils vs util)
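The fallback described above can be sketched roughly as follows. This is a minimal illustration, not the installer's actual code: the default path and the function name are assumptions.

```python
import os

# Hypothetical default location for illustration; the real file ships in
# the utils rpm's share directory.
DEFAULT_ANSIBLE_CFG = "/usr/share/atomic-openshift-utils/ansible.cfg"

def resolve_ansible_config(cli_config=None, default=DEFAULT_ANSIBLE_CFG):
    """Prefer the config file given on the CLI; otherwise fall back to
    the shipped default, but only if it actually exists on disk."""
    if cli_config:
        return cli_config
    if os.path.exists(default):
        return default
    return None
```

The key point is that the shipped default is only used when the file exists and the user gave nothing on the CLI.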
Test fixes related to connect_to
The tests didn't know anything about connect_to, and there was a case where
we weren't handling the migration from the 3.0 installer config format to 3.1.
This generates the ansible inventory based on the pruned list of non-installed
hosts we've created rather than the full host list provided in installer.cfg.yaml
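The pruning step can be sketched like this. It is only an illustration, assuming hosts are plain dicts with an `installed` flag; the installer's real data model differs.

```python
def uninstalled_hosts(hosts):
    """Keep only the hosts that still need installation; the inventory is
    generated from this pruned list instead of every host listed in
    installer.cfg.yaml."""
    return [h for h in hosts if not h.get("installed", False)]

hosts = [
    {"connect_to": "ose3-master.example.com", "installed": True},
    {"connect_to": "ose3-node1.example.com", "installed": False},
]
pruned = uninstalled_hosts(hosts)  # only the node that still needs install
```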
Updating the atomic-openshift-installer local connection logic for th…
connect_to addition.
Upgrade enhancements
- Push config dir logic out of module and use host variables instead.
- Backup master config with ansible utility.
- Add error handling for the upgrade config module.
- Add verbose option to installer.
- Return details on what we changed when upgrading config.
- Cleanup use of first master.
- Don't install upgrade rpms to check what version we'll upgrade to.
Changes to installer.cfg.yaml to allow for better defaults in unattended mode.
Update example in the docs.
TODO: We desperately need test cases for:
- interactive with no config file
- interactive with config file and all installed hosts
- interactive with config file and no installed hosts
- interactive with config file and some installed, some uninstalled hosts
- unattended with config file and all installed hosts (with and without --force)
- unattended with config file and no installed hosts (with and without --force)
- unattended with config file and some installed, some uninstalled hosts (with and without --force)
Previously the output was a little confusing. We didn't display anything about
the uninstalled hosts.
all cases
Previously we were writing out an inventory like this:
~~~
[OSEv3:children]
masters
nodes
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise
ansible_connection=local
[masters]
ose3-master.example.com openshift_hostname=ose3-master.example.com
[nodes]
ose3-master.example.com openshift_hostname=ose3-master.example.com
ose3-node1.example.com openshift_hostname=ose3-node1.example.com
ose3-node2.example.com openshift_hostname=ose3-node2.example.com
~~~
The problem with that is that all the hosts are now considered local connections.
In addition, our sudo check wasn't working as expected: we would check that we
had sudo, but the playbooks were not running with root privileges. When
gathering facts you'd hit:
~~~
__main__.OpenShiftFactsFileWriteError: Could not create fact file: /etc/ansible/facts.d/openshift.fact, error: [Errno 13] Permission denied: '/etc/ansible/facts.d/openshift.fact'
~~~
Instead, the test for local connections needs to be per host. Any time we're not running as root we need `ansible_become` set:
~~~
ose3-master.example.com openshift_hostname=ose3-master.example.com ansible_connection=local ansible_become=true
~~~
https://bugzilla.redhat.com/show_bug.cgi?id=1274201#c13
Removing the full call to config resulted in rpms not getting upgraded. Config
was doing a yum update of everything, which picks up
atomic-openshift-master obsoleting openshift-master; the narrower yum call
changed here would not. Instead we switch to a direct call to yum, which
correctly picks up the obsoletes and updates to the atomic-openshift packages.
atomic-openshift-installer: Add default openshift-ansible-playbook
|
| | |
| | |
| | |
| | |
| | | |
This adds a default value for the openshift-ansible-playbook directory and also
removes the requirement that it be writable.
atomic-openshift-installer: Correct inaccurate prompt
Update to check both hostname and public_hostname.
Remove ansible_sudo=no as I failed to notice we were already checking
if ansible_ssh_user == 'root' and setting it there.
This adds a check to see if the host the installer is running on is one of
the hosts to be installed, and if so sets
ansible_connection=local
ansible_sudo=no
in the inventory file.
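That check might be sketched as below. This is an assumption-laden illustration: the function name is hypothetical, and the real installer's comparison logic (hostname vs public_hostname facts) differs.

```python
import socket

def is_installer_host(connect_to):
    """Hypothetical check: is the target host the same machine the
    installer is running on? Compares against the local hostname and
    FQDN for illustration."""
    local_names = {socket.gethostname(), socket.getfqdn()}
    return connect_to in local_names
```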
atomic-openshift-installer: Text improvements
Improvements to some of the installer text based on suggestions from
the doc team.
enterprise is being phased out in favor of openshift-enterprise, so you need to
specify where you wish to go.
Because we're now installing from an rpm, we have a good idea where to find the
default playbooks and shouldn't require the user to tell us.