Commit message
docker-buildvm-rhose is dead
Both the original and the new hosts are internal hosts used only for
testing of unreleased content. One should not expect to use them outside
of Red Hat.
Removes hardcoded python2
* replaces the hardcoded items in favor of pulling a user's environment
* resolves #383
* Feedback and/or additional testing is more than welcome
Enable htpasswd by default in the example hosts file.
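
A byo example hosts file entry of the kind this change describes might look roughly like the fragment below; the provider name and htpasswd file path shown here are illustrative assumptions, not copied from the commit:

```
# Hypothetical fragment of the example hosts file; values are illustrative.
[OSEv3:vars]
# Enable an HTPasswd identity provider so htpasswd auth works out of the box.
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
```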
Custom CORS configuration
- Ability to specify multiple masters (see the inventory sketch after this list)
- configures the CA only a single time on the first master
- creates and distributes additional certs for additional master hosts
- Depending on the status of openshift_master_cluster_defer_ha (defaults to
  False), one of two actions is taken when multiple masters are defined:
  1. If openshift_master_cluster_defer_ha is true
     a. Certs/configs for all masters are deployed
     b. openshift-master service is only started and enabled on the master
     c. HA configuration is expected to be handled by the user manually after
        the completion of the playbook run.
  2. If openshift_master_cluster_defer_ha is false or undefined
     a. Certs/configs for all masters are deployed
     b. a Pacemaker/RHEL HA cluster is configured
        i. VIPs are configured based on the values of
           openshift_master_cluster_vip and
           openshift_master_cluster_public_vip
        ii. The openshift-master service is configured as an active/passive
            cluster service
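
A minimal inventory sketch of the multi-master setup described above; only the variable names come from this message, while the host names and VIP addresses are placeholders:

```
# Hypothetical inventory fragment; hosts and addresses are placeholders.
[masters]
master1.example.com
master2.example.com
master3.example.com

[OSEv3:vars]
# false/undefined: a Pacemaker/RHEL HA cluster is configured with these VIPs.
# true: only certs/configs are deployed and HA setup is left to the user.
openshift_master_cluster_defer_ha=false
openshift_master_cluster_vip=192.0.2.10
openshift_master_cluster_public_vip=198.51.100.10
```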
- move the inventory/byo/hosts file to inventory/byo/hosts.example
- add a .gitignore to inventory/byo to prevent an inventory/byo/hosts file
  from being re-added to the repo (the implied ignore rule is sketched below)
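
The implied ignore rule would presumably be nothing more than:

```
# inventory/byo/.gitignore (sketch implied by the commit message)
hosts
```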
- Add support to bin/cluster for specifying etcd hosts
  - defaults to 0; if no etcd hosts are selected, embedded etcd is configured
- Updates for the byo inventory file for etcd and master as node by default
  (see the inventory sketch after this list)
- Consolidation of cluster logic more centrally into the common playbook
- Added etcd config support to playbooks
- Restructured byo playbooks to leverage the common openshift-cluster playbook
- Added support to the common master playbook to generate and apply external
  etcd client certs from the etcd CA
- start of a refactor for better handling of master certs in a multi-master
  environment
- added the openshift_master_ca and openshift_master_certificates roles to
  manage master certs instead of generating them in the openshift_master
  role
- added etcd host groups to the cluster update playbooks
- added better handling of host groups when they are either not present or are
  empty
- Update AWS readme
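
Under these changes, a byo inventory would presumably grow an etcd host group along the following lines; the host names are placeholders and not taken from the commit:

```
# Hypothetical byo inventory fragment with an external etcd group.
[OSEv3:children]
masters
nodes
etcd

# With no etcd hosts defined, embedded etcd is configured instead.
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com
```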
Remove openshift-deployer.kubeconfig from master template
Sync config template
Update enterprise image names
Switch to node auto registration
Add deployer to list of serviceAccountConfig.managedNames
Move package installation before registering facts
change default kubeconfig location
Change system:openshift-client to system:openshift-master
Rename node cert/key/kubeconfig per openshift/origin#3160
Update references to /var/lib/openshift/openshift.local.certificates
- Templatize node config
- Templatize master config
- Integrated sdn changes
- Updates for openshift_facts
- Added support for node, master and sdn related changes
- registry_url
- added identity provider facts
- Removed openshift_sdn_* roles
- Install httpd-tools if configuring htpasswd auth
- Remove references to external_id
  - Setting external_id interferes with nodes associating with the generated
    node object when pre-registering nodes.
- osc/oc and osadm/oadm binary detection in openshift_facts
Misc Changes:
- make non-errata puddle default for byo example
- comment out master in list of nodes in inventory/byo/hosts
- remove non-error errors from fluentd_* roles
- Use admin kubeconfig instead of openshift-client
* rename option_images to _{oreg|ortr}_images
Query libvirt’s DHCP leases rather than inspecting the host’s ARP cache
to find the VMs’ IPs.
not dictating what the ec2.ini file should look like.
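
For context, ec2.ini is the configuration file read by Ansible's ec2.py dynamic inventory script; a user-supplied file might contain something like the fragment below (the settings shown are illustrative, not mandated by these playbooks):

```
# Illustrative ec2.ini fragment for the ec2.py dynamic inventory script.
[ec2]
regions = us-east-1
destination_variable = public_dns_name
vpc_destination_variable = ip_address
cache_max_age = 300
```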
Fixed a variable naming bug due to a rename.
- Do not attempt to fetch a file to the same file location when playbooks are
  run locally on the master
- Fix for openshift_facts when run against a host in a VPC that does not
  assign internal/external hostnames or IPs
- Fix setting of labels and annotations on node instances and in
  openshift_facts
- converted openshift_facts to use JSON for local_fact storage instead of
  an INI file; included code that should migrate existing INI users to JSON
- added region/zone setting to the byo inventory (example below)
- Fix fact-related bug where deployment_type was being set on the node role
  instead of the common role for node hosts
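
The region/zone setting mentioned above would typically surface as node labels in the byo inventory; a sketch with a placeholder host name and example label values:

```
# Hypothetical byo inventory fragment; label values are examples only.
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
```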
aws/hosts/ being added.
According to https://libvirt.org/formatdomain.html#elementsMetadata , the `metadata` tag can contain only one top-level element per namespace.
Because of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag.
As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group.
This is problematic for the `terminate.yml` playbook, which iterates over `groups['tag-env-{{ cluster-id }}']`.
The symptom was that `oo_hosts_to_terminate` was not defined.
In the end, as Ansible couldn’t iterate over the value of `groups['oo_hosts_to_terminate']`, it iterated over its characters:
```
TASK: [Destroy VMs] ***********************************************************
failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
msg: virtual machine g not found
failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
msg: virtual machine g not found
failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
msg: virtual machine r not found
failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
msg: virtual machine r not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
msg: virtual machine u not found
failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
msg: virtual machine u not found
failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
msg: virtual machine p not found
failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
msg: virtual machine p not found
failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
msg: virtual machine s not found
failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
msg: virtual machine s not found
failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
msg: virtual machine [ not found
failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
msg: virtual machine [ not found
failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
msg: virtual machine ' not found
failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
msg: virtual machine ' not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
etc…
```
Configuration updates for latest builds
- Switch to using create-node-config
- Switch sdn services to use etcd over SSL
  - This re-uses the client certificate deployed on each node
- Additional node registration changes
- Do not assume that metadata service is available in openshift_facts module
- Call systemctl daemon-reload after installing openshift-master, openshift-sdn-master, openshift-node, openshift-sdn-node
- Fix bug overriding openshift_hostname and openshift_public_hostname in byo playbooks
- Start moving generated configs to /etc/openshift
- Some custom module cleanup
- Add known issue with ansible-1.9 to README_OSE.md
- Update to genericize the kubernetes_register_node module
- Default to use kubectl for commands
- Allow for overriding kubectl_cmd
- In openshift_register_node role, override kubectl_cmd to openshift_kube
- Set default openshift_registry_url for enterprise when deployment_type is enterprise
- Fix openshift_register_node for client config change
- Ensure that master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master and node
- Allow non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update byo inventory for adding additional comments
- Updates for node cert/config sync to work with non-root user using sudo
- Move node config/certs to /etc/openshift/node
- Don't use path for mktemp. Addresses: https://github.com/openshift/openshift-ansible/issues/154
Create common playbooks
- create common/openshift-master/config.yml
- create common/openshift-node/config.yml
- update playbooks to use new common playbooks
- update launch playbooks to call update playbooks
- fix openshift_registry and openshift_node_ip usage
Set default deployment type to origin (an inventory sketch follows this list)
- openshift_repo updates for enabling origin deployments
- also separate repo and gpgkey file structure
- remove kubernetes repo since it isn't currently needed
- full deployment type support for bin/cluster
- honor OS_DEPLOYMENT_TYPE env variable
- add --deployment-type option, which will override OS_DEPLOYMENT_TYPE if set
- if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, defaults to
  origin installs
Additional changes:
- Add separate config action to bin/cluster that runs ansible config but does
not update packages
- Some more duplication reduction in cluster playbooks.
- Rename task files in playbooks dirs to have tasks in their name for clarity.
- update aws/gce scripts to use a directory for inventory (otherwise when
there are no hosts returned from dynamic inventory there is an error)
libvirt refactor and update
- add libvirt dynamic inventory
- updates to use dynamic inventory for libvirt
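
For the bring-your-own path, the deployment type described above is normally selected through an inventory variable; a minimal sketch, with origin shown as one of the supported values:

```
# Hypothetical byo inventory fragment selecting the deployment type.
[OSEv3:vars]
# 'origin' is the default; 'enterprise' switches registry URLs, repos, etc.
deployment_type=origin
```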
- added byo playbooks
- added byo (example) inventory
- added a README_OSE.md for getting started with Enterprise deployments
- Added an ansible.cfg as an example of configuration helpful for these
  playbooks/roles (a sketch follows this list)
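
A sketch of the kind of settings such an example ansible.cfg might carry; the options below are illustrative and not the committed file:

```
# Illustrative ansible.cfg; options shown are examples, not the shipped file.
[defaults]
# Skip interactive SSH host key prompts on freshly provisioned hosts.
host_key_checking = False
# Run more hosts in parallel than the Ansible default of 5 forks.
forks = 20
```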
- Add openshift_facts role and module
- Created new role openshift_facts that contains an openshift_facts module
- Refactor openshift_* roles to use openshift_facts instead of relying on
defaults
- Refactor playbooks to use openshift_facts
- Cleanup inventory group_vars
- Update defaults
- update openshift_master role firewall defaults
- remove etcd peer port, since we will not be supporting clustered embedded
etcd
- remove 8444 since console now runs on the api port by default
- add 8444 and 7001 to disabled services to ensure removal if updating
- Add new role os_env_extras_node that is a subset of the docker role
- previously, we were starting/enabling docker which was causing issues with some
installations
- Does not install or start docker, since the openshift-node role will
handle that for us
- Only adds root to the dockerroot group
- Update playbooks to use the os_env_extras_node role instead of the docker role
- os_firewall bug fixes
- ignore ip6tables for now, since we are not configuring any ipv6 rules
- if installing package do a daemon-reload before starting/enabling service
- Add aws support to bin/cluster
- Add list action to bin/cluster
- Add update action to bin/cluster
- cleanup some stray debug statements
- some variable renaming for clarity