* Move `virsh pool-refresh`
The `pool-refresh` command asks libvirt to rescan the contents of a storage pool.
It is used to make `libvirt` take into account volumes that were created outside of libvirt's control,
i.e. not with a `virsh` command.
`pool-refresh` is useless right after a `pool-create`, as the pool contents are scanned at creation time,
but it is mandatory after files have been created inside an existing pool.
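A minimal shell sketch of that workflow, assuming a pool named `default` backed by `/var/lib/libvirt/images` (both are illustrative, not taken from the playbooks):
```
# create a volume image behind libvirt's back
$ qemu-img create -f qcow2 /var/lib/libvirt/images/extra.qcow2 8G
$ virsh vol-list default        # the new file is not listed yet
$ virsh pool-refresh default    # ask libvirt to rescan the pool
$ virsh vol-list default        # now the volume shows up
```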

* Make the error message checks locale-proof
On a computer with a non-English locale set, the error messages look like this:
```
$ virsh net-info foo
erreur :impossible de récupérer le réseau « foo »
erreur :Réseau non trouvé : no network with matching name 'foo'
```
```
$ virsh pool-info foo
erreur :impossible de récupérer le pool « foo »
erreur :Pool de stockage introuvable : no storage pool with matching name 'foo'
```
The classical way to make those tests locale-proof is to force a given locale, like this:
```
$ LANG=POSIX virsh net-info foo
error: failed to get network 'foo'
error: Réseau non trouvé : no network with matching name 'foo'
```
```
$ LANG=POSIX virsh pool-info foo
error: failed to get pool 'foo'
error: Pool de stockage introuvable : no storage pool with matching name 'foo'
```
It looks like the "Network not found" ("Réseau non trouvé") and "Storage pool not found"
("Pool de stockage introuvable") parts of the message are generated by the `libvirtd` daemon
and are therefore not subject to the locale of the `virsh` client.
The clean fix would be to patch `libvirt` so that `virsh` sends its locale to the
`libvirtd` daemon.
But in the meantime, it is safer to have our playbook match only the part of the message
that is not subject to the daemon's locale.
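A hedged sketch of that approach: match only the trailing, daemon-independent part of the message (the network name `foo` follows the examples above):
```
$ LANG=POSIX virsh net-info foo 2>&1 \
    | grep -q "no network with matching name 'foo'" \
    && echo "matched the locale-independent part of the error"
```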

* Fix libvirt metadata used to store ansible tags
According to https://libvirt.org/formatdomain.html#elementsMetadata , the `metadata` tag can contain only one top-level element per namespace.
Because of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag.
As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group.
This is problematic for the `terminate.yml` playbook, which iterates over `groups['tag-env-{{ cluster-id }}']`.
The symptom was that `oo_hosts_to_terminate` was not defined.
In the end, as Ansible couldn't iterate over the value of `groups['oo_hosts_to_terminate']`, it iterated over the characters of that literal string:
```
TASK: [Destroy VMs] ***********************************************************
failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
msg: virtual machine g not found
failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
msg: virtual machine g not found
failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
msg: virtual machine r not found
failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
msg: virtual machine r not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
msg: virtual machine u not found
failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
msg: virtual machine u not found
failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
msg: virtual machine p not found
failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
msg: virtual machine p not found
failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
msg: virtual machine s not found
failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
msg: virtual machine s not found
failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
msg: virtual machine [ not found
failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
msg: virtual machine [ not found
failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
msg: virtual machine ' not found
failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
msg: virtual machine ' not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
etc…
```
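Since all the tags must live under a single top-level element per namespace, the fix is to group them. A hedged sketch using `virsh metadata`, where the domain name, namespace URI, key and tag values are all illustrative placeholders:
```
$ virsh metadata mycluster-node-1 https://github.com/ansible/ansible \
    --key ansible \
    --set "<tags><tag>deployment-type-origin</tag><tag>env-mycluster</tag></tags>"
```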

* Configuration updates for latest builds
- Switch to using create-node-config
- Switch sdn services to use etcd over SSL
- This re-uses the client certificate deployed on each node
- Additional node registration changes
- Do not assume that metadata service is available in openshift_facts module
- Call systemctl daemon-reload after installing openshift-master, openshift-sdn-master, openshift-node, openshift-sdn-node
- Fix bug overriding openshift_hostname and openshift_public_hostname in byo playbooks
- Start moving generated configs to /etc/openshift
- Some custom module cleanup
- Add known issue with ansible-1.9 to README_OSE.md
- Update to genericize the kubernetes_register_node module
- Default to use kubectl for commands
- Allow for overriding kubectl_cmd
- In openshift_register_node role, override kubectl_cmd to openshift_kube
- Set default openshift_registry_url for enterprise when deployment_type is enterprise
- Fix openshift_register_node for client config change
- Ensure that master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master and node
- Allow non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update byo inventory for adding additional comments
- Updates for node cert/config sync to work with non-root user using sudo
- Move node config/certs to /etc/openshift/node
- Don't use path for mktemp; addresses https://github.com/openshift/openshift-ansible/issues/154
Create common playbooks
- create common/openshift-master/config.yml
- create common/openshift-node/config.yml
- update playbooks to use new common playbooks
- update launch playbooks to call update playbooks
- fix openshift_registry and openshift_node_ip usage
Set default deployment type to origin
- openshift_repo updates for enabling origin deployments
- also separate repo and gpgkey file structure
- remove kubernetes repo since it isn't currently needed
- full deployment type support for bin/cluster
- honor the OS_DEPLOYMENT_TYPE env variable
- add a --deployment-type option, which overrides OS_DEPLOYMENT_TYPE if set
- if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to origin installs (see the precedence sketch after this list)
Additional changes:
- Add separate config action to bin/cluster that runs ansible config but does not update packages
- Some more duplication reduction in cluster playbooks.
- Rename task files in playbooks dirs to have tasks in their name for clarity.
- update aws/gce scripts to use a directory for inventory (otherwise, when there are no hosts returned from dynamic inventory, there is an error)
libvirt refactor and update
- add libvirt dynamic inventory
- updates to use dynamic inventory for libvirt
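A hedged sketch of the deployment-type precedence in shell; `cli_deployment_type` stands in for the parsed --deployment-type value and is not a variable from the actual code:
```
# --deployment-type wins, then OS_DEPLOYMENT_TYPE, then the origin default
deployment_type="${cli_deployment_type:-${OS_DEPLOYMENT_TYPE:-origin}}"
```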

* Launch openshift on AWS issues

* Make the security group an environment variable, defaulting to 'public'

* fixed bug in opssh where it wouldn't actually run pssh

* added the ability to run opssh and ohi on all hosts in an environment, as well as all hosts of the same host-type regardless of environment

* added ohi

* Add libvirt as a provider for openshift-ansible

* Adding openshift_ansible_inventory role to configure multi_ec2

* added SELinux booleans (sebools) for ansible tower

* refactor yum_repo role to handle multiple repos/files
- Rename the yum_repo role to yum_repos
- Update yum_repos to take a more complex data structure describing multiple repo files and multiple repos within those files (a sketch follows this list)
- Update the template to support multiple repos within each repo file
- Update the template to allow any key/value pair to be passed in, instead of a hard-coded list
- Add assertions to verify that the repo_files variable is properly defined
- Convert the legacy variables to the new repo_files variable
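A hedged sketch of what such a `repo_files` structure could look like; the key names and values below are illustrative assumptions, not taken from the role:
```
# example vars file: one repo file containing one repo
$ cat > yum_repos_example.yml <<'EOF'
repo_files:
  - id: example                 # would render /etc/yum.repos.d/example.repo
    repos:
      - id: example-repo
        name: Example Repo
        baseurl: https://repo.example.com/os/
        enabled: 1
        gpgcheck: 0
EOF
```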

* fixed bug where opssh would throw an exception if pssh returned a non-zero exit code

* added more options for yum repos

* Adding yum_repo role

* move zbxapi module to a new os_zabbix role
- cleans up repo root a bit

* fixed the opssh default output behavior to be consistent with pssh. Also fixed a bug in how directories are named for --outdir and --errdir.

* Node registration changes
- added byo playbooks
- added a byo (example) inventory
- added a README_OSE.md for getting started with Enterprise deployments
- added an ansible.cfg as an example of configuration helpful for playbooks/roles
- Add openshift_facts role and module
- Created new role openshift_facts that contains an openshift_facts module
- Refactor openshift_* roles to use openshift_facts instead of relying on defaults
- Refactor playbooks to use openshift_facts
- Cleanup inventory group_vars
- Update defaults
- update openshift_master role firewall defaults
- remove etcd peer port, since we will not be supporting clustered embedded etcd
- remove 8444, since the console now runs on the api port by default
- add 8444 and 7001 to disabled services to ensure their removal when updating
- Add new role os_env_extras_node that is a subset of the docker role
- previously, we were starting/enabling docker, which was causing issues with some installations
- Does not install or start docker, since the openshift-node role will handle that for us
- Only adds root to the dockerroot group
- Update playbooks to use the os_env_extras_node role instead of the docker role
- os_firewall bug fixes
- ignore ip6tables for now, since we are not configuring any ipv6 rules
- if installing a package, do a daemon-reload before starting/enabling the service (see the sketch after this list)
- Add aws support to bin/cluster
- Add list action to bin/cluster
- Add update action to bin/cluster
- cleanup some stray debug statements
- some variable renaming for clarity
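A minimal sketch of the daemon-reload ordering mentioned above, using the iptables service as a stand-in (the actual unit names come from the roles, not from this log):
```
# systemd only sees unit files installed by a package after a daemon-reload
$ sudo yum install -y iptables-services
$ sudo systemctl daemon-reload
$ sudo systemctl enable iptables
$ sudo systemctl start iptables
```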

* Adding zabbix ansible module with a generic playbook example to fetch problem triggers. Also added oo_flatten to filters for arrays of arrays.

* Fixing bash completion for ossh/oscp. Adding for opssh.