| Commit message | Author | Age | Files | Lines |
| |
- Rename yum_repo role to yum_repos
- Update yum_repos to take a more complex data structure that describes
  multiple repo files and multiple repos within those files (see the sketch
  below)
- Update the template to support multiple repos within a single repo file
- Update the template to allow any key/value pairs to be passed in instead of
  a hard-coded list
- Add assertions to verify that the repo_files variable is properly defined
- Convert the legacy variables to the new repo_files variable
|
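A minimal sketch of what such a repo_files data structure could look like
(the variable layout and key names here are illustrative, not the role's
actual schema):

    # One entry per .repo file; each entry holds a list of repo sections,
    # and arbitrary key/value pairs pass straight through to the template
    repo_files:
      - id: example                # rendered as /etc/yum.repos.d/example.repo
        repos:
          - id: example-base
            name: Example Base
            baseurl: https://mirror.example.com/base/
            enabled: 1
            gpgcheck: 1
          - id: example-extras
            name: Example Extras
            baseurl: https://mirror.example.com/extras/
            enabled: 0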
| |
|
| |
| |
- cleans up repo root a bit
|
| |
- added byo playbooks
- added byo (example) inventory
- added a README_OSE.md for getting started with Enterprise deployments
- added an ansible.cfg as an example of configuration helpful for
  playbooks/roles
|
| |
- Add openshift_facts role and module (see the usage sketch below)
- Created new role openshift_facts that contains an openshift_facts module
- Refactor openshift_* roles to use openshift_facts instead of relying on
  defaults
- Refactor playbooks to use openshift_facts
- Cleanup inventory group_vars
- Update defaults
  - update openshift_master role firewall defaults
  - remove etcd peer port, since we will not be supporting clustered embedded
    etcd
  - remove 8444, since the console now runs on the api port by default
  - add 8444 and 7001 to disabled services to ensure removal when updating
- Add new role os_env_extras_node that is a subset of the docker role
  - previously, we were starting/enabling docker, which was causing issues
    with some installations
  - Does not install or start docker, since the openshift-node role will
    handle that for us
  - Only adds root to the dockerroot group
- Update playbooks to use the os_env_extras_node role instead of the docker
  role
- os_firewall bug fixes
  - ignore ip6tables for now, since we are not configuring any ipv6 rules
  - if installing the package, do a daemon-reload before starting/enabling
    the service
- Add aws support to bin/cluster
- Add list action to bin/cluster
- Add update action to bin/cluster
- clean up some stray debug statements
- some variable renaming for clarity
|
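A hypothetical invocation of the openshift_facts module from a role's tasks
(the parameter names and fact keys are illustrative, not the module's
confirmed interface):

    # Set role-scoped facts once, then reference them instead of defaults
    - name: Gather and set openshift common facts
      openshift_facts:
        role: common
        local_facts:
          hostname: "{{ openshift_hostname | default('') }}"
          public_hostname: "{{ openshift_public_hostname | default('') }}"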
| |
|
| |
| |
on inventory/playbook variables for openshift_hostname
|
| |
- Remove the default value for openshift_hostname and make it required
- Remove workarounds that are no longer needed
- Remove the resources parameter from the openshift_register_node module
- pre-create node certificates for each node before registering the node
- distribute created node certificates to each node (see the sketch below)
- Move node registration logic to a new openshift_register_nodes role
  - This is because we now have to run the steps on a master, as opposed to
    on the nodes as we were doing previously.
- Rename the openshift_register_node module to kubernetes_register_node: one
  more step toward genericizing it enough for upstreaming; however, there are
  still plenty of openshift-specific commands that need to be genericized.
|
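A sketch of the certificate distribution pattern described above (the file
paths are illustrative): certificates generated on the master are pulled back
to the Ansible control host with fetch, then pushed to each node with copy.

    - name: Fetch the generated node certificate from the master
      fetch:
        src: "/etc/openshift/generated-certs/{{ inventory_hostname }}/node.crt"
        dest: "/tmp/node-certs/{{ inventory_hostname }}.crt"
        flat: yes
      delegate_to: "{{ groups['masters'][0] }}"

    - name: Distribute the node certificate to the node
      copy:
        src: "/tmp/node-certs/{{ inventory_hostname }}.crt"
        dest: /etc/openshift/node.crt
        mode: "0600"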
| |
|
| |
| |
- Does not install or start docker, since the openshift-node role will handle
  that for us
- Only adds root to the dockerroot group and configures the enter-container
  script
|
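A minimal sketch of those two tasks (the script source and destination paths
are illustrative):

    - name: Add root to the dockerroot group
      user:
        name: root
        groups: dockerroot
        append: yes

    - name: Install the enter-container helper script
      copy:
        src: enter-container
        dest: /usr/local/bin/enter-container
        mode: "0755"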
| |
- Add verify_chain action to os_firewall_manage_iptables module
- Update os_firewall module to use os_firewall_manage_iptables for creating
the DOCKER chain.
|
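A hypothetical use of the verify_chain action described above (the argument
names are illustrative, not the module's confirmed interface):

    # Make sure the DOCKER chain exists so docker's own rules have a home
    - name: Verify the DOCKER chain is present
      os_firewall_manage_iptables:
        name: docker
        action: verify_chain
        chain: DOCKER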
| |
| |
os_update_latest after repo config
|
| |
* Added playbooks/gce/openshift-cluster
* Added bin/cluster (will replace cluster.sh)
|
| |
| |
* Added playbooks/gce/openshift-cluster
* Added bin/cluster (will replace cluster.sh)
|
|\
| |
| | |
Rename repos role to openshift_repos
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- Rename repos role to openshift_repos
- Make openshift_repos a dependency of openshift_common (see the sketch below)
- Add README and metadata for openshift_repos
- Playbook updates for the role rename
- Verify libselinux-python is installed, otherwise some of the built-in
  modules we use fail
|
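A sketch of the dependency and the libselinux-python check described above
(the file contents are illustrative):

    # roles/openshift_common/meta/main.yml
    dependencies:
      - role: openshift_repos

    # roles/openshift_repos/tasks/main.yml
    - name: Ensure libselinux-python is installed
      yum:
        name: libselinux-python
        state: present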
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- Set --hostname flag in node config in openshift_node role
- Support some additional node attributes in openshift_node role
  - podCIDR
  - labels
  - annotations
- Support both output types for openshift ex config view in
  openshift_register_node module
- Support multiple api versions in openshift_register_node module
- Support additional attributes in openshift_register_node module
  - annotations
  - labels
  - pod_cidr
  - external_ips (v1beta3, will be available after next kube rebase)
  - internal_ips (v1beta3, will be available after next kube rebase)
  - hostnames (v1beta3, will be available after next kube rebase)
  - external_id (v1beta3, will be available after next kube rebase)
|
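Hypothetical inventory variables for the node attributes listed above (the
variable names are illustrative, not the role's actual interface):

    openshift_node_labels:
      region: infra
      zone: default
    openshift_node_annotations:
      example.com/owner: ops
    openshift_node_pod_cidr: 10.1.0.0/24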
|/
| |
- always set the hostname if it does not match openshift_hostname
- Use the local IP instead of the public IP as the hostname for the workaround
|
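A minimal sketch of the hostname enforcement described above, using the stock
hostname module:

    - name: Set hostname to openshift_hostname
      hostname:
        name: "{{ openshift_hostname }}"
      when: ansible_hostname != openshift_hostname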
|\
| |
| | |
Add workaround for openshift-master startup timeout
|
| | |
|
|/
| |
following latest kubernetes rebase
|
| |
- add variable openshift_node_resources to openshift_node role
- set the default value for openshift_node_resources to
  { capacity: { cpu: , memory: } }
- If cpu is not set, then the default value will be chosen by the
  openshift_register_node module (the number of logical cpus)
- If memory is not set, then the default value will be chosen by the
  openshift_register_node module (75% of MemTotal according to /proc/meminfo)
|
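A hypothetical override of that default, pinning both values instead of
letting the module compute them (the units shown are illustrative):

    openshift_node_resources:
      capacity:
        cpu: 2
        memory: 8Gi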
|\
| |
| | |
Random cleanup
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Conditionally set --nodes on master
|
| | |
| | |
| | |
| | |
| | | |
- only add the --nodes option to /etc/sysconfig/openshift-master when
  openshift_node_ips is not an empty list (see the template sketch below)
|
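A sketch of the corresponding Jinja2 template conditional (the OPTIONS
variable name in the sysconfig file is an assumption):

    # openshift-master sysconfig template: only emit --nodes when the list
    # is non-empty
    {% if openshift_node_ips %}
    OPTIONS="--nodes={{ openshift_node_ips | join(',') }}"
    {% endif %}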
|\ \ \
| | | |
| | | | |
Fix permissions on .kube folder
|
| |/ /
| | |
| | |
| | | |
- missing leading 0 on the mode (without it, Ansible treats a value like 700
  as decimal rather than octal 0700)
|
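A sketch of the corrected task; quoting the mode with the leading 0 keeps it
octal (the path matches the admin kubeconfig location used elsewhere in this
log):

    - name: Fix permissions on the .kube folder
      file:
        path: /root/.kube
        state: directory
        mode: "0700"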
|/ /
| |
| |
| |
| |
| |
| | |
- Fix variable references to use os_firewall_{allow,deny} instead of
  {allow,deny}
- Fix ordering of service stop/start to ensure firewall rules are properly
  initiated after service startup
- Add a test for the package being installed before attempting to disable or
  mask services
|
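A sketch of the installed-package guard described in the last item (the
package name is illustrative):

    - name: Check whether firewalld is installed
      command: rpm -q firewalld
      register: firewalld_installed
      failed_when: false
      changed_when: false

    - name: Stop and disable firewalld only if it is installed
      service:
        name: firewalld
        state: stopped
        enabled: no
      when: firewalld_installed.rc == 0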
| |
| |
| |
| |
| | |
- Fix missed references to old firewall scripts
- Fix variable name references that didn't get updated
|
|\ \
| | |
| | | |
Fix issues with openshift_sdn_node
|
| |/
| |
| |
| |
| |
| |
| | |
- Use openshift_hostname (set by openshift_common) instead of calculating it
  again from the openshift_common variables
- Fix the task setting facts for openshift_sdn_node that was using references
  to the master instead
|
|\ \
| | |
| | | |
openshift_register_node module fixes
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- Set the resources and cpu/memory parameters as mutually exclusive
- Add parameters for setting the client_user, client_context and
  client_cluster
  - This allows the module to ensure it is using the proper context for
    operation
- Node resources weren't properly being registered
  - wrapped the node definition object in a config object to rectify this
- Reduce the memory default to 75% of total memory instead of 80%
- Don't bother running osc create node if the node is already in the
  osc get nodes output
|
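A hypothetical invocation showing the parameters described above (the
argument names are illustrative); since resources and cpu/memory are mutually
exclusive, only cpu/memory are set here:

    - name: Register the node with the master
      openshift_register_node:
        name: "{{ openshift_hostname }}"
        cpu: "{{ openshift_node_cpu | default(omit) }}"
        memory: "{{ openshift_node_memory | default(omit) }}"
        client_user: "system:admin"
        client_context: default
        client_cluster: default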
|\ \
| | |
| | | |
Do not set KUBECONFIG for root user
|
| |/
| |
| |
| |
| |
| |
| | |
- instead of setting KUBECONFIG, copy the admin kubeconfig to
/root/.kube/.kubeconfig in the openshift_master and openshift_node roles
- pause for 30 seconds if the openshift-master service has changed state,
since the file we are copying is generated by the master
|
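A sketch of the copy-and-pause pattern described above (the kubeconfig source
path is illustrative):

    - name: Start and enable openshift-master
      service:
        name: openshift-master
        state: started
        enabled: yes
      register: master_start

    - name: Give the master time to generate the admin kubeconfig
      pause:
        seconds: 30
      when: master_start.changed

    - name: Copy the admin kubeconfig for root
      command: >
        cp /var/lib/openshift/openshift.local.certificates/admin/.kubeconfig
        /root/.kube/.kubeconfig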
|/
| |
- openshift_node_ips now defaults to []
- Previously, an empty --nodes in /etc/sysconfig/master would result in the
  master creating a node for the localhost. The latest Origin and OSE builds
  now only create the implicit localhost node if run as openshift, not
  openshift-master. We can now safely default to setting no nodes in
  /etc/sysconfig/master and have nodes register themselves with the master
  when they come up, via the 'Register node (if not already registered)' task
  in roles/openshift_node/tasks/main.yml
- This had an associated change for the byo scripts that had not been merged
  into master yet, but this PR changes the behavior of the openshift_master
  role so that it does not fail if openshift_node_ips is not set. This also
  prevents the openshift-master service from being restarted when a node is
  added.
|
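The new default, as it would appear in the role's defaults file (the exact
file path is an assumption):

    # roles/openshift_master/defaults/main.yml
    openshift_node_ips: []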
| |
sets environment configs for root user
|
| |
- Add os_firewall role (see the usage sketch below)
- Remove firewall settings from base_os; add a wait task to os_firewall
- Added an iptables firewall module for maintaining the following (in a
  mostly naive manner):
  - ensure the OPENSHIFT_ALLOW chain is defined
  - ensure that there is a jump rule in the INPUT chain for OPENSHIFT_ALLOW
  - adds or removes entries from the OPENSHIFT_ALLOW chain
  - issues '/usr/libexec/iptables/iptables.init save' when rules are changed
- Limitations of the iptables firewall module:
  - only allows setting of ports/protocols to open
  - no testing on ipv6 support
- made os_firewall a dependency of openshift_common
- Hardcoded openshift_common to use iptables (through the vars directory)
  until upstream support is in place for firewalld
|
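Hypothetical role variables for opening ports through the module described
above (the variable and key names are illustrative):

    os_firewall_use_firewalld: false
    os_firewall_allow:
      - service: openshift api
        port: 8443/tcp
      - service: etcd client
        port: 4001/tcp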
| |
|
| |
|
|\
| |
| | |
Prefer YAML-style data structures over JSON
|
| |
| |
| |
| | |
- Switch JSON-style data structures to YAML for debuggability
|
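The same variable in both styles; the YAML form reads and diffs more cleanly:

    # JSON style:
    os_firewall_allow: [{"service": "ssh", "port": "22/tcp"}]

    # Equivalent YAML style:
    os_firewall_allow:
      - service: ssh
        port: 22/tcp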