* Add flannel support
* Document the Flannel SDN use case with a separate data network.
* Add a post-install step for the flannel SDN
* Configure iptables rules as described for OCP 3.4 refarch
https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_red_hat_openshift_container_platform_3.4_on_red_hat_openstack_platform_10/emphasis_manual_deployment_emphasis#run_ansible_installer
* Configure flannel interface options
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Use os_firewall from Galaxy for the required flannel rules
For the flannel SDN:
* Add openshift-ansible as a Galaxy dependency.
* Use openshift-ansible/roles/os_firewall to apply the DNS rules
for the flannel SDN.
* Apply the remaining advanced rules with direct
iptables commands, because os_firewall does not support advanced rules.
* Persist only the static iptables rules, without the dynamic KUBE rules.
Those are added at runtime and need to be restored after a reboot or an
iptables restart.
* Configure and enable the masked iptables service on the app nodes
so that the in-memory rules can be persisted.
Disable firewalld, which is the expected default behavior of the
os_firewall module.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Allow access from nodes to the masters' port 2379 when using flannel
Flannel needs to gather information from etcd to configure and assign
subnets to the nodes, so add an ingress rule for port 2379/tcp from the
nodes to the master security group (see the sketch below).
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
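For illustration, the etcd rule could be expressed with Ansible's os_security_group_rule module roughly as follows; the security group names are placeholders, not the ones the playbooks actually create:

```yaml
# Sketch only: let flannel on the nodes read etcd on the masters.
# Both security group names below are hypothetical.
- name: Allow nodes to reach etcd (2379/tcp) on the masters for flannel
  os_security_group_rule:
    security_group: openshift-master-secgrp
    protocol: tcp
    port_range_min: 2379
    port_range_max: 2379
    remote_group: openshift-node-secgrp
    state: present
```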
* Added a task to stop docker before templating its config
* Rearranged the storage roles in the RHV install
Merge server with nofloating server heat templates
* Support separate data network for Flannel SDN
Document the use case for a separate flannel data network.
Allow the Nova servers for the OpenShift cluster to be provisioned with
that isolated data network created and connected to the master, compute
and infra nodes. Do not configure DNS nameservers or a router for that
network (see the sketch below).
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Fix the flannel use cases with a provider network
A provider network cannot be used with the flannel SDN,
because the latter requires a separate isolated network,
while a provider network is an externally managed
single network.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Drop the unused data_net_name variable
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
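A minimal Heat sketch of such an isolated data network; the resource names and CIDR are illustrative, and note the empty dns_nameservers and the absence of any router interface:

```yaml
# Sketch only: an isolated flannel data network with no DNS and no router.
resources:
  flannel_data_net:
    type: OS::Neutron::Net
    properties:
      name: openshift-data

  flannel_data_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: flannel_data_net }
      cidr: 192.168.20.0/24
      dns_nameservers: []
      # no OS::Neutron::RouterInterface is attached to this subnet
```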
(#747)
* Allow specifying server policies during OpenStack provisioning
* Document the OpenStack server group policies
* Add a doc link detailing the allowed policies
* Change the default policy to anti-affinity (see the sketch below)
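A rough sketch of how an anti-affinity policy can be expressed in a Heat template and attached to a server through scheduler hints; the resource names, image and flavor are illustrative only:

```yaml
# Sketch only: spread the app nodes across hypervisors.
resources:
  app_server_group:
    type: OS::Nova::ServerGroup
    properties:
      name: openshift-app-nodes
      policies: [ 'anti-affinity' ]   # 'affinity' is the other allowed policy

  app_node:
    type: OS::Nova::Server
    properties:
      name: app-node-0
      image: centos-7-base        # illustrative image and flavor
      flavor: m1.medium
      scheduler_hints:
        group: { get_resource: app_server_group }
```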
Following up on the initial port of the OpenStack roles from
casl-ansible to openshift-ansible-contrib. One of the points that was
brought up in the review was to drop the references to CASL in the
code, since the code now has a wider reach.
* Add the required variables to create a dedicated LV
https://bugzilla.redhat.com/show_bug.cgi?id=1490910#c11
* Fix lint issues and add the distribution to the checks
* Adding 'openstack-stack-delete' role to allow for easy de-provisioning
* Updated per etsauer's comments
When using a bastion and a single master, add the bastion node's public IP as the master's public IP for the DNS record.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* scale-up: playbook for upscaling app nodes
* scale-up: removed debug
* scale-up: made suggested changes
* scale-up: indentation fix
* upscaling: process split into two playbooks that are executed by a bash script
- upscaling_run.sh: bash script, usage displayed using -h parameter
- upscaling_pre-tasks: check that new value is higher, change inventory variable
- upscaling_scale-up: rerun provisioning and installation, verify change
* upscaling_run: fixed openshift-ansible-contrib directory name
* upscaling_run: inventory can be entered as relative path
* upscaling_scale-up: fixed formatting
* upscaling: minor changes
* upscaling: moved to .../provisioning/openstack directory, README updated, minor changes made
* README: minor changes
* README: formatting
* upscaling: minor fix
* upscaling: fix
* upscaling: added customisations, fixes
- openshift-ansible-contrib and openshift-ansible paths are customisable
- fixed implicit incrementation by 1
* upscaling: fixes
* upscaling: fixes
* upscaling: another fix
* upscaling: another fix
* upscaling: fix
* upscaling: back to a single playbook, README updated
* minor fix
* pre_tasks: added labels for autoscaling
* scale-up: fixes
* scale-up: fixed host variables, post-verification is only based on labels
* scale-up: added openshift-ansible path customisation
- the path has to be absolute and must not end with '/'
* scale-up: fix
* scale-up: debug removed
* README: added docs on openshift_ansible_dir, note about bastion
* static_inventory: newly added nodes are added to new_nodes group
- note: re-running provisioning fails when trying to install docker
* removing new line
* scale-up: running byo/config.yml or scaleup.yml based on the situation
- (whether there is an existing deployment or not)
* openstack.yml: indentation fix
* added refresh inventory
* upscaling: new_nodes only contains the new nodes; it is not used during the first deployment
* static_inventory: make sure that new nodes end up only in their new_nodes group
* bug fixes
* another fix
* fixed condition
* scale-up, static_inventory role: all app node data gathered before provisioning
* upscaling: bug fixes
* upscaling: more fixes
* fixes
* upscaling: fix
* upscaling: fix
* upscaling: another logic fix
* bug fix for non-scaling deployments
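As a rough illustration of the resulting workflow (the variable name below is an assumption; the real inventory variable may differ): raise the app-node count in the inventory, then re-run the scale-up playbook against the same inventory.

```yaml
# Sketch only, e.g. in the inventory group vars; the variable name is assumed.
openstack_num_nodes: 5   # previously 3; the playbook verifies the new value is higher
```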
* Make `openstack_private_ssh_key` optional
Before this, the deployer could not reasonably rely on their own SSH
configuration or, for example, on the `--private-key` option of
ansible-playbook, because we always wrote the `ansible_private_key_file`
value into the static inventory.
This change makes the `openstack_private_ssh_key` variable truly
optional: if it's not set, the static inventory will not configure the
SSH key and will just rely on the existing configuration.
* Update the openstack e2e CI
It no longer sets the SSH keys explicitly -- which should just work with
the previous commit.
* Put back the `openstack_ssh_public_key` in CI
This is the option we actually need to keep. This should fix the CI
failures.
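A minimal sketch of the idea (not the actual template code): only write the key path into the generated static inventory when the variable is really set.

```yaml
# Sketch only: record ansible_private_key_file only when a key was provided.
- name: Optionally record the private SSH key in the static inventory
  lineinfile:
    path: inventory/hosts            # illustrative inventory path
    line: "ansible_private_key_file={{ openstack_private_ssh_key }}"
  when: openstack_private_ssh_key is defined and openstack_private_ssh_key != ''
```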
Clear the previous inventory during provisioning
If there was a left-over inventory from a previous run that had nodes
which were subsequently removed, these would still show up in Ansible's
in-memory inventory and Ansible would fail trying to connect to them.
This is because Ansible automatically loads the `inventory/hosts` file
if it exists, and even if we overwrite it later, every node and group
still remains in memory.
By removing the inventory file and calling the `refresh_inventory`
meta task, we make sure that any left-over values are removed.
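A minimal sketch of that clean-up, assuming the inventory lives at inventory/hosts as described above:

```yaml
# Sketch only: drop the stale static inventory and rebuild Ansible's view of it.
- name: Remove a left-over static inventory from a previous run
  file:
    path: inventory/hosts
    state: absent

- name: Forget the previously loaded hosts and groups
  meta: refresh_inventory
```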
Deployments without the cinder registry would fail, because the
`cinder_registry_volume` variable is still set even when we don't
actually create the volume.
* Add the ability to support custom API and console ports (see the sketch below)
* Add a previously missed ingress rule
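A hedged sketch of the kind of ingress rule this needs; openshift_master_api_port is the usual openshift-ansible variable, while the security group name is a placeholder:

```yaml
# Sketch only: open the (possibly customised) API port on the master security group.
- name: Allow access to the OpenShift API on its configured port
  os_security_group_rule:
    security_group: openshift-master-secgrp       # placeholder name
    protocol: tcp
    port_range_min: "{{ openshift_master_api_port | default(8443) }}"
    port_range_max: "{{ openshift_master_api_port | default(8443) }}"
    remote_ip_prefix: 0.0.0.0/0
    state: present
```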
This ensures that the ports that the servers were using before this
commit will be parent ports of Neutron trunk ports. Thanks to this,
there can be nested Neutron ports inside the OS::Nova::Server resources
created either in the Heat stack or dynamically inside the instances.
Signed-off-by: Antoni Segura Puimedon <antonisp@celebdor.com>
* Point openshift_master_cluster_public_hostname at the master or the load balancer, if specified (see the sketch below)
* cleanup
* remove extraneous brackets
* corrections
* added doc section
* add private records
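A hedged inventory sketch of the intent; openshift_master_cluster_public_hostname is the standard openshift-ansible variable, and the DNS names are placeholders:

```yaml
# Sketch only: with a load balancer, point the public hostname at it...
openshift_master_cluster_public_hostname: lb.openshift.example.com
# ...otherwise fall back to the single master's public name:
# openshift_master_cluster_public_hostname: master-0.openshift.example.com
```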
* Allow using a provider network
This adds a new option, `openstack_provider_network_name`, which takes
the name of an existing network and puts the servers on it. It also
prevents creating floating IP addresses, as the provider network's IPs
should already be accessible without any additional routing.
Fixes #622
* Requested changes
Don't fail on external/private networks and use role defaults for the
provider network.
* Add missing endif
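For example, in the inventory group vars (the network name itself is a placeholder):

```yaml
# Sketch only: reuse an existing provider network instead of creating
# private networks, routers and floating IPs.
openstack_provider_network_name: "datacentre-provider-net"
```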
* Document global DNS security options
Related changes:
* Do not create a view if it is externally managed.
* Allow specifying the recursion settings for the public/private
views defined by the dns-view role.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Document public_dns_nameservers better
Also use it as the private view's forwarder.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* Document how to use fully external DNS servers without provisioning
the dns servers group with Heat.
* Document how to use a mixed-servers setup for dynamic record
updates matching the public or private views.
* Allow custom nsupdate key names for OSP10 DNS service compatibility.
The osp-dns service configures named with the fixed key_name
'update-key'. Add an optional key_name to the public section of
external_nsupdate_keys to allow custom key names (see the sketch below).
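A hedged example of what such an entry might look like; the server address, secret and algorithm are placeholders, and the exact structure may differ from the role's defaults:

```yaml
# Sketch only: dynamic public-view updates go to an external (e.g. OSP10)
# DNS server that expects a non-default key name.
external_nsupdate_keys:
  public:
    key_name: update-key            # the name the external named expects
    key_algorithm: hmac-md5
    key_secret: EXAMPLEBASE64SECRET==
    server: 192.0.2.10
```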
Additionally, add an lb group containing the lb nodes to the static
inventory template, and include the lb group in the OSEv3 group so that
the cluster group vars apply to it.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* README, all.yml, stack_params.yaml, openstack-stack: added docker volume size customisation (see the sketch below)
- app_volume_size changed to node_volume_size (it is 'node' everywhere else)
* all.yml, stack_params.yaml, openstack-stack: added customisation for the lb, etcd and dns nodes
* README: updated
* README: updated the info about ephemeral volumes
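A hedged group_vars sketch; node_volume_size comes from the commit above, while the other variable names are assumed analogues and may differ:

```yaml
# Sketch only: per-role Cinder volume sizes in GB.
node_volume_size: 15      # renamed from app_volume_size
master_volume_size: 15    # the names below are assumed analogues
lb_volume_size: 5
etcd_volume_size: 2
dns_volume_size: 1
```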
* README, all.yml, stack_params.yml, heat_stack.yaml.j2: hostname customisation added
* hostnames customisation: default set in stack_params
* heat_stack: bug fix
* fixed commented defaults in group_vars/all.yml
When using a bastion and a single master, use the lb-secgrp to allow
access to the UI port from the bastion node's ingress CIDR.
For HA (masters > 1), the UI should still be accessed via
the LB node's ingress CIDR, bypassing the bastion.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* all.yml: set up new variables for specifying images for roles
* stack_params.yaml: add image name variables for different roles
* more roles added
* heat_stack.yaml.j2: openstack_image replaced with the per-role image names
* README: updated the documentation for specifying image names (see the sketch below)
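A hedged sketch of the kind of variables this introduces; only openstack_image appears in the templates referenced above, and the per-role names are assumed, not necessarily the real ones:

```yaml
# Sketch only: a default image plus assumed per-role overrides.
openstack_image: centos-7-base
openstack_master_image: centos-7-base
openstack_node_image: centos-7-base
openstack_lb_image: centos-7-base
openstack_etcd_image: centos-7-base
openstack_dns_image: centos-7-base
```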
Add openstack_private_network_name to select the desired private
network.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
For testing cases it's sometimes useful to not create Cinder volumes for
the VMs. It can also sometimes be a little faster and more robust (but
unfit for production).
This adds an option called `ephemeral_volumes` that will use the VM's
storage instead of creating volumes when set to true.
* At the provisioning stage, allow users to auto-generate an SSH config
when using a static inventory.
* Run the provision and post-provision playbooks as separate steps when
using a bastion. This re-applies the SSH config, which Ansible can't
do on the fly.
* Support a pre-installed bastion node, colocated with the 1st infra
node.
* With a bastion enabled, reduce the floating IP footprint to the infra
and dns nodes only, effectively isolating the cluster in a private
network (see the sketch below).
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
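A rough sketch of the kind of SSH configuration that gets generated so Ansible can reach the now-private nodes through the bastion; the hostnames, user and file path are placeholders:

```yaml
# Sketch only: append a bastion proxy entry to a generated SSH config.
- name: Generate SSH config for reaching private nodes via the bastion
  blockinfile:
    path: "{{ playbook_dir }}/ssh.config.example"   # placeholder path
    create: yes
    block: |
      Host bastion
        HostName bastion.openshift.example.com
        User openshift

      Host *.openshift.example.com !bastion.openshift.example.com
        User openshift
        ProxyCommand ssh -W %h:%p bastion
```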
* Autogenerate SSH config for static inventory and bastion.
* When using a bastion, use FQDNs for the inventory's ansible_host and
the SSH config's HostName. This simplifies accessing nodes by name
instead of by private IP.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
This fixes a regression caused by the move to the static inventory.
The nodes in `oc get nodes` should be (and had been) identified by
their hostnames (e.g. master-0.openshift.example.com), but are
now using their internal IP addresses instead.
* Autogenerate inventory/hosts when 'inventory: static' (the default),
with the shade-inventory tool (see the sketch below).
* Drop the no-longer-used openstack.py with its associated GPL notes
and the example static inventory, and omit the manual updates of the
inventory DNS names from the deployment guide.
* Switch the openstack.py-formatted inventory hostvars
to the shade-inventory format (omit openstack.* from hostvars).
* Populate node labels from inventory vars instead of the heat
templates combined with inventory vars.
* Add an app (k8s minions) nodes group for the primary node labels.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
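A minimal sketch of generating host data with the shade-inventory tool; what exactly gets templated into inventory/hosts is hedged here, and the output path is illustrative:

```yaml
# Sketch only: dump the cloud's hosts as JSON and keep it as the source
# for rendering the static inventory/hosts file.
- name: Gather host data with shade-inventory
  command: shade-inventory --list
  register: shade_inventory_out
  changed_when: false

- name: Save the gathered host data
  copy:
    content: "{{ shade_inventory_out.stdout }}"
    dest: inventory/hosts.json        # illustrative path
```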
* openshift-prep: the bash-completion and vim-enhanced packages are now
optional, behind the install_debug_packages switch (see the sketch below)
* openshift-prep: removed a stray new line
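A minimal sketch of the guarded task, assuming the switch defaults to false:

```yaml
# Sketch only: install the convenience packages only when requested.
- name: Install optional debug packages
  package:
    name:
      - bash-completion
      - vim-enhanced
    state: present
  when: install_debug_packages | default(false) | bool
```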
* Add the static-inventory role that configures the inventory/hosts
file at the given path, or creates it for you.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* subscription manager: added 10 retries with a 1 second delay (see the sketch below)
* subscription manager: added until conditions
* subscription manager: fixed a typo
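A hedged sketch of such a retried registration task; the credential variable names are placeholders:

```yaml
# Sketch only: retry the registration, since it can fail transiently.
- name: Register the node with subscription-manager
  redhat_subscription:
    state: present
    username: "{{ rhsm_username }}"   # placeholder credential variables
    password: "{{ rhsm_password }}"
  register: rhsm_result
  until: rhsm_result is succeeded
  retries: 10
  delay: 1
```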
* Set up NetworkManager automatically
This removes the extra step of running the
`openshift-ansible/playbooks/byo/openshift-node/network_manager.yml`
before installing openshift. In addition, the playbook relies on a
host group that the provisioning doesn't provide (oo_all_hosts).
Instead, we set up NetworkManager on CentOS nodes automatically. And
we restart it on RHEL (which is necessary for the nodes to pick up the
new DNS we configured the subnet with).
This makes the provisioning easier and more resilient.
* Apply the node-network-manager role to every node
It makes the code simpler and more consistent across distros.
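A rough sketch of what such a role can boil down to; the real node-network-manager tasks may differ in detail:

```yaml
# Sketch only: make sure NetworkManager is installed (CentOS) and restart it
# so the nodes pick up the DNS settings pushed to the subnet.
- name: Install NetworkManager on CentOS
  package:
    name: NetworkManager
    state: present
  when: ansible_distribution == "CentOS"

- name: Enable and restart NetworkManager to apply the new DNS settings
  service:
    name: NetworkManager
    state: restarted
    enabled: yes
```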
These two Jinja filters were added in Jinja 2.8, which is notably not
packaged for CentOS and RHEL. This removes them in favour of the `==`
and `>` operators, which are available in Jinja 2.7.
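For illustration, the operator-based form that Jinja 2.7 accepts looks like this; the variable names are made up for the example:

```yaml
# Sketch only: plain comparison operators instead of newer Jinja tests.
- name: Example of conditions using only Jinja 2.7 operators
  debug:
    msg: "multi-node cluster using the default flavor"
  when:
    - openstack_num_nodes | int > 1               # '>' instead of a newer test
    - openstack_default_flavor == "m1.medium"     # '==' instead of a newer test
```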
* Add defaults values for some openstack vars
Ansible shows errors when the `rhsm_register` and
`openstack_flat_secgrp` values are not present in the inventory even
though they have sensible default values.
This makes them both default to false when they're not specified.
* Comment out the flat security group option in the inventory
It's no longer required to be there, so let's comment it out.
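A minimal sketch of the defaulting, e.g. in the role defaults or wherever the values are consumed:

```yaml
# Sketch only: both options simply fall back to false when unset.
rhsm_register: false
openstack_flat_secgrp: false
```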
Manage packages to install/update for openstack provider
Allow the required-packages and 'yum update all' steps to be optionally
disabled.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
* First, provision the Heat stack with the given public resolvers.
* After the DNS node is configured as an authoritative server,
switch the Heat stack's Neutron subnet to that resolver
(private_dns_server), so that it becomes the first entry pushed
into the hosts' /etc/resolv.conf. It will then serve the cluster
domain requests for the OpenShift nodes and workloads (see the
sketch below).
* Drop the post-provision /etc/resolv.conf nameserver hacks, as they
are no longer needed.
* Fix the dns floating IPs output and add the private IPs output as well.
* Update the docs, clarify the localhost vs servers requirements, and add
the required NetworkManager setup step.
* Use post-provision task names instead of comments.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
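A hedged sketch of the subnet switch with the os_subnet module; the network and subnet names and the CIDR are placeholders:

```yaml
# Sketch only: make the authoritative DNS node the first nameserver pushed
# into every host's /etc/resolv.conf via the Neutron subnet.
- name: Point the cluster subnet at the private DNS server
  os_subnet:
    name: openshift-subnet          # placeholder names
    network_name: openshift-net
    cidr: 192.168.99.0/24
    dns_nameservers:
      - "{{ private_dns_server }}"
    state: present
```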
Modify sec groups for provisioned openstack servers
Drop the ingress DNS rules from the common secgrp.
Add an ingress ICMP rule, restricted to the SSH ingress CIDR,
to the common secgrp. This allows pinging the servers from the
control node (the Ansible admin node); see the sketch below.
Add the dns servers to the common secgrp as well.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
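A hedged sketch of the ICMP rule; the security group name and the CIDR variable are placeholders:

```yaml
# Sketch only: allow ping from the admin node's CIDR only.
- name: Allow ICMP on the common security group from the admin CIDR
  os_security_group_rule:
    security_group: openshift-common-secgrp        # placeholder name
    protocol: icmp
    remote_ip_prefix: "{{ ssh_ingress_cidr }}"     # placeholder variable
    state: present
```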