Diffstat (limited to 'docs')
-rw-r--r-- | docs/best_practices_guide.adoc           | 495
-rw-r--r-- | docs/core_concepts_guide.adoc            |  43
-rw-r--r-- | docs/proposals/README.md                 |  27
-rw-r--r-- | docs/proposals/playbook_consolidation.md | 178
-rw-r--r-- | docs/proposals/proposal_template.md      |  30
-rw-r--r-- | docs/proposals/role_decomposition.md     | 353
-rw-r--r-- | docs/pull_requests.md                    |  95
-rw-r--r-- | docs/repo_structure.md                   |  61
-rw-r--r-- | docs/style_guide.adoc                    | 174
9 files changed, 1456 insertions, 0 deletions
diff --git a/docs/best_practices_guide.adoc b/docs/best_practices_guide.adoc new file mode 100644 index 000000000..e66c5addb --- /dev/null +++ b/docs/best_practices_guide.adoc @@ -0,0 +1,495 @@ +// vim: ft=asciidoc + += openshift-ansible Best Practices Guide + +The purpose of this guide is to describe the preferred patterns and best practices used in this repository (both in Ansible and Python). + +It is important to note that this repository may not currently comply with all best practices, but the intention is that it will. + +All new pull requests created against this repository MUST comply with this guide. + +This guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119]. + + + +== Python + +=== Method Signatures + +''' +[[When-adding-a-new-parameter-to-an-existing-method-a-default-value-SHOULD-be-used]] +[cols="2v,v"] +|=== +| <<When-adding-a-new-parameter-to-an-existing-method-a-default-value-SHOULD-be-used, Rule>> +| When adding a new parameter to an existing method, a default value SHOULD be used +|=== +The purpose of this rule is to make it so that method signatures are backwards compatible. + +If this rule isn't followed, it will be necessary for the person who changed the method to search out all callers and make sure that they're able to use the new method signature. + +.Before: +[source,python] +---- +def add_person(first_name, last_name): +---- + +.After: +[source,python] +---- +def add_person(first_name, last_name, age=None): +---- + + +=== PyLint +http://www.pylint.org/[PyLint] is used in an attempt to keep the Python code as clean and as manageable as possible. The build bot runs each pull request through PyLint and any warnings or errors cause the build bot to fail the pull request. + +''' +[[PyLint-rules-MUST-NOT-be-disabled-on-a-whole-file]] +[cols="2v,v"] +|=== +| <<PyLint-rules-MUST-NOT-be-disabled-on-a-whole-file, Rule>> +| PyLint rules MUST NOT be disabled on a whole file. +|=== + +Instead, http://docs.pylint.org/faq.html#is-it-possible-to-locally-disable-a-particular-message[disable the PyLint check on the line where PyLint is complaining]. + +''' +[[PyLint-rules-MUST-NOT-be-disabled-unless-they-meet-one-of-the-following-exceptions]] +[cols="2v,v"] +|=== +| <<PyLint-rules-MUST-NOT-be-disabled-unless-they-meet-one-of-the-following-exceptions, Rule>> +| PyLint rules MUST NOT be disabled unless they meet one of the following exceptions +|=== + +.Exceptions: +1. When PyLint fails because of a dependency that can't be installed on the build bot +1. When PyLint fails because of including a module that is outside of control (like Ansible) +1. When PyLint fails, but the code makes more sense the way it is formatted (stylistic exception). For this exception, the description of the PyLint disable MUST state why the code is more clear, AND the person reviewing the PR will decide if they agree or not. The reviewer may reject the PR if they disagree with the reason for the disable. + +''' +[[All-PyLint-rule-disables-MUST-be-documented-in-the-code]] +[cols="2v,v"] +|=== +| <<All-PyLint-rule-disables-MUST-be-documented-in-the-code, Rule>> +| All PyLint rule disables MUST be documented in the code. +|=== + +The purpose of this rule is to inform future developers about the disable. + +.Specifically, the following MUST accompany every PyLint disable: +1. Why is the check being disabled? +1. Is disabling this check meant to be permanent or temporary? 
+
+.Example:
+[source,python]
+----
+# Reason: disable pylint maybe-no-member because overloaded use of
+# the module name causes pylint to not detect that 'results'
+# is an array or hash
+# Status: permanently disabled unless a way is found to fix this.
+# pylint: disable=maybe-no-member
+metadata[line] = results.pop()
+----
+
+
+== Ansible
+
+=== Yaml Files (Playbooks, Roles, Vars, etc)
+
+'''
+[[Ansible-files-SHOULD-NOT-use-JSON-use-pure-YAML-instead]]
+[cols="2v,v"]
+|===
+| <<Ansible-files-SHOULD-NOT-use-JSON-use-pure-YAML-instead, Rule>>
+| Ansible files SHOULD NOT use JSON (use pure YAML instead).
+|===
+
+YAML is a superset of JSON, which means that Ansible allows JSON syntax to be interspersed. Even though YAML (and by extension Ansible) allows for this, JSON SHOULD NOT be used.
+
+.Reasons:
+* Ansible is able to give clearer error messages when the files are pure YAML
+* YAML reads nicer (preference held by several team members)
+* YAML makes for nicer diffs, as YAML tends to be multi-line whereas JSON tends to be more concise
+
+.Exceptions:
+* Ansible static inventory files are INI files. To pass in variables for specific hosts, Ansible allows for these variables to be put inside of the static inventory files. These variables can be in JSON format, but can't be in YAML format. This is an acceptable use of JSON, as YAML is not allowed in this case.
+
+Every effort should be made to keep our Ansible YAML files in pure YAML.
+
+=== Modules
+'''
+[[Custom-Ansible-modules-SHOULD-be-embedded-in-a-role]]
+[cols="2v,v"]
+|===
+| <<Custom-Ansible-modules-SHOULD-be-embedded-in-a-role, Rule>>
+| Custom Ansible modules SHOULD be embedded in a role.
+|===
+
+.Context
+* http://docs.ansible.com/ansible/playbooks_roles.html#embedding-modules-in-roles[Ansible doc on how to embed modules in roles]
+
+The purpose of this rule is to make it easy to include custom modules in our playbooks and share them on Ansible Galaxy.
+
+.Custom module `openshift_facts.py` is embedded in the `openshift_facts` role.
+----
+> ll openshift-ansible/roles/openshift_facts/library/
+-rwxrwxr-x. 1 user group 33616 Jul 22 09:36 openshift_facts.py
+----
+
+.Custom module `openshift_facts` can be used after the `openshift_facts` role has been referenced.
+[source,yaml]
+----
+- hosts: openshift_hosts
+  gather_facts: no
+  roles:
+  - role: openshift_facts
+  post_tasks:
+  - openshift_facts:
+      role: common
+      hostname: host
+      public_hostname: host.example.com
+----
+
+
+'''
+[[Parameters-to-Ansible-modules-SHOULD-use-the-Yaml-dictionary-format-when-3-or-more-parameters-are-being-passed]]
+[cols="2v,v"]
+|===
+| <<Parameters-to-Ansible-modules-SHOULD-use-the-Yaml-dictionary-format-when-3-or-more-parameters-are-being-passed, Rule>>
+| Parameters to Ansible modules SHOULD use the Yaml dictionary format when 3 or more parameters are being passed
+|===
+
+When a module has several parameters being passed in, it's hard to see exactly what value each parameter is getting. It is preferred to use the Ansible Yaml syntax to pass in parameters so that it's more clear what values are being passed for each parameter.
+ +.Bad: +[source,yaml] +---- +- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link +---- + +.Good: +[source,yaml] +---- +- file: + src: /file/to/link/to + dest: /path/to/symlink + owner: foo + group: foo + state: link +---- + + +''' +[[Parameters-to-Ansible-modules-SHOULD-use-the-Yaml-dictionary-format-when-the-line-length-exceeds-120-characters]] +[cols="2v,v"] +|=== +| <<Parameters-to-Ansible-modules-SHOULD-use-the-Yaml-dictionary-format-when-the-line-length-exceeds-120-characters, Rule>> +| Parameters to Ansible modules SHOULD use the Yaml dictionary format when the line length exceeds 120 characters +|=== + +Lines that are long quickly become a wall of text that isn't easily parsable. It is preferred to use the Ansible Yaml syntax to pass in parameters so that it's more clear what values are being passed for each parameter. + +.Bad: +[source,yaml] +---- +- get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf sha256sum=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c +---- + +.Good: +[source,yaml] +---- +- get_url: + url: http://example.com/path/file.conf + dest: /etc/foo.conf + sha256sum: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c +---- + +''' +[[The-Ansible-command-module-SHOULD-be-used-instead-of-the-Ansible-shell-module]] +[cols="2v,v"] +|=== +| <<The-Ansible-command-module-SHOULD-be-used-instead-of-the-Ansible-shell-module, Rule>> +| The Ansible `command` module SHOULD be used instead of the Ansible `shell` module. +|=== +.Context +* http://docs.ansible.com/shell_module.html#notes[Ansible doc on why using the command module is a best practice] + +The Ansible `shell` module can run most commands that can be run from a bash CLI. This makes it extremely powerful, but it also opens our playbooks up to being exploited by attackers. + +.Bad: +[source,yaml] +---- +- shell: "/bin/echo {{ cli_var }}" +---- + +.Better: +[source,yaml] +---- +- command: "/bin/echo {{ cli_var }}" +---- + +''' +[[The-Ansible-quote-filter-MUST-be-used-with-any-variable-passed-into-the-shell-module]] +[cols="2v,v"] +|=== +| <<The-Ansible-quote-filter-MUST-be-used-with-any-variable-passed-into-the-shell-module, Rule>> +| The Ansible `quote` filter MUST be used with any variable passed into the shell module. +|=== +.Context +* http://docs.ansible.com/shell_module.html#notes[Ansible doc describing why to use the quote filter] + +It is recommended not to use the `shell` module. However, if it absolutely must be used, all variables passed into the `shell` module MUST use the `quote` filter to ensure they are shell safe. + +.Bad: +[source,yaml] +---- +- shell: "/bin/echo {{ cli_var }}" +---- + +.Good: +[source,yaml] +---- +- shell: "/bin/echo {{ cli_var | quote }}" +---- + +=== Defensive Programming + +.Context +* http://docs.ansible.com/fail_module.html[Ansible Fail Module] + +''' +[[Ansible-playbooks-MUST-begin-with-checks-for-any-variables-that-they-require]] +[cols="2v,v"] +|=== +| <<Ansible-playbooks-MUST-begin-with-checks-for-any-variables-that-they-require, Rule>> +| Ansible playbooks MUST begin with checks for any variables that they require. +|=== + +If an Ansible playbook requires certain variables to be set, it's best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the playbook. 
+ +.Example: +[source,yaml] +---- +--- +- hosts: localhost + gather_facts: no + tasks: + - fail: msg="This playbook requires g_environment to be set and non empty" + when: g_environment is not defined or g_environment == '' +---- + +''' +[[Ansible-roles-tasks-main-yml-file-MUST-begin-with-checks-for-any-variables-that-they-require]] +[cols="2v,v"] +|=== +| <<Ansible-roles-tasks-main-yml-file-MUST-begin-with-checks-for-any-variables-that-they-require, Rule>> +| Ansible roles tasks/main.yml file MUST begin with checks for any variables that they require. +|=== + +If an Ansible role requires certain variables to be set, it's best to check for these up front before any other actions have been performed. In this way, the user knows exactly what needs to be passed into the role. + +.Example: +[source,yaml] +---- +--- +# tasks/main.yml +- fail: msg="This role requires arl_environment to be set and non empty" + when: arl_environment is not defined or arl_environment == '' +---- + +=== Tasks +''' +[[Ansible-tasks-SHOULD-NOT-be-used-in-ansible-playbooks-Instead-use-pre_tasks-and-post_tasks]] +[cols="2v,v"] +|=== +| <<Ansible-tasks-SHOULD-NOT-be-used-in-ansible-playbooks-Instead-use-pre_tasks-and-post_tasks, Rule>> +| Ansible tasks SHOULD NOT be used in Ansible playbooks. Instead, use pre_tasks and post_tasks. +|=== +An Ansible play is defined as a Yaml dictionary. Because of that, Ansible doesn't know if the play's tasks list or roles list was specified first. Therefore Ansible always runs tasks after roles. + +This can be quite confusing if the tasks list is defined in the playbook before the roles list because people assume in order execution in Ansible. + +Therefore, we SHOULD use pre_tasks and post_tasks to make it more clear when the tasks will be run. + +.Context +* https://docs.ansible.com/playbooks_roles.html[Ansible documentation on pre_tasks and post_tasks] + +.Bad: +[source,yaml] +---- +--- +# playbook.yml +- hosts: localhost + gather_facts: no + tasks: + - name: This will execute AFTER the example_role, so it's confusing + debug: msg="in tasks list" + roles: + - role: example_role + +# roles/example_role/tasks/main.yml +- debug: msg="in example_role" +---- + +.Good: +[source,yaml] +---- +--- +# playbook.yml +- hosts: localhost + gather_facts: no + pre_tasks: + - name: This will execute BEFORE the example_role, so it makes sense + debug: msg="in pre_tasks list" + roles: + - role: example_role + +# roles/example_role/tasks/main.yml +- debug: msg="in example_role" +---- + + +=== Roles + +''' +[[All-tasks-in-a-role-SHOULD-be-tagged-with-the-role-name]] +[cols="2v,v"] +|=== +| <<All-tasks-in-a-role-SHOULD-be-tagged-with-the-role-name, Rule>> +| All tasks in a role SHOULD be tagged with the role name. +|=== + +.Context +* http://docs.ansible.com/playbooks_tags.html[Ansible doc explaining tags] + +Ansible tasks can be tagged, and then these tags can be used to either _run_ or _skip_ the tagged tasks using the `--tags` and `--skip-tags` ansible-playbook options respectively. + +This is very useful when developing and debugging new tasks. It can also significantly speed up playbook runs if the user specifies only the roles that changed. 
+ +.Example: +[source,yaml] +---- +--- +# roles/example_role/tasks/main.yml +- debug: msg="in example_role" + tags: + - example_role +---- + + +''' +[[The-Ansible-roles-directory-MUST-maintain-a-flat-structure]] +[cols="2v,v"] +|=== +| <<The-Ansible-roles-directory-MUST-maintain-a-flat-structure, Rule>> +| The Ansible roles directory MUST maintain a flat structure. +|=== + +.Context +* http://docs.ansible.com/playbooks_best_practices.html#directory-layout[Ansible Suggested Directory Layout] + +.The purpose of this rule is to: +* Comply with the upstream best practices +* Make it familiar for new contributors +* Make it compatible with Ansible Galaxy + +''' +[[Ansible-Roles-SHOULD-be-named-like-technology_component_subcomponent]] +[cols="2v,v"] +|=== +| <<Ansible-Roles-SHOULD-be-named-like-technology_component_subcomponent, Rule>> +| Ansible Roles SHOULD be named like technology_component[_subcomponent]. +|=== + +For consistency, role names SHOULD follow the above naming pattern. It is important to note that this is a recommendation for role naming, and follows the pattern used by upstream. + +Many times the `technology` portion of the pattern will line up with a package name. It is advised that whenever possible, the package name should be used. + +.Examples: +* The role to configure a master is called `openshift_master` +* The role to configure OpenShift specific yum repositories is called `openshift_repos` + +=== Filters +.Context: +* https://docs.ansible.com/playbooks_filters.html[Ansible Playbook Filters] +* http://jinja.pocoo.org/docs/dev/templates/#builtin-filters[Jinja2 Builtin Filters] + +''' +[[The-default-filter-SHOULD-replace-empty-strings-lists-etc]] +[cols="2v,v"] +|=== +| <<The-default-filter-SHOULD-replace-empty-strings-lists-etc, Rule>> +| The `default` filter SHOULD replace empty strings, lists, etc. +|=== + +When using the jinja2 `default` filter, unless the variable is a boolean, specify `true` as the second parameter. This will cause the default filter to replace empty strings, lists, etc with the provided default. + +This is because it is preferable to either have a sane default set than to have an empty string, list, etc. For example, it is preferable to have a config value set to a sane default than to have it simply set as an empty string. + +.From the http://jinja.pocoo.org/docs/dev/templates/[Jinja2 Docs]: +[quote] +If you want to use default with variables that evaluate to false you have to set the second parameter to true + +.Example: +[source,yaml] +---- +--- +- hosts: localhost + gather_facts: no + vars: + somevar: '' + tasks: + - debug: var=somevar + + - name: "Will output 'somevar: []'" + debug: "msg='somevar: [{{ somevar | default('the string was empty') }}]'" + + - name: "Will output 'somevar: [the string was empty]'" + debug: "msg='somevar: [{{ somevar | default('the string was empty', true) }}]'" +---- + + +In other words, normally the `default` filter will only replace the value if it's undefined. By setting the second parameter to `true`, it will also replace the value if it defaults to a false value in Python, so None, empty list, empty string, etc. + +This is almost always more desirable than an empty list, string, etc. + +=== Yum and DNF +''' +[[Package-installation-MUST-use-ansible-package-module-to-abstract-away-dnf-yum]] +[cols="2v,v"] +|=== +| <<Package-installation-MUST-use-ansible-package-module-to-abstract-away-dnf-yum, Rule>> +| Package installation MUST use Ansible `package` module to abstract away dnf/yum. 
+|=== + +The Ansible `package` module calls the associated package manager for the underlying OS. + +.Reference +* https://docs.ansible.com/ansible/package_module.html[Ansible package module] + +.Bad: +[source,yaml] +---- +--- +# tasks.yml +- name: Install etcd (for etcdctl) + yum: name=etcd state=latest + when: ansible_pkg_mgr == yum + register: install_result + +- name: Install etcd (for etcdctl) + dnf: name=etcd state=latest + when: ansible_pkg_mgr == dnf + register: install_result +---- + + +.Good: +[source,yaml] +---- +--- +# tasks.yml +- name: Install etcd (for etcdctl) + package: name=etcd state=latest + register: install_result +---- diff --git a/docs/core_concepts_guide.adoc b/docs/core_concepts_guide.adoc new file mode 100644 index 000000000..b29e467e2 --- /dev/null +++ b/docs/core_concepts_guide.adoc @@ -0,0 +1,43 @@ +// vim: ft=asciidoc + += openshift-ansible Core Concepts Guide + +The purpose of this guide is to describe core concepts used in this repository. + +It is important to note that this repository may not currently implement all of the concepts, but the intention is that it will. + +== Logical Grouping Concepts +The following are the concepts used to logically group OpenShift cluster instances. + +These groupings are used to perform operations specifically against instances in the specified group. + +For example, run an Ansible playbook against all instances in the `production` environment, or run an adhoc command against all instances in the `acme-corp` cluster group. + +=== Cluster +A Cluster is a complete install of OpenShift (master, nodes, registry, router, etc). + +Example: Acme Corp has sales and marketing departments that both want to use OpenShift for their internal applications, but they do not want to share resources because they have different cost centers. Each department could have their own completely separate install of OpenShift. Each install is a separate OpenShift cluster. + +Defined Clusters: +`acme-sales` +`acme-marketing` + +=== Cluster Group +A cluster group is a logical grouping of one or more clusters. Which clusters are in which cluster groups is determined by the OpenShift administrators. + +Example: Extending the example above, both marketing and sales clusters are part of Acme Corp. Let's say that Acme Corp contracts with Hosting Corp to host their OpenShift clusters. Hosting Corp could create an Acme Corp cluster group. + +This would logically group Acme Corp resources from other Hosting Corp customers, which would enable the Hosting Corp's OpenShift administrators to run operations specifically targeting Acme Corp instances. + +Defined Cluster Group: +`acme-corp` + +=== Environment +An environment is a logical grouping of one or more cluster groups. How the environment is defined is determined by the OpenShift administrators. + +Example: Extending the two examples above, Hosting Corp is upgrading to the latest version of OpenShift. Before deploying it to their clusters in the Production environment, they want to test it out. So, Hosting Corp runs an Ansible playbook specifically against all of the cluster groups in the Staging environment in order to do the OpenShift upgrade. + + +Defined Environments: +`production` +`staging` diff --git a/docs/proposals/README.md b/docs/proposals/README.md new file mode 100644 index 000000000..89bbe5163 --- /dev/null +++ b/docs/proposals/README.md @@ -0,0 +1,27 @@ +# OpenShift-Ansible Proposal Process + +## Proposal Decision Tree +TODO: Add details about when a proposal is or is not required. 
+
+## Proposal Process
+The following process should be followed when a proposal is needed:
+
+1. Create a pull request with the initial proposal
+   * Use the [proposal template][template]
+   * Name the proposal using two or three topic words with underscores as a separator (e.g., proposal_template.md)
+   * Place the proposal in the docs/proposals directory
+2. Notify the development team of the proposal and request feedback
+3. Review the proposal at the OpenShift-Ansible Architecture Meeting
+4. Update the proposal as needed and ask for feedback
+5. Approved/Closed Phase
+   * If 75% or more of the active development team give the proposal a :+1:, it is Approved
+   * If 50% or more of the active development team disagrees with the proposal, it is Closed
+   * If the person proposing the proposal no longer wishes to continue, they can request that it be Closed
+   * If there is no activity on a proposal, the active development team may Close the proposal at their discretion
+   * If none of the above are met, the cycle continues at Step 4
+6. For approved proposals, the current development lead(s) will:
+   * Update the Pull Request with the result and merge the proposal
+   * Create a card on the Cluster Lifecycle [Trello board][trello] so it may be scheduled for implementation.
+
+[template]: proposal_template.md
+[trello]: https://trello.com/b/wJYDst6C
diff --git a/docs/proposals/playbook_consolidation.md b/docs/proposals/playbook_consolidation.md
new file mode 100644
index 000000000..98aedb021
--- /dev/null
+++ b/docs/proposals/playbook_consolidation.md
@@ -0,0 +1,178 @@
+# OpenShift-Ansible Playbook Consolidation
+
+## Description
+The `byo` designation is no longer applicable because the playbooks in the
+`byo` directory can deploy on either physical hardware or cloud resources.
+Consolidation of these directories will make maintaining the code base easier
+and provide a more straightforward project for users and developers.
+
+The main points of this proposal are:
+* Consolidate initialization playbooks into one set of playbooks in
+  `playbooks/init`.
+* Collapse `playbooks/byo` and `playbooks/common` into one set of
+  directories at `playbooks/openshift-*`.
+
+This consolidation effort may be more appropriate when the project moves to
+using a container as the default installation method.
+
+## Design
+
+### Initialization Playbook Consolidation
+Currently there are two separate sets of initialization playbooks:
+* `playbooks/byo/openshift-cluster/initialize_groups.yml`
+* `playbooks/common/openshift-cluster/std_include.yml`
+
+Although these playbooks are located in the `openshift-cluster` directory, they
+are shared by all of the `openshift-*` areas. These playbooks would be better
+organized in a `playbooks/init` directory collocated with all their related
+playbooks.
+
+In the example below, the following changes have been made:
+* `playbooks/byo/openshift-cluster/initialize_groups.yml` renamed to
+  `playbooks/init/initialize_host_groups.yml`
+* `playbooks/common/openshift-cluster/std_include.yml` renamed to
+  `playbooks/init/main.yml`
+* `- include: playbooks/init/initialize_host_groups.yml` has been added to the
+  top of `playbooks/init/main.yml`
+* All other related files for initialization have been moved to `playbooks/init`
+
+The `initialize_host_groups.yml` playbook is only one play with one task for
+importing variables for inventory group conversions. This task could be further
+consolidated with the play in `evaluate_groups.yml`.
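+
+As a rough sketch only (the exact play keywords are an assumption, and the
+`vars/cluster_hosts.yml` location follows the tree below), such a single-play
+`initialize_host_groups.yml` might look like:
+
+```yaml
+# Hypothetical sketch of playbooks/init/initialize_host_groups.yml
+---
+- name: Initialize host groups
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  tasks:
+  # Import the variable file that drives inventory group conversions
+  - include_vars: vars/cluster_hosts.yml
+```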
+
+The new standard initialization playbook would be
+`playbooks/init/main.yml`.
+
+```
+> $ tree openshift-ansible/playbooks/init
+.
+├── evaluate_groups.yml
+├── initialize_facts.yml
+├── initialize_host_groups.yml
+├── initialize_openshift_repos.yml
+├── initialize_openshift_version.yml
+├── main.yml
+├── roles -> ../../roles
+├── validate_hostnames.yml
+└── vars
+    └── cluster_hosts.yml
+```
+
+```yaml
+# openshift-ansible/playbooks/init/main.yml
+---
+- include: initialize_host_groups.yml
+
+- include: evaluate_groups.yml
+
+- include: initialize_facts.yml
+
+- include: validate_hostnames.yml
+
+- include: initialize_openshift_repos.yml
+
+- include: initialize_openshift_version.yml
+```
+
+### `byo` and `common` Playbook Consolidation
+Historically, the `byo` directory coexisted with other platform directories
+which contained playbooks that then called into `common` playbooks to perform
+common installation steps for all platforms. Since the other platform
+directories have been removed, this separation is no longer necessary.
+
+In the example below, the following changes have been made:
+* `playbooks/byo/openshift-master` renamed to
+  `playbooks/openshift-master`
+* `playbooks/common/openshift-master` renamed to
+  `playbooks/openshift-master/private`
+* Original `byo` entry point playbooks have been updated to include their
+  respective playbooks from `private/`.
+* Symbolic links have been updated as necessary.
+
+All user-consumable playbooks are in the root of `openshift-master`, and no entry
+point playbooks exist in the `private` directory. Maintaining the separation
+between entry point playbooks and the private playbooks allows individual pieces
+of the deployments to be used as needed by other components.
+
+```
+openshift-ansible/playbooks/openshift-master
+> $ tree
+.
+├── config.yml
+├── private
+│   ├── additional_config.yml
+│   ├── config.yml
+│   ├── filter_plugins -> ../../../filter_plugins
+│   ├── library -> ../../../library
+│   ├── lookup_plugins -> ../../../lookup_plugins
+│   ├── restart_hosts.yml
+│   ├── restart_services.yml
+│   ├── restart.yml
+│   ├── roles -> ../../../roles
+│   ├── scaleup.yml
+│   └── validate_restart.yml
+├── restart.yml
+└── scaleup.yml
```
+
+```yaml
+# openshift-ansible/playbooks/openshift-master/config.yml
+---
+- include: ../init/main.yml
+
+- include: private/config.yml
+```
+
+With the consolidation of the directory structure and component installs being
+removed from `openshift-cluster`, that directory is no longer necessary. To
+deploy an entire OpenShift cluster, a playbook would be created to tie together
+all of the different components. The following example shows how multiple
+components would be combined to perform a complete install.
+
+```yaml
+# openshift-ansible/playbooks/deploy_cluster.yml
+---
+- include: init/main.yml
+
+- include: openshift-etcd/private/config.yml
+
+- include: openshift-nfs/private/config.yml
+
+- include: openshift-loadbalancer/private/config.yml
+
+- include: openshift-master/private/config.yml
+
+- include: openshift-node/private/config.yml
+
+- include: openshift-glusterfs/private/config.yml
+
+- include: openshift-hosted/private/config.yml
+
+- include: openshift-service-catalog/private/config.yml
+```
+
+## User Story
+As a developer of OpenShift-Ansible,
+I want to simplify the playbook directory structure
+so that users can easily find deployment playbooks and developers know where new
+features should be developed.
+
+## Implementation
+Given the size of this refactoring effort, it should be broken into smaller
+steps which can be completed independently while still maintaining a functional
+project.
+
+Steps:
+1. Update and merge consolidation of the initialization playbooks.
+2. Update and merge consolidation of each `openshift-*` component area.
+3. Update and merge consolidation of `openshift-cluster`.
+
+## Acceptance Criteria
+* Verify that all entry point playbooks install or configure as expected.
+* Verify that CI is updated for testing new playbook locations.
+* Verify that repo documentation is updated.
+* Verify that user documentation is updated.
+
+## References
diff --git a/docs/proposals/proposal_template.md b/docs/proposals/proposal_template.md
new file mode 100644
index 000000000..ece288037
--- /dev/null
+++ b/docs/proposals/proposal_template.md
@@ -0,0 +1,30 @@
+# Proposal Title
+
+## Description
+<Short introduction>
+
+## Rationale
+<Summary of main points of Design>
+
+## Design
+<Main content goes here>
+
+## Checklist
+* Item 1
+* Item 2
+* Item 3
+
+## User Story
+As a developer on OpenShift-Ansible,
+I want ...
+so that ...
+
+## Acceptance Criteria
+* Verify that ...
+* Verify that ...
+* Verify that ...
+
+## References
+* Link
+* Link
+* Link
diff --git a/docs/proposals/role_decomposition.md b/docs/proposals/role_decomposition.md
new file mode 100644
index 000000000..b6c1d8c5b
--- /dev/null
+++ b/docs/proposals/role_decomposition.md
@@ -0,0 +1,353 @@
+# Scaffolding for decomposing large roles
+
+## Why?
+
+Currently we have roles that are very large and encompass a lot of different
+components. This requires a lot of logic within the role, can create complex
+conditionals, and increases the learning curve for the role.
+
+## How?
+
+Creating a guide on how to approach breaking up a large role into smaller,
+component-based roles, and describing how to develop new roles so as to avoid
+creating large roles.
+
+## Proposal
+
+Create a new guide, or append to the current contributing guide, a process for
+identifying large roles that can be split up, and how to compose smaller roles
+going forward.
+
+### Large roles
+
+A role should be considered for decomposition if it:
+
+1) Configures/installs more than one product.
+1) Can configure multiple variations of the same product that can live
+side by side.
+1) Has different entry points for upgrading and installing a product.
+
+Large roles<sup>1</sup> should be responsible for:
+> <sup>1</sup> or composing playbooks
+
+1) Composing smaller roles to provide a full solution such as an OpenShift Master
+1) Ensuring that smaller roles are called in the correct order if necessary
+1) Calling smaller roles with their required variables
+1) Performing prerequisite tasks that small roles may depend on being in place
+(openshift_logging certificate generation, for example)
+
+### Small roles
+
+A small role should be able to:
+
+1) Be deployed independently of other products (this is different than requiring
+being installed after other base components such as OCP)
+1) Be self-contained and able to determine the facts that it requires to complete
+1) Fail fast when facts it requires are not available or are invalid (see the
+sketch just before the snippets below)
+1) "Make it so" based on provided variables and anything that may be required
+as part of doing such (this should include data migrations)
+1) Have a minimal set of dependencies in meta/main.yml, just enough to do its job
+
+### Example using decomposition of openshift_logging
+
+The `openshift_logging` role was created as a port from the deployer image for
+the `3.5` deliverable. It was a large role that created the service accounts,
+configmaps, secrets, routes, and deployment configs/daemonset required for each
+of its different components (Fluentd, Kibana, Curator, Elasticsearch).
+
+It was possible to configure any of the components independently of one another,
+up to a point. However, it was an all-or-nothing installation, and there was a
+need from customers to be able to do things like deploy just Fluentd.
+
+Also, being able to support multiple versions of configuration files would become
+increasingly messy with a large role, especially if the components had changes
+at different intervals.
+
+#### Folding of responsibility
+
+There was duplicated work within the installation of three of the four logging
+components, where it was possible to deploy both an 'operations' and a
+'non-operations' cluster side by side. The first step was to collapse that
+duplicated work into a single path and allow a variable to be provided so that
+either possibility could be created.
+
+#### Consolidation of responsibility
+
+The OCP objects required for each component were being generated in the same
+task file: all Service Accounts were created at the same time, as were all
+secrets, configmaps, etc. The only components that were not generated at the
+same time were the deployment configs and the daemonset. The second step was to
+make the small roles self-contained so that each generates its own required
+objects.
+
+#### Consideration for prerequisites
+
+Currently the Aggregated Logging stack generates its own certificates, as it has
+some requirements that prevent it from utilizing the OCP cert generation service.
+In order to make sure that all components are able to trust one another as they
+did previously, until the cert generation service can be used, the certificate
+generation is handled within the top-level `openshift_logging` role, which
+provides the location of the generated certificates to the individual roles.
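+
+To make the fail-fast expectation above concrete, here is a minimal sketch of
+the kind of up-front check a small role could run at the top of its
+tasks/main.yml. The specific variable checked is only an illustration, borrowed
+from the logging roles in the snippets below:
+
+```yaml
+# Hypothetical fail-fast check at the top of a small role's tasks/main.yml
+- name: Fail fast when required variables are missing or invalid
+  fail:
+    msg: "This role requires openshift_logging_namespace to be set and non-empty"
+  when: openshift_logging_namespace is not defined or openshift_logging_namespace == ''
+```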
+ +#### Snippets + +[openshift_logging/tasks/install_logging.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/tasks/install_logging.yaml) +```yaml +- name: Gather OpenShift Logging Facts + openshift_logging_facts: + oc_bin: "{{openshift.common.client_binary}}" + openshift_logging_namespace: "{{openshift_logging_namespace}}" + +- name: Set logging project + oc_project: + state: present + name: "{{ openshift_logging_namespace }}" + +- name: Create logging cert directory + file: + path: "{{ openshift.common.config_base }}/logging" + state: directory + mode: 0755 + changed_when: False + check_mode: no + +- include: generate_certs.yaml + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + +## Elasticsearch +- include_role: + name: openshift_logging_elasticsearch + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + +- include_role: + name: openshift_logging_elasticsearch + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + openshift_logging_es_ops_deployment: true + when: + - openshift_logging_use_ops | bool + + +## Kibana +- include_role: + name: openshift_logging_kibana + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}" + openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}" + openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}" + openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}" + openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}" + openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_replica_count }}" + openshift_logging_kibana_es_host: "{{ openshift_logging_es_host }}" + openshift_logging_kibana_es_port: "{{ openshift_logging_es_port }}" + openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}" + +- include_role: + name: openshift_logging_kibana + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + openshift_logging_kibana_ops_deployment: true + openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}" + openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}" + openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}" + openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}" + openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}" + openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}" + openshift_logging_kibana_es_host: "{{ openshift_logging_es_ops_host }}" + openshift_logging_kibana_es_port: "{{ openshift_logging_es_ops_port }}" + openshift_logging_kibana_nodeselector: "{{ openshift_logging_kibana_ops_nodeselector }}" + openshift_logging_kibana_cpu_limit: "{{ openshift_logging_kibana_ops_cpu_limit }}" + openshift_logging_kibana_memory_limit: "{{ openshift_logging_kibana_ops_memory_limit }}" + openshift_logging_kibana_hostname: "{{ openshift_logging_kibana_ops_hostname }}" + openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_ops_replica_count }}" + openshift_logging_kibana_proxy_debug: "{{ openshift_logging_kibana_ops_proxy_debug }}" + openshift_logging_kibana_proxy_cpu_limit: "{{ openshift_logging_kibana_ops_proxy_cpu_limit }}" + openshift_logging_kibana_proxy_memory_limit: "{{ openshift_logging_kibana_ops_proxy_memory_limit }}" + 
openshift_logging_kibana_cert: "{{ openshift_logging_kibana_ops_cert }}" + openshift_logging_kibana_key: "{{ openshift_logging_kibana_ops_key }}" + openshift_logging_kibana_ca: "{{ openshift_logging_kibana_ops_ca}}" + when: + - openshift_logging_use_ops | bool + + +## Curator +- include_role: + name: openshift_logging_curator + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}" + openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}" + openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}" + openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}" + openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}" + +- include_role: + name: openshift_logging_curator + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + openshift_logging_curator_ops_deployment: true + openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}" + openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}" + openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}" + openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}" + openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}" + openshift_logging_curator_cpu_limit: "{{ openshift_logging_curator_ops_cpu_limit }}" + openshift_logging_curator_memory_limit: "{{ openshift_logging_curator_ops_memory_limit }}" + openshift_logging_curator_nodeselector: "{{ openshift_logging_curator_ops_nodeselector }}" + when: + - openshift_logging_use_ops | bool + + +## Fluentd +- include_role: + name: openshift_logging_fluentd + vars: + generated_certs_dir: "{{openshift.common.config_base}}/logging" + +- include: update_master_config.yaml +``` + +[openshift_logging_elasticsearch/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging_elasticsearch/meta/main.yaml) +```yaml +--- +galaxy_info: + author: OpenShift Red Hat + description: OpenShift Aggregated Logging Elasticsearch Component + company: Red Hat, Inc. + license: Apache License, Version 2.0 + min_ansible_version: 2.2 + platforms: + - name: EL + versions: + - 7 + categories: + - cloud +dependencies: +- role: lib_openshift +``` + +[openshift_logging/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/meta/main.yaml) +```yaml +--- +galaxy_info: + author: OpenShift Red Hat + description: OpenShift Aggregated Logging + company: Red Hat, Inc. 
+  license: Apache License, Version 2.0
+  min_ansible_version: 2.2
+  platforms:
+  - name: EL
+    versions:
+    - 7
+  categories:
+  - cloud
+dependencies:
+- role: lib_openshift
+- role: openshift_facts
+```
+
+[openshift_logging/tasks/install_support.yaml - old](https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_logging/tasks/install_support.yaml)
+```yaml
+---
+# This is the base configuration for installing the other components
+- name: Check for logging project already exists
+  command: >
+    {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig get project {{openshift_logging_namespace}} --no-headers
+  register: logging_project_result
+  ignore_errors: yes
+  when: not ansible_check_mode
+  changed_when: no
+
+- name: "Create logging project"
+  command: >
+    {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig new-project {{openshift_logging_namespace}}
+  when: not ansible_check_mode and "not found" in logging_project_result.stderr
+
+- name: Create logging cert directory
+  file: path={{openshift.common.config_base}}/logging state=directory mode=0755
+  changed_when: False
+  check_mode: no
+
+- include: generate_certs.yaml
+  vars:
+    generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- name: Create temp directory for all our templates
+  file: path={{mktemp.stdout}}/templates state=directory mode=0755
+  changed_when: False
+  check_mode: no
+
+- include: generate_secrets.yaml
+  vars:
+    generated_certs_dir: "{{openshift.common.config_base}}/logging"
+
+- include: generate_configmaps.yaml
+
+- include: generate_services.yaml
+
+- name: Generate kibana-proxy oauth client
+  template: src=oauth-client.j2 dest={{mktemp.stdout}}/templates/oauth-client.yaml
+  vars:
+    secret: "{{oauth_secret}}"
+  when: oauth_secret is defined
+  check_mode: no
+  changed_when: no
+
+- include: generate_clusterroles.yaml
+
+- include: generate_rolebindings.yaml
+
+- include: generate_clusterrolebindings.yaml
+
+- include: generate_serviceaccounts.yaml
+
+- include: generate_routes.yaml
+```
+
+# Limitations
+
+There will always be exceptions to some of these rules; however, the majority of
+roles should be able to fall within these guidelines.
+
+# Additional considerations
+
+## Playbooks including playbooks
+In some circumstances it does not make sense to have a composing role; instead,
+a playbook would be best for orchestrating the role flow. Decisions made regarding
+playbooks including playbooks will need to be taken into consideration as part of
+defining this process.
+Ref: (link to rteague's presentation?)
+
+## Role dependencies
+We want to make sure that our roles do not carry any extra or unnecessary
+dependencies in meta/main.yml. No dependency should be added without:
+
+1. Proposing the inclusion in a team meeting or as part of the PR review, and getting agreement
+1. Documenting in meta/main.yml why it is there and when it was agreed to (date)
+
+## Avoiding overly verbose roles
+When we are splitting our roles up into smaller components, we want to ensure we
+avoid creating roles that are, for lack of a better term, overly verbose. What
+do we mean by that? If we have `openshift_master` as an example, and we were to
+split it up, we would have a component for `etcd`, `docker`, and possibly for
+its rpms/configs. We would want to avoid creating a role that would just create
+certificates, as those would make sense to be contained with the rpms and configs.
+Likewise, when it comes to being able to restart the master, we wouldn't have a +role where that was its sole purpose. + +The same would apply for the `etcd` and `docker` roles. Anything that is required +as part of installing `etcd` such as generating certificates, installing rpms, +and upgrading data between versions should all be contained within the single +`etcd` role. + +## Enforcing standards +Certain naming standards like variable names could be verified as part of a Travis +test. If we were going to also enforce that a role either has tasks or includes +(for example) then we could create tests for that as well. + +## CI tests for individual roles +If we are able to correctly split up roles, it should be possible to test role +installations/upgrades like unit tests (assuming they would be able to be installed +independently of other components). diff --git a/docs/pull_requests.md b/docs/pull_requests.md new file mode 100644 index 000000000..45ae01a9d --- /dev/null +++ b/docs/pull_requests.md @@ -0,0 +1,95 @@ +# Pull Request process + +Pull Requests in the `openshift-ansible` project follow a +[Continuous](https://en.wikipedia.org/wiki/Continuous_integration) +[Integration](https://martinfowler.com/articles/continuousIntegration.html) +process that is similar to the process observed in other repositories such as +[`origin`](https://github.com/openshift/origin). + +Whenever a +[Pull Request is opened](../CONTRIBUTING.md#submitting-contributions), some +automated test jobs must be successfully run before the PR can be merged. + +Some of these jobs are automatically triggered, e.g., Travis, PAPR, and +Coveralls. Other jobs need to be manually triggered by a member of the +[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors). + +## Triggering tests + +We have two different Jenkins infrastructures, and, while that holds true, there +are two commands that trigger a different set of test jobs. We are working on +simplifying the workflow towards a single infrastructure in the future. + +- **Test jobs on the older infrastructure** + + Members of the [OpenShift organization](https://github.com/orgs/openshift/people) + can trigger the set of test jobs in the older infrastructure by writing a + comment with the exact text `aos-ci-test` and nothing else. + + The Jenkins host is not publicly accessible. Test results are posted to S3 + buckets when complete, and links are available both at the bottom of the Pull + Request page and as comments posted by + [@openshift-bot](https://github.com/openshift-bot). + +- **Test jobs on the newer infrastructure** + + Members of the + [Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors) + can trigger the set of test jobs in the newer infrastructure by writing a + comment containing `[test]` anywhere in the comment body. + + The [Jenkins host](https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible/) + is publicly accessible. Like for the older infrastructure, the result of each + job is also posted to the Pull Request as comments and summarized at the + bottom of the Pull Request page. + +### Fedora tests + +There are a set of tests that run on Fedora infrastructure. They are started +automatically with every pull request. + +They are implemented using the [`PAPR` framework](https://github.com/projectatomic/papr). + +To re-run tests, write a comment containing only `bot, retest this please`. 
+
+## Triggering merge
+
+After a PR is properly reviewed and a set of
+[required jobs](https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml)
+have reported successfully, it can be tagged for merge by a member of the
+[Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors)
+by writing a comment containing `[merge]` anywhere in the comment body.
+
+Tagging a Pull Request for merge puts it in an automated merge queue. The
+[@openshift-bot](https://github.com/openshift-bot) monitors the queue and merges
+PRs that pass all of the required tests.
+
+### Manual merges
+
+The normal process described above should be followed: `aos-ci-test` and
+`[test]` / `[merge]`.
+
+In exceptional cases, such as when known problems with the merge queue prevent
+PRs from being merged, a PR may be manually merged if _all_ of these conditions
+are true:
+
+- [ ] Travis job must have passed (as enforced by GitHub)
+- [ ] Must have passed `aos-ci-test` (as enforced by GitHub)
+- [ ] Must have a positive review (as enforced by GitHub)
+- [ ] Must have failed the `[merge]` queue with a reported flake at least twice
+- [ ] Must have [issues labeled kind/test-flake](https://github.com/openshift/origin/issues?q=is%3Aopen+is%3Aissue+label%3Akind%2Ftest-flake) in [Origin](https://github.com/openshift/origin) linked in comments for the failures
- [ ] Content must not have changed since all of the above conditions have been met (no rebases, no new commits)
+
+This exception is temporary and should be completely removed in the future once
+the merge queue has become more stable.
+
+Only members of the
+[Team OpenShift Ansible Committers](https://github.com/orgs/openshift/teams/team-openshift-ansible-committers)
+can perform manual merges.
+
+## Useful links
+
+- Repository containing Jenkins job definitions: https://github.com/openshift/aos-cd-jobs
+- List of required successful jobs before merge: https://github.com/openshift/aos-cd-jobs/blob/master/sjb/test_status_config.yml
+- Source code of the bot responsible for testing and merging PRs: https://github.com/openshift/test-pull-requests/
+- Trend of the time taken by merge jobs: https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_openshift_ansible/buildTimeTrend
diff --git a/docs/repo_structure.md b/docs/repo_structure.md
new file mode 100644
index 000000000..49300f80c
--- /dev/null
+++ b/docs/repo_structure.md
@@ -0,0 +1,61 @@
+# Repository structure
+
+### Ansible
+
+```
+.
+├── inventory   Contains dynamic inventory scripts, and examples of
+│               Ansible inventories.
+├── library     Contains Python modules used by the playbooks.
+├── playbooks   Contains Ansible playbooks targeting multiple use cases.
+└── roles       Contains Ansible roles, units of shared behavior among
+                playbooks.
+```
+
+#### Ansible plugins
+
+These are plugins used in playbooks and roles:
+
+```
+.
+├── ansible-profile
+├── callback_plugins
+├── filter_plugins
+└── lookup_plugins
+```
+
+### Scripts
+
+```
+.
+└── utils       Contains the `atomic-openshift-installer` command, an
+                interactive CLI utility to install OpenShift across a
+                set of hosts.
+```
+
+### Documentation
+
+```
+.
+└── docs        Contains documentation for this repository.
+```
+
+### Tests
+
+```
+.
+└── test        Contains tests.
+```
+
+### CI
+
+These files are used by [PAPR](https://github.com/projectatomic/papr),
+which is very similar in workflow to Travis, with the test
+environment and test scripts defined in a YAML file.
+
+```
+.
+├── .papr.yml
+├── .papr.sh
+└── .papr.inventory
+```
diff --git a/docs/style_guide.adoc b/docs/style_guide.adoc
new file mode 100644
index 000000000..2c2cb8610
--- /dev/null
+++ b/docs/style_guide.adoc
@@ -0,0 +1,174 @@
+// vim: ft=asciidoc
+
+= openshift-ansible Style Guide
+
+The purpose of this guide is to describe the preferred coding conventions used in this repository (both in Ansible and Python).
+
+It is important to note that this repository may not currently comply with all style guide rules, but the intention is that it will.
+
+All new pull requests created against this repository MUST comply with this guide.
+
+This style guide complies with https://www.ietf.org/rfc/rfc2119.txt[RFC2119].
+
+== Python
+
+
+=== Python Maximum Line Length
+
+.Context:
+* https://www.python.org/dev/peps/pep-0008/#maximum-line-length[Python Pep8 Line Length]
+
+'''
+[[All-lines-SHOULD-be-no-longer-than-80-characters]]
+[cols="2v,v"]
+|===
+| <<All-lines-SHOULD-be-no-longer-than-80-characters, Rule>>
+| All lines SHOULD be no longer than 80 characters.
+|===
+
+Every attempt SHOULD be made to comply with this soft line length limit; it should be violated only when doing so makes the code more readable.
+
+Code readability is subjective; therefore pull requests SHOULD still be merged even if they violate this soft limit, as it is up to the individual contributor to determine whether exceeding the 80-character soft limit is justified.
+
+
+'''
+[[All-lines-MUST-be-no-longer-than-120-characters]]
+[cols="2v,v"]
+|===
+| <<All-lines-MUST-be-no-longer-than-120-characters, Rule>>
+| All lines MUST be no longer than 120 characters.
+|===
+
+This is a hard limit and is enforced by the build bot. This check MUST NOT be disabled.
+
+
+
+== Ansible
+
+
+=== Ansible Yaml file extension
+'''
+[[All-Ansible-Yaml-files-MUST-have-a-yml-extension-and-NOT-YML-yaml-etc]]
+[cols="2v,v"]
+|===
+| <<All-Ansible-Yaml-files-MUST-have-a-yml-extension-and-NOT-YML-yaml-etc, Rule>>
+| All Ansible Yaml files MUST have a .yml extension (and NOT .YML, .yaml, etc).
+|===
+
+Ansible tooling (like `ansible-galaxy init`) creates files with a .yml extension. Also, the Ansible documentation website references files with a .yml extension several times. Because of this, it is normal in the Ansible community to use a .yml extension for all Ansible Yaml files.
+
+Example: `tasks.yml`
+
+
+=== Ansible CLI Variables
+'''
+[[Variables-meant-to-be-passed-in-from-the-ansible-CLI-MUST-have-a-prefix-of-cli]]
+[cols="2v,v"]
+|===
+| <<Variables-meant-to-be-passed-in-from-the-ansible-CLI-MUST-have-a-prefix-of-cli, Rule>>
+| Variables meant to be passed in from the ansible CLI MUST have a prefix of cli_
+|===
+
+Ansible allows variables to be passed in on the command line using the `-e` option. These variables MUST have a prefix of cli_ so that it's clear where they came from, and that they could be exploited.
+
+
+.Example:
+[source]
+----
+ansible-playbook -e cli_foo=bar someplays.yml
+----
+
+=== Ansible Global Variables
+'''
+[[Global-variables-MUST-have-a-prefix-of-g]]
+[cols="2v,v"]
+|===
+| <<Global-variables-MUST-have-a-prefix-of-g, Rule>>
+| Global variables MUST have a prefix of g_
+|===
+Ansible global variables are defined as any variables outside of Ansible roles. Examples include playbook variables, variables passed in on the CLI, etc.
+
+
+.Example:
+[source]
+----
+g_environment: someval
+----
+
+=== Ansible Role Variables
+Ansible role variables are defined as variables contained in (or passed into) a role.
+
+'''
+[[Role-variables-MUST-have-a-prefix-of-atleast-3-characters-See.below.for.specific.naming.rules]]
+[cols="2v,v"]
+|===
+| <<Role-variables-MUST-have-a-prefix-of-atleast-3-characters-See.below.for.specific.naming.rules, Rule>>
+| Role variables MUST have a prefix of at least 3 characters. See below for specific naming rules.
+|===
+
+==== Role with 3 (or more) words in the name
+
+Take the first letter of each of the words.
+
+.3 word example:
+* Role name: made_up_role
+* Prefix: mur
+[source]
+----
+mur_var1: value_one
+----
+
+.4 word example:
+* Role name: totally_made_up_role
+* Prefix: tmur
+[source]
+----
+tmur_var1: value_one
+----
+
+
+
+==== Role with 2 (or fewer) words in the name
+
+Make up a prefix that makes sense.
+
+.1 word example:
+* Role name: ansible
+* Prefix: ans
+[source]
+----
+ans_var1: value_one
+----
+
+.2 word example:
+* Role name: ansible_tower
+* Prefix: tow
+[source]
+----
+tow_var1: value_one
+----
+
+
+==== Role name prefix conflicts
+If two role names contain words that start with the same letters, it will seem like their prefixes would conflict.
+
+Role variables are confined to the roles themselves, so this is actually only a problem if one of the roles depends on the other role (or uses includes into the other role).
+
+.Same prefix example:
+* First Role Name: made_up_role
+* First Role Prefix: mur
+* Second Role Name: my_uber_role
+* Second Role Prefix: mur
+[source]
+----
+- hosts: localhost
+  roles:
+  - { role: made_up_role, mur_var1: val1 }
+  - { role: my_uber_role, mur_var1: val2 }
+----
+
+Even though both roles have the same prefix (mur), and even though both roles have a variable named mur_var1, these two variables never exist outside of their respective roles. This means that this is not a problem.
+
+This would only be a problem if my_uber_role depended on made_up_role (or vice versa), or if either of these two roles included things from the other.
+
+This is enough of a corner case that it is unlikely to happen. If it does, it will be addressed on a case-by-case basis.