Fix indentation of run_once
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1423430
Consolidate root/utils tests
- Consolidate tests between the root of the repo and utils
Remove lots of dead code in tests
They are not executable anymore, and tests are now meant to be run
through pytest.
Add pre-upgrade check for reserved namespaces
Signed-off-by: Monis Khan <mkhan@redhat.com>
logging needs openshift_master_facts before openshift_facts
Dockerfile and docs to run containerized playbooks
Update openshift-ansible's Dockerfile to use playbook2image as a base, with the
goal to run an arbitrary playbook from a container.
The existing Dockerfile is moved to Dockerfile.rhel7 for the productized version
and will be updated to use playbook2image later.
separate out test tool configs from setup.cfg
Since we are moving away from setuptools for invoking tests, let's
move the configs for the different test tools into their own config files.
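As an illustration (hypothetical contents; the actual files and options added by this commit may differ), a flake8 section that previously lived in setup.cfg would move into its own file:

```ini
# .flake8 -- hypothetical excerpt; the options shown are illustrative only,
# previously this would have been a [flake8] section inside setup.cfg
[flake8]
max-line-length = 120
exclude = .tox,build,dist
```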
Adding support for multiple router shards.
Adding oc_project to lib_openshift.
Misc cleanup
That line is testing Python's list.count method, instead of yedit.
The assertion right above is a superset of it, as it checks for
equality to some expected value.
Instead of checking whether a string is True, check whether 'found' is
True; the string is the error message.
Also, we can remove the loop and use the simpler Python 'in' construct.
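The refactor can be sketched as follows (a minimal illustration; the function and variable names are hypothetical, not the actual yedit test code):

```python
# Hypothetical before/after illustrating the refactor described in the
# commit message; names are illustrative, not taken from yedit.

def contains_before(items, target):
    # Old style: loop manually and track a 'found' flag.
    found = False
    for item in items:
        if item == target:
            found = True
            break
    return found

def contains_after(items, target):
    # New style: Python's 'in' construct performs the same check directly.
    return target in items

assert contains_before(["a", "b"], "b") == contains_after(["a", "b"], "b")
assert contains_before(["a", "b"], "z") == contains_after(["a", "b"], "z")
```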
- Do not use `print` in unit tests, send messages through the test
framework instead.
- Remove unused import.
- Add spaces around equal sign in assignment.
- Turn method into a function.
- Reorganize imports according to PEP8.
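The first point can be sketched like this (a hypothetical test; the class and values are illustrative):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_value(self):
        result = 21 * 2
        # Instead of: print("unexpected result:", result)
        # route the message through the test framework, so it is only
        # surfaced when the assertion actually fails:
        self.assertEqual(result, 42, msg="unexpected result: %s" % result)

if __name__ == "__main__":
    unittest.main()
```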
Detected by pylint. The fixture indeed doesn't require an argument.
node/sdn: make /var/lib/cni persistent to ensure IPAM allocations stick around across node restart
With the move to a CNI plugin, docker no longer handles IPAM, but CNI does through
openshift-sdn's usage of the 'host-local' CNI IPAM plugin. That plugin stores
IPAM allocations under /var/lib/cni/.
If the node container gets restarted without preserving /var/lib/cni, the IPs
currently allocated to running pods get lost and on restart, openshift-sdn
may allocate those IPs to new pods causing duplicate allocations.
This never happened with docker because it has its own persistent IPAM store that
does not get removed when docker restarts. Also because (historically) when docker
restarted, all the containers died and the IP allocations were released by the
daemon.
Fix this by ensuring that IPAM allocations (which are tied to the life of the pod,
*not* the life of the openshift-node process) persist even if the openshift-node
process restarts.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1427789
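The fix described above boils down to bind-mounting /var/lib/cni from the host into the node container, so the host-local IPAM state outlives any single container. A hypothetical sketch of such an invocation (the image name and other options are illustrative, not the actual unit file generated by openshift-ansible):

```sh
# Illustrative only -- the real systemd unit / docker invocation may differ.
# The essential part is the /var/lib/cni bind mount:
docker run -d --name openshift-node \
    -v /var/lib/cni:/var/lib/cni \
    openshift/node
```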
Don't install python-ruamel-yaml
Just rely on PyYAML as a fallback and hope that's there
raise exceptions when walking through object path
If we're given path a.b.c and the existing object is:

    a:
      b:
      - item1

raise an exception due to unexpected objects found while traversing the
path (i.e. b is a list, not a dict).
Also, add_entry assumes new dicts for each sub-element when creating
elements besides the final assignment value. Doing something like
a.b.c[0] = 12 where 'c' doesn't exist raises an exception.
Add test cases to cover:
- access path that differs from the existing object
- create new objects with an embedded list in the path
- create new object with a list at the end (define the end list in the
  passed-in 'value' to avoid this exception)
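The traversal behavior can be modeled with a short sketch (this is NOT the actual yedit implementation; function name and error type are illustrative):

```python
# Minimal model of the behavior described above: walking a dotted path
# raises when the object's shape doesn't match the path, instead of
# silently returning nothing.

def get_entry(data, path, sep='.'):
    """Walk 'data' along a dotted path, raising on a shape mismatch."""
    current = data
    for key in path.split(sep):
        if not isinstance(current, dict):
            # e.g. path 'a.b.c' where 'b' is a list, not a dict:
            # fail loudly rather than guessing
            raise KeyError("expected a dict at %r, found %s"
                           % (key, type(current).__name__))
        current = current[key]
    return current

obj = {'a': {'b': ['item1']}}
assert get_entry(obj, 'a.b') == ['item1']
try:
    get_entry(obj, 'a.b.c')
except KeyError:
    pass  # 'b' is a list, not a dict, so traversal raises
```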