Commit log
- Prepare the check to support verifying multiple paths, not only /var.
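As a hedged illustration of what a multi-path variant of such a check could look like (the paths, thresholds, and function names below are assumptions for the sketch, not the check's actual API):

```python
import os

# Hypothetical per-path minimums (bytes); the real check defines its own requirements.
MIN_FREE_BYTES = {
    "/var": 40 * 10**9,
    "/usr/local/bin": 1 * 10**9,
    "/tmp": 1 * 10**9,
}

def free_bytes(path):
    """Return free bytes on the filesystem containing `path`."""
    stat = os.statvfs(path)
    return stat.f_bavail * stat.f_frsize

def check_disk_availability(requirements=MIN_FREE_BYTES):
    """Check every configured path, not just /var; return a list of failure messages."""
    failures = []
    for path, minimum in requirements.items():
        available = free_bytes(path)
        if available < minimum:
            failures.append("%s has %.1f GB free, %.1f GB required"
                            % (path, available / 10**9, minimum / 10**9))
    return failures

if __name__ == "__main__":
    for failure in check_disk_availability():
        print(failure)
```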
- Merged by openshift-bot
- Merge from ingvagabund/set-proper-etcd-data-dir-for-system-container:
  set proper etcd_data_dir for system container
- Merged by openshift-bot
- Merged by openshift-bot
- Merged by openshift-bot
- The oc_atomic_container module requires features only available in
  atomic versions 1.17.2+.
  Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1461662
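A minimal sketch of the version guard this implies; the helper names are made up, and how the installed `atomic` version is discovered (package facts, CLI output, etc.) is deliberately left out:

```python
# Guard against using oc_atomic_container with an atomic CLI older than
# 1.17.2 (see BZ 1461662). Names here are illustrative, not the module's API.
MINIMUM_ATOMIC_VERSION = (1, 17, 2)

def parse_version(version_string):
    """Turn '1.17.2' (optionally with a release suffix like '-1') into a comparable tuple."""
    parts = version_string.split("-")[0].split(".")
    return tuple(int(p) for p in parts)

def atomic_is_new_enough(installed_version):
    return parse_version(installed_version) >= MINIMUM_ATOMIC_VERSION

assert atomic_is_new_enough("1.17.2")
assert not atomic_is_new_enough("1.13.9-1")
```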
- Merged by openshift-bot
- We cannot assume that 3.5 to 3.6 upgrades were signed with the correct certs.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
We need to sort out how to know that the registry certificate has the
proper hostnames attached to it. It will for 3.6 clean installs but not
for 3.5 to 3.6 upgrades. For now make it opt in and come back to
this.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Configures OPENSHIFT_DEFAULT_REGISTRY=docker-registry.default.svc
Adds 'cluster.local' to dns search on nodes via dispatcher script
Adds '.svc' to NO_PROXY defaults
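To make the effect of those defaults concrete, here is a small sketch of how such proxy defaults could be assembled; the variable names and the exact default list are assumptions, not the role's actual values:

```python
def build_no_proxy(extra_hosts=()):
    """Assemble a NO_PROXY value that keeps in-cluster traffic off the proxy.

    '.svc' covers service DNS names such as docker-registry.default.svc;
    'cluster.local' must also be in the node's DNS search path for short
    service names to resolve.
    """
    defaults = [".cluster.local", ".svc", "localhost", "127.0.0.1"]  # illustrative defaults
    return ",".join(defaults + list(extra_hosts))

# The registry is addressed by its service DNS name rather than an IP,
# which is why '.svc' has to be excluded from proxying.
registry_env = {"OPENSHIFT_DEFAULT_REGISTRY": "docker-registry.default.svc"}

print(build_no_proxy(["registry.example.com"]))
```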
- Merged by openshift-bot
- Some registries are not configured with valid certificates, and thus the
  check fails with 'http: server gave HTTP response to HTTPS client'.
  Since this is not fetching images, but only checking for existence,
  trade security for convenience.
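A rough sketch of an existence-only lookup against the Docker registry v2 API that tolerates self-signed or plain-HTTP registries; this illustrates the trade-off described above and is not the check's actual implementation (auth/401 handling is omitted):

```python
import requests

def image_exists(registry, name, tag="latest"):
    """Return True if the image manifest exists; nothing is pulled."""
    url = "https://%s/v2/%s/manifests/%s" % (registry, name, tag)
    try:
        # verify=False trades security for convenience: we only care whether
        # the image exists, not about fetching trusted content.
        response = requests.get(url, verify=False, timeout=10)
    except (requests.exceptions.SSLError, requests.exceptions.ConnectionError):
        # Registry speaks plain HTTP on the port ('server gave HTTP response
        # to HTTPS client' in Go tooling); retry without TLS.
        response = requests.get(url.replace("https://", "http://", 1), timeout=10)
    return response.status_code == 200
```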
- Merged by openshift-bot
- This would be the case if, for instance, they'd upgraded and then migrated.
- Merged by openshift-bot
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Signed-off-by: Jose A. Rivera <jarrpa@redhat.com>
- Merged by openshift-bot
- Merged by openshift-bot
- Merged by openshift-bot
- If we have no master config, assume that we're a clean install.
  If we're a clean install and we're 3.6 or greater, use etcd v3 storage.
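The decision logic reads roughly like this; a sketch only, with placeholder variable names rather than the playbook's actual facts:

```python
import os

def use_etcd3_storage(master_config_path, openshift_release):
    """Decide the etcd storage backend the way the commit message describes."""
    clean_install = not os.path.exists(master_config_path)  # no master config => clean install
    major, minor = (int(part) for part in openshift_release.split(".")[:2])
    return clean_install and (major, minor) >= (3, 6)

print(use_etcd3_storage("/no/such/master-config.yaml", "3.6"))  # True: clean 3.6 install
print(use_etcd3_storage("/no/such/master-config.yaml", "3.5"))  # False: pre-3.6 stays on v2
```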
- Merged by openshift-bot
- Fixes [BZ #1460564](https://bugzilla.redhat.com/show_bug.cgi?id=1460564).
  Unfortunately, the defaults for Elasticsearch prior to v5 allow more
  than one "node" to access the same configured storage volume(s).
  This change forces this value to 1 to ensure we don't have an ES pod
  starting up and accessing a volume while another ES pod is shutting down
  when redeploying. This can lead to "1" directories being created in
  `/elasticsearch/persistent/${CLUSTER_NAME}/data/${CLUSTER_NAME}/nodes/`.
  By default, ES uses a "0" directory there when only one node is accessing it.
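In Elasticsearch terms, the knob that caps how many nodes may share one data path is `node.max_local_storage_nodes`, which is most likely the value being forced here; a tiny sketch of pinning it to 1 in a settings mapping (the surrounding settings are invented for the example, and the real elasticsearch.yml is rendered by the role):

```python
def enforce_single_storage_node(es_settings):
    """Force one ES process per data directory, so a pod that is shutting down
    and its replacement never share .../nodes/ and create a spurious "1" dir."""
    patched = dict(es_settings)
    patched["node.max_local_storage_nodes"] = 1
    return patched

settings = {
    "cluster.name": "logging-es",
    "path.data": "/elasticsearch/persistent/logging-es/data",
}
print(enforce_single_storage_node(settings))
```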
- Rename cockpit-shell -> cockpit-system
- The package name has changed. See
  https://bugzilla.redhat.com/show_bug.cgi?id=1461689
  https://bugzilla.redhat.com/show_bug.cgi?id=1419718
- Update CloudForms templates for CF 4.5/CF 4.2
- 'cloudforms42' for CF 4.2.
- 'cloudforms45' for CF 4.5.
- Merged by openshift-bot
- Port the code that creates the external Elasticsearch routes to the
  new logging roles. We have to suppress this error message:
  SSL Problem illegal change cipher spec msg, conn state = 6, handshake state = 1
  which is coming from the router health check, until
  https://github.com/openshift/origin/issues/14515
  is fixed - otherwise, the ES log is spammed relentlessly.
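The actual suppression lives in the Elasticsearch logging configuration, but the underlying idea is the generic one of dropping a known-noisy message until the upstream fix lands; a generic Python sketch of that idea, not the role's mechanism:

```python
import logging

NOISY_FRAGMENT = "illegal change cipher spec msg"

class DropKnownNoise(logging.Filter):
    """Drop log records produced by the router health check's bogus TLS probe."""
    def filter(self, record):
        return NOISY_FRAGMENT not in record.getMessage()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("es")
logger.addFilter(DropKnownNoise())

logger.info("SSL Problem illegal change cipher spec msg, conn state = 6, handshake state = 1")  # suppressed
logger.info("cluster health is green")  # still logged
```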
- Merged by openshift-bot
- We cannot rely on the `watch.Until` call in the `rollout status`
  subcommand for the time being, so we need to ignore the result of this
  call. This makes the rollout status check best-effort, so we need to
  follow it with a poll for the actual status of the rollout, which we can
  extract from the `openshift.io/deployment.phase` annotation on the
  ReplicationControllers. This annotation can have only three values --
  `Running`, `Complete`, and `Failed`. If we poll on this annotation until
  we stop seeing `Running`, we can then inspect the last result for
  `Failed`; if it's present, the deployment has failed.
  Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
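A sketch of that polling loop using `oc get ... -o json`; the namespace, deployment config name, polling interval, and the label selector used to find the DC's ReplicationControllers are assumptions for the example:

```python
import json
import subprocess
import time

PHASE_ANNOTATION = "openshift.io/deployment.phase"

def latest_rc_phase(namespace, dc_name):
    """Read the deployment phase annotation from the newest RC of a deployment config."""
    out = subprocess.check_output(
        ["oc", "get", "rc", "-n", namespace,
         "-l", "openshift.io/deployment-config.name=" + dc_name, "-o", "json"])
    items = json.loads(out)["items"]
    newest = max(items, key=lambda rc: rc["metadata"]["creationTimestamp"])
    return newest["metadata"]["annotations"].get(PHASE_ANNOTATION)

def wait_for_rollout(namespace, dc_name, poll_seconds=5):
    """Poll until the phase leaves Running, then fail only if it ended up Failed."""
    phase = latest_rc_phase(namespace, dc_name)
    while phase == "Running":
        time.sleep(poll_seconds)
        phase = latest_rc_phase(namespace, dc_name)
    if phase == "Failed":
        raise RuntimeError("deployment %s/%s failed" % (namespace, dc_name))
    return phase  # Complete
```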