Hello,
It seems that the work for including ovirt as a provider in the master
branch of the openshift installer has been done. I compiled the master
code and ovirt does appear in the survey.
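
For the record, this is roughly what I did to build it and check the
survey (using the usual build script from the installer repository):

  git clone https://github.com/openshift/installer
  cd installer
  hack/build.sh
  ./bin/openshift-install create install-config --dir=ovirt-test
  # "ovirt" is now listed in the platform selection survey
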
I don't have much time to test it right now, but is it operational? If
so, I will make it a priority to have a look at it.
Thanks.
On 06/01/2020 at 21:30, Roy Golan wrote:
The merge window is now open for the master branches of the various
origin components.
Post merge there should be an OKD release - this is not under my
control, but I'll let you know when it is available.
On Mon, 6 Jan 2020 at 20:54, Nathanaël Blanchet <blanchet(a)abes.fr> wrote:
Hello Roy
On 21/11/2019 at 13:57, Roy Golan wrote:
>
>
> On Thu, 21 Nov 2019 at 08:48, Roy Golan <rgolan(a)redhat.com> wrote:
>
>
>
> On Wed, 20 Nov 2019 at 09:49, Nathanaël Blanchet <blanchet(a)abes.fr> wrote:
>
>
> On 19/11/2019 at 19:23, Nathanaël Blanchet wrote:
>>
>>
>> On 19/11/2019 at 13:43, Roy Golan wrote:
>>>
>>>
>>> On Tue, 19 Nov 2019 at 14:34, Nathanaël Blanchet <blanchet(a)abes.fr> wrote:
>>>
>>> On 19/11/2019 at 08:55, Roy Golan wrote:
>>>> oc get -o json clusterversion
>>>
>>> This is the output of the previous failed deployment; I'll give a
>>> newer one a try when I have a minute to test.
>>>
>> Without changing anything in the template, I gave it a new try
>> and... nothing works anymore now: none of the provided IPs can be
>> pinged: "dial tcp 10.34.212.51:6443: connect: no route to host", so
>> none of the masters can be provisioned by the bootstrap node.
>>
>> I tried with the latest RHCOS and the latest oVirt 4.3.7; it is the
>> same. Obviously something changed since my first attempt 12 days
>> ago... is your docker image for openshift-installer up to date?
>>
>> Are you still able, on your side, to deploy a valid cluster?
>>
> I investigated by looking at the bootstrap logs (attached), and it
> seems that every container dies immediately after being started.
>
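> For reference, logs like the ones below can be pulled from the
> bootstrap node with something along these lines (the IP is a
> placeholder):
>
>   ssh core@<bootstrap-ip> journalctl -b > bootstrap.log
>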
> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.60107571 +0000 UTC m=+0.794838407 container init 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623197173 +0000 UTC m=+0.816959853 container start 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623814258 +0000 UTC m=+0.817576965 container attach 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
> Nov 20 07:02:34 localhost systemd[1]: libpod-446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603.scope: Consumed 814ms CPU time
> Nov 20 07:02:34 localhost podman[2024]: 2019-11-20 07:02:34.100569998 +0000 UTC m=+1.294332779 container died 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
> Nov 20 07:02:35 localhost podman[2024]: 2019-11-20 07:02:35.138523102 +0000 UTC m=+2.332285844 container remove 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>
> and this:
>
> Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489527 1909 remote_runtime.go:200] CreateContainer in sandbox "58f2062aa7b6a5b2bdd6b9cf7b41a9f94ca2b30ad5a20e4fa4dec8a9b82f05e5" from runtime service failed: rpc error: code = Unknown desc = container create failed: container_linux.go:345: starting container process caused "exec: \"runtimecfg\": executable file not found in $PATH"
> Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489714 1909 kuberuntime_manager.go:783] init container start failed: CreateContainerError: container create failed: container_linux.go:345: starting container process caused "exec: \"runtimecfg\": executable file not found in $PATH"
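>
> If it helps, one way to check which image of that release payload is
> supposed to ship the runtimecfg binary (the component name below is a
> guess on my part) would be:
>
>   oc adm release info --image-for=baremetal-runtimecfg registry.svc.ci.openshift.org/origin/release:4.3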
>
> What do you think about this?
>
>
> I'm seeing the same now, checking...
>
>
> Because of the move upstream to release OKD, the release image that
> comes with the installer I gave you is no longer valid.
>
> I need to prepare an installer version with the preview of OKD; you
> can find the details here:
> https://mobile.twitter.com/smarterclayton/status/1196477646885965824
I tested your latest openshift-installer container on quay.io, but the
ovirt provider is not available anymore.
Will ovirt be supported as an OKD 4.2 IaaS provider?
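
What I did to check was roughly the following (assuming the image's
entrypoint is the openshift-install binary, which may not be the case):

  docker pull quay.io/rgolangh/openshift-installer
  docker run --rm -it -v "$PWD":/output quay.io/rgolangh/openshift-installer create install-config --dir /output
  # ovirt no longer appears in the platform survey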
>
>
>
>>> (do I need to use the terraform-workers tag instead of latest?)
>>>
>>> docker pull quay.io/rgolangh/openshift-installer:terraform-workers
>>>
>>>
>>> [root@openshift-installer openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit]# ./oc get -o json clusterversion
>>> {
>>>     "apiVersion": "v1",
>>>     "items": [
>>>         {
>>>             "apiVersion": "config.openshift.io/v1",
>>>             "kind": "ClusterVersion",
>>>             "metadata": {
>>>                 "creationTimestamp": "2019-11-07T12:23:06Z",
>>>                 "generation": 1,
>>>                 "name": "version",
>>>                 "namespace": "",
>>>                 "resourceVersion": "3770202",
>>>                 "selfLink": "/apis/config.openshift.io/v1/clusterversions/version",
>>>                 "uid": "77600bba-6e71-4b35-a60b-d8ee6e0f545c"
>>>             },
>>>             "spec": {
>>>                 "channel": "stable-4.3",
>>>                 "clusterID": "6f87b719-e563-4c0b-ab5a-1144172bc983",
>>>                 "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
>>>             },
>>>             "status": {
>>>                 "availableUpdates": null,
>>>                 "conditions": [
>>>                     {
>>>                         "lastTransitionTime": "2019-11-07T12:23:12Z",
>>>                         "status": "False",
>>>                         "type": "Available"
>>>                     },
>>>                     {
>>>                         "lastTransitionTime": "2019-11-07T12:56:15Z",
>>>                         "message": "Cluster operator image-registry is still updating",
>>>                         "reason": "ClusterOperatorNotAvailable",
>>>                         "status": "True",
>>>                         "type": "Failing"
>>>                     },
>>>                     {
>>>                         "lastTransitionTime": "2019-11-07T12:23:12Z",
>>>                         "message": "Unable to apply 4.3.0-0.okd-2019-10-29-180250: the cluster operator image-registry has not yet successfully rolled out",
>>>                         "reason": "ClusterOperatorNotAvailable",
>>>                         "status": "True",
>>>                         "type": "Progressing"
>>>                     },
>>>                     {
>>>                         "lastTransitionTime": "2019-11-07T12:23:12Z",
>>>                         "message": "Unable to retrieve available updates: currently installed version 4.3.0-0.okd-2019-10-29-180250 not found in the \"stable-4.3\" channel",
>>>                         "reason": "RemoteFailed",
>>>                         "status": "False",
>>>                         "type": "RetrievedUpdates"
>>>                     }
>>>                 ],
>>>                 "desired": {
>>>                     "force": false,
>>>                     "image": "registry.svc.ci.openshift.org/origin/release@sha256:68286e07f7d68ebc8a067389aabf38dee9f9b810c5520d6ee4593c38eb48ddc9",
>>>                     "version": "4.3.0-0.okd-2019-10-29-180250"
>>>
>>>
>>> Indeed this version is not the latest and is missing
>>> the aforementioned fix for the registry.
>>>
>>>                 },
>>>                 "history": [
>>>                     {
>>>                         "completionTime": null,
>>>                         "image": "registry.svc.ci.openshift.org/origin/release@sha256:68286e07f7d68ebc8a067389aabf38dee9f9b810c5520d6ee4593c38eb48ddc9",
>>>                         "startedTime": "2019-11-07T12:23:12Z",
>>>                         "state": "Partial",
>>>                         "verified": false,
>>>                         "version": "4.3.0-0.okd-2019-10-29-180250"
>>>                     }
>>>                 ],
>>>                 "observedGeneration": 1,
>>>                 "versionHash": "-3onP9QpPTg="
>>>             }
>>>         }
>>>     ],
>>>     "kind": "List",
>>>     "metadata": {
>>>         "resourceVersion": "",
>>>         "selfLink": ""
>>>     }
>>> }
>>>
>>>
>>> Can you answer these few questions, please?
>>>
>>> * The latest stable OKD version is 4.2.4. Is it possible to choose
>>> the version of OKD when deploying (it seems to use 4.3), or does the
>>> installer always download the latest OKD?
>>>
>>> * Can we use FCOS instead of RHCOS?
>>>
>>> * About the pull secret: do we absolutely need a Red Hat login to get
>>> this file when deploying an upstream OKD cluster rather than
>>> downstream OpenShift?
>>>
>>>
>>> To answer all three of those: this specific build is not really OKD;
>>> it will use 4.3 and Red Hat artifacts and must use RHCOS, hence the
>>> pull secret.
>>> I frankly don't know when OKD 4.3 is going to be released; I guess it
>>> will be built on top of FCOS.
>>> I'll update the list once we have the oVirt installer for OKD ready
>>> for testing (on FCOS).
>>>
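>>> For reference, the pull secret the installer asks for is just a
>>> registry auth file along these lines (the registry entry and values
>>> below are only illustrative):
>>>
>>>   {"auths": {"cloud.openshift.com": {"auth": "<base64 user:token>", "email": "you@example.com"}}}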
>>>
>>>
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr