Hi,
I think I've found the root of the problem: it is a bug in vdsm. Gluster 10 produces an XML description without the stripeCount tag, while vdsm expects it to be present.
I've tried to fix it by simply adding a check in /usr/lib/python3.6/site-packages/vdsm/gluster/cli.py:
429c429
< if (el.find('stripeCount')): value['stripeCount'] = el.find('stripeCount').text
---
> value['stripeCount'] = el.find('stripeCount').text
With this change, after restarting vdsmd and supervdsmd, I'm able to connect to gluster 10 volumes.
I guess this should be fixed in a more proper way upstream.
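For what it's worth, below is a minimal sketch of what a more defensive parse could look like (the helper name and the fallback value of "1" are my own assumptions, not the actual vdsm code). Note that relying on the Element's truth value, as my quick patch above does, is fragile: an ElementTree element with no children evaluates to False even when the tag is present, so an explicit "is not None" check is safer:

import xml.etree.ElementTree as ET

def parse_stripe_count(el, value):
    # Hypothetical helper, not the actual vdsm code: Gluster 10 no longer
    # emits <stripeCount>, so treat the tag as optional and fall back to
    # "1" (no striping) when it is missing.
    node = el.find('stripeCount')
    value['stripeCount'] = node.text if node is not None else '1'
    return value

# Example with a Gluster 10-style <volume> element lacking <stripeCount>:
volume = ET.fromstring('<volume><name>vm-01</name><replicaCount>3</replicaCount></volume>')
print(parse_stripe_count(volume, {}))  # -> {'stripeCount': '1'}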
Cheers,
Alessandro
Hi,
thanks, unfortunately I've already done that, otherwise the engine would not even start. This error appears after the engine is up with the downgraded postgresql-jdbc.
Cheers,
Alessandro
On 25 Apr 2022, at 06:11, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Maybe it's worth downgrading postgresql-jdbc and trying again.
Best Regards,
Strahil Nikolov
On Mon, Apr 25, 2022 at 4:52, Alessandro De Salvo wrote:
To complete the diagnosis, in vdsm.log I see the following error:
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr />
  <volInfo>
    <volumes>
      <volume>
        <name>vm-01</name>
        <id>d77d9a24-5f30-4acb-962c-559e63917229</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <snapshotCount>0</snapshotCount>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <replicaCount>3</replicaCount>
        <arbiterCount>1</arbiterCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>2</type>
        <typeStr>Replicate</typeStr>
        <transport>0</transport>
        <bricks>
          <brick uuid="09e78070-4d55-4a96-ada7-658e7e2799a6">host1:/gluster/vm/01/data<name>host1:/gluster/vm/01/data</name><hostUuid>09e78070-4d55-4a96-ada7-658e7e2799a6</hostUuid><isArbiter>0</isArbiter></brick>
          <brick uuid="fb9eb3ab-a260-4ef7-94cf-f03c630d7b97">host2:/gluster/vm/01/data<name>host2:/gluster/vm/01/data</name><hostUuid>fb9eb3ab-a260-4ef7-94cf-f03c630d7b97</hostUuid><isArbiter>0</isArbiter></brick>
          <brick uuid="cabe4f02-eb45-486e-97e0-3e2466415fd0">host3:/gluster/vm/01/data<name>host3:/gluster/vm/01/data</name><hostUuid>cabe4f02-eb45-486e-97e0-3e2466415fd0</hostUuid><isArbiter>1</isArbiter></brick>
        </bricks>
        <optCount>24</optCount>
        <options>
          <option><name>nfs.disable</name><value>on</value></option>
          <option><name>transport.address-family</name><value>inet</value></option>
          <option><name>performance.quick-read</name><value>off</value></option>
          <option><name>performance.read-ahead</name><value>off</value></option>
          <option><name>performance.io-cache</name><value>off</value></option>
          <option><name>performance.stat-prefetch</name><value>off</value></option>
          <option><name>performance.low-prio-threads</name><value>32</value></option>
          <option><name>network.remote-dio</name><value>enable</value></option>
          <option><name>cluster.eager-lock</name><value>enable</value></option>
          <option><name>cluster.quorum-type</name><value>auto</value></option>
          <option><name>cluster.server-quorum-type</name><value>server</value></option>
          <option><name>cluster.data-self-heal-algorithm</name><value>full</value></option>
          <option><name>cluster.locking-scheme</name><value>granular</value></option>
          <option><name>cluster.shd-max-threads</name><value>8</value></option>
          <option><name>cluster.shd-wait-qlength</name><value>10000</value></option>
          <option><name>features.shard</name><value>on</value></option>
          <option><name>user.cifs</name><value>off</value></option>
          <option><name>features.shard-block-size</name><value>512MB</value></option>
          <option><name>storage.owner-uid</name><value>36</value></option>
          <option><name>storage.owner-gid</name><value>36</value></option>
          <option><name>features.cache-invalidation</name><value>off</value></option>
          <option><name>performance.client-io-threads</name><value>off</value></option>
          <option><name>nfs-ganesha</name><value>disable</value></option>
          <option><name>cluster.enable-shared-storage</name><value>disable</value></option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
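For anyone who wants to double-check this on a Gluster 10 node, here is a quick sketch (assuming the gluster CLI is in PATH; the volume name vm-01 is taken from the output above) that asks the CLI for the volume info in XML form, roughly as vdsm does, and reports whether <stripeCount> is present:

import subprocess
import xml.etree.ElementTree as ET

# Query the gluster CLI for the volume description in XML form.
out = subprocess.run(
    ['gluster', 'volume', 'info', 'vm-01', '--xml'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True
).stdout

volume = ET.fromstring(out).find('./volInfo/volumes/volume')
print('stripeCount present:', volume.find('stripeCount') is not None)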
Thanks,
Alessandro
On 25/04/22 01:02, Alessandro De Salvo wrote:
> Hi,
>
> I'm trying to install a new self-hosted engine 4.5.0 on an upgraded
> gluster v10.1, but the deployment fails at the domain activation
> stage, with this error:
>
>
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
>
>
> Looking at server.log on the engine I see the following error:
>
>
> 2022-04-25 00:55:58,266+02 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BaseBackendResource.handleError(BaseBackendResource.java:236)
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:119)
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:99)
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:34)
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:30)
>     at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendAttachedStorageDomainResource.get(BackendAttachedStorageDomainResource.java:35)
>     at org.ovirt.engine.api.restapi-definition//org.ovirt.engine.api.resource.AttachedStorageDomainResource.doGet(AttachedStorageDomainResource.java:81)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>
>
> The gluster volume itself is working fine and has the storage uid/gid
> set to 36 as it should be. The installation works if I use a server with
> gluster 8, but fails with gluster 10 servers.
>
> Any help is appreciated, thanks,
>
>
> Alessandro