<div dir="ltr"><div><div><div><div>Hello,<br></div>I have a system upgraded from 4.1.7 (with libgfapi not enabled) to 4.2.</div><div>3 hosts in an HC (hyperconverged) configuration.</div><div><br></div>Now I am trying to enable libgfapi.<br><br></div>Before the change, a CentOS 6 VM booted with a qemu-kvm command line of this type:<br><br> -drive file=/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/02731d5e-c222-4697-8f1f-d26a6a23ec79/1836df76-835b-4625-9ce8-0856176dc30c,format=raw,if=none,id=drive-virtio-disk0,serial=02731d5e-c222-4697-8f1f-d26a6a23ec79,cache=none,werror=stop,rerror=stop,aio=thread<br><br></div>I shut down the VM named centos6.<br><br>Set up the engine:<br>[root@ovengine log]# engine-config -s LibgfApiSupported=true<br>Please select a version:<br>1. 3.6<br>2. 4.0<br>3. 4.1<br>4. 4.2<br>4<br>[root@ovengine log]# engine-config -g LibgfApiSupported<br>LibgfApiSupported: false version: 3.6<br>LibgfApiSupported: false version: 4.0<br>LibgfApiSupported: false version: 4.1<br>LibgfApiSupported: true version: 4.2<br><br>Restarted the engine:<br><br>[root@ovengine log]# systemctl restart ovirt-engine<br>[root@ovengine log]# <br><div><br></div><div>Reconnected to the web admin portal.<br></div><div><br></div><div>Powered on the centos6 VM.</div><div>I get &quot;Failed to run VM&quot; on all 3 configured hosts:</div><div><br></div><div>Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 (User: admin@internal-authz).<br>Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host ovirt02.localdomain.local.<br>Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host ovirt03.localdomain.local.<br>Jan 1, 2018, 11:53:35 PM Failed to run VM centos6 on Host ovirt01.localdomain.local.<br></div><div><br></div><div>In engine.log:</div><div>2018-01-01 23:53:34,996+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed in &#39;CreateBrokerVDS&#39; method, for vds: 
&#39;ovirt01.localdomain.local&#39;; host: &#39;ovirt01.localdomain.local&#39;: 1<br>2018-01-01 23:53:34,996+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] Command &#39;CreateBrokerVDSCommand(HostName = ovirt01.localdomain.local, CreateVDSCommandParameters:{hostId=&#39;e5079118-1147-469e-876f-e20013276ece&#39;, vmId=&#39;64da5593-1022-4f66-ae3f-b273deda4c22&#39;, vm=&#39;VM [centos6]&#39;})&#39; execution failed: 1<br>2018-01-01 23:53:34,996+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] FINISH, CreateBrokerVDSCommand, log id: e3bbe56<br>2018-01-01 23:53:34,996+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed to create VM: 1<br>2018-01-01 23:53:34,997+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] Command &#39;CreateVDSCommand( CreateVDSCommandParameters:{hostId=&#39;e5079118-1147-469e-876f-e20013276ece&#39;, vmId=&#39;64da5593-1022-4f66-ae3f-b273deda4c22&#39;, vm=&#39;VM [centos6]&#39;})&#39; execution failed: java.lang.ArrayIndexOutOfBoundsException: 1<br>2018-01-01 23:53:34,997+01 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] FINISH, CreateVDSCommand, return: Down, log id: ab299ce<br>2018-01-01 23:53:34,997+01 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-2885) [8d7c68c6-b236-4e76-b7b2-f000e2b07425] Failed to run VM &#39;centos6&#39;: EngineException: java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 1 (Failed with error ENGINE and code 5001)<br></div><div><br></div><div>The full engine.log file is 
here:</div><div><a href="https://drive.google.com/file/d/1UZ9dWnGrBaFVnfx1E_Ch52CtYDDtzT3p/view?usp=sharing">https://drive.google.com/file/d/1UZ9dWnGrBaFVnfx1E_Ch52CtYDDtzT3p/view?usp=sharing</a></div><div><br></div><div>The VM fails to start on all 3 hosts, but I don&#39;t see any particular error on them; e.g., for ovirt01, vdsm.log.1.xz is here:</div><div><a href="https://drive.google.com/file/d/1yIlKtRtvftJVzWNlzV3WhJ3DaP4ksQvw/view?usp=sharing">https://drive.google.com/file/d/1yIlKtRtvftJVzWNlzV3WhJ3DaP4ksQvw/view?usp=sharing</a></div><div><br></div><div>The storage domain that the VM disk is on seems OK:</div><div>[root@ovirt01 vdsm]# gluster volume info data<br> <br>Volume Name: data<br>Type: Replicate<br>Volume ID: 2238c6db-48c5-4071-8929-879cedcf39bf<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (2 + 1) = 3<br>Transport-type: tcp<br>Bricks:<br>Brick1: ovirt01.localdomain.local:/gluster/brick2/data<br>Brick2: ovirt02.localdomain.local:/gluster/brick2/data<br>Brick3: ovirt03.localdomain.local:/gluster/brick2/data (arbiter)<br>Options Reconfigured:<br>transport.address-family: inet<br>performance.readdir-ahead: on<br>performance.quick-read: off<br>performance.read-ahead: off<br>performance.io-cache: off<br>performance.stat-prefetch: off<br>cluster.eager-lock: enable<br>network.remote-dio: off<br>cluster.quorum-type: auto<br>cluster.server-quorum-type: server<br>storage.owner-uid: 36<br>storage.owner-gid: 36<br>features.shard: on<br>features.shard-block-size: 512MB<br>performance.low-prio-threads: 32<br>cluster.data-self-heal-algorithm: full<br>cluster.locking-scheme: granular<br>cluster.shd-wait-qlength: 10000<br>cluster.shd-max-threads: 6<br>network.ping-timeout: 30<br>user.cifs: off<br>nfs.disable: on<br>performance.strict-o-direct: on<br>[root@ovirt01 vdsm]# <br></div><div><br></div><div>[root@ovirt01 vdsm]# gluster volume heal data info<br>Brick ovirt01.localdomain.local:/gluster/brick2/data<br>Status: Connected<br>Number of entries: 0<br><br>Brick 
ovirt02.localdomain.local:/gluster/brick2/data<br>Status: Connected<br>Number of entries: 0<br><br>Brick ovirt03.localdomain.local:/gluster/brick2/data<br>Status: Connected<br>Number of entries: 0<br><br>[root@ovirt01 vdsm]# <br></div><div><br></div><div>I followed the instructions here:</div><div><a href="https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/">https://www.ovirt.org/develop/release-management/features/storage/glusterfs-storage-domain/</a></div><div><br></div><div>Is there any other action to take on the host side?</div><div>The cluster and DC are already at compatibility version 4.2.</div><div><br></div><div><br></div><div>Thanks</div><div>Gianluca</div><div><br></div></div>
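PS: once the run failure is sorted out, one way to verify that a VM is really using libgfapi is to inspect its qemu-kvm command line: with libgfapi the disk should be opened through a gluster:// URL rather than a path under the /rhev/data-center/mnt/glusterSD/ FUSE mount (as in the -drive line quoted above). A minimal sketch of that check; the helper name is hypothetical and the gluster:// convention is the one described on the feature page, so treat this as an illustration, not oVirt code:

```python
def uses_libgfapi(drive_arg: str) -> bool:
    """Heuristic check on the value of a qemu -drive argument.

    With libgfapi enabled the disk is a gluster:// URL; with the FUSE
    mount it is a plain path under /rhev/data-center/mnt/glusterSD/.
    """
    for field in drive_arg.split(","):
        if field.startswith("file="):
            return field[len("file="):].startswith("gluster://")
    return False

# The -drive line from the original FUSE-based boot is not libgfapi:
fuse_drive = ("file=/rhev/data-center/mnt/glusterSD/"
              "ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/"
              "images/02731d5e-c222-4697-8f1f-d26a6a23ec79/"
              "1836df76-835b-4625-9ce8-0856176dc30c,format=raw,if=none")
print(uses_libgfapi(fuse_drive))   # False

# What a libgfapi-backed drive would be expected to look like (hypothetical):
gfapi_drive = "file=gluster://ovirt01.localdomain.local/data/path/to/image,format=raw"
print(uses_libgfapi(gfapi_drive))  # True
```

In practice the drive argument would come from the running process, e.g. `ps -o args -C qemu-kvm` on the host.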