
I reinstalled my system and ran the setup again. This time I configured both my host and the ovirt-engine systems to use nfs3 (it was using nfs4 by default). After getting all of the iptables rules straightened out (nfs3 apparently ignores the port settings in /etc/sysconfig/nfs and instead looks at /etc/services), I was able to do mounts between the two systems.

When I attempt to add the storage domain, I am getting the same error as before (from sanlock.log):

2012-10-23 18:45:16-0500 8418 [979]: s1 lockspace 42c7d146-86e1-403f-97de-1da0dcbf95ec:250:/rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/42c7d146-86e1-403f-97de-1da0dcbf95ec/dom_md/ids:0
2012-10-23 18:45:16-0500 8418 [4285]: open error -13 /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/42c7d146-86e1-403f-97de-1da0dcbf95ec/dom_md/ids
2012-10-23 18:45:16-0500 8418 [4285]: s1 open_disk /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/42c7d146-86e1-403f-97de-1da0dcbf95ec/dom_md/ids error -13

It all goes downhill from there. So it doesn't appear to be NFSv4 vs. NFSv3 related.

I can send more logs, but they are pretty much the same as what I sent before. Also, if it wasn't clear from before, I'm running ovirt-engine on a full Fedora 17 system and the host on a minimal FC17 system with the kernel at version 3.3.4-5.fc17 (to avoid the prior NFS hanging issues).

Brian

On Oct 23, 2012, at 4:38 AM, Vered Volansky wrote:

Hi Brian,

We'll need your engine & host (full) logs at the very least to look into the problem.
Can you try it with nfs3 and tell us if it works?

Note, more comments in the email body.

Regards,
Vered

----- Original Message -----
> From: "Brian Vetter" <bjvetter@gmail.com>
> To: users@ovirt.org
> Sent: Tuesday, October 23, 2012 5:06:06 AM
> Subject: [Users] Error creating the first storage domain (NFS)
>
> I have reinstalled my oVirt installation using the nightly builds so that I can try out non-admin REST API access to oVirt. After installing the engine, connecting to my directory system, creating a domain, and adding a host (all successfully), I tried to add my first storage domain (NFS).
>
> While creating the storage domain, I get an error at the end along with a couple of events that say:
>
> "Failed to attach Storage Domains to Data Center DCC. (User: admin@internal)"
>
> followed by:
>
> "Failed to attach Storage Domain DCVMStorage to Data Center DCC. (User: admin@internal)"
>
> I see the following in the engine.log file:
>
> 2012-10-22 20:17:57,617 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8009-7) [7d1ffd97] Weird return value: Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> mCode 661
> mMessage Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
>
> 2012-10-22 20:17:57,619 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8009-7) [7d1ffd97] Weird return value: Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> mCode 661
> mMessage Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
>
> 2012-10-22 20:17:57,620 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8009-7) [7d1ffd97] Failed in CreateStoragePoolVDS method
> 2012-10-22 20:17:57,620 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8009-7) [7d1ffd97] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
>
> On the host where it tried to install from, I see the following in the vdsm.log:
>
> Thread-243::INFO::2012-10-22 20:17:56,624::safelease::156::SANLock::(acquireHostId) Acquiring host id for domain b97019e9-bd43-46d8-afd0-421d6768271b (id: 250)
> Thread-243::ERROR::2012-10-22 20:17:57,628::task::853::TaskManager.Task::(_setError) Task=`1ead54dc-407c-4d0b-96f4-8dc56c74d4cf`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 861, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 790, in createStoragePool
>     return sp.StoragePool(spUUID, self.taskMng).create(poolName, masterDom, domList, masterVersion, safeLease)
>   File "/usr/share/vdsm/storage/sp.py", line 567, in create
>     self._acquireTemporaryClusterLock(msdUUID, safeLease)
>   File "/usr/share/vdsm/storage/sp.py", line 508, in _acquireTemporaryClusterLock
>     msd.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 407, in acquireHostId
>     self._clusterLock.acquireHostId(hostId)
>   File "/usr/share/vdsm/storage/safelease.py", line 162, in acquireHostId
>     raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
>
> After I get this error, I logged into the host and see that the NFS mount is present:
>
> eos.dcc.mobi:/home/vmstorage on /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage type nfs4 (rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,clientaddr=10.1.1.12,minorversion=0,local_lock=none,addr=10.1.1.11)
>
> And when I look at the directory, I see the following:
>
> [root@mech ~]# ls -laR /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage
> /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage:
> total 12
> drwxr-xr-x. 3 vdsm kvm 4096 Oct 22 20:17 .
> drwxr-xr-x. 6 vdsm kvm 4096 Oct 22 20:17 ..
> drwxr-xr-x. 4 vdsm kvm 4096 Oct 22 20:17 b97019e9-bd43-46d8-afd0-421d6768271b
>
> /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/b97019e9-bd43-46d8-afd0-421d6768271b:
> total 16
> drwxr-xr-x. 4 vdsm kvm 4096 Oct 22 20:17 .
> drwxr-xr-x. 3 vdsm kvm 4096 Oct 22 20:17 ..
> drwxr-xr-x. 2 vdsm kvm 4096 Oct 22 20:17 dom_md
> drwxr-xr-x. 2 vdsm kvm 4096 Oct 22 20:17 images
>
> /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/b97019e9-bd43-46d8-afd0-421d6768271b/dom_md:
> total 2060
> drwxr-xr-x. 2 vdsm kvm    4096 Oct 22 20:17 .
> drwxr-xr-x. 4 vdsm kvm    4096 Oct 22 20:17 ..
> -rw-rw----. 1 vdsm kvm 1048576 Oct 22 20:17 ids
> -rw-rw----. 1 vdsm kvm       0 Oct 22 20:17 inbox
> -rw-rw----. 1 vdsm kvm 1048576 Oct 22 20:17 leases
> -rw-r--r--. 1 vdsm kvm     308 Oct 22 20:17 metadata
> -rw-rw----. 1 vdsm kvm       0 Oct 22 20:17 outbox
>
> /rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/b97019e9-bd43-46d8-afd0-421d6768271b/images:
> total 8
> drwxr-xr-x. 2 vdsm kvm 4096 Oct 22 20:17 .
> drwxr-xr-x. 4 vdsm kvm 4096 Oct 22 20:17 ..
>
> It looks like it was able to mount the directory and create a bunch of files and directories owned by vdsm:kvm.
>
> So after all this, I was stuck with a storage domain that wasn't assigned to my data center. When I tried to attach it to my Data Center, I got another error:
>
> "Failed to attach Storage Domains to Data Center dcc. (User: admin@internal)"
>
> And I saw this in engine.log:
>
> 2012-10-22 21:30:53,788 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4eaa9670] Failed in CreateStoragePoolVDS method
> 2012-10-22 21:30:53,789 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4eaa9670] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
> 2012-10-22 21:30:53,790 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4eaa9670] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand return value
> Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
> mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> mCode 661
> mMessage Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
>
> 2012-10-22 21:30:53,791 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-50) [4eaa9670] Vds: mechis3
> 2012-10-22 21:30:53,792 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-50) [4eaa9670] Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('b97019e9-bd43-46d8-afd0-421d6768271b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
> 2012-10-22 21:30:53,793 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (pool-3-thread-50) [4eaa9670] FINISH, CreateStoragePoolVDSCommand, log id: 4015ca0d
>
> This all looks familiar, as does the vdsm.log file (not repeated).
>
> Now my system is in a different state. It shows that the storage domain is associated with my Data Center (if I click on the data center in the UI and look at the storage tab below, I see that the NFS storage domain is listed with this data center). I also see that it reports its status in the data center as "locked". I don't see any way to "unlock" it, although I suspect that if I did, I'd get the same error as above (SanlockException).
>
> If I try to destroy/delete the storage domain, I get an error that says that I can't destroy the master storage domain.

Make sure your destruction request is included in the logs. This issue seems unrelated to the previous one; make sure you're not using a pool/mounted pool while trying to destroy the
> So how do I get out of this mess?
>
> As to versions, I see the following oVirt packages when I dump the oVirt version info for my ovirt-engine system:
>
> ovirt-engine.noarch                       3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-backend.noarch               3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-cli.noarch                   3.2.0.5-1.20121015.git4189352.fc17  @ovirt-nightly
> ovirt-engine-config.noarch                3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-dbscripts.noarch             3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-genericapi.noarch            3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-notification-service.noarch  3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-restapi.noarch               3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-sdk.noarch                   3.2.0.2-1.20120927.git663b765.fc17  @ovirt-nightly
> ovirt-engine-setup.noarch                 3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-tools-common.noarch          3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-userportal.noarch            3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-engine-webadmin-portal.noarch       3.1.0-3.1345126685.git7649eed.fc17  @ovirt-nightly
> ovirt-image-uploader.noarch               3.1.0-0.git9c42c8.fc17              @ovirt-stable
> ovirt-iso-uploader.noarch                 3.1.0-0.git1841d9.fc17              @ovirt-stable
> ovirt-log-collector.noarch                3.1.0-0.git10d719.fc17              @ovirt-stable
> ovirt-release-fedora.noarch               4-2                                 @/ovirt-release-fedora.noarch
>
> This is a few of the packages on my vm host:
>
> libvirt.x86_64                         0.9.11.5-3.fc17  @updates
> libvirt-client.x86_64                  0.9.11.5-3.fc17  @updates
> libvirt-daemon.x86_64                  0.9.11.5-3.fc17  @updates
> libvirt-daemon-config-network.x86_64   0.9.11.5-3.fc17  @updates
> libvirt-daemon-config-nwfilter.x86_64  0.9.11.5-3.fc17  @updates
> libvirt-lock-sanlock.x86_64            0.9.11.5-3.fc17  @updates
> libvirt-python.x86_64                  0.9.11.5-3.fc17  @updates
> sanlock.x86_64                         2.4-2.fc17       @updates
> sanlock-lib.x86_64                     2.4-2.fc17       @updates
> sanlock-python.x86_64                  2.4-2.fc17       @updates
> vdsm.x86_64                            4.10.0-10.fc17   @updates
> vdsm-cli.noarch                        4.10.0-10.fc17   @updates
> vdsm-python.x86_64                     4.10.0-10.fc17   @updates
> vdsm-xmlrpc.noarch                     4.10.0-10.fc17   @updates
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
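An editorial aside on the two error numbers in these logs, since they point in different directions: errno 19 is ENODEV ("No such device"), which is what the SanlockException in the engine and vdsm logs carries once the lockspace add fails, while the -13 in sanlock.log is EACCES ("Permission denied") on the ids lease file itself (the ls -laR output above shows those files created as vdsm:kvm with mode 0660). The snippet below is a hypothetical diagnostic helper, not part of vdsm or sanlock; the path is copied from the sanlock.log lines in this thread and has to be adjusted to the actual domain UUID. It just decodes the error numbers and shows how the lease file looks to whichever user runs it:

#!/usr/bin/env python
# Hypothetical diagnostic helper, not part of vdsm or sanlock.
# Decodes the errno values seen in the logs and checks how the lease
# file looks to the user running the script.
import errno
import grp
import os
import pwd
import stat

# Raw error numbers copied from the logs in this thread.
for code in (19, 13):
    print("errno %d = %s (%s)" % (code, errno.errorcode[code], os.strerror(code)))
# errno 19 = ENODEV, "No such device"    -> the SanlockException in engine/vdsm logs
# errno 13 = EACCES, "Permission denied" -> sanlock.log "open error -13" on the ids file

# Path copied from sanlock.log above; substitute your own storage domain UUID.
IDS = ("/rhev/data-center/mnt/eos.dcc.mobi:_home_vmstorage/"
       "42c7d146-86e1-403f-97de-1da0dcbf95ec/dom_md/ids")

st = os.stat(IDS)
print("ids file: owner %s:%s, mode %o, size %d" % (
    pwd.getpwuid(st.st_uid).pw_name,
    grp.getgrgid(st.st_gid).gr_name,
    stat.S_IMODE(st.st_mode),
    st.st_size))

# Try a plain read/write open of the lease file, which is the step that
# fails with -13 in sanlock.log.
try:
    os.close(os.open(IDS, os.O_RDWR))
    print("open(O_RDWR) as uid %d succeeded" % os.getuid())
except OSError as exc:
    print("open(O_RDWR) as uid %d failed: errno %d (%s)"
          % (os.getuid(), exc.errno, os.strerror(exc.errno)))

Running it once as root and once as the sanlock daemon's own user (commonly a dedicated sanlock account) should show whether the daemon can even read the vdsm:kvm-owned file; if only the unprivileged run fails with EACCES, the likely suspects are group membership for that user or squash options on the NFS export rather than the storage itself.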
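Along the same lines, since the thread keeps coming back to NFSv4 versus NFSv3, here is a small, equally hypothetical standard-library sketch that reports the NFS version and options each oVirt storage-domain mount actually came up with; the /rhev/data-center/mnt/ prefix is taken from the mount line quoted above. It makes it easy to confirm that a domain really is mounted with vers=3 after a reconfiguration like the one Brian describes:

#!/usr/bin/env python
# Hypothetical check: list NFS version and options for oVirt storage mounts.
# The mount prefix matches the path shown in the quoted "mount" output above.
MOUNT_ROOT = "/rhev/data-center/mnt/"

with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype, options = line.split()[:4]
        if mountpoint.startswith(MOUNT_ROOT) and fstype.startswith("nfs"):
            # Turn "rw,vers=4,proto=tcp,..." into an {option: value} dict.
            opts = dict(opt.partition("=")[::2] for opt in options.split(","))
            print("%s on %s (%s)" % (device, mountpoint, fstype))
            print("  vers=%s proto=%s sec=%s" % (
                opts.get("vers", "?"), opts.get("proto", "?"),
                opts.get("sec", "?")))

On the mount quoted earlier this would report vers=4; after switching the export and host to NFSv3 it should report vers=3.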