From roger.meier at 4synergy.com Tue Nov 24 06:57:37 2015
From: Roger Meier
To: users at ovirt.org
Subject: [ovirt-users] Problem with Adding Pre-Configured Domain
Date: Tue, 24 Nov 2015 12:40:17 +0100
Message-ID: <56544CA1.2000706@4synergy.com>

Hi All,

I don't know if this is a bug or an error on my side.

At the moment I have an oVirt 3.6 installation with two nodes and two storage servers, which are configured as master/slave (a Solaris ZFS snapshot copy goes from master to slave every 2 hours).

Now I am trying out some failure use cases, for example the master storage not being available anymore, or one of the virtual machines having to be restored from the snapshot.

Because the data on the slave is a snapshot copy, all data that is on the Data Domain NFS storage is also on the slave NFS storage.

I tried to add it in the WebUI via the option "Import Domain" (Import Pre-Configured Domain) with both domain functions (Data and Export), but nothing happens, except for some errors in the vdsm.log logfile.

Something like this:

Thread-253746::ERROR::2015-11-24 11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2545, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 256, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1: mountpoint not found\n')

I checked with nfs-check.py whether all permissions are OK; the tool says this:

[root@lin-ovirt1 contrib]# python ./nfs-check.py 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1
Current hostname: lin-ovirt1 - IP addr 192.168.1.14
Trying to /bin/mount -t nfs 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!

Greetings
Roger Meier
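
For readers unfamiliar with the replication setup described above, an incremental ZFS snapshot copy from master to slave usually boils down to a snapshot plus an incremental send/receive; the following is only a minimal sketch, with placeholder dataset and host names rather than the poster's actual configuration:

# Take a new snapshot on the master (dataset name is a placeholder).
zfs snapshot tank/sasData1@2015-11-24-1200

# Send only the changes since the previous snapshot to the slave host
# (host name is a placeholder) and roll the target dataset forward.
zfs send -i tank/sasData1@2015-11-24-1000 tank/sasData1@2015-11-24-1200 \
    | ssh slave-storage zfs receive -F tank/sasData1

Scheduling something like this every two hours from cron gives the master/slave behaviour described in the mail.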
From combuster at gmail.com Tue Nov 24 14:32:57 2015
From: Ivan Bulatovic
To: users at ovirt.org
Subject: Re: [ovirt-users] Problem with Adding Pre-Configured Domain
Date: Tue, 24 Nov 2015 20:32:43 +0100
Message-ID: <5654BB5B.5010808@gmail.com>
In-Reply-To: <56544CA1.2000706@4synergy.com>

Hi Roger,

you cannot import the snapshot because you already have an active storage domain that contains the same (meta)data. Try detaching the storage domain that you take snapshots from, remove it (do not select to wipe the data), and then try to import the snapshot. You will see a warning that the domain is already registered within the oVirt engine, and you can force the import to continue. After that, you should see the domain registered in the oVirt webadmin. Before detaching the domain, make sure you have another domain active so it can become the master domain, and create exports of the VMs that are part of that domain, just in case.

I had the same problem while trying to import a replicated storage domain: you can see that oVirt tries to import the domain, but it just returns to the import domain dialog. It actually mounts the domain for a few seconds, then disconnects and removes the mount point under /rhev/data-center/, and then it tries to unmount it and fails because the mount point doesn't exist anymore.

Mentioned it here recently: http://lists.ovirt.org/pipermail/users/2015-November/036027.html

Maybe there is a workaround for importing the clone/snap of the domain while the source is still active (messing with the metadata), but I haven't tried it so far (there are several implications that need to be taken into account). However, I'm also interested in whether there is a way to do such a thing, especially when you have two separate datacenters registered within the same engine; it would be great for us to be able to import snaps/replicated storage domains and/or the VMs that reside on them while still having the original VMs active and running.

Something similar to the third RFE here (this one is only for VMs): http://www.ovirt.org/Features/ImportUnregisteredEntities#RFEs

In any case, I'll try this ASAP, always an interesting topic. Any insights on this are highly appreciated.

Ivan
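
The short-lived mount that Ivan describes can be observed directly on the host while retrying the import; the commands below are only a suggested diagnostic, using the share name and paths from the log above:

# On the host doing the import, watch the vdsm mount directory while clicking
# "Import Domain" in the WebUI; the NFS mount appears briefly and then vanishes.
watch -n 1 'grep nfsshare1 /proc/mounts; echo ---; ls /rhev/data-center/mnt/'

# In a second terminal, follow vdsm.log for the matching connect/disconnect calls.
tail -f /var/log/vdsm/vdsm.log | grep -i -e connectStorageServer -e disconnectStorageServer -e MountError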
From roger.meier at 4synergy.com Wed Nov 25 04:50:53 2015
From: Roger Meier
To: users at ovirt.org
Subject: Re: [ovirt-users] Problem with Adding Pre-Configured Domain
Date: Wed, 25 Nov 2015 10:50:19 +0100
Message-ID: <5655845B.2090403@4synergy.com>
In-Reply-To: <5654BB5B.5010808@gmail.com>

Hi Ivan,

The problem is that there is no warning in the WebUI. I can press the button and nothing happens (I only see the error message in the logfile), no error; I am simply stuck in the config UI.

I have now tried another approach based on your idea. I put the master storage domain into maintenance mode, changed the NFS path, and activated it again with the new share path. After this, oVirt used the slave storage and I could start the VMs from the snapshot and read/write on them. After the test I switched back by changing the NFS path again and started the VMs on the master storage without problems. This works when the master storage is completely lost and I have to switch over entirely to the slave storage.

The only remaining problem is the use case where only one VM has a problem and I only need the snapshot data that is on the slave storage.

I read your link, and that may be what I need, but at the moment it looks like a feature that has been in "implementation" status since October last year.
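
Before repointing a domain at the slave like this, it can help to mount the slave export read-only by hand, outside of vdsm, and check that the replicated domain metadata is actually there; this is only a rough sketch, and the mount point and domain UUID are placeholders:

# Mount the replicated export somewhere outside /rhev so vdsm is not involved.
mkdir -p /mnt/slave-check
mount -t nfs -o ro 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1 /mnt/slave-check

# A file-based storage domain is a directory named after its UUID, with the
# domain metadata under dom_md/; it should match the domain the engine knows.
ls /mnt/slave-check
cat /mnt/slave-check/<DOMAIN-UUID>/dom_md/metadata

umount /mnt/slave-check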
Perhaps it would work if I added a new VM on the data storage, which creates empty disk images (or reused the existing broken VM), and then copied the disk images from the slave storage over the empty disk images. But this would have to be done on the console and is fault-prone.

A way to import snaps/replicated storage domains would be really cool.

Greetings
Roger
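
For the single-VM case above, the manual copy Roger has in mind would roughly look like the following; the UUIDs and export path are placeholders, the VM has to be shut down first, and as noted this is fault-prone rather than a supported procedure:

# Image directories on a file-based domain hold the volume file plus its
# .meta and .lease companions; copy the whole directory, preserving sparseness.
SRC=/mnt/slave-check/<DOMAIN-UUID>/images/<IMAGE-UUID>
DST=/rhev/data-center/mnt/<master-export>/<DOMAIN-UUID>/images/<IMAGE-UUID>

rsync -aS "$SRC"/ "$DST"/

# vdsm expects the files to be owned by vdsm:kvm (uid/gid 36).
chown -R 36:36 "$DST"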