Hi Ivan,
The problem is there is no warning in the web UI. I can press the button
and nothing happens (I only see the error message in the logfile); no
error is shown, and I am stuck in the configuration UI.
I have now tried another approach based on your idea. I put the master
storage domain into maintenance mode, changed the NFS path, and
activated it again with the new share path. After this, oVirt used the
slave storage and I could start the VMs from the snapshot and read/write
on them. After the test I switched back by modifying the NFS path again
and started the VMs on the master storage without problems. This works
for the case where the master storage is completely lost and I have to
switch entirely to the slave storage.
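By the way, the deactivate/activate part could probably also be scripted
against the REST API, roughly like this (untested sketch in Python; engine
URL, credentials and UUIDs are placeholders, and the NFS path change itself
I still did in the web UI while the domain was in maintenance):

    # Untested sketch: deactivate / activate the master data domain via the
    # oVirt REST API. Engine URL, credentials and UUIDs are placeholders.
    import requests

    ENGINE  = 'https://engine.example.local/ovirt-engine/api'  # placeholder
    AUTH    = ('admin@internal', 'password')                   # placeholder
    DC_ID   = '<datacenter-uuid>'                              # placeholder
    SD_ID   = '<storage-domain-uuid>'                          # placeholder
    HEADERS = {'Content-Type': 'application/xml'}

    def sd_action(action):
        # POST an empty <action/> to e.g.
        # .../datacenters/<dc>/storagedomains/<sd>/deactivate
        url = '%s/datacenters/%s/storagedomains/%s/%s' % (ENGINE, DC_ID, SD_ID, action)
        r = requests.post(url, data='<action/>', headers=HEADERS,
                          auth=AUTH, verify='/path/to/ca.crt')
        r.raise_for_status()

    sd_action('deactivate')  # put the domain into maintenance
    # ... change the NFS path here (this step I did in the web UI) ...
    sd_action('activate')    # bring it up again on the new share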
The remaining problem is the use case where only one VM has a problem
and I only need the snapshot data that is on the slave storage. I read
your link, and that may be exactly what I need, but at the moment it
looks like a feature that has been in the "implementation" state since
October last year.
Perhaps it would work if I add a new VM on the data storage, which
creates empty disk images (or reuse the existing broken VM), and
afterwards copy the disk images from the slave storage over the empty
ones. But that would have to be done on the console and is error-prone.
A way to import snapshotted/replicated storage domains would be really
cool.
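To illustrate what I mean, on the host it would be roughly something like
this (untested sketch; mount points and UUIDs are placeholders, and the VM
has to be down while the files are overwritten):

    # Untested sketch: copy the volume files of one disk image from the slave
    # domain over the image on the master domain. Mount points and UUIDs are
    # placeholders; the VM must be down while doing this.
    import os
    import shutil

    SLAVE_MNT  = '/mnt/slave-nfs'                                 # slave share, mounted by hand
    MASTER_MNT = '/rhev/data-center/mnt/<master-nfs-mountpoint>'  # placeholder
    SD_UUID    = '<storage-domain-uuid>'
    IMG_UUID   = '<disk-image-uuid>'  # same on both sides if I reuse the broken VM;
                                      # with a freshly created VM the UUIDs differ

    src = os.path.join(SLAVE_MNT, SD_UUID, 'images', IMG_UUID)
    dst = os.path.join(MASTER_MNT, SD_UUID, 'images', IMG_UUID)

    for name in os.listdir(src):
        # each image directory holds the volume file plus its .meta and .lease
        target = os.path.join(dst, name)
        shutil.copy2(os.path.join(src, name), target)
        os.chown(target, 36, 36)  # vdsm:kvm, so the node can still access it
        print('copied %s' % name)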
Greetings
Roger
On 24.11.2015 20:32, Ivan Bulatovic wrote:
Hi Roger,
you cannot import the snapshot because you already have an active
storage domain that contains the same (meta)data. Try detaching the
storage domain that you take snapshots from and remove it (do not
select to wipe the data), then try to import the snapshot. You will
see a warning that the domain is already registered within the oVirt
engine, and you can force the import to continue. After that, you
should see the domain registered in the oVirt webadmin. Before
detaching the domain, make sure you have another domain active so it
can become the master domain, and create exports of the VMs that are
part of that domain, just in case.
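If you want to script the detach/remove part, something along these lines
should do it against the REST API (untested sketch; engine URL, credentials,
UUIDs and the host name are placeholders, and the forced import afterwards I
would still do from the webadmin):

    # Untested sketch: detach a storage domain from its data center, then
    # remove it from the engine WITHOUT formatting it (the data stays intact).
    # Engine URL, credentials, UUIDs and the host name are placeholders.
    import requests

    ENGINE  = 'https://engine.example.local/ovirt-engine/api'  # placeholder
    AUTH    = ('admin@internal', 'password')                   # placeholder
    DC_ID   = '<datacenter-uuid>'
    SD_ID   = '<storage-domain-uuid>'
    HEADERS = {'Content-Type': 'application/xml'}
    CA      = '/path/to/ca.crt'

    # 1) detach the (already deactivated) domain from the data center
    r = requests.delete('%s/datacenters/%s/storagedomains/%s' % (ENGINE, DC_ID, SD_ID),
                        auth=AUTH, headers=HEADERS, verify=CA)
    r.raise_for_status()

    # 2) remove it from the engine with format=false, so the data is NOT wiped
    body = ('<storage_domain><host><name>myhost</name></host>'
            '<format>false</format></storage_domain>')
    r = requests.delete('%s/storagedomains/%s' % (ENGINE, SD_ID),
                        data=body, auth=AUTH, headers=HEADERS, verify=CA)
    r.raise_for_status()

    # after this, import the snapshot/replica and confirm the "already
    # registered" warning in the webadmin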
I had the same problem while trying to import a replicated storage
domain: you can see that oVirt tries to import the domain, but it just
returns to the import domain dialog. It actually mounts the domain for
a few seconds, then disconnects and removes the mount point under
/rhev/data-center/, and then it tries to unmount it and fails because
the mount point doesn't exist anymore.
I mentioned it here recently:
http://lists.ovirt.org/pipermail/users/2015-November/036027.html
Maybe there is a workaround for importing the clone/snap of the domain
while the source is still active (messing with the metadata), but I
haven't tried it so far (there are several implications that need to
be taken into account). However, I'm also interested in whether there
is a way to do such a thing, especially when you have two separate
datacenters registered within the same engine; it would be great for
us to be able to import snapped/replicated storage domains and/or the
VMs that reside on them while still having the original VMs active and
running.
Something similar to the third RFE here (this one is only for VMs):
http://www.ovirt.org/Features/ImportUnregisteredEntities#RFEs
In any case, I'll try this ASAP, always an interesting topic. Any
insights on this are highly appreciated.
Ivan
On 11/24/2015 12:40 PM, Roger Meier wrote:
> Hi All,
>
> I don't know if this is a bug or an error on my side.
>
> At the moment, I have an oVirt 3.6 installation with two nodes and
> two storage servers, which are configured as master/slave (a Solaris
> ZFS snapshot is copied from master to slave every 2 hours).
>
> Now I am trying some tests for failure use cases, e.g. the master
> storage is not available anymore, or one of the virtual machines must
> be restored from the snapshot.
>
> Because the data on the slave is a snapshot copy, all data that is on
> the Data Domain NFS storage is also on the slave NFS storage.
>
> I tried to add it in the web UI via the option "Import Domain" (Import
> Pre-Configured Domain) with both domain functions (Data and Export),
> but nothing happens, except for some errors in the vdsm.log logfile.
>
> Something like this:
>
> Thread-253746::ERROR::2015-11-24 11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2545, in disconnectStorageServer
>     conObj.disconnect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
>     return self._mountCon.disconnect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
>     self._mount.umount(True, True)
>   File "/usr/share/vdsm/storage/mount.py", line 256, in umount
>     return self._runcmd(cmd, timeout)
>   File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
>     raise MountError(rc, ";".join((out, err)))
> MountError: (32, ';umount: /rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1: mountpoint not found\n')
>
> I checked with nfs-check.py whether all permissions are OK; the tool
> says this:
>
> [root@lin-ovirt1 contrib]# python ./nfs-check.py 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1
> Current hostname: lin-ovirt1 - IP addr 192.168.1.14
> Trying to /bin/mount -t nfs 192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1...
> Executing NFS tests..
> Removing vdsmTest file..
> Status of tests [OK]
> Disconnecting from NFS Server..
> Done!
>
> Greetings
> Roger Meier
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users