On Sun, Aug 14, 2016 at 8:07 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> On Sun, Aug 14, 2016 at 5:55 PM, Siavash Safi <siavash.safi(a)gmail.com> wrote:
>> Hi,
>> An unknown bug broke our gluster storage (dom_md/ids is corrupted) and
>> oVirt no longer activates the storage (I tried to recover it following
>> similar issues reported on the mailing list, but it didn't work).
> Can you explain what you did?
cd /mnt/4697fbde-45fb-4f91-ac4c-5516bc59f683/dom_md/
rm ids
touch ids
sanlock direct init -s 4697fbde-45fb-4f91-ac4c-5516bc59f683:0:ids:1048576
> The best way to fix this is to initialize the corrupt ids file and
> activate the domain.
This would be great!
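A minimal sketch of that reinitialization, for reference (the placeholder
paths, the 1 MiB size, and the vdsm:kvm ownership are assumptions based on
file-based oVirt domains; sanlock's lockspace spec is
name:host_id:path:offset, with host_id 0 and offset 0 for init, so verify
all of this against your setup before running it):

    cd /mnt/<sd-uuid>/dom_md
    truncate -s 1M ids        # recreate the lease file at its usual size
    chown vdsm:kvm ids        # vdsm must be able to write it
    sanlock direct init -s <sd-uuid>:0:/mnt/<sd-uuid>/dom_md/ids:0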
>> As I checked, VM disk images are still accessible when I mount the
>> gluster storage manually.
>> How can we manually move the VM disk images to local storage? (oVirt
>> complains about the gluster storage being inactive when using the web
>> interface for move/copy.)
> You can easily copy the images to another file-based storage domain
> (nfs, gluster) like this:
> 1. activate the other storage domain using engine
> 2. mount the gluster domain manually
> 3. copy the image from the gluster domain to the other domain:
> cp -r gluster-domain-mountpoint/images/image-uuid \
>    /rhev/data-center/mnt/server:_path/other-domain-uuid/images/
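A concrete version of steps 2 and 3, with hypothetical server, volume, and
export names (the target path follows the /rhev/data-center/mnt layout
engine uses for mounted domains; substitute your own names and UUIDs):

    mkdir -p /mnt/gluster-tmp
    mount -t glusterfs gluster-server:/vol-name /mnt/gluster-tmp    # step 2
    cp -r /mnt/gluster-tmp/<sd-uuid>/images/<image-uuid> \
          /rhev/data-center/mnt/nfs-server:_export_data/<other-sd-uuid>/images/    # step 3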
> But the images will not be available, since engine does not know about
> them. Maybe this can be fixed by modifying the engine database.
How complicated is it?
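For what it's worth, a minimal read-only sketch of inspecting those
records (the database name and the images table with its image_guid and
image_group_id columns are assumptions from the 3.6-era schema; verify
them against your version before changing anything):

    # on the engine host
    sudo -u postgres psql engine \
        -c "SELECT image_guid, image_group_id FROM images LIMIT 10;"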
> Another solution (if you are using ovirt 4.0) is to upload the images
> to a new disk, and attach the disk to the vm instead of the missing
> disk.
We are running 3.6
> Nir