Not sure if this info would be helpful, but I had this happen on my CentOS
with Gluster install too. The logs are long gone, unfortunately.
On Sat, Dec 14, 2013 at 7:41 PM, Vered Volansky <vered(a)redhat.com> wrote:
I've looked at the logs and will have to discuss the issue with some
people, which I will only be able to do tomorrow.
In the meantime: when the message to remove the disks manually appears, it
really does mean manually - not through the engine at all, if that helps.
That means going through the DB, which is not recommended, so I'd prefer to
wait till tomorrow and figure this out.
----- Original Message -----
> From: "huntxu" <mhuntxu(a)gmail.com>
> To: users(a)ovirt.org
> Sent: Friday, December 13, 2013 5:59:37 AM
> Subject: Re: [Users] disks successfully removed with storage failure
> On Thu, 12 Dec 2013 21:39:32 +0800, Gianluca Cecchi
> <gianluca.cecchi(a)gmail.com> wrote:
> > Hello,
> > 3.3.2 beta on Fedora 19 engine with 2 Fedora 19 hosts and GlusterFS
> Hi, I think I have met the same problem this Monday, and I've worked out a
> fix. Would you please give it a try?
> I didn't verify whether it's exactly the same issue because I couldn't
> download your log due to my bad Internet connection.
> Does the log look like below:
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 857, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 1529, in deleteImage
>     dom.deleteImage(sdUUID, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/fileSD.py", line 342, in deleteImage
>     currImgDir = getImagePath(sdUUID, imgUUID)
>   File "/usr/share/vdsm/storage/fileSD.py", line 97, in getImagePath
>     return os.path.join(getDomPath(sdUUID), 'images', imgUUID)
>   File "/usr/share/vdsm/storage/fileSD.py", line 89, in getDomPath
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> http://gerrit.ovirt.org/#/c/22359/
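
Reading the traceback: deleteImage fails before any deletion happens, because getDomPath cannot find the storage domain's directory under the mount tree (e.g. when the GlusterFS mount is gone). The sketch below is a simplified, hypothetical illustration of that lookup pattern, not the actual VDSM code; the root path, names, and glob-based search are assumptions for illustration only.

```python
# Hypothetical, simplified sketch of the failing lookup seen in
# /usr/share/vdsm/storage/fileSD.py (illustrative, not real VDSM code).
import glob
import os

# Assumed mount root where file-based storage domains are attached.
STORAGE_ROOT = "/rhev/data-center/mnt"


class StorageDomainDoesNotExist(Exception):
    """Raised when no directory matching the domain UUID is found."""


def get_dom_path(sd_uuid):
    # Look for exactly one <mount>/<sd_uuid> directory under the root.
    matches = glob.glob(os.path.join(STORAGE_ROOT, "*", sd_uuid))
    if len(matches) != 1:
        # If the mount is unreachable (e.g. Gluster down), nothing
        # matches and the lookup fails before anything is deleted.
        raise StorageDomainDoesNotExist(sd_uuid)
    return matches[0]


def get_image_path(sd_uuid, img_uuid):
    # deleteImage resolves the image directory via the domain path,
    # so a missing domain aborts the whole operation here.
    return os.path.join(get_dom_path(sd_uuid), "images", img_uuid)
```

With the storage unreachable, any call such as `get_image_path(sdUUID, imgUUID)` raises `StorageDomainDoesNotExist` immediately, which matches the traceback above: the engine reports the disk as removed while nothing was actually cleaned up on storage.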
> Users mailing list