[Users] oVirt 3.3: removing disk failure

On 12. 11. 2013 at 15:54, Saša Friedrich wrote:

When I try to remove a virtual disk from the oVirt engine I get the error "User admin@internal finished removing disk test_vm with storage failure in domain DATA_DOMAIN." The VM itself was running fine with no errors. DATA_DOMAIN is a GlusterFS replicated volume (on the oVirt host).

oVirt engine (fc19):
ovirt-engine.noarch 3.3.0.1-1.fc19

oVirt host (fc19):
vdsm.x86_64 4.12.1-4.fc19
vdsm-gluster.noarch 4.12.1-4.fc19
glusterfs-server.x86_64 3.4.1-1.fc19

Thanks for the help.

On 12. 11. 2013 at 20:18, Saša Friedrich wrote:

After raising the vdsm log level I found the error:

Thread-5180::ERROR::2013-11-12 19:44:21,433::task::850::TaskManager.Task::(_setError) Task=`42a933fa-97f1-4260-8bc5-86c057dc8184`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1529, in deleteImage
    dom.deleteImage(sdUUID, imgUUID, volsByImg)
  File "/usr/share/vdsm/storage/fileSD.py", line 342, in deleteImage
    currImgDir = getImagePath(sdUUID, imgUUID)
  File "/usr/share/vdsm/storage/fileSD.py", line 97, in getImagePath
    return os.path.join(getDomPath(sdUUID), 'images', imgUUID)
  File "/usr/share/vdsm/storage/fileSD.py", line 89, in getDomPath
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',)
Thread-5180::ERROR::2013-11-12 19:44:21,435::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',)", 'code': 358}}

This happens only when I try to delete a virtual disk; the VM itself works fine. Any clue?

Thanks.

On 12. 11. 2013 at 21:19, Saša Friedrich wrote:

What I found so far: the function raising the error is getDomPath in /usr/share/vdsm/storage/fileSD.py:

def getDomPath(sdUUID):
    pattern = os.path.join(sd.StorageDomain.storage_repository,
                           sd.DOMAIN_MNT_POINT, '*', sdUUID)
    # Warning! You need a global proc pool big as the number of NFS domains.
    domPaths = getProcPool().glob.glob(pattern)
    if len(domPaths) == 0:
        raise se.StorageDomainDoesNotExist(sdUUID)
    elif len(domPaths) > 1:
        raise se.StorageDomainLayoutError(sdUUID)
    else:
        return domPaths[0]

When I click "Remove disk" in the engine, the variable "pattern" gets "/rhev/data-center/mnt/*/2799e01b-6e6e-4f3b-8cfe-779928ae9941", and "domPaths" is empty.
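For reference, the same lookup can be reproduced outside vdsm. This is a minimal diagnostic sketch, assuming the default /rhev/data-center/mnt layout from the thread; the second pattern reflects the assumption that vdsm mounts GlusterFS storage domains one level deeper, under mnt/glusterSD/<server>:<volume>/, which the single '*' in getDomPath() would not match:

# Evaluate the same glob that getDomPath() builds, directly on the host,
# without the vdsm process pool. SD_UUID is the domain UUID from the
# traceback above; the glusterSD pattern is an assumption about where
# vdsm 4.12 mounts GlusterFS storage domains.
import glob
import os

SD_UUID = '2799e01b-6e6e-4f3b-8cfe-779928ae9941'
MNT = '/rhev/data-center/mnt'

for pattern in (os.path.join(MNT, '*', SD_UUID),
                os.path.join(MNT, 'glusterSD', '*', SD_UUID)):
    matches = glob.glob(pattern)
    print('%s -> %s' % (pattern, matches or 'no match'))

If the second pattern matches while the first does not, the domain is mounted where the wildcard in getDomPath() cannot see it.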

On 13. 11. 2013 at 12:10, Saša Friedrich wrote:

Just for a test I enabled the "ovirt-updates-testing" and "ovirt-nightly" repos and did a yum update; the error is still there. I created a new virtual disk and then tried to delete it... same error!

Is there a way to remove the disk manually? I can mount the gluster volume and delete the disk, but what about the DB in the engine? Which records should I remove by hand?

Thanks.
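Before deleting anything by hand, it helps to see which engine records reference the disk. The following is a read-only sketch; the table and column names (base_disks, images, vm_device) are assumptions recalled from the 3.3-era engine schema, so verify them against your own database, and take a backup before any manual cleanup:

# Locate the rows the engine keeps for a disk, given its alias.
# Assumes local access to the 'engine' PostgreSQL database; adjust
# the connection parameters for your setup.
import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine', host='localhost')
cur = conn.cursor()

alias = 'test_vm'  # disk alias as shown in the webadmin UI
cur.execute("SELECT disk_id FROM base_disks WHERE disk_alias = %s", (alias,))
for (disk_id,) in cur.fetchall():
    print('disk: %s' % disk_id)
    cur.execute("SELECT image_guid FROM images WHERE image_group_id = %s",
                (disk_id,))
    print('  volume rows: %s' % cur.fetchall())
    cur.execute("SELECT vm_id FROM vm_device WHERE device_id = %s",
                (disk_id,))
    print('  vm_device rows: %s' % cur.fetchall())

cur.close()
conn.close()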

Sergey Gotliv replied:

Saša,

Please check this path from your host:

/rhev/data-center/mnt/*/2799e01b-6e6e-4f3b-8cfe-779928ae9941

Does it exist? If not, please try to access your GlusterFS volume through SSH. Does it exist that way?
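One quick way to answer that from the host is to read the mount table and probe each GlusterFS mount point for the domain UUID. A small sketch; only /proc/mounts and the UUID from the thread are assumed:

# List GlusterFS mounts from /proc/mounts and check whether the storage
# domain directory is reachable under each one.
import os

SD_UUID = '2799e01b-6e6e-4f3b-8cfe-779928ae9941'

with open('/proc/mounts') as f:
    gluster_mounts = [parts[1] for parts in (line.split() for line in f)
                      if 'gluster' in parts[2]]

for mnt in gluster_mounts:
    candidate = os.path.join(mnt, SD_UUID)
    print('%s -> %s' % (candidate,
                        'exists' if os.path.isdir(candidate) else 'missing'))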
From: "Saša Friedrich" <sasa.friedrich@bitlab.si> To: users@ovirt.org Sent: Wednesday, November 13, 2013 12:10:58 PM Subject: Re: [Users] Ovirt 3.3 removing disk failure
Just for test I enabled "ovirt-updates-testing" and "ovirt-nightly" repos. Did yum update and error is still there.
I created new virtual disk and then tried to delete it... same error!
Is there a way to remove disk manualy? I can mount gluster volume and delete the disk. But what about db in engine? Which records should I remove (by hand)?
tnx
Dne 12. 11. 2013 21:19, piše Saša Friedrich:
What I found so far...
Function returning error is getDomPath in "/usr/share/vdsm/storage/fileSD.py":
def getDomPath(sdUUID): pattern = os.path.join(sd.StorageDomain.storage_repository, sd.DOMAIN_MNT_POINT, '*', sdUUID) # Warning! You need a global proc pool big as the number of NFS domains. domPaths = getProcPool().glob.glob(pattern) if len(domPaths) == 0: raise se.StorageDomainDoesNotExist(sdUUID) elif len(domPaths) > 1: raise se.StorageDomainLayoutError(sdUUID) else: return domPaths[0]
When I click remove disk in engine, variable "pattern" gets "/rhev/data-center/mnt/*/2799e01b-6e6e-4f3b-8cfe-779928ae9941", and "domPaths" is empty
Dne 12. 11. 2013 20:18, piše Saša Friedrich:
After I changed the log level of vdsm I found the error:
Thread-5180::ERROR::2013-11-12 19:44:21,433::task::850::TaskManager.Task::(_setError) Task=`42a933fa-97f1-4260-8bc5-86c057dc8184`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 1529, in deleteImage dom.deleteImage(sdUUID, imgUUID, volsByImg) File "/usr/share/vdsm/storage/fileSD.py", line 342, in deleteImage currImgDir = getImagePath(sdUUID, imgUUID) File "/usr/share/vdsm/storage/fileSD.py", line 97, in getImagePath return os.path.join(getDomPath(sdUUID), 'images', imgUUID) File "/usr/share/vdsm/storage/fileSD.py", line 89, in getDomPath raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',) Thread-5180::ERROR::2013-11-12 19:44:21,435::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',)", 'code': 358}}
And this happens only when I want to delete virtual disk. VM alone works fine.
Any clue?
tnx
Dne 12. 11. 2013 15:54, piše Saša Friedrich:
When I try to remove viritual disk from ovirt engine I get error "User admin@internal finished removing disk test_vm with storage failure in domain DATA_DOMAIN."
VM itself was running fine with no errors.
DATA_DOMAIN is GlusterFS replicated volume (on ovirt host).
ovirt engine comp (fc19) ovirt-engine.noarch 3.3.0.1-1.fc19
ovirt host (fc19) vdsm.x86_64 4.12.1-4.fc19 vdsm-gluster.noarch 4.12.1-4.fc19 glusterfs-server.x86_64 3.4.1-1.fc19
tnx for help
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (2)
- Saša Friedrich
- Sergey Gotliv