[Users] Attaching export domain to dc fails

Dafna Ron dron at redhat.com
Thu Jan 24 17:08:57 UTC 2013


Before you do this, be sure that the export domain is *really not
attached to any DC*!
If you look under the Storage main tab, it should appear as unattached;
it must not be attached to a DC in this setup or in any other setup at all.

1. Go to the export domain's metadata file, located under the domain's
dom_md directory, for example:

72ec1321-a114-451f-bee1-6790cbca1bc6/dom_md/metadata

2. Back up the metadata file before you edit it!
Then open the metadata in an editor (e.g. vim) and remove the pool's
uuid value from the POOL_UUID field, leaving: 'POOL_UUID='
Also remove the _SHA_CKSUM line (remove the entire entry - not just the
value).

For example, my metadata was this:

CLASS=Backup
DESCRIPTION=BlaBla
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=cee3603b-2308-4973-97a8-480f7d6d2132
REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
ROLE=Regular
SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
TYPE=NFS
VERSION=0
_SHA_CKSUM=95bf1c9b8a75b077fe65d782e86b4c4c331a765d


and after editing it will be this:

CLASS=Backup
DESCRIPTION=BlaBla
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=
REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
ROLE=Regular
SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
TYPE=NFS
VERSION=0


You should be able to attach the domain after this change.
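The two edits above can also be scripted. Below is a minimal sketch of
that edit (the function name and approach are mine, not part of vdsm or
oVirt; it keeps a .bak copy so the original metadata can be restored):

```python
import shutil
from pathlib import Path

def clear_pool_attachment(md_path):
    """Blank the POOL_UUID value and drop the _SHA_CKSUM entry from an
    export domain's dom_md/metadata file, keeping a .bak copy first."""
    md = Path(md_path)
    # Back up the metadata before editing it!
    shutil.copy2(md, md.with_name(md.name + ".bak"))
    out = []
    for line in md.read_text().splitlines():
        if line.startswith("_SHA_CKSUM="):
            continue                    # remove the entire entry
        if line.startswith("POOL_UUID="):
            line = "POOL_UUID="         # keep the key, drop only the value
        out.append(line)
    md.write_text("\n".join(out) + "\n")
```

This only touches the two fields named above; every other line of the
metadata is written back unchanged.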


On 01/24/2013 06:39 PM, Patrick Hurrelmann wrote:
> Hi list,
>
> in one datacenter I'm facing problems with my export storage. The dc is
> of type single host with local storage. On the host I see that the nfs
> export domain is still connected, but the engine does not show this and
> therefore it cannot be used for exports or detached.
>
> Trying to attach the export domain again fails. The following is
> logged in vdsm:
>
> Thread-1902159::ERROR::2013-01-24
> 17:11:45,474::task::853::TaskManager.Task::(_setError)
> Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 861, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
>     pool.attachSD(sdUUID)
>   File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
>     return f(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
>     dom.attach(self.spUUID)
>   File "/usr/share/vdsm/storage/sd.py", line 442, in attach
>     raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
> StorageDomainAlreadyAttached: Storage domain already attached to pool:
> 'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
> pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'
>
> It won't let me attach the export domain saying that it is already
> attached. Manually umounting the export domain on the host results in
> the same error on subsequent attach.
>
> This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
>
> vdsm.x86_64                                 4.10.0-0.44.14.el6
> vdsm-cli.noarch                             4.10.0-0.44.14.el6
> vdsm-python.x86_64                          4.10.0-0.44.14.el6
> vdsm-xmlrpc.noarch                          4.10.0-0.44.14.el6
>
> Engine:
>
> ovirt-engine.noarch                         3.1.0-3.19.el6
> ovirt-engine-backend.noarch                 3.1.0-3.19.el6
> ovirt-engine-cli.noarch                     3.1.0.7-1.el6
> ovirt-engine-config.noarch                  3.1.0-3.19.el6
> ovirt-engine-dbscripts.noarch               3.1.0-3.19.el6
> ovirt-engine-genericapi.noarch              3.1.0-3.19.el6
> ovirt-engine-jbossas711.x86_64              1-0
> ovirt-engine-notification-service.noarch    3.1.0-3.19.el6
> ovirt-engine-restapi.noarch                 3.1.0-3.19.el6
> ovirt-engine-sdk.noarch                     3.1.0.5-1.el6
> ovirt-engine-setup.noarch                   3.1.0-3.19.el6
> ovirt-engine-tools-common.noarch            3.1.0-3.19.el6
> ovirt-engine-userportal.noarch              3.1.0-3.19.el6
> ovirt-engine-webadmin-portal.noarch         3.1.0-3.19.el6
> ovirt-image-uploader.noarch                 3.1.0-16.el6
> ovirt-iso-uploader.noarch                   3.1.0-16.el6
> ovirt-log-collector.noarch                  3.1.0-16.el6
>
> How can this be recovered to a sane state? If more information is
> needed, please do not hesitate to request it.
>
> Thanks and regards
> Patrick
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Dafna Ron
