[Users] How to remove storage domain

Gianluca Cecchi gianluca.cecchi at gmail.com
Thu Apr 18 03:09:13 EDT 2013


On Wed, Apr 17, 2013 at 11:56 AM, Gianluca Cecchi wrote:
> On Wed, Apr 17, 2013 at 11:30 AM, Yeela Kaplan wrote:
>> Gianluca,
>> You need to first put the domain to maintenance,
>> then detach the storage domain from the data center,
>> and then the 'remove' option will be available.
>>
>> --
>> Yeela
>
> I assumed that, but I couldn't find the place in the GUI to put
> the SD into maintenance....
>
> Following the guide for RHEV 3.1, it should be something like this, correct?
>
> Procedure 7.12. Removing a Storage Domain
> 1. Use the Storage resource tab, tree mode, or the search function to
> find and select the
> appropriate storage domain in the results list.
> 2. Move the domain into maintenance mode to deactivate it.
> 3. Detach the domain from the data center.
> 4. Click Remove to open the Remove Storage confirmation window.
> 5. Select a host from the list.
> 6. Click OK to remove the storage domain and close the window.
>
> Possibly I selected the wrong place in step 1, because I don't remember
> seeing a "Maintenance" option there.
> I'm going to re-check....
> Or perhaps it was greyed out...
>
> Gianluca

OK, I found it... not very intuitive, in my opinion...

1. Use the Storage resource tab, tree mode
2. Select the storage domain in the main tab
3. In the details pane at the bottom, select the Data Center tab
4. Select the DC where the storage domain is active
5. Click the Maintenance link
6. Click the Detach link to detach the domain from the data center.

Now the storage domain no longer appears in the Storage resource tab.
7. Go to System --> Storage:
the storage domain is now in an unattached state
8. Select the storage domain and choose "Remove"
9. Select a host from the list.
10. Click OK to remove the storage domain and close the window.

So this seems to be the detailed workflow; a scripted equivalent is sketched below.
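
For reference, the same steps can also be driven through the oVirt 3.x
Python SDK (ovirtsdk). This is only a minimal sketch, not a tested
recipe: the engine URL, credentials, and the mydc/mysd/myhost names are
placeholders.

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',   # placeholder engine URL
          username='admin@internal',
          password='secret',                      # placeholder credentials
          insecure=True)                          # skip TLS certificate checks

dc = api.datacenters.get(name='mydc')             # placeholder data center
attached = dc.storagedomains.get(name='mysd')     # the domain as seen from the DC

attached.deactivate()   # steps 1-5: put the domain into maintenance
attached.delete()       # step 6: detach it from the data center

# steps 8-10: remove the now-unattached domain; a host that can still
# reach the storage must be named so it can clean up the backing storage
sd = api.storagedomains.get(name='mysd')
sd.delete(params.StorageDomain(host=params.Host(name='myhost')))

api.disconnect()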

In my case, after step 10 I receive:

Error while executing action Remove Storage Domain: Volume Group remove error

In vdsm.log:
Thread-2043307::DEBUG::2013-04-18 09:08:16,909::task::1151::TaskManager.Task::(prepare) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::finished: {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'delay': '0.0529000759125', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'delay': '0.0538139343262', 'lastCheck': '8.6', 'code': 0, 'valid': True}, u'8573d237-f86f-4b27-be80-479281a53645': {'delay': '0.0613639354706', 'lastCheck': '5.6', 'code': 0, 'valid': True}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'delay': '0.0538330078125', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'delay': '0.0559990406036', 'lastCheck': '0.2', 'code': 0, 'valid': True}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'delay': '0.0598528385162', 'lastCheck': '0.2', 'code': 0, 'valid': True}, u'14b5167c-5883-4920-8236-e8905456b01f': {'delay': '0.0535669326782', 'lastCheck': '4.6', 'code': 0, 'valid': True}}
Thread-2043307::DEBUG::2013-04-18 09:08:16,909::task::568::TaskManager.Task::(_updateState) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::moving from state preparing -> state finished
Thread-2043307::DEBUG::2013-04-18 09:08:16,909::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2043307::DEBUG::2013-04-18 09:08:16,910::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2043307::DEBUG::2013-04-18 09:08:16,910::task::957::TaskManager.Task::(_decref) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-04-18 09:08:17,722::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 60.
Dummy-1143::DEBUG::2013-04-18 09:08:17,969::misc::84::Storage.Misc.excCmd::(<lambda>) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-1143::DEBUG::2013-04-18 09:08:18,069::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.031044 s, 33.0 MB/s\n'; <rc> = 0
Thread-1928975::ERROR::2013-04-18 09:08:18,136::utils::416::vm.Vm::(collect) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x16f45d0>
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 412, in collect
    statsFunction()
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 287, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/libvirtvm.py", line 134, in _highWrite
    self._vm._dom.blockInfo(vmDrive.path, 0)
  File "/usr/share/vdsm/libvirtvm.py", line 541, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1781, in blockInfo
    if ret is None: raise libvirtError ('virDomainGetBlockInfo() failed', dom=self)
libvirtError: failed to open path '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304': No such file or directory
Thread-23::DEBUG::2013-04-18 09:08:18,351::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd iflag=direct if=/dev/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83/metadata bs=4096 count=1' (cwd None)
Thread-23::DEBUG::2013-04-18 09:08:18,397::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000349033 s, 11.7 MB/s\n'; <rc> = 0
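
Since the error mentions a volume group, one thing worth checking on the
SPM host is whether the VG backing the domain is still there: vdsm names
the backing VG after the storage domain UUID. A minimal sketch (Python 2,
wrapping the LVM tools; the UUID below is a placeholder for the UUID of
the domain being removed):

import subprocess

# Placeholder: substitute the UUID of the storage domain being removed
SD_UUID = '00000000-0000-0000-0000-000000000000'

# vdsm names the backing volume group after the storage domain UUID,
# so listing VG names shows whether the remove actually happened.
out = subprocess.check_output(['vgs', '--noheadings', '-o', 'vg_name'])
names = [line.strip() for line in out.splitlines()]
if SD_UUID in names:
    print('VG %s still exists: vgremove failed or never ran' % SD_UUID)
else:
    print('VG %s is gone: the failure is probably elsewhere' % SD_UUID)

If the VG does turn out to linger, deactivating it (vgchange -an) and
removing it by hand (vgremove) is the usual manual cleanup, but only once
nothing references the domain any more.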

Any hints?

