[Users] How to remove storage domain

Hello, oVirt 3.2.1 on f18. I have an FC datacenter where I have several storage domains. I want to remove one storage domain, so I moved all its disks to other ones (Disk --> Move). At the end its state is active, but the "Remove" option is greyed out; I can only do "Destroy". What should I do to make the "Remove" option available? Thanks, Gianluca

Gianluca, you need to first put the domain into maintenance, then detach the storage domain from the data center; after that the 'Remove' option will be available. -- Yeela
From: "Gianluca Cecchi" <gianluca.cecchi@gmail.com> To: "users" <users@ovirt.org> Sent: Tuesday, April 16, 2013 7:01:54 PM Subject: [Users] How to remove storage domain
Hello, oVirt 3.2.1 on f18.
I have an FC datacenter where I have several storage domains. I want to remove one storage domain, so I move all its disks to other ones (disk --> move) At the end its state is active, but the "remove" option is greyed out. I can only do "destroy".
What to do to have the "delete" option possible?
Thanks, Gianluca _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Wed, Apr 17, 2013 at 11:30 AM, Yeela Kaplan wrote:
Gianluca, you need to first put the domain into maintenance, then detach the storage domain from the data center; after that the 'Remove' option will be available.
-- Yeela
I supposed that, but I didn't find the place in the GUI where to put the SD into maintenance... Following the guide for RHEV 3.1 it should be something like this, correct?

Procedure 7.12. Removing a Storage Domain
1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
2. Move the domain into maintenance mode to deactivate it.
3. Detach the domain from the data center.
4. Click Remove to open the Remove Storage confirmation window.
5. Select a host from the list.
6. Click OK to remove the storage domain and close the window.

Possibly I selected the wrong place in step 1, because I don't remember seeing a "Maintenance" option there. I'm going to re-check... Or perhaps it was greyed out...
Gianluca
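For reference, the same maintenance / detach / remove sequence can also be scripted against the engine's REST API instead of the GUI. This is only a rough sketch: the endpoint paths and the shape of the DELETE body are as I remember them from the 3.x API, and ENGINE, PASSWORD, DC_ID, SD_ID and HOST_ID are placeholders, so check everything against your engine's own API description (https://ENGINE/api?rsdl) before relying on it.

# 1. deactivate (put into maintenance) the domain within its data center
curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
     -X POST -d '<action/>' \
     https://ENGINE/api/datacenters/DC_ID/storagedomains/SD_ID/deactivate

# 2. detach the domain from the data center
curl -k -u admin@internal:PASSWORD -X DELETE \
     https://ENGINE/api/datacenters/DC_ID/storagedomains/SD_ID

# 3. remove the now-unattached domain, naming a host to perform the cleanup
curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
     -X DELETE -d '<storage_domain><host id="HOST_ID"/></storage_domain>' \
     https://ENGINE/api/storagedomains/SD_ID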

Ok, I found it... not so intuitive in my opinion:
1. Use the Storage resource tab, tree mode
2. Select the storage domain in the main tab
3. In the details pane at the bottom, select the Data Center tab
4. Select the DC where the storage domain is active
5. Click the Maintenance link
6. Click the Detach link to detach it from the data center. Now the storage domain no longer appears in the Storage resource tab.
7. Go to System --> Storage; the storage domain is now in an unattached state
8. Select the storage domain and choose "Remove"
9. Select a host from the list
10. Click OK to remove the storage domain and close the window

So this seems to be the detailed workflow. In my case, after step 10 I receive:

Error while executing action Remove Storage Domain: Volume Group remove error

In vdsm.log:

Thread-2043307::DEBUG::2013-04-18 09:08:16,909::task::1151::TaskManager.Task::(prepare) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::finished: {u'be882de7-6c79-413b-85aa-f0b8b77cb59e': {'delay': '0.0529000759125', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'3fb66ba1-cfcb-4341-8960-46f0e8cf6e83': {'delay': '0.0538139343262', 'lastCheck': '8.6', 'code': 0, 'valid': True}, u'8573d237-f86f-4b27-be80-479281a53645': {'delay': '0.0613639354706', 'lastCheck': '5.6', 'code': 0, 'valid': True}, u'596a3408-67d7-4b26-b482-e3a7554a5897': {'delay': '0.0538330078125', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'e3251723-08e1-4b4b-bde4-c10d6372074b': {'delay': '0.0559990406036', 'lastCheck': '0.2', 'code': 0, 'valid': True}, u'2aff7dc6-e25b-433b-9681-5541a29bb07c': {'delay': '0.0598528385162', 'lastCheck': '0.2', 'code': 0, 'valid': True}, u'14b5167c-5883-4920-8236-e8905456b01f': {'delay': '0.0535669326782', 'lastCheck': '4.6', 'code': 0, 'valid': True}}
Thread-2043307::DEBUG::2013-04-18 09:08:16,909::task::568::TaskManager.Task::(_updateState) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::moving from state preparing -> state finished
Thread-2043307::DEBUG::2013-04-18 09:08:16,909::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2043307::DEBUG::2013-04-18 09:08:16,910::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2043307::DEBUG::2013-04-18 09:08:16,910::task::957::TaskManager.Task::(_decref) Task=`c2c45a63-6361-4d68-b93d-b03a6825cf5f`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-04-18 09:08:17,722::vmChannels::61::vds::(_handle_timeouts) Timeout on fileno 60.
Dummy-1143::DEBUG::2013-04-18 09:08:17,969::misc::84::Storage.Misc.excCmd::(<lambda>) 'dd if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
Dummy-1143::DEBUG::2013-04-18 09:08:18,069::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.031044 s, 33.0 MB/s\n'; <rc> = 0
Thread-1928975::ERROR::2013-04-18 09:08:18,136::utils::416::vm.Vm::(collect) vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x16f45d0>
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 412, in collect
    statsFunction()
  File "/usr/lib64/python2.7/site-packages/vdsm/utils.py", line 287, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/libvirtvm.py", line 134, in _highWrite
    self._vm._dom.blockInfo(vmDrive.path, 0)
  File "/usr/share/vdsm/libvirtvm.py", line 541, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1781, in blockInfo
    if ret is None: raise libvirtError ('virDomainGetBlockInfo() failed', dom=self)
libvirtError: failed to open path '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304': No such file or directory
Thread-23::DEBUG::2013-04-18 09:08:18,351::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/dd iflag=direct if=/dev/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83/metadata bs=4096 count=1' (cwd None)
Thread-23::DEBUG::2013-04-18 09:08:18,397::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000349033 s, 11.7 MB/s\n'; <rc> = 0

Any hint?
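Two things are worth checking at this point. First, the engine's generic "Volume Group remove error" usually hides a more specific LVM failure on the host that ran the operation; a rough way to dig it out of the vdsm log (a sketch, assuming the default log location /var/log/vdsm/vdsm.log, and the exact wording of the LVM error will differ):

# show the vgremove attempts vdsm logged, plus the lines that follow them
grep -i -A 15 'vgremove' /var/log/vdsm/vdsm.log | tail -40

Second, the libvirtError above suggests a running VM is still being polled for a disk path that lives on the domain being removed. A quick read-only check of which block paths a domain actually references (the VM name 'zensrv' is taken from the qemu-kvm process identified further down in the thread):

virsh -r list                                # running domains, read-only connection
virsh -r domblklist zensrv                   # disk paths the domain still references
virsh -r dumpxml zensrv | grep 'source dev'  # same information from the live XML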

Further info about the node:

[root@f18ovn01 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  013bcc40-5f3d-4394-bd3b-971b14852654   1   5   0 wz--n-  99.62g 83.50g
  14b5167c-5883-4920-8236-e8905456b01f   1  11   0 wz--n-  99.62g 58.25g
  2aff7dc6-e25b-433b-9681-5541a29bb07c   1  11   0 wz--n-  99.62g 21.75g
  3fb66ba1-cfcb-4341-8960-46f0e8cf6e83   1  18   0 wz--n- 149.62g 64.38g
  596a3408-67d7-4b26-b482-e3a7554a5897   1  13   0 wz--n-  99.62g 57.75g
  8573d237-f86f-4b27-be80-479281a53645   1   6   0 wz--n-  49.62g 45.75g
  VG_ISCSI                               1   1   0 wz--n-  50.00g      0
  be882de7-6c79-413b-85aa-f0b8b77cb59e   1   6   0 wz--n-  99.62g 95.75g
  fedora_f18ovn01                        1   2   0 wz--n-  67.84g  4.00m
  vg_drbd0                               1   1   0 wz--n-   2.00g      0

[root@f18ovn01 ~]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304
ls: cannot access /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304: No such file or directory

[root@f18ovn01 ~]# ll /rhev/data-center/
total 12
drwxr-xr-x. 2 vdsm kvm 4096 Apr 18 09:03 5849b030-626e-47cb-ad90-3ce782d831b3
drwxr-xr-x. 2 vdsm kvm 4096 Mar 11 17:16 hsm-tasks
drwxr-xr-x. 5 vdsm kvm 4096 Mar 19 14:18 mnt

[root@f18ovn01 ~]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3
total 32
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 14b5167c-5883-4920-8236-e8905456b01f -> /rhev/data-center/mnt/blockSD/14b5167c-5883-4920-8236-e8905456b01f
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 2aff7dc6-e25b-433b-9681-5541a29bb07c -> /rhev/data-center/mnt/blockSD/2aff7dc6-e25b-433b-9681-5541a29bb07c
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 3fb66ba1-cfcb-4341-8960-46f0e8cf6e83 -> /rhev/data-center/mnt/blockSD/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 596a3408-67d7-4b26-b482-e3a7554a5897 -> /rhev/data-center/mnt/blockSD/596a3408-67d7-4b26-b482-e3a7554a5897
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 8573d237-f86f-4b27-be80-479281a53645 -> /rhev/data-center/mnt/blockSD/8573d237-f86f-4b27-be80-479281a53645
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 be882de7-6c79-413b-85aa-f0b8b77cb59e -> /rhev/data-center/mnt/blockSD/be882de7-6c79-413b-85aa-f0b8b77cb59e
lrwxrwxrwx. 1 vdsm kvm 88 Apr 18 09:03 e3251723-08e1-4b4b-bde4-c10d6372074b -> /rhev/data-center/mnt/f18engine.ceda.polimi.it:_ISO/e3251723-08e1-4b4b-bde4-c10d6372074b
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 mastersd -> /rhev/data-center/mnt/blockSD/2aff7dc6-e25b-433b-9681-5541a29bb07c

[root@f18ovn01 ~]# lvs 013bcc40-5f3d-4394-bd3b-971b14852654
  LV                                   VG                                   Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  0a6c7300-011a-46f9-9a5d-d22476a7f4c6 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-ao---  1.00g
  2be37d02-b44f-4823-bf26-054d1a1f0c90 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-ao--- 10.00g
  d477fcba-2110-403e-93fe-15565aae5304 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-ao---  1.00g
  ef887fcd-f2a9-4e26-a01e-bdbf5ba14ae5 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-ao---  2.12g
  f8eb4d4c-9aae-44b8-9123-73f3182dc4dc 013bcc40-5f3d-4394-bd3b-971b14852654 -wi-ao---  2.00g

So the question is: why does the VG related to my unattached SD (013bcc40-5f3d-4394-bd3b-971b14852654, if I understand correctly) still have active, open logical volumes, and what are they for? Before putting it into maintenance I verified that no VM disks were associated with that SD (at least from the GUI).
Gianluca
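The lvs output above already hints at the answer: an Attr value of '-wi-ao---' means the LV is both (a)ctive (5th character) and (o)pen (6th character). A generic way to confirm which LVs are open and then find the process holding them (a sketch; the VG name is the one from the output above):

# 6th character of the Attr field: 'o' = the LV device is open
lvs -o lv_name,lv_attr 013bcc40-5f3d-4394-bd3b-971b14852654

# an Open count > 0 in device-mapper confirms someone has the device open
# (dm names mangle the VG/LV names, so just grep for a fragment)
dmsetup info -c | grep 013bcc40

# identify the PID that has the devices open
fuser -v /dev/013bcc40-5f3d-4394-bd3b-971b14852654/*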

To see who is holding the VG:

[root@f18ovn01 ~]# fuser /dev/013bcc40-5f3d-4394-bd3b-971b14852654/*
/dev/dm-83: 27445
/dev/dm-73: 27445
/dev/dm-84: 27445
/dev/dm-82: 27445
[root@f18ovn01 ~]# ps -wfp 27445
UID        PID  PPID  C STIME TTY          TIME CMD
qemu     27445     1  7 Apr16 ?        03:02:22 /usr/bin/qemu-kvm -name zensrv -S -M pc-0.14 -cpu Opteron_G2 -enable-kvm -m 2048 -sm

It was a VM whose disk I had moved; apparently I got an error about it, but in the GUI the disk shown was the new one (the original disk was on this SD...); see:
http://lists.ovirt.org/pipermail/users/2013-April/013847.html

So I shut down the VM:

[root@f18ovn01 ~]# fuser /dev/013bcc40-5f3d-4394-bd3b-971b14852654/*
[root@f18ovn01 ~]#

Now the remove operation from the GUI gives:

Error while executing action Remove Storage Domain: Storage domain cannot be reached. Please ensure it is accessible from the host(s).

So it seems I have a missing link, but the actual structure is still there:

[root@f18ovn01 ~]# ll /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/
total 32
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 14b5167c-5883-4920-8236-e8905456b01f -> /rhev/data-center/mnt/blockSD/14b5167c-5883-4920-8236-e8905456b01f
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 2aff7dc6-e25b-433b-9681-5541a29bb07c -> /rhev/data-center/mnt/blockSD/2aff7dc6-e25b-433b-9681-5541a29bb07c
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 3fb66ba1-cfcb-4341-8960-46f0e8cf6e83 -> /rhev/data-center/mnt/blockSD/3fb66ba1-cfcb-4341-8960-46f0e8cf6e83
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 596a3408-67d7-4b26-b482-e3a7554a5897 -> /rhev/data-center/mnt/blockSD/596a3408-67d7-4b26-b482-e3a7554a5897
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 8573d237-f86f-4b27-be80-479281a53645 -> /rhev/data-center/mnt/blockSD/8573d237-f86f-4b27-be80-479281a53645
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 be882de7-6c79-413b-85aa-f0b8b77cb59e -> /rhev/data-center/mnt/blockSD/be882de7-6c79-413b-85aa-f0b8b77cb59e
lrwxrwxrwx. 1 vdsm kvm 88 Apr 18 09:03 e3251723-08e1-4b4b-bde4-c10d6372074b -> /rhev/data-center/mnt/f18engine.ceda.polimi.it:_ISO/e3251723-08e1-4b4b-bde4-c10d6372074b
lrwxrwxrwx. 1 vdsm kvm 66 Apr 18 09:03 mastersd -> /rhev/data-center/mnt/blockSD/2aff7dc6-e25b-433b-9681-5541a29bb07c
[root@f18ovn01 ~]# ll /rhev/data-center/mnt/blockSD/013bcc40-5f3d-4394-bd3b-971b14852654
total 8
drwxr-xr-x. 2 vdsm kvm 4096 Mar 11 18:48 dom_md
drwxr-xr-x. 4 vdsm kvm 4096 Apr 16 17:54 images

I don't know whether the symlink only exists while the SD is attached to the DC, or what the actual problem is, so I try to force-destroy it. Message:

"The following operation is unrecoverable and destructive! All references to objects that reside on Storage Domain DS6800_Z2_1601 in the database will be removed. You may need to manually clean the storage in order to reuse it."

I approve and get:

Storage Domain DS6800_Z2_1601 was forcibly removed by admin@internal

The VG is still there:

[root@f18ovn01 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  013bcc40-5f3d-4394-bd3b-971b14852654   1   5   0 wz--n-  99.62g 83.50g
  14b5167c-5883-4920-8236-e8905456b01f   1  11   0 wz--n-  99.62g 58.25g
  2aff7dc6-e25b-433b-9681-5541a29bb07c   1  11   0 wz--n-  99.62g 21.75g
  3fb66ba1-cfcb-4341-8960-46f0e8cf6e83   1  18   0 wz--n- 149.62g 64.38g
  596a3408-67d7-4b26-b482-e3a7554a5897   1  13   0 wz--n-  99.62g 57.75g
  8573d237-f86f-4b27-be80-479281a53645   1   6   0 wz--n-  49.62g 45.75g
  VG_ISCSI                               1   1   0 wz--n-  50.00g      0
  be882de7-6c79-413b-85aa-f0b8b77cb59e   1   6   0 wz--n-  99.62g 95.75g
  fedora_f18ovn01                        1   2   0 wz--n-  67.84g  4.00m
  vg_drbd0                               1   1   0 wz--n-   2.00g      0

Could I safely remove that VG and its LVs from the OS?
Gianluca
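On the last question: after a "Destroy" the engine only drops its database references (as the confirmation dialog itself warns), so the VG and its LVs have to be cleaned up by hand before the LUN can be reused. A minimal sketch of the usual LVM cleanup, assuming nothing on that VG is still needed and no process has it open any more (re-check with fuser and vgs first, on every host that sees the LUN):

# deactivate all LVs in the leftover VG
vgchange -an 013bcc40-5f3d-4394-bd3b-971b14852654

# remove the VG together with its LVs (-f skips the per-LV confirmation prompts)
vgremove -f 013bcc40-5f3d-4394-bd3b-971b14852654

# optionally wipe the PV label from the underlying LUN so it shows up as blank again
# (/dev/mapper/LUN_WWID is a placeholder for that LUN's multipath device)
pvremove /dev/mapper/LUN_WWID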
participants (2)
- Gianluca Cecchi
- Yeela Kaplan