[Users] Reimporting storage domains after reinstalling ovirt

Hi Guys, Currently I've got a Centos machine running the latest ovirt-release. This machine is using a local raid set containing a directory with ovirt-based VMs from my previous install. The ovirt install is a completely fresh install, no storage/VMs have been created yet. What's the best way to reimport those? I might try to create a new storage domain and copy all old VMs into it (if that works anyway...) , or just reimport the old domain from the web-interface. Does either trick have any advantage, or is there a best practice I should adhere to? Cheers, Boudewijn

On 09-03-14 02:12, Boudewijn Ector wrote:
Hi Guys,
Currently I've got a Centos machine running the latest ovirt-release. This machine is using a local raid set containing a directory with ovirt-based VMs from my previous install. The ovirt install is a completely fresh install, no storage/VMs have been created yet.
What's the best way to reimport those? I might try to create a new storage domain and copy all old VMs into it (if that works anyway...) , or just reimport the old domain from the web-interface.
Does either trick have any advantage, or is there a best practice I should adhere to?
Cheers,
Boudewijn

I just tried reimporting my old storage domains but that doesn't seem to work:
[root@server data]# pwd
/raid/ovirt-old/data
[root@server data]# ls
1979444d-b79a-494c-8c1a-bcc132e31a04  __DIRECT_IO_TEST__

This is the old data domain I used, and 1979444d-b79a-494c-8c1a-bcc132e31a04 contains quite a lot of VMs.

When going to the webinterface and doing:
- Storage
- Import domain
- Type: NFS
- export path: $IP:/raid/ovirt-old/data (and $IP:/raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04)

Of course I created an entry in /etc/exports in order to be able to mount this domain by NFS:

[root@leiden data]# exportfs
/raid/ovirt      192.168.1.44/255.255.255.255
/raid/ovirt-old  192.168.1.44/255.255.255.255

And ownership is by user vdsm. Despite this it doesn't work. Unfortunately, this page (http://www.ovirt.org/Features/Import_an_existing_Storage_Domain) isn't of much use either.

How should I do this?

Cheers,
Boudewijn
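For reference, an /etc/exports entry along these lines usually works for oVirt NFS domains; this is only a sketch, and squashing to uid/gid 36 (vdsm:kvm) is an assumption, not something taken from the thread:

/raid/ovirt-old/data  192.168.1.44(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

exportfs -ra            # re-export after editing /etc/exports
showmount -e localhost  # verify the export is visible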

Hi Boudewijn,

First of all, the wiki page you are referring to is a feature page that was never implemented. We are currently working on the same feature; this is the correct feature page: http://www.ovirt.org/Features/ImportStorageDomain . I will ask that the irrelevant page be removed to avoid any further confusion.

Second, currently, the only way to import a domain is to create an export domain and import it. Can you get the old setup up and create an export domain? If not, we'll try to help and work around this issue, but this is going to be very complex, since this is not supported.

Just to have a general understanding of your setup - is your storage on the same machine as the ovirt engine?

I'm assuming you're using web-admin. What exactly are you doing there? We don't have the import option there as you mentioned, so I don't understand how you could import an SD that was not exported. Adding the entry to /etc/exports is meaningless.

Do you have the ovfs of the old VMs available? In the meantime I'm checking for an easier solution than the above; will get back to you on that.

Regards,
Vered

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: users@ovirt.org Sent: Sunday, March 9, 2014 3:48:41 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
On 09-03-14 02:12, Boudewijn Ector wrote:
Hi Guys,
Currently I've got a Centos machine running the latest ovirt-release. This machine is using a local raid set containing a directory with ovirt-based VMs from my previous install. The ovirt install is a completely fresh install, no storage/VMs have been created yet.
What's the best way to reimport those? I might try to create a new storage domain and copy all old VMs into it (if that works anyway...) , or just reimport the old domain from the web-interface.
Does either trick have any advantage, or is there a best practice I should adhere to?
Cheers,
Boudewijn

I just tried reimporting my old storage domains but that doesn't seem to work:
[root@server data]# pwd
/raid/ovirt-old/data
[root@server data]# ls
1979444d-b79a-494c-8c1a-bcc132e31a04  __DIRECT_IO_TEST__
This is the old data domain I used, and the 1979444d-b79a-494c-8c1a-bcc132e31a04 contains quite a lot of VMs.
When going to the webinterface and doing:
- Storage
- Import domain
- Type: NFS
- export path: $IP:/raid/ovirt-old/data (and $IP:/raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04)
Of course I created an entry in /etc/exports in order to be able to mount this domain by NFS:
[root@leiden data]# exportfs
/raid/ovirt      192.168.1.44/255.255.255.255
/raid/ovirt-old  192.168.1.44/255.255.255.255
And ownership is by user vdsm.
Despite this it doesn't work. Unfortunately, this page (http://www.ovirt.org/Features/Import_an_existing_Storage_Domain) isn't of much use either.
How should I do this?
Cheers,
Boudewijn

On 09-03-14 14:11, Vered Volansky wrote:
Hi Boudewijn,
First of all, the wiki page you are referring to is a feature page that was never implemented. We are currently working on the same feature, this is the correct feature page: http://www.ovirt.org/Features/ImportStorageDomain . I will ask that the irrelevant page be removed to avoid any further confusion.
Second, currently, the only way to import a domain is to create an export domain and import it. Can you get the old setup up and create an export domain? If not, we'll try to help and work around this issue, but this is going to be very complex, since this is not supported.
Just to have a general understanding of your setup - is your storage on the same machine as the ovirt engine?
I'm assuming you're using web-admin. What exactly are you doing there? We don't have the import option there as you mentioned, so I don't understand how you could import an SD that was not exported. Adding the entry to /etc/exports is meaningless.
Do you have the ovfs of the old VMs available? In the meantime checking for an easier solution than the above, will get back to you on that.
Regards,
Vered

Dear Vered,
Thank you very much for your reply; I didn't expect this (at first hand) simple action to be so hard/complex to perform. My machine is indeed a single machine containing both a node, webinterface and storage. The old setup has been reinstalled, so I can't get that one to work anymore. On the other hand I do still have a database dump from it. I'm using the webadmin indeed and if I go to storage there's a button saying import domain. In the dialogue that pops up after pressing that button it's mentioned that I should use a FQDN/IP notation which made me expect that it uses NFS ;-). Yes I do have ovf's, this is a directory listing from one of the storage domains: [root@leiden 1979444d-b79a-494c-8c1a-bcc132e31a04]# find . ./images ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99 ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31 ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31.lease ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31.meta ./images/d72c41ff-2e34-474d-86be-3c11181b2128 ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93.lease ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93.meta ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6.meta ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b.meta ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b.lease ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6.lease ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1 ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9.lease ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9.meta ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9 ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165.lease ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165 ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165.meta ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2 ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc.meta ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc.lease ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc ./images/a33a673d-751f-4287-a655-e84dfcfcd005 ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287.lease ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287 ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287.meta ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf.lease ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf.meta ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5 
./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df.lease ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df.meta ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e.meta ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e.lease ./master ./master/vms ./master/vms/0edd5aea-3425-4780-8f54-1c84f9a87765 ./master/vms/0edd5aea-3425-4780-8f54-1c84f9a87765/0edd5aea-3425-4780-8f54-1c84f9a87765.ovf ./master/vms/0b03d653-127a-449a-b6c6-276fed15de1b ./master/vms/0b03d653-127a-449a-b6c6-276fed15de1b/0b03d653-127a-449a-b6c6-276fed15de1b.ovf ./master/vms/00000000-0000-0000-0000-000000000000 ./master/vms/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000.ovf ./master/vms/f45a4a7c-5db5-40c2-af06-230aa5f2b090 ./master/vms/f45a4a7c-5db5-40c2-af06-230aa5f2b090/f45a4a7c-5db5-40c2-af06-230aa5f2b090.ovf ./master/vms/0b062e65-7b0f-4177-9e08-cba48230f89a ./master/vms/0b062e65-7b0f-4177-9e08-cba48230f89a/0b062e65-7b0f-4177-9e08-cba48230f89a.ovf ./master/vms/a466a009-cde7-40db-b3db-712b737eb64a ./master/vms/a466a009-cde7-40db-b3db-712b737eb64a/a466a009-cde7-40db-b3db-712b737eb64a.ovf ./master/vms/c040505a-da58-4ee1-8e17-8e32b9765608 ./master/vms/c040505a-da58-4ee1-8e17-8e32b9765608/c040505a-da58-4ee1-8e17-8e32b9765608.ovf ./master/vms/45434b2f-2a79-4a13-812e-a4fd2f563947 ./master/vms/45434b2f-2a79-4a13-812e-a4fd2f563947/45434b2f-2a79-4a13-812e-a4fd2f563947.ovf ./master/vms/a16e4354-0c32-47c1-a01b-7131da3dbb6b ./master/vms/a16e4354-0c32-47c1-a01b-7131da3dbb6b/a16e4354-0c32-47c1-a01b-7131da3dbb6b.ovf ./master/vms/b6cd8901-6832-4d95-935e-bb24d53f486d ./master/vms/b6cd8901-6832-4d95-935e-bb24d53f486d/b6cd8901-6832-4d95-935e-bb24d53f486d.ovf ./master/tasks ./dom_md ./dom_md/inbox ./dom_md/ids ./dom_md/metadata ./dom_md/outbox ./dom_md/leases How should I go on recovering those VMs? Cheers, Boudewijn

On 03/09/2014 06:04 PM, Boudewijn Ector wrote:
On 09-03-14 14:11, Vered Volansky wrote:
Hi Boudewijn,
First of all, the wiki page you are referring to is a feature page that was never implemented. We are currently working on the same feature, this is the correct feature page: http://www.ovirt.org/Features/ImportStorageDomain . I will ask that the irrelevant page be removed to avoid any further confusion.
Second, currently, the only way to import a domain is to create an export domain and import it. Can you get the old setup up and create an export domain? If not, we'll try to help and work around this issue, but this is going to be very complex, since this is not supported.
Just to have a general understanding of your setup - is your storage on the same machine as the ovirt engine?
I'm assuming you're using web-admin. What exactly are you doing there? We don't have the import option there as you mentioned, so I don't understand how you could import an SD that was not exported. Adding the entry to /etc/exports is meaningless.
Do you have the ovfs of the old VMs available? In the meantime checking for an easier solution than the above, will get back to you on that.
Regards,
Vered

Dear Vered,
Thank you very much for your reply; I didn't expect this (at first hand) simple action to be so hard/complex to perform.
hence the feature page you saw, to add this functionality (currently work in progress)
My machine is indeed a single machine containing both a node, webinterface and storage.
The old setup has been reinstalled, so I can't get that one to work anymore. On the other hand I do still have a database dump from it. I'm using the webadmin indeed and if I go to storage there's a button saying import domain. In the dialogue that pops up after pressing that button it's mentioned that I should use a FQDN/IP notation which made me expect that it uses NFS ;-).
Yes I do have ovf's, this is a directory listing from one of the storage domains:
[root@leiden 1979444d-b79a-494c-8c1a-bcc132e31a04]# find . ./images ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99 ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31 ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31.lease ./images/6055a0a2-a6e9-4466-b0eb-3928c5c84d99/9e5be41b-c512-4f22-9d7c-81090d62dc31.meta ./images/d72c41ff-2e34-474d-86be-3c11181b2128 ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93.lease ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93.meta ./images/d72c41ff-2e34-474d-86be-3c11181b2128/0dd635c7-ef77-4e0f-962c-90b66085ba93 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6.meta ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b.meta ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6 ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/38eee7d5-9fd1-44b0-876c-b24e4bc0085b.lease ./images/df485d87-7dda-4cee-8fd0-5dc8b00d44c6/988f90f6-a37d-4dfd-8477-70aa5d2db5b6.lease ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1 ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9.lease ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9.meta ./images/898c01ea-d7b5-4d59-8329-c9140f3e55c1/8b511fc2-4ec5-4c82-9faf-93da8490adc9 ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165.lease ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165 ./images/1f6b7b10-736c-4a6c-9743-a628f370ff2f/8633fb9b-9c08-406b-925e-7d5955912165.meta ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2 ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc.meta ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc.lease ./images/c866dd6c-c7e5-419a-85e4-af49228be5a2/5e56a396-8deb-4c04-9897-0e4f6582abcc ./images/a33a673d-751f-4287-a655-e84dfcfcd005 ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287.lease ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287 ./images/a33a673d-751f-4287-a655-e84dfcfcd005/2cd8d3dc-e92f-4be5-88fa-923076aba287.meta ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf.lease ./images/07b94c8d-8195-449b-b5e0-873bde6f85fd/efc46a9a-6fcd-4e48-a197-e6bdf1e655bf.meta ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5 ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df.lease ./images/72dba8d6-4303-4db7-8a32-aafa0a3165a5/caecf666-302d-426c-8a32-65eda8d9e5df.meta ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e.meta ./images/7fd446ce-bfb5-4706-9eb8-4133fcfbc00d/88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e.lease ./master ./master/vms ./master/vms/0edd5aea-3425-4780-8f54-1c84f9a87765 ./master/vms/0edd5aea-3425-4780-8f54-1c84f9a87765/0edd5aea-3425-4780-8f54-1c84f9a87765.ovf 
./master/vms/0b03d653-127a-449a-b6c6-276fed15de1b ./master/vms/0b03d653-127a-449a-b6c6-276fed15de1b/0b03d653-127a-449a-b6c6-276fed15de1b.ovf ./master/vms/00000000-0000-0000-0000-000000000000 ./master/vms/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000.ovf ./master/vms/f45a4a7c-5db5-40c2-af06-230aa5f2b090 ./master/vms/f45a4a7c-5db5-40c2-af06-230aa5f2b090/f45a4a7c-5db5-40c2-af06-230aa5f2b090.ovf ./master/vms/0b062e65-7b0f-4177-9e08-cba48230f89a ./master/vms/0b062e65-7b0f-4177-9e08-cba48230f89a/0b062e65-7b0f-4177-9e08-cba48230f89a.ovf ./master/vms/a466a009-cde7-40db-b3db-712b737eb64a ./master/vms/a466a009-cde7-40db-b3db-712b737eb64a/a466a009-cde7-40db-b3db-712b737eb64a.ovf ./master/vms/c040505a-da58-4ee1-8e17-8e32b9765608 ./master/vms/c040505a-da58-4ee1-8e17-8e32b9765608/c040505a-da58-4ee1-8e17-8e32b9765608.ovf ./master/vms/45434b2f-2a79-4a13-812e-a4fd2f563947 ./master/vms/45434b2f-2a79-4a13-812e-a4fd2f563947/45434b2f-2a79-4a13-812e-a4fd2f563947.ovf ./master/vms/a16e4354-0c32-47c1-a01b-7131da3dbb6b ./master/vms/a16e4354-0c32-47c1-a01b-7131da3dbb6b/a16e4354-0c32-47c1-a01b-7131da3dbb6b.ovf ./master/vms/b6cd8901-6832-4d95-935e-bb24d53f486d ./master/vms/b6cd8901-6832-4d95-935e-bb24d53f486d/b6cd8901-6832-4d95-935e-bb24d53f486d.ovf ./master/tasks ./dom_md ./dom_md/inbox ./dom_md/ids ./dom_md/metadata ./dom_md/outbox ./dom_md/leases
How should I go on recovering those VMs?
change the domain into an export domain, attach it, and import the VMs from it (currently, you can only import an export domain). I think this thread has the relevant info: https://www.mail-archive.com/users@ovirt.org/msg04257.html

How should I go on recovering those VMs?
change the domain into an export domain, attach it, and import the VMs from it (currently, you can only import an export domain) i think this thread has the relevant info: https://www.mail-archive.com/users@ovirt.org/msg04257.html
Okay, thank you very much for pointing that out. I now realise the difference between the regular and export shares.

I just changed the metadata file (at /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/dom_md/metadata) from:

[root@leiden dom_md]# cat metadata.backup
CLASS=Data
DESCRIPTION=leiden-data
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=613
POOL_DESCRIPTION=Default
POOL_DOMAINS=dafe25c2-1ce7-4979-9d8d-a35688da207a:Active,1979444d-b79a-494c-8c1a-bcc132e31a04:Active,d2676b04-e2ff-420f-8f29-36dafc2df47b:Active
POOL_SPM_ID=1
POOL_SPM_LVER=0
POOL_UUID=5849b030-626e-47cb-ad90-3ce782d831b3
REMOTE_PATH=192.168.1.44:/raid/ovirt/data
ROLE=Master
SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04
TYPE=NFS
VERSION=3
_SHA_CKSUM=cab6c41e19812714ba79c48fc98b7037032725e4

into:

CLASS=Backup
DESCRIPTION=export-storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=613
POOL_DESCRIPTION=Default
POOL_DOMAINS=
POOL_SPM_ID=1
POOL_SPM_LVER=0
POOL_UUID=
REMOTE_PATH=nfsserver:/raid/ovirt-old/data
ROLE=Regular
SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04
TYPE=NFS
VERSION=3

(There's another storage domain which has to be converted too, but that's not that relevant imo.)

Now I log in to the webinterface, select storage domain -> import domain:

Domain function set to export / NFS
export path: 192.168.1.44:/raid/ovirt-old/data/
(192.168.1.44 is the server's IP)

Now this error pops up:

Error while executing action: Cannot add Storage. Storage format V3 is not supported on the selected host version.

I just had a look for this one and found this bug (in which you replied... such a small world :) ): https://bugzilla.redhat.com/show_bug.cgi?id=1059604 - but there's no obvious solution over there.

Despite this, the domains are available in the storage domain list, but they're unattached so of no use to me.

Am I still missing something?

Cheers,
Boudewijn

(there's another storage domain which has to be converted too but that's not that relevant imo).
Now I log in onto the webinterface, select storage domain -> import domain :
Domain function set to export/NFS export path: 192.168.1.44:/raid/ovirt-old/data/
(192.168.1.44 is the server's IP)
Now this error pops up:
Error while executing action: Cannot add Storage. Storage format V3 is not supported on the selected host version.
I just had a look for this one and found this bug (in which you replied... such a small world :) ) : https://bugzilla.redhat.com/show_bug.cgi?id=1059604 But there's no obvious solution over there.
Despite this the domains are available in the storage domain list but they're unattached so no use to me.
Am I still missing something?
Cheers,
Boudewijn
Interesting: I just tried re-adding it again (maybe rereading the storage domain takes a lot of time), but despite it not showing up in the storage domain list, ovirt's webadmin tells me the repo has already been added:

Error while executing action: Cannot add Storage Connection. Storage connection already exists.

Strange.

Cheers,
Boudewijn

Hi,

Regarding what to do in order to be able to add the domain: due to the bug you pointed out (https://bugzilla.redhat.com/show_bug.cgi?id=1059604), the workaround is to move the host to a datacenter. Check the host's cluster under the web interface - if it does not list a datacenter, you can move the host to maintenance, select the cluster, click 'edit' and then select a datacenter.

The second error, "Error while executing action: Cannot add Storage Connection. Storage connection already exists.", is also a known bug (https://bugzilla.redhat.com/show_bug.cgi?id=1014966). You need to find the storage connection and remove it manually from oVirt. This can be done with REST or the SDK. For REST:

1. Find the ID of the connection. It should be at: <fqdn/ip of engine>/api/storageconnections (look for the connection that lists 192.168.1.44:/raid/ovirt-old/data and note its ID).
2. Send a DELETE request to <fqdn/ip of engine>/api/storageconnections/<ID of storage connection>

Thanks,
Gadi Ickowicz

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl>
To: users@ovirt.org
Sent: Monday, March 10, 2014 1:00:35 AM
Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
(there's another storage domain which has to be converted too but that's not that relevant imo).
Now I log in onto the webinterface, select storage domain -> import domain :
Domain function set to export/NFS export path: 192.168.1.44:/raid/ovirt-old/data/
(192.168.1.44 is the server's IP)
Now this error pops up:
Error while executing action: Cannot add Storage. Storage format V3 is not supported on the selected host version.
I just had a look for this one and found this bug (in which you replied... such a small world :) ) : https://bugzilla.redhat.com/show_bug.cgi?id=1059604 But there's no obvious solution over there.
Despite this the domains are available in the storage domain list but they're unattached so no use to me.
Am I still missing something?
Cheers,
Boudewijn
Interesting: i just tried readding it again (maybe rereading the storage domain takes a lot of time) but despite it not showing up in the storage domain list, ovirt's webadmin tells me the repo has already been added: Error while executing action: Cannot add Storage Connection. Storage connection already exists. Strange. Cheers, Boudewijn
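Gadi's two REST steps can be done with curl along these lines (a sketch: the engine address, the admin@internal password and the connection ID are placeholders, and -k skips certificate validation):

# 1. list storage connections and note the ID of the one pointing at 192.168.1.44:/raid/ovirt-old/data
curl -k -u 'admin@internal:password' https://<fqdn/ip of engine>/api/storageconnections
# 2. delete that connection by ID
curl -k -u 'admin@internal:password' -X DELETE https://<fqdn/ip of engine>/api/storageconnections/<ID of storage connection>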

On 10-03-14 07:48, Gadi Ickowicz wrote:
Hi,
Regarding what to do in order to be able to add the domain, due to the bug you pointed out (https://bugzilla.redhat.com/show_bug.cgi?id=1059604), the workaround is to move the host to a datacenter. Check the host's cluster under the web interface - if it does not list a datacenter, you can move the host to maintenance, select the cluster, click 'edit' and then select a datacenter.
The second error "Error while executing action: Cannot add Storage Connection. Storage connection already exists." is also a known bug (https://bugzilla.redhat.com/show_bug.cgi?id=1014966). You need to find the storage connection and remove it manually from oVirt. This can be done with REST or SDK.
For REST:
1. Find the ID of the connection. It should be at: <fqdn/ip of engine>/api/storageconnections (look for the connection that lists 192.168.1.44:/raid/ovirt-old/data and note its ID).
2. Send a DELETE request to <fqdn/ip of engine>/api/storageconnections/<ID of storage connection>
Thanks, Gadi Ickowicz
Hi Gadi,

My machine is indeed already in a DC; I missed that part of the bug completely last night ;-). Regarding the storage connection: okay, I'll have it removed. On the other hand, will re-adding it by hand (again) help me in getting my old VMs to work again?

I really like ovirt, but having this amount of problems when reinstalling my hypervisor really spoils the party :(.

Cheers,
Boudewijn

On 03/10/2014 12:30 AM, Boudewijn Ector wrote:
How should I go on recovering those VMs?
change the domain into an export domain, attach it, and import the VMs from it (currently, you can only import an export domain) i think this thread has the relevant info: https://www.mail-archive.com/users@ovirt.org/msg04257.html
Okay thank you very much for pointing that out. I now realise the difference between the regular and export shares.
I just changed the metadata file (at /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/dom_md/metadata) from:
[root@leiden dom_md]# cat metadata.backup CLASS=Data DESCRIPTION=leiden-data IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS=dafe25c2-1ce7-4979-9d8d-a35688da207a:Active,1979444d-b79a-494c-8c1a-bcc132e31a04:Active,d2676b04-e2ff-420f-8f29-36dafc2df47b:Active POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID=5849b030-626e-47cb-ad90-3ce782d831b3 REMOTE_PATH=192.168.1.44:/raid/ovirt/data ROLE=Master SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3 _SHA_CKSUM=cab6c41e19812714ba79c48fc98b7037032725e4
into:
CLASS=Backup DESCRIPTION=export-storage IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS= POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID= REMOTE_PATH=nfsserver:/raid/ovirt-old/data ROLE=Regular SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3
(there's another storage domain which has to be converted too but that's not that relevant imo).
Now I log in onto the webinterface, select storage domain -> import domain :
Domain function set to export/NFS export path: 192.168.1.44:/raid/ovirt-old/data/
(192.168.1.44 is the server's IP)
Now this error pops up:
Error while executing action: Cannot add Storage. Storage format V3 is not supported on the selected host version.
I just had a look for this one and found this bug (in which you replied... such a small world :) ) : https://bugzilla.redhat.com/show_bug.cgi?id=1059604 But there's no obvious solution over there.
the bug is about a host with a cluster not associated with a DC. Is your cluster associated with a DC?
Despite this the domains are available in the storage domain list but they're unattached so no use to me.
Am I still missing something?
Cheers,
Boudewijn

On 10-03-14 10:46, Itamar Heim wrote:
On 03/10/2014 12:30 AM, Boudewijn Ector wrote:
How should I go on recovering those VMs?
change the domain into an export domain, attach it, and import the VMs from it (currently, you can only import an export domain) i think this thread has the relevant info: https://www.mail-archive.com/users@ovirt.org/msg04257.html
Okay thank you very much for pointing that out. I now realise the difference between the regular and export shares.
I just changed the metadata file (at /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/dom_md/metadata) from:
[root@leiden dom_md]# cat metadata.backup CLASS=Data DESCRIPTION=leiden-data IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS=dafe25c2-1ce7-4979-9d8d-a35688da207a:Active,1979444d-b79a-494c-8c1a-bcc132e31a04:Active,d2676b04-e2ff-420f-8f29-36dafc2df47b:Active
POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID=5849b030-626e-47cb-ad90-3ce782d831b3 REMOTE_PATH=192.168.1.44:/raid/ovirt/data ROLE=Master SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3 _SHA_CKSUM=cab6c41e19812714ba79c48fc98b7037032725e4
into:
CLASS=Backup DESCRIPTION=export-storage IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS= POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID= REMOTE_PATH=nfsserver:/raid/ovirt-old/data ROLE=Regular SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3
(there's another storage domain which has to be converted too but that's not that relevant imo).
Now I log in onto the webinterface, select storage domain -> import domain :
Domain function set to export/NFS export path: 192.168.1.44:/raid/ovirt-old/data/
(192.168.1.44 is the server's IP)
Now this error pops up:
Error while executing action: Cannot add Storage. Storage format V3 is not supported on the selected host version.
I just had a look for this one and found this bug (in which you replied... such a small world :) ) : https://bugzilla.redhat.com/show_bug.cgi?id=1059604 But there's no obvious solution over there.
the bug is about a host with a cluster not associated with a DC. Is your cluster associated with a DC?
Yes it is, so my buglink isn't that relevant after all. Thanks :). Despite that, my setup still isn't running and I haven't got a clue on how to get it to work again.

Cheers,
Boudewijn

into:
CLASS=Backup DESCRIPTION=export-storage IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS= POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID= REMOTE_PATH=nfsserver:/raid/ovirt-old/data ROLE=Regular SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3
This needs to be VERSION=0

(I successfully rescued a storage domain this way just this weekend)

Good luck!

Jason
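For anyone repeating this later, the whole metadata edit can be scripted roughly as follows (a sketch only; back the file up first - the field changes simply mirror the before/after metadata shown earlier in the thread):

cd /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/dom_md
cp metadata metadata.backup
sed -i -e 's/^CLASS=Data/CLASS=Backup/' \
       -e 's/^ROLE=Master/ROLE=Regular/' \
       -e 's/^VERSION=3/VERSION=0/' \
       -e 's/^POOL_UUID=.*/POOL_UUID=/' \
       -e 's/^POOL_DOMAINS=.*/POOL_DOMAINS=/' \
       -e '/^_SHA_CKSUM=/d' metadata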

On 10-03-14 16:55, Jason Brooks wrote:
into:
CLASS=Backup DESCRIPTION=export-storage IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS= POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID= REMOTE_PATH=nfsserver:/raid/ovirt-old/data ROLE=Regular SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3 This needs to be VERSION=0
(I successfully rescued a storage domain this way just this weekend)
Good luck!
Jason
Hi Jason,

Thanks! I'm going to try this after finishing my coffee! Can you explain to me *why* this will change the situation? I'd like to have a better understanding about the inner workings of ovirt.

Cheers,
Boudewijn

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: "Jason Brooks" <jbrooks@redhat.com> Cc: users@ovirt.org Sent: Monday, March 10, 2014 8:56:31 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
On 10-03-14 16:55, Jason Brooks wrote:
into:
CLASS=Backup DESCRIPTION=export-storage IOOPTIMEOUTSEC=10 LEASERETRIES=3 LEASETIMESEC=60 LOCKPOLICY= LOCKRENEWALINTERVALSEC=5 MASTER_VERSION=613 POOL_DESCRIPTION=Default POOL_DOMAINS= POOL_SPM_ID=1 POOL_SPM_LVER=0 POOL_UUID= REMOTE_PATH=nfsserver:/raid/ovirt-old/data ROLE=Regular SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04 TYPE=NFS VERSION=3 This needs to be VERSION=0
(I successfully rescued a storage domain this way just this weekend)
Good luck!
Jason
Hi Jason,
Thanks! I'm going to try this after finishing my coffee! Can you explain to me *why* this will change the situation? I'd like to have a better understanding about the inner workings of ovirt.
I don't know -- VERSION=0 is what you find in the export domain metadata files. I'd love to be able to bring back images w/o the export domain step in the middle...
Cheers,
Boudewijn

Hi Guys,

Thank you very much Jason, Itamar, Gadi! I'm almost there thanks to your help.

My steps were:

- remove the old storage connection:

$ curl -u "admin@internal:*****" -X DELETE https://192.168.1.44:443/api/storageconnections/5636a8c3-65b6-44a4-9ba4-e598dc60a4e4 -k

- change VERSION 3 -> 0 in the metadata file

- do an import (export NFS): 192.168.1.44:/raid/ovirt-old/data

Currently the box is importing the VMs from the old to the new repository.

But, there's still a last challenge to overcome:

I had this VM with about 2TB of storage attached on both my storage domains (I had another one too). This poses two different problems:

- I can't import the VM from the first storage domain, since not all disks resided on this single storage domain. This error pops up (which is quite logical):

Error while executing action:
downloadbak:
* Cannot import VM. VM's Image does not exist.

Yeah, I know I shouldn't have split the VM's resources over multiple storage domains... I won't make that mistake again. Is there a way to move these disks from the second "old domain" to the first "old domain" so the VM can be reimported?

Furthermore, my storage machine has about 1.4TB of free disk space left. While importing, a copy of a VM is being made... and this particular one is 2TB. So I'm going to run out of disk space while doing so; is there a way to move instead of copy a VM while importing it?

Cheers (and thanks a lot for all the help, I really appreciate it!),

Boudewijn

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: "Jason Brooks" <jbrooks@redhat.com> Cc: users@ovirt.org Sent: Monday, March 10, 2014 10:21:27 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
Hi Guys,
Thank you very much Jason, Itamar, Gadi! I'm almost there thanks to your help.
My steps were:
- remove the old storageconnection:
$ curl -u "admin@internal:*****" -X DELETE https://192.168.1.44:443/api/storageconnections/5636a8c3-65b6-44a4-9ba4-e598... -k
I changed version 3 -> 0 in metadata file
And did an import (export NFS): 192.168.1.44:/raid/ovirt-old/data
Currently the box is importing the VMs from the old to the new repository.
But, there's still a last challenge to overcome:
I had this VM with about 2TB storage attached on both my storage domains (I had another one too). This poses two different problems:
- I can't import the VM from the first storage domain, since not all disks did reside on this single storage domain. This error pops up (which is quite logical):
Error while executing action:
downloadbak:
* Cannot import VM. VM's Image does not exist.
Yeah I know I shouldn't have split the VMs resources over multiple storage domains... I won't make that mistake again. Is there a way to move these disks from the second "old domain" to the first "old domain" so the VM can be reimported?
Furthermore, my storage machine has about 1.4TB free disk space left. While importing, a copy of a VM is being made... and this particular one is 2TB. So I'm going to run out of disk space while doing so; is there a way to move instead of copy a VM while importing it?
Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.

Jason
Cheers (and thanks a lot for all the help, I really appreciate it!),
Boudewijn
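On the storage side, the trick Jason describes would look roughly like this (a sketch; every UUID placeholder below is hypothetical, and the cp/chown details are assumptions rather than a tested recipe):

# create a new disk of the same size via the web UI first, then locate its files on the new domain
OLD=/raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/images/<old-image-uuid>/<old-volume-uuid>
NEW=/raid/ovirt/data/<new-domain-uuid>/images/<new-image-uuid>/<new-volume-uuid>
cp --sparse=always "$OLD" "$NEW"   # overwrite the placeholder volume, keeping its new name
chown 36:36 "$NEW"                 # vdsm:kvm ownership (assumption: uid/gid 36)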

Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.
Jason
Hi Jason,

Thanks - that's quite an ugly option, although better than not being able to do it at all... I'll keep it in mind anyway if everything else fails.

Well, I was wondering: can't I just copy those images and VMs from the second storage domain to the first and add them to the metadata? I might give that a shot tonight.

Cheers,
Boudewijn

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: "Jason Brooks" <jbrooks@redhat.com> Cc: users@ovirt.org Sent: Monday, March 10, 2014 10:46:45 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.
Jason
Hi Jason,
Thanks, although that's quite an ugly option, although better than not being able to do it at all... I'll keep it in mind anyway if everything else fails.
Well I was wondering: Can't I just copy those images and vm's from the second storage domain to the first and add those in the metadata? I might give that a shot tonight.
There's a database element to this as well, that's why I think there's the "fake" image creation element.
Cheers,
Boudewijn

Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.
Jason
Hi Jason,

Due to lack of a viable alternative, I've decided to go and try this approach. I just had a look at my data files:

- these are either 8gb (OS) or 250gb (LVM images)
- I can't mount those directly in my host OS (tried because of the next point)
- I don't know to what VM each image belongs. They're all quite the same (basic debian install), so determining it just by running strings etc. on those will not be easy
- I can't import the old VMs from the old storage. If I create new images and dd the old information into those new images, the metadata will not be copied too.

So the only option is not reusing the VMs but creating completely new ones and determining which disk images are required for these VMs, then creating the new image structure and dd'ing the corresponding images from the old VMs into the new ones. In order to do so I need to know what data belongs to what VM. Is there a trick for doing this?

I still do have the database from the old ovirt machine; this might save me. Will have a look into that one tomorrow.

Cheers,
Boudewijn
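Since the export domain keeps one OVF per VM under master/vms, a rough way to map images to VM names without the engine DB is to grep those files (a sketch; the exact OVF tag and attribute names can differ between versions, so treat the patterns as a starting point):

cd /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/master/vms
for ovf in */*.ovf; do
    echo "== $ovf"
    grep -o '<Name>[^<]*</Name>' "$ovf"   # VM name
    grep -o 'diskId="[^"]*"' "$ovf"       # disk ids referenced by this VM
done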

----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: "Jason Brooks" <jbrooks@redhat.com> Cc: users@ovirt.org Sent: Tuesday, March 11, 2014 1:32:00 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.
Jason
Hi Jason,
Due to lack of viable alternative, I've decided to go and try this approach. I just had a look at my datafiles:
- these are either 8gb (OS) or 250gb (LVM images) - can't mount those directly in my host OS (tried because of the next point) - I don't know to what VM this image/VM belongs . They're all quite the same (basic debian install), so determining it just by running strings etc on those will not be easy - I can't import the old VMs from the old storage. If I create new images and dd the old information into those new images the metadata will not be copied too.
So the only option is not reusing the VM's but creating completely new ones and determining which disk images are required for these VMs. Then creating the new image structure and dd'ing the corresponding images from the old VMs into the new ones. In order to do so I need to know what data belongs to what VM. Is there a trick for doing this?
I still do have the database from the old ovirt machine, this might save me. Will have a look into that one tomorrow.
Cheers,
Boudewijn
Hi Boudewijn,

So we can proceed and recover your data, I'd like to know:
1. Can you use the db backup? Will you lose any important data if you choose to use it?
2. Did you have snapshots for your VMs?

Please answer so we can proceed with it; we can find methods for restoring without having to perform an image copy (and methods that restore with copying), but each way has its implications.

Thanks,
Liron.

Hi Boudewijn, So we can proceed and recover your data i'd like to know - 1. can you use the db backup? will you lose any important data if you chose to use it? 2, did you have snapshots for your vm?
please answer so we can proceed with it, we can find methods for restoring without having to perform images copy (and to restore with copying) - but each way has it's implications. thanks, Liron.
Dear Liron,

Sure, thank you very much :).

1: Well, it's just a database dump. I can import it into a fresh db, that's no problem.
2: No, I did not have snapshots for my VMs as far as I know. I might have a single one (in order to test/play around), but I will not mind losing it.

Cheers,
Boudewijn

On 11-03-14 16:39, Liron Aravot wrote:
> ----- Original Message -----
>> From: "Boudewijn Ector" <boudewijn@boudewijnector.nl>
>> To: "Jason Brooks" <jbrooks@redhat.com>
>> Cc: users@ovirt.org
>> Sent: Tuesday, March 11, 2014 1:32:00 AM
>> Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
>>
>>> Some people have reported success creating an image of the desired size, then noting the name of this new image, and copying the old image into the place of the new one, with the new name. Something like that might work, but I don't have first-hand experience w/ it.
>>>
>>> Jason
>>
>> Hi Jason,
>>
>> Due to lack of viable alternative, I've decided to go and try this approach. I just had a look at my datafiles:
>>
>> - these are either 8gb (OS) or 250gb (LVM images)
>> - can't mount those directly in my host OS (tried because of the next point)
>> - I don't know to what VM this image/VM belongs. They're all quite the same (basic debian install), so determining it just by running strings etc on those will not be easy
>> - I can't import the old VMs from the old storage. If I create new images and dd the old information into those new images the metadata will not be copied too.
>>
>> So the only option is not reusing the VM's but creating completely new ones and determining which disk images are required for these VMs. Then creating the new image structure and dd'ing the corresponding images from the old VMs into the new ones. In order to do so I need to know what data belongs to what VM. Is there a trick for doing this?
>>
>> I still do have the database from the old ovirt machine, this might save me. Will have a look into that one tomorrow.
>>
>> Cheers,
>> Boudewijn
>
> Hi Boudewijn,
> So we can proceed and recover your data i'd like to know -
> 1. can you use the db backup? will you lose any important data if you chose to use it?
> 2. did you have snapshots for your vm?
>
> please answer so we can proceed with it, we can find methods for restoring without having to perform images copy (and to restore with copying) - but each way has its implications.
> thanks,
> Liron.

Hi Liron,

Have you already been able to look at my reply on the list? It would be great for me to be able to make some decent progress this weekend.

Cheers,
Boudewijn

----- Original Message ----- > From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> > To: "Liron Aravot" <laravot@redhat.com> > Cc: "Jason Brooks" <jbrooks@redhat.com>, users@ovirt.org > Sent: Saturday, March 15, 2014 5:01:55 PM > Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt > > On 11-03-14 16:39, Liron Aravot wrote: > > > > ----- Original Message ----- > >> From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> > >> To: "Jason Brooks" <jbrooks@redhat.com> > >> Cc: users@ovirt.org > >> Sent: Tuesday, March 11, 2014 1:32:00 AM > >> Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt > >> > >> > >>> Some people have reported success creating an image of the desired > >>> size, then noting the name of this new image, and copying the old > >>> image into the place of the new one, with the new name. Something > >>> like that might work, but I don't have first-hand experience w/ > >>> it. > >>> > >>> Jason > >> Hi Jason, > >> > >> > >> Due to lack of viable alternative, I've decided to go and try this > >> approach. I just had a look at my datafiles: > >> > >> - these are either 8gb (OS) or 250gb (LVM images) > >> - can't mount those directly in my host OS (tried because of the next > >> point) > >> - I don't know to what VM this image/VM belongs . They're all quite > >> the same (basic debian install), so determining it just by running > >> strings etc on those will not be easy > >> - I can't import the old VMs from the old storage. If I create new > >> images and dd the old information into those new images the metadata > >> will not be copied too. > >> > >> So the only option is not reusing the VM's but creating completely > >> new ones and determining which disk images are required for these VMs. > >> Then creating the new image structure and dd'ing the corresponding > >> images from the old VMs into the new ones. In order to do so I need to > >> know what data belongs to what VM. > >> Is there a trick for doing this? > >> > >> I still do have the database from the old ovirt machine, this might > >> save me. Will have a look into that one tomorrow. > >> > >> Cheers, > >> > >> Boudewijn > > Hi Boudewijn, > > So we can proceed and recover your data i'd like to know - > > 1. can you use the db backup? will you lose any important data if you chose > > to use it? > > 2, did you have snapshots for your vm? > > > > please answer so we can proceed with it, we can find methods for restoring > > without having to perform images copy (and to restore with copying) - but > > each way has it's implications. > > thanks, > > Liron. > > > Hi Liron, > > > > Have you already been able to look at my reply on the list? It would be > great for me to be able to make some decent progress this weekend. > > Cheers, > > Boudewijn Hi Boudewijn, if you have db backup and you won't lose any data using it - it would be the simplest approach. Please read carefully the following options and keep backup before attempting any of it - for vm's that you don't have space issue with - you can try to previously suggested approach, but it'll obviously take longer as it requires copying of the data. Option A - *doesn't require copying the disks *if your vms had snapshots involving disks - it won't work currently. let's try to restore a specific vm and continue from there - i'm adding here info - if needed i'll test it on my own deployment. A. first of all, let's get the disks attached to some vm : some options to do that. 
*under the webadmin ui, select a vm listed under the "export" domain, there should be a disks tab indicating what disks are attached to the vm - check if you can see the disk id's there. B. query the storage domain content using rest-api - afaik we don't return that info from there. so let's skip that option. 1. under the storage domain storage directory (storage) enter the /vms directory - you should see bunch of OVF files there - that's a file containing a vm configuration. 2. open one specific ovf file - that's the vm that we'll attempt to restore - the ovf file is a file containing the vm configuration *within the ovf file look for the following string: "diskId" and copy those ids aside, these should be the vm attached disks. *copy the vm disk from the other storage domain, edit the metadata accordingly to have the proper storage domain id listed *try to import the disks using the method specified here: https://bugzilla.redhat.com/show_bug.cgi?id=886133 *after this, you should see the disks as "floating", then you can add the vm using the OVF file we discussed in stage 2 using the method specified here: http://gerrit.ovirt.org/#/c/15894/ Option B - *Replace the images data files with blank files *Initiate Import for the vm, should be really quick obviously. *As soon as the import starts, you can either: 1. let the import be done and replace the data, not that in that case the info saved in the engine db won't be correct (for example, the actual image size..etc) 2. after the tasks for importing are created (you'll see that in the engine log), turn the engine down immediately (immediately means within few seconds) and after the copy tasks completes on the host replace the data files and then start the engine - so that when the engine will start it'll update the db information according to the updated data files. >

Hi Boudewijn,

If you have a db backup and you won't lose any data using it, that would be the simplest approach.
Please read the following options carefully and keep a backup before attempting any of them. For VMs that you don't have a space issue with, you can try the previously suggested approach, but it'll obviously take longer as it requires copying the data.
Option A:
* doesn't require copying the disks
* if your VMs had snapshots involving disks, it won't work currently
Let's try to restore a specific VM and continue from there - I'm adding the info here; if needed I'll test it on my own deployment.

A. First of all, let's get the disks attached to some VM. Some options to do that:
* Under the webadmin UI, select a VM listed under the "export" domain - there should be a disks tab indicating what disks are attached to the VM; check if you can see the disk ids there.
B. Query the storage domain content using the rest-api - afaik we don't return that info from there, so let's skip that option.

1. Under the storage domain's storage directory, enter the /vms directory - you should see a bunch of OVF files there; an OVF is a file containing a VM's configuration.
2. Open one specific OVF file - that's the VM we'll attempt to restore.
* Within the OVF file look for the string "diskId" and copy those ids aside; these should be the VM's attached disks.
* Copy the VM's disk from the other storage domain, and edit the metadata accordingly to have the proper storage domain id listed.
* Try to import the disks using the method specified here: https://bugzilla.redhat.com/show_bug.cgi?id=886133
* After this, you should see the disks as "floating"; then you can add the VM using the OVF file we discussed in stage 2, using the method specified here: http://gerrit.ovirt.org/#/c/15894/
Option B:
* Replace the images' data files with blank files.
* Initiate Import for the VM - this should be really quick, obviously.
* As soon as the import starts, you can either:
1. let the import finish and then replace the data; note that in that case the info saved in the engine db won't be correct (for example, the actual image size, etc.), or
2. after the tasks for importing are created (you'll see that in the engine log), turn the engine down immediately (immediately means within a few seconds) and, after the copy tasks complete on the host, replace the data files and then start the engine - so that when the engine starts it'll update the db information according to the updated data files.
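The "blank files" step of Option B could be done along these lines (a sketch; the UUID placeholders are hypothetical, and the sizes must match the real volumes exactly):

cd /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/images/<image-uuid>
mv <volume-uuid> <volume-uuid>.real                            # keep the real data aside
truncate -s "$(stat -c%s <volume-uuid>.real)" <volume-uuid>    # sparse blank file of the same size
chown --reference=<volume-uuid>.real <volume-uuid>             # keep the original ownership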
Hi Guys,

Thank you for the elaborate information. I'll indeed try to restore the DB and make sure all the mounts I had previously (when creating the DB dump) will be there too. I also just had a look in my old DB, which I've just restored:

engine=# select vm_name from vm_static;
     vm_name
-----------------
 Blank
 template
 mail
 nagios
 bacula
 debian-template
 jabber
 downloadbak
 vpn
(9 rows)

That's looking great. Actually the most important VMs to restore (the rest can be re-created in about 2-3 hours, so having to re-create those instead of restoring would be okayish) are:

- bacula
- downloadbak

Problem is that both of those are the VMs with the most disks attached. Just had a look in the database dump for the vm ids and found this:
COPY disks_vm_map (history_id, vm_disk_id, vm_id, attach_date, detach_date) FROM stdin; 2 b2c5d2d5-636c-408b-b52f-b7f5558c0f7f a16e4354-0c32-47c1-a01b-7131da3dbb6b 2014-01-21 02:32:58+01 \N 1 4ef54bf7-525b-4a73-b071-c6750fc7c907 33f78ede-e885-4636-bb0b-1021c31d1cca 2014-01-21 02:32:58+01 2014-01-21 18:52:00+01 5 38eee7d5-9fd1-44b0-876c-b24e4bc0085b 0b062e65-7b0f-4177-9e08-cba48230f89a 2014-01-22 00:02:01+01 \N 4 988f90f6-a37d-4dfd-8477-70aa5d2db5b6 0b062e65-7b0f-4177-9e08-cba48230f89a 2014-01-21 22:57:01+01 2014-01-22 00:02:01+01 6 88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e a466a009-cde7-40db-b3db-712b737eb64a 2014-01-22 00:37:01+01 \N 7 2cd8d3dc-e92f-4be5-88fa-923076aba287 c040505a-da58-4ee1-8e17-8e32b9765608 2014-01-22 00:46:01+01 \N 8 5e56a396-8deb-4c04-9897-0e4f6582abcc 45434b2f-2a79-4a13-812e-a4fd2f563947 2014-01-22 01:45:01+01 \N 9 caecf666-302d-426c-8a32-65eda8d9e5df 0edd5aea-3425-4780-8f54-1c84f9a87765 2014-01-22 19:42:02+01 \N 10 8633fb9b-9c08-406b-925e-7d5955912165 f45a4a7c-5db5-40c2-af06-230aa5f2b090 2014-01-22 19:57:02+01 \N 11 81b71076-be95-436b-9657-61890e81cee9 c040505a-da58-4ee1-8e17-8e32b9765608 2014-01-22 23:22:02+01 2014-01-30 02:09:09+01 12 924e5ba6-913e-4591-a15f-3b61eb66a2e1 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-01 20:42:12+01 2014-02-03 18:00:14+01 14 f613aa23-4831-4aba-806e-fb7dcdcd704d c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:05:14+01 \N 15 182ce48c-59d0-4883-8265-0269247d22e0 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:13:14+01 \N 16 cadcce7f-53ff-4735-b5ff-4d8fd1991d51 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:13:14+01 \N 17 76749503-4a8b-4e8f-a2e4-9d89e0de0d71 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:13:14+01 \N 18 c46bb1c0-dad9-490c-95b4-b74b25b80129 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:13:14+01 \N 19 0ad131d7-2619-42a2-899f-d25c33969dc6 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:14:14+01 \N 20 e66b18a7-e2c5-4f6c-9884-03e5c7477e3d c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-03 18:14:14+01 \N 21 e1c098fe-4b5d-4728-81d0-7edfdd3d0ec8 a16e4354-0c32-47c1-a01b-7131da3dbb6b 2014-02-04 00:45:14+01 \N 22 8b511fc2-4ec5-4c82-9faf-93da8490adc9 b6cd8901-6832-4d95-935e-bb24d53f486d 2014-02-04 01:12:14+01 \N 13 c463c150-77df-496b-bebb-6c5fe090ddd8 1964733f-c562-49e1-86b5-c71b12e8c7e2 2014-02-02 18:49:13+01 2014-02-04 01:12:14+01 3 faac72a3-57af-4508-b844-37ee547f9bf3 a16e4354-0c32-47c1-a01b-7131da3dbb6b 2014-01-21 03:55:00+01 2014-02-04 21:55:15+01 23 aca392f5-8395-46fe-9111-8a3c4812ff72 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-05 00:59:16+01 \N 24 829348f3-0f63-4275-92e1-1e84681a422b c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-05 00:59:16+01 \N 25 1d304cb5-67bd-4e21-aa2c-2470c19af885 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-05 11:50:16+01 \N 26 179ad90d-ed46-467d-ad75-aea6e3ea115e c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-05 22:41:17+01 \N 27 4d583a7a-8399-4299-9799-dec33913c20a c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-05 22:41:17+01 \N 28 9e5be41b-c512-4f22-9d7c-81090d62dc31 c040505a-da58-4ee1-8e17-8e32b9765608 2014-02-25 23:17:21+01 \N \.
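To turn that dump into a per-VM disk list, a query along these lines against the restored engine db should work (a sketch; the vm_device/vm_static names are from the 3.x schema and are worth double-checking with \d before relying on them):

psql -h localhost -U engine engine -c "
  SELECT vs.vm_name, vd.device_id AS image_group_id
    FROM vm_device vd
    JOIN vm_static vs ON vs.vm_guid = vd.vm_id
   WHERE vd.type = 'disk'
   ORDER BY vs.vm_name;"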
I created my dump using pg_dumpall > blah.sql and reimported it using psql -f blah.sql . Despite this, the table listing shows:

 Schema |               Name                | Type  | Owner
--------+-----------------------------------+-------+--------
 public | action_version_map                | table | engine
 public | ad_groups                         | table | engine
 public | async_tasks                       | table | engine
 public | async_tasks_entities              | table | engine
 public | audit_log                         | table | engine
 public | base_disks                        | table | engine
 public | bookmarks                         | table | engine
 public | business_entity_snapshot          | table | engine
 public | cluster_policies                  | table | engine
 public | cluster_policy_units              | table | engine
 public | custom_actions                    | table | engine
 public | disk_image_dynamic                | table | engine
 public | disk_lun_map                      | table | engine
 public | dwh_history_timekeeping           | table | engine
 public | dwh_osinfo                        | table | engine
 public | event_map                         | table | engine

That table doesn't exist at all... weird. I'm not a postgres guru, so maybe there's some foo involved I haven't seen :).

The mountpoints I created in my initial setup:
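Because pg_dumpall dumps every database in the cluster, a table that shows up in the dump may have been restored into a database other than "engine" (the DWH history database, for example). A rough check, assuming the restore was done into the local cluster and you can become the postgres superuser:

psql -l                        # which databases did the restore create?
for db in $(psql -At -c "select datname from pg_database where not datistemplate;"); do
    echo "== $db =="
    psql -d "$db" -c '\dt disks_vm_map'   # replace with whichever table looks missing
done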
engine=# select id,connection from storage_server_connections;
                  id                  |                  connection
--------------------------------------+-----------------------------------------------
 752c8d02-b8dd-46e3-9f51-395a7f3e246d | X.Y.nl:/var/lib/exports/iso
 162603af-2d67-4cd5-902c-a8fc3e4cbf9b | 192.168.1.44:/raid/ovirt/data
 d84f108d-86d0-42a4-9ee9-12e4506b434b | 192.168.1.44:/raid/ovirt/import_export
 f60fb79b-5062-497d-9576-d27fdcbc70a0 | 192.168.1.44:/raid/ovirt/iso
(4 rows)
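A small sanity check that the paths in storage_server_connections are still exported on the new box (a sketch; the NFS server address is taken from the table above):

showmount -e 192.168.1.44
# each connection path should be listed and mountable by hand, e.g.:
mount -t nfs 192.168.1.44:/raid/ovirt/data /mnt && ls /mnt && umount /mnt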
That looks good too; I did indeed create those. When restarting ovirt-engine I get this in my logs:
2014-03-17 01:50:06,417 ERROR [org.ovirt.engine.core.bll.Backend] (ajp--127.0.0.1-8702-2) Error in getting DB connection. The database is inaccessible. Original exception is: UncategorizedSQLException: CallableStatementCallback; uncategorized SQLException for SQL [{call checkdbconnection()}]; SQL state [25P02]; error code [0]; ERROR: current transaction is aborted, commands ignored until end of transaction block; nested exception is org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block

The web interface also stays blank.
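"current transaction is aborted" means an earlier statement in the same transaction already failed; the statement that actually failed is usually logged just above this message in the PostgreSQL server log (default location for PostgreSQL 8.4 on CentOS 6; adjust if log_directory was changed):

tail -n 100 /var/lib/pgsql/data/pg_log/postgresql-*.log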
Okay, that sounds like database permissions:

[root@Xovirt-engine]# cat /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
ENGINE_DB_HOST="localhost"
ENGINE_DB_PORT="5432"
ENGINE_DB_USER="engine"
ENGINE_DB_PASSWORD="********"
ENGINE_DB_DATABASE="engine"
ENGINE_DB_SECURED="False"
ENGINE_DB_SECURED_VALIDATION="False"
ENGINE_DB_DRIVER="org.postgresql.Driver"
ENGINE_DB_URL="jdbc:postgresql://${ENGINE_DB_HOST}:${ENGINE_DB_PORT}/${ENGINE_DB_DATABASE}?sslfactory=org.postgresql.ssl.NonValidatingFactory"

I tried to reset the database's password using this in the psql shell:

alter user engine WITH password '********';

(the same password as above). Authentication still fails, but when I do this:

psql -h localhost -p 5432 -U engine engine

it works fine... O gosh, more debugging ;). Any clue where I should look? I just tried copying the old /etc/ovirt* stuff over the new /etc/ovirt* so both the configs and the DB are sync'ed again. To no avail.

Thanks guys!
Boudewijn
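The call that fails in the engine log is checkdbconnection(). Running it by hand, outside the engine's transaction, can help tell a missing or broken function apart from an earlier failed statement (a sketch, assuming the function was restored along with the schema):

psql -h localhost -p 5432 -U engine engine -c 'select * from checkdbconnection();'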

On Mon, Mar 17, 2014 at 2:18 AM, Boudewijn Ector wrote:
For PostgreSQL access you have to check your previous settings in /var/lib/pgsql/data; the relevant files are pg_hba.conf and postgresql.conf.

On a standard oVirt 3.3.3 on CentOS 6.5 my config in pg_hba.conf is:

local   all     all                       ident
host    engine  engine  0.0.0.0/0         md5
host    engine  engine  ::0/0             md5
host    all     all     127.0.0.1/32      ident
host    all     all     ::1/128           ident

If I compare the modifications made by engine-setup with the pre-defined ones:

[root@ovirteng02 data]# diff postgresql.conf postgresql.conf.20140301072333
64,65c64
< # max_connections = 100 # (change requires restart)
< max_connections = 150
---
> max_connections = 100 # (change requires restart)

[root@ovirteng02 data]# diff pg_hba.conf pg_hba.conf.20140301072333
71,72d70
< host engine engine 0.0.0.0/0 md5
< host engine engine ::0/0 md5

Check also your PostgreSQL version with the original one.
HIH,
Gianluca
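A sketch of applying the pg_hba.conf change described above (CentOS 6 default paths; note that pg_hba.conf is evaluated top-down, so the "engine" lines must come before the generic "host all all ... ident" entries):

cp /var/lib/pgsql/data/pg_hba.conf /var/lib/pgsql/data/pg_hba.conf.bak
vi /var/lib/pgsql/data/pg_hba.conf      # add, above the "host all all" lines:
#   host    engine   engine   0.0.0.0/0   md5
#   host    engine   engine   ::0/0       md5
service postgresql reload               # pg_hba.conf changes only need a reload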

On 17-03-14 06:31, Gianluca Cecchi wrote:
check also your PostgreSQL version with the original one. HIH, Gianluca

Okay, I've finally found some time to fix my problems with the old storage domains. I reinstalled the box and it's running fine, but I'd love to recover the old domain I used for downloading stuff. I just reread your email from March 16th and I'm going to try the thing you've suggested, so I hope you're still willing to help me :).
Option A - doesn't require copying the disks. If your VMs had snapshots involving disks, it won't work currently.
let's try to restore a specific vm and continue from there - i'm adding here info - if needed i'll test it on my own deployment.

A. first of all, let's get the disks attached to some vm; some options to do that:
* under the webadmin UI, select a VM listed under the "export" domain - there should be a disks tab indicating which disks are attached to the VM; check if you can see the disk IDs there.
B. query the storage domain content using the REST API - afaik we don't return that info from there, so let's skip that option.
1. under the storage domain's storage directory, enter the /vms directory - you should see a bunch of OVF files there; an OVF file contains a VM's configuration.
2. open one specific OVF file - that's the VM that we'll attempt to restore.
* within the OVF file look for the string "diskId" and copy those IDs aside; these should be the VM's attached disks.
* copy the VM disks from the other storage domain, and edit the metadata accordingly so the proper storage domain ID is listed.
* try to import the disks using the method specified here: https://bugzilla.redhat.com/show_bug.cgi?id=886133
* after this, you should see the disks as "floating"; then you can add the VM using the OVF file we discussed in stage 2, using the method specified here: http://gerrit.ovirt.org/#/c/15894/

In order to get the disks attached to a VM, I need to move them into a new import domain. How should I determine which files to get? There were multiple VMs in the directory, each having multiple LVM-based storage domains, so I just had a look; I'm only interested in images from "Downloadbak".
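One rough way to answer "which files to get" for a single VM is to pull the disk IDs out of its OVF (stage 2 of the quoted steps). The path below assumes the OVFs live under master/vms/ of the old data domain; adjust to wherever they actually are:

cd /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/master/vms
grep -o 'diskId="[^"]*"' */*.ovf | sort -u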
I found:

engine=# select image_guid,size,parentid,imagestatus,volume_type,volume_format,active from images order by parentid;
              image_guid              |     size     |               parentid               | imagestatus | volume_type | volume_format | active
--------------------------------------+--------------+--------------------------------------+-------------+-------------+---------------+--------
 1d304cb5-67bd-4e21-aa2c-2470c19af885 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 aca392f5-8395-46fe-9111-8a3c4812ff72 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 182ce48c-59d0-4883-8265-0269247d22e0 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 cadcce7f-53ff-4735-b5ff-4d8fd1991d51 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 2cd8d3dc-e92f-4be5-88fa-923076aba287 | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 5e56a396-8deb-4c04-9897-0e4f6582abcc | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 0ad131d7-2619-42a2-899f-d25c33969dc6 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 caecf666-302d-426c-8a32-65eda8d9e5df | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 8633fb9b-9c08-406b-925e-7d5955912165 | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 b2c5d2d5-636c-408b-b52f-b7f5558c0f7f | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 e66b18a7-e2c5-4f6c-9884-03e5c7477e3d | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 988f90f6-a37d-4dfd-8477-70aa5d2db5b6 | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | f
 e1c098fe-4b5d-4728-81d0-7edfdd3d0ec8 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 179ad90d-ed46-467d-ad75-aea6e3ea115e | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 4d583a7a-8399-4299-9799-dec33913c20a | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 9e5be41b-c512-4f22-9d7c-81090d62dc31 | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 f613aa23-4831-4aba-806e-fb7dcdcd704d | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 76749503-4a8b-4e8f-a2e4-9d89e0de0d71 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 c46bb1c0-dad9-490c-95b4-b74b25b80129 | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 8b511fc2-4ec5-4c82-9faf-93da8490adc9 | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e | 8589934592   | 00000000-0000-0000-0000-000000000000 | 1 | 1 | 5 | t
 829348f3-0f63-4275-92e1-1e84681a422b | 268435456000 | 00000000-0000-0000-0000-000000000000 | 1 | 2 | 5 | t
 38eee7d5-9fd1-44b0-876c-b24e4bc0085b | 8589934592   | 988f90f6-a37d-4dfd-8477-70aa5d2db5b6 | 1 | 2 | 4 | t
 00000000-0000-0000-0000-000000000000 | 85899345920  |                                      | 0 | 2 | 4 | t

engine=# select * from base_disks where disk_alias like 'download%';
               disk_id                | disk_interface | wipe_after_delete | propagate_errors |     disk_alias     | disk_description | shareable | boot | sgio | alignment | last_alignment_scan
--------------------------------------+----------------+-------------------+------------------+--------------------+------------------+-----------+------+------+-----------+---------------------
 a33a673d-751f-4287-a655-e84dfcfcd005 | VirtIO | f | Off | downloadbak_Disk1  | test | f | t | | 0 |
 5e28342e-2e90-491b-a6c3-49b2443092fd | VirtIO | f | Off | downloadbak_Disk2  |      | f | f | | 0 |
 907f3071-69eb-4167-8e34-22d3985f63cf | VirtIO | f | Off | downloadbak_Disk3  |      | f | f | | 0 |
 26628bb8-a057-4fab-af65-20258f083ab0 | VirtIO | f | Off | downloadbak_Disk4  |      | f | f | | 0 |
 666b1602-6979-4b17-a7fe-6524a1bf603b | VirtIO | f | Off | downloadbak_Disk5  |      | f | f | | 0 |
 cf5371aa-2cbe-4edc-a876-2b503208b0e6 | VirtIO | f | Off | downloadbak_Disk6  |      | f | f | | 0 |
 d718155e-eaf2-41e0-aebd-799a31af18bc | VirtIO | f | Off | downloadbak_Disk7  |      | f | f | | 0 |
 8616828d-e3f3-4e55-863f-387df2110ebc | VirtIO | f | Off | downloadbak_Disk8  |      | f | f | | 0 |
 40909eb8-be4c-4280-b9ef-51f3fa36340e | VirtIO | f | Off | downloadbak_Disk9  |      | f | f | | 0 |
 8483a1b6-4d79-4675-8433-eb03dcd5f53d | VirtIO | f | Off | downloadbak_Disk10 |      | f | f | | 0 |
 eb32f782-7b29-4f54-bd3e-a736ae8c5476 | VirtIO | f | Off | downloadbak_Disk11 |      | f | f | | 0 |
 85d7448a-ad17-4a77-86a9-6a55c1baf1a6 | VirtIO | f | Off | downloadbak_Disk12 |      | f | f | | 0 |
 b5a5411b-b234-4340-8338-1f2b860e4265 | VirtIO | f | Off | downloadbak_Disk13 |      | f | f | | 0 |
 6055a0a2-a6e9-4466-b0eb-3928c5c84d99 | VirtIO | f | Off | downloadbak_Disk14 |      | f | f | | 0 |
(14 rows)

What's the best way of getting those attached to another VM? Moving them into an import repo?

Cheers,
Boudewijn
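A sketch of mapping the downloadbak disk aliases above to image directories, assuming the usual engine schema where base_disks.disk_id corresponds to images.image_group_id, and the <domain_uuid>/images/<image_group_id>/ layout used by file-based domains:

psql -U engine -d engine -c "
  select b.disk_alias, i.image_group_id, i.image_guid
  from base_disks b
  join images i on i.image_group_id = b.disk_id
  where b.disk_alias like 'downloadbak%'
  order by b.disk_alias;"
# each image_group_id should match a directory under
# /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/images/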

On 03/10/2014 06:03 PM, Jason Brooks wrote:
----- Original Message -----
From: "Boudewijn Ector" <boudewijn@boudewijnector.nl> To: "Jason Brooks" <jbrooks@redhat.com> Cc: users@ovirt.org Sent: Monday, March 10, 2014 8:56:31 AM Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
On 10-03-14 16:55, Jason Brooks wrote:
into:
CLASS=Backup
DESCRIPTION=export-storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=613
POOL_DESCRIPTION=Default
POOL_DOMAINS=
POOL_SPM_ID=1
POOL_SPM_LVER=0
POOL_UUID=
REMOTE_PATH=nfsserver:/raid/ovirt-old/data
ROLE=Regular
SDUUID=1979444d-b79a-494c-8c1a-bcc132e31a04
TYPE=NFS
VERSION=3

This needs to be VERSION=0.
(I successfully rescued a storage domain this way just this weekend)
Good luck!
Jason
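A minimal sketch of the metadata edit described above, assuming a file-based data domain whose metadata lives under <domain_uuid>/dom_md/metadata (keep a backup, and make sure nothing is using the domain while you edit it):

cd /raid/ovirt-old/data/1979444d-b79a-494c-8c1a-bcc132e31a04/dom_md
cp metadata metadata.bak
sed -i -e 's/^CLASS=.*/CLASS=Backup/' \
       -e 's/^POOL_UUID=.*/POOL_UUID=/' \
       -e 's/^VERSION=.*/VERSION=0/' metadata
# the remaining POOL_* fields may also need to match the target block quoted above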
Hi Jason,
Thanks! I'm going to try this after finishing my coffee! Can you explain to me *why* this will change the situation? I'd like to have a better understanding of the inner workings of oVirt.
I don't know -- TYPE=0 is what you find in the export domain metadata files. I'd love to be able to bring back images w/o the export domain step in the middle...
import data domain is in the works, should make 3.5.
participants (7)
- Boudewijn Ector
- Gadi Ickowicz
- Gianluca Cecchi
- Itamar Heim
- Jason Brooks
- Liron Aravot
- Vered Volansky