
As expected... this is a learning curve. On my three-node cluster, in an attempt to learn how to do admin work on it and debug it, I have now redeployed the engine and even added a second one on another node in the cluster. But I now realize that my "production VMs" are gone. In the past, on a manual build with KVM + Gluster, when I repaired a damaged cluster I would just browse to the XML file and import it. I think with oVirt those days are gone, since the PostgreSQL database behind the engine holds the links to the disks / thin-provisioned volumes / networks / VM definitions.

Questions:

1) Can someone point me to the manual on how to re-constitute a VM and bring it back into oVirt after all "oVirt engines" were redeployed? There are only three or four VMs I typically care about (HA cluster and OCP ignition / Ansible Tower VMs).

2) How do I make sure these core VMs can be reconstituted? Can I create a dedicated volume where the VMs are fully provisioned and the path structure is "human understandable"?

3) I know that you can back up the engine. If I had been smart, how would one back up and recover from this kind of situation? Does anyone have any guides or good articles on this?

Thanks,
-- penguinpages <jeremey.wise@gmail.com>

> 1) Can someone point me to the manual on how to re-constitute a VM and bring it back into oVirt where all "oVirt-engines" were redeployed. It is only three or four VMs I typically care about (HA cluster and OCP ignition/ Ansible tower VM).

Ensure that the old engine is powered off. Next, add the storage domains in the new HostedEngine; inside the storage domain there is an "Import VM" tab which will allow you to import your VMs.
> 2) How do I make sure these core VMs are able to be reconstituted. Can I create a dedicated volume where the VMs are full provisioned, and the path structure is "human understandable".
Usually when you power up a VM, the VM's XML is logged in the vdsm log. Also, while the VM is running, you can find the XML of the VM under /var/run/libvirt (or wherever libvirt puts it).
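
As a concrete example of the above (a sketch only; <vm-name> is a placeholder to replace with your VM's name):

# On a host where the VM is still running, read-only virsh works on an oVirt
# node without SASL credentials:
virsh -r list --all
virsh -r dumpxml <vm-name> > /root/<vm-name>.xml

# For a VM that is no longer running, search the vdsm logs for the domain XML
# that was submitted when it was last started (rotated logs may be compressed;
# use xzgrep on those):
grep -l '<vm-name>' /var/log/vdsm/vdsm.log*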
> 3) I know that you can backup the engine. If I had been a smart person, how does one backup and recover from this kind of situation. Does anyone have any guides or good articles on this?
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf -> page 52

Best Regards,
Strahil Nikolov

Thanks for the reply.. it really is appreciated.

1) Note about VM import -> can you provide details / an example? In the UI I click oVirt -> Compute -> Virtual Machines -> Import -> Source, and the only option that makes any sense is "Export Domain" (the rest are unrelated, and the KVM one would need the XML, which I think is gone now). So if I choose "Export Domain", what file should the "path" point to in order to import a VM?

2) Note about getting the XML from a live VM: with the engine gone this would have been interesting to try, but as I rebooted and re-installed the engine, I think that cleared any hope of getting the libvirt XML files out:

[root@medusa libvirt]# tree /var/run/libvirt/
.
├── hostdevmgr
├── interface
│   └── driver.pid
├── libvirt-admin-sock
├── libvirt-sock
├── libvirt-sock-ro
├── network
│   ├── autostarted
│   ├── driver.pid
│   ├── nwfilter.leases
│   ├── ;vdsmdummy;.xml
│   └── vdsm-ovirtmgmt.xml
├── nodedev
│   └── driver.pid
├── nwfilter
│   └── driver.pid
├── nwfilter-binding
│   └── vnet0.xml
├── qemu
│   ├── autostarted
│   ├── driver.pid
│   ├── HostedEngine.pid
│   ├── HostedEngine.xml
│   └── slirp
├── secrets
│   └── driver.pid
├── storage
│   ├── autostarted
│   └── driver.pid
├── virtlockd-sock
├── virtlogd-admin-sock
└── virtlogd-sock

10 directories, 22 files
[root@medusa libvirt]#

Importing is done from the UI (Admin Portal) -> Storage -> Domains -> newly added domain -> "Import VM" tab -> select the VM and import it. Keep in mind that it is easier to import if all of the VM's disks are on the same storage domain (I've opened an RFE for multi-domain import).

Best Regards,
Strahil Nikolov
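
If you prefer to script this instead of clicking through the Admin Portal, the same "Import VM" operation is exposed by the engine REST API as a "register" action on the storage domain's unregistered VMs. The following is only a sketch (untested here; 'PASSWORD', <SD_ID>, <VM_ID> and the cluster name "Default" are placeholders), so verify the endpoints against your engine's /ovirt-engine/api reference or the Python SDK's register_vm example:

# List VMs that live on the re-attached data domain but are not yet registered
# in the new engine (note the ";unregistered" matrix parameter):
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/<SD_ID>/vms;unregistered'

# Register (import) one of them into a cluster -- the same thing the "Import VM" button does:
curl -s -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<action><cluster><name>Default</name></cluster><allow_partial_import>true</allow_partial_import></action>' \
  'https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/<SD_ID>/vms/<VM_ID>/register'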

there is a "Import VM" tab which will allow you to import your VMs. Anyone have idea on how this is done? I can see .meta and .lease files but not sure how this would be used to import VMs. It would be nice to tell it to review current "disk / images" and compare to what is in library ... because I think those VMs orphaned and the ones I imported... reflect dead / garbage now on the volumes. And I see no easy way to clean up.

Ok.. digging around a bit.

##############################
[root@medusa vmstore]# tree -h /media/data/
/media/data/
├── [  48]  7801d608-0416-4e5e-a469-2fefa2398d06
│   ├── [  89]  dom_md
│   │   ├── [1.0M]  ids
│   │   ├── [ 16M]  inbox
│   │   ├── [2.0M]  leases
│   │   ├── [ 549]  metadata
│   │   ├── [ 16M]  outbox
│   │   └── [1.2M]  xleases
│   ├── [8.0K]  images
│   │   ├── [8.0K]  060cb15c-efb4-45fd-82e3-9001312cffdf
│   │   │   ├── [160K]  d3f7ab6e-d371-4748-bc6d-26557ce9812a
│   │   │   ├── [1.0M]  d3f7ab6e-d371-4748-bc6d-26557ce9812a.lease
│   │   │   └── [ 430]  d3f7ab6e-d371-4748-bc6d-26557ce9812a.meta
│   │   ├── [ 149]  138a359c-13e6-4448-b543-533894e41fca
│   │   │   ├── [1.7G]  ece912a4-6756-4944-803c-c7ac58713ef4
│   │   │   ├── [1.0M]  ece912a4-6756-4944-803c-c7ac58713ef4.lease
│   │   │   └── [ 304]  ece912a4-6756-4944-803c-c7ac58713ef4.meta
│   │   ├── [ 149]  26def4e7-1153-417c-88c1-fd3dfe2b0fb9
│   │   │   ├── [100G]  0136657f-1f6f-4140-8c7b-f765316d4e3a
│   │   │   ├── [1.0M]  0136657f-1f6f-4140-8c7b-f765316d4e3a.lease
│   │   │   └── [ 316]  0136657f-1f6f-4140-8c7b-f765316d4e3a.meta
│   │   ├── [ 149]  2d684975-06e1-442e-a785-1cfcc70a9490
│   │   │   ├── [4.3G]  688ce708-5be2-4082-9337-7209081082bf
│   │   │   ├── [1.0M]  688ce708-5be2-4082-9337-7209081082bf.lease
│   │   │   └── [ 343]  688ce708-5be2-4082-9337-7209081082bf.meta
│   │   ├── [8.0K]  444ee51d-da70-419e-8d8e-a94aed28d0fa
│   │   │   ├── [112M]  23608ebc-f8a4-4482-875c-e12eaa69c8eb
│   │   │   ├── [1.0M]  23608ebc-f8a4-4482-875c-e12eaa69c8eb.lease
│   │   │   ├── [ 377]  23608ebc-f8a4-4482-875c-e12eaa69c8eb.meta
│   │   │   ├── [683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   └── [ 369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   ├── [ 149]  54cfa3af-9045-4e9b-ba8d-4ac7181490da
│   │   │   ├── [113M]  a1807521-7009-4896-9325-6a2a7c0e29ef
│   │   │   ├── [1.0M]  a1807521-7009-4896-9325-6a2a7c0e29ef.lease
│   │   │   └── [ 288]  a1807521-7009-4896-9325-6a2a7c0e29ef.meta
│   │   ├── [ 149]  5917ba35-689c-409b-a89c-37bd08f06e76
│   │   │   ├── [7.7G]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83
│   │   │   ├── [1.0M]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83.lease
│   │   │   └── [ 351]  ea6610cd-c0b9-457f-aaf5-d199a3bd1a83.meta
│   │   ├── [8.0K]  6914d63d-e57f-4e9f-9ca2-a378ad2f0a4f
│   │   │   ├── [683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   ├── [ 369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   │   ├── [100M]  b0deed57-f5fe-4e44-8d0a-105029bdeae5
│   │   │   ├── [1.0M]  b0deed57-f5fe-4e44-8d0a-105029bdeae5.lease
│   │   │   └── [ 377]  b0deed57-f5fe-4e44-8d0a-105029bdeae5.meta
│   │   ├── [ 149]  7daa2083-29d8-4b64-a50a-d09ab1428513
│   │   │   ├── [100G]  d96bf89f-351a-4c86-9865-9531d8f7a97b
│   │   │   ├── [1.0M]  d96bf89f-351a-4c86-9865-9531d8f7a97b.lease
│   │   │   └── [ 316]  d96bf89f-351a-4c86-9865-9531d8f7a97b.meta
│   │   ├── [8.0K]  7e523f3d-311a-4caf-ae34-6cd455274d5f
│   │   │   ├── [683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   ├── [ 369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   │   ├── [4.6G]  55ebf216-b404-4c09-8cbc-91882f42cb94
│   │   │   ├── [1.0M]  55ebf216-b404-4c09-8cbc-91882f42cb94.lease
│   │   │   └── [ 316]  55ebf216-b404-4c09-8cbc-91882f42cb94.meta
│   │   ├── [ 149]  9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
│   │   │   ├── [ 11G]  a704ef38-6883-4857-b2fa-423033058927
│   │   │   ├── [1.0M]  a704ef38-6883-4857-b2fa-423033058927.lease
│   │   │   └── [ 311]  a704ef38-6883-4857-b2fa-423033058927.meta
│   │   ├── [ 149]  a2bbc814-015a-4a37-8f58-68aa6ef73f8e
│   │   │   ├── [ 19G]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb
│   │   │   ├── [1.0M]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb.lease
│   │   │   └── [ 311]  a5cb25e3-28a0-4d88-b2ca-d3732765b5fb.meta
│   │   ├── [ 149]  adecc80f-8ce0-4ce0-9d73-d5de8f4a72e1
│   │   │   ├── [ 10G]  cc010af4-ce51-4917-9f2c-db0ec9353103
│   │   │   ├── [1.0M]  cc010af4-ce51-4917-9f2c-db0ec9353103.lease
│   │   │   └── [ 326]  cc010af4-ce51-4917-9f2c-db0ec9353103.meta
│   │   ├── [ 149]  b8bd6924-fcd3-4479-a7da-6b255431a308
│   │   │   ├── [ 20G]  f5a891db-4492-49e4-bf6a-72182ba4bf15
│   │   │   ├── [1.0M]  f5a891db-4492-49e4-bf6a-72182ba4bf15.lease
│   │   │   └── [ 314]  f5a891db-4492-49e4-bf6a-72182ba4bf15.meta
│   │   ├── [ 149]  ce4133ad-562f-4f23-add6-cd168a906267
│   │   │   ├── [118M]  a09c8a84-1904-4632-892e-beb55abc873a
│   │   │   ├── [1.0M]  a09c8a84-1904-4632-892e-beb55abc873a.lease
│   │   │   └── [ 313]  a09c8a84-1904-4632-892e-beb55abc873a.meta
│   │   ├── [ 149]  d0038fa8-eee1-4548-82b9-b7f79adb182c
│   │   │   ├── [7.7G]  7568e474-8ab5-4953-8cd3-a9c9b8df3595
│   │   │   ├── [1.0M]  7568e474-8ab5-4953-8cd3-a9c9b8df3595.lease
│   │   │   └── [ 349]  7568e474-8ab5-4953-8cd3-a9c9b8df3595.meta
│   │   ├── [ 149]  f6679e35-fa56-4ed8-aa47-18492e00fd01
│   │   │   ├── [683M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c
│   │   │   ├── [1.0M]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.lease
│   │   │   └── [ 369]  3f2f0fc7-cb3c-4bbc-9f7b-4f196588c78c.meta
│   │   └── [8.0K]  fff3c1de-e21e-4f03-8905-d587448f6543
│   │       ├── [160K]  6f4343b6-dc3d-40e3-b872-6a954b8d1a7b
│   │       ├── [1.0M]  6f4343b6-dc3d-40e3-b872-6a954b8d1a7b.lease
│   │       └── [ 430]  6f4343b6-dc3d-40e3-b872-6a954b8d1a7b.meta
│   └── [  30]  master
│       ├── [   6]  tasks
│       └── [   6]  vms
#########################

This kind of represents where my "production VMs" live, I assume. There is no obvious way to map a VM name to those files, but based on size I kind of know which is which.

Ex: three VMs I want back: ns01, ns02, ansible00

1) Parse all the .meta files to get the mapping structure.
Ex: from the gluster mount /media/data/7801d608-0416-4e5e-a469-2fefa2398d06/:

find . | grep .meta | xargs cat | grep -A 5 ansible00

Ex:
[root@medusa vmstore]# cat /media/data/7801d608-0416-4e5e-a469-2fefa2398d06/images/b8bd6924-fcd3-4479-a7da-6b255431a308/f5a891db-
DESCRIPTION={"DiskAlias":"ansible00_boot","DiskDescription":"ansible00_boot"}
DISKTYPE=DATA
DOMAIN=7801d608-0416-4e5e-a469-2fefa2398d06
FORMAT=COW
GEN=0
IMAGE=7e523f3d-311a-4caf-ae34-6cd455274d5f
--
DESCRIPTION={"DiskAlias":"ansible00_var","DiskDescription":"ansible00_var"}
DISKTYPE=DATA
DOMAIN=7801d608-0416-4e5e-a469-2fefa2398d06
FORMAT=RAW
GEN=0
IMAGE=b8bd6924-fcd3-4479-a7da-6b255431a308

############
ansible00 (storage domain 7801d608-0416-4e5e-a469-2fefa2398d06)
- boot: 7e523f3d-311a-4caf-ae34-6cd455274d5f (folder for image)
- var:  b8bd6924-fcd3-4479-a7da-6b255431a308 (folder for image)

# Now attempt import
[root@medusa b8bd6924-fcd3-4479-a7da-6b255431a308]# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url https://ovirte01.penguinpages.local/ --username admin@internal --password-file /media/vmstore/.ovirt.password --cafile /media/vmstore/.ovirte01_pki-resource.cer --sd-name data --disk-sparse /media/data/7801d608-0416-4e5e-a469-2fefa2398d06/images/b8bd6924-fcd3-4479-a7da-6b255431a308/f5a891db-4492-49e4-bf6a-72182ba4bf15
Checking image...
Image format: raw
Disk format: raw
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 21474836480
Disk name: f5a891db-4492-49e4-bf6a-72182ba4bf15.raw
Disk backup: False
Connecting...
Creating disk...
Disk ID: 77f1af5a-0912-484f-a2d3-d9564641e031
Creating image transfer...
Transfer ID: 7911904b-2573-4b57-b398-76de8f79670d
Transfer host name: medusa
Uploading image...
[ ------- ] 0 bytes, 0.00 seconds, 0 bytes/s

[root@medusa 7e523f3d-311a-4caf-ae34-6cd455274d5f]# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url https://ovirte01.penguinpages.local/ --username admin@internal --password-file /media/vmstore/.ovirt.password --cafile /media/vmstore/.ovirte01_pki-resource.cer --sd-name data --disk-sparse /media/data/7801d608-0416-4e5e-a469-2fefa2398d06/images/7e523f3d-311a-4caf-ae34-6cd455274d5f/55ebf216-b404-4c09-8cbc-91882f42cb94
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 10737418240
Disk initial size: 6278938624
Disk name: 55ebf216-b404-4c09-8cbc-91882f42cb94.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: bcf3d639-7b01-4c22-ac50-d20d7fd9ab29
Creating image transfer...
Transfer ID: e6cb5763-987b-453e-8a5b-34f7671c476a
Transfer host name: medusa
Uploading image...
[ ------- ] 0 bytes, 0.00 seconds, 0 bytes/s

# Now, in the oVirt engine, rename the uploaded volumes and create a new VM with these definitions:
77f1af5a-0912-484f-a2d3-d9564641e031  ansible00_var
bcf3d639-7b01-4c22-ac50-d20d7fd9ab29  ansible00_boot
oVirt Engine -> Storage -> Disks -> select volume -> Edit -> change the alias to a human-readable name

# Attach both to a new VM as disks, mark the "boot" disk, and power on (see the sketch below).

One down.. two to go.

PS: And now dig into how to make an oVirt engine backup... and... restore it.

# Optional: backup / restore process for the engine
# Ex: make a replica saved on the NAS
systemctl stop ovirt-engine
time engine-backup --scope=all --mode=backup --file=/media/sw2_usb_A2/penguinpages_local_cluster/ovirte01_`date +%y%m%d%H%M%S`.bck --log=/media/sw2_usb_A2/penguinpages_local_cluster/ovirte01_log_`date +%y%m%d%H%M%S`.log

# Restore would be.. ######### Needs testing!!!!
# (point --file at the existing backup file rather than generating a new timestamp)
engine-backup --mode=restore --file=/media/sw2_usb_A2/penguinpages_local_cluster/ovirte01_<timestamp>.bck --log=/media/sw2_usb_A2/penguinpages_local_cluster/ovirte01_restore_log_`date +%y%m%d%H%M%S`.log --provision-db --provision-dwh-db --restore-permissions
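
The "create a new VM and attach the uploaded disk" step above can also be scripted against the engine REST API. This is only a sketch, not a tested procedure: 'PASSWORD' and <VM_ID> are placeholders, the VM name "ansible00", cluster "Default" and the disk ID are taken from the example above, and the exact XML should be checked against the /ovirt-engine/api reference of your engine.

# Create an empty VM from the Blank template to hang the recovered disks off of:
curl -s -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<vm><name>ansible00</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>' \
  https://ovirte01.penguinpages.local/ovirt-engine/api/vms

# Attach the uploaded boot disk (Disk ID from the upload_disk.py output) and mark it bootable:
curl -s -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<disk_attachment><bootable>true</bootable><interface>virtio_scsi</interface><active>true</active><disk id="bcf3d639-7b01-4c22-ac50-d20d7fd9ab29"/></disk_attachment>' \
  'https://ovirte01.penguinpages.local/ovirt-engine/api/vms/<VM_ID>/diskattachments'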
participants (3)
-
Jeremey Wise
-
penguin pages
-
Strahil Nikolov