File level restore of a VM backup located in an export domain in oVirt
by kevin.doyle@manchester.ac.uk
Hi
I would like to create backups of VMs using cron. I have installed and run https://github.com/wefixit-AT/oVirtBackup This works well and saves an image to the export domain I created. I can also carry out a full restore from this by importing from the export domain and selecting the backup. The question I have is how to do a single file recovery from the backup image. I would like to restore /etc/hosts, just as an example. What command would I use?
I am not sure of the layout of the export domain directory
.
├── dom_md
│   ├── ids
│   ├── inbox
│   ├── leases
│   ├── metadata
│   └── outbox
├── images
│   └── 622eb98e-f10b-4e96-bec5-8f0a7e7745fe
│       ├── b47ce8e1-f552-47f8-af56-33a3b8ce7aed
│       └── b47ce8e1-f552-47f8-af56-33a3b8ce7aed.meta
└── master
    ├── tasks
    └── vms
        └── c6e6c49c-c9bc-4683-8b19-d18250b5697b
            └── c6e6c49c-c9bc-4683-8b19-d18250b5697b.ovf
The largest file is found in the images folder, so I assume this is the backup image?
./images/622eb98e-f10b-4e96-bec5-8f0a7e7745fe:
total 2.7G
drwxr-xr-x. 2 vdsm kvm 99 Jan 10 11:48 .
drwxr-xr-x. 3 vdsm kvm 50 Jan 10 11:48 ..
-rw-rw----. 1 vdsm kvm 50G Jan 10 11:51 b47ce8e1-f552-47f8-af56-33a3b8ce7aed
-rw-r--r--. 1 vdsm kvm 269 Jan 10 11:48 b47ce8e1-f552-47f8-af56-33a3b8ce7aed.meta
# file b47ce8e1-f552-47f8-af56-33a3b8ce7aed
b47ce8e1-f552-47f8-af56-33a3b8ce7aed: x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 2097152 sectors; partition 2: ID=0x8e, starthead 170, startsector 2099200, 102758400 sectors, code offset 0x63
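For what it's worth, a single file can usually be pulled straight out of such a raw disk image with libguestfs, which understands the partition table and the LVM layout (partition 2 above, ID=0x8e, is an LVM PV). A minimal sketch, assuming libguestfs-tools is installed on a machine that can read the export domain (the image path is the large file from the listing above; /tmp/restore is just an example destination):

```shell
# Copy /etc/hosts out of the backup disk image without booting the VM.
IMG=./images/622eb98e-f10b-4e96-bec5-8f0a7e7745fe/b47ce8e1-f552-47f8-af56-33a3b8ce7aed
mkdir -p /tmp/restore
if command -v virt-copy-out >/dev/null 2>&1; then
    # virt-copy-out opens the guest image read-only, LVM and all
    virt-copy-out -a "$IMG" /etc/hosts /tmp/restore/
else
    echo "libguestfs-tools not installed; would run: virt-copy-out -a $IMG /etc/hosts /tmp/restore/"
fi
```

For browsing the image before copying, `guestfish --ro -a "$IMG" -i` gives an interactive shell with the guest filesystems mounted read-only.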
Any help would be appreciated; I am new to oVirt.
regards
Kevin
5 years, 11 months
ovirt 4.2.7-1 - adding virtual host ( nested virt. )
by paf1@email.cz
Hello guys,
I've got a problem adding a new host (an ESX virtual machine, i.e. nested virtualization) to oVirt 4.2.7-1 (Gluster included).
Is this feature supported?
2019-01-07 19:38:30,168+01 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler1) [15a4029b] Error while refreshing server data
for cluster 'MID' from database: null
regs.
Paul
5 years, 11 months
VMs Hung On Booting From Hard Disk
by Douglas Duckworth
Hi
I have deployed 10 VMs in our oVirt cluster using Ansible. Thanks everyone for helping get it working.
However, I randomly run into issues where the OS won't load after the kickstart installation completes.
As you can see, the disks are attached while booting; however, the VM hangs at "Booting from hard disk." The kickstart, kernel, and everything else for this VM match the others, so I am unsure why this problem occurs.
I have attached repeating vdsm errors from the host which has the VM.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
5 years, 11 months
Re: move 'ovirtmgmt' bridge to a bonded NIC team
by Dominik Holler
On Tue, 8 Jan 2019 17:01:38 +0000
Shawn Southern <shawn.southern(a)entegrus.com> wrote:
>We've recently added additional NICs to our oVirt nodes, and want to move the ovirtmgmt interface to one of the bonded interfaces, away from the single ethernet port currently used. This is to provide redundant connectivity to the nodes.
>
>I've not had any luck finding documentation on how to do this. If we change it manually by editing files in /etc/sysconfig/network-scripts, VDSM simply changes everything back.
>
Please use oVirt Engine to manage the network configuration.
>I'm just looking to be pointed in the right direction here.
>
I see no reason why the usual way of configuring host networking via
Compute > Hosts > hostname > Network Interfaces > Setup Host Networks
should not work.
ovirtmgmt must not be used by a VM on this host during the change,
and ovirtmgmt should use a static IP address in
Setup Host Networks > Edit management network: ovirtmgmt > IPv4.
It might be a good idea to move the host to maintenance before the change,
and to ensure connectivity from the bond to oVirt Engine, because the change
will be rolled back if connectivity is lost.
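If this needs scripting, the maintenance step is also exposed by the REST API as a `deactivate` action on the host. A minimal sketch with placeholder engine URL, host id and credentials; the actual call only fires when OVIRT_USER (user:password) is set:

```shell
# Move a host to maintenance before rewiring ovirtmgmt onto the bond.
# ENGINE and HOST_ID are placeholders for this sketch.
ENGINE="https://engine.example.com/ovirt-engine"
HOST_ID="00000000-0000-0000-0000-000000000001"
URL="$ENGINE/api/hosts/$HOST_ID/deactivate"
if [ -n "${OVIRT_USER:-}" ]; then
    # Real call; only runs when credentials are provided via OVIRT_USER
    curl -s -k -u "$OVIRT_USER" -X POST \
         -H 'Content-Type: application/xml' -d '<action/>' "$URL"
fi
echo "maintenance endpoint: $URL"
```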
>Thanks!
5 years, 11 months
oVirt upgrade 4.1 to 4.2
by p.staniforth@leedsbeckett.ac.uk
Hello, when we did a live upgrade of our VDC from 4.1 to 4.2, we had a large number of running VMs with a Custom Compatibility Version of 4.1 set, to allow them to keep running while the cluster and VDC were upgraded.
Unfortunately, users took a large number of snapshots before restarting their VMs, so those VMs have the Custom Compatibility Version set to 4.1 and can't run in the 4.2 VDC. Is there a way to search for them in the API or SDK? I can only find them in the events log when they fail to start.
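One possible angle: a VM's XML representation in the REST API carries a custom_compatibility_version element when one is set, so the output of GET /ovirt-engine/api/vms can be filtered for it. The sketch below runs the filtering step against a canned, reduced sample response (engine URL and credentials in the comment are placeholders); on real output the elements are not adjacent, so xmllint or the Python SDK would be more robust than plain grep:

```shell
# A real run would fetch the list with e.g.
#   curl -s -k -u 'admin@internal:password' \
#        'https://engine.example.com/ovirt-engine/api/vms' > vms.xml
cat > vms.xml <<'EOF'
<vms>
  <vm><name>web01</name><custom_compatibility_version><major>4</major><minor>1</minor></custom_compatibility_version></vm>
  <vm><name>db01</name></vm>
</vms>
EOF
# Print names of VMs still pinned to 4.1 (crude text matching on the
# reduced sample above)
grep -o '<name>[^<]*</name><custom_compatibility_version><major>4</major><minor>1</minor>' vms.xml \
  | sed 's|<name>\([^<]*\)</name>.*|\1|'
```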
Thanks,
Paul S.
5 years, 11 months
[Cannot edit VM. Maximum number of sockets exceeded.]
by Matthias Leopold
Hi,
when a user is managing a "higher number" (couldn't find the exact
number yet, roughly >10) of VMs in VM Portal and wants to edit a VM he
gets a "[Cannot edit VM. Maximum number of sockets exceeded.]" error
message in the browser, which I also see in engine.log. I couldn't find
the reason for this. I'm using squid as a SPICE Proxy at cluster level.
oVirt version is 4.2.7, can anybody help me?
thx
matthias
5 years, 11 months
iSCSI Domain & LVM
by tehnic@take3.ro
Hello all,
I have a question regarding this note in the oVirt storage documentation:
**Important:** If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption.
What does "you must create a filter to hide the guest logical volumes" mean, exactly?
I assume I have to set a filter in lvm.conf on all oVirt hosts, but I'm not sure what to filter.
What I already saw is that after creating the iSCSI domain and cloning/moving/creating virtual machines on it, new PVs and LVs become visible on the oVirt hosts, with an object UUID in the name (output from the "pvs" and "lvs" commands).
Is this expected behavior, or do I have to filter out exactly these by allowing only local disks to be scanned for PVs/LVs?
Or do I have to set up the filter to allow only local disks plus iSCSI disks (in my case /dev/sd?) to be scanned for PVs/LVs?
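For reference, the filter lives in the devices section of /etc/lvm/lvm.conf on each host. A minimal sketch that accepts only the host's own local PV and rejects everything else, so LVs created by guests inside oVirt's block-storage PVs are never scanned or activated; /dev/sda2 is an example device, substitute your actual local PV path(s). Newer oVirt releases can also generate this for you with `vdsm-tool config-lvm-filter`, if that tool is available on your version:

```
filter = [ "a|^/dev/sda2$|", "r|.*|" ]
```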
I also noticed that after detaching and removing the iSCSI domain I still have the UUID PVs. They all show up with "input/output error" in the output of "pvs" and stay there until I reboot the oVirt hosts.
On my iSCSI target system I have already set the correct LVM filters, so "targetcli" is happy after reboot.
Thank you!
Happy new year,
Robert
5 years, 11 months
Shutdown VMs when UPS is on Battery
by Vinícius Ferrão
Hello,
I would like to know whether oVirt supports shutting down VMs when the UPS ("nobreak") reaches a battery threshold.
There are some fencing agents for APC devices, so I’m hoping this is supported.
If not, how are you guys doing this kind of thing? A separate device or VM in the datacenter to issue the shutdown commands?
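One common pattern, as far as I know (oVirt has no built-in UPS integration), is a UPS daemon such as NUT or apcupsd on a monitoring host whose on-battery hook asks the engine to shut VMs down via the REST API. A sketch of such a hook, with placeholder engine URL and VM id; the actual call only fires when OVIRT_USER (user:password) is set:

```shell
# on_battery.sh - wired to the UPS daemon's on-battery event (e.g. NUT's
# NOTIFYCMD in upsmon.conf). ENGINE and VM_ID are placeholders.
ENGINE="https://engine.example.com/ovirt-engine"
VM_ID="00000000-0000-0000-0000-000000000002"
URL="$ENGINE/api/vms/$VM_ID/shutdown"
if [ -n "${OVIRT_USER:-}" ]; then
    # Ask the engine for a clean guest shutdown (needs the guest agent)
    curl -s -k -u "$OVIRT_USER" -X POST \
         -H 'Content-Type: application/xml' -d '<action/>' "$URL"
fi
echo "shutdown endpoint: $URL"
```

In practice you would first list the running VMs and loop over their ids rather than hard-code one.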
Thanks,
5 years, 11 months