I was following the instructions at http://www.ovirt.org/Node_PXE. My goal is to use PXE boot (iPXE) to boot the node entirely into RAM, rootfs included. What I get now is an installation screen that requires local storage. My system has no disk, so this doesn't work for me.
What are the kernel parameters I need to boot the node into RAM automatically? As a first step, I am OK with a stateless node.
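For reference, here is the shape of the boot entry I would expect; a sketch only. The file names and the live-root syntax follow what livecd-iso-to-pxeboot generates for a live ISO, but the host name, paths, and ISO name are placeholders, so verify everything against your node image:

    #!ipxe
    # Sketch of an iPXE entry that loads the node image entirely into RAM.
    # pxe.example.com and the file names are placeholders.
    kernel http://pxe.example.com/ovirt-node/vmlinuz0 initrd=initrd0.img root=live:/ovirt-node-iso.iso rootfstype=auto ro liveimg
    initrd http://pxe.example.com/ovirt-node/initrd0.img
    boot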
I am trying to set up a testing network using oVirt, but the networking is refusing to cooperate. I am testing for possible use in two different use cases.
My previous experience has been with VMware. I have always set up a single bridged network on each host. All my hosts, VMs, and non-VM computers were peers on the LAN. They could all talk to each other, and things worked very well. There was a firewall/gateway that provided access to the Internet, and hosts, VMs, and non-VM computers could all communicate with the Internet as needed.
oVirt seems to be compartmentalizing things beyond all reason.
Is there any way to set up simple networking, so ALL computers can see each other?
Is there anywhere that describes the philosophy behind the networking setup? What reason is there for dividing the networks so thoroughly?
After banging my head against the wall trying to configure just one host, I am very frustrated. I have spent several HOURS Googling for a coherent explanation of how/why networking is supposed to work, but only find obscure references like "letting non-VMs see VM traffic would be a huge security violation". I have no concept of what kind of an installation the oVirt designers have in mind, but it is obviously worlds different from what I am trying to do.
The best I can tell, oVirt networking works like this (at least when you have only one NIC):
* there must be an ovirtmgmt network, which cannot be combined with any other network
* the ovirtmgmt network cannot talk to VMs (unless that VM is running the engine)
* the ovirtmgmt network can only talk to hosts, not to other non-VM computers
* a VM network can talk only to VMs; it cannot talk to hosts, and it cannot talk to non-VMs
* hosts cannot talk to my LAN
* hosts cannot talk to VMs
* VMs cannot talk to my LAN
All of the above are enforced by a boatload of firewall rules that oVirt puts into every host and VM under its jurisdiction.
All of the above is inferred from things I Googled, because I can't find anything that explains what the design is or how things are supposed to work, only things telling people WHAT THEY CAN'T DO. All I see on the mailing lists is people getting their hands slapped because they are trying to do SIMPLE SETUPS that should work, but don't (due to either design restrictions or software bugs).
My use case A:
* My (2 or 3) hosts have only one physical NIC.
* My VMs exist to provide services to non-VM computers.
* The VMs do not run X windows, but they provide GUI programs to non-VMs via "ssh -X" connections.
* My VMs need access to storage that is shared with hosts and non-VMs on the LAN.
Is there some way to TURN OFF network control in oVirt? My systems are small and static. I can hand-configure the networking a whole lot more easily than I can deal with oVirt (as I have used it so far). Mostly I would need to be able to turn off the firewall rules on both hosts and VMs, roughly as sketched below.
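A sketch of what turning this off might look like. The host-side part assumes iptables on Fedora; the engine-config key for the per-VM MAC anti-spoofing filter is an assumption to verify against your oVirt version:

    # On each host, after unchecking "Automatically configure host firewall"
    # in the engine's Add Host dialog, clear the generated rules:
    $ systemctl stop iptables
    $ systemctl disable iptables

    # Engine side: this key is an assumption; check it exists in your version.
    $ engine-config -s EnableMACAntiSpoofingFilterRules=false
    $ systemctl restart ovirt-engine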
banging head against wall,
> How are you folks doing your hypervisor level backups?
> Under ESXi I used GhettoVCB, which basically took a snapshot, copied the
> disk image to another location, then deleted the snapshot.
Thank you for this hint, I didn't know about GhettoVCB and I'm definitely going to have a look at it.
> I haven't been able to find too much information on how this can be done
> with ovirt. I see discussions on the new backup API, but I'm not
> interested in spending big bucks on an enterprise backup solution for a
> home lab.
> Only discussion I saw on using snapshots for backups said don't do it
> because the tools don't sync memory when the snapshots are taken.
The problem with snapshot-based backups is that they are usually only crash-consistent, meaning that they contain the state of a system's disks as it would be if you pulled the power plug on the server. If you restore a system from this type of backup, you will see file system recovery happening at the first boot, and you risk data loss from, for example, database servers.
The process that GhettoVCB uses, according to your description above, is the same: your backups are only crash-consistent.
If you need application-level consistency, you need a mechanism to inform applications that a backup is going to take place (or rather: a snapshot will be taken) and that they should put themselves in a consistent state. For example: sync data to disk, flush transaction logs, stuff like that. Microsoft Windows has VSS for that. For Linux, there is no such thing (that I know of). Common practice for "quiescing" database servers and such on Linux is making consistent SQL dumps before the snapshot.
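For example, something like this (a sketch; the database name "mydb" and the paths are placeholders) produces a consistent dump without locking the database for the whole run:

    # MySQL/InnoDB: --single-transaction takes a repeatable-read snapshot
    # instead of locking the tables for the duration of the dump.
    $ mysqldump --single-transaction --routines mydb | gzip > /backup/mydb.sql.gz

    # PostgreSQL: pg_dump is consistent by design (runs in one transaction).
    $ pg_dump -U postgres mydb | gzip > /backup/mydb-pg.sql.gz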
In my case, for most guests a crash-consistent backup containing a recent MySQL or PostgreSQL dump is sufficient. I use LVM snapshots (not oVirt snapshots) for backups, and I use rsync to transfer the data. I have been experimenting with Virtsync, but I'm having a bit of trouble with that, so for the moment, it's just rsync.
Efficiently backing up sparse images with rsync can be a bit of a challenge (that's why Virtsync was created in the first place, IIRC), but using '--sparse' on the initial backup and '--inplace' on subsequent backups seems to do the trick; a sketch of the whole flow follows.
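In outline, it looks like this. The volume group, LV names, sizes, and paths are placeholders, and it assumes the VM images live as files on a filesystem backed by the snapshotted LV:

    # Snapshot the LV backing the images filesystem and mount it read-only.
    $ lvcreate --snapshot --size 5G --name images-snap /dev/vg0/images
    $ mount -o ro /dev/vg0/images-snap /mnt/images-snap

    # Initial copy: --sparse re-creates holes in the image on the target.
    $ rsync -av --sparse /mnt/images-snap/vm1.img backup:/backups/

    # Later copies: --inplace rewrites only changed blocks in the existing
    # target file (older rsync versions refuse --sparse with --inplace).
    $ rsync -av --inplace /mnt/images-snap/vm1.img backup:/backups/

    $ umount /mnt/images-snap
    $ lvremove -f /dev/vg0/images-snap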
I hope this helps.
I have a cluster of 2 hosts with F19 and 3.3.1 as delivered in the oVirt stable repository.
Running kernel is 3.11.8-200.fc19.x86_64
$ rpm -q ksm libvirt qemu-kvm vdsm
package ksm is not installed
Engine is another server, always with f19 and ovirt stable 3.3.1.
I have a GlusterFS DC with a cluster composed by these two hosts.
They are HP BL685c G1 blades with 64 GB of RAM each and 4 Opteron CPUs, seen by oVirt as "AMD Opteron G2" in the datacenter.
I have 3 VMs in this cluster: 2 x CentOS 6.4 and 1 x CentOS 5.10.
After I put everything into maintenance and then start back up, I have both hosts up. I power on the 3 VMs in sequence, and they all start on the same host (the one that is not the elected SPM).
Should there be some sort of load balancing, or is this situation correct/expected?
I notice ksm is not installed: could this be the reason in any way? Or would it be useful to install it anyway? (I thought it was a dependency for oVirt...)
$ rpm -qa|grep ksm
BTW: if I run a live migration of the CentOS 5.10 VM to the other host it works OK without problems, so the host seems fine to me.
All 3 VMs are configured as servers and as "run on any host in cluster".
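In case it's useful while checking: KSM itself is a kernel feature, so it can be inspected via sysfs regardless of which package (if any) ships the service. The service names below are the ones Fedora uses, to the best of my knowledge:

    # 1 means page merging is enabled, 0 means it is off.
    $ cat /sys/kernel/mm/ksm/run
    # Number of pages currently being shared across VMs.
    $ cat /sys/kernel/mm/ksm/pages_sharing

    # On Fedora, the ksm/ksmtuned services manage this (packaging may vary):
    $ systemctl status ksm ksmtuned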
Thanks in advance,
There doesn't seem to be a single "tools" or "agent" install like there is under ESXi that's required to be installed under oVirt; instead I've seen references to
1) virtio drivers
2) Spice drivers
3) qemu agent
What's really needed under both Windows and Linux to get guests working properly? Are all these agents included in Fedora?
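For a Linux guest, to the best of my knowledge the pieces ship as ordinary distro packages, and the virtio drivers are already in the stock kernel, so only the agents need installing. The package and unit names below assume F19-era Fedora; verify them for your release:

    # On a Fedora guest:
    $ yum install ovirt-guest-agent spice-vdagent
    $ systemctl enable ovirt-guest-agent spice-vdagentd
    $ systemctl start ovirt-guest-agent spice-vdagentd

For Windows guests, the virtio-win ISO carries the drivers, and the Spice guest tools installer covers the agent side.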
I updated to the latest Fedora 19 on two test servers, and now vdsmd will not start.
Systemctl says this:
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: failed (Result: exit-code) since Mon 2013-10-14 12:31:30
EDT; 23h ago
Process: 1788 ExecStart=/lib/systemd/systemd-vdsmd start
Oct 14 12:31:30 rs0-ovirt0.rexdb.us python: DIGEST-MD5 ask_user_info()
Oct 14 12:31:30 rs0-ovirt0.rexdb.us python: DIGEST-MD5 client step 2
Oct 14 12:31:30 rs0-ovirt0.rexdb.us python: DIGEST-MD5 ask_user_info()
Oct 14 12:31:30 rs0-ovirt0.rexdb.us python: DIGEST-MD5
Oct 14 12:31:30 rs0-ovirt0.rexdb.us python: DIGEST-MD5 client step 3
Oct 14 12:31:30 rs0-ovirt0.rexdb.us systemd-vdsmd:
/lib/systemd/systemd-vdsmd: line 185: 1862 Segmentation fault
Oct 14 12:31:30 rs0-ovirt0.rexdb.us systemd-vdsmd: vdsm: Failed to
define network filters on libvirt[FAILED]
Oct 14 12:31:30 rs0-ovirt0.rexdb.us systemd: vdsmd.service: control
process exited, code=exited status=139
Oct 14 12:31:30 rs0-ovirt0.rexdb.us systemd: Failed to start Virtual
Desktop Server Manager.
Oct 14 12:31:30 rs0-ovirt0.rexdb.us systemd: Unit vdsmd.service
entered failed state.
Has anyone else experienced this?
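If anyone wants to dig into the same failure: the step that dies is vdsm defining its network filters in libvirt, so checks along these lines may narrow it down. A sketch only; the vdsm-tool subcommand is an assumption, skip it if your version lacks it:

    # Is libvirt itself healthy, and which network filters are defined?
    $ systemctl status libvirtd
    $ virsh -r nwfilter-list

    # Full log context for the failed unit:
    $ journalctl -u vdsmd.service

    # Newer vdsm versions can re-run the configuration step by hand:
    $ vdsm-tool configure --force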
Hi, I'm currently looking for ways to back up my disks. Live storage migration converted my disks from preallocated to thin provisioning. Preallocated disk backups work with a simple dd from a snapshot, but the thin qcow2 disks are a problem. It seems each one uses multiple logical volumes (3 in my case). Also, combined they are MUCH bigger than the preallocated disk: 8 GB raw vs. 31 GB thin with 1 snapshot.
So I have two questions:
- how would you back up these thin disks?
- is it possible to convert thin to preallocated and discard the snapshots?
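For the second question, a hedged sketch: qemu-img can flatten a qcow2 chain (snapshots included) into a single raw image, which is effectively preallocated. The paths below are placeholders, and the VM must be down so the chain is quiescent:

    # Inspect the chain first (the backing LVs must be activated):
    $ qemu-img info --backing-chain /dev/vgname/active-layer-lv

    # Flatten the whole chain into one raw image:
    $ qemu-img convert -p -O raw /dev/vgname/active-layer-lv /backup/disk.raw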
Ernest Beinrohr, AXON PRO
DevOps, Ing <http://www.beinrohr.sk/ing.php>, RHCE <http://www.beinrohr.sk/lpic.php>, VCA <http://www.beinrohr.sk/vca.php>
+421-2-6241-0360, +421-903-482-603
icq:28153343, skype:oernii-work
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." (Richard Feynman)