I'm trying to find a way to clean up the list of VMs in my DCs.
I think some of my users have created VMs they're not using anymore, but
it's difficult to sort them out.
In some cases, I can shut some of them down and wait.
Is the date of the last time each VM was started stored somewhere in the db tables?
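If so, I imagine something along these lines would give me a report. This is only a guess at the schema: the audit_log table and its vm_name/log_time columns are from memory, so please correct the names if they differ on your version.

# On the engine host, as the postgres user (table/column names are a guess):
su - postgres -c 'psql -d engine -c "
  SELECT vm_name, MAX(log_time) AS last_event
    FROM audit_log
   WHERE vm_name IS NOT NULL
   GROUP BY vm_name
   ORDER BY last_event;"'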
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).
To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.
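In the meantime, this is what I was planning to run next to narrow it down; the option group and commands are taken from the Gluster 3.10 docs as I understand them, so please tell me if any of this is wrong or pointless:

# Check current options and apply the stock virt option group (if not already applied):
gluster volume get vmssd all
gluster volume set vmssd group virt
# Gather per-brick latency stats while a VM is doing IO, then inspect them:
gluster volume profile vmssd start
gluster volume profile vmssd info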
I'm having some issues with my oVirt 4.1 (fully updated to latest
release as of yesterday) cluster. When I clone a VM the disks of both
the original and the clone stay in the locked state, and the only way I
can resolve it is to go into the database on the engine and run "update
images set imagestatus=1 where imagestatus=2;"
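Is the dbutils helper shipped with the engine the supported way to clear this instead of raw SQL? I'm thinking of something like the following, but the exact flags are a guess on my part, so I'd check --help first:

# Assumed location and flags -- verify with --help before trusting this:
cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh --help
./unlock_entity.sh -q -t disk    # query which disks are currently locked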
I'm using NFS4 as a datastore and the disks seem to copy fine (file
sizes match and everything), but the locking worries me. To clone the
VM I just shut the source VM down and then right-click on it and select "Clone VM".
I've attached the full VDSM log from my last attempt, but here is the
excerpt of the lines just referencing the two disks
(d73206ed-89ba-48a9-82ff-c107c1af60f0 is the original VM's disk and
670a7b20-fecd-45c6-af5c-3c7b98258224 is the clone.)
2017-05-10 11:36:20,120-0300 INFO (jsonrpc/2) [dispatcher] Run and
volFormat=5, preallocate=2, postZero=u'false', force=u'false',
2017-05-10 11:36:20,152-0300 INFO (jsonrpc/2) [storage.Image] image
d73206ed-89ba-48a9-82ff-c107c1af60f0 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist
2017-05-10 11:36:20,169-0300 INFO (jsonrpc/2) [storage.Image]
chain=[<storage.fileVolume.FileVolume object at 0x278e450>] (image:249)
2017-05-10 11:36:20,292-0300 INFO (tasks/0) [storage.Image]
chain=[<storage.fileVolume.FileVolume object at 0x302c6d0>] (image:249)
2017-05-10 11:36:20,295-0300 INFO (tasks/0) [storage.Image]
dstSdUUID=20423d5e-188c-4e10-9893-588ceb81b354 volType=8 volFormat=RAW
preallocate=SPARSE force=False postZero=False discard=False (image:765)
2017-05-10 11:36:20,305-0300 INFO (tasks/0) [storage.Image] copy source
vol size 41943040 destination
apparentsize 41943040 (image:815)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] image
670a7b20-fecd-45c6-af5c-3c7b98258224 in domain
20423d5e-188c-4e10-9893-588ceb81b354 has vollist  (image:319)
2017-05-10 11:36:20,306-0300 INFO (tasks/0) [storage.Image] Create
for image's volumes (image:149)
2017-05-10 11:36:20,392-0300 INFO (tasks/0) [storage.Volume] Request to
create RAW volume
with size = 20480 sectors (fileVolume:439)
2017-05-10 11:37:58,453-0300 INFO (tasks/0) [storage.VolumeManifest]
imgUUID=670a7b20-fecd-45c6-af5c-3c7b98258224 volUUID =
013d8855-4e49-4984-8266-6a5e9437dff7 legality = LEGAL (volume:393)
I'm going to try to update an HC environment to 4.1; it is currently on 4.0 with
3 nodes running CentOS 7.3, one of them configured as arbiter.
Are there any particular caveats in HC?
Are the steps below, normally used for Self Hosted Engine environments, the
only ones to consider? (I've sketched the engine-side commands right after the list.)
- update repos on the 3 hosts and on the engine vm
- global maintenance
- update engine
- update also os packages of engine vm
- shutdown engine vm
- disable global maintenance
- verify engine vm boots and functionality is ok
- update hosts: is the preferred way from the GUI itself, which takes care
of migrating VMs, maintenance and such, or should I proceed manually?
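To be concrete, this is roughly what I intend to run for the engine-side steps, adapted from the Self Hosted Engine upgrade procedure as I understand it; please correct me if anything differs for HC:

# On one of the hosts: enter global maintenance
hosted-engine --set-maintenance --mode=global

# On the engine VM: update setup packages, re-run setup, then the remaining OS packages
yum update "ovirt*setup*"
engine-setup
yum update
shutdown -h now

# Back on a host: leave global maintenance so the HA agent starts the engine VM again,
# then check that it comes up
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status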
Is there a preferred order in which I have to update the hosts, after
updating the engine? Arbiter first, last, or is it not important at all?
Any possible problem having mismatched versions of glusterfs packages until
I complete all 3 hosts? Any known bugs going from 4.0 to 4.1 and the
related glusterfs components?
Thanks in advance,
Is the maintainer of ovirt-guest-agent for Ubuntu on this mailing list?
I have noticed that if you install the ovirt-guest-agent package from the Ubuntu
repositories it doesn't start. It throws an error about python and never
starts. Has anyone noticed the same? The OS in this case is a clean
minimal install of Ubuntu 16.04.
Installing it from the following repository works fine -
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04...
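In case it helps anyone reproduce it, this is roughly how I checked; the service name is the one I believe the package installs (ovirt-guest-agent):

apt-get install ovirt-guest-agent
systemctl status ovirt-guest-agent
journalctl -u ovirt-guest-agent --no-pager    # this is where the python error shows up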
To increase network throughput we have changed the txqueuelen of the network device
and the bridge manually, and observed improved throughput.
But in oVirt I don't see any option to increase txqueuelen.
Can someone suggest the right way to increase throughput?
Note: I am trying to increase throughput for ipsec packets.
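For reference, this is what we currently do by hand; the device names are just examples from our setup, and the udev variant is my guess at how to make it persistent:

# Manual changes after boot (device/bridge names are examples):
ip link set dev eth0 txqueuelen 10000
ip link set dev ovirtmgmt txqueuelen 10000

# Possible persistent variant via udev (attribute as exposed in /sys/class/net/*/tx_queue_len):
# /etc/udev/rules.d/60-txqueuelen.rules
# ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth0", ATTR{tx_queue_len}="10000"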
Hi all,
we have a 3 node cluster with this configuration:
oVirt 4.1 with 3 nodes, hyperconverged with gluster. 2 nodes are "fully
replicated" and 1 node is the arbiter.
Now we have a new server to add to the cluster, and we want to add this new
server and remove the arbiter (or, make this new server a "fully replicated"
node in a gluster that keeps the arbiter role? I don't know).
Can you please help me understand the right way to do this? Or can
you give me any doc or link that explains the steps?
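From reading the gluster docs my guess is something like the command below, with invented hostnames/paths, but I don't know whether the new brick would then simply inherit the arbiter role or whether there is more to it:

# Pure guess -- replace the arbiter brick with a brick on the new server?
gluster volume replace-brick myvol \
    arbiter-host:/gluster/brick/myvol new-host:/gluster/brick/myvol \
    commit force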
Thank you in advance!
I have oVirt 3.5.3-1-1.el6 with two kinds of storage, NFS and Fibre
Channel. Each hypervisor
has an HBA controller, and the data cluster has four storage domains; one of them is DATA_FC.
Does anybody know how I can extend DATA_FC? What is the best practice?
Extend the LUN on the HP MSA controller, or add a new LUN?
If I extend the LUN on the HP MSA, will oVirt see the new size?
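If extending the existing LUN is the way to go, is something like the following needed on each hypervisor, or does oVirt take care of the rescan itself? The device names here are just placeholders:

# Guessed host-side steps after growing the LUN on the MSA (one rescan per path device):
echo 1 > /sys/block/sdX/device/rescan
multipathd resize map mpathN    # or multipathd -k"resize map mpathN" on older builds
# ...and then? Does the SPM notice the new size, or is a manual pvresize needed?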
Thanks a lot.
I've migrated from a bare-metal engine to a hosted engine. There were
no errors during the install, however, the hosted engine did not get
started. I tried running:
on the host I deployed it on, and it returns nothing (exit code is 1
however). I could not ping it either. So I tried starting it via
'hosted-engine --vm-start' and it returned:
Virtual machine does not exist
But it then became available. I logged into it successfully. It is not
in the list of VMs however.
Any ideas why the hosted-engine commands fail, and why it is not in
the list of virtual machines?
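For what it's worth, this is what I'm checking on the host now; I can post the output if it helps:

# State of the hosted-engine HA services and the engine VM as seen from this host:
systemctl status ovirt-ha-agent ovirt-ha-broker
hosted-engine --vm-status
journalctl -u ovirt-ha-agent --since today --no-pager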
Thanks for any help,