On 25 May 2016 at 12:44, Eyal Edri <eedri(a)redhat.com> wrote:
OK,
I suggest testing with a VM on a local disk (preferably on a host with
an SSD configured); if it's working, let's expedite moving all VMs, or
at least a large number of them, to it until we see the network load
reduced.
This is not that easy: oVirt doesn't support mixing local disks and
shared storage in the same cluster, so we would need to move hosts to a
new cluster for this.
We would also lose the ability to use templates, or otherwise have to
create the templates on each and every local disk.
The scratch disk is a good solution for this: the OS image stays on the
central storage while the ephemeral data lives on the local disk.
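Once a VM has such a locally-backed second disk, moving the ephemeral
data onto it is straightforward. A minimal sketch from inside the
guest, assuming the scratch disk shows up as /dev/vdb and that the
Jenkins workspace is what we want off the central storage (device name
and mount point are just for illustration):

    mkfs.xfs /dev/vdb                             # format the scratch disk
    mkdir -p /home/jenkins
    mount /dev/vdb /home/jenkins                  # workspaces now live on the local disk
    echo '/dev/vdb /home/jenkins xfs defaults 0 0' >> /etc/fstab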
WRT the storage architecture - a single huge (10.9T) ext4 filesystem is
used on top of the DRBD device. This is probably not the most efficient
thing one can do (XFS would probably have been better; raw LUNs via
iSCSI - better still).
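(For whoever wants to look at the current layout, the obvious checks
are something like:

    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # shows the disk -> DRBD -> ext4 stack
    df -hT /srv/ovirt_storage                   # one huge ext4 mount
)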
I'm guessing that those 10.9TB are not a single disk but a hardware
RAID of some sort. In that case, deactivating the hardware RAID and
re-exposing the underlying disks as multiple separate iSCSI LUNs (that
are then re-joined into a single storage domain in oVirt) will enable
different VMs to concurrently work on different disks. This should
lower the per-VM storage latency.
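Very roughly, on the storage machine that could look something like the
following with targetcli (device paths and the IQN are made up, and I'm
ignoring the DRBD layer and ACL/portal setup for simplicity):

    targetcli /backstores/block create name=lun0 dev=/dev/sda
    targetcli /backstores/block create name=lun1 dev=/dev/sdb
    targetcli /iscsi create iqn.2016-05.org.ovirt.storage:jenkins
    targetcli /iscsi/iqn.2016-05.org.ovirt.storage:jenkins/tpg1/luns \
        create /backstores/block/lun0
    targetcli /iscsi/iqn.2016-05.org.ovirt.storage:jenkins/tpg1/luns \
        create /backstores/block/lun1

oVirt can then attach all of those LUNs to a single iSCSI storage
domain.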
Looking at the storage machine I see strong indications that it is
IO-bound - the load average is ~12 while there are just 1-5 running
processes, and the CPU is ~80% idle with most of the rest being IO
wait.
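(For whoever wants to double-check, the usual suspects are enough:

    uptime              # load average
    vmstat 5 3          # the 'wa' column is IO wait
    iostat -x 5 3       # per-device utilization and await, needs sysstat
)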
Running 'du *' at:
/srv/ovirt_storage/jenkins-dc/658e5b87-1207-4226-9fcc-4e5fa02b86b4/images
one can see that most images are ~40G in size (that is a _real_ 40G,
not sparse!). This means that despite most VMs being created from
templates, the VMs are full template copies rather than COW clones.
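A quick way to tell a full copy from a thin COW clone for any given
image file there (illustrative):

    du -h --apparent-size <image-file>   # nominal size as the guest sees it
    du -h <image-file>                   # blocks actually allocated on the FS
    qemu-img info <image-file>           # prints a "backing file:" line only for COW clones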
What this means is that using pools (where all VMs are COW copies of a
single pool template) is expected to significantly reduce the storage
utilization and therefore the IO load on it (the less you store, the
less you need to read back).
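At the image level a COW clone is just a small qcow2 overlay on top of
the (read-only) template volume, e.g. (a sketch, not the exact layout
oVirt creates):

    qemu-img create -f qcow2 -b template-volume.raw -F raw vm-disk.qcow2
    qemu-img info vm-disk.qcow2    # a few hundred KB until the VM starts writing

Only the blocks a VM actually changes take up space, and the shared
template blocks can be read (and cached) once instead of ~40G per VM.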
--
Barak Korren
bkorren(a)redhat.com
RHEV-CI Team