recommendations for best performance and reliability
by Rudi Ahlers
Hi,
Can someone please give me some pointers, what would be the best setup for
performance and reliability?
We have the following hardware setup:
3x Supermicro server with following features per server:
128GB RAM
4x 8TB SATA HDD
2x SSD drives (Intel SSDSC2BA400G4 - 400GB DC S3710)
2x 12-core CPUs (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz)
Quad-port 10GbE Intel NIC
2x 10Gb Cisco switches (to isolate the storage network from the LAN)
One of the servers will be in another office, with a 600Mb wireless link
for Disaster Recovery.
What is recommended for the best setup in terms of redundancy and speed?
I am guessing GlusterFS with a Distributed Striped Replicated Volume across
3 of the servers.
For added performance I want to use the SSD drives, perhaps with dm-cache?
Should I combine the 4x HDDs using LVM on each host node?
What about RAID 6?
Virtual Machines will then reside on the oVirt cluster, and any one of the 3
host nodes can fail, or any single HDD can fail, and all should still work,
right?
--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za
Re: [ovirt-users] how to redo oVirt cluster?
by Luca 'remix_tj' Lorenzetto
Usually the DB is inside the engine VM. It's a PostgreSQL database. Deleting it is
not a problem; the engine installation tools should recreate it.
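For reference, a minimal sketch of the usual reset sequence on the engine machine,
assuming the standard ovirt-engine-setup tooling is installed (check the docs for
your exact version before running it):

  # on the engine machine (or inside the hosted-engine VM)
  engine-cleanup            # removes the engine configuration and its PostgreSQL DB
  yum remove ovirt-engine   # optional: remove the packages as well
  engine-setup              # re-run setup for a fresh engine and a new DB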
On 14 Nov 2017 8:05 AM, "Rudi Ahlers" <rudiahlers(a)gmail.com> wrote:
> Hi Luca,
>
> Where is the engine DB stored? Can I simply delete it, or would that
> compromise a reinstallation? Though I can't imagine it would be a problem.
>
> On Tue, Nov 14, 2017 at 8:58 AM, Luca 'remix_tj' Lorenzetto <
> lorenzetto.luca(a)gmail.com> wrote:
>
>> Hello Rudi,
>>
>> I think that uninstalling ovirt-engine and removing vdsm from the hosts
>> should be enough. Pay attention to cleaning up the engine DB, which contains
>> all the engine data.
>>
>> Luca
>>
>>
>> On 14 Nov 2017 7:20 AM, "Rudi Ahlers" <rudiahlers(a)gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have set up an oVirt cluster and done some tests. But how do I redo
>>> everything, without reinstalling CentOS as well?
>>> Would it be as simple as uninstalling all the oVirt packages? Or do I need to
>>> manually delete some config files and other traces of the install as well?
>>>
>>> --
>>> Kind Regards
>>> Rudi Ahlers
>>> Website: http://www.rudiahlers.co.za
>>>
>>>
>>>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
how to redo oVirt cluster?
by Rudi Ahlers
Hi,
I have set up an oVirt cluster and done some tests. But how do I redo
everything, without reinstalling CentOS as well?
Would it be as simple as uninstalling all the oVirt packages? Or do I need to
manually delete some config files and other traces of the install as well?
--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za
Re: [ovirt-users] iSCSI domain on 4kn drives
by Martijn Grendelman
On 7-8-2016 at 8:19, Yaniv Kaul wrote:
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman
> <martijn.grendelman(a)isaac.nl> wrote:
>
> On 4-8-2016 at 18:36, Yaniv Kaul wrote:
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman
>> <martijn.grendelman(a)isaac.nl> wrote:
>>
>> Hi,
>>
>> Does oVirt support iSCSI storage domains on target LUNs using
>> a block
>> size of 4k?
>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>
> Is this on the roadmap?
>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
I don't think I ever replied to this, but I can confirm that in RHEV 3.6
it works with NFS.
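For reference, a quick generic Linux check of what block size a LUN actually
exposes (not oVirt-specific; "sdX" is just a placeholder for the device):

  cat /sys/block/sdX/queue/logical_block_size
  cat /sys/block/sdX/queue/physical_block_size
  # or, equivalently:
  blockdev --getss --getpbsz /dev/sdX

A 512e drive reports 512/4096 here, while a true 4Kn LUN reports 4096 for both.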
Best regards,
Martijn.
Host in kdumping state
by Davide Ferrari
Hello
I've got a faulty host that keeps rebooting itself from time to time (due to
HW issues). It is/was part of the group of 3 hosts hosting the HostedEngine,
and it now always appears as "Kdumping" in the web administration panel.
All my hosts run oVirt 4.1 on CentOS 7.3 with GlusterFS 3.7, except this one,
which was updated by mistake to CentOS 7.4 with GlusterFS 3.8.
Is this due to the different OS/Gluster version? How can I "reset" it? I want
to remove it permanently and assign the HostedEngine role to another host.
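(For reference, a hedged sketch of how the failing host could be taken out of
the hosted-engine rotation with the standard hosted-engine CLI, before removing
it from the cluster; to be verified against the oVirt 4.1 docs:)

  # on any healthy HE host: check which hosts can currently run the engine
  hosted-engine --vm-status
  # on the failing host (vm03): stop it from being considered for the engine
  hosted-engine --set-maintenance --mode=local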
Moreover, the main glusterfs volume, the one which holds the HE image,
has some bricks on this failing machine (vm03):
# gluster volume status data_ssd
Status of volume: data_ssd
Gluster process                                            TCP Port  RDMA Port  Online  Pid
---------------------------------------------------------------------------------------------
Brick vm01.storage.billy:/gluster/ssd/data/brick           49156     0          Y       6039
Brick vm02.storage.billy:/gluster/ssd/data/brick           49153     0          Y       99097
Brick vm03.storage.billy:/gluster/ssd/data/arbiter_brick   49159     0          Y       5325
Brick vm03.storage.billy:/gluster/ssd/data/brick           N/A       N/A        N       N/A
Brick vm04.storage.billy:/gluster/ssd/data/brick           49152     0          Y       14811
Brick vm02.storage.billy:/gluster/ssd/data/arbiter_brick   49154     0          Y       99104
Self-heal Daemon on localhost                              N/A       N/A        Y       6753
Self-heal Daemon on vm01.storage.billy                     N/A       N/A        Y       79317
Self-heal Daemon on vm02.storage.billy                     N/A       N/A        Y       41778
Self-heal Daemon on vm04.storage.billy                     N/A       N/A        Y       125116
What's the best way to replace them? Is this guide still useful?
https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-gl...
(I guess so)
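(For reference, the usual Gluster-level way to swap out the dead brick is
"replace-brick ... commit force" followed by a heal; a hedged sketch with
placeholder brick paths, to be checked against the GlusterFS 3.7/3.8 docs:)

  # replace the failed brick on vm03 with a freshly prepared, empty brick directory
  gluster volume replace-brick data_ssd \
      vm03.storage.billy:/gluster/ssd/data/brick \
      vm03.storage.billy:/gluster/ssd/data/brick_new \
      commit force
  # then let self-heal copy the data back onto the new brick
  gluster volume heal data_ssd full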
Thanks!
--
Davide Ferrari
Lead System Engineer
Billy Performance Network
oVirt 4.1 - "Create virtual disk - thin provisioning" makes RAW by default not QCOW2
by andreil1
Hi,
Just got a test setup of oVirt 4.1 running.
It seems to be a really brilliant piece of software.
1) Defined a local storage domain.
"Create virtual disk - thin provisioning" makes RAW images by default instead of QCOW2,
and it preallocated all the space in advance.
I looked inside the META file in the same folder as the disk image - it is marked as RAW.
Maybe it was converted to RAW because I formatted it as 3 partitions: SWAP + Boot Ext4 + Root Ext4?
Should I use a single Ext4 partition per virtual disk image to keep it QCOW2?
Is there any way to change this behaviour, or to convert RAW into QCOW2 within oVirt? Certainly I can do it with QEMU, but then the image becomes unregistered, or maybe some functionality breaks.
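(For what it's worth, the offline conversion itself is simple with qemu-img while
the VM is down; the file names below are placeholders, and the converted disk would
still have to be registered/imported back into oVirt afterwards:)

  qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2
  qemu-img info disk.qcow2   # verify the resulting format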
2) SWAP partition - is it better to create a small 1GB swap as a RAM disk, or to use a 4GB partition on the virtual RAW disk, or am I wrong here?
Thanks in advance.
Andrei
Re: [ovirt-users] recommendations for best performance and reliability
by FERNANDO FREDIANI
Hello Rudi
If you have a 4th server it may work, but I am not knowledgeable about
Gluster's geo-replication. Perhaps someone else can advise.
With regard to RAID 5 for 4 disks: this is intended for capacity, and as
mentioned, 4 disks is the maximum I would use for RAID 5. As you have SSDs
and intend to use a caching technique like bcache or dm-cache, this
should cover the write performance hit of RAID 5.
If you are considering RAID 6 with only 4 disks, you are much better off
using RAID 10, as you will have double the write performance and the same
capacity. I normally use RAID 10, or in ZFS configurations (not this
case) multiple RAID 6 vdevs.
For RAID in Linux I have been using mdraid. I know LVM does some RAID,
but I have personally never done it myself, so I can't advise.
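(A minimal sketch of what that could look like with mdraid plus LVM's dm-cache,
assuming the 4 HDDs are sdb-sde and the 2 SSDs are sdf-sdg; device names, sizes
and VG/LV names are placeholders, not a tested recipe:)

  # RAID 10 across the four 8TB HDDs
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # LVM on top, with the SSDs attached as a writethrough dm-cache pool
  pvcreate /dev/md0 /dev/sdf /dev/sdg
  vgcreate vg_data /dev/md0 /dev/sdf /dev/sdg
  lvcreate -n data -l 100%PVS vg_data /dev/md0
  lvcreate -n cache -L 700G vg_data /dev/sdf /dev/sdg
  lvcreate -n cache_meta -L 1G vg_data /dev/sdf /dev/sdg
  lvconvert --type cache-pool --cachemode writethrough \
      --poolmetadata vg_data/cache_meta vg_data/cache
  lvconvert --type cache --cachepool vg_data/cache vg_data/data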
Regards.
Fernando
On 13/11/2017 10:19, Rudi Ahlers wrote:
> Hi Fernando,
>
> Thanx.
>
> I meant to say, the 4th server will be in another office. It's about
> 3Km away and I was thinking of using Gluster's GEOreplication for this
> purpose.
>
> I am not a fond user of RAID 5 at all. But this raises the question:
> does RAID add any unnecessary overhead? I would rather run RAID 6 or
> RAID 10.
> And then, if RAID is the preferred way (over LVM?), as I don't have
> dedicated hardware RAID cards, would mdraid add any benefit?
>
> On Mon, Nov 13, 2017 at 1:47 PM, FERNANDO FREDIANI
> <fernando.frediani(a)upx.com> wrote:
>
> Hello Rudi
>
> Nice specs.
>
> I wouldn't use GlusterFS for this setup with the third server in a
> different location. Just have that server as a standalone and
> replicate the VMs there. You won't have real-time replication, but
> much less hassle, and you avoid the constant failures you would
> probably get over a wireless link.
>
> For the SSDs I have been using bcache with success. Relatively
> simple to set up, with pretty good performance.
>
> For your specs as you have 4 mechanical disks I would recommend
> you to have a RAID 5 between them (4 disks is my limit for RAID 5)
> and a RAID 0 made of SSDs for the bcache device. If the RAID 0
> fails for any reason it will fall back directly to the mechanical
> disks and you can do maintenance on the Node doing live migration
> in order to replace the failed disks.
>
> However, as you have 2 remaining servers to create your
> cluster, you may need to consider GlusterFS on top of this RAID
> to have replication and high availability.
>
> Hope it helps.
>
> Fernando
>
>
> On 13/11/2017 08:03, Rudi Ahlers wrote:
>> Hi,
>>
>> Can someone please give me some pointers, what would be the best
>> setup for performance and reliability?
>>
>> We have the following hardware setup:
>>
>> 3x Supermicro server with following features per server:
>> 128GB RAM
>> 4x 8TB SATA HDD
>> 2x SSD drives (Intel SSDSC2BA400G4 - 400GB DC S3710)
>> 2x 12-core CPUs (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz)
>> Quad-port 10GbE Intel NIC
>> 2x 10Gb Cisco switches (to isolate the storage network from the LAN)
>>
>> One of the servers will be in another office, with a 600Mb
>> wireless link for Disaster Recovery.
>>
>> What is recommended for the best setup in terms of redundancy and
>> speed?
>>
>> I am guessing GlusterFS with a Distributed Striped Replicated
>> Volume across 3 of the servers.
>>
>> For added performance I want to use the SSD drives, perhaps with
>> dm-cache?
>>
>> Should I combine the 4x HDD's using LVM on each host node?
>> What about RAID 6?
>>
>>
>>
>> Virtual Machines will then reside on the oVirt cluster and any
>> one of the 3 host nodes can fail, or any single HDD can fail, and
>> all should still work, right?
>>
>>
>>
>>
>> --
>> Kind Regards
>> Rudi Ahlers
>> Website: http://www.rudiahlers.co.za
>>
>>
>
>
>
>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine
by Open tech
Hi All,
I am new to oVirt. I am hitting the exact same error while trying a new
install in a nested virtualization setup on ESXi 6.5.
I am following this tutorial as well. I have three nodes on ESXi with dual
networks and passwordless SSH enabled.
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-glus...
The node install goes through without issue. I run into this error when I hit
deploy:
TASK [Run a shell script]
******************************************************
fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpbDBjAt/run-script.retry
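(For what it's worth, this particular message usually means the "Run a shell
script" task never produced a return code at all - the registered result has no
"rc" - rather than the script failing; one hedged first check is that the script
referenced in the generated gdeploy configuration actually exists and parses on
every node. The path below is a placeholder taken from your own gdeploy.conf:)

  SCRIPT=/path/from/your/gdeploy.conf   # e.g. the path in the [script1] section
  for h in ovirt1 ovirt2 ovirt3; do
      ssh root@"$h" "ls -l $SCRIPT && bash -n $SCRIPT"
  done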
@Simone Marchioni, were you able to find a solution?
Thanks
hk