Hi,
I made the changes above on the server (adding enterprise disks for the
storage domains), and there are improvements. But...
I just had a VM go into "VM paused due to I/O error" :( .
This VM has both of its disks thin-provisioned, and it runs as an OpenShift
node (CentOS 7).
Any chance that the problem is that the disks are not preallocated and the
filesystem writes are too intensive?
Getting a bit worried, since this stack is supposed to serve as a
production system...
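If preallocation turns out to be the issue, I guess I could shut a VM down
and convert its image offline with qemu-img, something like this (just a
sketch, the paths are made up - on oVirt the supported way would be to
copy/move the disk with "Preallocated" selected in the UI):

    # convert a thin (sparse) qcow2 into one with preallocated space;
    # falloc reserves the blocks via fallocate(), "full" writes real zeroes
    qemu-img convert -p -O qcow2 -o preallocation=falloc \
        /path/to/thin-disk.qcow2 /path/to/prealloc-disk.qcow2
    qemu-img info /path/to/prealloc-disk.qcow2   # verify the result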
Thank you very much!
On Mon, Jul 16, 2018 at 3:59 PM, Andrei Verovski <andreil1(a)starlett.lv>
wrote:
Hi,
Looks almost good, see comments below.
On 16 Jul 2018, at 13:48, Leo David <leoalex(a)gmail.com> wrote:
Thank you very much, really helpful.
So I will have:
2 x SanDisk SSD Plus 240GB (still consumer grade) for the OS
Run continuous (2+ hours, random + sequential read/write) iozone tests (in
a loop) before launching the system on this SSD RAID.
You may be lucky, or you may not.
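Something along these lines (a rough sketch - tune the sizes to your RAM
and workload, the mount point is just an example):

    # ~20 passes of sequential write/read + random read/write, direct I/O
    # -I = O_DIRECT (bypass page cache), -s = file size, -r = record size
    for i in $(seq 1 20); do
        iozone -I -i 0 -i 1 -i 2 -s 8g -r 64k -f /mnt/ssd-raid/iozone.tmp
    done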
2 x Samsung SM863a (DC grade) 1.9TB for creating 1 x RAID1 SSD volume
4 x Dell 2.4TB spinning disks (DC grade) for creating 1 x RAID10 spinning
volume
1 x NVMe card for a quick / non-critical fast volume
- I would go for Gluster, since there is a good chance we will extend the
cluster in the near future.
You can also set up a shared volume on (at least) 2 x external SAN, NAS,
NFS servers, whatever.
Any thoughts on this config?
Any thoughts on configuring the PERC controller to be Gluster-optimized
(for replicated / distributed-replicated volumes)?
Don’t think there are gluster-optimized settings.
Run iozone tests to find optimal settings depending on your workload.
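If you want to experiment, the usual knobs are the virtual drive cache
policies; with perccli installed it would look roughly like this (assuming
your virtual drive is /c0/v0 - check with "show" first):

    perccli /c0/v0 show all          # current wrcache / rdcache settings
    perccli /c0/v0 set wrcache=wb    # write-back (only with a healthy BBU)
    perccli /c0/v0 set rdcache=ra    # enable read-ahead

Then re-run the same iozone test after each change and compare the numbers.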
Again, thank you very much!
On Mon, Jul 16, 2018 at 1:11 PM, Andrei Verovski <andreil1(a)starlett.lv>
wrote:
> Hi, Leo,
>
> I would recommend the following configuration:
> RAID1 120GB for the OS and oVirt software
> RAID10 (GB whatever you need) for VM data
> I tested RAID5; it appears slower than RAID10.
>
> Please note that consumer SSDs may appear to work with SAS RAID
> controllers, yet actually they DO NOT!
> Load them with an IOZONE stress test and you will see a complete freeze
> and, in case the OS is installed on these SSDs, an unbootable dead system.
>
> I tested consumer WD and KingFast SATA SSDs, with both HP and
> 3Ware/Broadcom SAS RAID cards; all of them failed the IOZONE stress test.
> BTW, you can buy 120GB used SAS server disks on eBay for something close
> to nothing.
> Conventional (mechanical) SATA hard drives work fine with SAS RAID cards
> (at least in my case).
>
> For a single server you don’t need GlusterFS; NFS is enough.
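> If you go the NFS route, note that oVirt expects the export to be owned
> by vdsm:kvm (36:36); a minimal /etc/exports line would look something
> like this (the path is just an example):
>
>     /srv/ovirt-data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> plus "chown 36:36 /srv/ovirt-data" and "exportfs -ra" to apply it.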
>
>
> > On 16 Jul 2018, at 09:27, Leo David <leoalex(a)gmail.com> wrote:
> >
> > Hello everyone,
> > Based on your experience or well-known best practices, can you provide
> > me with advice on RAID configuration?
> > I have a single server that will have a couple of brand-new
> > enterprise-grade SSDs, spinning disks, and one PCI NVMe card.
> > The server (Dell PE R630) comes with a PERC H730P RAID controller with
> > 2GB cache.
> > I am thinking of creating a RAID1 array with the 2 SSDs and one RAID10
> > array with the rest of the spinning HDDs, then a different Gluster
> > volume on each, to provide me with 2 (SSD / SATA) storage domains.
> > What do you think of having RAID1 / RAID10 as the underlying HA storage
> > for Gluster volumes?
> > At the moment I have some consumer devices (Samsung EVO & Seagate
> > spinning, shingled type), and about every day the VMs get into "VM has
> > been paused due to storage IO errors" - I am thinking because of the
> > unsuitable type of HDDs.
> > Any thoughts on these?
> > Thank you,
> >
> > Leo
> > --
> > Best regards, Leo David
>
>
--
Best regards, Leo David