
Hello guys,

Here at the University of Palermo (Italy) we are planning to switch from VMware to oVirt using the hyperconverged solution. Our design is a 6-node cluster, each node with this configuration:

- 1x Dell PowerEdge R7425 server;
- 2x AMD EPYC 7301 processors;
- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, quad rank);
- 2x Broadcom 57412 dual-port 10Gb SFP+ Ethernet cards;
- 3x 600GB 10K RPM SAS for the OS (RAID1 + hot spare);
- 5x 1.2TB 10K RPM SAS for the hosted storage domain (RAID5 + hot spare);
- 11x 2.4TB 10K RPM SAS for the VM data domain (RAID6 + hot spare);
- 4x 960GB SSD SAS for an additional SSD storage domain (RAID5 + hot spare).

Is this configuration supported, or do I have to change something?

Thank you and best regards.
--
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo
Phone: +3909123860056
Fax: +3909123860880

On January 22, 2020 6:46:39 PM GMT+02:00, Benedetto Vassallo <benedetto.vassallo@unipa.it> wrote:
Hi,

Recently it was mentioned that there were some issues with the "too new" EPYC CPUs. For now, you can:

1. Use some old machines for the initial setup of the HostedEngine VM (disable all Spectre/Meltdown mitigations in advance), then add the new EPYC-based hosts and remove the older systems. Sadly, the older systems cannot be too old :)
2. Host the HostedEngine VM on your current VMware environment or on a separate KVM host. Hosting the HostedEngine on bare metal is also OK.
3. Wait (I don't know how long) until the EPYC issues are solved.

Best regards,
Strahil Nikolov
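For what it's worth, the "disable all Spectre/Meltdown in advance" step from point 1 could look roughly like this on an EL7/EL8 host. This is only a sketch: `mitigations=off` needs a kernel new enough to support it, and the individual flags vary by kernel version.

```shell
# Sketch: disable Spectre/Meltdown mitigations on the temporary
# bootstrap hosts before deploying the HostedEngine VM.
# WARNING: this reduces security; only do it on throwaway hosts.

# On kernels that support the combined switch:
grubby --update-kernel=ALL --args="mitigations=off"

# On older kernels the individual flags are (roughly):
# grubby --update-kernel=ALL --args="nopti nospectre_v2 spec_store_bypass_disable=off"

reboot

# After the reboot, check what is still active:
grep . /sys/devices/system/cpu/vulnerabilities/*
```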

Quoting Strahil Nikolov <hunter86_bg@yahoo.com>:
Thank you. Maybe it's better to use Intel processors?

Best regards.
--
Benedetto Vassallo

On January 23, 2020 11:45:37 AM GMT+02:00, Benedetto Vassallo <benedetto.vassallo@unipa.it> wrote:
AMD gives higher memory bandwidth and a better bang-for-buck ratio, with more cores per dollar than Intel, so I wouldn't recommend Intel. I guess you can try installing an older version (4.2.x) and then upgrading the cluster to 4.3.

Good luck, and welcome to oVirt.

Best regards,
Strahil Nikolov

Hi Benedetto,

we have a running cluster with the same machines and a similar configuration; so far we haven't encountered any issues. We're running oVirt 4.3.7.

Greetings,
Paolo

On 22/01/20 17:46, Benedetto Vassallo wrote:
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JU7PVYPSNUASWZ...

Thank you Paolo. Can we keep in contact (in private) to exchange further information? Best regards.

Quoting Paolo Margara <paolo.margara@polito.it>:

You know my mail address ;-)

Greetings,
Paolo

On 23/01/20 11:04, Benedetto Vassallo wrote:
--
LABINF - HPC@POLITO
DAUIN - Politecnico di Torino
Corso Castelfidardo, 34D - 10129 Torino (TO)
phone: +39 011 090 7051
site: http://www.labinf.polito.it/
site: http://hpc.polito.it/

On Wed, Jan 22, 2020, 17:54 Benedetto Vassallo <benedetto.vassallo@unipa.it> wrote:
- 1x Dell PowerEdge R7425 server;
- 2x AMD EPYC 7301 Processor;
- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
- 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
- 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
- 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
The hosted engine storage domain is small and should run only one VM, so you probably don't need 1.2TB disks for it.
- 11x 2.4TB 10KRPM SAS for the vm data domain (Raid6 + hotspare);
- 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 + hotspare);
Hyperconverged oVirt uses Gluster, and Gluster uses replication (replica 3 or replica 2 + arbiter), so adding RAID below it may not be needed. You may use the SSDs as an LVM cache for the Gluster setup. I would ask about this on the Gluster mailing list. Sahina, what do you think?

Nir
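As an illustration of the LVM-cache idea, attaching an SSD cache in front of a spinning-disk Gluster brick could look roughly like this. All device, VG, and LV names below are made up for the example; the oVirt/gdeploy hyperconverged wizard can also configure this for you.

```shell
# Sketch: attach an SSD as an LVM cache to the LV backing a Gluster
# brick. /dev/sdb (HDD array) and /dev/sdc (SSD) are example devices,
# not taken from this thread.

pvcreate /dev/sdb /dev/sdc
vgcreate gluster_vg /dev/sdb /dev/sdc

# Data LV on the spinning disks, cache pool on the SSD:
lvcreate -n brick_lv -L 2T gluster_vg /dev/sdb
lvcreate --type cache-pool -n brick_cache -L 800G gluster_vg /dev/sdc

# Attach the cache pool to the data LV (writethrough is the safer mode):
lvconvert --type cache --cachepool gluster_vg/brick_cache \
          --cachemode writethrough gluster_vg/brick_lv

mkfs.xfs /dev/gluster_vg/brick_lv
mkdir -p /gluster_bricks/data
mount /dev/gluster_vg/brick_lv /gluster_bricks/data
```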

Quoting Nir Soffer <nsoffer@redhat.com>:
Hyperconverged uses gluster, and gluster uses replication (replica 3 or replica 2 + arbiter) so adding raid below may not be needed.
Yes, I know this, but is there a way from the UI to create the storage domain using more than one disk? I can't understand this from the guide available at https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy...
You may use the SSDs for lvm cache for the gluster setup.
That would be great!
I would try to ask on Gluster mailing list about this.
Thank you, I'm waiting for your news.

Best regards.
--
Benedetto Vassallo

I believe you would have to either combine the drives with RAID or LVM so they're presented as one device, or just create multiple storage domains.

On Fri, Jan 24, 2020 at 5:41 AM Benedetto Vassallo <benedetto.vassallo@unipa.it> wrote:
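The "combine the drives with LVM so they're presented as one device" option could be sketched like this (hypothetical device names; a hardware RAID volume achieves the same thing from the controller instead):

```shell
# Sketch: join several physical disks into a single logical volume that
# Gluster (or the oVirt deployment wizard) then sees as one device.
# /dev/sdd, /dev/sde, /dev/sdf are example devices, not from the thread.

pvcreate /dev/sdd /dev/sde /dev/sdf
vgcreate data_vg /dev/sdd /dev/sde /dev/sdf

# One LV spanning all free space in the volume group:
lvcreate -n data_lv -l 100%FREE data_vg

mkfs.xfs /dev/data_vg/data_lv
```

Note that plain concatenation like this has no redundancy of its own; with Gluster replica 3 the redundancy comes from replication across hosts instead.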
participants (5)
- Benedetto Vassallo
- Jayme
- Nir Soffer
- Paolo Margara
- Strahil Nikolov