

On Sat, Jan 5, 2019 at 8:12 AM <michael@wanderingmad.com> wrote:
Following up on the previous thread about Gluster issues: things seem to be running much better than before, and that has raised a few questions that I can't seem to find any answers to.
Setup: 3 hyperconverged nodes. Each node has 1x 1TB SSD and 1x 500GB NVMe drive, and each node is connected via Ethernet and also by a 40Gb InfiniBand connection for the Gluster replication.
Questions:
1. I created a 3TB VDO drive on the SSD and a 1.3TB VDO drive on the NVMe drive, with a 1GB cache on each server, and I enabled the RDMA transport.
a. Did I lose anything by doing the whole process manually? Since I did it that way, things seem to run MUCH better so far: when I reboot a node, Gluster resyncs almost instantly, and storage also seems faster.
By manual process, I assume you mean that you did not use the Cockpit UI to create the bricks and Gluster volumes? You should review the volume options that are set on your volumes.
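As a minimal sketch of how to review them (assuming a volume named "data"; substitute your own volume names), the options currently applied can be listed with:

    # show volume details and the options that have been explicitly set
    gluster volume info data

    # show every option, including defaults that were never explicitly set
    gluster volume get data all

The output can then be compared against the options recommended for virtualization workloads, for example the "virt" option group that oVirt deployments typically apply (gluster volume set data group virt).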
b. Is there a way to split out the Gluster traffic from the normal oVirt traffic (VM and cluster communication)? Originally I had separate names for each node for the Gluster network and it WAS split, but whenever I reset a Gluster node the name got changed. What I have done now is just put a hosts file on each node that sends all traffic over the InfiniBand, but I don't feel like that's optimal. I was unable to add the separate Gluster names in the configuration, as they were not part of the existing cluster.
You can split the Gluster traffic by using a different address (on the Gluster network) to peer probe the Gluster servers, and use the ovirtmgmt interface address to add the hosts to the engine.
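A minimal sketch of that approach, assuming placeholder names like host1-gluster, host2-gluster and host3-gluster that resolve to the InfiniBand addresses:

    # from one node, probe the peers by their gluster-network names
    gluster peer probe host2-gluster
    gluster peer probe host3-gluster

    # confirm the peers are registered with the gluster-network addresses
    gluster peer status

The bricks are then created using those gluster-network names, while the hosts are added to the engine with their ovirtmgmt hostnames, so management and VM traffic stays on Ethernet and brick replication goes over InfiniBand.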
c. Why does the hyperconverged wizard not let you adjust the VDO cache size, the transport type, or use VDO to "over-subscribe" the drive sizes like I did manually?
This is a bug that we are tracking, with a fix planned for 4.3.
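For reference, a rough sketch of the manual equivalent, with placeholder names and assuming /dev/sdb is the 1TB SSD (the flags shown are standard vdo manager options; sizes would be adjusted to your hardware):

    # over-subscribe the drive: 1TB physical exposed as 3TB logical,
    # with a 1GB block map cache instead of the default
    vdo create --name=vdo_ssd --device=/dev/sdb \
        --vdoLogicalSize=3T --blockMapCacheSize=1G

    # bricks carved from the VDO device can then back a volume
    # created with RDMA enabled alongside TCP
    gluster volume create data replica 3 transport tcp,rdma \
        host1-gluster:/gluster_bricks/data/data \
        host2-gluster:/gluster_bricks/data/data \
        host3-gluster:/gluster_bricks/data/data

Logical size, block map cache size and transport are exactly the knobs that the wizard does not currently expose.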
2. I want to add more space in the near future: would it be better to create a RAID0 with the new drives, or just use them as another separate storage location?
Both are valid options. Creating a new volume would mean that you may have to move some of the virtual disks to the newly created volume to free up space.
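A sketch of the separate-storage option, with placeholder names and brick paths on the new drives:

    # create an additional replica-3 volume on bricks from the new drives
    gluster volume create data2 replica 3 transport tcp,rdma \
        host1-gluster:/gluster_bricks/data2/data2 \
        host2-gluster:/gluster_bricks/data2/data2 \
        host3-gluster:/gluster_bricks/data2/data2
    gluster volume start data2

The new volume is then added in the engine as a separate storage domain, and individual virtual disks can be moved to it from the Administration Portal if the original domain runs short on space.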

Initially I used the Gluster wizard. As my knowledge and understanding increased, I tried to "grow" the initial volumes, which caused new LVM thinpool issues, so I went through manually and made the storage for oVirt usage. I originally did just that with the separate addresses, and I want to say it worked until I "reset" some of the Gluster bricks in the interface; when I did that, it changed the addresses from the InfiniBand IPs to the IPv4 IPs, which have 1/10th the bandwidth.
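For what it's worth, the hosts-file stop-gap described above roughly corresponds to entries like these in /etc/hosts on each node (addresses and hostnames are placeholders):

    # force the peer hostnames to resolve to the InfiniBand addresses
    10.10.10.1   node1.example.com
    10.10.10.2   node2.example.com
    10.10.10.3   node3.example.com

The drawback, as noted, is that this pushes all traffic to those names over the InfiniBand link instead of cleanly separating Gluster replication from management and VM traffic, which dedicated gluster-network peer names would do.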
Participants (2): michael@wanderingmad.com, Sahina Bose