up and running with ovirt 4.2 and gluster

Hi, I have my oVirt 4.2 HCI cluster up and running, but I have a probably stupid question. During the setup, and in the documents, it defaults to setting up an engine datastore/volume, a VMstore/volume and a data/volume. What is the Data volume for? I am planning on running VMs on this, so I would assume I need VMstore and that's what I will grow on my system. In previous installs you had the opportunity to create the ISO domain, but not now. I can see how to create one in the engine UI so that's fine, but I am just wondering what people use the Data volume for, as an example.

Thanks,
Bill

There is no difference. I think those names were carried over from previous generations when you had to have an ISO domain for storing ISOs. Now you can store ISOs & VMs in the same volume.

Why would they be set up by default via the cockpit if they are no longer needed?

I agree, that step alone is very confusing:
- vmstore is/was "export" and is no longer needed
- ISO domains are no longer needed
- The only domains you need in the HCI setup are :/data and :/engine

On Sat, Jul 28, 2018 at 7:48 PM, <william.dossett@gmail.com> wrote:
Hi,
I have my ovirt 4.2 hci cluster up and running, but I have a probably stupid question.
During the setup and in the documents it defaults to setting up the engine datastore/volume, a VMstore/volume and a data/volume…
Engine volume = the Hosted Engine storage domain
vmstore volume = the storage domain hosting the OS disks for the VMs
data volume = the storage domain hosting the data disks of the VMs

This was designed such that when setting up for disaster recovery, you need to set up remote backup only for the data disks. Please let us know / contribute to improving the documentation if this was not clear.
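For reference, the three storage domains above map to three replica-3 Gluster volumes, one per domain. A minimal sketch of the equivalent gluster CLI, with hostnames and brick paths that are only illustrative assumptions (the cockpit wizard does this for you):

    # Create one replica-3 volume per storage domain; bricks are assumed to
    # already be formatted and mounted under /gluster_bricks/<vol> on each host.
    for vol in engine vmstore data; do
        gluster volume create "$vol" replica 3 \
            host0:/gluster_bricks/$vol/$vol \
            host1:/gluster_bricks/$vol/$vol \
            host2:/gluster_bricks/$vol/$vol
        gluster volume start "$vol"
    done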

Hello,

This is a bit off-topic, but it still applies to the general HCI cockpit config and its, for lack of a better word, minimal documentation. Here is what I'm struggling to understand re: brick/volume configuration; I hope someone can point me in the right direction.

So as I mentioned before, I have three physical nodes. Each of these has a raided boot drive for the oVirt install and two additional 2TB SSD disks in JBOD. The two SSDs are what I intend to use for GlusterFS and are represented on each physical machine as /dev/sda and /dev/sdb.

My first attached screenshot, "gluster-default", shows the default cockpit gdeploy brick configuration screen. Here you see that I am selecting JBOD. The other two options, for stripe size and disk count, from my understanding of the documentation do not apply to JBOD, and when JBOD is selected these values are ignored.

Now the confusing part, below under the "brick configuration" section. You will see the default here is to show the three volumes (configured on the previous step of the wizard) with a default size set and the device set as "sdb". If I were to leave this config in place, then only one single disk in each of my physical machines, i.e. /dev/sdb, would be used in the gluster configuration (obviously not what I want). So to make use of both disks in each machine, at least one of these has to have the /dev/sda device specified. Let's say I set the bricks like this:

engine - sda - 100 GB
data - sdb - 1800 GB
vmstore - sda - 500 GB

I believe this would limit my data volume to the /dev/sdb device on each of the three physical machines, but it limits me to the size of that physical device (2TB). Does this mean that all VM data interaction only happens on one single disk in each node, i.e. /dev/sdb in host0, 1 and 2? I have 4TB of usable space between the two JBOD drives in each host; what if I wanted my data volume to be 3.5TB, is that possible?

I want to make sure that all 6 drives in the three nodes are being used, as I would think that more drives = more I/O. I don't want my data volume to only be reading and writing to three SSDs when I have 6 available. If I use one of my 2TB drives for data and the other 2TB in each host is used only for engine and vmstore, that would seem to me to be a big waste of drive space and potential I/O.

If I'm understanding the above correctly and I cannot specify any more than 2TB for the data brick, then is it possible for me to create two data volumes on the previous step and point them to the same mount? That way I'd split VM storage across two separate volumes, some on the /dev/sda drive and some on /dev/sdb (similar to my second screenshot, titled "gluster-modified"). That might work, but it just feels wrong.

What if the JBOD configuration were different? What if, instead of having 2 JBOD disks per node, I had 4 or even 8 individual disks in each host: how would you even define all the available devices for only three volumes? You would have 8 devices but only three volumes you can specify device locations for. The configuration screens in cockpit gdeploy would make sense in a RAID environment where all of the disks are represented by one device location, but it doesn't make much sense in a JBOD configuration with multiple independent devices.

Any recommendations or guidance in regards to how I should best lay out the volumes and bricks for my use case would be greatly appreciated.

Thanks!
James
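To make the "brick configuration" step more concrete, here is a rough per-host sketch of what assigning engine/vmstore to sda and data to sdb amounts to underneath. The sizes, VG/LV names and mount points are assumptions only, and the actual gdeploy-driven setup typically layers LVM thin pools on top, which is omitted here:

    # Carve one brick per requested volume out of the two JBOD disks on a host.
    pvcreate /dev/sda /dev/sdb
    vgcreate gluster_vg_sda /dev/sda
    vgcreate gluster_vg_sdb /dev/sdb

    lvcreate -L 100G  -n gluster_lv_engine  gluster_vg_sda   # engine brick on sda
    lvcreate -L 500G  -n gluster_lv_vmstore gluster_vg_sda   # vmstore brick on sda
    lvcreate -L 1800G -n gluster_lv_data    gluster_vg_sdb   # data brick on sdb

    mkfs.xfs -i size=512 /dev/gluster_vg_sda/gluster_lv_engine
    mkfs.xfs -i size=512 /dev/gluster_vg_sda/gluster_lv_vmstore
    mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_data

    mkdir -p /gluster_bricks/engine /gluster_bricks/vmstore /gluster_bricks/data
    mount /dev/gluster_vg_sda/gluster_lv_engine  /gluster_bricks/engine
    mount /dev/gluster_vg_sda/gluster_lv_vmstore /gluster_bricks/vmstore
    mount /dev/gluster_vg_sdb/gluster_lv_data    /gluster_bricks/data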

Hi James... I'm in sort of the same position... I have 6 JBOD drives in each node. I am unsure if I am going to wind up with 6 data stores. I have 5 nodes and have other problems related to when I add those nodes, further complicating the gluster issue. Hopefully someone can help clear this up, as it's the last hurdle I face before starting to actually put workloads on this and connecting it into our ManageIQ environment.

It's nice to know someone else out there has these questions as well. What I'd really like confirmation on is basically this: in a JBOD oVirt HCI configuration with multiple disk devices per host node, is it possible to have just one data volume, or is it necessary to create multiple data volumes (one or more per physical disk you want to store VM data on)? Meaning, if I had 4 JBOD disks in each node and I wanted VM data to be stored and accessed on all 4, would I need to create 4 different data volumes (one per disk)? And if so, would they have to have unique names and brick dirs/mount points?

- James

The guideline is to create 1 brick per device, and this is what the Cockpit UI implements. When you group together all 4 JBOD disks to form 1 brick, you're essentially setting up 1 large brick without any hardware RAID protection for the disks. Any one disk failure would cause the entire brick to be unavailable and would require healing the whole brick. So if you had 4 x 1TB disks, you would end up creating a 4TB brick which needs healing, not the best of scenarios.

Why do you need to create 1 data storage domain instead of multiple?
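As an aside on the "one brick per device" guideline: if a single data volume spanning every disk is wanted, the usual Gluster approach is a distributed-replicate layout with one brick per disk rather than one large brick. This is a sketch only, with assumed host and brick names, and it is not what the cockpit wizard generates:

    # 6 bricks (one per disk, 2 disks per host, 3 hosts) form a 2 x 3
    # distributed-replicate volume; each group of 3 bricks is one replica set.
    gluster volume create data replica 3 \
        host0:/gluster_bricks/data_sda/data \
        host1:/gluster_bricks/data_sda/data \
        host2:/gluster_bricks/data_sda/data \
        host0:/gluster_bricks/data_sdb/data \
        host1:/gluster_bricks/data_sdb/data \
        host2:/gluster_bricks/data_sdb/data
    gluster volume start data

With this layout, losing one disk means healing only that one brick, which keeps the failure domain per device as described above.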

Hello,

Thanks for the feedback. I don't "need" to create only one data domain; I'm more or less trying to confirm whether it is proper to configure multiple data domains or if I was misunderstanding the setup procedure.

So if I am correct, you are saying that if I have 2 JBOD storage devices in each host, best practice is for each of those to have its own data volume? Like this, for example:

ENGINE = SDA (device 1) = 100 GB
DATA = SDB (device 2) = 2000 GB
VMSTORE = SDA (device 1) = 500 GB
DATA2 = SDA (device 1) = 1100 GB

That would allocate all the space available on both of my drives (2 x 2TB) and create two data volumes: a 2TB one on the SDB device and a 1.1TB one on the SDA device. The SDA device would also be used for the engine and vmstore volumes. Is this the correct configuration for two JBOD devices per host?

Looks good. I'm assuming you're using VMStore to host ISOs and not VM OS disks?

That was the intention, but I'm unclear as to which is the better approach. I intended to host all VM-related data, including OS disks, on the data volumes (split between data on sdb and data2 on sda), but now it sounds like putting VM OS disks on a separate volume could be a better approach after reading this e-mail chain, as I do intend to set up geo-replication disaster recovery at some point and it might be nice to have the flexibility to only geo-replicate the VM data and not the OS disks.

If I were to set it up the other way, i.e. using vmstore to host OS disks, would it be recommended to create a VMSTORE on each device, like this for example?

ENGINE = SDA (device 1) = 100 GB
DATA = SDB (device 2) = 1200 GB
DATA2 = SDA (device 1) = 1100 GB
VMSTORE = SDA (device 1) = 500 GB
VMSTORE2 = SDB (device 2) = 500 GB

500 GB may be a bit overkill here, but you get the point.
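On the geo-replication point: because each storage domain is its own Gluster volume, disaster recovery can be set up per volume. Here is a hedged sketch of geo-replicating only the data volume to a remote site; the remote hostname and slave volume name are assumptions, and prerequisites such as passwordless SSH and a pre-created slave volume are omitted:

    # One-time key generation on the primary cluster, then create and start a
    # geo-replication session for the data volume only; engine/vmstore stay local.
    gluster system:: execute gsec_create
    gluster volume geo-replication data drsite.example.com::data_dr create push-pem
    gluster volume geo-replication data drsite.example.com::data_dr start
    gluster volume geo-replication data drsite.example.com::data_dr status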

I would be happy to work on documentation... not quite sure how I start doing that though. The previous post, and the others detailing that the only datastores actually needed are data and engine, was helpful. Having an additional datastore for ISOs that would perhaps not need to be (frequently) backed up also makes sense if you had a great deal of them. I am still not that familiar with the documentation; I used the "Up and Running with oVirt 4.2" blog post to get this set up. Thanks to all that replied.

Thanks -- we have a documentation update in progress. I'm looking into the status of this one specifically and I'll get back to you. I would *love* to have way more community contributions to our documentation, so thank you very much for offering :)
--
Greg Sheremeta
Senior Software Engineer - Team Lead - RHV UX
Red Hat NA
gshereme@redhat.com | IRC: gshereme

Sahina, So "best practice" is for every VM to have 2 disks: 1 for the operating system & 1 for the data?

This is to ease setup of data backups: VMs can be recreated from templates/ISOs, and only the application data needs to be backed up.

Ok, thanks for the heads up. I've been setting up my VMs with a single disk. Time to make that change!

I assume the vmstore (instead of data volume) would be the "preferred" destination for storing ISOs?

@jayme @william.dossett When you say "JBOD", are these hosts with xx number of disks or hosts with a physically attached JBOD?

Hello,

In my case I'm referring to hosts with internal disks. Example of my hardware config:

host0:
2 x 250GB SSDs in a RAID 1 mirror for the OS/boot disk
2 x 2TB SSDs in JBOD (passthrough), represented as separate disks in the OS, e.g. /dev/sda and /dev/sdb

host1 and host2 have identical configurations.

- Jayme

Correct, no RAID on the two 2TB SSDs. I plan on using a replica three HCI setup with no arbiter, so each host will have a copy of the data. On Mon, Jul 30, 2018, 11:39 AM femi adegoke, <ovirt@fateknollogee.com> wrote:
Thanks Jayme for replying.
In your case, there is no RAID on your 2 x 2TB SSDs?

Same here… I am using Dell R710s with 8 disks, 2 x 1TB in RAID 0 and then 6 virtual disks, one per physical disk. I think you have to do it that way for the Dell PERC controller to present the disks to the OS, but I haven't actually tried not doing that so I'm not sure... anyway, this works.

This has been very informative. James has only 3 nodes, but I have 5… I have another posting about what happens when I added the two additional nodes, as all the disks seem to be locked and multipathed and I can't create bricks on them. But once I get past that problem, if I have 5 nodes and I am going to create replicated datastores with 3 disks, do I just use 3 nodes for each datastore? Is there any best practice for creating datastores on any 3 nodes? Perhaps an example:

Datastore 1 - Node1, Node2, Node3 /dev/sdb
Datastore 2 - Node2, Node3 /dev/sdc, Node4 /dev/sdb
Datastore 3 - Node3 /dev/sdd, Node4 /dev/sdc, Node5 /dev/sda

Does that make sense? Or is there a better way? Thanks to everyone for helping me on this, much appreciated.

Bill
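On the 5-node question: a replica 3 volume only needs bricks from three hosts, so each datastore can be built from any three nodes that have a free disk. A sketch of the "Datastore 2" example above, with assumed brick paths (each disk formatted and mounted as a brick beforehand):

    # Bricks on node2 and node3 (/dev/sdc) and node4 (/dev/sdb), one per host.
    gluster volume create datastore2 replica 3 \
        node2:/gluster_bricks/datastore2/brick \
        node3:/gluster_bricks/datastore2/brick \
        node4:/gluster_bricks/datastore2/brick
    gluster volume start datastore2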

Yes, in the past I've used that same Dell server (for Windows Storage Spaces). That Perc controller only allows the RAID 0 way (1 disk per RAID 0). 2 x 1TB in RAID 0...or was that a typo?

Typo ☹ 2 x 1 TB for boot in RAID 1, and then the rest as stated.
participants (7)
- femi adegoke
- Greg Sheremeta
- Jayme
- Mike
- Sahina Bose
- William Dossett
- william.dossett@gmail.com