You first add the host (assigning it to a Datacenter and Cluster).
Then you define the storage domain and provide its details and mount options (for example:
'backup-volfile-servers=host2:host3').
During the host activation phase all storage domains are mounted on the host, and
when the host is put into maintenance they are unmounted.
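As a sketch of what the host ends up doing (hostnames, volume name and mount path below are made up), the GlusterFS storage domain mount looks roughly like:

```shell
# Illustrative only: how a GlusterFS storage domain might be mounted on
# host activation. The backup-volfile-servers option lets the client fetch
# the volfile from host2 or host3 when host1 is unreachable at mount time.
mount -t glusterfs \
  -o backup-volfile-servers=host2:host3 \
  host1:/data /rhev/data-center/mnt/glusterSD/host1:_data
```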
Best Regards,
Strahil Nikolov
On 16 July 2020, 1:21:33 GMT+03:00, Philip Brown <pbrown(a)medata.com> wrote:
>Awesome thats good news.
>
>So... does that happen automatically?
>
>i.e.: install the ovirt "node" image, then tell the ovirt hosted engine "go
>add that node to the cluster", and it automagically gets done?
>
>Kinda sounds likely, but I'd like to set my expectations appropriately
>
>
>----- Original Message -----
>From: "Jayme" <jaymef(a)gmail.com>
>To: "Philip Brown" <pbrown(a)medata.com>
>Cc: "Strahil Nikolov" <hunter86_bg(a)yahoo.com>, "users"
><users(a)ovirt.org>
>Sent: Wednesday, July 15, 2020 3:03:39 PM
>Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
>Your other hosts that aren’t participating in gluster storage would
>just
>mount the gluster storage domains.
>
>On Wed, Jul 15, 2020 at 6:44 PM Philip Brown <pbrown(a)medata.com> wrote:
>
>> Hmm...
>>
>>
>> Are you then saying that, YES, all host nodes need to be able to talk to
>> the glusterfs filesystem?
>>
>>
>> on a related note, I'd like to have as few nodes actually holding
>> glusterfs data as possible, since I want that data on SSD.
>> Rather than multiple "replication set" hosts and one arbiter... is it
>> instead possible to have only 2 replication set hosts, and multiple
>> (arbitrarily many) arbiter nodes?
>>
>>
>> ----- Original Message -----
>> From: "Strahil Nikolov" <hunter86_bg(a)yahoo.com>
>> To: "users" <users(a)ovirt.org>, "Philip Brown" <pbrown(a)medata.com>
>> Sent: Wednesday, July 15, 2020 1:59:40 PM
>> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>>
>> You can use a distributed replicated volume of type 'replica 3 arbiter 1'.
>> For example, NodeA and NodeB contain replica set 1 with NodeC as their
>> arbiter, and NodeD and NodeE contain replica set 2, also with NodeC as
>> their arbiter.
>>
>> In such a case you have only 2 copies of a single shard, but you are fully
>> "supported" from the gluster perspective.
>> Also, all hosts can have an external storage like your NAS.
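>> A hypothetical sketch of that layout (volume name and brick paths are
>> made up; with 'replica 3 arbiter 1', every third brick listed becomes
>> an arbiter):

```shell
# Sketch only: a distributed-replicated volume with two replica subvolumes.
# NodeA/NodeB and NodeD/NodeE hold the data bricks; NodeC holds one arbiter
# brick per subvolume (metadata only, so it needs very little SSD space).
gluster volume create data replica 3 arbiter 1 \
  nodeA:/gluster/brick1 nodeB:/gluster/brick1 nodeC:/gluster/arb1 \
  nodeD:/gluster/brick2 nodeE:/gluster/brick2 nodeC:/gluster/arb2
```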
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 15 July 2020, 21:11:34 GMT+03:00, Philip Brown <pbrown(a)medata.com>
>> wrote:
>> >arg. when I said "add 2 more nodes that aren't part of the cluster", I
>> >meant "part of the glusterfs cluster".
>> >
>> >or at minimum, maybe some kind of client-only setup, if they need
>> >access?
>> >
>> >
>> >----- Original Message -----
>> >From: "Philip Brown" <pbrown(a)medata.com>
>> >To: "users" <users(a)ovirt.org>
>> >Sent: Wednesday, July 15, 2020 10:37:48 AM
>> >Subject: [ovirt-users] mixed hyperconverged?
>> >
>> >I'm thinking of doing an SSD-based hyperconverged setup (for 4.3), but
>> >am wondering about certain design issues.
>> >
>> >It seems like the optimal number of nodes for glusterfs is 3,
>> >but I want 5 host nodes, not 3,
>> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
>> >Is it possible to have 3 nodes be the hyperconverged stuff... but then
>> >add in 2 "regular" nodes, that don't store anything and aren't part of
>> >the cluster?
>> >
>> >Is it required to be part of the gluster cluster to also be part of
>> >the ovirt cluster, if that's where the hosted-engine lives?
>> >Or can I just have the hosted engine be switchable between the 3 nodes,
>> >and the other 2 be VM-only hosts?
>> >
>> >Any recommendations here?
>> >
>> >I don't want 5-way replication going on, nor do I want to have to pay
>> >for large SSDs on all my host nodes.
>> >(I'm planning to run them with the oVirt 4.3 node image.)
>> >
>> >
>> >
>> >--
>> >Philip Brown | Sr. Linux System Administrator | Medata, Inc.
>> >5 Peters Canyon Rd Suite 250
>> >Irvine CA 92606
>> >Office 714.918.1310 | Fax 714.918.1325
>> >pbrown(a)medata.com | www.medata.com
>> >_______________________________________________
>> >Users mailing list -- users(a)ovirt.org
>> >To unsubscribe send an email to users-leave(a)ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/