On February 15, 2020 12:25:25 AM GMT+02:00, eevans(a)digitaldatatechs.com wrote:
The SDM is on an NFS share on a local disk instead of the iSCSI LUNs. I
couldn't care less which domain is the SDM as long as my VM disks are on
the iSCSI LUNs. That's all I want to accomplish.
Reading the documentation, I understood that the SDM would hold all the
disks, VM info, etc. That's why I wanted to change it.
As for Gluster, I'm still working on that part. It's there, but there are
no bricks until I add the third host; I'm in the process of doing that
now. However, I have no mount points to specify which disks I want to use
for the Gluster bricks. Still trying to figure that out.
I am doing a reply-all on the email since I started this on the users
list page. Is that OK, or do I need to email someone specifically?
I'm an oVirt noob. 😊
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Darrell Budic <budic(a)onholyground.com>
Sent: Friday, February 14, 2020 4:58 PM
To: eevans(a)digitaldatatechs.com
Cc: users <users(a)ovirt.org>
Subject: [ovirt-users] Re: glusterfs
Hi Eric-
Glad you got through that part. I don't use iSCSI-backed volumes for my
Gluster storage, so I don't have much advice for you there. I've cc'd the
oVirt users list back in; someone there may be able to help you further.
It's good practice to reply to the list and to specific people when
conversing here, so you might want to watch to be sure you don't drop
the cc: in the future.
Re: the storage master, it's not related to where the VM disks are
stored. Once you manage to get a new storage domain set up, you'll be
able to create disks on whichever domain you want, and that is how you
determine which storage a given VM disk is hooked up to. You can even
have a VM with disks on multiple storage domains, which can be good for
high-performance needs. The SDM may even move around if a domain becomes
unavailable. You may want to check the list archives for discussion on
this; I seem to recall some in the past. You should also confirm where
the disks for your HA engine are located; they may be on your local
RAID disk instead of the iSCSI disks if the SDM is on a local disk…
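If you want to check that from a script rather than the Admin Portal, a
minimal sketch with the oVirt Python SDK (ovirtsdk4) is below; the engine
URL, credentials, data center name and the 'HostedEngine' VM name are
placeholders you'd adjust for your setup:

import ovirtsdk4 as sdk

# Connect to the engine API (placeholder URL/credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
system = connection.system_service()

# List the domains attached to the data center and flag the current master.
dcs_service = system.data_centers_service()
dc = dcs_service.list(search='name=Default')[0]
for sd in dcs_service.data_center_service(dc.id).storage_domains_service().list():
    print(sd.name, '(master)' if sd.master else '', sd.status)

# Show which storage domain(s) hold the hosted engine VM's disks.
sds_service = system.storage_domains_service()
vms_service = system.vms_service()
engine_vm = vms_service.list(search='name=HostedEngine')[0]
for att in vms_service.vm_service(engine_vm.id).disk_attachments_service().list():
    disk = system.disks_service().disk_service(att.disk.id).get()
    for sd_ref in disk.storage_domains or []:
        print(disk.alias, '->', sds_service.storage_domain_service(sd_ref.id).get().name)

connection.close()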
Good luck,
-Darrell
> On Feb 14, 2020, at 3:03 PM, <eevans(a)digitaldatatechs.com>
<eevans(a)digitaldatatechs.com> wrote:
>
> I enabled Gluster and reinstalled, and all went well. I set it for
> distributed replication, so I need 3 nodes. I migrated the rest of my
> VMs, and I am installing the third node shortly.
> My biggest concern is getting the storage master onto the LUN it was
> previously set to. I get snapshots on it, so I can recover from
> disaster more easily.
> I need it to persistently be on the LUN I designate.
> Also, I want the LUNs to be the Gluster replication volumes, but there
> is no mount point in fstab on the machines.
> I am new to Gluster as well, so please be patient with me.
>
> Eric Evans
> Digital Data Services LLC.
> 304.660.9080
>
>
> -----Original Message-----
> From: Darrell Budic <budic(a)onholyground.com>
> Sent: Friday, February 14, 2020 2:58 PM
> To: eevans(a)digitaldatatechs.com
> Subject: Re: [ovirt-users] Re: glusterfs
>
> You don't even need to clean everything out, unless you need to
> destroy your old storage to create the new Gluster backing bricks.
> oVirt has a feature to migrate data between storage domains that you
> can use to move an existing VM disk to a different storage domain.
> Note that "reinstall" is an option on the Installation menu for hosts;
> you do not need to remove the host first. It will pretty much just add
> the vdsm-gluster components in this case, so it's safe to use. Just put
> the host in maintenance first.
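> That disk move can also be driven from a script. A minimal sketch with
> the oVirt Python SDK (ovirtsdk4); the disk alias 'myvm_disk1' and the
> target domain name 'iscsi-data' are placeholders:
>
> import ovirtsdk4 as sdk
> import ovirtsdk4.types as types
>
> connection = sdk.Connection(
>     url='https://engine.example.com/ovirt-engine/api',
>     username='admin@internal',
>     password='secret',
>     ca_file='ca.pem',
> )
> disks_service = connection.system_service().disks_service()
>
> # Look up the disk by its alias (placeholder) and move it to the target
> # storage domain; the engine copies the data and keeps the attachment.
> disk = disks_service.list(search='name=myvm_disk1')[0]
> disks_service.disk_service(disk.id).move(
>     storage_domain=types.StorageDomain(name='iscsi-data'),
> )
> connection.close()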
>
> You can certainly start fresh in the manner you describe if you want.
>
>> On Feb 14, 2020, at 11:56 AM, <eevans(a)digitaldatatechs.com>
<eevans(a)digitaldatatechs.com> wrote:
>>
>> I have already imported a few VMs to see how the import process
>> would go. So, do I remove the VMs, the current storage domains, and
>> the hosts, then add Gluster on the main oVirt node, then add the
>> hosts back and the storage back, and re-import the VMs?
>> I want to make sure before I get started. This is my first go-around
>> with oVirt, and I want to be sure before I change anything.
>>
>> Eric Evans
>> Digital Data Services LLC.
>> 304.660.9080
>>
>>
>> -----Original Message-----
>> From: Darrell Budic <budic(a)onholyground.com>
>> Sent: Friday, February 14, 2020 11:54 AM
>> To: eevans(a)digitaldatatechs.com
>> Cc: users(a)ovirt.org
>> Subject: [ovirt-users] Re: glusterfs
>>
>> You can add it to a running oVirt cluster; it just isn't as
>> automatic. First you need to enable Gluster at the cluster settings
>> level for a new or existing cluster. Then either install/reinstall
>> your nodes, or install Gluster manually and add the vdsm-gluster
>> packages. You can create a standalone Gluster server set this way
>> (you don't need any vdsm packages), but then you have to create
>> volumes manually. Once you've got that done, you can create bricks
>> and volumes in the GUI or by hand, and then add a new storage domain
>> and start using it. There may be Ansible for some of this, but I
>> haven't done it in a while and am not sure what's available there.
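>> Once the Gluster volume exists, that last step (adding the storage
>> domain) can also be scripted. A rough sketch with the oVirt Python SDK
>> (ovirtsdk4); the domain name, host, volume path and server address are
>> all placeholders:
>>
>> import ovirtsdk4 as sdk
>> import ovirtsdk4.types as types
>>
>> connection = sdk.Connection(
>>     url='https://engine.example.com/ovirt-engine/api',
>>     username='admin@internal',
>>     password='secret',
>>     ca_file='ca.pem',
>> )
>> sds_service = connection.system_service().storage_domains_service()
>>
>> # Register an existing Gluster volume 'gv0' (served by a placeholder
>> # server) as a new data domain, using one active host in the cluster.
>> sds_service.add(
>>     types.StorageDomain(
>>         name='gluster-data',
>>         type=types.StorageDomainType.DATA,
>>         host=types.Host(name='host1'),
>>         storage=types.HostStorage(
>>             type=types.StorageType.GLUSTERFS,
>>             address='gluster1.example.com',
>>             path='/gv0',
>>             vfs_type='glusterfs',
>>         ),
>>     ),
>> )
>> # The new domain still needs to be attached to the data center before
>> # it becomes active (Storage -> Domains -> Attach, or via the SDK).
>> connection.close()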
>>
>> -Darrell
>>
>>> On Feb 14, 2020, at 8:22 AM, eevans(a)digitaldatatechs.com wrote:
>>>
>>> I currently have 3 nodes: one is the engine node and 2 are CentOS 7
>>> hosts, and I plan to add another CentOS 7 KVM host once I get all the
>>> VMs migrated. I have SAN storage plus the RAID 5 internal disks. All
>>> the OSes are installed on mirrored SAS RAID 1. I want to use the RAID
>>> 5 virtual disks for the export and ISO domains, and use the 4 TB
>>> iSCSI for the VMs to run on. The iSCSI has hourly snapshots that are
>>> overwritten weekly.
>>> So here is my question: I want to add GlusterFS, but from further
>>> reading it seems that should have been done in the initial setup. I
>>> am not new to Linux, but I am new to oVirt, and I need to know
>>> whether I can implement GlusterFS now or whether it's a
>>> start-from-scratch situation. I really don't want to start over, but
>>> I would like the redundancy.
>>> Any advice is appreciated.
>>> Eric
>>
>
>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DN47OTYUUCO...
The master domain is the primary domain, which means that after a complete outage (of all
nodes) you won't be able to do anything (the DC will be down) until you bring this
storage back up.
The only way to change it is to put the current master domain into maintenance (which means
the engine will unmount it from all hosts), and a new domain, picked at random, will take the
'master' title.
Note: I tried to make my HostedEngine's storage domain the master by setting all the others
to maintenance - epic fail (all domains were in maintenance and I could not activate any
of them). Don't try that! Actually, don't try to change the master domain at all.
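For reference, you can see which attached domain currently holds the
'master' role with the oVirt Python SDK (ovirtsdk4). This is only a
read-only sketch; the engine URL, credentials and data center name are
placeholders:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=Default')[0]
attached_sds = dcs_service.data_center_service(dc.id).storage_domains_service()

for sd in attached_sds.list():
    if sd.master:
        # Deactivating this domain, e.g. with
        #   attached_sds.storage_domain_service(sd.id).deactivate()
        # is what triggers the re-election described above; given the
        # warning, it's best left alone.
        print('Current master storage domain:', sd.name)

connection.close()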
Best Regards,
Strahil Nikolov