Thanks Jason. I think that's exactly what I did on my second attempt.
Restarting VDSM must've been the kick in the pants that it needed. And I
believe the key is to let oVirt do the gluster peer probing.
I think this is what I did:
-create gluster engine volume on first host only
-install hosted engine on first host
-hosted-engine --deploy on remaining hosts
-enable gluster service for the cluster in the UI
-only the first host will be listed, attempt import, it will fail,
cancel out
-gluster peer status now shows the peers connected, but I also manually
probed all IPs and FQDNs for good measure
-log out of UI, set global maintenance, power down engine VM, stop
vdsmd, libvirtd, ovirt-ha-agent and ovirt-ha-broker
-force sanlock shutdown and unmount engine NFS on all hosts
-add-brick replica for engine volume, then kickstart replication (the
part I forgot to do)
-add meta volume, set up CTDB, edit /etc/hosts, etc.
-start vdsmd, libvirtd, ovirt-ha-agent & ovirt-ha-broker
-set global maintenance none and wait for engine to start
---mine wouldn't start because gluster reported the filesystem as
read-only, probably because of the failed replication. Heal full didn't
do anything. I tried replace-brick commit force and that seems to have
brought it back online, but I'm still not sure whether replication of
the engine volume is actually happening.
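The stop/recover/restart steps above would look roughly like the shell
session below. This is only a sketch under assumptions: CentOS 6-era SysV
service names, hosts called node1-node3, a brick path of
/gluster/engine/brick, and an NFS mount point that are all hypothetical,
so adjust everything to your own layout.

```shell
# Put hosted-engine in global maintenance, then stop the stack on each host:
hosted-engine --set-maintenance --mode=global
service ovirt-ha-agent stop
service ovirt-ha-broker stop
service vdsmd stop
service libvirtd stop

# Force sanlock to release its leases, then unmount the engine NFS export
# (mount point below is an assumption -- check `mount` for the real one):
sanlock shutdown -f 1
umount /rhev/data-center/mnt/node1:_engine

# Grow the engine volume from 1 brick to a 3-way replica, then start a
# full heal so the new bricks actually receive data (the "kickstart" step):
gluster volume add-brick engine replica 3 \
    node2:/gluster/engine/brick node3:/gluster/engine/brick
gluster volume heal engine full

# Bring the services back and let the engine VM start:
service libvirtd start
service vdsmd start
service ovirt-ha-broker start
service ovirt-ha-agent start
hosted-engine --set-maintenance --mode=none
```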
Finally got the engine started, logged into the UI and found all the
gluster stuff magically working. Then I added data and iso volumes
(within the UI), set up the domains, uploaded some ISOs and added a
couple VMs.
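For reference, the CLI equivalent of adding those volumes in the UI would
be something like the sketch below. The volume name, brick paths, 4-way
replica count, and uid/gid values are assumptions for illustration, not
what the UI necessarily did here.

```shell
# Create a 4-way replicated data volume across the four hosts
# (brick paths are hypothetical):
gluster volume create data replica 4 \
    node1:/gluster/data/brick node2:/gluster/data/brick \
    node3:/gluster/data/brick node4:/gluster/data/brick

# Apply the stock virt tuning group and hand ownership to vdsm:kvm (36:36),
# which oVirt storage domains expect:
gluster volume set data group virt
gluster volume set data storage.owner-uid 36
gluster volume set data storage.owner-gid 36

gluster volume start data
```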
That's where I'm at now. I'm sure I did lots of things wrong and/or
backwards. :) I don't *need* to manage gluster from the UI, but it's
easier to demonstrate to the less technical folks around here, you know,
the ones that expect a GUI for everything.
On 2/10/2015 10:30 AM, Jason Brooks wrote:
----- Original Message -----
> From: "George Skorup" <george(a)mwcomm.com>
> To: users(a)ovirt.org
> Sent: Friday, February 6, 2015 11:56:17 PM
> Subject: Re: [ovirt-users] New user intro & some questions
>
> On 1/30/2015 10:11 AM, Jason Brooks wrote:
>> Hi George --
>>
>> I typically host my ISO domain from gluster as well, rather than from
>> the NFS export the installer offers to set up.
>>
>> I've been able to force sanlock to release with:
>>
>> sanlock shutdown -f 1
> So I wiped my test cluster. Changed some partitioning, network topology,
> etc. My plan is to run the ISO domain on a gluster volume this time also.
>
> Here's where I'm at. Gluster volume 'engine' is created and started on
> node1. The other three nodes have no volumes yet. In fact, no gluster
> peering yet.
>
> The engine is installed and running on node1 with gluster's NFS serving
> the engine volume. I then ran hosted-engine --deploy on the other three
> nodes. All four nodes are reported 'up' in the engine UI.
>
> Now I'm stuck. The problem I ran into last time was that, after enabling
> the gluster service on the Default cluster, the gluster peers were already
> established, which put a roadblock in front of importing the gluster hosts.
>
> Like last time, the Default cluster says: "Some new hosts are detected
> in the cluster. You can Import them to engine or Detach them from the
> cluster." And if I click the import URL, the only host I see in the list
> is the first one. Last time, all four were there, but as I said, the
> gluster peers were already established and the import failed due to
> "failed to probe" or something like that.
I just tried to recreate this, on a single host, running CentOS 6,
hosting the engine through hosted-engine, and serving up its own
gluster storage. I enabled gluster in the cluster, and when I tried
to import the host, it wanted to use the default libvirtd 192.168.122.1
IP address for the host.
I canceled out of that and just restarted the vdsm service on my machine,
and after the service restarted, my data and engine volumes were there
in the web admin ui as expected.
So, try restarting vdsm.
Jason
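On the CentOS 6 SysV init scripts used in this thread, the restart Jason
suggests would presumably be:

```shell
# Restart VDSM so it re-reports the host's gluster peers to the engine:
service vdsmd restart

# Confirm it came back up before re-trying the import in the web admin UI:
service vdsmd status
```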
> Is there a way to make this work? I'm fairly confused at this point. I
> would like to be able to manage at least my virt store gluster volume(s)
> from the engine UI.
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>