----- Original Message -----
From: "George Skorup" <george(a)mwcomm.com>
To: users(a)ovirt.org
Sent: Friday, February 6, 2015 11:56:17 PM
Subject: Re: [ovirt-users] New user intro & some questions
On 1/30/2015 10:11 AM, Jason Brooks wrote:
> Hi George --
>
> I typically host my ISO domain from gluster as well, rather than from
> the NFS export the installer offers to set up.
>
> I've been able to force sanlock to release with:
>
> sanlock shutdown -f 1
So I wiped my test cluster and changed some partitioning, the network
topology, etc. My plan is to run the ISO domain on a gluster volume this
time as well.
Here's where I'm at. Gluster volume 'engine' is created and started on
node1. The other three nodes have no volumes yet. In fact, no gluster
peering yet.
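For anyone following along, the volume setup on node1 was roughly this
(the brick path here is just an example, not my actual layout):

    gluster volume create engine node1:/bricks/engine
    gluster volume start engine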
The engine is installed and running on node1 with gluster's NFS serving
the engine volume. I then ran hosted-engine --deploy on the other three
nodes. All four nodes are reported 'up' in the engine UI.
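For reference, deploying the additional hosts was just the following on
each node, with the shared-storage question pointed at gluster's NFS
export of the engine volume (exact prompt wording from memory):

    hosted-engine --deploy
    # answered the storage path prompt with node1:/engine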
Now I'm stuck. The problem I ran into last time was that, after enabling
the gluster service on the Default cluster, the gluster peers were already
established, which put a roadblock in front of importing the gluster hosts.
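(Peer state can be checked with:

    gluster peer status

and in theory a stale peer can be dropped with 'gluster peer detach
<hostname>', though as far as I know that only works for a peer that has
no bricks in any volume.)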
Like last time, the Default cluster says: "Some new hosts are detected
in the cluster. You can Import them to engine or Detach them from the
cluster." And if I click the import URL, the only host I see in the list
is the first one. Last time, all four were there, but as I said, the
gluster peers were already established and the import failed due to
"failed to probe" or something like that.
I just tried to recreate this on a single host running CentOS 6, hosting
the engine through hosted-engine and serving up its own gluster storage.
I enabled gluster in the cluster, and when I tried to import the host, it
wanted to use the default libvirtd IP address, 192.168.122.1, for the host.
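That 192.168.122.1 is the address of libvirt's default NAT bridge, virbr0;
you can confirm where it comes from with, e.g.:

    ip addr show virbr0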
I canceled out of that and just restarted the vdsm service on my machine,
and after the service restarted, my data and engine volumes were there
in the web admin UI as expected.
So, try restarting vdsm.
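On CentOS 6 that should just be:

    service vdsmd restart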
Jason
Is there a way to make this work? I'm fairly confused at this point. I
would like to be able to manage at least my virt store gluster volume(s)
from the engine UI.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users