[ovirt-users] 3.6 issues when using an existing gluster volume for the hosted-engine

Roy Golan rgolan at redhat.com
Thu Aug 13 02:01:18 EDT 2015


On 08/12/2015 12:38 AM, Michael DePaulo wrote:
> Any Advice?
>
> #1 and #2 seem to be the serious issues that no one else has reported yet.
>
> On Wed, Aug 5, 2015 at 10:34 AM, Michael DePaulo <mikedep333 at gmail.com> wrote:
>> Hi,
>>
>> I am attempting to set up oVirt 3.6 (currently beta 1) in a test
>> environment, using an existing (but empty) gluster volume for the
>> hosted engine.
>>
>> I am currently experiencing 4 issues after having deployed the
>> hosted-engine appliance VM on only 1 oVirt host:
>> 1. The gluster volume is not added as a storage domain by the end of
>> the hosted-engine-setup.
>> 2. When I try to import the gluster volume as a storage domain, I get
>> this error:
>> Error while executing action Attach Storage Domain: AcquireHostIdFailure
>> (Note: in the logs linked below, I had not yet attempted the import.)
>> 3. The hosted engine VM is not listed, although the host is listed and
>> its number of VMs is listed as "1". (I have seen recent emails/bug
>> reports about what seems to be the same issue.)

That changed a bit. In order to see the VM in the engine, you must
import the storage domain (SD) of the hosted engine into a functional
data center. You should be able to do that using
ovirt-hosted-engine-setup.
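
As a quick sanity check before (and after) the import, something like
this can be run on the host (a minimal sketch using the standard
hosted-engine and sanlock CLIs; reading AcquireHostIdFailure as a
sanlock problem is my assumption):

  # confirm the HA agent can see the engine VM and its storage
  hosted-engine --vm-status

  # AcquireHostIdFailure usually points at sanlock; inspect its state
  sanlock client status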

Adding Simone

>> 4. The hosted-engine VM shut down automatically after several minutes.
>>
>> I last ran ovirt-hosted-engine-setup on galactica on 2015-08-03 at
>> around 22:47 -0400
>>
>> #4 was probably caused by the fact that during a previous attempt (on
>> 2015-08-02) to set up the hosted engine, I had left
>> /var/lib/ovirt-hosted-engine-ha/ha.conf containing
>> local_maintenance=True. That value was not changed when I ran
>> ovirt-hosted-engine-setup on 2015-08-03. Today, I ran the
>> hosted-engine command to disable maintenance, and the hosted-engine VM
>> started back up. I can access it over https, but the remaining issues
>> are still present.
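>>
>> (For reference, the disable-maintenance step looks roughly like this,
>> using the standard hosted-engine CLI; the grep is just my way of
>> double-checking the flag:)
>>
>>   # take the host out of local maintenance so the HA agent can start the VM
>>   hosted-engine --set-maintenance --mode=none
>>
>>   # verify the flag was actually cleared
>>   grep local_maintenance /var/lib/ovirt-hosted-engine-ha/ha.conf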
>>
>> ----
>> More Details:
>>
>> The 3 Gluster servers:
>> Server 1: galactica: RHEL 7.1 with Gluster 3.7.3
>> Server 2: death-star: RHEL 7.1 with Gluster 3.7.3
>> Server 3: mothership: Fedora 22 with Gluster 3.7.3
>>
>> I only intend for servers 1 and 2 to be oVirt hosts at this time.
>>
>> All servers have iptables disabled, are on the same LAN, have SELinux
>> set to enforcing, and have gluster configured with the hyper-converged
>> settings listed here:
>> http://www.ovirt.org/Features/Self_Hosted_Engine_Hyper_Converged_Gluster_Support
>>
>> I have DNS forward and reverse lookups configured correctly for all 3
>> machines plus the hosted engine.
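>>
>> (A minimal way to verify the lookups, assuming the standard
>> bind-utils "host" tool; 192.168.1.10 is a made-up address standing in
>> for the real one:)
>>
>>   # forward lookup
>>   host galactica.depaulo.org
>>
>>   # reverse lookup
>>   host 192.168.1.10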
>>
>> The existing gluster volume is "gv-virt", and it has the necessary
>> volume options set for gluster to work with oVirt. Other than the
>> .trashcan folder, there were no files/folders on it when I ran
>> ovirt-hosted-engine-setup.
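>>
>> (The options are roughly the ones from the hyper-converged page
>> linked above; this is a sketch from memory, so the exact list may
>> differ:)
>>
>>   gluster volume set gv-virt group virt
>>   gluster volume set gv-virt storage.owner-uid 36
>>   gluster volume set gv-virt storage.owner-gid 36
>>   gluster volume set gv-virt network.ping-timeout 10
>>   gluster volume set gv-virt cluster.quorum-type auto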
>>
>> I am able to mount and write to gv-virt via mount.glusterfs.
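>>
>> (i.e. something like the following, with /mnt/test as a hypothetical
>> mount point:)
>>
>>   mount -t glusterfs galactica.depaulo.org:/gv-virt /mnt/test
>>   touch /mnt/test/write-check && rm /mnt/test/write-check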
>>
>> Note that I have run ovirt-hosted-engine-setup repeatedly on servers 1
>> and 2 while trying to resolve these issues. My cleanup between runs
>> may not have been complete, as the leftover local_maintenance setting
>> shows. I have been deleting the files on the gluster volume between
>> runs.
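>>
>> (The cleanup I have been doing is roughly the following; it is my own
>> ad-hoc procedure, not an official one:)
>>
>>   # stop the HA services before touching their state
>>   systemctl stop ovirt-ha-agent ovirt-ha-broker
>>
>>   # remove leftover HA state such as ha.conf (local_maintenance lives here)
>>   rm -rf /var/lib/ovirt-hosted-engine-ha/*
>>
>>   # wipe the previous deployment's files from the gluster volume
>>   rm -rf /rhev/data-center/mnt/glusterSD/galactica.depaulo.org:_gv-virt/*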
>>
>> I have uploaded the relevant contents of the directories under
>> /var/log/, /var/lib/, and /etc/ here:
>> https://drive.google.com/folderview?id=0B2L_OrCln9ShMWprWVlpZVVocGs&usp=sharing
>>
>> I also put the `gluster volume status gv-virt` output there.
>>
>> Also, these 2 volumes are currently mounted on galactica:
>> galactica.depaulo.org:/gv-virt  200G  2.7G  198G  2%  /rhev/data-center/mnt/glusterSD/galactica.depaulo.org:_gv-virt
>> /dev/loop1                      2.0G  3.1M  1.9G  1%  /rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmp2HOqSq
>>
>> Thanks in advance,
>>
>> -Mike
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


