[ovirt-users] oVirt Gluster Hyperconverged problem

knarra knarra at redhat.com
Tue Sep 20 06:25:26 UTC 2016

Hello Hanson,

     Below is the procedure to replace a host with the same FQDN when the 
existing host OS has to be re-installed. If the oVirt version you are 
running is 4.0, steps 14 and 15 are not required; you can instead reinstall 
the host from the UI with the HostedEngine -> Deploy option.



    1. Move the host (host3) to maintenance in the UI.


    2. Re-install the OS, subscribe to the required channels, install the
       required packages, and prepare the bricks (if needed).


    3. Check gluster peer status from a working node to obtain the UUID of
       the host being replaced.
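The peer-status check above can be scripted. A minimal sketch, assuming the usual "gluster peer status" output format; the hostnames and UUIDs below are made-up sample data, and on a real node you would pipe the actual command instead of the sample function:

```shell
# Sample "gluster peer status" output; hostnames/UUIDs are placeholders.
sample_peer_status() {
cat <<'EOF'
Number of Peers: 2

Hostname: host2.example.com
Uuid: 8a1f6c2d-1111-2222-3333-444455556666
State: Peer in Cluster (Connected)

Hostname: host3.example.com
Uuid: 9b2e7d3c-aaaa-bbbb-cccc-ddddeeeeffff
State: Peer in Cluster (Disconnected)
EOF
}

# Pull out the Uuid line that follows host3's Hostname line; this is the
# value needed later when editing glusterd.info on the replaced host.
OLD_UUID=$(sample_peer_status | awk '/^Hostname: host3/ {found=1}
                                     found && /^Uuid:/ {print $2; exit}')
echo "$OLD_UUID"    # -> 9b2e7d3c-aaaa-bbbb-cccc-ddddeeeeffff
```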


    4. Create the brick directories by running “mkdir /rhgs/brick{1..3}”.


    5. Add the /etc/fstab entries for the bricks on the new node by copying
       them from one of the other nodes.
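For reference, a brick mount line in /etc/fstab typically looks like the following; the LV path, mount point, and mount options here are assumptions, so copy the actual lines from a working node rather than composing new ones:

```
/dev/gluster_vg/gluster_lv_brick1  /rhgs/brick1  xfs  inode64,noatime  0 0
```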


    6. Run “mount -a” so that the bricks are mounted.


    7. Edit the gluster UUID in /var/lib/glusterd/glusterd.info, replacing
       the freshly generated UUID with the old UUID obtained from peer
       status.
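A sketch of that edit, assuming the usual glusterd.info layout (a UUID= line plus an operating-version= line). It is demonstrated on a scratch copy in /tmp; on the real host the file is /var/lib/glusterd/glusterd.info, and OLD_UUID is the value obtained from peer status (the UUIDs below are placeholders):

```shell
# OLD_UUID: the UUID host3 had before the reinstall (placeholder value).
OLD_UUID="9b2e7d3c-aaaa-bbbb-cccc-ddddeeeeffff"

# Demo file standing in for /var/lib/glusterd/glusterd.info, seeded with a
# freshly generated UUID as it would appear after the reinstall.
INFO=/tmp/glusterd.info.demo
printf 'UUID=%s\noperating-version=30712\n' \
    "0f0e0d0c-9999-8888-7777-666655554444" > "$INFO"

# Replace the generated UUID with the old one; operating-version is untouched.
sed -i "s/^UUID=.*/UUID=${OLD_UUID}/" "$INFO"
grep '^UUID=' "$INFO"    # -> UUID=9b2e7d3c-aaaa-bbbb-cccc-ddddeeeeffff
```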


    8. Copy the peer files from a working peer to /var/lib/glusterd/peers
       (omitting the peer file of the node being replaced, here host3).
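Each file in /var/lib/glusterd/peers is named after a peer's UUID, so "without the peer info of the node being replaced" amounts to skipping the file named after host3's UUID. A sketch using scratch directories (the paths and UUIDs are placeholders; on real hosts this would be an scp from the working peer's /var/lib/glusterd/peers/):

```shell
HOST3_UUID="9b2e7d3c-aaaa-bbbb-cccc-ddddeeeeffff"   # placeholder

# Scratch stand-ins for the peers directories on the working node (SRC)
# and on the replaced host (DST).
SRC=/tmp/demo-peers-src; DST=/tmp/demo-peers-dst
rm -rf "$SRC" "$DST"; mkdir -p "$SRC" "$DST"
touch "$SRC/aaaa1111-0000-0000-0000-000000000002"   # another peer: copy it
touch "$SRC/$HOST3_UUID"                            # host3's own file: skip

# Copy every peer file except the one describing host3 itself.
for f in "$SRC"/*; do
    [ "$(basename "$f")" = "$HOST3_UUID" ] && continue
    cp "$f" "$DST/"
done
ls "$DST"    # -> aaaa1111-0000-0000-0000-000000000002
```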


    9. Create and remove a tmp directory at all volume mount points.


    10. Set and remove a dummy extended attribute at all mount points: run
        “setfattr -n trusted.non-existent-key -v abc <mount point>” to set
        it, then “setfattr -x trusted.non-existent-key <mount point>” to
        remove it.


    11. Restart glusterd.


    12. Ensure heal is in progress and completes.
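Heal progress can be watched with "gluster volume heal <VOLNAME> info", which lists pending entries per brick; heal is complete when every brick reports "Number of entries: 0". A parsing sketch against made-up sample output (the volume and brick names are assumptions; on a real node pipe the actual command instead of the sample function):

```shell
# Sample "gluster volume heal engine info" output; names are placeholders.
sample_heal_info() {
cat <<'EOF'
Brick host1:/rhgs/brick1/engine
Number of entries: 0

Brick host2:/rhgs/brick1/engine
Number of entries: 2
EOF
}

# Sum pending entries across all bricks; heal is done when the total is 0.
PENDING=$(sample_heal_info | \
    awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum+0}')
echo "pending=$PENDING"    # -> pending=2
```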


    13. Edit the host in the UI and fetch the fingerprint under Advanced
        details, as the fingerprint has changed due to the reinstallation.


    14. Run “hosted-engine --deploy --config-append=answers.conf” on host3.
        (This should be seen as an additional-host setup; provide the host
        number as known by the other hosts.)


    15. The hosted-engine deploy fails because the host being installed
        cannot be added to the engine (hostname already known error).
        Reinstalling from the UI and aborting the HE setup seems to fix
        this; the ovirt-ha-agent and ovirt-ha-broker services had to be
        started manually:


        a. In the UI, click the Reinstall button to reinstall the host.
           Reinstalling the host might fail due to not being able to
           configure the management network.


        b. Go to the Network Interfaces tab, click “Setup Host Networks”,
           and assign the ovirtmgmt and glusternw networks to the correct
           NICs.


        c. Wait some time for the node to come up, then start the
           ovirt-ha-agent and ovirt-ha-broker services.


On 09/20/2016 11:27 AM, knarra wrote:
> Hi,
>     Pad [1] contains the procedure to replace the host with same FQDN 
> where existing host OS has to be re-installed.
> [1] https://paste.fedoraproject.org/431252/47435076/
> Thanks
> kasturi.
> On 09/20/2016 06:27 AM, Hanson wrote:
>> Hi Guys,
>> I encountered an unfortunate circumstance today. Possibly an 
>> Achilles heel.
>> I have three hypervisors, HV1, HV2, HV3, all running gluster for 
>> hosted engine support. Individually they all pointed to 
>> HV1:/hosted_engine with backupvol=HV2,HV3...
>> HV1 lost its boot sector, which was discovered upon a reboot. This had 
>> zero impact on the VMs, as designed.
>> However, now that HV1 is down, how does one go about replacing the 
>> original HV? The backup servers point to HV1; you cannot re-add the 
>> HV through the GUI, the CLI will not re-add it as it's already 
>> there, and you cannot remove it as it is down in the GUI...
>> Pointing the other HVs to their own storage may make sense for 
>> multiple instances of the hosted_engine; however, it's nice that the 
>> gluster volumes are replicated and that one VM can be relaunched when 
>> an HV error is detected. It also consumes fewer resources.
>> What's the procedure to replace the original VM?
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
