I think I am going to have to go the "virsh" path: attach a CD-ROM device to the guest, boot a live CD, and fix the VM disk that way. Meanwhile I need to fix the fact that both VNC and the serial console have been wiped out following the 4.1 upgrade.
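A rough sketch of that virsh path, in case it helps anyone else (the domain name HostedEngine and the ISO path /var/tmp/rescue.iso are assumptions, adjust to your setup; on a hosted-engine host virsh may also ask for the vdsm SASL credentials):

  # confirm the exact domain name first
  virsh list --all

  # attach the live ISO as a read-only cdrom (target device hdc is an assumption)
  virsh attach-disk HostedEngine /var/tmp/rescue.iso hdc --type cdrom --mode readonly

  # eject it again once the disk is fixed
  virsh change-media HostedEngine hdc --eject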

On 22 March 2017 at 11:52, Ian Neilsen <ian.neilsen@gmail.com> wrote:
Good to know. I've been using the 1-dash notation following the RH document. Don't think I've seen the 2-dash notation before.

On the original cluster I used IPs; on the second cluster I used FQDNs, but I made sure a hosts file was present.

On 22 March 2017 at 10:09, /dev/null <devnull@linuxitil.org> wrote:
Ian, knarra,

success! I got it working using the two-dash notation and IP addresses. Surely this is the most reliable way, even with a local hosts file.
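For the record, the working line has this shape (the addresses below are placeholders, not my real ones):

  mnt_options=backup-volfile-servers=192.0.2.102:192.0.2.103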

In my case, the hosted VM dies and takes some time to come back up. Is it possible to have the VM survive the switch to the backup-volfile-server?

Thanks & regards

/dev/null

On Tue, 21 Mar 2017 11:52:32 +0530, knarra wrote
> On 03/21/2017 10:52 AM, Ian Neilsen wrote:
>

>
> knarra
>
> Looks like your conf is incorrect for the mnt option.
>
>
Hi Ian,
>    
>     mnt_options should be mnt_options=backup-volfile-servers=<IP1>:<IP2>, and this is how we test it.
>
> Thanks
> kasturi.
>

>
> It should be, I believe: mnt_options=backupvolfile-server=<server name>
>
> not
>
> mnt_options=backup-volfile-servers=host2
>
>
If your DNS isn't working or your hosts file is incorrect, this will prevent it as well.
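A minimal hosts file sketch for that case, with placeholder addresses (only the names from this thread are reused):

  192.0.2.101  host1
  192.0.2.102  host2
  192.0.2.100  ovirt.test.lab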
>
>

>
> On 21 March 2017 at 03:30, /dev/null <devnull@linuxitil.org> wrote:
>

> Hi kasturi,
>
> thank you. I tested it and it seems not to work: even after rebooting, the current mount does not show the mnt_options, and the switch-over does not work either.
>
> [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> gateway=192.168.2.1
> iqn=
> conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
> connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
> conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
> user=
> host_id=2
> bridge=ovirtmgmt
> metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
> spUUID=00000000-0000-0000-0000-000000000000
> mnt_options=backup-volfile-servers=host2
> fqdn=ovirt.test.lab
> portal=
> vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
> metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
> vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
> domainType=glusterfs
> port=
> console=qxl
> ca_subject="C=EN, L=Test, O=Test, CN=Test"
> password=
> vmid=272942f3-99b9-48b9-aca4-19ec852f6874
> lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
> lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
> vdsm_use_ssl=true
> storage=host1:/gvol0
> conf=/var/run/ovirt-hosted-engine-ha/vm.conf
>
> [root@host2 ~]# mount |grep gvol0
> host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
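One way to check whether the backup servers were actually handed to the gluster client is to look at the running glusterfs process; with backup-volfile-servers applied it should carry more than one --volfile-server argument (a hedged sketch using the host names from this thread):

  ps aux | grep [g]lusterfs | grep volfile-server
  # expected, roughly:
  #   /usr/sbin/glusterfs --volfile-server=host1 --volfile-server=host2 ... /rhev/data-center/mnt/glusterSD/host1:_gvol0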
>
> Any suggestion?
>
> I will try an answer-file install as well later, but it was helpful to know where to set this.
>
> Thanks & best regards
>

> On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote

>
> > On 03/20/2017 05:09 AM, /dev/null wrote:
> >
Hi,

how do I make the hosted_storage aware of a gluster server failure? During --deploy I cannot provide backup-volfile-servers. In /etc/ovirt-hosted-engine/hosted-engine.conf there is an mnt_options line, but I read (https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6) that these settings get lost during deployment on secondary servers.

Is there an official way to deal with that? Should this option be set manually on all nodes?

Thanks!

/dev/null
> Hi,
>
>     I think in the above patch they are just hiding the query for mount_options, but all the code is still present and you should not lose mount options during additional host deployment. For more info you can refer to [1].
>
>     You can set this option manually on all nodes by editing /etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will help you achieve this:
>
> 1) Move each host to maintenance and edit the file '/etc/ovirt-hosted-engine/hosted-engine.conf'.
> 2) Set mnt_options=backup-volfile-servers=<gluster_ip2>:<gluster_ip3>.
> 3) Restart the services: 'systemctl restart ovirt-ha-agent'; 'systemctl restart ovirt-ha-broker'.
> 4) Activate the node.
>
> Repeat the above steps for all the nodes in the cluster.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2
>
> Hope this helps !!
>
> Thanks
> kasturi
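Put together, the per-host sequence would look roughly like this (a sketch of the steps above; the gluster IPs are placeholders, and 'hosted-engine --set-maintenance' is one way to handle the maintenance step from the shell, moving the host to maintenance from the engine UI works as well):

  hosted-engine --set-maintenance --mode=local
  # edit /etc/ovirt-hosted-engine/hosted-engine.conf and set, for example:
  #   mnt_options=backup-volfile-servers=192.0.2.102:192.0.2.103
  systemctl restart ovirt-ha-agent
  systemctl restart ovirt-ha-broker
  hosted-engine --set-maintenance --mode=none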










--
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen