On Thu, Jul 30, 2020 at 2:45 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
On Thu, Jul 30, 2020 at 1:47 PM Alex K
<rightkicktech(a)gmail.com> wrote:
>
>
> On Thu, Jul 30, 2020 at 1:30 PM Yedidyah Bar David <didi(a)redhat.com>
> wrote:
>
>> On Thu, Jul 30, 2020 at 1:20 PM Alex K <rightkicktech(a)gmail.com> wrote:
>> >
>> >
>> >
>> > On Thu, Jul 30, 2020 at 12:56 PM Yedidyah Bar David <didi(a)redhat.com>
>> wrote:
>> >>
>> >> On Thu, Jul 30, 2020 at 12:42 PM Alex K <rightkicktech(a)gmail.com>
>> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Thu, Jul 30, 2020 at 12:01 PM Yedidyah Bar David <
>> didi(a)redhat.com> wrote:
>> >> >>
>> >> >> On Thu, Jul 30, 2020 at 11:30 AM Alex K <rightkicktech(a)gmail.com> wrote:
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Tue, Jul 28, 2020 at 11:51 AM Anton Louw via Users <users(a)ovirt.org> wrote:
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Hi All,
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Does somebody perhaps know the process of changing the Hosted
>> >> >>>> Engine IP address? I see that it is possible, I am just not sure if it is a
>> >> >>>> straightforward process using ‘nmtui’ or editing the network config file.
>> >> >>>> I have also ensured that everything was configured using the FQDN.
>> >> >>>
>> >> >>> Since the FQDN is not changing, you should not have issues: just
>> >> >>> update your DNS, change the engine IP manually in the ifcfg-ethX
>> >> >>> files, then restart networking.
>> >> >>> What I find difficult, and perhaps impossible, is changing the engine
>> >> >>> FQDN, as one will need to regenerate all certs from scratch (otherwise you
>> >> >>> will have issues with several services: imageio proxy, OVN, etc.) and there
>> >> >>> is no such procedure documented or supported.
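The manual IP change described above can be sketched roughly as follows. This is a minimal sketch: the interface name (eth0), the address, and the FQDN are example values, not taken from the thread, and assume a classic network-scripts setup (EL7-style, not NetworkManager keyfiles).

```shell
# Sketch of the manual engine IP change described above.
# eth0, 192.168.1.50 and engine.example.com are example values.
NEW_IP=192.168.1.50
IFCFG=/etc/sysconfig/network-scripts/ifcfg-eth0

# Update the static address in the interface config file
sed -i "s/^IPADDR=.*/IPADDR=${NEW_IP}/" "$IFCFG"

# Apply the change and check that the engine FQDN now resolves
# to the new address (after your DNS update has propagated)
systemctl restart network
getent hosts engine.example.com
```

The order matters: update DNS first so that hosts resolving the engine FQDN never point at a dead address for longer than the networking restart takes.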
>> >> >>
>> >> >>
>> >> >> I wonder - how/what did you search for, that led you to this
>> >> >> conclusion? Or perhaps you even found it explicitly written somewhere?
>> >> >
>> >> > Searching around and testing in a lab. I am testing 4.3, though, not
>> >> > 4.4. I used the engine-rename tool and, although I was able to change the
>> >> > FQDN for hosts and engine, I observed that some certificates were left out
>> >> > (for example, OVN was still complaining about a certificate issue, with the
>> >> > subject name not agreeing with the new FQDN - checking/downloading the
>> >> > relevant cert still showed the previous FQDN). I do not deem the renaming
>> >> > successful if not all services are functional.
>> >>
>> >> Very well.
>> >>
>> >> I'd find your above statement less puzzling if you wrote instead "...
>> >> and the procedure for doing this is buggy/broken/incomplete"...
>> >
>> > I'm sorry for the confusion.
>>
>> No problem :-)
>>
>> >>
>> >>
>> >> >>
>> >> >>
>> >> >> There actually is:
>> >> >>
>> >> >>
>> >> >>
>> >> >> https://www.ovirt.org/documentation/administration_guide/#sect-The_oVirt_...
>> >> >
>> >> >
>> >> > At this same link it reads:
>> >> > "While the ovirt-engine-rename command creates a new certificate for
>> >> > the web server on which the Engine runs, it does not affect the certificate
>> >> > for the Engine or the certificate authority. Due to this, there is some
>> >> > risk involved in using the ovirt-engine-rename command, particularly in
>> >> > environments that have been upgraded from Red Hat Enterprise Virtualization
>> >> > 3.2 and earlier. Therefore, changing the fully qualified domain name of the
>> >> > Engine by running engine-cleanup and engine-setup is recommended where
>> >> > possible."
>> >> > This explains my above findings from the tests.
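For reference, the rename procedure from the admin guide boils down to running the rename tool on the engine machine. A rough sketch for the hosted-engine case (the tool prompts interactively for the new FQDN; the maintenance commands assume a hosted-engine deployment and are not needed on a standalone engine):

```shell
# Rough sketch of the documented rename flow (hosted-engine case).
# The rename tool prompts for the new FQDN interactively.
hosted-engine --set-maintenance --mode=global
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
hosted-engine --set-maintenance --mode=none
```

As the quoted passage warns, this regenerates the web server certificate but not the CA or the other service certificates, which matches the OVN symptom described above.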
>> >>
>> >> No. These are two different things:
>> >>
>> >> 1. Bugs. All software has bugs. Hopefully we fix them over time. If
>> >> you find one, please file it.
>> >>
>> >> 2. Inherent design (or other) problems - the software works as
>> >> intended, but that's not what you want...
>> >
>> > I do not intend to blame anyone. I really appreciate the work you all
>> are doing with this great project and understand that the community stream
>> may have bugs and rough edges or simply I might not be well informed.
>> >>
>> >>
>> >> See also:
>> >>
>> >>
>> https://www.ovirt.org/develop/networking/changing-engine-hostname.html
>> >>
>> >> >>
>> >> >>
>> >> >> That said, it indeed was somewhat broken for some time now - some
>> >> >> fixes were only added quite recently, and are available only in current 4.4:
>> >> >
>> >> > This is interesting and needed for migration scenarios.
>> >>
>> >> Can you please elaborate?
>> >
>> > I am thinking about a scenario where one will need to migrate a DC
>> > from one FQDN to a completely new one (say I currently have
>> > host1.domain1.com, host2.domain1.com, engine.domain1.com and want to
>> > switch to host1.domain2.com, host2.domain2.com, engine.domain2.com). I
>> > am currently facing one such need: I need to migrate an existing DC from
>> > domain1.com to domain2.com. I tried the engine-rename tool and changed the
>> > IPs of engine and hosts, but observed the OVN certificate issue with 4.3. In
>> > case this is sorted with 4.4, I will see if that resolves my issue.
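One quick way to see which name the OVN provider certificate still carries after a rename is to inspect its subject. A sketch, assuming defaults: 9696 is the standard ovirt-provider-ovn port, and the FQDN is an example value.

```shell
# Inspect the subject of the certificate presented by ovirt-provider-ovn.
# engine.domain2.com is an example; 9696 is the provider's default port.
echo | openssl s_client -connect engine.domain2.com:9696 2>/dev/null \
  | openssl x509 -noout -subject
```

If the printed subject still shows the old FQDN after the rename, that matches the symptom described above.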
>>
>> These are _names_, for the same machines, right? I'd call it a rename,
>> then, not a migration.
>>
> Indeed. It is a rename. Same dc/cluster with different names. (my setup
> is one DC which has one cluster)
>
>>
>> If it's migration (you have two sets of physical machines, and want to
>> migrate the VMs from one set to the other), indeed using storage
>> import is simpler (perhaps using the DR tool/doc).
>>
> I tested a storage domain import in a 4.3 virtual test environment on the
> same renamed DC/cluster and found out that the VM snapshots were not
> retained.
> As per the
> https://www.ovirt.org/develop/release-management/features/storage/imports...
> docs, it should have kept the snapshot metadata. I am wondering why this is,
> as losing the snapshots will be a major issue for me.
>
I have no idea - please start a new thread, or file a bug and attach
relevant logs. Thanks.
>
> The steps I followed for the rename and data storage import are the
> following:
>
> *Assumptions: *
> We have an ovirt cluster (v4.3) with two hosts named v0 and v1.
> Also the "vms" storage domain does have the guest VMs. The guest VMs do
> have disk snapshots.
>
> *Steps: *
> 1. Set the v1 ovirt host to maintenance, then remove it.
> 2. At v1, install fresh CentOS 7 using the new FQDN.
>
You mean you have gluster on separate disks and do not wipe them during
reinstall, I guess. Reasonable, but a bit risky, depending on exactly how
you do this.
If this is important, I'd consider doing a test (perhaps with a copy of
your real data, if possible), and see that I can restore everything from v1
after this reinstallation (e.g. for the case where v0 dies right after or
during the reinstallation).
Or perhaps you mean that you do wipe everything? This means you have no
storage replication for the duration of the process (which probably takes
several hours?).
I mean I do a complete wipe of the host, having first removed its gluster
bricks. The data are retained on the other host. Then, later, the same clean
host is added as a gluster peer and the relevant storage domain is synced
back, so as to repeat the wipe on the remaining host. I tested this and all
data are retained, except that VM snapshots are lost or not visible when
importing back the same storage domain, which does not seem to be related
to gluster. In the test environment it takes only a few minutes to sync, as I
have only one VM on this storage domain.
> 3. At v0, set global maintenance and shut down the engine. Wipe the engine
> storage data from the relevant gluster mount. (The engine VM is completely
> deleted!)
> 4. at v0, remove bricks belonging to v1 and detach gluster peer v1.
> 5. On v1, prepare the gluster service, reattach the peer and add bricks from v0.
> At this phase all data from the vms gluster volume will be synced to the new
> host. Verify with `gluster volume heal vms info`.
> 6. At the freshly installed v1, deploy the engine using the same clean gluster
> engine volume:
> `hosted-engine --deploy --config-append=/root/storage.conf
> --config-append=answers.conf` (use the new FQDN!)
> 7. Upon completion of the engine deployment, and after having ensured the vms
> gluster volume is synced (step 5), remove the bricks of the v0 host (v0 should
> now not be visible in the ovirt GUI) and detach gluster peer v0.
> 8. Install fresh CentOS7 on v0 and prepare it with ovirt node packages,
> networking and gluster.
> 9. At v0, attach gluster bricks from v1. Confirm sync with gluster volume
> heal info.
> 10. At the engine, add an entry for the v0 host in /etc/hosts or update your
> DNS. At the ovirt GUI, add the v0 host.
> 11. At ovirt GUI import vms gluster volume as vms storage domain.
> 12. At ovirt GUI, import VMs from vms storage domain.
>
> At step 11 I had to confirm the import as I received the following:
>
> [image: image.png]
>
Perhaps this is why (or related to) you did not get the snapshots? But I
really don't know. Just note this on the other thread, when you post it.
This is the only blocking issue to completing my renaming of the cluster. I'm
happy with the long list of steps to wipe and set up from scratch, as long as
snapshots are retained. I will open a new thread.
> At step 12, I successfully imported the VM, though I observed that the VM
> did not have any snapshots.
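The gluster-side commands for steps 4, 5 and 9 above would look roughly like this. A sketch only: the volume name (vms), brick paths and replica counts are example values matching a two-node replica setup, not commands taken verbatim from the thread.

```shell
# Rough sketch of the gluster brick shuffle in steps 4-5 (and again in 9).
# Volume name, brick path and replica counts are example values.

# Step 4, on v0: drop v1's brick from the replica and detach the peer
gluster volume remove-brick vms replica 1 v1:/gluster/bricks/vms force
gluster peer detach v1

# Step 5, after reinstalling v1: re-probe the peer and re-add its brick
gluster peer probe v1
gluster volume add-brick vms replica 2 v1:/gluster/bricks/vms

# Wait until no entries remain unhealed before touching the other host
gluster volume heal vms info
```

The key safety point is the heal check: the replica count drops to 1 between remove-brick and the completed heal, so the other host must not be wiped until `heal info` shows no pending entries.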
>
>
>
>> >>
>> >>
>> >> If it's DR migration, perhaps you want storage export/import, as is
>> >> done using the DR tool:
>> >>
>> >>
>> https://www.ovirt.org/documentation/disaster-recovery-guide/disaster-reco...
>> >>
>> >> If you just want to use a new name, but do not need to completely
>> >> forget the old one, you can add it using SSO_ALTERNATE_ENGINE_FQDNS.
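Keeping the old name as an alias is a one-line engine configuration. A sketch, assuming defaults: the `99-*` filename is arbitrary (any file in engine.conf.d works) and the FQDN is an example value.

```shell
# Keep the old FQDN accepted by engine SSO after a rename.
# The filename and the FQDN value are examples.
cat > /etc/ovirt-engine/engine.conf.d/99-alternate-fqdns.conf <<'EOF'
SSO_ALTERNATE_ENGINE_FQDNS="engine.domain1.com"
EOF
systemctl restart ovirt-engine
```

Multiple names can be listed space-separated in the same variable.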
>> >
>> > I need to wipe out completely any reference to the old domain/FQDN.
>>
>> If it's indeed really completely, as in "if someone finds the old name
>> somewhere, it's going to be a problem/cost money/whatever", then the
>> rename tool is not for you. It's designed to impose minimal downtime
>> and use the new name wherever really important, but will keep the old
>> name e.g. in the CA (meaning, in the ca cert, and all the certs it
>> signed/signs). If that's a problem for you, the rename tool is not
>> solving it. If in current 4.4 you find an "important" place with the
>> old name, please file a bug. Thanks.
>>
> Yes, it must be wiped due to policy. I do not think there are other
> implications :)
>
:-)
> Speaking about production, which is still 4.2: I will then have to
> upgrade to 4.3 and then 4.4, if my lab tests confirm the full rename is
> sorted in 4.4.
>
Please note that 4.4 is EL8 only, both engine and hosts.
Indeed. I would prefer at the moment to stick with 4.3.
> Thanx for pointing this out.
> I have to sort out also how to change the management network for which I
> will open a new thread.
>
> Thanx for your swift responses
>
YW, good luck!
>
>> >>
>> >>
>> >> > Also I am wondering if I can change the management network in some
>> >> > way and make it VLAN tagged instead of untagged.
>> >>
>> >> Sorry, no idea. Perhaps start a different thread about this.
>> >
>> > I will. thanx.
>> >>
>> >>
>> >> Best regards,
>> >>
>> >> >>
>> >> >>
>> >> >>
>> https://github.com/oVirt/ovirt-engine/commits/master/packaging/setup/plug...
>> >> >>
>> >> >> I do not think I am aware of currently still-open bugs. If you
>> find one, please file it in bugzilla. Thanks!
>> >> >>
>> >> >>>
>> >> >>> I might be able to soon test this engine IP change in a virtual
>> >> >>> environment and let you know.
>> >> >>
>> >> >>
>> >> >> Thanks and good luck!
>> >> >> --
>> >> >> Didi
>> >> >
>> >> > _______________________________________________
>> >> > Users mailing list -- users(a)ovirt.org
>> >> > To unsubscribe send an email to users-leave(a)ovirt.org
>> >> > Privacy Statement:
https://www.ovirt.org/privacy-policy.html
>> >> > oVirt Code of Conduct:
>>
https://www.ovirt.org/community/about/community-guidelines/
>> >> > List Archives:
>>
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R5ZWCNEL3HP...
>> >>
>> >>
>> >>
>> >> --
>> >> Didi
>> >>
>>
>>
>>
>> --
>> Didi
>>
>>
--
Didi