On 9/4/20, 4:50 AM, "Vojtech Juranek" <vjuranek(a)redhat.com> wrote:
On Thursday, September 3, 2020 22:49:17 CEST Gillingham, Eric J (US 393D) via
Users wrote:
I recently removed a host from my cluster to upgrade it to 4.4. After I
removed the host from the datacenter, VMs started to pause on the second
host that they had all migrated to. Investigating via the engine showed the
storage domain as "unknown"; when I try to activate it via the engine, it
cycles to "locked" and then back to "unknown".
/var/log/sanlock.log contains a repeating message:

add_lockspace e1270474-108c-4cae-83d6-51698cffebbf:1:/dev/e1270474-108c-4cae-83d6-51698cffebbf/ids:0
conflicts with name of list1 s1 e1270474-108c-4cae-83d6-51698cffebbf:3:/dev/e1270474-108c-4cae-83d6-51698cffebbf/ids:0
How did you remove the first host? Did you put it into maintenance first? I
wonder how this situation (two lockspaces with conflicting names) could
occur. You can try to re-initialize the lockspace directly using the sanlock
command (see man sanlock), but it would be good to understand the situation
first.
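
A minimal sketch of the kind of sanlock commands meant here, assuming the
standard sanlock CLI on the host and the ids path taken from the log above;
check man sanlock and make sure no host is still using the lockspace before
re-initializing anything:

    # list the lockspaces sanlock currently knows about
    sanlock client gets

    # dump the delta-lease (ids) area of the domain to see which host_ids are registered
    sanlock direct dump /dev/e1270474-108c-4cae-83d6-51698cffebbf/ids

    # re-initialize the lockspace; host_id is not used for init, so it is given as 0.
    # This wipes all host registrations, so only run it while the domain is not in use.
    sanlock direct init -s e1270474-108c-4cae-83d6-51698cffebbf:0:/dev/e1270474-108c-4cae-83d6-51698cffebbf/ids:0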
Just as you said: put it into maintenance mode, shut it down, and removed it via the engine UI.