[Users] SPM problems after upgrade to 3.1

Darrell Budic darrell.budic at bigwells.net
Mon Aug 20 22:35:44 UTC 2012


After many trials and annoyances, I started tearing down my old DC and building a new one, which is working out OK, if slowly. I wasn't able to remove the old one cleanly, but I am getting all my data back online. My symptoms resemble an exchange from a few months ago, some of which I'll attach below. I'm not sure how I got there, but I had v3 storage domains on NFS that refused to activate. I found errors in my logs similar to the LVM errors Rene encountered, so I'm wondering if I hit the same problem with the v3 update.
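For anyone who wants to check which version a domain reports: here is a minimal sketch, assuming a file-based NFS domain whose dom_md/metadata file holds KEY=VALUE lines. The mount path below is hypothetical (the UUID is the one from the sanlock error later in this thread); substitute values from your own /rhev/data-center/mnt tree.

    #!/usr/bin/env python
    # Minimal sketch: report the VERSION key from an NFS storage domain's
    # metadata file. The mount path and UUID are hypothetical placeholders.

    MD_PATH = ("/rhev/data-center/mnt/nfs.example.com:_export/"
               "e6ba97ae-7ccc-42ed-8739-f05b7a90d82c/dom_md/metadata")

    def domain_version(md_path):
        """Return the VERSION value from a file-based domain's metadata."""
        with open(md_path) as f:
            for line in f:
                key, _, value = line.strip().partition("=")
                if key == "VERSION":
                    return value
        return None

    if __name__ == "__main__":
        print("domain version: %s" % domain_version(MD_PATH))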

  -Darrell

----- Original Message -----
From: "Rene Rosenberger"<r.rosenber... at netbiscuits.com>
To: "Saggi Mizrahi"<smizr... at redhat.com>, rvak... at redhat.com
Cc: users at ovirt.org
Sent: Monday, April 2, 2012 2:30:16 AM
Subject: Re: Re: [Users] storage domain reactivate not working

Hi,

OK, but how can I delete it if nothing works? I want to create a new
storage domain.

-----Original Message-----
From: Saggi Mizrahi [mailto:smizr... at redhat.com]
Sent: Friday, March 30, 2012 21:00
To: rvak... at redhat.com
Cc: users at ovirt.org; Rene Rosenberger
Subject: Re: Re: [Users] storage domain reactivate not working

I am currently working on patches to fix the issues with upgraded
domains. I've been ill for most of last week, so it is taking a bit
more time than it should.

----- Original Message -----
From: "Rami Vaknin"<rvak... at redhat.com>
To: "Saggi Mizrahi"<smizr... at redhat.com>, "Rene Rosenberger"
<r.rosenber... at netbiscuits.com>
Cc: users at ovirt.org
Sent: Thursday, March 29, 2012 11:57:08 AM
Subject: Fwd: Re: [Users] storage domain reactivate not working

Rene, VDSM can't read the storage domain's metadata. The problem is
that VDSM tries to read the metadata using the 'dd' command, which
applies to the old storage domain format; in the new format the
metadata is saved as VG tags. Are you using a storage domain version
lower than V2? Can you attach the full log?
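To make the distinction concrete, here is a rough sketch of the two layouts. The device path and VG name are hypothetical, and the dd offsets are simplified rather than VDSM's exact on-disk layout:

    #!/usr/bin/env python
    # Rough sketch of the two metadata layouts described above.
    import subprocess

    def read_old_format_metadata(device):
        # Pre-V2 block domains: metadata lives in a raw area on the
        # device, which is why VDSM reads it with 'dd'.
        return subprocess.check_output(
            ["dd", "if=%s" % device, "bs=512", "count=1", "skip=0"])

    def read_new_format_metadata(vg_name):
        # V2+ block domains: metadata is stored as tags on the domain's
        # LVM volume group.
        out = subprocess.check_output(
            ["vgs", "--noheadings", "-o", "vg_tags", vg_name])
        return out.decode().strip().split(",")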

Saggi, any thoughts on that?

-------- Original Message --------
Subject:      Re: [Users] storage domain reactivate not working
Date:         Thu, 29 Mar 2012 06:33:27 -0400
From:         Rene Rosenberger<r.rosenber... at netbiscuits.com>
To:   rvak... at redhat.com, users at ovirt.org

On Aug 20, 2012, at 5:09 AM, Darrell Budic wrote:

> I upgraded my oVirt setup to 3.1 and it went OK (following http://wiki.ovirt.org/wiki/OVirt_3.0_to_3.1_upgrade; I've got it running on CentOS 6.2. Note that I needed to copy a few more files from /etc/pki/ovirt-engine-old, notably generatesshkeys, which may have been due to my previous version) until I tried to upgrade one of the nodes to the latest vdsm as well. That node happened to be the SPM at the time, and when I put it into maintenance, the SPM role started bouncing between the other two nodes that were still up. The logs are full of these error messages, but this seems to be the important bit:
> 
> AcquireHostIdFailure: Cannot acquire host id: ('e6ba97ae-7ccc-42ed-8739-f05b7a90d82c', SanlockException(90, 'Sanlock lockspace add failure', 'Message too long'))
> 
> I've since finished updating the vdsm node and it's up and running, although it has the same issue. Additionally, it drops out of the active state with a message saying it can't access one of the storage domains or the data center object. I've confirmed that all nodes can access all the storage domains, so I suspect it means the DC object, but I can't find any specific error messages to indicate that.
> 
> Any thoughts on repairing the issue? Let me know if you want more specific data. This vdsm.log excerpt is repeated on all 3 nodes; the sanlock call involved is sketched after this quote. I have active VMs on the two old nodes, so I'm hesitant to shut everything down and see if that helps, but if I've got to...
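For anyone digging into this, the "Cannot acquire host id" error corresponds to VDSM asking sanlock to join the domain's lockspace. Here is a minimal sketch of that call via the sanlock Python binding, the way VDSM's cluster lock code drives it; the ids path below is hypothetical, so substitute your own domain's dom_md/ids file:

    #!/usr/bin/env python
    # Minimal sketch of the failing call above: joining a domain's
    # sanlock lockspace. The ids path is a hypothetical placeholder.
    import sanlock

    SD_UUID = "e6ba97ae-7ccc-42ed-8739-f05b7a90d82c"
    HOST_ID = 1  # the host id the engine assigned to this node
    IDS_PATH = ("/rhev/data-center/mnt/nfs.example.com:_export/"
                "%s/dom_md/ids" % SD_UUID)

    try:
        # "Cannot acquire host id" in vdsm.log wraps a failure here.
        sanlock.add_lockspace(SD_UUID, HOST_ID, IDS_PATH)
    except sanlock.SanlockException as e:
        # errno 90 (EMSGSIZE, 'Message too long') matches the log above.
        print("add_lockspace failed: errno=%d (%s)" % (e.errno, e))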

Darrell Budic
Bigwells Technology LLC
office: 312.529.7816
cell: 608.239.4628


