[Users] host goes to non-operational because interface down message

Moti Asayag masayag at redhat.com
Sun Apr 14 05:47:43 UTC 2013


On 04/11/2013 01:15 PM, Gianluca Cecchi wrote:
> On Thu, Apr 11, 2013 at 5:14 AM, Mark Wu  wrote:
>> On 04/11/2013 06:20 AM, Gianluca Cecchi wrote:
>>>
>>> Hello,
>>> I have a newly created datacenter in 3.2.1 with f18 host where
>>> ovirtmgmt is set as vlan.
>>> host is installed but after some minutes I get this messages:
>>>
>>> Host management02 moved to Non-Operational state because interfaces
>>> 'em1.311' are down but are needed by networks 'ovirtmgmt' in the
>>> current cluster
>>
>> What does 'vdsClient -s 0 getVdsStats' say on vdsm host?  Could you please
>> paste the whole line of 'network = {xxx}' in the output here?
>> Thanks!
> 
> Thanks for your input Mark.
> At the moment all is well again with the host; anyway, below I give a
> long explanation of what I did and the possible reasons for that
> situation.
> There is also some feedback for developers in there ...
> 
> I cannot connect to the host at the moment to cut and paste text, but
> I have access to the java console, and here is a screenshot containing
> the line you requested (captured now that all is ok and the host is up
> and running with two VMs):
> 
> https://docs.google.com/file/d/0BwoPbcrMv8mvdmdIMUxTRVRVUzg/edit?usp=sharing
> 
> Which fields are expected to indicate the kind of problem I had yesterday?
> 
> 
> The story:
> 
> oVirt is 3.2.1
> - create a datacenter of type local_on_host
> - create a cluster
> - the ovirtmgmt network is intended to be vlan tagged but I forgot it... damn...
> - I add a host (f18) where I had prepared the network configured with a vlan:
> 
> ifcfg-em1
> ifcfg-em1.311
> 
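A minimal sketch of what those two files would contain for a tagged setup
(device names are taken from your mail above; every value below is an
illustrative assumption, not your actual config):

```
# /etc/sysconfig/network-scripts/ifcfg-em1  (physical nic, no address)
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-em1.311  (tagged interface, enslaved
# to the ovirtmgmt bridge once vdsm takes ownership of the network)
DEVICE=em1.311
VLAN=yes
ONBOOT=yes
BRIDGE=ovirtmgmt
```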

I wasn't able to see any of the vlan nics in the output of
getVdsStats. Perhaps you took that output at a different point in time,
when the network was working properly without the vlan configured on
the host?

Once you are able to connect to the host while the error appears in
the event log, please also provide the output of getVdsCaps.
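When you do get shell access, something like the following is enough to
pull out the relevant bits. The sample `network = {...}` line below is a
hypothetical capture for illustration (field names follow vdsm's stats
output; the values are made up), not your actual host:

```shell
# On the vdsm host (requires a working vdsm):
#   vdsClient -s 0 getVdsStats | grep 'network = '
#   vdsClient -s 0 getVdsCaps
# For illustration, parse a hypothetical captured stats line instead:
stats="network = {'em1.311': {'name': 'em1.311', 'state': 'up', 'speed': '1000', 'rxErrors': '0', 'txErrors': '0'}}"
# The per-nic 'state' field is what the engine's up/down check reflects:
echo "$stats" | grep -o "'state': '[a-z]*'"
# → 'state': 'up'
```

If the vlan nic (em1.311) is missing from that dictionary entirely, the
engine has nothing to report as up, which matches the Non-Operational
message you saw.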

> - the host deploy completes successfully --> in my opinion a possible
> bug here, because ovirtmgmt is not tagged?
> - host reboots and apparently all is ok
> - I can create local_on_host SD and ISO
> - all is up and running without any VM yet
> 
> The host is a blade inside an IMS enclosure, and I discovered that I
> can actually configure the assigned storage (the second disk of the
> blade) as FC, and that there is also multipath for this kind of server
> (see my other thread).
> 
> The optimal solution would be to deactivate everything and redefine
> the DC as Fibre Channel type, but it seems not so easy...
> 
> I probably made an error at this point, because instead of directly
> putting the host into maintenance I followed these steps in the gui:
> - expand my DC at the left
> - expand STORAGE of that DC on the left
> - select my storage domain (LSTORAGE) on the left
> 
> On the right pane I select the LSTORAGE line, and in the bottom pane
> I select the "datacenter" tab, where I then select the line of my DC.
> I choose "maintenance".
> 
> --> what am I supposed to have done this way?
> Put the storage domain into maintenance, or the DC, or what?
> 
> Is what I've done supposed to be a valid operation if the DC is up
> and this is the only SD I have?
> Or would it be correct for the system to return an error?
> 
> Because I don't receive any error, and I see the SD down and the host
> stuck in the "Preparing for maintenance" status.
> 
> I don't remember my steps here correctly; possibly a restart of the
> host, which is then recognized as unassigned.
> After that I cannot "force remove" the DC, and I cannot remove the
> cluster or the host because it is not in maintenance.
> 
> After I also restart the engine service and reboot the host, it comes
> up normally with the DC in its old config.
> At this point I can put the host into maintenance and force remove
> the DC.
> I create a new DC with the same name but with type FC and ovirtmgmt
> vlan tagged, and I see that the existing cluster is still present, so
> I attach it to the new DC (which has the same name as before).
> I also find that my host seems to be automatically in this cluster.
> 
> But at this point, when I activate it, it comes up, but after a few
> minutes I get the message I posted at the beginning.
> In the host details, its network interfaces stay down.
> I tried several restarts of the host.
> So I decide to put the host into maintenance and select "reinstall"
> in the gui.
> After this, all goes well again...
> 
> Sorry for the long story
> 
> Gianluca
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
