Gianluca,
Thanks for your detailed report. Unfortunately I can't find any clue
about this network interface status problem, sorry. Maybe someone else
can add more comments.
At the moment, I suspect the link of em1.311, and probably its
underlying interface em1, went down briefly and came back up a
short time later. Is that possible? You could check the system log
'/var/log/messages' for related messages.
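For reference, a link flap like the one suspected here usually leaves kernel messages in /var/log/messages; below is a sketch of how to search for them. The sample log lines are invented for illustration (driver name and exact wording vary by NIC):

```shell
# Invented sample of what a link flap typically looks like in /var/log/messages
cat > /tmp/sample-messages <<'EOF'
May  7 10:12:01 ovhost kernel: e1000e: em1 NIC Link is Down
May  7 10:12:05 ovhost kernel: e1000e: em1 NIC Link is Up 1000 Mbps Full Duplex
EOF

# On the real host you would run this grep against /var/log/messages itself
grep -E 'em1.*Link is (Up|Down)' /tmp/sample-messages
```

If the interface really flapped, you would see a Down line followed shortly by an Up line with matching timestamps.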
Thanks for your input, Mark.
At the moment all is well again with the host; below I give a long
explanation of what I did, along with possible reasons for that
situation.
There is also some food for thought for the developers in it ...
I can't connect to the host at the moment to cut and paste text,
but I have access to the Java console, and here is a screenshot
containing the lines you requested (captured now that all is OK and
the host is up and running with two VMs):
https://docs.google.com/file/d/0BwoPbcrMv8mvdmdIMUxTRVRVUzg/edit?usp=sharing
Which fields would indicate problems like the ones I had yesterday?
You can
check the 'state' field of each interface.
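Outside the GUI, the kernel's own view of an interface's state can be checked directly on the host; lo is used below only so the commands run anywhere — on the host you would substitute em1 and em1.311:

```shell
# operstate is the kernel's link state: up / down / unknown
cat /sys/class/net/lo/operstate

# ip link also shows the administrative UP flag and the carrier state
ip link show lo
```

Comparing this with the state shown in the engine GUI helps tell apart a real link problem from a stale status in the engine.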
The story:
oVirt is 3.2.1
- create a datacenter of type local_on_host
- create a cluster
- ovirtmgmt is intended to be VLAN tagged, but I forgot about that... damn...
- I add a host (f18) where I had prepared the network with a VLAN:
ifcfg-em1
ifcfg-em1.311
- the host deploy completes successfully --> in my opinion a possible
bug here, because ovirtmgmt is not tagged?
- host reboots and apparently all is ok
- I can create the local_on_host SD and the ISO domain
- all is up and running without any VM yet
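The VLAN layout I mean with ifcfg-em1 and ifcfg-em1.311 above is roughly the following (a sketch from memory, not the exact files; device names and VLAN ID 311 as in my report):

```
# /etc/sysconfig/network-scripts/ifcfg-em1 (sketch)
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-em1.311 (sketch)
DEVICE=em1.311
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
```

During host deploy, VDSM then builds the ovirtmgmt bridge on top of whichever of these interfaces it is told to use, which is where the tagged/untagged mismatch matters.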
The host is a blade inside an IMS enclosure, and I discover that I can
actually configure the assigned storage (the blade's second disk) as
FC, and that there is also multipath for this kind of server (see my
other thread).
The optimal solution would be to deactivate everything and redefine the
DC as Fibre Channel type, but it doesn't seem so easy....
I probably make an error at this point, because instead of directly
putting the host into maintenance I follow these steps in the GUI:
- expand my DC on the left
- expand STORAGE of that DC on the left
- select my storage domain (LSTORAGE) on the left
In the right pane I select the LSTORAGE row, and in the bottom pane
I select the "datacenter" tab, where I then select the row of my DC.
I choose "maintenance".
--> what am I supposed to have done this way?
Put the storage domain into maintenance, or the DC, or what?
Is what I've done supposed to be a valid operation if the DC is up and
this is the only SD I have?
Or would it be correct for the system to return an error?
Because I don't receive any error, and I see the SD down and the host
stuck in the "Preparing to maintenance" status.
I don't remember my steps correctly here; possibly a restart of the
host, which is then recognized as unassigned.
After that I cannot "force remove" the DC, and I cannot remove the
cluster or the host, because the host is not in maintenance.
After I also restart the engine service and reboot the host, it comes
up normally with the whole DC in the old config.
At this point I can put the host into maintenance and force remove the DC.
I create a new DC with the same name but with type FC and ovirtmgmt
VLAN tagged, and I see that the existing cluster is still present, so I
attach it to the new DC (which has the same name as before).
I also find that my host seems to automatically be in this cluster.
But at this point, when I activate the host, it comes up, but after a
few minutes I get the message I posted at the beginning.
In the host details, its network interfaces stay down.
I tried several restarts of the host.
So I decide to put the host into maintenance and select "reinstall"
for it in the GUI.
After this all goes well again.
Sorry for the long story
Gianluca
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users