On Tue, Feb 12, 2019 at 10:58 PM Nir Soffer <nsoffer@redhat.com> wrote:


On Tue, Feb 12, 2019, 12:23 Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Moving the discussion to the devel list for this scenario.

On Tue, Feb 12, 2019 at 11:16 Hetz Ben Hamo <hetz@hetz.biz> wrote:
Hi,

Well, there is a severe bug that I complained about in 4.2 (or 4.1? I don't remember), regarding "yanking the power cable".
Basically I'm performing a simple test: kill all hosts immediately to simulate a power loss without a UPS.

For this test I have 2 nodes and 4 storage domains: hosted_storage (set up during the HE installation), 1 iSCSI domain, 1 NAS domain, and 1 ISO domain.

After all the nodes lose power, I power them on and the following happens:
1. The node with the HE finishes booting, and it takes a few minutes until the HE is up.
2. When the HE is up, all the storage domains come back online and VMs with high availability start to boot.
3. A few minutes later, all the storage domains (with the exception of hosted_storage) go down.
4. After about 5 minutes, the storage domains that went down come back up, but by then the VMs without high availability that are not hosted on hosted_storage remain down; you'll need to power them back on manually.

This whole procedure takes about 15-25 minutes after booting the nodes, and the issue is always reproducible: just kill the power to the nodes, power them up again, and see for yourself.

The solution would be to change the code so that if a storage domain is already up, it is left up and the check is skipped.
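For illustration only, here is a minimal sketch of that proposed behavior, in Python with made-up names (the actual monitoring code lives in ovirt-engine and is Java): a domain that is already reported up is skipped instead of being cycled through a recheck.

# Hypothetical sketch only -- none of these names come from the real
# ovirt-engine code base; they just illustrate the proposed behavior.
from dataclasses import dataclass

@dataclass
class StorageDomain:
    name: str
    status: str  # "up" or "down", as last reported by monitoring

def refresh_domain(domain: StorageDomain) -> None:
    if domain.status == "up":
        # Proposed change: a domain already reported up is left alone,
        # instead of being rechecked in a way that briefly flips it to
        # "down" after a cold start.
        return
    # Only domains not confirmed up go through the reconnect path.
    domain.status = "up"  # stand-in for the real reconnect/activation

for d in (StorageDomain("hosted_storage", "up"), StorageDomain("iscsi-1", "down")):
    refresh_domain(d)
    print(d.name, d.status)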


I also get something very similar when powering up my environment. Isn't this the same problem? https://bugzilla.redhat.com/show_bug.cgi?id=1651840
In my case, the storage domain is not really up: the hosts are not in fact connected to the storage (NFS/iSCSI), so the Up status in the UI is wrong. It takes a while for the engine to recognize they are in fact down and send the connectStorageServer commands to bring them up.
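For what it's worth, real connectivity can be checked from a host independently of the UI with something like the sketch below; the addresses are made up, while 3260 and 2049 are the standard iSCSI portal and NFS ports.

# Hypothetical helper to probe storage reachability from a host.
# The endpoint addresses below are made-up examples.
import socket

STORAGE_ENDPOINTS = {
    "iscsi-domain": ("10.0.0.10", 3260),  # iSCSI portal
    "nfs-domain": ("10.0.0.20", 2049),    # NFS server
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in STORAGE_ENDPOINTS.items():
    state = "reachable" if reachable(host, port) else "NOT reachable"
    print(f"{name}: {host}:{port} is {state}")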
 

Tal, Nir, what do you think about this?

This is not a severe bug. We will look at it when we have time.

Thanks


On Tue, Feb 12, 2019 at 11:56 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Hi,
We are planning to release the first candidate of 4.3.1 on February 20th[1] and the final release on February 26th.
Please join us in testing this release candidate right after it is announced!
We are going to coordinate the testing effort with a public Trello board at https://trello.com/b/5ZNJgPC3
You'll find instructions on how to use the board there.

If you have an environment dedicated to testing, remember you can set up a few VMs and test the deployment with nested virtualization.
To ease the setup of such an environment you can use Lago (https://github.com/lago-project).
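Before deploying, it can be worth confirming that nested virtualization is actually enabled on the test machine; here is a small sketch that reads the standard KVM module parameter (the value is "Y"/"N" or "1"/"0" depending on the kernel version).

# Sketch: check whether nested virtualization is enabled by reading
# the kvm_intel/kvm_amd "nested" module parameter.
from pathlib import Path

def nested_virt_enabled() -> bool:
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            return param.read_text().strip() in ("Y", "y", "1")
    return False

print("nested virtualization:", "enabled" if nested_virt_enabled() else "disabled")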

The oVirt team will monitor the Trello board, the #ovirt IRC channel on the irc.oftc.net server, and the users@ovirt.org mailing list to assist with the testing.



--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CASCUAA54E4YOQ5QDKM77CXXZACFSWUB/