On Fri, Mar 15, 2019 at 2:00 PM Simon Coter <simon.coter@oracle.com> wrote:
Hi,

Something I'm seeing in vdsm.log that I think is gluster-related is the following message:

2019-03-15 05:58:28,980-0700 INFO  (jsonrpc/6) [root] managedvolume not supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot import os_brick',) (caps:148)

os_brick seems to be something available through OpenStack channels, but I haven't verified that.
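For the record, a quick way to check whether the module is importable on a host is something like the following. This is only a sketch mirroring what the caps check presumably does; the function name is illustrative, not vdsm's actual internals:

def managed_volume_supported():
    try:
        import os_brick  # noqa: F401  # shipped by python-os-brick
    except ImportError:
        return False
    return True

print("os_brick importable:", managed_volume_supported())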

Fred, I see you introduced the above error in vdsm commit 9646c6dc1b875338b170df2cfa4f41c0db8a6525 back in November 2018.
I guess you are referring to python-os-brick.
It looks like this is related to the cinderlib integration.
I would suggest:
- fixing the error message so it points to python-os-brick
- adding a python-os-brick dependency to the spec file, if the dependency is not optional
- if the dependency is optional, as it seems to be, adjusting the error message to say so (see the sketch below). I get nervous seeing errors about missing packages :-)
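Something along these lines, purely as a sketch of the suggested wording (illustrative names, not the actual vdsm code):

import logging

log = logging.getLogger(__name__)

try:
    import os_brick  # noqa: F401  # shipped by python-os-brick
    HAVE_OS_BRICK = True
except ImportError:
    HAVE_OS_BRICK = False
    # Optional feature: report it as informational rather than as an
    # error, and name the actual package to install.
    log.info("Managed volume support disabled: optional package "
             "python-os-brick is not installed")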

Simon

On Mar 15, 2019, at 1:54 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:



On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

>I along with others had GlusterFS issues after 4.3 upgrades: the "failed to dispatch handler" issue with bricks going down intermittently. After
>some time it seemed to have corrected itself (at least in my environment) and I hadn't had any brick problems in a while. I upgraded my three-node
>HCI cluster to 4.3.1 yesterday and again I'm running into brick issues. They will all be up and running fine, then all of a sudden a brick will
>randomly drop and I have to force start the volume to get it back up.
>
>Have any of these Gluster issues been addressed in 4.3.2 or any other releases/patches that may be available to help the problem at this time?
>
>Thanks!

Yep,

sometimes a brick dies (usually my ISO domain) and then I have to "gluster volume start isos force".
Sadly I have had several issues with 4.3.x - a problematic OVF_STORE (0 bytes), issues with gluster, an out-of-sync network - so for me 4.3.0 & 4.3.1 are quite unstable.

Is there a convention indicating stability? Does 4.3.xxx mean unstable, while 4.2.yyy means stable?

No, there's no such convention. 4.3 is supposed to be stable and production-ready.
The fact that it isn't stable enough in all cases means it has not been tested for those cases.
In the oVirt 4.3.1 RC cycle testing (https://trello.com/b/5ZNJgPC3/ovirt-431-test-day-1) only 6 people participated, and not even all of the tests were completed.
Helping with testing during the release candidate phase makes for more stable final releases.
oVirt 4.3.2 is at its second release candidate; if you have time and resources, it would be helpful to test it on an environment similar to your production environment and give feedback / report bugs.

Thanks


Best Regards,
Strahil Nikolov



_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/UPPMAKYNGWB6F4GPZTHOY4QC6GGO66CX/



--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbonazzo@redhat.com